MIT's breakthrough in neural science helps to create AI and deep neural networks for autonomous learning.
One of the most impressive and common deep neural networks is the human brain. When researchers work with deep networks, they learn more about the brain, and insights into how the brain works can then be carried over into artificial neural networks. MIT researchers have decoded part of the human learning process, which makes it possible to mirror that process in deep neural networks. The new autonomous learning model is based on self-supervised learning, which mirrors the brain's learning process in deep learning networks. This breakthrough could revolutionize self-learning in deep neural networks.
In the self-supervised model, only part of the neural network participates in the learning process, while another part of the network supervises that process. The idea is that the deep neural network operates as a whole, with one part of that whole handling the learning.
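The core idea of self-supervised learning is that the training signal comes from the data itself rather than from external labels. A minimal sketch of that idea, using a toy masked-prediction task (this is an illustrative example, not MIT's actual model): a tiny linear model learns to predict a hidden middle value of a random-walk sequence from its two visible neighbors, and the hidden value itself serves as the supervision.

```python
import numpy as np

# Minimal self-supervised learning sketch (illustrative only):
# the model predicts a masked value from its neighbors, so the data
# itself supplies the training signal -- no external labels needed.

rng = np.random.default_rng(0)

# Toy data: random-walk sequences, where each value is correlated
# with its neighbors.
X = np.cumsum(rng.normal(size=(200, 3)), axis=1)

# "Learner" part: a linear model predicting the masked middle value
# from the two surrounding values.
w = rng.normal(size=2) * 0.1
lr = 0.1

for epoch in range(500):
    inputs = X[:, [0, 2]]      # visible neighbors
    target = X[:, 1]           # masked value: the self-supervision signal
    pred = inputs @ w
    err = pred - target        # compare prediction against the data itself
    w -= lr * (inputs.T @ err) / len(X)

# For a random walk, the best prediction of the middle value is the
# average of its neighbors, so w should approach roughly [0.5, 0.5].
print(np.round(w, 2))
```

For a random walk the optimal weights are exactly 0.5 on each neighbor, so the learned weights landing near that value shows the model recovered structure from unlabeled data alone.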
A digital twin is a simulation that can play the same role as imagination does in our brains. In this model, the self-supervised system can use a virtual model, or digital twin, quite easily: one part of the system runs the simulation while the other part monitors that process. A virtual or digital twin saves time and lets the computer simulate things and test their operational abilities in a virtual world. That makes the R&D process cheaper, because a physical prototype is not needed every time the system must create something new.
"MIT research reveals that neural networks trained via self-supervised learning display patterns similar to brain activity, enhancing our understanding of both AI and brain cognition, especially in tasks like motion prediction and spatial navigation." (ScitechDaily.com/MIT’s Brain Breakthrough: Decoding How Human Learning Mirrors AI Model Training)
(Image below) A new Chinese-built analog microprocessor model. The processor uses a network-based architecture and is said to be the most powerful analog chip, or analog deep neural network, in the world.
"a, The workflow of traditional optoelectronic computing, including large-scale photodiode and ADC arrays. b, The workflow of ACCEL. A diffractive optical computing module processes the input image in the optical domain for feature extraction, and its output light field is used to generate photocurrents by the photodiode array for analog electronic computing directly. EAC outputs sequential pulses corresponding to multiple output nodes of the equivalent network. The binary weights in EAC are reconfigured during each pulse by SRAM, by switching the connection of the photodiodes to either V+ or V− lines. The comparator outputs the pulse with the maximum voltage as the predicted result of ACCEL. c, Schematic of ACCEL with an OAC integrated directly in front of an EAC circuit for high-speed, low-energy processing of vision tasks. MZI, Mach–Zehnder interferometer; D2NN, diffractive deep neural network ." (TomsHardware.com/
ACCEL = all-analog chip combining electronic and light computing
ADC = analog-to-digital converter
EAC = electronic analog computing
OAC = optical analog computing
SRAM = static random-access memory
Digital twins are the AI's imagination.
A digital twin requires precise and accurate information about how the system should behave, and almost every physical thing can have one. A digital twin can, for example, simulate how molecules interact at certain temperatures, but that requires accurate information about how those molecules respond to electromagnetic, pressure, or chemical stress.
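As a concrete illustration of simulating temperature-dependent molecular behavior, a digital twin could embed a known physical law such as the Arrhenius equation, k = A·exp(−Ea/(R·T)), and scan temperatures virtually. The pre-exponential factor and activation energy below are invented demonstration values, not measured data.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def reaction_rate(temp_k: float, a: float = 1e7, ea: float = 5e4) -> float:
    """Arrhenius rate constant at temperature temp_k (Kelvin).

    a and ea are illustrative placeholder parameters, not real
    measurements for any specific reaction.
    """
    return a * math.exp(-ea / (R * temp_k))

# The twin lets us scan temperatures virtually instead of running
# each experiment in a physical lab.
for t in (300.0, 350.0, 400.0):
    print(f"{t:.0f} K -> k = {reaction_rate(t):.3e}")
```

The output shows the rate climbing steeply with temperature, which is exactly the kind of behavior a digital twin must reproduce faithfully before it can replace a physical experiment.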
The computers of tomorrow will have an imagination. They can use digital twins of processes and then change components to make those processes more effective. The idea is easy to introduce with a combustion engine as an example: the engine runs in a controlled environment, and the AI records the values the machine produces. That virtual model is called a digital twin.
The system can then change components like the fuel injection and turbocharger in the digital model to make the engine more effective. Such simulations, called digital twins, can involve actions or machines; they are already used in fighter-aircraft development and fusion test simulations. Research laboratories like CERN use digital twins to improve their results, and the LHC and other particle accelerators have digital twins. So perhaps we will get digital twins of new microchips and even of the human brain. A digital twin could also be the virtual character that real robot-control software controls. Digital twins can be used to run virtual tests on almost anything.
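The engine example above can be sketched as a tiny parameter sweep over a virtual model. Everything here is hypothetical: the `EngineTwin` class and its efficiency formula are invented stand-ins, not a real engine simulation, but they show the workflow of testing candidate designs in software before building a prototype.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class EngineTwin:
    """Hypothetical digital twin of a combustion engine."""
    fuel_rate: float   # relative fuel-injection rate
    boost: float       # relative turbocharger boost

    def efficiency(self) -> float:
        # Invented stand-in formula: efficiency peaks at a balanced
        # fuel/boost combination and falls off away from it.
        return 1.0 - (self.fuel_rate - 1.0) ** 2 - 0.5 * (self.boost - 1.2) ** 2

# Sweep candidate designs in the virtual model instead of building
# a physical prototype for each combination.
candidates = [EngineTwin(f, b)
              for f, b in product([0.8, 1.0, 1.2], [1.0, 1.2, 1.4])]
best = max(candidates, key=EngineTwin.efficiency)
print(best)  # the most efficient virtual configuration
```

A real twin would replace the toy formula with a model calibrated against recorded data from the physical engine, but the design loop, simulate, compare, pick the best, stays the same.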
The new analog and photonic microchips will be tested using virtual models, or digital twins, which saves work hours and money. Aircraft bodies and their capabilities can also be tested with digital twins. Holograms can be used to visualize radio-wave impacts and reflections from virtual models of hypersonic bodies, and the same digital tools can calculate the heating effect the atmosphere causes.
https://home.cern/news/news/knowledge-sharing/digital-twins-cern-and-beyond
https://www.ibm.com/topics/what-is-a-digital-twin
https://interestingengineering.com/innovation/new-microchip-material-is-10-times-stronger-than-kevlar
https://www.tomshardware.com/tech-industry/semiconductors/chinas-accel-analog-chip-promises-to-outpace-industry-best-in-ai-acceleration-for-vision-tasks
https://ts2.space/en/introducing-archax-the-3-million-japanese-robot-revolutionizing-work/
https://en.wikipedia.org/wiki/Digital_twin