
MIT's breakthrough in neural science helps to create AI and deep neural networks for autonomous learning.



One of the most impressive deep neural networks is the human brain. When researchers work with deep networks, they also learn more about the brain, and that lets them carry ideas about how the brain works over into artificial neural networks. MIT researchers have decoded part of the human learning process, which makes it possible to mirror that process in deep neural networks. The new autonomous learning model is based on self-supervised learning, which helps mirror the brain's learning process in deep learning networks. This breakthrough could revolutionize the self-learning process in deep neural networks.

In the self-supervised model, only part of the neural network participates in the learning process, while another part of the network supervises that process. The idea is that the deep neural network operates as a whole, with one part of that whole handling the learning.
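The core trick of self-supervised learning is that the data supervises itself: one part of each example is hidden, and the model learns to predict it from the visible part, with no external labels. The sketch below is a minimal illustration of that idea with a linear model and synthetic data, not MIT's actual method.

```python
import numpy as np

# Toy self-supervised "pretext task": one part of the data supervises another.
# We hide the second half of each vector and fit a linear map that predicts
# it from the visible first half -- no external labels are needed.
rng = np.random.default_rng(0)

# Synthetic data with internal structure: the hidden half depends on the
# visible half through an unknown linear map plus a little noise.
X_visible = rng.normal(size=(500, 8))
true_map = rng.normal(size=(8, 8))
X_hidden = X_visible @ true_map + 0.01 * rng.normal(size=(500, 8))

# Least-squares fit: the "hidden" part acts as the supervisory signal.
W, *_ = np.linalg.lstsq(X_visible, X_hidden, rcond=None)

# The learned map reconstructs the masked part from the visible part.
reconstruction = X_visible @ W
error = np.mean((reconstruction - X_hidden) ** 2)
print(f"mean reconstruction error: {error:.5f}")
```

Real self-supervised systems use the same pattern with deep networks and far richer pretext tasks (masked words, masked image patches, next-frame prediction), but the supervisory signal always comes from the data itself.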

A digital twin is a simulation that can play the same role as imagination does in our brain. In this model, the self-supervised system can use virtual models, or digital twins, quite easily: one part of the system runs the simulation while another part supervises that process. A virtual or digital twin saves time by letting the computer simulate things and test their operational abilities in the virtual world. That makes the R&D process cheaper, because there is no need to build a physical prototype every time the system must create something new.

"MIT research reveals that neural networks trained via self-supervised learning display patterns similar to brain activity, enhancing our understanding of both AI and brain cognition, especially in tasks like motion prediction and spatial navigation." (ScitechDaily.com, MIT’s Brain Breakthrough: Decoding How Human Learning Mirrors AI Model Training)


(Image below) A new Chinese-built analog microprocessor model that uses a network-based architecture. It is reported to be the most powerful analog chip, or analog deep neural network, in the world.






"a, The workflow of traditional optoelectronic computing, including large-scale photodiode and ADC arrays. b, The workflow of ACCEL. A diffractive optical computing module processes the input image in the optical domain for feature extraction, and its output light field is used to generate photocurrents by the photodiode array for analog electronic computing directly. EAC outputs sequential pulses corresponding to multiple output nodes of the equivalent network. The binary weights in EAC are reconfigured during each pulse by SRAM, by switching the connection of the photodiodes to either V+ or V− lines. The comparator outputs the pulse with the maximum voltage as the predicted result of ACCEL. c, Schematic of ACCEL with an OAC integrated directly in front of an EAC circuit for high-speed, low-energy processing of vision tasks. MZI, Mach–Zehnder interferometer; D2NN, diffractive deep neural network." (TomsHardware.com)

ACCEL = All-analog Chip Combining Electronic and Light computing

ADC = analog-to-digital converter

EAC = electronic analog computing

OAC = optical analog computing

SRAM = static random-access memory
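The quoted ACCEL workflow can be followed as a toy numerical sketch: a fixed linear transform stands in for the diffractive optical module, the photodiode currents are summed with binary (+1/−1) weights per output node, and a comparator picks the maximum. This is purely illustrative arithmetic, not a model of the actual chip; all array sizes are made up.

```python
import numpy as np

# Toy sketch of the ACCEL pipeline described in the caption -- NOT the real chip.
rng = np.random.default_rng(1)

# Stage 1 ("OAC"): a fixed linear transform stands in for the diffractive
# optical module that extracts features from the input light field.
image = rng.random(64)                          # flattened 8x8 input image
optical_transform = rng.normal(size=(16, 64))   # stand-in for diffraction
photocurrents = optical_transform @ image       # photodiode array output

# Stage 2 ("EAC"): binary weights, reconfigured per output node (the role
# SRAM plays in ACCEL), sum the photocurrents into node voltages.
binary_weights = rng.choice([-1.0, 1.0], size=(10, 16))  # 10 output nodes
node_voltages = binary_weights @ photocurrents

# Comparator stage: the node with the maximum voltage is the prediction.
predicted_class = int(np.argmax(node_voltages))
print("predicted class:", predicted_class)
```

In the real device both stages happen in the analog domain, which is where the speed and energy savings come from; the sketch only shows the dataflow.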



Digital twins are the AI's imagination. 


A digital twin requires precise and accurate information about how the system behaves, and almost every physical thing can have one. A digital twin can simulate how molecules interact at certain temperatures, but that requires accurate information about how those molecules respond to electromagnetic, pressure, or chemical stress.
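Temperature-dependent molecular behaviour is a concrete case where the twin is only as good as its physical model. The sketch below uses the Arrhenius equation, k = A · exp(−Ea / (R · T)), to predict how a reaction rate changes with temperature; the values of A and Ea are hypothetical, chosen only for illustration.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
A = 1.0e13       # pre-exponential factor, 1/s (hypothetical value)
Ea = 75_000.0    # activation energy, J/mol (hypothetical value)

def rate_constant(temperature_k: float) -> float:
    """Arrhenius rate constant at the given absolute temperature."""
    return A * math.exp(-Ea / (R * temperature_k))

# A simulated temperature sweep: the "twin" predicts how the reaction
# speeds up as temperature rises, without running a real experiment.
for T in (300.0, 350.0, 400.0):
    print(f"T = {T:.0f} K -> k = {rate_constant(T):.3e} 1/s")
```

A real molecular digital twin would couple many such validated models (kinetics, pressure, field effects), but the principle is the same: accurate parameters in, trustworthy virtual experiments out.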

The computers of tomorrow will have an imagination of sorts: they can take the digital twin of some process and then change its components to make the process more effective. A combustion engine makes the idea easy to introduce. The engine runs in a controlled environment, and in that environment the AI records the values the machine produces. The virtual model built from that data is called a digital twin.

The system can then change components like the fuel injection and the turbocharger in that digital model to make the engine more effective. Such simulations, called digital twins, can cover actions or machines; they are used in fighter-aircraft development and in fusion test simulations. Research laboratories like CERN also use digital twins to improve their results: the LHC and other particle accelerators have digital twins. Maybe we will get digital twins of new microchips and even of the human brain. A digital twin could also be a virtual character controlled by the same software that controls the real robot. Digital twins can be used to run virtual tests for almost everything.
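The engine example above can be sketched as a parameter sweep over a virtual model: try many component settings in the twin and keep the best one, instead of rebuilding real hardware. The "engine model" below is a deliberately simple made-up formula, not real engine physics.

```python
# Toy digital-twin parameter sweep, in the spirit of the engine example.

def engine_efficiency(boost_bar: float, injection_advance_deg: float) -> float:
    """Hypothetical efficiency model of the virtual engine (illustrative only).

    Efficiency peaks near 1.5 bar of boost and 12 degrees of injection
    advance, and falls off on either side of those settings.
    """
    boost_term = 1.0 - (boost_bar - 1.5) ** 2
    timing_term = 1.0 - ((injection_advance_deg - 12.0) / 10.0) ** 2
    return max(0.0, 0.35 * boost_term * timing_term)   # peak ~35 %

# Sweep component settings in the virtual model instead of on real hardware.
candidates = ((b / 10, t) for b in range(10, 21) for t in range(5, 21))
best = max(candidates, key=lambda cfg: engine_efficiency(*cfg))

print("best virtual configuration (boost bar, timing deg):", best)
print(f"predicted efficiency: {engine_efficiency(*best):.2%}")
```

Industrial twins replace the toy formula with validated physics and measured data, but the optimization loop is the same: simulate, compare, adjust, repeat.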

The new analog and photonic microchips will be tested using virtual models or digital twins, and that saves work hours and money. Aircraft bodies and their capabilities can be tested with digital twins as well. Holograms can be used to visualize radio-wave impacts and reflections from the virtual models of those hypersonic bodies, and the same digital tools can calculate the heating effect that the atmosphere causes.


https://home.cern/news/news/knowledge-sharing/digital-twins-cern-and-beyond


https://www.ibm.com/topics/what-is-a-digital-twin


https://interestingengineering.com/innovation/new-microchip-material-is-10-times-stronger-than-kevlar


https://www.tomshardware.com/tech-industry/semiconductors/chinas-accel-analog-chip-promises-to-outpace-industry-best-in-ai-acceleration-for-vision-tasks


https://ts2.space/en/introducing-archax-the-3-million-japanese-robot-revolutionizing-work/


https://en.wikipedia.org/wiki/Digital_twin

