Self-learning networks can replace current morphing networks.
When we talk about morphing and self-learning networks, we must realize that the difference between them is very thin. So what does a neural network do when it learns new things? It connects new observations with databases that involve actions that can respond to those observations.
In that kind of process, the neural network creates a database from the information that its sensors deliver. Then it interconnects that database with another database. The system searches for action models that allow it to decide whether the thing that the sensors bring to the system requires a response.
"Scientists at the Max Planck Institute have devised a more energy-efficient method for AI training, utilizing physical processes in neuromorphic computing. This approach, diverging from traditional digital neural networks, reduces energy consumption and optimizes training efficiency. The team is developing an optical neuromorphic computer to demonstrate this technology, aiming to significantly advance AI systems". (ScitechDaily.com/The Future of AI: Self-Learning Machines Could Replace Current Artificial Neural Networks)
The system acts like the human learning process; a rough code sketch of these steps follows the list below.
1) Sensors drive information into RAM (Random Access Memory).
2) The system creates a database for that sense in RAM.
3) The system searches for pre-programmed models that match that short-term database.
4) If the system finds a match, it acts as the pre-programmed database orders.
5) The system decides whether to crush the short-term database or save it and its connections.
6) Maybe the system has two short-term memories: one involves databases that the system crushes immediately, and the other keeps the information longer because the system might need to know whether that thing happens often. If it does, the system can store it in long-term, or stable, memory.
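A minimal Python sketch of steps 1 through 5, assuming a simple dictionary-based short-term database and a made-up table of pre-programmed action models; this is only an illustration of the idea, not a real implementation.

```python
import time
from typing import Optional

# Hypothetical pre-programmed action models: each maps a recognizable
# observation pattern to a response the system already knows how to perform.
PREPROGRAMMED_MODELS = {
    "obstacle_ahead": "stop_and_turn",
    "temperature_high": "start_cooling",
}

def handle_sensor_reading(reading: dict) -> Optional[dict]:
    """Steps 1-5: build a short-term database from a sensor reading,
    match it against pre-programmed models, act, and decide whether
    to crush the record or keep it."""
    # Steps 1-2: hold the reading in RAM as a small short-term database.
    short_term_db = {
        "timestamp": time.time(),
        "pattern": reading.get("pattern"),
        "raw": reading,
    }

    # Step 3: search for a pre-programmed model that matches the observation.
    action = PREPROGRAMMED_MODELS.get(short_term_db["pattern"])

    # Step 4: if a match is found, act as the pre-programmed model orders.
    if action is not None:
        print(f"Executing pre-programmed action: {action}")

    # Step 5: decide whether to crush the short-term database or keep it.
    # Toy rule: keep anything the system had no known response for, since
    # it may need to learn a new action model; crush everything else.
    return short_term_db if action is None else None
```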
The human brain is itself a self-learning network. By researching it, researchers can model the processes of the human brain on computers. In the next part of this text, you can replace the words "human brain" with "self-learning neural network."
Some researchers have introduced the idea that, while sleeping, the human brain decides whether to save a memory in DNA for transfer to descendants or to crush that memory. In this analogy, DNA is the brain's ROM (read-only memory), and the neurons' electrical memory is its RAM (random-access memory).
The artificial self-learning system acts in a similar way to the human brain. In the brain, each neuron is a database. The system uses short- and long-term memories, so the senses do not drive anything straight into the nervous system. Instead, the nervous system makes databases about the senses, and those databases can remain for anywhere from a couple of seconds to an entire lifetime. The reason why humans have short- and long-term memories is simple: it helps the brain filter out unnecessary information.
And that saves space. Even though the human brain's ability to handle information is impressive, its data storage is always limited. In the same way, computers have impressive data storage, and modern hard disks are very large, but if a system stores all the data it collects from multiple sensors on a hard disk, that data fills the storage quite soon.
Humans have two short-term and two long-term memories. When a human faces some new situation, the brain keeps that thing in memory for a couple of hours. During that process, the brain checks whether that thing happens often or was unique. Then the brain crushes that database or sends it into long-term memory, and it decides whether to write that information into DNA and transfer it to descendants.
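A rough Python sketch of that frequency-based filtering, under the assumption that an observation is promoted from short-term to long-term storage only after it has been seen a few times within a retention window; the threshold and window values here are invented for illustration.

```python
import time
from collections import defaultdict

RETENTION_SECONDS = 2 * 60 * 60   # assumed: short-term records live ~2 hours
PROMOTION_THRESHOLD = 3           # assumed: "happens often" = seen 3+ times

short_term = defaultdict(list)    # pattern -> timestamps of recent sightings
long_term = set()                 # patterns promoted to stable memory

def observe(pattern, now=None):
    """Record a sighting, expire old ones, and promote frequent patterns."""
    now = time.time() if now is None else now

    # Drop sightings older than the retention window ("crush" old records).
    short_term[pattern] = [t for t in short_term[pattern]
                           if now - t < RETENTION_SECONDS]
    short_term[pattern].append(now)

    # If the pattern recurs often enough, store it in long-term memory.
    if len(short_term[pattern]) >= PROMOTION_THRESHOLD:
        long_term.add(pattern)
```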
The fact is that when a human wakes up, the brain downloads the necessary action models into operating memory. That makes sure the brain has the information it needs for the jobs it has to do during the daytime.
If self-learning neural networks have connections with other, similar neural networks, that improves the learning process. A neural network can pass a new database on to other neural networks and share its action matrices.
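A toy sketch of that sharing idea, assuming each network simply merges action models received from a connected peer; the class and method names are invented for illustration.

```python
class SelfLearningNode:
    """A toy self-learning network node that can share what it has learned."""

    def __init__(self, name: str):
        self.name = name
        self.action_models = {}   # pattern -> learned action
        self.peers = []           # connected self-learning nodes

    def connect(self, other: "SelfLearningNode") -> None:
        self.peers.append(other)

    def learn(self, pattern: str, action: str) -> None:
        """Learn a new action model and advertise it to every peer."""
        self.action_models[pattern] = action
        for peer in self.peers:
            peer.receive(pattern, action)

    def receive(self, pattern: str, action: str) -> None:
        """Adopt a shared model if this node does not already know it."""
        self.action_models.setdefault(pattern, action)


# Usage: one node learns something new, and its peer picks it up for free.
a, b = SelfLearningNode("A"), SelfLearningNode("B")
a.connect(b)
a.learn("obstacle_ahead", "stop_and_turn")
print(b.action_models)   # {'obstacle_ahead': 'stop_and_turn'}
```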
https://scitechdaily.com/the-future-of-ai-self-learning-machines-could-replace-current-artificial-neural-networks/
https://en.wikipedia.org/wiki/Random-access_memory
https://en.wikipedia.org/wiki/Read-only_memory