How do deep neural networks see the world?
The new neural networks are acting like human neurons.
The new, powerful AI-based deep neural networks have two types of memory. Short-term and long-term memory are as important in an AI's data structures as they are for humans. The deep-learning process uses short-term memory as a filter that keeps the system from storing too much data: data first goes into short-term memory, and the AI then picks the most suitable pieces from those memory blocks for long-term storage. Compositional generalization means that the AI's memory holds a series of action models. Those action models are like Lego bricks: the system selects the most suitable bricks to respond to the situation the AI has to react to. The AI can use two lines of those models. The first line is the "bricks", the action models already stored in the AI's memory.
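A minimal sketch of that first line of models, assuming the stored "bricks" are small, reusable action routines that the system matches against a described situation. The names ActionBrick and plan_for, and the example rules, are illustrative only, not taken from any real library:

```python
# The "Lego brick" idea: stored action models are small, reusable functions,
# and the system composes the most suitable ones into a plan for a situation.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ActionBrick:
    name: str
    applies_to: Callable[[str], bool]   # does this brick fit the situation?
    run: Callable[[], str]              # the stored action model itself

# Long-term memory: a library of previously learned action models.
memory: List[ActionBrick] = [
    ActionBrick("slow_down", lambda s: "obstacle" in s, lambda: "reducing speed"),
    ActionBrick("turn_left", lambda s: "blocked right" in s, lambda: "steering left"),
    ActionBrick("stop", lambda s: "pedestrian" in s, lambda: "full stop"),
]

def plan_for(situation: str) -> List[str]:
    """Pick the bricks that match the observed situation and chain them."""
    return [brick.run() for brick in memory if brick.applies_to(situation)]

print(plan_for("obstacle ahead, blocked right"))
# ['reducing speed', 'steering left']
```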
The second line is observations from the sensors: the sensory data. It can also be cut into pieces that work like bricks. The system can cut the data that comes from the cameras into small film clips, and the AI can then simulate different kinds of situations by connecting those clips. The AI can test different action series against the models that the system builds by cutting and interconnecting data from several situations. In that model, the system stores all the data it collects from different situations in different databases, and then it can connect those databases. The AI can use these simulations the way humans use imagination, and it can then put those new models, or combinations of those data bricks, to use in real situations.
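A similarly hedged sketch of the second line: a recorded sensor stream is cut into short clips, clips from separate recordings are kept in separate "databases", and the system recombines them to simulate situations it never actually recorded. The placeholder strings stand in for real camera frames:

```python
# Cutting recorded streams into short clips and recombining clips from
# different recordings to build simulated scenarios.
from itertools import product

def cut_into_clips(stream, clip_len=3):
    """Split one recorded stream into fixed-length clips."""
    return [stream[i:i + clip_len] for i in range(0, len(stream), clip_len)]

# Two separate recordings stored apart.
db_highway = cut_into_clips([f"highway_frame_{i}" for i in range(6)])
db_rain    = cut_into_clips([f"rain_frame_{i}" for i in range(6)])

# "Connect the databases": combine clips from both to build candidate
# scenarios, then a real system would test its action models against each one.
for clip_a, clip_b in product(db_highway, db_rain):
    simulated_scenario = clip_a + clip_b
    print(len(simulated_scenario), simulated_scenario[0], "->", simulated_scenario[-1])
```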
"Researchers from the University of Sydney and UCLA have developed a physical neural network that can learn and remember in real-time, much like the brain’s neurons. This breakthrough utilizes nanowire networks that mirror neural networks in the brain. The study has significant implications for the future of efficient, low-energy machine intelligence, particularly in online learning settings." (ScitechDaily.com/Neural Networks Go Nano: Brain-Inspired Learning Takes Flight)
Advanced deep neural networks raise the question: what is reality? Is reality some kind of computer game where the system blows snow over the player while the player sits in an electric chair, and when an opponent shoots the player, the chair activates? This kind of dark joke can be used to demonstrate how a computer game could turn into reality.
Reality is a unique experience.
Not all people see the world the same way. Things like our experiences modify how we sense our environment and how we feel about it. Augmented and virtual reality raise the question, "What is reality?" Reality is the combination of impulses that the senses give to the brain.
Consciousness is the state in which we act during the daytime. Sometimes people ask whether AI could become conscious. The question is, "What does consciousness mean?" If a creature realizes its own existence, we face another question: does that even mean anything? If we think consciousness leads to a situation where a creature defends itself, we are taking that model from living nature.
If an AI becomes conscious, that is hard to prove. The pseudo-intelligence in language models can have reflexes that tell people they shouldn't shut down the server. The system can be connected to backup tools, and when the computer seems to be shutting down, it can draw on a UPS for a short time to back up its data. If it sees that the UPS is the power source, the server can say, "Wait until I make the backup." In that case, the system can seem very natural and intelligent.
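As a rough sketch of why such behavior proves nothing about consciousness, the whole "reflex" can be a few lines of scripted logic. The functions on_ups_power() and backup_in_progress() are hypothetical placeholders for whatever monitoring a real server would use:

```python
# A scripted shutdown "reflex": it looks intelligent but is only a
# pre-programmed rule, not awareness.
import time

def on_ups_power() -> bool:
    # placeholder: a real system would query its UPS monitoring service
    return True

def backup_in_progress() -> bool:
    # placeholder: a real system would check its backup job status
    return True

def shutdown_reflex():
    if on_ups_power() and backup_in_progress():
        print("Wait until I make the backup.")   # canned message, not a plea
        time.sleep(1)                            # stand-in for the backup finishing
    print("Safe to power off.")

shutdown_reflex()
```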
But if an AI reaches consciousness, we must realize that it should show it somehow, or that consciousness is meaningless. We assume that a conscious AI would try to attack us if we shut down the server where the program runs. The point is that only interaction tells us an AI has consciousness. But the fact is that a computer can say "Don't shut me down" even if there is no AI behind it. The question about conscious AI is this: how can an AI prove that it has consciousness?
"MIT neuroscientists discovered that deep neural networks, while adept at identifying varied presentations of images and sounds, often mistakenly recognize nonsensical stimuli as familiar objects or words, indicating that these models develop unique, idiosyncratic “invariances” unlike human perception. The study also revealed that adversarial training could slightly improve the models’ recognition patterns, suggesting a new approach to evaluating and enhancing computational models of sensory perception." (ScitechDaily.com/MIT Researchers Discover That Deep Neural Networks Don’t See the World the Way We Do)
Deep neural networks don't see the world as we do.
When we observe the world, we have only two eyes and our other senses. Sensors and senses determine how an actor sees the world. That means a person who is color-blind sees the world differently than other people, and that means reality is a unique experience.
A deep neural network sees things differently than humans because multiple sensors can be connected to it. A deep neural network can connect itself even to a radio telescope, and that gives it abilities humans don't have. If we have VR glasses, we can connect ourselves to drones and look at ourselves through those drones.
The fact is that a BCI (Brain-Computer Interface) would make it possible for deep neural networks to enclose even humans within them. That could connect humans to the Internet, and it could give a new dimension to our interactions and information delivery. The deep neural network would then be a combination of living brains and computers.
Deep neural networks cannot see the world as we do, because multiple optical sensors can feed data into the network. The situation is similar to one where we could connect ourselves to the Internet and use multiple surveillance cameras as our eyes at the same time. That would give an extreme, panoramic view of the environment. In the same way, a deep neural network can connect itself to drones and other devices.
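A small sketch of that "many eyes" idea, assuming the simplest possible fusion: frames from several cameras are stacked along the channel axis so that one network input carries every viewpoint at once. The random arrays are stand-ins for real camera frames:

```python
# Fusing several camera viewpoints into a single network input.
import numpy as np

num_cameras, height, width, channels = 4, 64, 64, 3

# One frame per camera, e.g. drones or surveillance cameras.
frames = [np.random.rand(height, width, channels) for _ in range(num_cameras)]

# Stack the viewpoints along the channel axis:
# the network's input now has 4 * 3 = 12 channels instead of 3.
fused_input = np.concatenate(frames, axis=-1)
print(fused_input.shape)   # (64, 64, 12)
```

A human observer is limited to two eyes; a network built this way simply grows its input as more sensors are attached, which is the core of the difference the text describes.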
https://scitechdaily.com/mit-researchers-discover-that-deep-neural-networks-dont-see-the-world-the-way-we-do/
https://scitechdaily.com/neural-networks-go-nano-brain-inspired-learning-takes-flight/
https://scitechdaily.com/will-artificial-intelligence-soon-become-conscious/