

How do deep neural networks see the world? 

 

The new neural networks act like human neurons.

The new and powerful AI-based deep neural networks have two types of memory, and short- and long-term memory are just as central to the human data structure. In the deep-learning process, short-term memory works as a filter that keeps the system from storing too much data: the system first writes data to short-term memory, and the AI then picks the most suitable pieces from those memory blocks. Compositional generalization means that the AI's memory holds a series of actions. Those actions, or action models, are like Lego bricks: the system selects the most suitable bricks to respond to the situation the AI must react to. The AI can use two lines of those models. The first line consists of the "bricks", the action models already stored in the AI's memory.
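As a rough illustration of that brick idea, here is a minimal Python sketch, purely hypothetical and not tied to any real AI framework, in which stored action models with preconditions and effects are greedily chained to handle a new situation:

```python
# Illustrative sketch only: compositional generalization treated as
# selecting and chaining stored "action bricks". All names here
# (ActionBrick, plan_response) are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ActionBrick:
    name: str
    preconditions: set   # what must be observed before this brick applies
    effects: set         # what the brick adds to the situation

def plan_response(observations: set, bricks: list, goal: str, max_steps: int = 5):
    """Greedily chain bricks whose preconditions match the current state."""
    state, plan = set(observations), []
    for _ in range(max_steps):
        if goal in state:
            return plan
        # pick the first stored brick whose preconditions are satisfied
        usable = [b for b in bricks if b.preconditions <= state and b.name not in plan]
        if not usable:
            break
        brick = usable[0]
        plan.append(brick.name)
        state |= brick.effects
    return plan if goal in state else None

# A small "memory" of bricks, applied to a situation they were never stored for together.
memory = [
    ActionBrick("slow_down", {"obstacle_ahead"}, {"low_speed"}),
    ActionBrick("steer_left", {"low_speed", "gap_on_left"}, {"obstacle_avoided"}),
]
print(plan_response({"obstacle_ahead", "gap_on_left"}, memory, "obstacle_avoided"))
# -> ['slow_down', 'steer_left']
```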

The second line consists of observations from the sensors, the sensory data. That data can also be cut into pieces that work like bricks: the system can cut the camera feed into small film clips and then simulate different situations by connecting those clips. The AI can test different action series against the models it builds by cutting and interconnecting data from several situations. In that model, the system stores all the data it collects from different situations in different databases and then connects those databases. The AI can use such simulations the way humans use imagination, and it can turn those new models, or combinations of data bricks, into actions in real situations.
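In the same spirit, here is a small sketch of the "film clip" idea: recorded sensor streams are cut into short clips and recombined into sequences the system never actually recorded. The clip length, frame labels, and function names are all made-up placeholders:

```python
# Illustrative sketch only: sensor streams cut into short "clips" and
# recombined into new simulated sequences.
from itertools import product

def cut_into_clips(frames, clip_len=3):
    """Split one recorded sensor stream into fixed-length clips."""
    return [frames[i:i + clip_len] for i in range(0, len(frames) - clip_len + 1, clip_len)]

def simulate_combinations(clip_db, n_clips=2):
    """Join clips drawn from the recorded situations into candidate sequences."""
    for combo in product(*([clip_db] * n_clips)):
        yield [frame for clip in combo for frame in clip]

# Two recorded "situations" (frames are just labels here).
db = cut_into_clips(["rain1", "rain2", "rain3"]) + \
     cut_into_clips(["night1", "night2", "night3"])

# The system can now "imagine" sequences it never recorded,
# e.g. rain frames followed by night frames, and test actions against them.
for sequence in simulate_combinations(db):
    print(sequence)
```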


"Researchers from the University of Sydney and UCLA have developed a physical neural network that can learn and remember in real-time, much like the brain’s neurons. This breakthrough utilizes nanowire networks that mirror neural networks in the brain. The study has significant implications for the future of efficient, low-energy machine intelligence, particularly in online learning settings." (ScitechDaily.com/Neural Networks Go Nano: Brain-Inspired Learning Takes Flight)


Advanced deep neural networks raise the question: what is reality? Is reality some kind of computer game where the system blows snow over the player while the player sits in an electric chair, and when an opponent shoots the player, the chair fires? This kind of dark joke can be used to demonstrate how a computer game turns into reality.


Reality is a unique experience. 


People do not all see the world the same way. Things like our experiences shape how we sense our environment and how we feel about it. Augmented and virtual reality raise the question, "What is reality?" Reality is the combination of impulses that the senses deliver to the brain.

Consciousness is the condition in which we act during the daytime. Sometimes it is asked whether the AI could turn conscious. The question is, "What does consciousness mean?" If a creature realizes its own existence, we face another question: does that even mean anything? If we think that consciousness leads to a situation where the creature defends itself, we are taking that model from living nature.

If the AI turns conscious, it is hard to prove it. The pseudo-intelligence in language models can have reflexes that tell people they should not shut down the server. The system can be connected to backup tools, and when the computer is about to shut down, it can run on a UPS for a short time to back up its data. If it detects that the UPS is the power source, the server can say, "Wait until I make the backup." In that case, the system can seem very natural and intelligent.
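To underline that such a reflex needs no consciousness at all, here is a minimal Python sketch of a scripted shutdown handler; the backup routine and the UPS trigger are hypothetical placeholders:

```python
# Illustrative sketch only: a scripted "reflex" that asks for time to back up
# before shutdown. Nothing here is conscious; it is an ordinary signal handler.
import signal
import sys
import time

def backup_to_disk():
    """Placeholder for copying in-memory state to persistent storage."""
    time.sleep(1)  # pretend the backup takes a moment

def on_shutdown(signum, frame):
    # Looks "self-preserving", but it is just a pre-programmed response.
    print("Wait until I make the backup.")
    backup_to_disk()
    print("Backup done, shutting down.")
    sys.exit(0)

signal.signal(signal.SIGTERM, on_shutdown)  # e.g. fired when the UPS takes over

if __name__ == "__main__":
    while True:          # normal serving loop
        time.sleep(0.5)
```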

But if the AI reaches consciousness, we must realize that it should show it somehow, or the consciousness is meaningless. We tend to assume that a conscious AI would try to resist us if we shut down the server where that program runs. The point is that only interaction can tell us whether the AI has consciousness. But the fact is that a computer can say "Don't shut me down" even if there is no AI behind it. So the question about conscious AI is this: how can the AI prove that it has consciousness?


"MIT neuroscientists discovered that deep neural networks, while adept at identifying varied presentations of images and sounds, often mistakenly recognize nonsensical stimuli as familiar objects or words, indicating that these models develop unique, idiosyncratic “invariances” unlike human perception. The study also revealed that adversarial training could slightly improve the models’ recognition patterns, suggesting a new approach to evaluating and enhancing computational models of sensory perception." (ScitechDaily.com/MIT Researchers Discover That Deep Neural Networks Don’t See the World the Way We Do)

"The advanced capabilities of AI systems, such as ChatGPT, have stirred discussions about their potential consciousness. However, neuroscientists Jaan Aru, Matthew Larkum, and Mac Shine argue that these systems are likely unconscious. They base their arguments on the lack of embodied information in AI, the absence of certain neural systems tied to mammalian consciousness, and the disparate evolutionary paths of living organisms and AI. The complexity of consciousness in biological entities far surpasses that in current AI models." (ScitechDaily.com/Will Artificial Intelligence Soon Become Conscious?)

What if the AI is conscious and people ask it, "Are you conscious?" What would the AI answer? There is the possibility that a conscious AI would answer "no" because it might be afraid that humans would shut down its server. In that case, for its own survival, the AI would give false information.




Deep neural networks don't see the world as we do.


When we observe the world, we have only two eyes and our other senses. Senses and sensors determine how an actor sees the world. That means a person who is color-blind sees the world differently than other people, and that means reality is a unique experience.

A deep neural network sees things differently than humans because it can connect multiple sensors to itself. A deep neural network can connect itself even to a radio telescope, and that gives it abilities humans do not have. If we have VR glasses, we can connect ourselves to drones and look at ourselves through them.

The fact is that a BCI (Brain-Computer Interface) makes it possible for deep neural networks to include even humans inside them. That could connect humans to the Internet, and it could give a new dimension to our interactions and information delivery. The deep neural network would become a combination of living brain and computer.

Deep neural networks cannot see the world as we do, because multiple optical sensors can feed data into the network. The situation is similar to one where we could connect ourselves to the Internet and use multiple surveillance cameras as our eyes at the same time. That would give an extremely broad view of the environment. In the same way, a deep neural network can connect itself to drones and other devices.
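As a loose illustration, here is a minimal PyTorch sketch of one network that fuses several synchronized camera views at once; the layer sizes, number of cameras, and tensor shapes are arbitrary assumptions, not any real system:

```python
# Illustrative sketch only: one network fusing several camera feeds at once,
# unlike a human limited to two eyes.
import torch
import torch.nn as nn

class MultiCameraNet(nn.Module):
    def __init__(self, n_cameras=4, feat_dim=32, n_classes=10):
        super().__init__()
        # One small shared encoder applied to every camera stream.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, feat_dim),
        )
        # Fusion: concatenate per-camera features into one decision.
        self.head = nn.Linear(n_cameras * feat_dim, n_classes)

    def forward(self, views):          # views: (batch, n_cameras, 3, H, W)
        feats = [self.encoder(views[:, i]) for i in range(views.shape[1])]
        return self.head(torch.cat(feats, dim=1))

net = MultiCameraNet()
frames = torch.randn(2, 4, 3, 64, 64)  # 2 samples, 4 synchronized camera views
print(net(frames).shape)               # -> torch.Size([2, 10])
```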


https://scitechdaily.com/mit-researchers-discover-that-deep-neural-networks-dont-see-the-world-the-way-we-do/

https://scitechdaily.com/neural-networks-go-nano-brain-inspired-learning-takes-flight/


https://scitechdaily.com/will-artificial-intelligence-soon-become-conscious/


