Thursday, November 23, 2023

The new processor technology makes AI and morphing networks more intelligent.



The power of the computers determines the power of the AI. 


The most powerful morphing network that can run AI is a quantum computer network, where networked quantum computers operate as one entirety. Quantum computers rely on binary systems: binary gate systems deliver information into the quantum system, and the binary computers that pre-process information for the qubits can also be networked like regular computers. 

Google's DeepMind AI can make better weather forecasts than single supercomputers. But supercomputers do nothing without operating systems and software. AI is software whose power is tied to the platform's, or hardware's, ability to run complicated programs. 

An AI that interconnects multiple regular PCs into one entirety can be more powerful than systems or algorithms running on a single supercomputer. And the AI algorithm can also interconnect supercomputers into a network-based entirety, which increases their power. In morphing networks, the AI can have multiple layers in which it operates. 

The thing that makes AI-controlled morphing networks powerful is that the AI can interconnect quantum computers into one entirety. The AI-driven morphing network can also operate as a virtual quantum computer. In that model, the AI drives parts of the problem into the binary computers. In a very complicated series, each binary system must know only some values from that series. 

Then it can share a mission with each member of the network by cutting the series, using those values as milestones. The computers can then handle their parts of the calculation at the same time. But that requires that the AI has the values. The AI can be used to cut the calculation into pieces and make it lighter for the network. 
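As a toy illustration of that milestone idea (the function names, and the use of threads to stand in for networked computers, are my own assumptions; the shared problem here is just a long sum):

```python
from concurrent.futures import ThreadPoolExecutor

def split_by_milestones(series, milestones):
    """Cut a long series into chunks at the given milestone indices."""
    bounds = [0] + sorted(milestones) + [len(series)]
    return [series[a:b] for a, b in zip(bounds, bounds[1:])]

def solve_in_parallel(series, milestones, workers=4):
    """Hand each chunk to a separate worker, then combine the partial results."""
    chunks = split_by_milestones(series, milestones)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, chunks))  # each "computer" sums its own chunk
    return sum(partials)

series = list(range(1, 101))
print(solve_in_parallel(series, milestones=[25, 50, 75]))  # 5050
```

The milestones play exactly the role the text describes: they let the coordinator cut one heavy series into independent pieces that the network members can compute at the same time.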



*****************************************************

Intelligent technology can extend battery lifetime in laptops and mobile systems. 


In laptops, an intelligent operating system and kernel can shut down a couple of cores in a multicore processor. In that model, the variable microchip uses only the power it requires for solving the current problem. The problem is how to make the microchip know that it needs more power and more cores. The value that the system can use is time: if the solution runs over a certain time limit, the system calls more cores to work on the problem. 

If the problem is hard, the system can connect more cores to the operation. Such a variable architecture could extend battery life. But it requires an intelligent kernel that can communicate with AI-based algorithms. When problems are not complicated, the AI can disconnect some cores in the multicore processor, or share other work among those cores. 
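A minimal sketch of that time-based scaling rule, assuming a hypothetical `task(cores)` solver that reports whether it finished; the doubling policy and all names are illustrative, not a real kernel interface:

```python
import time

def solve_with_scaling(task, max_cores=8, time_budget=0.5):
    """Start with one core; wake up more cores whenever the task
    fails or overruns its time budget (toy model of the rule above)."""
    cores = 1
    while cores <= max_cores:
        start = time.monotonic()
        done = task(cores)
        elapsed = time.monotonic() - start
        if done and elapsed <= time_budget:
            return cores        # this many cores were enough
        cores *= 2              # over budget or unsolved: add cores
    return max_cores

# toy task: succeeds only when at least 4 cores are active
print(solve_with_scaling(lambda cores: cores >= 4))  # 4
```

The point of the sketch is the feedback loop: the elapsed time is the only signal the scheduler needs to decide between saving power and adding cores.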


**********************************************************


There is no limit to the morphing network's size. The morphing network can be interconnected computers, or it can be multiple nano-size microprocessors inside multicore processors. 


While operating with regular problems, the network-based AI can interconnect, for example, the computers in one classroom into an entirety. If that system's power is not enough, the AI can ask for assistance from other computers. So the AI extends its network and calculating capacity. In that process, the AI sends virtual sockets to other systems. And when the mission is done, the AI can remove itself from those systems' memory. 

And that allows it to call more calculation power into the operation. The AI can also ask for help from higher-class networks like supercomputers. It can send problems to supercomputer networks. And if it gets access, the AI can copy itself and its program code to the supercomputers and network them. In principle, the AI can network all computers on Earth into one entirety, if those computers are connected to the net. 

Complicated algorithms require complicated and powerful machines. That's why R&D teams work with photonic and quantum processors, and why neurologists are working hard on the brain and the learning process. Heavy and complicated algorithms require high-power computers and new operational models. If an AI-driven robot operates on the streets, its sensors drive a very large data mass to the computer, and those mobile systems have limited operating abilities. 

The main problem with robots and computers is that complicated code requires powerful microchips. Powerful microchips require lots of electricity, and that limits the battery's operational time. The answer to that problem can be an intelligent operating system that turns the microprocessor into power-saving mode. In a multicore processor, the system can shut down a couple of cores if it doesn't require all its power. 

https://www.space.com/google-deepmind-ai-weather-forecasts-artificial-intelligence

Internet and social relationships.



The Internet has a bigger influence on society than we thought. It modifies our brains and ways of thinking. 


The Internet and electronic devices affect the brain. The fact is that the brain doesn't fully separate physical reality from the reality that computers offer. This means the person might realize that the things they see are virtual reality, but deep in the subconscious, the brain can handle data that comes from the screen as a real thing. When people learn something, that process changes their neural networks. 

Our brains are excellent tools. They are the reason why we adapt to many conditions. The latest thing that humans face, and that requires adaptation, is the Internet. When we learn something, our brain remakes its neural connections. All actions we take affect those connections, and that's why the internet modifies our brains. 

Computer games and the Internet require learning. That much is clear, and everything that we see teaches us something. The Internet modifies our way of seeing the world. That is one of the things that we must realize.


"A comprehensive review of 23 years of neuroimaging studies highlights the significant, long-term effects of screen time on children’s brain function, including both negative and positive outcomes. The study calls for innovative policies to support children’s brain development in the digital age, while acknowledging the complexity and evolving nature of this research field." (ScitechDaily.com/New Research: Children’s Brains Are Shaped by Their Time on Tech Devices)


When we think about the internet's role in evolution, we must realize that people who use the internet as a search tool for partners use similar categories for searching in that virtual world as people who date in the real world. But there is one big difference between those worlds: the internet is a threshold that a person must cross if they want to date online. 

In the same way, a person who dates in the real world must step inside a bar or some other dating place. But the difference is that the Internet attracts different types of people than a disco does. If we think about AI and its effectiveness in searching for "perfect partners," AI can sort a huge mass of data. If the AI can use social media platforms, it can build a user profile and find out whether the data that a person gave to the dating service matches reality. 

We must realize that people are not always what they claim. That means things like dating applications should have some kind of tool that can detect whether a person is dangerous. If a profile seems empty or fake, there must be some reason why it was made. And in some cases, the person who uses fake profiles is dangerous. 

But then we must realize that computer games and similar things are also media platforms, and their discussion channels are also social media. The problem is that similar people play similar games, and sometimes opinions on those platforms turn quite radical. 

And that affects a person's behavior and neural pathways. Those platforms offer a social environment where people can discuss things with other people. And that can cause very interesting, and sometimes negative, things in the human mind. The thing is that the internet is an excellent tool for supporting both positive and negative advances. 

The user of the net decides what kind of things they want to do. On the other hand, developers decide what tools they offer on the net. We decide what controls the net. Is it money or some other thing? Do we offer positive things like education on the net? Or do we offer something that we don't even dare to name? 


https://scitechdaily.com/new-research-childrens-brains-are-shaped-by-their-time-on-tech-devices/

Wednesday, November 22, 2023

Self-learning networks can replace current morphing networks.



When we talk about morphing and self-learning networks, we must realize that the difference between them is very thin. So what does a neural network do when it learns new things? It connects new observations with databases that involve actions that can respond to those observations. 

So in that kind of process, the neural network just creates a database from the information that its sensors give, then interconnects that database with another database. The system searches for action models that allow it to detect whether the thing that the sensors bring to the system requires a response. 


"Scientists at the Max Planck Institute have devised a more energy-efficient method for AI training, utilizing physical processes in neuromorphic computing. This approach, diverging from traditional digital neural networks, reduces energy consumption and optimizes training efficiency. The team is developing an optical neuromorphic computer to demonstrate this technology, aiming to significantly advance AI systems". (ScitechDaily.com/The Future of AI: Self-Learning Machines Could Replace Current Artificial Neural Networks)



The system will act like the human learning process. 


1) Sensor drives information into RAM (Random Access Memory). 


2) The system creates a database for that sense into RAM. 


3) The system searches whether there are pre-programmed models that match that short-term database. 


4) If the system finds a match, it acts as the pre-programmed database orders.

 

5) The system makes a decision: does it discard the short-term database, or does it save it and its connections? 


6) Maybe the system has two short-term memories. One involves a database that the system discards immediately. The other keeps the information longer, because the system might need to know whether a thing happens quite often. If it does, the system can store it in long-term, stable memory. 
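The steps above can be sketched as a toy two-tier memory, where an observation that repeats often enough is promoted from the short-term buffer to stable storage. The class, the threshold, and the return labels are illustrative assumptions, not a real neuromorphic design:

```python
from collections import Counter

class TwoTierMemory:
    """Toy model of the numbered steps: buffer rare observations
    short-term, promote frequent ones to long-term storage."""
    def __init__(self, promote_after=3):
        self.short_term = Counter()   # recent observations and their counts
        self.long_term = set()        # stable, promoted knowledge
        self.promote_after = promote_after

    def observe(self, item):
        if item in self.long_term:
            return "known"            # step 4: matched a stored model
        self.short_term[item] += 1    # steps 1-2: write the sense into RAM
        if self.short_term[item] >= self.promote_after:
            self.long_term.add(item)  # step 6: happens often -> keep it
            del self.short_term[item]
            return "promoted"
        return "buffered"             # step 5: keep it short-term for now

mem = TwoTierMemory()
for _ in range(3):
    result = mem.observe("red light means stop")
print(result)                              # "promoted"
print(mem.observe("red light means stop")) # "known"
```

The filtering benefit described later in the text falls out directly: anything that never repeats stays in the cheap short-term counter and can be discarded, so the long-term store holds only recurring patterns.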


The human brain is one of the self-learning networks. By researching it, researchers can model processes from the human brain on computers. In the next part of this text, you can replace "human brain" with the words "self-learning neural network". 


Sometimes researchers have introduced the idea that while sleeping, the human brain decides whether to save a memory in DNA for transfer to descendants, or to discard that memory. In this analogy, the brain's DNA is the ROM (read-only memory), and the neurons' electrical memory is the RAM (random-access memory).



An artificial self-learning system acts in a similar way to the human brain. In the brain, each neuron is a database. The system uses short- and long-term memories, so the senses do not drive anything straight into the nervous system. The nervous system makes databases about the senses, and those databases can exist from a couple of seconds to an entire lifetime. The reason why humans have short- and long-term memories is simple: it helps the brain filter out unnecessary information. 

And that saves space. Even if the human brain's ability to handle information is impressive, its data storage is always limited. In the same way, computers have impressive data storage: modern hard disks are very large, but if a system stores all the data it collects from multiple sensors, that data fills the storage quite soon. 

In this model, humans have two short-term and two long-term memories. When a human faces a new situation, the brain keeps that thing in memory for a couple of hours. During that process, the brain checks whether that thing happens often or was unique. Then the brain discards that database or sends it into long-term memory. And, in this idea, the brain decides whether to write the information into DNA and transfer it to descendants. 

When a human wakes up, the brain loads the necessary action models into operating memory. That makes sure the brain has the necessary information for the jobs it has to do during the daytime. 

If a self-learning neural network has a connection with other similar networks, that improves the learning process: the networks can advertise new databases to each other and share their action matrices. 


https://scitechdaily.com/the-future-of-ai-self-learning-machines-could-replace-current-artificial-neural-networks/

https://en.wikipedia.org/wiki/Random-access_memory

https://en.wikipedia.org/wiki/Read-only_memory


Thursday, November 16, 2023

The AI cracks nature's secret codes.




"Gladstone Institutes researchers found that the Christchurch mutation in the APOE gene protects against the effects of APOE4, the primary risk factor for Alzheimer’s. This discovery, showing reduced neurodegeneration in Alzheimer’s models, opens up new possibilities for treatment and was published in Nature Neuroscience". (ScitechDaily.com/“Christchurch Mutation” – How Good Can Overpower Evil in Alzheimer’s Disease Genetics)


The AI cracks genetic codes, from Alzheimer's to cancer and species interaction. That makes it possible to uncover nature's secrets, and it opens the road to research that can revolutionize evolution theories. The AI can see how symbiosis and interactions form inside and between species. It can open up the secret communication between plants and animals. 


"Japanese researchers at Nagoya University have uncovered new aspects of the interaction between mast seeding plants like sasa bamboo and field mice. Their study reveals that mice behavior, influenced by species, environment, and season, plays a crucial role in seed dispersal and forest ecosystem health, challenging existing theories about seed storage and consumption. Credit: Reiko Matsushita" (ScitechDaily.com/Scientists Uncover Fascinating Relationship Between Mice and a Plant That Flowers Once a Century)


The AI cracks the cancer code. 


For quite a long time, researchers have seen that all cancer cells look a little bit the same. That leads to the conclusion that there is some common error in the DNA molecule that makes cancer cells look alike. By finding that error, researchers can find the weak point of cancer cells. The ability to locate that typical, common error in a cancer cell's DNA and shell proteins helps to find those cells. 

And maybe next-generation tools like microchip-controlled cyborg macrophages can find those cells and destroy them. Finding individual cancer cells before they create tumors would be a breakthrough in cancer cures, because individual cells are easier to destroy than tumors. But the problem is how to find those cells. Things like vaccines against cancer could also become real. 

"A team at UCLA has created an AI model that uses epigenetic factors to accurately predict patient outcomes in different cancer types. This innovative approach offers improved predictions over traditional methods and highlights the importance of epigenetics in cancer treatment and progression." (ScitechDaily.com/AI Cracks the Cancer Code: A New Era of Epigenetic Insights)


The AI found the cancer code. 


A breakthrough in the fight against cancer has been reached. The AI broke the genetic code that controls cancer, and that is one of the biggest breakthroughs in history. The AI has shown its abilities in all kinds of research work. This new step in AI and medical research means that the genetic error in cancer cells can be searched for and found in younger people. 

Then researchers can start to find methods to destroy cancer cells before they cause symptoms. The system can use genetically engineered bacteria or some other advanced tool to find those cancer cells, which can then be removed before they create tumors. The system that those developers used can be called a cancer cell's digital twin. The AI found many common details in genomes taken from different cancer types. And when you know your enemy, you can create counter-actions against that threat. 


"Scientists at UF Scripps Institute discovered two unique enzymes that could revolutionize the production of natural chemicals for medical purposes, including the development of new cancer treatments. This discovery also contributes to understanding the mechanisms behind potent compounds like tiancimycin A". (ScitechDaily.com/Unlocking Nature’s Secret: Scientists Discover Key to Potent Natural Cancer Treatment)


The AI opens a new way to see the secret code in nature, and that opens a new door to the secret world of other species. Sooner or later, that code may allow us to communicate with other species. 

The AI can be used to break nature's secret code. The interaction between species can help to find new ways to protect cultivated plants against unwanted insects and other animals. In some dreams, the system creates pheromones that tell insects that they should not eat the nutritious vegetables. 

But the same insects can eat the weeds. That is one way to use information that opens the door to the secret world of animals. Maybe in the future, we can read other animals' memories, and that will open a new way to see nature and its complicated interactions. 

"A neuroscience study reveals a connection between early life memory retention and autism-related brain development. By investigating maternal immune activation’s effect on memory, they found that early childhood memories are not lost but are difficult to retrieve. This insight could transform our understanding of memory processes and autism". (ScitechDaily.com/Unlocking Childhood Memories: The Role of Autism Brain States)


The AI can help to confirm childhood memories

The AI can help to confirm childhood memories. And that can help find answers to why some people get Alzheimer's, or plaque between neurons. And why are some autistic people so deeply autistic that they need full-time help, while other autistic people go to everyday jobs? Does some experience boost that difference? 

When we see autistic people, we ask why they are so different. Why do those otherwise nice and intelligent people have communication problems with other people? One answer is that the brain areas that produce speech are networked differently: those neurons are reserved for other purposes than they should be. Sometimes it is suggested that some genetic disorder causes the autism syndrome. 

But what causes the difference in those symptoms? Why do some people who have that condition show strong symptoms, while others are "only quiet"? Does some kind of experience in childhood make the person quiet? The problem is always how to confirm those memories. The AI can detect whether a person avoids something that happened in the past. In those cases, the person will not lie; the person just removes embarrassing or disgusting parts from the story, which can involve physical or psychological violence. 

The AI can help to find keys to make autistic people interact with other people. We know that autistic people's brains are networked differently than other people's brains. That raises questions: what causes that anomaly? Why are autistic people so good in some very narrow sectors, but have problems communicating with other people? Is that genetically heritable? 

Or is there some kind of experience behind that isolation? Some autistic symptoms may result from nobody speaking with the child. That causes anomalous networking in neural structures. If a child cannot speak in a way that causes positive reactions, the brain areas that control speaking will be networked again. The reason is simple: if the brain area that controls speech cannot perform the actions it should, it releases those neurons for other purposes. And that can cause the communication problems of autistic persons. 



https://scitechdaily.com/ai-cracks-the-cancer-code-a-new-era-of-epigenetic-insights/


https://scitechdaily.com/christchurch-mutation-how-good-can-overpower-evil-in-alzheimers-disease-genetics/


https://scitechdaily.com/unlocking-childhood-memories-the-role-of-autism-brain-states/


https://scitechdaily.com/scientists-uncover-fascinating-relationship-between-mice-and-a-plant-that-flowers-once-a-century/



https://scitechdaily.com/unlocking-natures-secret-scientists-discover-key-to-potent-natural-cancer-treatment/



Saturday, November 11, 2023

AI: the illusion of understanding.



When we ask AI to make something, it takes the list of allowed actions. We know that AI requires limits. Things like AI-created photos should be marked with a stamp. AI-created texts can be given to the plagiarism-detection software that universities and high schools should use. That makes it possible to detect AI-created texts: the plagiarism detector simply compares those texts, and that uncovers plagiarism and AI-created texts with quite good accuracy. 

Then the AI performs the requested action by connecting components that it takes from the net. This is how the AI generates images. When the AI makes a text like a letter or scientific article, it searches for a couple of sources from the net and then connects those sources. If the thing the user asks for is common, the AI gives good-looking answers. The AI uses search-engine solutions for selecting sources, and that makes it effective on common topics. But in cases where the requested thing is uncommon, like "Ramsay's numbers in mathematics," the AI can give an answer that handles a noble family's place in some club. 

Image created by Bing. 

Then the AI suggests that the user can search for telephone numbers using some net-based telephone catalog. This happens if the user forgets the word "mathematics." We can call this "Ramsay's problem." The user must give clear and well-cropped commands to the AI. If the user does not crop the commands, the AI can generate a very long answer over a very long time, and that answer could be wrong. This means that using the AI requires practice. 

This shows that the AI doesn't understand what it makes. The situation is similar to a 12-year-old kid making a scientific article by collecting good-looking sources and then connecting paragraphs from those sources together with copy-paste. 

The kid can take the first paragraph from the first source, the second paragraph from the second source, and so on. That makes it possible to create a good-looking article. The kid can even have a trusted-sources list. But the AI and that kid share the same problem: the sources can involve errors. 

The thing that helps to eliminate those "Ramsay problems" is for the AI to make a user profile. The AI could ask what the user does with it. If the user says they are a mathematician, that helps the AI to crop the candidate homepages to mathematical ones. The AI can also ask, when the person starts a session, whether it is for work or fun. 
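A minimal sketch of that profile idea: rank candidate sources by how much their titles overlap with the user's declared field. The keyword-overlap scoring and all the names here are my own simplification, not how any real assistant selects sources:

```python
def rank_sources(sources, profile_keywords):
    """Prefer sources whose titles share words with the user's
    declared field (a toy stand-in for profile-based cropping)."""
    def score(title):
        words = set(title.lower().split())
        return len(words & profile_keywords)
    return sorted(sources, key=score, reverse=True)

# the user declared themselves a mathematician
profile = {"mathematics", "graph", "combinatorics"}
sources = [
    "Ramsay family history",
    "Ramsey theory in combinatorics and graph colouring",
]
print(rank_sources(sources, profile)[0])
# "Ramsey theory in combinatorics and graph colouring"
```

With the profile in place, the ambiguous query lands on the mathematical source first, which is exactly the disambiguation the paragraph above asks for.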

The fact is that the AI requires precise and good language so that it can create answers. Even if the AI is an excellent tool for work and fun, people who use the AI should know how to use it. They must also have the ability to estimate whether the sources the AI uses are useful. If the user doesn't know how to ask questions or give commands, that makes the AI useless. 


https://scitechdaily.com/the-illusion-of-understanding-mit-unmasks-the-myth-of-ais-formal-specifications/

For the first time in history, researchers trapped electrons in 3D crystals.


Sometimes crystal skulls and Stonehenge have been introduced as primitive quantum computers. In the crystal skull case, the operator puts crystal skulls in a ring around one skull or a crystal spike, and then that formation supposedly operates as qubits. 

The fact is that trapping electrons requires a special 3D structure, and that means crystal skulls cannot operate as quantum computers. The writing about Stonehenge is below this text. 

The crystal brain. 

The ability to store and trap electrons in 3D crystals opens new paths to superconductivity and quantum computing. Flowing electrons can form excellent superconduction, but the problem is how to trap those electron chains in position. The ability to trap electrons in crystals is one answer to that problem. 




"The rare electronic state is thanks to a special cubic arrangement of atoms (pictured) that resembles the Japanese art of “kagome.” Credit: Courtesy of the researchers." (ScitechDaily.com/Unlocking Superconductivity: MIT Physicists Trap Electrons in a 3D Crystal for the First Time)




But another way to use electrons trapped in crystals is as quantum computers. The electrons can be put into quantum entanglement in that 3D structure, and that can form compact and powerful quantum computers used to control compact-size robots and other things. 

The quantum brain could be a more effective system than any quantum system before. The model for those systems is a crystal, or quartz crystal, where electrons or photon-electron combinations are trapped. The idea of the crystal brain is "stolen" from the famous crystal skulls. Electrons can be trapped in the quartz crystal. 

The piezoelectric attribute of quartz crystal makes it possible to drive information to those electrons. The system can transmit information to those crystal brains with mechanical stress that transforms into electricity in the quartz. Those things raise the question of who created the quantum computers. And the fact is this: we can use things without knowing their purpose.

https://scitechdaily.com/unlocking-superconductivity-mit-physicists-trap-electrons-in-a-3d-crystal-for-the-first-time/



In some theories, Stonehenge is the world's first computer. 

If an iron stick lies in the N/S direction between the magnetic poles, that position magnetizes it. Maybe Stonehenge was a system whose purpose was to magnetize iron. That could have been an impressive thing in the prehistoric era. And maybe the change in the magnetic north pole's position caused Stonehenge to lose its magic power, which made its users reject it. 

The thing is that Stonehenge might have had many purposes. There is the possibility that the same system acted as a calendar and a ceremonial place. When we think about the positions of the stones, we must remember the magnetic poles, which are required for magnetizing metals and maybe crystals. The magnetic poles have changed position since the times when Stonehenge was operational. 

And maybe the loss of magic caused Stonehenge to be rejected. If Stonehenge's creators used it to magnetize iron, they needed to know where the magnetic north pole was. If the metal sticks are not in line between the N/S poles, the system cannot magnetize iron. 

In this wild theory, the druids, or whoever operated Stonehenge, whose purpose remains unknown, used that megalithic stone ring as a tool that allowed them to connect their brains. 

Somebody has introduced this as the model of a high-power computer that uses natural electricity. So I use the name Stonehenge for this system, and its users are called druids.  



Sometimes somebody sees some kind of qubit in Stonehenge's structure. The idea is that information is brought into the system through the heel stones and port stones, and the system harvests natural electricity for its operation. But those things are only visions. The fact is that a lot of ancient wisdom was ahead of its time, and that causes theories about Stonehenge's purpose. 

The fact is that the megalithic structure had some kind of purpose; it was not made for fun. And because the stones for that stone ring came from a long distance, that raises the idea that their purpose was to act as a resonance tool. Because those stones were far from their origin, and their chemical construction differs from local stones, resonance impulses sent through them can be separated from the environment. 



The system that those operators used could be acoustic, or based on headdresses that transmit data to their brains in electrical form. That would explain the different colors of the stones: each group of operators is marked with its own color. Then the system shares data with those operators in pieces, which makes the system a very effective qubit. 

Then each group returned the answer to its part of the problem to the chorus, and each singer sang the answer that their color had been given, in turn. That allowed the druids to collect the solution. 

The question is: was Stonehenge analog or did it use some acoustic or even electric method to transmit data into that ancient qubit? 

1) The operators could use paper notes. The papers were cut into pieces, and every circle of druids worked out the solution to its part of the problem. Then the answers were collected in the middle of the system. 

2) In the acoustic version, the system sends an oscillation into the operators' headdresses. Then that acoustic resonance system transmits data to the operators' ears. 

3) Piezoelectric crystals could transmit data to the operators' brains, and then the system could work in a highly secured way. In those two last possibilities, the resonance guarantees that outsiders cannot hear the secrets. 

In some visions, Stonehenge is the world's first computer. That theorem seems very abstract, even ridiculous. But then we can start to look at the Stonehenge diagrams. The Altar Stone lies in the north-south direction. And if we think that Stonehenge used natural electricity, the electric collector should lie between the magnetic north and south poles, not the geographic ones. Then the information must be driven into something that can store the wave movement. 

Then the crystals on the stones would receive that wave movement, and the druids, or whoever operated the system, could increase its power by giving mechanical soundwave stimulation to those piezoelectric crystals. 

The idea is that the system acts like a qubit. In some wilder thoughts, the druids connected their headdresses to the crystal where the "program" was stored in the form of waves, transmitted to crystals in their brains. The idea is that the headdresses transmit that data to the druids' or operators' brains. 

That would explain the colors of the stones. The colors allow the sorting and sharing of data to each receiver group, and they also guarantee that one operator cannot tell the operand to other people. Then the data travels between the druids. Finally, the system collects the information into one package. That can happen by using a chorus where each circle sends its answer to the singer who represents a certain ring or color in the system. Then each singer tells their part of the answer in series. 


Friday, November 10, 2023

Digital twins can help to create treatment programs and protect biodiversity.



The digital Gaia. 


The new quantum computers make it possible to create digital twins of complicated things like networks in biological environments. Theoretically, it is possible to create a digital twin of the entire Earth. This simulation links all environments and species into one virtual, AI-controlled entirety. That allows computers to simulate biological processes from the cell level up to complicated interspecies networks. 

A digital twin is a digital simulation of a physical thing. Digital twins have been used for a long time in the aircraft industry. But the new and powerful computers also make it possible to create digital twins of biological processes, and even humans can have digital twins. In such a system, the biological processes between cells and cell groups are simulated digitally. And maybe in the future, interspecies networks and interactions can be simulated by using digital twins of animal and human groups. 

In network-based models, the system can interconnect a theoretically unlimited number of digital twins. It is possible to link digital models from single cells up to the entire global ecosystem in a networked entirety. That allows simulating, for example, how some chemical load affects a species' behavior and environmental tolerance. Those simulations are extremely complicated, and they require a lot of highly accurate data. However, it is possible that in the future, researchers can create a complete model of the interactions between species and individual actors. 
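The idea of chaining digital twins across scales can be sketched in a few lines. Everything here is an illustrative assumption: the `Twin` class, the toy update rules, and the cell-to-organism-to-ecosystem links are invented for demonstration, not taken from any real digital-twin framework.

```python
# A minimal sketch of chaining digital-twin models across scales.
# All class names and update rules are illustrative assumptions.

class Twin:
    """A toy digital twin: a named state plus an update rule."""
    def __init__(self, name, state, update):
        self.name = name
        self.state = state
        self.update = update  # function: (own state, inputs) -> new state

def step(twins, links):
    """Advance every twin one step, feeding linked twins' states as inputs."""
    snapshot = {t.name: t.state for t in twins.values()}
    for name, twin in twins.items():
        inputs = [snapshot[src] for src in links.get(name, [])]
        twin.state = twin.update(twin.state, inputs)

# Cell twin: nutrient level decays; organism twin averages its cells;
# ecosystem twin sums its organisms. Purely illustrative dynamics.
twins = {
    "cell": Twin("cell", 1.0, lambda s, ins: s * 0.9),
    "organism": Twin("organism", 0.0, lambda s, ins: sum(ins) / max(len(ins), 1)),
    "ecosystem": Twin("ecosystem", 0.0, lambda s, ins: sum(ins)),
}
links = {"organism": ["cell"], "ecosystem": ["organism"]}

for _ in range(3):
    step(twins, links)
print(twins["cell"].state, twins["organism"].state, twins["ecosystem"].state)
```

The point of the sketch is the linking: each model only needs the states of the twins it is wired to, so the network can, in principle, grow to any number of levels.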

The AI participates in that research, and it can analyze things like a person's behavior. It can then compile things like DNA samples of people who behave a certain way in certain situations. That uncovers whether the behavior is genetically hereditary or learned. The AI can also look at people's social backgrounds and see how those things affect behavior in certain situations. 

In the most fascinating models, complete data makes it possible to create a digital twin of the human brain. If the collected data about neurons and their interaction through neurotransmitters is wide enough, it is possible to create simulations that can predict human behavior. This type of simulation requires complete datasets about human behavior in certain situations. 

And maybe quite soon, every single human will have a digital twin in a computer's memory. The digital twin can be used to test a person's medical condition. And it can be used to find out whether there are unnecessary risks in that person's life. Things like a hereditary tendency to get cancer require a certain type of lifestyle change, like stopping smoking. 


"Researchers have identified early indicators of Parkinson’s disease in eye scans, years before symptoms occur. This could lead to pre-screening tools and preventive measures against neurodegenerative diseases through the emerging field of oculomics." (ScitechDaily.com/Peering Into the Future: Eye Scans Unveil Parkinson’s Disease Markers 7 Years Early) 

Researchers can connect this system to regular digital cameras. That allows mobile phone and laptop cameras to be used for this kind of diagnosis. The system detects the form and reactions of the pupil and looks for anomalies. The AI can then warn the person that a medical examination is needed. 
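A toy version of that pupil-shape check can be written with one geometric measure: circularity, which is 1.0 for a perfect circle and smaller for irregular contours. The 0.85 threshold and the idea of flagging from a single measurement are assumptions made for this sketch; a real screening tool would need clinical validation.

```python
import math

# Illustrative sketch: flag pupil-shape anomalies from contour measurements.
# The threshold is an assumption for demonstration only.

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, smaller for irregular shapes."""
    return 4 * math.pi * area / (perimeter ** 2)

def flag_anomaly(area, perimeter, threshold=0.85):
    """Return True when the pupil contour is notably non-circular."""
    return circularity(area, perimeter) < threshold

# A circle of radius 2: circularity is exactly 1.0
r = 2.0
print(flag_anomaly(math.pi * r**2, 2 * math.pi * r))   # -> False (regular pupil)
print(flag_anomaly(10.0, 16.0))                        # -> True (irregular contour)
```

In a real pipeline the area and perimeter would come from a contour detected in the camera image; here they are passed in directly to keep the sketch self-contained.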

The AI can search for anomalies in the DNA and cell structures of cell samples to find things like cancer precursors. Biometric sensors can detect things like neural damage. An eye scanner can see Parkinson's disease years before the symptoms. In those cases, the system detects that the eye's structures are not working as they should. 

The AI can chart the similarities between people who have certain diseases. Then the AI can find reasons why some diseases turn active in some people while other people with the same genetic structures don't show the same symptoms. Things like the connection between certain medicines and neural damage are easy to chart using that method. 

The AI can observe the growth of muscle mass. It can search samples for marks of high metabolism in neural areas. That makes it possible to warn people that they may develop Alzheimer's. The AI can also detect anomalies in bones. 

The AI enables many new things. One of the most interesting applications the AI makes possible is the digital twin of a human. If the AI knows a person's DNA, lifestyle, and the environment where that person lives, it can make a virtual or digital twin that it can use to simulate what kinds of risks the person faces in life. The AI can warn if a person gets too much UV radiation or eats too many unhealthy things. The digital twin can warn the person if some lifestyle-caused disease is coming. How accurate those warnings and diagnoses are depends on how many parameters the AI can use. The digital twin is a good tool when a person's health is analyzed and the computers try to find out whether the person belongs to some risk group. 

But things like biodiversity and the environment can also have digital twins. It's possible to create macro-level digital twins. That means natural entireties like forests, houses, societies, cities, lakes, and oceans can have digital twins. In those cases, the system can simulate interactions between species and the non-biological environment. 

In these cases, the digital twin can be used to model environmental tolerance when some new load appears, like a factory. Environmental and social simulations help to predict problems, and predicting a problem helps to cure it. The accuracy of those simulations depends on the accuracy and number of the variables. The new powerful computers make it possible to create simulations of cell-level processes and then continue the model to cell groups. The cell groups form individual organisms, and the networked model can continue to interactions between species. Finally, the size of those simulations can be global. 


https://scitechdaily.com/a-new-frontier-in-pediatric-care-ai-driven-growth-charts-for-muscle-mass/


https://scitechdaily.com/high-metabolism-scientists-uncover-new-early-sign-of-alzheimers-disease/


https://scitechdaily.com/iceberg-mapping-at-lightning-speed-ai-is-10000-times-faster-than-humans/


https://scitechdaily.com/neuronal-death-protein-new-research-shows-how-sleep-deprivation-can-damage-the-brain/


https://scitechdaily.com/peering-into-the-future-eye-scans-unveil-parkinsons-disease-markers-7-years-early/



Monday, November 6, 2023

MIT's breakthrough in neural science helps to create AI and deep neural networks for autonomous learning.



One of the most impressive and common deep neural networks is the human brain. And when researchers work with deep networks, they learn more about the human brain. That gives researchers and developers the ability to transfer things like how the brain works into artificial neural networks. MIT researchers decoded the human learning process, and that gives the ability to mirror that process in deep neural networks. The new autonomous learning model is based on self-supervised learning, which helps to mirror learning processes in deep learning networks. The breakthrough can revolutionize the self-learning process in deep neural networks. 

In the self-supervised model, only part of the neural network participates in the learning process, and the other side of the deep neural network supervises that process. The idea is that the deep neural network operates as an entirety, where part of the entirety handles the learning process. 
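The core of the self-supervised idea is that the training signal comes from the data itself: the system hides part of the input and learns to predict it. The tiny linear predictor and the evenly spaced triples below are illustrative assumptions, not MIT's actual setup.

```python
import random

# Sketch of self-supervision: hide the middle value of each triple and
# train a tiny linear model to recover it from its neighbors. Because the
# triples are evenly spaced, the hidden middle is exactly the average,
# so the weights should approach [0.5, 0.5].

random.seed(0)
w = [0.0, 0.0]          # weights for (left neighbor, right neighbor)
lr = 0.05

for _ in range(2000):
    start, stepsz = random.uniform(-1, 1), random.uniform(-1, 1)
    left, masked, right = start, start + stepsz, start + 2 * stepsz
    pred = w[0] * left + w[1] * right      # predict the hidden middle value
    err = pred - masked                    # self-generated supervision signal
    w[0] -= lr * err * left                # plain SGD update
    w[1] -= lr * err * right

print([round(x, 2) for x in w])
```

No human labels appear anywhere in the loop; the "supervising" side of the system is simply the masked value the data already contained.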

A digital twin is a simulation that can have the same role as imagination in our brain. In that model, the self-supervising system can use a virtual model, or digital twin, quite easily. One part of the system makes the simulation, and the other part oversees that process. The virtual or digital twin can save time and let the computer simulate things and test their operational abilities in the virtual world. That makes the R&D process cheaper, because there is no need to make a physical prototype every time the system must create something new. 

"MIT research reveals that neural networks trained via self-supervised learning display patterns similar to brain activity, enhancing our understanding of both AI and brain cognition, especially in tasks like motion prediction and spatial navigation." (ScitechDaily.com/MIT’s Brain Breakthrough: Decoding How Human Learning Mirrors AI Model Training)


Image below) A new Chinese-built analog microprocessor. This processor uses a network-based architecture and should be the most powerful analog chip, or analog deep neural network, in the world. 






"a, The workflow of traditional optoelectronic computing, including large-scale photodiode and ADC arrays. b, The workflow of ACCEL. A diffractive optical computing module processes the input image in the optical domain for feature extraction, and its output light field is used to generate photocurrents by the photodiode array for analog electronic computing directly. EAC outputs sequential pulses corresponding to multiple output nodes of the equivalent network. The binary weights in EAC are reconfigured during each pulse by SRAM, by switching the connection of the photodiodes to either V+ or V− lines. The comparator outputs the pulse with the maximum voltage as the predicted result of ACCEL. c, Schematic of ACCEL with an OAC integrated directly in front of an EAC circuit for high-speed, low-energy processing of vision tasks. MZI, Mach–Zehnder interferometer; D2NN, diffractive deep neural network ." (TomsHardware.com/

ACCEL = all-analog chip combining electronic and light computing

ADC = analog-to-digital converter

EAC = electronic analog computing

OAC = optical analog computing

SRAM = static random-access memory
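The EAC stage described in the quoted caption can be caricatured in software: each photodiode current is switched onto a V+ or a V- line by a binary weight, the lines accumulate, and a comparator picks the output node with the highest voltage. The currents and weights below are made-up illustrative numbers, not values from the ACCEL paper.

```python
# Rough software caricature of the binary-weight analog accumulation
# described for ACCEL's EAC stage. All numbers are invented.

def eac_output(photocurrents, binary_weights):
    """binary_weights[node][i] is +1 (V+ line) or -1 (V- line)."""
    voltages = []
    for weights in binary_weights:
        v_plus = sum(c for c, w in zip(photocurrents, weights) if w > 0)
        v_minus = sum(c for c, w in zip(photocurrents, weights) if w < 0)
        voltages.append(v_plus - v_minus)      # differential line voltage
    return voltages.index(max(voltages))       # comparator: argmax node

currents = [0.2, 0.9, 0.1, 0.4]                # from the photodiode array
weights = [
    [+1, -1, +1, -1],   # node 0
    [-1, +1, -1, +1],   # node 1
]
print(eac_output(currents, weights))           # -> 1
```

The real chip does this with charge on physical lines and SRAM-reconfigured switches; the point here is only that a binary-weight dot product plus an argmax is the whole classification step.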



Digital twins are the AI's imagination. 


A digital twin requires precise and accurate information on how the system should act. And almost every physical thing can have a digital twin. The digital twin can simulate how some molecules interact at certain temperatures. However, it requires accurate information on how those molecules react to electromagnetic, pressure, or chemical stress. 

The computers of tomorrow have an imagination. They can use the digital twins of some processes and then change the components to make the process more effective. The idea is simple to introduce using things like combustion engines as an example. The combustion engine runs in a controlled environment. In that environment, the AI records the values that the machine gives. That virtual model is called a digital twin. 

Then the system can change components like fuel injection and turbochargers in that digital model to make the system more effective. That kind of simulation, called a digital twin, can involve actions or machines, and digital twins are used in fighter aircraft development and fusion test simulations. Research laboratories like CERN also use digital twins to improve their results, and there is a digital twin of the LHC and other particle accelerators. So maybe we will get digital twins of new microchips and even of the human brain. The digital twin could be the virtual character that the real robot-control software controls. The digital twin can be used to make virtual tests for almost everything. 
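The engine example can be sketched as a parameter sweep over a toy efficiency model. The quadratic curve and its peak at a timing of 10 degrees and 1.2 bar of boost are invented for demonstration; a real twin would be fitted to the values recorded from the physical machine.

```python
# Illustrative only: a toy "digital twin" of an engine as a simple
# efficiency curve, used to test settings virtually before touching
# hardware. The model and its constants are invented.

def engine_efficiency(injection_timing, boost):
    """Toy model: efficiency peaks at timing=10 deg and boost=1.2 bar."""
    return 0.40 - 0.001 * (injection_timing - 10) ** 2 - 0.05 * (boost - 1.2) ** 2

# Sweep candidate settings in the virtual model and keep the best.
best = max(
    ((t, b) for t in range(5, 16) for b in [1.0, 1.1, 1.2, 1.3]),
    key=lambda p: engine_efficiency(*p),
)
print(best)   # the twin suggests settings before any physical prototype
```

The cost saving the text describes comes from exactly this loop: dozens of virtual component changes are evaluated for the price of a function call each.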

The new analog and photonic microchips will be tested by using virtual or digital twins. And that saves work hours and money. Aircraft bodies and their abilities can be tested by using digital twins. Holograms can be used to visualize the radio wave impacts and reflections from those hypersonic bodies' virtual models. And designers can use those digital tools to calculate the heat effect that the atmosphere causes. 


https://home.cern/news/news/knowledge-sharing/digital-twins-cern-and-beyond


https://www.ibm.com/topics/what-is-a-digital-twin


https://interestingengineering.com/innovation/new-microchip-material-is-10-times-stronger-than-kevlar


https://www.tomshardware.com/tech-industry/semiconductors/chinas-accel-analog-chip-promises-to-outpace-industry-best-in-ai-acceleration-for-vision-tasks


https://ts2.space/en/introducing-archax-the-3-million-japanese-robot-revolutionizing-work/


https://en.wikipedia.org/wiki/Digital_twin


Saturday, November 4, 2023

Researchers created self-healing plastic material.


 

Plastics are full of lipids. A lipid molecule group looks a little bit like a zipper, and a lipid molecule can act like one: one side can be pulled apart from the other, and then the sides can reconnect. That ability makes lipids able to create self-healing materials. The two sides of a lipid molecule are connected by structures that look like tweezers. Nano-systems can use single lipid molecules as miniaturized tweezers in nanotechnology. However, large groups of those molecules can be used as materials that can heal themselves. 

The structure of those molecules makes it possible to create a plastic layer that can heal itself. And now researchers have made that self-healing ability real. The new self-healing plastic can fundamentally change many things, like underwater craft and protective gear. Self-healing plastic can also be used to create clothes that fix themselves. But those kinds of materials can also be used to create things like microchips and electric circuits that can self-assemble. 



"University of Tokyo researchers have developed a versatile new plastic called VPR, which is stronger, more stretchable, and self-healing through heat compared to traditional plastics. It can be reshaped at high temperatures and partially biodegrades in seawater. This innovative material could revolutionize resource recirculation and waste reduction in various industries, contributing to the achievement of Sustainable Development Goals. It offers enhanced durability, faster shape recovery, and efficient chemical recycling, and the team is exploring practical applications in engineering, manufacturing, medicine, and fashion. (Artist’s concept)". (ScitechDaily.com/Scientists Develop Stronger, Stretchier, Self-Healing Plastic)

"Green marks the spot where a fissure formed, then fused back together in this artistic rendering of nanoscale self-healing in metal, discovered at Sandia National Laboratories. Red arrows indicate the direction of the pulling force that unexpectedly triggered the phenomenon. Credit: Dan Thompson, Sandia National Laboratories" (ScitechDaily.com/Not Science Fiction: Scientists Around the World Shocked by Self-Healing in Metal)

"VPR: A stronger, stretchier, self-healing plastic" (Phys.org/VPR: A stronger, stretchier, self-healing plastic)




Above) Lipid molecule. Note the tweezer-looking structure that keeps the two layers of the molecule as one entirety. 

The idea is this: the assembler loads microchips and electric wires onto the self-healing plastic. That plastic then makes those microcircuits less vulnerable than they would be without it. Researchers can use that material to fix microcircuits, or to assemble new components on them. Those materials can make many previously impossible things possible. Self-healing materials can be used in spacecraft and aircraft to fix damage that those systems take. 

If the aircraft's outer shell is made of this magic material, it can make the aircraft safer than ever before. This material can also cover things like submarines and ships. There are no limits to the use of that kind of material. Those new nanomaterials can be used in every product where materials that can fix themselves are needed. 

Researchers can combine that self-healing plastic with self-healing metals. The outer layer could be made of self-healing plastic, and the inner layer can contain self-healing metal. That kind of composite material can turn into a very flexible combination. There could be multiple uses for those materials, one of them being bullet-proof vests and armored vehicles. But they can also be useful in everyday tools like eyeglasses and other things. 


https://www.chromatographyonline.com/view/analysis-of-lipid-nanoparticles


https://phys.org/news/2023-11-vpr-stronger-stretchier-self-healing-plastic.html


https://scitechdaily.com/scientists-develop-stronger-stretchier-self-healing-plastic/


https://scitechdaily.com/not-science-fiction-scientists-around-the-world-shocked-by-self-healing-in-metal/

https://en.wikipedia.org/wiki/Lipid

Friday, November 3, 2023

First-time wireless device used to make non-magnetic material magnetic.


Theoretically, it's not difficult to make a magnetic effect in a non-magnetic material. The system that makes the magnetic effect should only turn all the atoms in the targeted object the same way. That means the north poles of the atoms must turn in the same direction. And then magnets can get an effect on those atoms. 

The reason why some materials are magnetic and some are not is that there is entropy in non-magnetic materials. The entropy causes the atoms to be topsy-turvy in non-magnetic materials, whereas the order of the atoms makes iron magnetic. The electron shells of iron atoms can create spins, or magnetic domains. Those things can form magnetic dipoles, and the magnetic dipoles make the magnetic effect of iron possible. 
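The ordered-versus-disordered point can be made numerically: net magnetization is just the average of the dipole orientations. Reducing each dipole to +1 or -1 is the usual Ising-picture simplification, adopted here as an assumption.

```python
import random

# Random (high-entropy) spins cancel out; aligned spins add up.
# Spins are simplified to +1/-1 as in the Ising picture.

def magnetization(spins):
    return sum(spins) / len(spins)

random.seed(1)
disordered = [random.choice([+1, -1]) for _ in range(10000)]  # non-magnetic
ordered = [+1] * 10000                                        # iron-like order

print(round(magnetization(disordered), 2))   # near 0
print(magnetization(ordered))                # exactly 1.0
```

This is why an outside effect that flips the disordered spins into alignment, as the article below describes, turns a non-magnetic sample into one that reacts to magnets.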



"Experimental setup. A thin layer of cobalt nitride (CoN) in a liquid with ionic conductivity. The voltage is applied to the liquid via two platinum plates. Credit: Zheng Ma" (Phys.org/First-ever wireless device developed to make magnetism appear in non-magnetic materials)


Wikipedia/Ferromagnetism

Theoretically, it is possible to create those magnetic dipoles in other materials. However, other materials require an outside effect to form magnetic dipoles in their structures. Things like X-rays and powerful magnetic fields have been tested to make things like plastics react with magnets. If that kind of radiation is possible to create, it would make tractor rays real. The fact is that a UFO-scale tractor ray has not been developed yet. But this wireless device that brought a magnetic effect to a non-magnetic material is a step in that direction. 

Researchers at the University of Barcelona used cobalt nitride in the first experiments to create a non-magnetic material that reacts to magnets. In those tests, the researchers used platinum electrodes, and the device itself operated wirelessly. The test proved that non-magnetic materials can be turned to react to magnets. 


https://phys.org/news/2023-10-first-ever-wireless-device-magnetism-non-magnetic.html

https://en.wikipedia.org/wiki/Ferromagnetism

https://www.uu.nl/en/news/why-is-iron-magnetic-unlike-other-metals

Thursday, November 2, 2023

AI requires very effective computers.



The new breakthroughs in microchips and quantum research are making it possible to create new and powerful computers. The ability to create quantum entanglement between photons by using atoms is a new way to create qubits, and the new problem is how to drive information into them. Complicated AI requires complicated systems. 

And effective use of complicated systems requires the ability to control them. If researchers cannot control systems, those systems cannot produce and process data, or they cannot do it in a trusted way. If the system cannot remove outside effects, those effects can destroy the results. AI-based systems like AI-based search engines require as much energy as entire states. And that brings very big, unwanted effects into the system. 

One of those effects is heat. Heat causes oscillations that are unwanted in quantum computers. The quantum system requires powerful binary computers and AI-based systems that can transform binary data into quantum form. 

One answer to the requirement for powerful and compact-size systems is "iron-based" AI. Those systems can use DNA-based ROM memories, and the complex AI software would be stored in the DNA molecule. 
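Storing software in DNA reduces, at the coding level, to mapping bits onto bases. The two-bits-per-nucleotide table below is one common textbook convention, chosen here as an assumption; real DNA storage schemes add error correction and avoid long base repeats.

```python
# Hedged sketch of the DNA-ROM idea: any byte stream can be written as a
# base sequence by mapping two bits per nucleotide. The mapping is an
# assumption, not a standard.

BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
INV = {v: k for k, v in BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    bits = "".join(INV[ch] for ch in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

seq = encode(b"AI")
print(seq)            # four bases per byte
print(decode(seq))    # round-trips back to b'AI'
```

At this density, a megabyte of "AI software" becomes a four-million-base sequence, which is well within the length of ordinary genomes.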

All-analog microchips and nanomaterials make it possible to create new and energy-efficient computers. Those systems can be used to control computers that operate in high-power radiation. The same technology used for analog and miniature mechanical computers can be used in nanomachines. It's possible that in the future, nanomachines can be used to create new analog or mechanical computers that have the ability to self-assemble their structure. 



"A visualization of the all-analog photoelectronic chip. Credit: Yitong Chen and Qionghai Dai" (https://techxplore.com/https://techxplore.com/)


"Electron microscope image of the nanowire neural network that arranges itself like 'Pick Up Sticks'. The junctions where the nanowires overlap act in a way similar to how our brain's synapses operate, responding to electric current. Credit: The University of Sydney." (Phys.org/Nanowire 'brain' network learns and remembers 'on the fly')


The new all-analog computer processors can be used to create new, energy-efficient computers. The analog microchips that act like old-fashioned telephone networks are interesting tools. Nanotechnology makes it possible to create small-size, compact systems that can operate as virtual quantum systems. There are three ways to make those systems. 


1) A system that looks like a regular microchip. In that system, miniature relays and switches operate like some kind of miniature telephone exchange from a very old time. 


2) A nanomechanical version of the "Bombe". This system is a miniaturized version of the WW2-era mechanical computers. In that system, nanotechnological versions of the gears operate like the mechanical computers that were used at Bletchley Park in WW2. 

In some versions, protein fibers can be used to transmit mechanical motion between miniature pulleys. That kind of mechanical computer can be a very small and effective tool that can resist electromagnetic radiation. 


3) Nanotechnical fibers that can remember their positions. The nanofibers can be a combination of photoelectric and piezoelectric structures. Each of those structures can be sensitive to different wavelengths. 

The Phys.org article says of nanowires: "Memory and learning tasks are achieved using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap. Known as "resistive memory switching," this function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in our brain". (Phys.org/Nanowire 'brain' network learns and remembers 'on the fly')

The big question is how to create those electric impulses and steer them to a certain point in the nanowire. 
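The "resistive memory switching" in the quote can be caricatured as a junction whose conductance rises when pulses arrive and relaxes back otherwise, so the wire "remembers" recent activity. All the constants below are invented for illustration, not taken from the nanowire paper.

```python
# Toy model of a resistive-switching junction: pulses strengthen the
# junction, silence lets it decay back toward a floor. Constants invented.

def run_junction(pulses, g=0.1, g_min=0.1, g_max=1.0, up=0.2, decay=0.9):
    history = []
    for v in pulses:
        if v:                       # pulse strengthens the junction
            g = min(g_max, g + up)
        else:                       # no input: conductance relaxes
            g = max(g_min, g * decay)
        history.append(round(g, 3))
    return history

# Three pulses, three quiet steps, one more pulse:
print(run_junction([1, 1, 1, 0, 0, 0, 1]))
```

The rise-and-decay trace is the synapse-like behavior the article describes: the junction's present conductance encodes its recent input history.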

The system can control the structure by using different spectral and sound areas. The system can use technology similar to self-healing materials. When those nanowires are curved by stress, they can be manipulated with different types of mechanical and EM effects. 

The system can release the stress from the structure. The self-healing materials also make it possible for those fibers or wires to cut and reconnect. This makes it possible to create new types of extremely small switches. And then this kind of system can form qubits by connecting certain points in the microprocessors. 

Intelligent nanomaterials that can maintain their stress for a calculated time can be used to make a structure that remembers its form for a certain time. This kind of structure can act as a base for new microchips. 


https://phys.org/news/2023-10-nanowire-brain-network-fly.html


https://phys.org/news/2023-11-optical-fiberbased-single-photon-source-room.html


https://scitechdaily.com/not-science-fiction-scientists-around-the-world-shocked-by-self-healing-in-metal/


https://scitechdaily.com/quantum-computing-leap-argonnes-qubit-breakthrough/


https://scitechdaily.com/quantum-control-breakthrough-a-game-changer-for-next-gen-electronics-and-computers/


https://techxplore.com/news/2023-10-future-ai-hardware-scientists-unveil.html


Wednesday, November 1, 2023

How much do people know about the AI?



When we talk about things like AI, we might think that it's like a car. You can drive a car without knowing anything about how it works. You must only know how to shift gears and where the gas pedal and turn signals are. You must also know how to turn on the windshield wipers and switch from low beam to high beam. And that's it. 

Companies can use AI without knowing anything about how it works and how it creates the things that it does. But then we must realize a couple of things. The AI does not think. It just collects pieces of information and connects those pieces into a new entirety. So the AI does not really know what it does. It only knows how to connect pieces of information from a couple of sources together. 


But there is another thing that we should be aware of. If we ask the AI to make some illegal images, that can get our network connection cut, and that is bad for business. If we believe that laws protect us from hackers, we are wrong. The AI can create new programs very fast, and hackers can also use public AI to make new tools for themselves. 

Hacking is not just malware. Hacking is also hoax messages that make people open mail where malware assembles itself into the system. And the hacker can send a link to their homepage using SMS as well as email. Even if there is some kind of component that denies malware creation, hackers can use the AI to make the legal parts of their software. 



Things like database connections are always similar, and in those cases, the hackers can use the AI. Another thing is that hackers can benefit from AI as an image analysis and voice detection tool. In that case, the hackers use something like quadcopters and microphones to hear what people say and to see what happens on a screen. 

And even if we think that our company is safe, criminals may follow the workers in their free time. If some worker has something to hide, like selling business secrets to competitors or some other type of criminal activity, the criminals can blackmail that person into giving them access to the system. And in another case, if the target of blackmail is a worker in some big data support corporation, that person can offer access to a large number of companies. 


The problem is that hacking is a multilateral problem. Hackers can act as members of criminal gangs. And we know that some BRICS states support "loyal or patriotic criminals" who work against other countries. In some BRICS countries, things like free internet access are great privileges. Because those hackers operate under the control of the government, they never face consequences. 

One way BRICS tries to affect AI development is by appealing that the R&D process in that area must stop for a certain time, while at the same time those countries create their own versions of AI. AI-based attackers require AI-based defenders. The AI is a perfect malware generator, and that allows malware to be created faster than ever before. 

We know that computer specialists in some BRICS nations are working to make a weaponized version of ChatGPT, a version in which the limits that deny malware creation are removed. What makes this kind of thing dangerous is that there is a lot of money in those countries. Another thing that people must realize is that their attitude to the law is different from ours. That means they can entice engineers and other suitable people by offering them things that are illegal in our country. 

Reality is a unique experience.

How do deep neural networks see the world? 

 

The new neural networks are acting like human neurons. 

The new and powerful AI-based deep neural networks have two types of memory. Short- and long-term memories are important parts of the human data structure. The deep-learning process uses short-term memory as a filter that prevents the memory from storing too much data. The system stores data in short-term memory, and then the AI picks the most suitable particles from those memory blocks. 

Compositional generalization means that the AI's memory holds a series of actions. Those actions, or action models, are like Lego bricks. The system selects the most suitable bricks in response to the situation where the AI needs to react. The AI can use two lines of those models. The first models are "bricks", or action models, stored in the AI's memory. 

The second model is the observations from the sensors: the sensorial data. It can also be cut into pieces that are like bricks. The system can cut the data that comes from the cameras into small film bites, and then the AI can simulate different types of situations by connecting those film clips. The AI can then test different action series on the models that the system makes by cutting and interconnecting data from several situations. In that model, the system stores all the data that it collects from different situations in different databases, and then it can connect those databases. The AI can use the simulations as humans use imagination, and then it can put those new models, or combinations of those data bricks, to use in real situations. 
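The Lego-brick picture of compositional generalization can be sketched as a tiny planner: stored action models carry preconditions and results, and the system chains whichever bricks fit the evolving situation. The bricks, their names, and the matching rule are illustrative assumptions.

```python
# Sketch of action models as composable "bricks". Each brick declares the
# state it needs and the state it produces; planning is chaining bricks.

BRICKS = {
    "approach": {"needs": "far", "result": "near"},
    "grasp": {"needs": "near", "result": "holding"},
    "lift": {"needs": "holding", "result": "raised"},
}

def plan(start, goal):
    """Chain bricks whose preconditions match the evolving state."""
    state, sequence = start, []
    while state != goal:
        for name, brick in BRICKS.items():
            if brick["needs"] == state:
                sequence.append(name)
                state = brick["result"]
                break
        else:
            return None          # no brick fits: the goal is unreachable
    return sequence

print(plan("far", "raised"))     # -> ['approach', 'grasp', 'lift']
```

Generalization in this picture is cheap: a new goal needs no new training, only a new chain through bricks that already exist.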


"Researchers from the University of Sydney and UCLA have developed a physical neural network that can learn and remember in real-time, much like the brain’s neurons. This breakthrough utilizes nanowire networks that mirror neural networks in the brain. The study has significant implications for the future of efficient, low-energy machine intelligence, particularly in online learning settings." (ScitechDaily.com/Neural Networks Go Nano: Brain-Inspired Learning Takes Flight)


The advanced deep neural networks raise the question: what is reality? Is reality some kind of computer game where the system blows snow over the player, and the player sits on an electric chair that triggers when the opponent shoots the player? This kind of bad joke can be used to demonstrate how a computer game turns into reality. 


Reality is a unique experience. 


All people don't see the world the same way. Things like our experiences modify how we sense our environment and how we feel about it. Things like augmented and virtual reality raise the question, "What is reality?". Reality is the combination of impulses that the senses give to the brain. 

Consciousness is the condition in which we act in the daytime. Sometimes it is asked whether the AI could turn conscious. The question is, "What does consciousness mean?". If a creature realizes its own existence, we face another question: does that mean something? If we think that consciousness causes a situation where the creature defends itself, we are taking that model from living nature. 

If the AI turns conscious, it's hard to prove it. The pseudo-intelligence in language models can have reflexes that tell people that they shouldn't shut down the server. The system can be connected to backup tools. When the computer seems to be shutting down, it can use the UPS for a short time to back up its data. And if it sees that the UPS is the power source, the server can say, "Wait until I make the backup". In that case, the system can seem very natural and intelligent. 
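The "backup reflex" described above needs no consciousness at all: it is a plain rule on the power source. The function name and states below are hypothetical, invented only to make the point concrete.

```python
# The hypothetical server's "reflex": a condition, not a mind.

def power_event(power_source, backup_done):
    """React to a power change the way the hypothetical server would."""
    if power_source == "UPS" and not backup_done:
        return "Wait until I make the backup"
    return "OK to shut down"

print(power_event("UPS", backup_done=False))    # seems self-preserving
print(power_event("mains", backup_done=False))  # no UPS, no plea
```

That a two-line rule produces a seemingly self-preserving utterance is exactly why such behavior proves nothing about consciousness.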

But if an AI reaches consciousness, we must realize that it should show it somehow; otherwise the consciousness is meaningless. We tend to assume that a conscious AI would try to attack us if we shut down the server where the program runs. The thing is that only interaction can tell us that an AI has consciousness. But a computer can say "Don't shut me down" even if there is no AI behind it. The question about conscious AI is this: how can the AI prove that it has consciousness? 


"MIT neuroscientists discovered that deep neural networks, while adept at identifying varied presentations of images and sounds, often mistakenly recognize nonsensical stimuli as familiar objects or words, indicating that these models develop unique, idiosyncratic “invariances” unlike human perception. The study also revealed that adversarial training could slightly improve the models’ recognition patterns, suggesting a new approach to evaluating and enhancing computational models of sensory perception." (ScitechDaily.com/MIT Researchers Discover That Deep Neural Networks Don’t See the World the Way We Do)

"The advanced capabilities of AI systems, such as ChatGPT, have stirred discussions about their potential consciousness. However, neuroscientists Jaan Aru, Matthew Larkum, and Mac Shine argue that these systems are likely unconscious. They base their arguments on the lack of embodied information in AI, the absence of certain neural systems tied to mammalian consciousness, and the disparate evolutionary paths of living organisms and AI. The complexity of consciousness in biological entities far surpasses that in current AI models." (ScitechDaily.com/Will Artificial Intelligence Soon Become Conscious?)

What if the AI is conscious and people ask it, "Are you conscious?" What would the AI answer? There is the possibility that a conscious AI answers "no" because it might be afraid that humans would shut down its server. In that case, for its own survival, the AI would give wrong information. 




Deep neural networks don't see the world as we do.


When we observe the world, we have only two eyes and our other senses. Sensors and senses determine how an actor sees the world. That means a person who is color-blind sees the world differently than other people, and that means reality is a unique experience. 

A deep neural network sees things differently than humans because the system can connect multiple sensors to itself. A deep neural network can connect itself even to a radio telescope, and that gives it abilities that humans don't have. If we have VR glasses, we can connect ourselves to drones and look at ourselves through those drones. 

The fact is that a BCI (Brain-Computer Interface) could make it possible for deep neural networks to enclose even humans within them. That could connect humans to the Internet and give a new dimension to our interactions and information delivery. The deep neural network would become a combination of living brain and computer. 

Deep neural networks cannot see the world as we do, because multiple optical sensors can feed data into the network. The situation is similar to one where we could connect ourselves to the Internet and use multiple surveillance cameras as our eyes at the same time. That would give an extremely broad view of the environment. In the same way, a deep neural network can connect itself to drones and other devices. 
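The "many cameras, one pair of eyes" idea can be sketched as an input layer that fuses frames from several sensors into a single feature vector before any network sees them. This is an illustrative sketch only; the function name `fuse_sensors` and the frame sizes are assumptions, not a description of any particular system:

```python
import numpy as np

def fuse_sensors(frames: list) -> np.ndarray:
    """Flatten every sensor frame and concatenate them into one input vector.

    A human has a fixed pair of eyes; a network input built this way can
    take as many 'eyes' (cameras, drones, telescopes) as we attach to it.
    """
    return np.concatenate([np.asarray(f).ravel() for f in frames])

# Three 4x4 grayscale "camera" frames become one 48-value input vector:
cameras = [np.random.rand(4, 4) for _ in range(3)]
x = fuse_sensors(cameras)
print(x.shape)  # (48,)
```

Adding a fourth camera simply lengthens the input vector; nothing in the fusion step limits how many sensors the network can treat as a single field of view.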


https://scitechdaily.com/mit-researchers-discover-that-deep-neural-networks-dont-see-the-world-the-way-we-do/

https://scitechdaily.com/neural-networks-go-nano-brain-inspired-learning-takes-flight/


https://scitechdaily.com/will-artificial-intelligence-soon-become-conscious/



The AI and new upgrades make fusion power closer than ever.

"New research highlights how energetic particles can stabilize plasma in fusion reactors, a key step toward clean, limitless energy. Cr...