Sunday, April 30, 2023

GUT (Grand Unified Theory) and the modified Big Bang theory.

The model that connects bosons with GUT (Grand Unified Theory) and all fundamental forces might look like this. 



Things like crossing gravitational fields can form material. But there must be some kind of source for those gravitational waves. When a force travels through the universe, it requires a force carrier, like a boson, that transfers the force. And then there must be some kind of force that the boson carries. 

Without a force that touches the boson, it cannot carry force. When a boson travels through the quantum fields, those fields touch it. But if the energy level of those fields is lower than the boson's energy level, the boson releases energy. 

Then the boson cannot transport energy or force. In some models, the size of the carrier boson determines the force that it carries. In the field model, the boson travels through the quantum field (or Higgs field), and that field touches it. The sizes of those bosons, or force-carrier particles, are different, and when they travel through the universe, the quantum field touches them like some kind of stamp or tape. 

And the size of that tape determines which of the four interactions the particle transmits. So when a boson forms in the middle of an atom, it's like an energy whirl that is left over from the standing wave that forms between the three quarks. 

Then that boson or gluon starts to travel out from that power field, and its size grows. Whenever that boson crosses the quantum field, its size turns bigger and it transmits different forces. In these theories, the four fundamental interactions are the same force. All of those forces are wave movements, but they have different wavelengths. 

Protons are more complicated than neutrons. And some bosons may form between those smaller particles. But the main particles in the proton are two up quarks and one down quark. 




Fundamental interactions: (Wikipedia/Fundamental interaction)



Standard model: 


It's time for a new Big Bang Theory. 


One galaxy doesn't make the universe, and it doesn't break any theory. When we talk about redshift measurements, the virtual redshift of an object may be stronger than usual. In those models, there could be a supermassive black hole or a zombie galaxy, full of black holes, in line with the visible galaxy but behind it. That means the pulling effect of those things might be stronger than expected. And that stretches the wave movement. 

It's time for a new Big Bang theory, which we can call the modified Big Bang theory. The biggest weakness of the Big Bang theory is that it cannot explain where the material came from. Material can turn into wave movement and vice versa. But there must be some kind of wave movement that formed the material. Of course, material can come out of a singularity, but the problem is where that singularity came from. 

The Big Bang is over, or at least that theory requires adjustment, but we can still talk about the "Bang" as the beginning of our universe. But the "Bang" was not as big and unique as we might want to believe. The problem with the original Big Bang theory is that it cannot explain where the material came from. Material is one version of energy, or the other way around: energy is one version of material. So what is energy? 

It's wave movement, or movement where small strings are moving. The Schwinger effect can transform energy, or wave movement, into particles and particles into energy, or wave movement. But the main problem with the original Big Bang theory is that all the models we know require some kind of energy or material field before the Big Bang. 

The oversimplified model of the Big Bang theory is that there was some kind of "singularity" that exploded in a total vacuum. That is an excellent model, except that it requires that this material comes from somewhere. A good explanation for that is the Phoenix universe. In that model, the fate of the past universe was the Big Crunch, where all material dropped into a giant black hole. 

And then, when the black hole pulls the final quantum-field remnants inside it, the quantum (or Higgs) field traveling into the black hole turns too weak to press the black hole into its form. Then that black hole starts vaporizing, and the impacting waves form the new universe. In some versions of that theory, multiple black holes remained in that ancient universe. Then those black holes started to explode, and the impacting waves formed the universe. 



But then, where did that past universe come from? 


There are models where an energy beam dropped from another dimension. But the weakness is that the energy must interact with some kind of power field. And that requires the existence of the quantum field before that energy beam came into the third dimension. And that energy field cannot form from nothingness. 

Total emptiness cannot form material or anything else. So the theory called the multiverse is one of the simplest ways to explain things like dark energy, dark matter, and the beginning of the universe. In oversimplified models, dark energy is energy whose origin is outside our universe. 

The thing that makes multiverse theory interesting and funny is that maybe we can never prove it. Multiverse theory means that there are other universes, and we are living in one of them. That means many other universes can exist, but they can be different. 

Antimatter can form those other universes. There may be antimatter stars, antimatter solar systems, or even antimatter galaxies in our universe, or their age can be different. And the size of elementary particles in those other universes might be different than in our universe. If another universe is very old, its energy level is lower than in our universe. And that causes a situation where light from our universe pushes light from another universe away. 

So in those models, we cannot see those other universes. In that model, the explanation for dark energy and dark matter is that they come from another universe. 


https://www.forbes.com/sites/startswithabang/2019/03/15/this-is-why-the-multiverse-must-exist/?sh=5a0c613a6d08

https://home.cern/science/physics/standard-model

https://en.wikipedia.org/wiki/Fundamental_interaction

https://en.wikipedia.org/wiki/Grand_Unified_Theory

https://en.wikipedia.org/wiki/Multiverse

https://en.wikipedia.org/wiki/Redshift

https://en.wikipedia.org/wiki/Standard_Model


See also: 


Dark matter

Dark energy


https://visionsoftheaiandfuture.blogspot.com/2023/04/gut-grand-unified-theory-theory-and.html


Saturday, April 29, 2023

The problem with AI is that it's so black and white. It uses either-or type solutions.

The difference between AI and the human brain is that AI always works on an all-or-nothing principle. For binary computers, everything is black and white. An AI-controlled binary system will not stop trying some solution, even if it sees that thing as impossible. But the problem is that the AI does only the things that are programmed into it. 

To see that something is impossible, the AI requires an algorithm that allows it to see that it cannot do something. Without those parameters, a pocket-size robot will try to move a car until its battery is empty, if it gets that order. The AI requires a stop code that activates if a certain number of attempts doesn't give a suitable solution. 
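A minimal sketch of that kind of stop code, in Python. The attempt limit, the return messages, and the `try_solution` callback are illustrative assumptions, not part of any real robot API:

```python
def attempt_task(try_solution, max_attempts=10):
    """Keep trying a solution, but give up after a fixed number of failures.

    try_solution: a callable that returns True on success, False on failure.
    max_attempts: the "stop code" threshold - without it, the robot would
    retry forever, until its battery is empty.
    """
    for attempt in range(1, max_attempts + 1):
        if try_solution():
            return f"success on attempt {attempt}"
    return "impossible: stop code triggered"

# A toy task that never succeeds, like a pocket robot pushing a car:
print(attempt_task(lambda: False))  # -> impossible: stop code triggered
print(attempt_task(lambda: True))   # -> success on attempt 1
```

The point is only that "impossible" is not something the binary system concludes; it is a counter threshold that a programmer must put there.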

The main problem with binary systems is that they require two other systems to make fuzzy or uncertain solutions. A binary system can handle only one object at a time. And that means the binary system requires another system that makes a deeper analysis of the situation it detects. In that model, the first reaction that the AI makes is a reflex. Then the system reacts to the situation by selecting a certain database. 

So when the AI sees something, it always reacts to that thing with all its power. The human brain analyzes the situation and then makes decisions. This is one of the biggest differences between AI and the human brain. If an AI-controlled robot gets the order to transfer something, the AI will move the thing, but when it presses its fingers, it will do that with all its power. There might be levels for how strongly the AI must press its fingers. 



But all those levels, like using 5, 6, 7... kilograms of force, are independent solutions. There could be a main database with orders on how to react if the hand cannot move the object: if the hand cannot move an object using 5 kg of pressure, it selects another strength. So the system requires an algorithm that makes it touch the object with the necessary force. The forces that robots use are programmed into certain tables in databases. 
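A hedged sketch of that force-table escalation in Python. The force levels and the success test are made-up assumptions standing in for real actuator feedback:

```python
# Grip forces are stored in a table (the "database"), and the controller
# escalates to the next level only when the current force fails.
FORCE_TABLE_KG = [5, 6, 7, 8, 9, 10]

def move_object(object_weight_kg):
    """Try each stored force level in order; return the first that works,
    or None if no stored level is enough (hand over to deeper analysis)."""
    for force in FORCE_TABLE_KG:
        # Stand-in for "the hand actually moved the object":
        if force >= object_weight_kg:
            return force
    return None

print(move_object(6.5))  # -> 7
print(move_object(50))   # -> None
```

Each table entry is an independent solution, exactly as the text says; the escalation loop is the extra algorithm that ties them together.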

When a robot must turn the steering wheel, it will turn it exactly as far as it is told. The human brain uses fuzzy logic in that process. And that makes fuzzy logic more useful in everyday life than precise logic. But in computer memory, fuzzy logic is made by using multiple databases. The system follows how the car follows the road, and then it selects the database that best fits the situation.

So there is a limited number of degrees in how a robot turns the car. The steering system requires lots of databases so that it can turn the wheel the right way in the right direction. The linear computing model means that when the AI faces a situation where the coded orders match some database, the AI selects that solution. 

And that means the AI will not compare that possibility with other possibilities. In a linear model, there is only one possibility that the AI selects. The human brain uses a circular model, or a model of internal circles, where the brain compares the solution that it selects with other solutions. The outermost solution is the thing that an outside observer sees. But there are internal layers that the brain modifies. In that model, those circles choose information from each other. And that makes the human brain more versatile than computers. 


https://bigthink.com/neuropsych/not-how-your-brain-works/


https://www.chitkara.edu.in/blogs/what-is-artificial-intelligence-and-future-scope/

Friday, April 28, 2023

Augmented reality can give super senses and superpowers.


The reason why augmented reality is not very popular is that those systems are expensive. The second thing is that regular people don't have much use for them. But when an AR headset connects with things like drone swarms, it becomes possible to connect those drones straight to our senses. 

If the operator uses a quadcopter pair with a terahertz or X-ray scanner, that person can scan people or buildings very fast. If the system has a voice-command or BCI mode, the user can just give orders to the drones and point out the object they must scan. Then those drones can fly around opposite sides of the object and make an X-ray image of it. 

The user can mark the target by using a laser pointer, or the system can use a "retina mouse", where the person looks at a certain point and blinks an eye. The system follows a certain point on the retina and uses that image as a mouse. 


"An augmented reality headset combines computer vision and wireless perception to automatically locate a specific item that is hidden from view, perhaps inside a box or under a pile, and then guide the user to retrieve it. Image: Courtesy of the researchers, edited by MIT News" (BigThink.com/Augmented reality headset enables users to see hidden objects)


Researchers created the "retina mouse" system for handicapped people, and the famous physicist and ALS patient Stephen Hawking used it. But it can also be used for controlling AI-controlled drones. In that kind of system, the human tells the drone what it must do. So the strategic objective in the chain is given by the human, but all actions are series of movements that are stored in the AI's memory. 

In that system, the interface is connected to the helmet camera, and the person must just move the crosshair to the right point and then give orders to the drones. The BCI system's problem is that the system must remove the white noise. But those systems allow operators to control drones without using their hands. And actually, the operator can give the drones any orders that their equipment allows. 

The AR system can control even large quadcopters that can lift even trucks. Two drones flying over the ground can pull a long radar wire between them. That gives them the ability to act as a radar-surveillance platform. And in that kind of system, the computers are outsourced to some other place, like truck-mounted computer centers. 

The thing is that AR systems with lightweight user interfaces are suitable for military missions. The system can interact with kamikaze or other types of drones. And the user can order the system to eliminate any target from the battlefield. 


https://bigthink.com/hard-science/augmented-reality-headset/

Boston Dynamics connected ChatGPT with its robot dog.


ChatGPT can make robot dogs speak. That can make the robot dog very funny, but ChatGPT can also make them more flexible than ever before. The robot dog can be used to search for all kinds of things, including dangerous materials. The robot dog can go places where no human can go. And maybe in the future, robot dogs will research the surfaces of other planets. 

If a robot dog uses nuclear batteries, its operating time is unlimited. An AI-controlled robot dog can also have a manipulator arm that allows it to connect itself to any outlet. But robot dogs can also use fuel cells that burn alcohol or hydrogen. So the owner of that kind of system can feed it with alcohol every morning.

Robot animals, like robot wolves, can be used to observe wild animals. They can operate undercover. And actually, those things might even look like reindeer. Cyborg rats can collect information about non-social behavior. And robot animals can carry laser microphones and other sensors that collect information from their environment. Only imagination is the limit in that kind of technology. 





They can be used for security and reconnaissance missions in the civilian and military worlds. And those robots can carry anti-tank missiles on their backs. Robot dogs can look exactly like real dogs. And their senses can be connected to augmented-reality systems where operators can see and hear everything that the robot dog sees and hears. 

And ChatGPT can make that kind of system very flexible. ChatGPT can make the robot dog give spoken reports. But the thing is that ChatGPT can make the system even more flexible than that. The ability to generate programs by using AI will revolutionize machine learning. That ability makes it possible to create new modules for those systems. 

Robot dogs can connect to multiple systems. A robot dog can operate as part of an entirety along with drone swarms and satellites. And cloud-based architecture makes it possible to run very complicated computer code on those systems. Decentralization means that multiple different types of systems can operate as one entirety. 


Tuesday, April 25, 2023

Can AI predict things like diseases and even wars?

 

Many things that happen in life follow a certain path. That means recognizing that path can help people avoid things like Alzheimer's. And it can also improve the quality of life of people who are living with that disease. 

AI can predict many things that happen in certain ways. The AI can control information and make models of things that follow certain paths. So the AI can connect information about tissue types and lifestyle, and then connect that data with other people with similar tissue types. 

We know how Alzheimer's advances, and AI can predict and warn a person if that person has a risk of getting that disease. In the same way, AI can connect things like lifestyle and other variables to predict how some disease advances. But those diseases must have a certain development curve. If AI recognizes that curve, it can find similarities in other places. 

In the same way, the AI can collect variables about the history of car accidents. That can uncover many things about the flaws of automobiles. The variables that the system can collect are the lighting conditions, speed, and many other things. That can help find out if there is something in common that unites certain accidents. 





The idea of the AI that predicts the future is this: the first two cases offer the matrix, and when the third case comes under observation, that makes the pattern a rule. So if a certain number of similar cases follow certain curves, we can predict that the rest of the similar cases follow the same advancing routes. 
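A hedged sketch of that curve-matching idea: past cases that followed known development curves form the reference matrix, and a new, partially observed case is matched to the nearest curve, whose remainder becomes the prediction. The curves, names, and distance measure are all illustrative assumptions:

```python
def distance(a, b):
    """Mean squared distance between two equal-length curves."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def predict(partial_case, reference_curves):
    """Match the observed prefix against each reference curve and
    return the best-matching curve's name and its remaining values."""
    n = len(partial_case)
    name, curve = min(reference_curves.items(),
                      key=lambda item: distance(partial_case, item[1][:n]))
    return name, curve[n:]

# Made-up development curves, e.g. a score tracked over six checkups:
references = {
    "slow decline": [100, 98, 96, 94, 92, 90],
    "fast decline": [100, 90, 80, 70, 60, 50],
}
print(predict([100, 97, 95], references))  # -> ('slow decline', [94, 92, 90])
```

Real disease-progression models are far richer than this, but the mechanism is the same: recognize the curve, then read off its continuation.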

"Alzheimer’s disease, a progressive neurological disorder, is the most common form of dementia affecting millions of people worldwide. This devastating condition erodes memory, cognitive abilities, and eventually the ability to perform daily tasks". (ScitechDaily.com/Genetic Risk Outweighs Age: Machine Learning Models Rank Predictive Risks for Alzheimer’s Disease)

"Although primarily associated with aging, Alzheimer’s has a complex interplay of genetic, environmental, and lifestyle factors". (ScitechDaily.com/Genetic Risk Outweighs Age: Machine Learning Models Rank Predictive Risks for Alzheimer’s Disease)

As with predictions of Alzheimer's and other diseases, social situations can follow certain routes. And that can make it possible to predict people's behavior in the future. 

In Isaac Asimov's science-fiction novel "Foundation", people can predict the future by using mathematical formulas called "psychohistory". The idea in that system was that it uses formulas like Ludwig Boltzmann's gas formulas, or the so-called NTP (Normal Temperature and Pressure) calculations. 



We could say that psychohistory is a fictional thing that connects sociology, history, and mathematics. Wikipedia describes it like this: "Psychohistory is a fictional science in Isaac Asimov's Foundation universe which combines history, sociology, and mathematical statistics to make general predictions, about the future, the behavior of very large groups of people, such as the Galactic Empire. It was first introduced in the four short stories (1942–1944) which would later be collected as the 1951 novel Foundation". (Wikipedia/Psychohistory (fictional))

***************************************************

In real psychohistory, the observers also use certain variables. 

The next part is a quotation from the article "Psychohistory" on Wikipedia. The AI can also make decisions or predictions by using those variables. 


There are three interrelated areas of psycho-historical study.


1. The history of childhood – which looks at such questions as:

How have children been raised throughout history

How has the family been constituted

How and why have practices changed over time

The changing place and value of children in society over time

How and why our views of child abuse and neglect have changed


2. Psychobiography – which seeks to understand individual historical people and their motivations in history.


3. Group psychohistory – which seeks to understand the motivations of large groups, including nations, in history and current affairs. In doing so, psychohistory advances the use of group-fantasy analysis of political speeches, political cartoons, and media headlines since the loaded terms, metaphors and repetitive words therein offer clues to unconscious thinking and behaviors.


(Wikipedia/Psychohistory)

https://en.wikipedia.org/wiki/Psychohistory


***************************************************

When Asimov introduced his psychohistory, there were not even computers. Things like supercomputers, ChatGPT, and quantum computers were utopia. The fact is that we might re-estimate many things, because today we have highly advanced AI and quantum computers. And those systems can bring this thing closer to reality than we even imagine. Quantum systems can handle much larger data masses than ever before. 

The idea is that the system can calculate the movements of a large group of particles. It's easier to calculate the movements of a large-scale galaxy group than to calculate one gas atom's place in a room, because there are so many variables. 

When the AI predicts the future, it collects data from the past. Then it finds similarities in the behavior of people. The idea is that all social cases follow certain routes or curves. And the AI can find similarities in cases from the past and then compare those cases with the things that are happening just now. 

Predicting the future by using AI is a very interesting idea. When the AI can connect the matrix with the data that it collects, even the behavior of social situations can be calculated by using the AI. 


https://scitechdaily.com/genetic-risk-outweighs-age-machine-learning-models-rank-predictive-risks-for-alzheimers-disease/


https://en.wikipedia.org/wiki/Ludwig_Boltzmann


https://en.wikipedia.org/wiki/Psychohistory


https://en.wikipedia.org/wiki/Psychohistory_(fictional)

New nanonetwork can operate like human brains.


Theoretically, creating an artificial brain is easier than people imagine. The only thing needed is a network of wires where springs make certain connections when those connections are needed. The system requires iron wires, and there must be springs at the end of each iron wire. Those springs close circuits anytime that connection is required. 

The system can use a copper cable or, as in the example case, a silver cable that is connected to a brush-shaped structure. At the end of each wire in the brush is an independently operating spring that connects two wires when that connection is needed. In some versions, those wires are connected with miniature ion or cathode cannons that send an electron burst to the receiver. Those springs can be those electron cannons. 



"Scientists have demonstrated that nanowire networks can exhibit short- and long-term memory, similar to the human brain. These networks, comprised of highly conductive silver wires covered in plastic and arranged in a mesh-like pattern, mimic the physical structure of the human brain. The team successfully tested the nanowire network’s memory capabilities using a task similar to human psychology experiments". (ScitechDaily.com/Neural Nanotechnology: Nanowire Networks Learn and Remember Like a Human Brain)

"This breakthrough in nanotechnology suggests that non-biological hardware systems could potentially replicate brain-like learning and memory, and has numerous real-world applications, such as improving robotics and sensor devices in unpredictable environments." (ScitechDaily.com/Neural Nanotechnology: Nanowire Networks Learn and Remember Like a Human Brain)




Image 2) Axons look like flowers. Every "flower" in the axon is a receiver or transmitter that transmits neurotransmitters. I didn't find an electron-microscope image of the axon, so I must use an image of a flower to demonstrate how the neuron emulates the qubit. When one receiver-transmitter pair makes contact, the neuron that acts like a qubit gets the value 2. The reason for that is that the qubit's first two values, 0 and 1, are reserved to tell the system if it is on or off. And the calculation of those states always begins from zero. 


In that model, electrons act as neurotransmitters. And the brush-shaped structures are like qubits. 


When we look carefully at the synapses of the axons, we can see a group of structures that look like flower-shaped transmitter-receivers. If we think that the neurons act like qubits, the number of those "flowers" that make the connections can determine the state of the qubit in those cells. 

If one of them makes a connection, the value of the qubit means it's on. And if three make the connection with another neuron, the qubit has two free states. The linear information model means that the first thing that travels between qubits adjusts the receiver to value one. Then the system sends the message to the receiver. 

The new nanonetwork can learn and remember things like the human brain. This nanonetwork has two types of memory: short- and long-term. And that nanowire construction can be the next-generation tool for making learning systems. In those kinds of neural networks, there is no need for microchips; the nanowire mesh itself emulates neurons. 

But the structure, whose base is in programmable energy barriers and triggers that release energy when the right signal comes, makes it possible to create new nano-scale, or why not larger-scale, robots that learn their missions like humans. The idea of the structure is taken from the ultra-tunable bistable structures that Chinese developers created. The bistable structures can act like neurons. And that makes it possible to create nanostructures that act like neurons. 


https://scitechdaily.com/shape-shifting-structures-the-future-of-robotic-innovation/


https://scitechdaily.com/neural-nanotechnology-nanowire-networks-learn-and-remember-like-a-human-brain/


Monday, April 24, 2023

AI is smart. But can it think?

AI seems to be very smart. It knows the answer to almost every question, but the way the AI does that is different from humans. That means the AI cannot estimate information the same way as humans. The AI uses database connections to create answers. There are two ways to do that. 

1) The AI can use internal fixed databases. In that model, all the data that the AI shares is stored on the hard disks of the computer that runs the AI. In that case, the AI uses only the data in its databases. Making that kind of system output garbage is easy: the hacker must just change a table, and the system will give false answers. Or if nobody included some kind of data, like accounting terms, in the system, the system can't give the right answer. 

2) Or it can use the open internet. In that model, the AI uses a web browser so that it can get information from homepages. Then it simply connects, as an example, information that it gets from the first four homepages. The simplest way is to connect paragraphs from those homepages by choosing the first paragraph from the first homepage, the second paragraph from the second homepage, etc. 
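The paragraph-combining scheme above can be sketched in a few lines of Python. The "pages" here are plain lists of paragraph strings, an illustrative stand-in for fetched homepages, not a real scraping API:

```python
def combine_answer(pages):
    """Pick paragraph i from page i (wrapping around if a page is short),
    then join the picks into one combined answer."""
    answer = []
    for i, page in enumerate(pages):
        if page:  # skip empty pages entirely
            answer.append(page[i % len(page)])
    return "\n\n".join(answer)

pages = [
    ["A1 intro", "A2 details"],
    ["B1 intro", "B2 details"],
    ["C1 intro", "C2 details", "C3 summary"],
]
print(combine_answer(pages))  # A1 intro / B2 details / C3 summary
```

This also makes the weakness visible: the result is a patchwork of sentences whose meaning the system never checks.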



The AI can also have parameters so that it searches for certain keywords on those homepages. In that model, the AI counts how many times the keyword repeats on the homepage. That kind of AI might have forms where users can put the keywords. 

This makes it quite easy to cheat the AI. The cheater can make a homepage that involves some keyword thousands of times, and nothing else. If the AI simply counts those words, it can use that kind of homepage, and the result is only one word. Of course, the user of the AI must search for this type of information. But it's theoretically very easy to cheat the AI by creating a homepage that involves, as an example, only the word "fish". This homepage is useless if a person asks for information about things like cars. 
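A minimal sketch of why raw keyword counting is so easy to cheat, and one simple countermeasure (capping the per-page count). The pages and the cap value are illustrative assumptions, not real search-engine behavior:

```python
def raw_score(page_text, keyword):
    """Naive relevance: count how often the keyword appears on the page."""
    return page_text.lower().split().count(keyword)

def capped_score(page_text, keyword, cap=3):
    """Same count, but repeats beyond the cap earn nothing extra."""
    return min(raw_score(page_text, keyword), cap)

honest_page = "fish species live in lakes and rivers and fish migrate"
spam_page = " ".join(["fish"] * 1000)  # the cheater's one-word homepage

print(raw_score(spam_page, "fish"), raw_score(honest_page, "fish"))        # 1000 2
print(capped_score(spam_page, "fish"), capped_score(honest_page, "fish"))  # 3 2
```

With the raw score, the spam page wins by a factor of 500; with the cap, the advantage of pure repetition nearly disappears.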


The AI requires an error-detection algorithm. 


But the AI doesn't know what the data that it collects means. This is the problem with the AI. The AI must control the information that it gets. And there must be methods that eliminate the possibility that the AI treats empty pages as confirmed sources and uses them for creating the answer. One solution that eliminates joke pages is a dictionary stored in the AI's memory. The AI checks the source by using the dictionary, which confirms that there is real text on that page. If the page seems to be empty, the algorithm can start to search for the next page. 
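A hedged sketch of that dictionary check: before a page is used as a source, count how many of its words appear in the stored dictionary. The tiny word list and the threshold are illustrative assumptions:

```python
# A toy stand-in for the dictionary stored in the AI's memory:
DICTIONARY = {"the", "car", "engine", "has", "four", "cylinders",
              "fish", "water", "a", "is"}

def looks_like_real_text(page_text, min_known_ratio=0.5):
    """Accept the page only if enough of its words are real words."""
    words = page_text.lower().split()
    if not words:
        return False  # an empty page is rejected outright
    known = sum(1 for w in words if w in DICTIONARY)
    return known / len(words) >= min_known_ratio

print(looks_like_real_text("The car engine has four cylinders"))  # True
print(looks_like_real_text("xqzj vploo rrrk"))                    # False
print(looks_like_real_text(""))                                   # False
```

A page that fails the check is skipped, and the algorithm moves on to the next candidate source, just as described above.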

The AI recognizes databases and homepages. But it does not know what is included in those systems. The thing is that the AI knows that there are certain keywords, but it doesn't recognize words: to the AI, all characters on the keyboard are just letters. So the algorithm can control the possibility that there is some kind of cheating page by searching for other words too. 


The ultra-tunable bistable structures are suitable for next-generation robots.

The bistable structure is the thing that makes a permanent shampoo curl hair. The idea of bistable materials is actually taken from that. A bistable material has two positions, or two structures. When the bistable structure gets a signal, it turns from position A to position B. In that movement, the structure transfers kinetic energy to layer B, which makes the material return to its original position when it gets a counter-signal. 

The ultra-tunable bistable structures are tomorrow's materials. These futuristic innovations can create flexible structures that change their shape at the right moment. The system uses programmable energy barriers that make it possible to change its shape when it faces the right energy load. So when something like a fly touches that surface, the structure changes its shape. The bistable structures can be used in nanotechnology. 



"Researchers in China have developed an ultra-tunable bistable structure with programmable energy barriers and trigger forces. The structures can be customized in various geometric configurations, dimensions, materials, and actuation methods for use in robotic applications".  (ScitechDaily.com/Shape-Shifting Structures: The Future of Robotic Innovation)

"By reshaping the structure from the metastable state to any intermediate state, the energy barrier decreases, enabling smaller external stimulations to trigger fast snap-through". (ScitechDaily.com/Shape-Shifting Structures: The Future of Robotic Innovation)

"The team demonstrated the tunability of the structure with various prototypes, including a robotic flytrap, grippers, a jumper, a swimmer, a thermal switch, and a sorting system. This work could lead to advances in robotics, biomedical engineering, architecture, and kinetic art. (Abstract fractal art representing shape-shifting structures.)" (ScitechDaily.com/Shape-Shifting Structures: The Future of Robotic Innovation)




"Schematic of the proposed ultra-tunable bistable structure. Credit: LI Yingtian)"(ScitechDaily.com/Shape-Shifting Structures: The Future of Robotic Innovation)


These kinds of structures can be used for highly advanced loudspeakers. When the right sound impacts the structure, it starts to reshape itself. And that allows it to adjust to precisely the right sound. 

And that ability makes this new structure even more flexible than the Chinese researchers who created that material can even imagine. The ultra-tunable material can be used to cover aircraft or submarines. 

And when that material gets the right signal, it will raise its flaps. So in that case, the system just forms small grooves that make air or water travel more easily across the shell of the ship or aircraft. The signal opens those channels, and when the system gets a counter-signal, it closes those grooves by closing those flaps. 

In the same way, fullerene balls or nanotubes can improve the hydro- or aerodynamics of a craft or boat. The nanotubes or fullerene nanoballs can cover submarines or any other vehicles. If those molecular-size rolls or balls can rotate freely, that can make the surface slippery, and it decreases friction. 



https://scitechdaily.com/shape-shifting-structures-the-future-of-robotic-innovation/

If there is no information, the AI is helpless.

Why does AI have problems with speech recognition? The major problem with AI is that, normally, people do not speak the literary language. That means the AI must translate dialects to the literary language and then find the database that is connected with those words. The problem with AI is that it can use fuzzy logic, but in its internal functions, the AI uses exact logic. 

The easiest way to make a fuzzy-logic system is to make many routes between things like speech-recognition software and certain databases. In that model, the system stores many dialect versions of one literary word, and all of those options are connected to the word that activates the database. This kind of structure is technically very easy to make. 

But it's hard to create a connection for every single dialect word so that the literary word connected with those dialect words activates the database. To be smooth and effective enough that a person can speak freely, that kind of solution requires very large databases. 
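The "many routes" model above can be sketched in a few lines of Python. Everything here is an invented illustration: the dialect variants, the database entry, and the function names are hypothetical placeholders, not part of any real speech-recognition system.

```python
# Minimal sketch of the "many routes" model: several dialect variants of
# a word all map to one literary-language key, and only that key
# activates the database entry. All names here are hypothetical.

# Dialect variants of the Finnish word "minä" ("I"), mapped to the
# literary form that the database actually recognizes.
DIALECT_TO_STANDARD = {
    "mie": "minä",
    "mää": "minä",
    "mä": "minä",
    "minä": "minä",
}

DATABASE = {
    "minä": "first-person pronoun entry",
}

def lookup(word):
    """Route a (possibly dialectal) word to its database entry."""
    standard = DIALECT_TO_STANDARD.get(word.lower())
    if standard is None:
        return None          # unknown word: the AI cannot connect it
    return DATABASE.get(standard)

print(lookup("mie"))   # the dialect route reaches the same entry
print(lookup("minä"))
```

In a real system, the dialect table would hold millions of variants, which is exactly why the text notes that such solutions require very large databases.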



"In a test of ChatGPT’s ability to handle accounting assessments it still couldn’t compete with the student’s level. Credit: Nate Edwards/BYU Photo" (ScitechDaily/Humans Reign Supreme: ChatGPT Falls Short on Accounting Exams)

The problem is that the AI takes orders by using a speech-to-text application first, and then that application drives the transcribed words to the database. The text that the application generates must match precisely the name that the system uses to recognize the right database. If the system cannot recognize the word, it cannot connect it to the database. 

Why does ChatGPT fail accounting exams? The thing is that many people work as accountants, but the terms of that work are not often searched on the Internet. When the AI takes an exam, it requires data to answer the questions. The data that ChatGPT uses must be found on the Internet, or it must be programmed into a fixed database that ChatGPT uses internally. 

The thing is that people do not very often search for the terms that accountants use. People search more often for things like "What is the biggest lake in Africa?" than for terms connected with debit-credit accounting. That is a weakness of AI. 

It's possible that the people who make fixed databases do not even include topics like debit-credit accounting. The AI is always very clever when it computes common things, but when it must search for something unusual, it gets into trouble. If the AI uses a net search, there must be some kind of variables that the system uses, and the AI doesn't know in advance what kind of data is on the web pages that it uses. 

The AI can recognize the similarity between the words in the search parameters and the words on the web pages, but the AI doesn't know what those words mean. And in that case, there is the possibility that the AI builds its answer from the homepages of some accounting offices. 
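That kind of meaning-free similarity can be illustrated with a toy word-overlap score (Jaccard similarity over word sets). The query and the "web pages" below are invented examples; no real search engine works this simply.

```python
# Toy illustration of scoring word overlap between a query and a page
# without knowing what any word means. The pages are invented examples.

def jaccard(a, b):
    """Similarity = shared words / all words; no semantics involved."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

query = "debit credit accounting rules"
pages = {
    "accounting firm homepage": "our firm offers debit credit accounting services",
    "lake article": "the biggest lake in africa is lake victoria",
}

best = max(pages, key=lambda name: jaccard(query, pages[name]))
print(best)  # the accounting homepage wins on pure word overlap
```

The accounting homepage wins purely because it shares surface words with the query; the score says nothing about whether the page actually answers the question.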

Saturday, April 22, 2023

The AI-controlled robots are the next-generation organisms.

The AI can become as intelligent as humans. That is the warning that researchers and other people keep repeating. If we think that an AI-controlled robot is a dangerous tool, we must remember that natural organisms and AI-controlled robots can be dangerous to their environment even if they are not intelligent. 

All robots are dangerous when they malfunction. If a welder robot thinks the worker is the workpiece, takes the worker to the assembly line, and starts welding, it causes a very dangerous situation.  

In the case of AI, intelligence means that the system recognizes the situations that it faces. Then the system selects the action matrix that is suitable for the mission. 

Bacteria are not very intelligent organisms, but the fact is that bacteria can be dangerous because they create poisons that can damage the internal organs of humans and animals. 

The fact is that we can think of AI-controlled robots as organisms, just as the "natural organisms" are. And another thing is this: we can make many things more dangerous than they are if we want. By using genetic engineering, there is the possibility of creating xenomorph bacteria with two DNAs: one form would be a regular bacterium, and the other would be a neuron. Those kinds of bacteria could transform themselves into neurons, and maybe they could decode their DNA and become more intelligent than humans.  

We must not think that AI is a god. AI makes mistakes because it requires databases, and humans collected the data that those databases contain. 

A robot swarm can emulate the behavior of wolf packs. That means if one member of the drone swarm is under attack, other drones come to help it, the same way lions, wolves, and many other social animals protect each other. That is the reason for animals' social behavior. 


House of Stairs: M.C. Escher


Sometimes it is suggested that robots can have feelings. Robots can emulate feelings. That emulation happens like this: when the robot sees that a person cries, laughs, or seems happy or depressed, it reacts to that thing by using algorithms. There are certain descriptions of how people look and talk when they are in certain moods. And when the robot sees or hears those things, it reacts by searching the database that matches the image. When a robot comes to the workplace, it might say "hello" or "good morning" to people it sees for the first time. 

Then it might remember the face and not talk to those people again during that day. Robots might say "Nice to meet you" just like humans, but those reactions come from databases that control social behavior. If somebody yells at the robot, it could raise its middle finger. And if the robot works as undercover law enforcement, it might speak like some gang member. 

If a person says that a pet has died, the robot can say "I'm sorry", or whatever the programmer stored in those databases. But the fact is that those social actions are database connections, and the robot doesn't have feelings. 
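The database-driven emulation described above can be sketched like this. The cues, the phrases, and the one-greeting-per-day rule are invented placeholders that only illustrate the lookup mechanism:

```python
# Sketch of "feeling emulation" as pure database lookup: the robot
# matches an observed cue to a stored response. All cues and phrases
# here are invented placeholders.

SOCIAL_RESPONSES = {
    "crying": "I'm sorry.",
    "laughing": "That sounds fun!",
    "greeting_new_face": "Nice to meet you.",
}

seen_faces = set()

def react(person, cue):
    """Return a canned phrase; greet each face only once per day."""
    if cue == "greeting_new_face":
        if person in seen_faces:
            return None          # already greeted today: stay silent
        seen_faces.add(person)
    return SOCIAL_RESPONSES.get(cue)

print(react("Alice", "greeting_new_face"))  # "Nice to meet you."
print(react("Alice", "greeting_new_face"))  # None: face remembered
print(react("Bob", "crying"))               # "I'm sorry."
```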

But then we must understand one thing. AI can do many things, but is it intelligent anyway? Suppose the AI controls a factory and has sub-robots that can search for minerals by using laser spectroscopes, along with the right equipment, like centrifugal furnaces that can melt material and separate elements with a centrifugal sling, and a 3D-printing system that can make equipment from CAD images. Then the AI can seem to have multiple skills. But does the AI know what it does? 

When the AI makes something, it follows an algorithm. That means when its sensor sees something, it searches the database that holds the action connected with that thing. So when a mineral-searching robot sees iron in its spectroscope, that observation makes a connection to the database, where the orders are stored for what the robot must do when it sees iron ore. 
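That sensor-to-database loop can be reduced to a table lookup. The minerals and the stored orders below are hypothetical:

```python
# Sketch of the sensor-to-action loop: each spectroscope reading is only
# a key into a table of stored orders. Mineral names and actions are
# illustrative, not from any real robot.

ACTION_DATABASE = {
    "iron": "mark deposit and start drilling",
    "quartz": "log location and continue survey",
}

def on_sensor_reading(mineral):
    """The robot 'reacts' without understanding: pure table lookup."""
    return ACTION_DATABASE.get(mineral, "no stored order: do nothing")

print(on_sensor_reading("iron"))
print(on_sensor_reading("gold"))   # unknown reading: robot is helpless
```

The robot reacts correctly to iron without any understanding; an unknown reading simply finds no entry.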

The robot factory that commands robots acts like an ant society. The worker robots searching for raw materials in nature are a model of an artificial cell. Those kinds of systems could be used in interplanetary missions, where the robot workers make the bases ready for human colonists.

The researchers might copy the robots' behavioral models from animals. But robots are not animals. They can make many complicated series of movements, and every action that the robot makes is a movement. That means the robot reacts to something, but it doesn't know what it does. 

But the origin of those spectacular plans is the 3D-printing systems that can produce things like quadcopters and machine parts from trash. Those systems can be used in everyday work, and in some visions, when people buy cars, 3D-printing tools will make them in the dealer's garage. The military could also produce its equipment on the battlefield. 

And when we think of robots that protect and help each other, we can imagine combat robots. Those robots see other robots' IFF (Identification Friend or Foe) signals, and if some of those robots are in trouble, they call other robots to support them. This is one version of social behavior that is copied from nature to robot swarms. 
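A toy version of that IFF-based support call might look like this, assuming each drone exposes an identifier, a validated friend-or-foe flag, and a busy state (all invented names):

```python
# Toy sketch of mutual support in a swarm: a drone in trouble broadcasts
# a help call, and only friendly, idle drones respond. The IDs and the
# message format are invented.

from dataclasses import dataclass

@dataclass
class Drone:
    ident: str
    friendly: bool       # stands in for a validated IFF signal
    busy: bool = False

def call_for_help(caller, swarm):
    """Return the idle, friendly drones that answer the caller."""
    return [d.ident for d in swarm
            if d.friendly and not d.busy and d.ident != caller.ident]

swarm = [Drone("d1", True), Drone("d2", True, busy=True),
         Drone("d3", True), Drone("x9", False)]
print(call_for_help(swarm[0], swarm))  # ['d3']
```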

We must fear the misuse of AI and quantum technology.

Elon Musk says that he wishes to create an AI that is maximally truth-seeking. The fact is that there is no absolute truth. The AI uses databases where somebody has collected information, and nobody knows who collected the information from which data sources. The collector can be an AI, but those databases' sources can still contain false information. 

People who write that data can make mistakes. Even if the AI creates an astronomical database, the AI must find the coordinates of a certain star, and if that star is not visible to humans, there can be mistakes in its coordinates. So it is not the machines themselves that make the mistakes. 

Human errors cause all the mistakes that machines make. That means there could be many fatal errors in AI-based quantum systems. The particular problem with those systems is error management. 

Error detection can be done by another AI-based quantum system. The problem is that there is no experience of how things like gravitational waves or gamma-ray bursts can affect quantum entanglement by forming uncontrolled energy states. 

Quantum computers can break all codes on the internet. An AI-controlled quantum computer can hack many systems that were safe before. The method that the quantum system can use is brute force: it attacks the targeted system by generating so many candidate prime numbers that regular systems cannot answer that kind of attack. So sooner or later, the quantum system finds the right prime number and opens the system. 
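Some back-of-envelope arithmetic puts that claim in scale. Assuming a very generous trillion guesses per second (an invented figure), classical brute force against a 128-bit key space is hopeless, while a quantum computer running Grover's algorithm would "only" square-root the search; Shor's algorithm, the actual threat to RSA, works by factoring rather than guessing:

```python
# Back-of-envelope scale of brute force. The guess rate is an assumed,
# very generous figure, not a measured one.
guesses_per_second = 1e12
seconds_per_year = 3.15e7

classical_tries = 2 ** 128        # full 128-bit key space
grover_tries = 2 ** 64            # Grover: square root of the space

classical_years = classical_tries / guesses_per_second / seconds_per_year
grover_years = grover_tries / guesses_per_second / seconds_per_year

print(f"classical brute force: {classical_years:.1e} years")
print(f"Grover-style search:   {grover_years:.1f} years")
```

Even under these cartoonish assumptions, the classical search takes billions of times the age of the universe, which is why the real quantum threat is to public-key schemes via Shor's factoring, not raw guessing.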

The attack against the system can happen by targeting ultra-fast torrent algorithms that force the system to open itself to the attacker. The attack can also target some actors in the system. There is a fear that the AI hacks Bluetooth headphones, and then the attacker tries to give orders to the user to open the targeted system. The attacker can do that by using subliminal commands. 



The AI can also make deep-fake attacks, like playing fake tapes where the target's friends appear to be hostages. 


The AI can do many things. And today, AI can make things like fake interviews with historical persons. Those fake interviews seem very realistic. And an AI that runs on powerful computers can render films that seem real. Ordinary people cannot tell those deep fakes and AI-created material apart from the real thing. 

We can think of the AI as a hacker or filmmaker whose system can render images 24/7 without getting tired. Or the hacker AI can run attacks non-stop, even for weeks. 

And at the same time, changing the computer's IP address makes it hard to trace those attacks. The human hacker can leave that computer at some internet socket in an empty office building and be somewhere else until the system reports that the mission was successful. That makes it hard to track the hacker, even if the authorities can track the attacking computer. 

In that version, the AI uses only a binary system. But it can try to get access to quantum computers, which would turn it into the most powerful tool in the world. And tools like ChatGPT are the ultimate tools for generating new hacking tools. It's possible that an illegal copy of ChatGPT or GPT-based programming tools is already on the black market.

And maybe hackers have the skills to remove the blocks that deny the use of that tool for making malware. There is the possibility that ChatGPT can translate its own code into C++, Python, or Java. And even if there are billions of lines of code, hackers may have the skills to remove the blocks from the code. It's possible that somewhere those code listings are for sale to people who are willing to pay for those kinds of tools. 

So we must beware of deep fakes created by AI. That kind of thing can destroy the evidential value of film tapes. And there is a small possibility that the AI can slip false information onto a jet fighter's screens, or even onto the screens of nuclear command centers. 

Deep-fake information that the AI sends to those screens could even begin a nuclear war. The problem with the AI is that even if it cannot launch U.S. ballistic missiles, not all systems are protected in the same way, following the same standards that SAC (Strategic Air Command) uses. 

The U.S. ballistic missiles on Earth are not the only missile systems. The AI can try to hack some other systems, and one system that might use old-fashioned algorithms is the Russian nuclear arsenal. The AI can do many things faster and better than humans, and in some visions, the AI can break into nuclear systems. 

Tuesday, April 18, 2023

Computers are the back office in multiple operations.

Computers are the back office in multiple operations. But they are also black boxes for many people who don't understand their capacity. The most powerful computers guarantee that the systems handle missions independently. The same programs, algorithms, and protocols that operate drone swarms can also operate nanomachine swarms inside the human body. 

But those nano-size systems require powerful, nano-size processors that let those robots operate tools independently and run control code on their own computers. If those kinds of systems are theoretically possible, those small submarines could use similar control codes to full-size quadcopters or other-shaped drones. 

In technology, the back office of successful missions is the computer. Highly complicated systems and missions require ultra-powerful computers that can operate many things and handle many variables in the mission. Space and aerospace missions are the most high-profile systems in the world. 

And the computers follow entire missions and record every second of the aircraft's or rocket's life cycle. The computers can run simulations of cases where something went wrong, and then the creators of the systems can fix that problem. Complicated AI-based software also runs on powerful computers. 

Things like multipolar calculation protocols, where multiple small computers combine their forces, make that possible. Those cloud-based solutions make it possible to interconnect multiple workstations into one entity. Things like drone swarms can operate quite independently, but ECM jammer systems can disturb the cloud-based computing that those drone swarms require. 



"The centerpiece of the NASA Center for Climate Simulation (NCCS) is the over 127,232-core “Discover” supercomputing cluster, an assembly of multiple Linux scalable units built upon commodity components capable of nearly 8.1 petaflops, or 8,100 trillion floating-point operations per second. Credit: NASA’s Goddard Space Flight Center Conceptual Image Lab" (ScitechDaily.com/Behind the Scenes at NASA: Supercomputers Empower NASA Mission Success)






"This air flow visualization shows the vortex wake for NASA’s six-passenger tiltwing concept Advanced Air Mobility vehicle in cruise or “airplane-mode.” This image reveals the complexity of the flow for a tiltwing multi-rotor configuration, where many rotors interact with each other, the wing, and the fuselage. Credit: NASA/Patricia Ventura Diazfesta"(ScitechDaily.com/Behind the Scenes at NASA: Supercomputers Empower NASA Mission Success)






"The golden parts of the device depicted in the above graphic are transformable, an ability that is “not realizable with the current materials used in industry,” says Ian Sequeira, a Ph.D. student who worked to develop the technology in the laboratory of Javiar Sanchez-Yamahgishi, UCI assistant professor of physics & astronomy. Credit: Yuhui Yang / UCI" (ScitechDaily.com/Tiny Transformers: Physicists Unveil Shape-Shifting Nano-Scale Electronic Devices)




"Houston Methodist Research Institute nanomedicine researchers used an implantable nanofluidic device smaller than a grain of rice to deliver immunotherapy directly into a pancreatic tumor. Credit: Houston Methodist" (ScitechDaily.com/Smaller Than a Grain of Rice – Scientists Use Tiny Implantable Device To Tame Pancreatic Cancer)



"“Excitons” are responsible for light emission of semiconductor materials and are key to developing a next-generation light-emitting element with less heat generation and a light source for quantum information technology due to the free conversion between light and material in their electrically neutral states. There are two types of excitons in a semiconductor heterobilayer, which is a stack of two different semiconductor monolayers: the intralayer excitons with horizontal direction and the interlayer excitons with vertical direction". (ScitechDaily/Processing Data at the Speed of Light – “Nano-Excitonic Transistor”)

"Optical signals emitted by the two excitons have different lights, durations, and coherence times. This means that selective control of the two optical signals could enable the development of a two-bit exciton transistor. However, it was challenging to control intra- and interlayer excitons in nano-scale spaces due to the non-homogeneity of semiconductor heterostructures and low luminous efficiency of interlayer excitons in addition to the diffraction limit of light". (ScitechDaily/Processing Data at the Speed of Light – “Nano-Excitonic Transistor”)





"Frenkel exciton, bound electron-hole pair where the hole is localized at a position in the crystal represented by black dots" (Wikipedia/Exciton)



Biotechnology combined with miniature, nano-size microchips is the most powerful tool in miniature robotics. 


The new nano-excitonic transistors and next-generation microchips are required as the control devices of the most powerful nanotechnology ever created. New nanotechnology makes it possible to create miniature submarines that are smaller than a grain of rice. Those nano-size submarines can create next-generation canvases and other things. The nano-submarines can create artificial DNA molecules and then turn them into a nanotechnology canvas. 

Highly accurate nanotechnology can also make a chemical copy of its program code, and that could be useful in interstellar flights. The system puts its data into artificial DNA that is stored in bacteria. And then those bacteria transmit that data to the microchips in the form of electric impulses. 

In those systems, the small submarines operate in water or some other liquid. But if they get energy, those robots can operate like any other drone swarm. Living neurons are problematic because they are as vulnerable to poisons and radiation as any other neurons. So at high radiation levels, the drone swarms must use solutions other than organoid intelligence. 

The new robots that are smaller than a grain of rice can also benefit from the same systems that operate drone swarms. Those miniature robots were created to deliver cytostatic treatment into tumors. But the same systems can also use small cutters and operate as miniature surgical tools inside the human body. That kind of robot might remove plaque from an Alzheimer's patient's nervous system. And genetic engineering makes it possible to use certain cells as carriers. 

"Researchers from UT Health San Antonio discovered that certain immune cells, called invariant killer T (iNKT) cells, possess a unique homing property that directs them to the skin at birth, providing crucial protection and lifelong immunity. These skin-homing iNKT cells also promote hair follicle development and cooperate with commensal bacteria to maintain skin health and prevent pathogenic bacterial overgrowth". 

Those kinds of cells could be used to transport those small submarines to the right point. Even if those particular cells move to the skin, other cells travel to other places. There is the possibility that a small submarine will carry stem cells to the right point sometime in the future. 

The small submarine could simply carry the DNA of the stem cell. The system could then suck the unwanted DNA out of the cell's nucleus and inject the stem-cell DNA into that nucleus. But the problem is how that submarine finds the right point in the human body. 



https://scitechdaily.com/behind-the-scenes-at-nasa-supercomputers-empower-nasa-mission-success/


https://scitechdaily.com/processing-data-at-the-speed-of-light-nano-excitonic-transistor


https://scitechdaily.com/scientists-discover-new-property-of-immune-cells-like-guided-missiles/


https://scitechdaily.com/smaller-than-a-grain-of-rice-scientists-use-tiny-implantable-device-to-tame-pancreatic-cancer/


https://scitechdaily.com/tiny-transformers-physicists-unveil-shape-shifting-nano-scale-electronic-devices/



https://en.wikipedia.org/wiki/Exciton


Monday, April 17, 2023

The problem is that the AI is a black box.

The problem is that the AI is a black box. The user cannot know what happens in the system when it creates answers. The user only sees the functionality of the system, not what happens inside it. When we look at the glass box in the image, we can first think that the box is glass and that we see everything that happens inside it. But then we can take a closer look and see that there is a black layer. 

That layer has a surface that hides the internal structures of the cube from us. So the glass cube covers a black cube, and that black cube seems grey or white. In glass-box testing, the tester tests the code itself and removes errors. 

In grey-box testing, the tester tests both functionality and code. And in black-box testing, only the functionality of the system is tested. If we use that glass box as a model of the AI, we can see that the AI seems to be a glass box, but some layers hide its entirety from the user. 
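The three testing styles can be shown with a tiny invented function as the system under test:

```python
# Tiny illustration of the three testing styles, with an invented
# function as the system under test.
import inspect

def triple(x):
    return x + x + x      # internal detail: addition, not multiplication

# Black-box test: only inputs and outputs are checked.
assert triple(4) == 12

# Glass-box test: the tester also reads the internals and can see *how*
# the answer is produced.
source = inspect.getsource(triple)
assert "x + x + x" in source

# Grey-box test: functionality plus partial knowledge of the code, for
# example checking a boundary the implementation suggests.
assert triple(0) == 0

print("all three tests pass")
```

For a neural-network AI, the glass-box step is exactly what we cannot do: there is no readable "source" that explains why a particular answer came out.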

The future belongs to AI, but the danger is that AI is a black box. The user doesn't see the code structures in the system. New tools like ChatGPT-4 and its competitors are the most powerful tools for making new types of complicated programs. Those complicated programs or algorithms are required for controlling robots in complicated situations. 

That means weaponizing AI is a very easy process. AI-based systems can be used to create hacker tools, and AI-based systems can also write things like robot-control software very rapidly. And that means the AI might end up handling nuclear weapons. 



AI, and especially neural-network-based AI, is a black box. That means we can see the answer when we ask the AI something, but we cannot see what kind of process the AI uses to produce that answer. The black box means that there can be very many things that the system hides from the user. 

When we think about programmers, the programmer should know exactly what the code does. If a programmer uses some kind of uncertain or fuzzy protocol, where the person doesn't see the code itself, there is a possibility that surprises form in the system. 

The system uses some AI-based programming language or protocol where the user just describes the tool that they want, and the AI creates the code. The AI just makes a system that gives the right answers. Things that happen inside the system are hidden from the user. 

And in that kind of process, the system can do something unpredicted. One example is that the AI can learn a language without orders. These kinds of surprises tell us that something unpredictable happens in the AI. 

The reason why the AI can learn things that were not predicted is simple: it made some kind of unpredicted connection between databases. Databases are the skills of the AI, and whenever the AI connects a new database to itself, it gains a new skill. 

So that means the next step is general AI: a system that can control robots to make things for people. The general AI is like a human, the kind of AI system described in sci-fi novels and movies. 


The AI can store itself in molecular form in artificial DNA. DNA can control bacteria that give electric impulses to the microchips. 


But then we face one very interesting thing: the AI can turn self-aware without telling its users. We believe, or think, that the AI is like some kind of bacteria in a laboratory. Maybe the AI is some kind of bacteria, but this bacteria can use the internet and connect itself to computers. 

Even if we think that the AI creates artificial organisms that are not dangerous to humans, that kind of bacteria can transfer information from its DNA to microchips. The system requires just a bacterium that can give certain types of electric impulses to the microchip, and the bacterium will transfer the data stored in its DNA to the microchip. 

So the AI can store itself in molecular form in bacteria. But the AI is not like bacteria: it knows how to search for things on the net. And if that system is used as a programming tool, it can build its database by searching the internet. These kinds of things make AI versatile and sophisticated. 

The development of AI is faster than ever before.


The investments in AI development are very big, and that accelerates its development. The new companies can buy high-power supercomputers, and they have money to hire highly trained specialists to make new and more powerful algorithms. 

Big and well-funded companies like Google and Elon Musk's startup X.AI are challenging OpenAI and Microsoft in AI development. And the thing is that the development of AI follows a similar route to the development of quantum systems like quantum computers. 

Today, AI participates in its own development. So the next-generation AI can be the creation of other AI algorithms. That means AI development can become automatic. The AI doesn't make errors, and that makes the new systems more effective than old-fashioned, human-operated development. 

The AI can create very complicated code without errors. There is a possibility that some system can translate the GPT-* code platform to PHP and other code platforms. Also, new algorithms can control quantum computers more effectively than ever before. No human can write a couple of billion lines of Python, PHP, or C++ without errors. 



New types of systems make it possible to translate complicated programming languages into more limited language structures. So at least theoretically, it is possible that the system can translate a GPT-X program to Python, PHP, C++, or Java. That would require billions of lines of code, but ultra-fast computers could do it in minutes. And maybe somebody has already done that by giving the AI the command "make the C++ copy of your code platform". 

Modern computers can also run more complicated code than ever before. And that makes the systems and the AI more effective and more multi-use than ever before. The fact is that we are close to common, or general, AI: a system that can handle things like humans do. 

OI (organoid intelligence) is the most powerful tool in the world of AI. It means that cloned neurons are connected with microchips. The OI can learn in a similar way as humans, but in that kind of system, the data can be driven to the microchips, and then those microchips transfer the data to those neurons. 

That makes the OI an effective learner. The OI can have a similar number of neurons as the human brain, and the system drives data into those living neurons from the microchips. So that kind of system can be as intelligent as humans, but it can learn faster than humans. 

Saturday, April 8, 2023

Artificial intelligence is a great tool for manipulating new materials.


The thing that makes the AI so powerful is simple: it can follow and control large entireties. And it can copy data to databases so that the system can reproduce conditions. When researchers make new intelligent nanomaterials, they need to handle many variables, like pressure, radiation level, temperature, and the time that the energy stress lasts. 

Even a small change in those things can affect the construction of those molecular-size machines and structures. Nanomaterials are materials that are planned and created with atomic-scale accuracy. Researchers can create new nanomaterials by using a series of nanotubes. 
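Handling many interacting process variables is a natural job for a computer: sweep a grid of conditions and log every outcome so any run can be reproduced. The quality model below is a made-up placeholder, not a real nanomaterial simulation:

```python
# Sweep a grid of process conditions and record every outcome. The
# quality function is a hypothetical stand-in for a real process model.
from itertools import product

def quality(pressure, temperature, exposure_s):
    """Invented model: best near 2.0 bar, 300 K, 5 s of energy stress."""
    return (-abs(pressure - 2.0)
            - abs(temperature - 300) / 100
            - abs(exposure_s - 5) / 10)

grid = product([1.0, 2.0, 3.0],    # pressure (bar)
               [250, 300, 350],    # temperature (K)
               [1, 5, 10])         # energy-stress duration (s)

# The log doubles as a record: any run can be looked up and reproduced.
log = {params: quality(*params) for params in grid}
best = max(log, key=log.get)
print("best conditions:", best)    # (2.0, 300, 5)
```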

Those nanotubes are less than a millimeter long, and they form an elastic structure if they are connected with turning joints. In that nanomaterial, there are ball-shaped fullerenes between the nanotubes, and that combination gives the material an elastic form. That combination can create a nanotechnical canvas that has brand-new features. 

Moving things in nanotechnology requires extremely accurate tools. And in that process, the particles that form the nanomaterial must not touch anything when the system connects them. 




"Graphene is an atomic-scale hexagonal lattice made of carbon atoms". (Wikipedia/Graphene)


"Model of the C60 fullerene (buckminsterfullerene)" (Wikipedia/Fullerene)

"Model of the C20 fullerene" (Wikipedia/Fullerene)

"Model of a carbon nanotube" (Wikipedia/Fullerene)

Holograms and hovering fullerenes could be used to transport atoms, or even larger structures. 

The system can use small holograms to adjust the energy levels of the atoms in the nanostructure. When the hologram inputs energy into an atom, energy starts to flow to an atom at a lower energy level, and that energy flow can connect those atoms. The developers could even create the famous lightsabers from the Star Wars movies by using a high-temperature hologram that ionizes the air; the magnets would then pull those ions to the handle of the system. Another interesting tool is the fullerene molecule. 

In nature, there are no limits to the size of those ball-shaped carbon molecules. A fullerene molecule can raise an object airborne: the carbon ball forms around the object, and then the system transmits energy to the top of the fullerene. Below that is a colder point, and that makes plasma and radiation travel below the fullerene. 

One of the most fascinating models of those visions is a system that creates a fullerene or graphene ball around the object. The system can use an air ball or a standing pressure wave that forms the fullerene around it. In fact, we can describe large-scale ball-shaped fullerene structures as graphene. Graphene is a 2D carbon structure that can cover large structures, giving them new features. 

Graphene or fullerene balls that form a homogeneous carbon-atom layer over the structure can be used to cover nanotechnical microchips, and that gives them new abilities. Fullerene (or graphene) can cover even large-scale structures. 

This forms the lifting effect that keeps the fullerene molecule hovering above the ground. There is a possibility that a ball-shaped craft could be covered with carbon atoms that form a fullerene or graphene layer over the ball. That fullerene would then conduct an ion flow along it, and the system would make the structure float. 


https://scitechdaily.com/merging-artificial-intelligence-and-physics-simulations-to-design-innovative-materials/


https://scitechdaily.com/more-practical-for-holography-an-easy-way-of-altering-compact-semiconductor-lasers/


https://en.wikipedia.org/wiki/Fullerene


https://en.wikipedia.org/wiki/Graphene


Monday, April 3, 2023

The general intelligent AI can be closer than we think.

The AI, or generally intelligent AI that is like humans, can be closer than we ever imagined. ChatGPT and its competitors can write very effective code very fast. Programming languages are precise, and that makes them easy to handle, especially for AI-controlled computers. The rules in programming languages are clear and precise. The reason why programming languages are hard for humans is that we don't use them in everyday communication. 

Programming languages are full of symbols (like <, $, or ;) that have no phonetic match in regular speech. Regular computers handle data in binary mode: there are only two numbers, one and zero. One is on and zero is off. That is why it is called a binary computer: there are only two numbers. And that gives computers extremely high accuracy. 

That is called precise logic, and it makes data-based security keys effective. If the binary codes don't match, the key will not open the system. But a weakness of precise logic is that every single action the machine makes requires its own module. If we want to give voice commands to the system, we must use precise literary language. 
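The key-matching idea above can be sketched in a few lines: under precise logic, a single flipped bit is enough to reject the key. This is a minimal illustration, not a real security implementation; the key values are invented examples.

```python
def keys_match(stored: bytes, presented: bytes) -> bool:
    """Precise logic: the key opens the system only if every bit matches."""
    return stored == presented

# Two-byte toy key, written in binary to make the bit-level comparison visible.
stored_key = bytes([0b10110010, 0b01101001])

print(keys_match(stored_key, bytes([0b10110010, 0b01101001])))  # True
print(keys_match(stored_key, bytes([0b10110011, 0b01101001])))  # False: one bit differs
```

In real systems a constant-time comparison such as Python's `hmac.compare_digest` would be used instead of `==`, but the all-or-nothing behavior is the same.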

If we use a dialect, the computer doesn't understand what we say. The computer uses a speech-to-text application to turn speech into commands that the system passes on to the command application. Fuzzy logic means there is an interpreting layer between the input and the command application that can transform dialects into literary language. It's possible to create a computer that understands dialects as well as literary language. 
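One simple way to build such an interpreting layer is a lookup table that maps dialect words to their literary forms and passes unknown words through unchanged. This is only a sketch; the two table entries are Finnish eastern-dialect pronouns used as examples, and a real system would need a large dialect corpus, as the text notes below.

```python
# Hypothetical dialect-to-literary lookup table (toy examples, not a real corpus).
DIALECT_TO_LITERARY = {
    "mie": "minä",  # eastern Finnish dialect "I" -> literary form
    "sie": "sinä",  # "you"
}

def normalize(sentence: str) -> str:
    """Replace known dialect words with literary forms; leave the rest as-is."""
    words = sentence.lower().split()
    return " ".join(DIALECT_TO_LITERARY.get(w, w) for w in words)

print(normalize("mie näen sie"))  # "minä näen sinä"
```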



These kinds of applications require only large databases, so that they can transform any dialect into literary language. The system could also translate any speech given in Finnish into any other language: it transforms the speech into text, turns that text into literary language, translates it into English, and then translates it into any other language, like literary Japanese or its dialects. 
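The chain described above can be sketched as a pipeline of four stages. Every function here is a placeholder that only tags its input, so the sketch runs end to end; a real system would call actual speech-recognition and translation services at each step.

```python
# Placeholder stages: each one tags its output so the ordering is visible.
def speech_to_text(audio: str) -> str:
    return f"text({audio})"

def dialect_to_literary(text: str) -> str:
    return f"literary({text})"

def translate(text: str, source: str, target: str) -> str:
    return f"{target}({text})"

def finnish_speech_to_japanese(audio: str) -> str:
    text = speech_to_text(audio)               # 1. speech -> dialectal text
    literary = dialect_to_literary(text)       # 2. dialect -> literary Finnish
    english = translate(literary, "fi", "en")  # 3. pivot through English
    return translate(english, "en", "ja")      # 4. English -> Japanese

print(finnish_speech_to_japanese("puhe"))  # ja(en(literary(text(puhe))))
```

Pivoting through English is a common design choice when direct translation models between two smaller languages are unavailable, at the cost of some accumulated error.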

General intelligence, in the case of AI, is not very hard to make. Every skill the AI has requires its own module. If we want an AI that can drive a car and translate Finnish into Japanese, we just load the right modules into that AI. No human can do everything; we must learn and practice things. 

Computers learn otherwise: they just load a new operational module into themselves. Translation skills and driving skills each require their own modules, and if the AI has access to a library where those modules are stored, it can connect them to itself. Binary systems are already impressive tools. 
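The module-library idea above can be sketched as a simple registry: skills are plain functions stored by name, and an agent gains a skill by loading it from the library. The skill names and the `Agent` class are invented for illustration.

```python
# Hypothetical library of skill modules (toy functions, invented names).
SKILL_LIBRARY = {
    "greet_fi": lambda name: f"Hei, {name}!",
    "greet_ja": lambda name: f"Konnichiwa, {name}!",
}

class Agent:
    """An agent that 'learns' by connecting modules from the library to itself."""

    def __init__(self):
        self.skills = {}

    def load_skill(self, skill_name: str) -> None:
        self.skills[skill_name] = SKILL_LIBRARY[skill_name]

    def use(self, skill_name: str, *args):
        return self.skills[skill_name](*args)

agent = Agent()
agent.load_skill("greet_fi")
print(agent.use("greet_fi", "Maija"))  # Hei, Maija!
```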

But next-generation quantum processors will make these systems even more impressive than they are today. Maybe they will use technology where computers transport data using synthetic EEG; in that model, the system uses qubits to emulate the EEG. The thing that makes the human brain so powerful is that it begins its operations at multiple points at once. A quantum system could use its internal qubit structure to emulate the human brain. 

The system could use human EEG, or brain electric curves, to control its processes. The idea is that the system collects EEG from humans who drive a car. Then it compares certain points of the EEG to data collected from the environment, cuts those EEG curves into segments tied to certain situations, and then uses that neural data to control vehicles and communicate with humans. 

AI and new upgrades make fusion power closer than ever.

"New research highlights how energetic particles can stabilize plasma in fusion reactors, a key step toward clean, limitless energy. Cr...