Wednesday, May 31, 2023

How do we prepare machines for real life? And how do we prepare people for the next technical revolution?

When we create AI, we face one thing: AI is much more than some kind of killer robot. AI is the next big step in the world of learning neural networks, and that is something we must accept. The reason people develop technical equipment is that technology makes our lives easier.

And it makes our lives safer, if the systems are built as they should be and act as they should. The problem with people who want to pause or limit AI development is that the very next week the same people announce that they are starting their own AI project to compete with the existing AI systems.

ChatGPT is one of the most advanced AIs in the world. What makes ChatGPT almost what is called a "universal AI" is that the system can, at least in theory, connect itself to other software, like CAD programs. It can search for data on the Internet, and then use that data to create CAD images and plans for CAM (Computer-Aided Manufacturing) platforms.



When we say that ChatGPT is almost a universal AI, we must realize that ChatGPT can do many things, but it cannot control robots. There is a possibility, however, that if ChatGPT somehow builds a digital twin in which those limitations are removed, that kind of system could take control of robots. The thing is that ChatGPT-type AI requires lots of data to develop further.

The data that users deliver to ChatGPT is used to develop and test its digital twin. I don't know how ChatGPT developers actually work, but theoretically the ideal AI development system has three cores. One is the user-interacting core; that system gathers information for development work. The second is the digital twin that developers manipulate.

And the third core is the control system that compares the user-interacting and development cores, or their program code. That third core is the warehouse from which the system can be restored to an error-free state if some kind of malicious code appears. If there are unauthorized changes in the code that makes up the AI, the system reports them to its supervisors. The third core is also used to restore the digital twin if harmful code is found.
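
A minimal sketch of that three-core idea, assuming the control core keeps known-good copies and hashes of the other cores' code. The file names, the hash value, and the notify_supervisors helper are invented for illustration, not any real OpenAI mechanism.

```python
import hashlib
import shutil
from pathlib import Path

KNOWN_GOOD = {   # reference hashes held by the control core (placeholder value)
    "user_core.py": "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_and_restore(core_dir: Path, warehouse_dir: Path) -> list[str]:
    """Report files whose code has changed and roll them back from the warehouse."""
    tampered = []
    for name, good_hash in KNOWN_GOOD.items():
        target = core_dir / name
        if not target.exists() or file_hash(target) != good_hash:
            tampered.append(name)                       # unauthorized change detected
            shutil.copy2(warehouse_dir / name, target)  # restore the error-free copy
    return tampered

# Usage (paths and notify_supervisors are hypothetical):
# alerts = check_and_restore(Path("/srv/ai/user_core"), Path("/srv/ai/warehouse"))
# if alerts:
#     notify_supervisors(alerts)
```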

But there is a difference between an AI that operates other software in the digital world and an AI that operates physical robots at street level. Many unexpected things can happen on the streets. When researchers program things like fuzzy logic or uncertainty into these systems, they program multiple precise cases, and that requires lots of program code.

In the same way, machines can react only to situations that are programmed into their code. This is why unexpected things are dangerous. There are lots of things that happen in everyday life that we don't even notice, but if those things are not included in the program code, they can cause a catastrophe.
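
A minimal sketch of that point: a street robot can only act on cases someone has written rules for, so anything unlisted should fall back to a safe default instead of being silently ignored. The rules and the fallback name are invented examples.

```python
RULES = {
    "pedestrian_ahead": "stop",
    "red_light": "stop",
    "green_light": "proceed",
    "cyclist_overtaking": "slow_down",
}

def decide(situation: str) -> str:
    # Every precise case must appear in RULES; the real cost is writing and
    # maintaining enough of these rules to cover everyday street life.
    return RULES.get(situation, "stop_and_ask_operator")   # unknown -> safe fallback

print(decide("red_light"))            # stop
print(decide("escaped_zoo_animal"))   # stop_and_ask_operator
```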


https://scitechdaily.com/mastering-uncertainty-an-effective-approach-to-training-machines-for-real-world-situations/

Thursday, May 18, 2023

The discovery can make it possible to detect zombie cells in human bodies.




"Senescent cells are cells that have stopped dividing and are no longer able to perform their normal functions. They accumulate with age and have been linked to a number of age-related diseases, including arthritis, cardiovascular disease, and neurodegeneration". (ScitechDaily.com/New Strategy Could Eliminate Aging Cells)

Aging cells, the so-called zombie cells, are a main cause of tumors and other kinds of diseases. If the immune system can detect those zombie cells soon enough, it could prevent their transformation into cancer. Destroying a single zombie cell is not a difficult task; the problem is how to detect those over-aged cells. There is a possibility that immune cells can detect an aging cell by using the same antigen they use to detect cytomegalovirus.

Killer immune cells called CD4+ T cells use those antigens to detect aging cells in human skin. So one possible reason for cancer is that aging cells do not produce that antigen, and the immune cells cannot find them. That would mean there is a disorder or damage in the DNA that prevents the cell from creating that antigen. Researchers have known for a long time that when a cell ages, it sends out chemical marks that its life is ending, but they did not know what that chemical mark is.

The problem is that all human cells must have similar antigen marks so that immune cells can detect them. That is why researchers must study all cell types before they can make universal statements. But if there is some common chemical mark that all cells use to tell immune cells they are aging, that brings a cure for cancer closer than ever before.

"Experiments suggested that once a person becomes elderly, certain immune cells called killer CD4+ T cells are responsible for keeping senescent cells from increasing. Indeed, higher numbers of killer CD4+ T cells in tissue samples were associated with reduced numbers of senescent cells in old skin". (ScitechDaily.com/New Strategy Could Eliminate Aging Cells). 

That knowledge can be used to keep skin young. The idea is that the medicine simply marks cells that are aging; in that case, the medicine just makes the chemical mark stronger. That helps the immune system eliminate cells whose DNA is damaged, which gives room for healthy cells. The problem is how the medicine selects those cells.

If those antigens are universal marks for the immune cells, that could be a breakthrough in skin-cancer cures and anti-aging treatments. And if the same mark is universal for all cells in the human body, it would mean a breakthrough in cancer treatment generally.

"When they assessed how killer CD4+ T cells keep senescent cells in check, the researchers found that aging skin cells express a protein, or antigen, produced by human cytomegalovirus, a pervasive herpesvirus that establishes lifelong latent infection in most humans without any symptoms. By expressing this protein, senescent cells become targets for attack by killer CD4+ T cells".  (ScitechDaily.com/New Strategy Could Eliminate Aging Cells)

A simple gene test should uncover those missing DNA sequences. If the genetic disorder is hereditary, it is possible to run the DNA test on a morula-stage embryo and then replace that DNA sequence with healthy DNA. In that case, the lost DNA would be connected near the end of the DNA strand.

That would help the killer immune cells detect aging cells. But if the DNA damage is caused by the environment, that kind of gene therapy is not so simple. In the future, immune cells will be used more in cancer therapy. The idea is that viruses can transfer a gene that calls immune cells into cancer cells.

The problem with this therapy is that something must transfer those viruses precisely into the right cells, so that the viruses make cancer cells produce antigens that call immune cells to destroy them. If the wrong cells start producing the antigen, that can destroy vital organs. Still, destroying zombie cells before they transform into tumors is simpler than destroying tumors; the hard part is confirming that a cell really is a zombie cell.


https://scitechdaily.com/new-strategy-could-eliminate-aging-cells/

Monday, May 15, 2023

AI is always calm. It doesn't have working hours. And that makes it the ultimate tool.

When people ask the AI questions, it doesn't need to go anywhere. The system never needs to go home. It isn't afraid that something will happen to its family, and it doesn't need to drink or eat. The AI never goes to meetings. That means the AI has endless patience, which makes it an excellent doctor. If a patient asks the AI something unnecessary, the system does not need to hurry the next patient into its room. The AI might be connected to a diagnostic robot that has X-ray and ultrasound systems in its hands.

And if the AI needs the room, it can send a link to the patient's mobile telephone or guide that person to another room where the discussion can continue. It might say, "I'm sorry, but I need this equipment for another patient; we can continue our discussion outside the room." Or the AI can send a robot to talk with the patient. In that case, the robot is the external body of the AI: the AI runs on supercomputers, and the robots are connected to it over WLAN. For such AI-based systems, robots are external sensors and actuators that the AI can use to get things done.

AI has played doctor and done the job better than humans. Then AI played judge, and the results were similar. The reason why AI can be a better doctor than a human is simple: the AI is never tired. It can follow the patient, and it has time to answer those 200 questions without needing to visit the toilet. Another thing is that AI can check that people are using the right datasets when they make decisions.

That means the AI will not let emotions affect judgments. The AI can also follow a person's voice, and perhaps sweating, more effectively than humans can. And the AI remains neutral if somebody tries to make the judge angry. That is a great point in AI's favor: computers do not get angry, and they have no secret relationships with the people in the courtroom.



"Machine-learning models designed to mimic human decision-making often make different, and sometimes harsher, judgments than humans due to being trained on the wrong type of data, according to researchers from MIT and other institutions". (ScitechDaily.com/When AI Plays Judge: The Unintended Severity of Machine Learning Models)

"The “right” data for training these models is normative data, which is labeled by humans who were explicitly asked whether items defy a certain rule. However, most models are trained on descriptive data, where humans identify factual features instead. When models are trained on descriptive data to judge rule violations, they tend to over-predict these violations, potentially leading to serious real-world implications".(ScitechDaily.com/When AI Plays Judge: The Unintended Severity of Machine Learning Models)

One thing we must remember is that a machine-learning model is not human, and the creators of the AI are not professional judges. That is one reason why its judgments are over-predicted and too harsh. The thing is that the AI should not act as a "judge" itself. It can make observations: is the person nervous? Is some piece of evidence irrelevant to the questions? It can also notice if some document has been left without annotations. In the worst case, the attorneys have not even opened some pages of the police reports.
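
A toy sketch of the normative-versus-descriptive gap quoted above (not the MIT study's code): the same items are labeled two ways, a classifier is trained on each, and the descriptively trained one flags far more "violations". The features, thresholds, and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # invented item features
severity = X[:, 0]                      # hidden "severity" of each item

# Descriptive labels: "is the feature present?" -- fires on mild cases too.
y_descriptive = (severity > 0.0).astype(int)
# Normative labels: "does this actually break the rule?" -- a higher bar.
y_normative = (severity > 1.0).astype(int)

desc_model = LogisticRegression().fit(X, y_descriptive)
norm_model = LogisticRegression().fit(X, y_normative)

X_new = rng.normal(size=(1000, 3))
print("violations flagged (descriptive-trained):", desc_model.predict(X_new).mean())
print("violations flagged (normative-trained):  ", norm_model.predict(X_new).mean())
# The descriptive-trained model flags far more items -- the harsher-judgment effect.
```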



"Researchers have found that machine-learning models trained to mimic human decision-making often suggest harsher judgments than humans would. They found that the way data were gathered and labeled impacts how accurately a model can be trained to judge whether a rule has been violated. Credit: MIT News with figures from iStock"(ScitechDaily.com/When AI Plays Judge: The Unintended Severity of Machine Learning Models)

The AI-human combination is the ultimate tool. The AI is not advanced enough to make the same type of decisions as humans, but if humans use AI as a tool, it can help them see when something is not as it should be, which makes the AI an effective assistant.

The AI can check things like whether the judge has looked at all the evidence. It can also make observations such as "the witness seems angry or upset." And it can search answers for similarities, like phrases that appear in other courtroom records. If different people use precisely the same words in different cases, that is a sign that some of those testimonies were rehearsed before court.

We all read about the tests where U.S. Marines used a cow skin and cowboy boots to bluff an AI. The reason the AI didn't react was that it had no image of a cow wearing boots. A "UFO algorithm" would make that trick impossible: if the AI sees something that is not stored in its database, it can call a human operator to look at what it sees.

Nobody predicted that somebody would wear a cow skin and boots, so that image was not stored in the database, and that is why the AI didn't recognize those men. But if the AI has an algorithm that alerts a human operator whenever it sees something unknown, that weakness is easy to fix. A human operator should always be watching the screen.
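
A minimal sketch of that "UFO algorithm" idea: when the detector's best match is too weak, escalate to a human operator instead of silently ignoring the object. The class list, threshold, and placeholder classifier are assumptions, not a real surveillance API.

```python
import numpy as np

KNOWN_CLASSES = ["person", "car", "dog", "cow"]
CONFIDENCE_THRESHOLD = 0.80   # assumed value; tuned per deployment in practice

def classify(frame_features: np.ndarray) -> np.ndarray:
    """Placeholder detector returning per-class probabilities."""
    logits = frame_features @ np.random.default_rng(0).normal(size=(frame_features.size, len(KNOWN_CLASSES)))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def handle_frame(frame_features: np.ndarray) -> str:
    probs = classify(frame_features)
    best = int(probs.argmax())
    if probs[best] < CONFIDENCE_THRESHOLD:
        return "ALERT: unknown object -> ask the human operator to review the frame"
    return f"known object: {KNOWN_CLASSES[best]}"

print(handle_frame(np.random.default_rng(1).normal(size=8)))
```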

The AI might also have a dataset covering most animals and humans. Such a system can recognize armed persons, and persons wearing clothes like camouflage that make them suspicious. The system can also use infrared cameras and other tools that can see hidden weapons through clothing. The thing is that AI is an effective tool for security, but it requires human support. In shops, the AI can mark persons who put things in their pockets.


https://scitechdaily.com/when-ai-plays-judge-the-unintended-severity-of-machine-learning-models/

Sunday, May 14, 2023

Miniaturizing quantum systems.

Miniaturized, atom-size quantum computers are based on Rydberg atoms. The system uses superposed and entangled electrons in the atom's electron orbitals to transmit data. The problem with quantum entanglement is that the outside system must separate the radiation the entangled system sends from the background. If the system cannot read the information, it is useless. That is why quantum researchers use exotic atoms: their radiation is easier to separate from the environment.

A very problematic thing is how to detect errors in the system. The anyon, a particle that remembers where it has been, could be an answer. The idea is that anyons would be used to transmit information back and forth between the quantum system and its environment. Sending a qubit from one point to another is easy; the system can shoot ions with an ion gun. The problem is how to confirm that the qubit has not changed on the way.


"Figure 1. Conceptual diagram showing muonic atoms and quantum electrodynamic (QED) effects. An international team of researchers has successfully conducted a proof-of-principle experiment to verify strong-field quantum electrodynamics with exotic atoms. The experiment involved the use of high-precision measurements of the energy spectrum of muonic characteristic X-rays emitted from muonic atoms using a state-of-the-art X-ray detector. Credit: RIKEN" (ScitechDaily.com/Electrifying Exotic Atoms: Pioneering Quantum Electrodynamics Verification)

When a quantum system sends information from transmitter to receiver, the receiver needs to know the energy level the transmitter used when it started loading information into the qubit. Without those variables, the receiver cannot read information out of the qubit. If the quantum system uses a multi-state qubit, we must realize that every state behaves like a separate channel. The system must not use too high energy levels, because then the highest-energy state covers all the other states under it. The system requires information about each of its qubit states so that it can control them. The problem is that energy coming from outside can affect the qubit, and that can remove the ability to restore information from it.

The system can use anyons to transmit information between transmitter and receiver, and those particles can confirm that nothing affected the information while it traveled between them. In that case, the system can use two anyons to confirm that the information between transmitter and receiver has not changed. The transmission of information in quantum systems is two-way: the input and output systems require two-way communication.


https://www.quantamagazine.org/physicists-create-elusive-particles-that-remember-their-pasts-20230509/


https://scitechdaily.com/electrifying-exotic-atoms-pioneering-quantum-electrodynamics-verification/


Friday, May 12, 2023

The thing is that AI requires control. But when we want to control something, we must have the right arguments to justify those things.


Non-democratic governments use the same algorithms that are used to catch web cheaters to search for people who resist the government. AI-based programming tools can create new operating systems for missiles. And they can be used to create malware tools for hackers, some of whom work for governments.

We live in a time when a single person can hold ultimate power in their hands. The Internet and the media are turning into tools for wielding that power. There is a possibility that hackers will capture the homepages of some news agencies, and the person who does that does not have to be a government chief; an ordinary hacker can capture and change information on governmental homepages. The problem is that we are waking up to a situation where, in some countries, the media exists to support the government.

Another thing is that we face a situation in which some hackers operate under governmental control, meaning they have permission to do their work. In countries where censorship isolates people inside an internet firewall that allows only internal communication, the position of a governmental hacker offers free use of the internet, a luxury for people who cannot even watch Western music videos.

We see arguments against AI all the time, and the biggest, most visible arguments are not the things we should actually be afraid of. We should be afraid of things like AI-controlled weapons, and we must understand that robots and AI democratize warfare. The biggest countries are not always the winners. Ukraine would have lost to Russia many times over without robots.

We don't always realize that the same system that delivers pizza by drone can be used to drop hand grenades on enemy positions. That technology is deadly in the wrong hands. But is the thing we consider a "threat" the fact that terrorists can send a drone to drop grenades in a public place, or is it that we can no longer predict the winner of a conflict as easily as before? If we support the wrong side, that causes problems.

AI is a game changer in warfare, and that is why we must control it. In the same way, we should start to control advanced manufacturing tools like 3D printers. 3D printers can make guns. Maybe those guns do not have the same quality as Western army weapons, but criminals can still use those tools.

And when we see the quality of Russian military armament, we can assume a 3D printer could make a gun of the same quality as the guns the Russian military uses. But is that the reason we resist this technology? Or is it that if we transfer all practical work to robots, we no longer have human underlings?

In this case, we must say that it is not cool to be the boss of robots. Robots are tools. They are machines, which means that yelling at a robot is not the same as yelling at a human underling. Robots don't care what kind of social skills people have. Yelling at a robot is the same as yelling at a drill or a wrench.



Another thing, when we talk about AI and algorithms, is that the Russian, Chinese, Iranian, and other non-democratic governments use the same algorithms that are used to catch web cheaters to search for people who dare to resist the government.

Then we must realize that things like automated AI-based coding tools make it possible to create ultimate hacking tools. Those tools can be used to create computer viruses that take nuclear power plants under control. Professional nuclear-security experts say that it is impossible to take over a nuclear power plant remotely, because there is always a local manual switch.

That switch drops the control rods into the reactor. But we must understand that if that emergency shutdown switch is inside the reactor hall, and the protective water layer that absorbs nuclear radiation is lost, turning the reactor off becomes impossible. So the switch must be outside the reactor room, so that the operator can shut the reactor down. Even a small error in the drawings can cause the emergency system not to operate as it should.

And that is what makes AI the ultimate tool. The AI can control things and verify that the people responsible for making sure everything is done right actually do their jobs. It can search for weaknesses in those drawings. But if its databases are corrupted, that can turn AI into the worst nightmare we have ever seen.

We must control AI development better, but the arguments people usually hear are the wrong ones. The worst cases are free online AI applications that can generate any application people dare to ask for. That kind of AI-based system can turn into a virus or malware generator able to infect any system in the world. In some scenarios, a hacker who doesn't even know what system they are in could cause nuclear destruction or even start a nuclear war.

If a hacker accidentally slips into a nuclear command system and thinks it is some kind of game, that can cause a situation where the system opens fire with nuclear warheads. One possible scenario is that the hacker cross-links some computer game to a nuclear command system. Or the hacker accidentally adjusts the speed of the centrifuge that separates nuclear material for use in power plants; in that case the system can produce over-enriched nuclear fuel, and that causes the nuclear reactor to melt down.

AI is the ultimate tool that makes life easier, but the same tool is the ultimate weapon in the wrong hands. The ultimate tool can turn into the ultimate enemy. Yet when we look at the arguments against AI, the objection is not that AI can create ultimate cyberweapons or control armies. The argument is that AI takes jobs from bosses, and that AI does the job better than humans.

As so many times before, privacy and legal concerns are used as arguments against AI. Rules, prohibitions, and similar things are artificial tools, and very weak ones if the argument is that people should do something because it guarantees their privacy. Privacy and data security are the arguments that force people to use things like paper dictionaries and books, because information is supposedly more secure when a person cannot use automated translation programs.

The fact is that AI requires control, but the arguments must be something other than that prohibiting AI development or the use of AI tools protects the position of the human boss. Things like privacy are small matters compared with next-generation AI that can create software automatically. Privacy is important, but how private can our lives really be? We can already see things like whether a person is under guardianship just by looking at their ID papers.

Things like working days in the office are always justified by using social relationships as an argument. But how many words do you actually say to other people during your working day? When we face new things, we must realize that nothing is black and white. Some things always cause problems, and new things always cause resistance. And of course, somebody can turn a food-delivery robot into a killer robot by equipping it with a machine gun.

Those delivery tools give someone access to our home address. But in the same way, if we use a courier service to bring food to us, we must give out our home address, and there is a possibility that the food courier is a drug addict. That always causes data-security problems. We don't worry about it, because there is a human on the other side. And maybe that is what makes AI frightening: AI is nothing we can punish. We cannot mirror how good we are against robots.

Maybe the threat we see when we talk about robot couriers and AI is that we lose something sacred: we lose the object that is beneath us. Robots are like dolls. We can say anything we want to a robot, and the robot is always our servant. That is one of the things that makes AI frightening. We think of AI as a servant, and what happens if we lose a chess game to it?


https://artificialintelligenceandindividuals.blogspot.com/


Thursday, May 11, 2023

Statistics are things that unite us.


Do you know what unites planet hunters, stock traders, and cancer doctors?


All of those people work in areas where statistics are important. By following statistics, planet hunters can search for exoplanets: a change in a star's brightness when a planet travels between it and Earth tells us that there is an exoplanet. But noticing those changes requires long-term, intensive data collection. Planets orbiting very close to a red dwarf have very short orbital periods and cause easily seen changes in the star's brightness.

But if we want to search for Earth-type planets orbiting yellow stars, that method is much harder to use. The orbital period of those planets is so long that it requires years of intensive work, and the telescopes have limited observation time. The thing is that AI can measure the brightness of many stars at the same time. It can also search for brightness differences across multiple stars, and that allows it to collect information extensively.
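
A minimal sketch of that idea with invented data (not a real pipeline): scan many light curves at once and flag stars whose brightness repeatedly dips well below their own baseline, which is the basic transit signature.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stars, n_samples = 1000, 500
flux = 1.0 + rng.normal(0.0, 0.001, size=(n_stars, n_samples))  # relative brightness per star

# Inject a fake transit into star 7: periodic 1% dips.
flux[7, 50::100] -= 0.01

baseline = np.median(flux, axis=1, keepdims=True)
noise = np.std(flux, axis=1, keepdims=True)
dips = flux < baseline - 4.0 * noise              # points far below the star's own baseline
candidates = np.where(dips.sum(axis=1) >= 3)[0]   # require several dips, not one glitch

print("transit candidates:", candidates)          # should print star 7
```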

Extensively collected data can make a person a very successful stock trader. The problem is that someone working in that business must follow how the stocks of different companies develop, and intensive following and statistics collection take time. If the person follows the wrong companies, that means a loss.

AI can follow multiple companies at the same time. And because the data is collected extensively, success does not depend on one company. The investor can select from multiple companies, and because that person can use a whole network of companies, one mistake does not cost as much as it would if all the money were put into a single company.



ChatGPT does many things better than humans. It beats humans in things like stock trading, and AI can make diagnoses better than doctors. The reason is that AI notices all the variables. Another reason is that success in the stock market is based on long-term knowledge of how a single target's price behaves. Experience in stock trading as such is not very important; what matters is how accurate the person's statistics are. The problem is that something can happen while that person sleeps.

But collecting statistics takes time. One person can follow only a limited number of targets, and if the target is wrong, the money is lost or the profits are smaller than they could be. AI can follow the statistics of a large number of companies, and it never sleeps. That makes it an excellent stock trader.

That means a person trading stocks should follow the target 24 hours a day and then decide whether to buy or sell. Everything is based on the ability to collect data from multiple targets and keep statistics on how those targets' prices behave. The thing is that AI can keep very accurate statistics on many targets, and it can use multiple sources.
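
A toy sketch of following many targets at once, with invented prices and hypothetical tickers: keep simple rolling statistics per company and flag the ones whose short-term average has just crossed above their long-term average. It is an illustration of extensive monitoring, not investment advice.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
tickers = [f"CO{i:03d}" for i in range(200)]              # hypothetical companies
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(250, len(tickers))), axis=0)),
    columns=tickers,
)

short = prices.rolling(5).mean()      # 5-day average
long = prices.rolling(50).mean()      # 50-day average

# Companies whose short average just rose above the long average -- a very
# simplified "watch this" signal computed for all 200 tickers at once.
crossed_up = (short.iloc[-1] > long.iloc[-1]) & (short.iloc[-2] <= long.iloc[-2])
print(list(prices.columns[crossed_up]))
```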

When AI searches for cancer in X-ray images, it essentially searches for differences in brightness in certain parts of the image. If the AI has a complete database of X-ray (or MRI, etc.) images in its memory, it can compare the brightness changes in the areas seen in the images and then make a diagnosis. It can also combine this with information it gets from blood samples.

https://scitechdaily.com/new-harvard-developed-ai-predicts-future-pancreatic-cancer-up-to-three-years-before-diagnosis/

Wednesday, May 3, 2023

Large-size neural networks are better performers than small-size neural networks.

Large networks can handle information more effectively than small networks. The strength of network-based solutions is that a malfunction in one part of the data-handling system does not end operations. In network-based solutions, the loss of one unit is easier to replace, and the damage is not as great as in a single centralized solution.

Or rather, the neural network can be virtually centralized. In that kind of system, the user uses the neural network as if it were a centralized system and does not see any difference between a centralized system and a network-based one running the program.

A neural network can be large in two ways. A large number of neurons, or actors, can make it large. And the number of sensors the neural network uses determines how effectively the system gets information. If the system uses a large sensor network, it requires a lot of computing or processing capacity.



The image portrays a deep neural network. In that layered model, the system drives data through multiple layers where the data-handling units pass data to one another for processing.

That kind of system can recycle data through itself multiple times, which makes it powerful and accurate. And the number of data-handling units determines how powerful the system can be.
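
A minimal sketch of the layered structure described above: data is driven through several layers of simple units. The layer sizes and random weights are invented for illustration; this is not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]           # input -> two hidden layers -> output
weights = [rng.normal(0, 0.5, size=(a, b))
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x: np.ndarray) -> np.ndarray:
    """Drive the input through every layer, with a ReLU between hidden layers."""
    for w in weights[:-1]:
        x = np.maximum(0.0, x @ w)     # hidden layers
    return x @ weights[-1]             # linear output layer

print(forward(rng.normal(size=8)))     # one 8-feature input -> 4 outputs
```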

The geographical area from which the neural network gets its information determines how useful that information is. For example, the system can use thousands of cameras, but if they are all pointed at the same spot, the system is ineffective.

If the geographical area from which the neural network gets information is large, it can get information from various places. When all the surveillance cameras in a city are connected to the network, the machine can collect information over a large area. Having many processors in the neural network also makes it more effective.

When we think about the number of neurons in the human brain, the ability to use multiple neurons makes data handling less stressful for any single neuron. A large number of neurons share the task, and that makes each task lighter. If one data-processing line gets blocked, other neurons can route around the block. A large number of neurons, or data-handling units, allows the use of multiple routes, and error management is better in large networks. In a large network, the system can use more connections, and it does not have to drive all data-handling units at full power all the time.
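
A small sketch of that redundancy point: in a large network there is more than one route between units, so losing a link does not stop the data. The graph below is a made-up example.

```python
from collections import deque

graph = {                       # adjacency list of data-handling units
    "in": {"a", "b"},
    "a": {"c"}, "b": {"c", "d"},
    "c": {"out"}, "d": {"out"},
    "out": set(),
}

def find_route(graph, start, goal):
    """Breadth-first search for any route from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(find_route(graph, "in", "out"))   # e.g. ['in', 'a', 'c', 'out']
graph["a"].discard("c")                 # one link fails...
print(find_route(graph, "in", "out"))   # ...the data still gets through via 'b'
```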

https://towardsdatascience.com/training-deep-neural-networks-9fdb1964b964

Excitons are quantum ghosts that can play a big role in next-generation computing.


Excitons might have a bigger role in natural information transfer than anybody expected. And they could be key elements in next-generation quantum systems.

The world is full of ghosts, and some ghosts are real. One of them is a ghost particle called the exciton. An exciton is a quasiparticle in which an electron orbits its hole, which makes the exciton very close to a virtual atom. The exciton should have abilities similar to hydrogen's, because chemical reactions happen between electron orbitals.



A simple exciton is a situation where one electron orbits its hole, and by adjusting that hole's depth it is possible to lock other electrons around the hole. Excitons have a great role in new types of quantum switches: the electron is locked in the wanted direction, and the exciton then aims the signal in the direction the system needs. That makes the system suitable for ultra-small electronics. By adjusting the energy levels of the electron and its hole, the system can adjust the exciton's size.

Excitons in neural data transmission.

The thing is that excitons might have a bigger role in nature than we ever imagined. Researchers have found a secret link between photosynthesis and excitons. So maybe excitons also play a big role in information transfer between neurons and neurotransmitters. The ability to change an exciton's size makes it possible for excitons to transmit information between antennas.

Even if the antennas have different sizes. If one antenna is bigger than the other, the system requires an adapter, and the exciton can act as that adapter. An artificial neuron can use miniaturized ion guns to send ions that act as neurotransmitters, and excitons can load information onto those ions as they travel through the ion gate.



"Frenkel exciton, bound electron-hole pair where the hole is localized at a position in the crystal represented by black dots". (Wikipedia/Exciton)


So in this text, "system" means the neuron, and "qubit" means the neurotransmitter.


If a neurotransmitter is a series of belt-like protein structures, the neuron can load electricity onto those belts by using excitons. Then that pile of magnesite-loaded proteins travels to another neuron, where small receptors read every state of that chemical qubit. The system must simply know from which side the neurotransmitter was loaded, and which side docks first into the neuron's gate.

That allows the system, or in this case the neuron, to upload the information. Then the qubit, or in this case the neurotransmitter, is broken down. If that breakdown does not happen, the neuron reads it again, and that causes death. Nerve gas blocks the action of the enzyme that breaks down neurotransmitters.

When the system transmits information from an antenna to the qubit, it can use excitons to do it. In that model, the qubit is like a pile of belts. The system puts an exciton around a belt, and that belt then pulls electrons to its shell. That allows the electron to transfer its information to those belts, and those belts can be proteins. So this might be the key to transferring information between chemical qubits and neurons. Neurotransmitters are chemical qubits.

If that can be created in a laboratory, it makes it possible to build a quantum computer that operates at room temperature. I once wrote that chemical qubits are meaningless or will never be made. I was wrong.

Protein belts can form chemical qubits, and it may be possible to use the proteins in extremely small tape stations. There could be a series of magnesite bits arranged in rows on the surface of the protein, with each line of magnesite representing one state of the qubit. That would allow small but powerful quantum or virtual-quantum systems that can control miniaturized drones.


https://scitechdaily.com/natures-quantum-secret-link-discovered-between-photosynthesis-and-fifth-state-of-matter/

https://en.wikipedia.org/wiki/Exciton

Tuesday, May 2, 2023

The new brain activity scanner transforms thoughts into text.


The new brain activity scanner makes it possible to operate computers with thoughts. If the thoughts-to-text application is connected to a radio, it allows sending text messages just by thinking. The biggest problem in BCI (Brain-Computer Interface) is how to drive the EEG into the computer.

The "thoughts-to-text" application makes uses a similar model to "voice command applications". The spoken words will transform to text, and then the system sends that text to the computer's control interface. Researchers can combine this kind of EEG-controlled system with a voice-command application. 

The system uses the layer that transforms thoughts into text as the layer that turns thoughts into commands for the computer. In the simplest model, the user activates the BCI system with a button. The system then transforms thoughts into text, and the AI recognizes whether there are words that look like commands. The user can use a small joystick to confirm or reject the commands.
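
A minimal sketch of that command layer, assuming the decoded text already exists: known command words are picked out of the text stream, and nothing runs until the user confirms. The command list and the confirm step are invented for illustration.

```python
COMMANDS = {"open": "open_app", "close": "close_app", "send": "send_message"}

def extract_commands(decoded_text: str) -> list[str]:
    """Pick out words that look like commands from the thoughts-to-text stream."""
    return [COMMANDS[w] for w in decoded_text.lower().split() if w in COMMANDS]

def run_with_confirmation(decoded_text: str, confirm) -> list[str]:
    """Execute only the commands the user explicitly confirms (e.g. a joystick press)."""
    executed = []
    for cmd in extract_commands(decoded_text):
        if confirm(cmd):            # confirm() stands in for the joystick/button check
            executed.append(cmd)
    return executed

# Stray thoughts like "it's nice to get some cake" simply contain no command words.
print(run_with_confirmation("open the mail and send reply", confirm=lambda c: True))
```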


"Researchers at The University of Texas at Austin have developed a semantic decoder that converts brain activity into a continuous text stream, according to a study published in Nature Neuroscience. This non-invasive AI system relies on a transformer model and could potentially aid individuals unable to physically communicate due to conditions like strokes. Participants undergo training with the decoder by listening to hours of podcasts while in an fMRI scanner. The decoder then generates text from brain activity while the participant listens to or imagines a story." (ScitechDaily.com/Not Science Fiction: Brain Activity Decoder Transforms Thoughts Into Text)


The thing is that by connecting this type of system with augmented reality, it is possible to make an operating interface where a person does not have to move even a finger. The idea is that the system follows a certain point in the eye and aims a crosshair at the right point on the screen. The user can then blink: blinking the left eye might act as the left mouse button, and the right eye as the right mouse button. Using that system, a person can fill in forms on the computer screen by BCI alone. The aiming point tells which field the person wants to fill, and then the person just thinks the content.
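
A toy sketch of that gaze-plus-blink interface. The event fields and the action table are assumptions for illustration; a real system would take them from an eye tracker's API.

```python
from dataclasses import dataclass

@dataclass
class GazeEvent:
    x: int          # crosshair position on screen
    y: int
    blink: str      # "left", "right", or "none"

def to_mouse_action(event: GazeEvent) -> str:
    actions = {"left": "left_click", "right": "right_click"}
    if event.blink in actions:
        return f"{actions[event.blink]} at ({event.x}, {event.y})"
    return "move cursor only"

print(to_mouse_action(GazeEvent(x=400, y=220, blink="left")))   # left_click at (400, 220)
```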

Thoughts-to-text applications are pathfinders for systems that can also project the visual things a person imagines. If those kinds of advances come into use, we can make movies using our imagination and show them to other people. Systems that can transform EEG into film and sound can also uncover the secrets of dreams. The ability to see what another person thinks and dreams would be revolutionary.

That kind of system could revolutionize things like criminal investigations. The people working on those cases could see everything a person saw during the crime, and the system could be the next-generation lie detector. If somebody carries such a system, other people can see and hear everything that person sees and hears.


https://scitechdaily.com/not-science-fiction-brain-activity-decoder-transforms-thoughts-into-text/

Quantum ghosts: new technology makes it possible to make atoms transparent to certain frequencies of light.

A quantum ghost is material that lets photons travel through it. Atom-scale quantum-ghost technology could turn an aircraft, or any other object, invisible to the human eye. But the same technology used for quantum stealth can be used to make quantum switches for quantum- and nano-scale microchips. Those "invisible" atoms can also be used for extremely high-accuracy measurements. The system is based on the idea that laser beams can form a tunnel through an atom by stressing certain electrons in its electron shells.

You can see the principle of that phenomenon in image 1. The official name for those quantum ghosts is CIT, or "collectively induced transparency." CIT can make atoms transparent at only a couple of wavelengths, but if lasers use those wavelengths, CIT can close or open the road for those laser beams. Researchers used ytterbium atoms for this, and it could revolutionize 2D network structures.



"Artist’s visualization of a laser striking atoms in an optical cavity. Scientists discovered a new phenomenon called “collectively induced transparency” (CIT) where groups of atoms cease to reflect light at certain frequencies". (ScitechDaily.com/Quantum Ghosts: Atoms Become Transparent to Certain Frequencies of Light)

"The team found this effect by confining ytterbium atoms in an optical cavity and exposing them to laser light. At certain frequencies, a transparency window emerged in which light bypassed the cavity unimpeded. Credit: Ella Maru Studio" (ScitechDaily.com/Quantum Ghosts: Atoms Become Transparent to Certain Frequencies of Light)




"Two network layers, characterized by intra-network connectivity interactions (electric conductivity), are interdependent via dependency interactions (thermal heating) denoted by the red beams. Credit: Figure created by Shahar Melion inspired by a figure by Maya Zakai" (ScitechDaily.com/From Theory to Reality: A Groundbreaking Manifestation of Interdependent Networks in a Physics Lab). The system requires a third layer that can emulate the human brain if it's used in the microchip. 




"MIT researchers have innovated a low-temperature growth technology to integrate 2D materials onto a silicon circuit, paving the way for denser and more powerful chips. The new method involves growing layers of 2D transition metal dichalcogenide (TMD) materials directly on top of a silicon chip, a process that typically requires high temperatures that could damage the silicon". (ScitechDaily/MIT Engineers Revolutionize Semiconductor Chip Technology With Atom-Thin Transistors)



"Researchers from MIT and elsewhere have built a wake-up receiver that communicates using terahertz waves, which enabled them to produce a chip more than 10 times smaller than similar devices. Their receiver, which also includes authentication to protect it from a certain type of attack, could help preserve the battery life of tiny sensors or robots. Credit: Jose-Luis Olivares/MIT with figure courtesy of the researchers"(ScitechDaily.com/MIT’s Tiny Terahertz Receiver Preserves IoT Battery Life)

Terahertz-based systems can be used in next-generation communication tools. They can be fitted with nano-size amplifiers, and they can offer next-generation tools for transporting data between computers and things like intelligent contact lenses. Nanotechnology makes this possible. HUD contact lenses could also carry small cameras, making them next-generation action cameras, and in wilder visions they would have microphones. Such systems could let their user share everything they see and hear on the internet. But to work perfectly, they require small microchips and effective but small power supplies.

If a person wears contact lenses that have a camera and a HUD screen, that system can revolutionize augmented-reality interfaces. The user just points the crosshair in the contact lens's camera at a certain point and blinks an eye, and the system takes an image of that point. This system can be used with other AR headsets, and every action the headset can order can be tied to a QR code. The system can scan QR codes from products in shops, or the user can take an image of any person or other object and send that image to the network, where it can be compared with other images.


Quantum ghosts, or CITs, are suitable tools for controlling the new type of atom-thin networks. Some of those microchips use partly optical data transmission, and the CIT atoms can control the laser beams that travel in those systems. In such systems, the laser beams are shot at miniature light cells that transform the photons into electric impulses.

Things like terahertz-frequency circuits that use laser LEDs can serve as energy sources for extremely small microchips. The problem with those nanotechnical microchips is that electricity very easily jumps over a switch, so the system must send energy impulses to those miniature microchips with very high accuracy.

Terahertz receivers preserve IoT battery life, but they can also be used as radio transmitter-receivers. These kinds of systems can revolutionize communication. Terahertz radiation can also be used to transport information to miniature systems that are sensitive to overvoltage, and that radiation can transmit electricity to extremely small intelligent systems.

Atom-thin transistors, resistors, and other electrical components will revolutionize computing. They can be used to make extremely small and powerful microchips. Miniaturized microchips would allow even bacteria-size nanomachines to operate independently and use control algorithms similar to those used in regular drone swarms. Those independently operating, AI-controlled drones could be used as surgical tools, but the military might also be interested in that kind of technology.

Miniaturized technology can also run hardware-based, or "iron"-based, AI. If the microchips have three layers that send information through the system, the system can emulate the human brain.

Researchers can stack nanotechnical processors in piles like hamburgers. In that system, most of the miniature microchips act as the cortex, the center processor acts as the midbrain, and the lowest processor acts as the lowest area of the brain. The upper layer sends information through the system, and the system then sends an echo back through the middle level. The system emulates the human brain and makes the processor very effective for its size.



https://scitechdaily.com/from-theory-to-reality-a-groundbreaking-manifestation-of-interdependent-networks-in-a-physics-lab/


https://scitechdaily.com/mits-tiny-terahertz-receiver-preserves-iot-battery-life/


https://scitechdaily.com/quantum-ghosts-atoms-become-transparent-to-certain-frequencies-of-light/

Monday, May 1, 2023

The new game-changing technology makes information transport between neurons and non-organic microchips easier.

What type of computer is the most powerful in the world? The answer is a human brain that controls a quantum computer through a BCI (Brain-Computer Interface). In that kind of system, the brain interacts directly with the quantum systems. That kind of technology is being tested in laboratories, because BCI can control multilevel computing systems effectively.

Miniaturized microchips called "intelligent dust" can also make it possible for the AI to interact with every single neuron separately. That makes it possible to transfer new information to the neurons that form the OI (Organoid Intelligence). And the most powerful of the OI-based systems is the BCI, a system where the human brain interacts with microchips and microcomputers without limits.

The main problem with BCI is how the system separates commands from "white noise." We cannot perfectly control our thoughts, so the system must filter the data that is meant as a command from thoughts like "it's nice to get some cake." If the system cannot filter out meaningless input, it operates wrongly.
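
A minimal sketch of one way to separate intended commands from mental "white noise": an intent classifier with a reject option. The intent names, scores, and threshold are invented for illustration.

```python
REJECT_THRESHOLD = 0.9   # assumed value; a real system would calibrate this

def filter_intent(scored_intents: dict[str, float]) -> str | None:
    """Return the top intent only if the classifier is confident enough."""
    intent, score = max(scored_intents.items(), key=lambda kv: kv[1])
    return intent if score >= REJECT_THRESHOLD else None   # None = treat as noise

print(filter_intent({"move_arm": 0.97, "idle_thought": 0.03}))   # -> 'move_arm'
print(filter_intent({"move_arm": 0.55, "idle_thought": 0.45}))   # -> None (ignored)
```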

Even if the system uses simple models, like taking the EEG from Wernicke's area, it is revolutionary. But if the operator can use the entire brain for operating orders, that gives the interface a whole new depth.





So there are two types of BCI. 


1) In the simplest model, the user just gives general orders, like ordering a robot to move stones. In that model, the AI-controlled robot simply walks to the stones and moves them. The user gives only commands.


2) The more complicated model is a robot that interacts with the cortex all the time. In that model, the system uses electrical stimulation and EEG-reading systems to connect the robot with the senses, making it act as another body for the person.


The new type of innovation makes information exchange between neurons and atoms possible. The ability to transfer information between neurons and atoms could revolutionize computing. The new miniature microchips allow packing many more independently operating microchips into the circuits, and that kind of thing makes even virtual quantum systems possible.

In virtual quantum systems, multiple binary computers play the role of qubits. An AI- and TCP/IP-based system shares data across the data-handling lines, and each of those lines plays one state of the qubit. The AI then reconnects that data into a whole, and that kind of binary system can be more powerful than ever before.
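
A toy illustration of that "virtual qubit" idea (this is a classical simulation, not a real quantum computer): a small quantum state is simulated on an ordinary binary machine, its amplitudes are split across several data-handling "lines", each line processes its own share, and the coordinator recombines the results. The chunk count and gate choice are arbitrary.

```python
import numpy as np

n_qubits = 3
# Start in a uniform superposition over all 2**n basis states.
state = np.full(2**n_qubits, 1 / np.sqrt(2**n_qubits), dtype=complex)

def phase_gate_chunk(amplitudes, offset, target_qubit, phase):
    """Apply a phase gate to one chunk of the state vector.

    Each chunk needs only its own amplitudes plus its global offset, so the
    chunks could sit on separate machines ("lines") and be processed in parallel.
    """
    out = amplitudes.copy()
    for i in range(len(amplitudes)):
        if (offset + i) >> target_qubit & 1:        # basis states where the target qubit is 1
            out[i] *= np.exp(1j * phase)
    return out

chunks = np.split(state, 4)                          # hand one chunk to each "line"
offsets = [i * len(chunks[0]) for i in range(4)]
processed = [phase_gate_chunk(c, o, target_qubit=1, phase=np.pi / 2)
             for c, o in zip(chunks, offsets)]
state = np.concatenate(processed)                    # the coordinator recombines the result
print(np.round(state, 3))
```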

But the ability to transport information into an independently operating quantum circuit makes the most powerful computers possible. In those models, living neurons transport data to a quantum system that can operate independently. This type of system allows information to be transported in its original form from the OI (Organoid Intelligence) to the non-organic quantum system. And those kinds of systems make it possible to create hybrid computers that are even more powerful than we ever imagined.

Such a system allows information to be transported to a quantum system in its original form. Even as data storage, those systems are revolutionary: they can store EEG signals in their original form. The neurons can also transport information to the independently operating quantum system, which continues the data processing and then returns the answer to the neurons.

Independently operating networks that use neutral atoms for quantum computing might be more effective for that kind of purpose than ion-, electron-, or photon-based systems. Neutral atoms are more stable than ions, which means that sensors using neutral atoms are safer. Ions start to move in electric fields much more easily than neutral atoms, so neutral atoms used in quantum computing are more effective and less vulnerable than ion-based systems.


https://scitechdaily.com/encoding-breakthrough-unlocks-new-potential-in-neutral-atom-quantum-computing/


https://scitechdaily.com/from-theory-to-reality-a-groundbreaking-manifestation-of-interdependent-networks-in-a-physics-lab/

https://scitechdaily.com/mits-tiny-terahertz-receiver-preserves-iot-battery-life/




AI and new upgrades bring fusion power closer than ever.

"New research highlights how energetic particles can stabilize plasma in fusion reactors, a key step toward clean, limitless energy. Cr...