
Can we control the AI anyway?

"An in-depth examination by Dr. Yampolskiy reveals no current proof that AI can be controlled safely, leading to a call for a halt in AI development until safety can be assured. His upcoming book discusses the existential risks and the critical need for enhanced AI safety measures. Credit: SciTechDaily.com" (ScitechDaily, Risk of Existential Catastrophe: There Is No Proof That AI Can Be Controlled)


Lab-trained AI makes mistakes, and the reason for those mistakes lies in the data used in the laboratory. In a laboratory environment, everything is well documented and clean. In the real world, dirt and lighting conditions are far less controlled than in the lab.

We face the same problem with humans who are trained only in school. Schools are like laboratory environments: there is no hurry, there is always space around the work, everything is dry, and there are no outsiders or bystanders in danger.

When a person moves to real work, there are always time limits, and outdoors there may be icy ground, slippery surfaces, and other obstacles. So the lab environment differs from real-life situations, and the same thing that makes humans err also causes the AI's mistakes.

The AI works best when everything is well documented. When AI uses pre-processed datasets, the data has been analyzed and sorted by highly trained professionals. But when the AI pulls data from the open net or from sensors, the data it gets is not so-called sterile: it is poorly documented, and the data mass the AI must sift through when selecting data for its solutions is far larger.
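A toy sketch of this gap: a classifier trained on clean, well-separated "laboratory" data loses accuracy as soon as the test data carries real-world noise. The clusters, the noise level, and the nearest-centroid rule are all illustrative assumptions, not anything from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Laboratory" data: two clean, well-separated clusters.
train_a = rng.normal(loc=0.0, scale=0.3, size=(200, 2))
train_b = rng.normal(loc=2.0, scale=0.3, size=(200, 2))

# Nearest-centroid classifier fitted on the clean lab data.
centroid_a = train_a.mean(axis=0)
centroid_b = train_b.mean(axis=0)

def classify(points):
    """Label each point by its nearest class centroid (0 = a, 1 = b)."""
    d_a = np.linalg.norm(points - centroid_a, axis=1)
    d_b = np.linalg.norm(points - centroid_b, axis=1)
    return (d_b < d_a).astype(int)

def accuracy(noise_scale):
    """Evaluate on fresh data degraded by 'real-world' sensor noise."""
    test_a = rng.normal(0.0, 0.3, (500, 2)) + rng.normal(0, noise_scale, (500, 2))
    test_b = rng.normal(2.0, 0.3, (500, 2)) + rng.normal(0, noise_scale, (500, 2))
    preds = np.concatenate([classify(test_a), classify(test_b)])
    labels = np.concatenate([np.zeros(500), np.ones(500)])
    return (preds == labels).mean()

print(f"clean test accuracy: {accuracy(0.0):.2f}")   # lab conditions
print(f"noisy test accuracy: {accuracy(1.5):.2f}")   # dirt, bad lighting
```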

What is creative AI? Creative AI doesn't create information from nowhere. It sorts existing data into a new order, or it reconnects different data sources. And that is what makes it a so-called learning, or cognitive, tool.
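A minimal illustration of creativity as recombination: nothing new is invented, existing fragments are only reordered and reconnected. The word lists and the pairing rule below are made up for this sketch.

```python
import itertools

source_a = ["autonomous", "flexible", "quantum"]
source_b = ["sensor", "submarine", "algorithm"]

# Reconnect two data sources into combinations neither source contains alone.
combinations = [f"{a} {b}" for a, b in itertools.product(source_a, source_b)]

# Sort the same data into a new order, another "creative" operation.
print(sorted(combinations, key=len))
```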

In machine learning, the cognitive AI connects data from sensors to static datasets, and then that tool builds new models or action profiles by following certain parameters. The system stores the best results in its database, and that becomes the new model for the operation.
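The loop can be sketched roughly like this: generate candidate action profiles, score them against the sensor data and the static dataset, and store the winner as the new model. The scoring function, the random search, and the parameter ranges here are illustrative assumptions.

```python
import random

def score(profile, sensor_data, static_data):
    """Hypothetical fit: how close a candidate profile sits to both
    the live sensor readings and the static reference data."""
    return -sum(abs(p - s) for p, s in zip(profile, sensor_data)) \
           - sum(abs(p - r) for p, r in zip(profile, static_data))

def learn_profile(sensor_data, static_data, candidates=1000):
    """Generate candidate profiles, keep the best one as the new model."""
    best, best_score = None, float("-inf")
    for _ in range(candidates):
        profile = [random.uniform(0, 10) for _ in sensor_data]
        s = score(profile, sensor_data, static_data)
        if s > best_score:
            best, best_score = profile, s
    return best

model_database = {}  # the stored "new model for the operation"
model_database["runway_approach"] = learn_profile(
    sensor_data=[4.2, 7.1, 1.3], static_data=[4.0, 7.0, 1.5])
print(model_database)
```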

Fuzzy logic means that the program contains certain static points, and the system gets its variables from sensors. On airfields, things like runways and taxi routes are static data; aircraft, ground vehicles, and their positions are the variables.

The system detects whether there is a dangerous situation on some landing route, and then it simply orders the other planes to positions that programmers preset for the system. The idea of this kind of so-called pseudo-intelligence is that a certain number of airplanes fits in a holding pattern, and that pattern has multiple layers.
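A rough sketch of that pseudo-intelligence, assuming made-up coordinates, a made-up safety radius, and made-up holding altitudes: the static airfield data is hard-coded, the vehicle positions arrive as variables, and a detected conflict triggers the preset holding slots.

```python
RUNWAY_THRESHOLD = (0.0, 0.0)                 # static data, fixed in the program
HOLDING_LAYERS_FT = [3000, 4000, 5000, 6000]  # preset holding-stack layers

def landing_route_blocked(vehicles, safety_radius=0.5):
    """Variable data from sensors: is anything too close to the threshold?"""
    return any(
        ((x - RUNWAY_THRESHOLD[0])**2 + (y - RUNWAY_THRESHOLD[1])**2) ** 0.5
        < safety_radius
        for x, y in vehicles
    )

def assign_holding(aircraft_ids):
    """Order waiting aircraft into the preset layers, lowest first."""
    return dict(zip(aircraft_ids, HOLDING_LAYERS_FT))

ground_vehicles = [(0.2, 0.1), (3.0, 4.0)]    # sensor-fed positions
if landing_route_blocked(ground_vehicles):
    print(assign_holding(["AY101", "DL202", "BA303"]))
```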


"A study reveals AI’s struggle with tissue contamination in medical diagnostics, a problem easily managed by human pathologists, underscoring the importance of human expertise in healthcare despite advancements in AI technology." (ScitechDaily, A Reality Check – When Lab-Trained AI Meets the Real World, “Mistakes Can Happen”)



In the case of an emergency, the other aircraft dodge the plane that has problems. In that situation there are sending and receiving holding patterns.


Certain decision points determine whether it is safer to continue the landing or pull up. In an emergency, the idea is that the other aircraft turn sideways, and when one moves to another holding pattern, all planes in that pattern pull up or turn away from the incoming aircraft in the same way if they are at the same level or risk position as the dodging aircraft.

Because all aircraft turn like ballet dancers, the chance that planes fly into each other is minimized. The holding pattern that receives the other planes transfers them upward in order: the topmost aircraft pulls up first. This logic minimizes sideways movement and removes the possibility that some plane ends up on a collision course from above.
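The receiving-stack logic might look like this in miniature: when a dodging aircraft enters, everyone climbs in strict top-down order, so no plane ever climbs into another from below. The altitudes and the climb step are illustrative assumptions.

```python
def make_room(stack, step_ft=1000):
    """stack: {callsign: altitude_ft}. Climb the topmost aircraft first."""
    moves = []
    for callsign, alt in sorted(stack.items(), key=lambda kv: -kv[1]):
        stack[callsign] = alt + step_ft
        moves.append((callsign, alt, alt + step_ft))
    return moves  # executed top-down: no sideways shuffling needed

receiving_stack = {"AY101": 5000, "DL202": 4000, "BA303": 3000}
for callsign, old, new in make_room(receiving_stack):
    print(f"{callsign}: climb {old} -> {new} ft")
```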

So can we ever control the AI? The AI itself can live on multiple servers all around the world. That is called decentralized data processing. In a decentralized model, the data that makes up the AI is in multiple locations, and those pieces connect into a whole by using certain markers. The decentralized data processing model is inherited from the internet and ARPANET.

The system involves multiple central computers or servers in different locations. That protects the system against local damage and guarantees its operational ability even in a nuclear attack, but that kind of system is vulnerable to computer viruses. The problem is that shutting down one server will not end the AI's task. The AI can write itself into the RAM of computers and other devices.
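A small sketch of why one shutdown doesn't stop such a system: every piece of the model is replicated on several nodes, and the markers (here, shard IDs) let the whole be reassembled from whichever nodes are still alive. Node names and shard contents are invented for the example.

```python
nodes = {
    "helsinki": {0: "shard-0", 1: "shard-1"},
    "denver":   {1: "shard-1", 2: "shard-2"},
    "osaka":    {2: "shard-2", 0: "shard-0"},
}

def reassemble(nodes, wanted=(0, 1, 2)):
    """Collect every wanted shard from any node that still holds it."""
    found = {}
    for shards in nodes.values():
        for shard_id, data in shards.items():
            found.setdefault(shard_id, data)
    return [found[i] for i in wanted if i in found]

print(reassemble(nodes))   # full model, all three nodes up
del nodes["helsinki"]      # one server is shut down...
print(reassemble(nodes))   # ...and the model is still complete
```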

The way the AI interacts is what makes it dangerous. The language model itself is not dangerous, but it creates so-called sub-algorithms that can interact with things like robots. So the language model creates a customized computer program for every situation. When an AI-based antivirus operates, it searches WWW-scale virus databases and then creates algorithms that destroy the virus.
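The sub-algorithm idea in miniature: a controller looks a detected signature up in a toy virus database and assembles a situation-specific removal routine from small building blocks. The signatures, actions, and database are assumptions for this sketch; no real antivirus API is used.

```python
VIRUS_DB = {
    "worm.x":   ["kill_process", "delete_file", "patch_service"],
    "trojan.y": ["quarantine_file", "revoke_credentials"],
}

ACTIONS = {
    "kill_process":       lambda t: print(f"killing process of {t}"),
    "delete_file":        lambda t: print(f"deleting {t} payload"),
    "patch_service":      lambda t: print(f"patching service hit by {t}"),
    "quarantine_file":    lambda t: print(f"quarantining {t}"),
    "revoke_credentials": lambda t: print(f"revoking credentials for {t}"),
}

def build_removal_routine(signature):
    """Assemble a customized routine for this particular threat."""
    steps = VIRUS_DB.get(signature, [])
    def routine():
        for step in steps:
            ACTIONS[step](signature)
    return routine

build_removal_routine("worm.x")()   # run the generated sub-algorithm
```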

The problem is that the AI makes mistakes. If its observation tools are not what they should be, that can trigger a destructive process. The most problematic thing about AI is that it is superior in weapons control. A weapon's purpose in war is to destroy the enemy, so the AI that controls weapons must be controlled by friendly forces, and the opponent must not get access to that tool.

Creative AI can make unpredicted moves, and that makes it dangerous. Using creative AI in things like cruise missiles and other equipment helps them reach their targets, but there are also risks. The "Orca" is the first public large-scale AUV (Autonomous Underwater Vehicle). That small submarine can perform the same missions as manned submarines.

There is the possibility that in a crisis the AUV overreacts to some threat. The system can interpret things like sea animals or magma eruptions as an attack, and then the submarine attacks its targets. The system works like this: when the international situation tightens, the submarine switches to the "yellow" state. That means it will make counter-attacks, and in that state the system can attack unknown vehicles.
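That readiness logic can be sketched as a tiny state machine. The states, the contact classifier, and its categories are illustrative assumptions; the point is how a misclassified whale becomes a target once the state turns yellow.

```python
from enum import Enum

class Readiness(Enum):
    GREEN = "observe only"
    YELLOW = "counter-attack unknown contacts"
    RED = "weapons free"

def classify_contact(signature):
    """A toy stand-in for the AUV's sensor processing."""
    known_threats = {"torpedo", "hostile submarine"}
    known_friendly = {"friendly submarine"}
    if signature in known_threats:
        return "threat"
    if signature in known_friendly:
        return "friendly"
    return "unknown"   # whales and magma eruptions land here

def react(state, signature):
    kind = classify_contact(signature)
    if state is Readiness.GREEN:
        return f"log {signature} ({kind})"
    if state is Readiness.YELLOW and kind != "friendly":
        return f"engage {signature}!"   # the overreaction risk
    return f"track {signature}"

print(react(Readiness.GREEN, "whale"))    # logged, nothing happens
print(react(Readiness.YELLOW, "whale"))   # engaged: the false positive
```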


https://scitechdaily.com/a-reality-check-when-lab-trained-ai-meets-the-real-world-mistakes-can-happen/


https://scitechdaily.com/risk-of-existential-catastrophe-there-is-no-proof-that-ai-can-be-controlled/
