
AI requires control. But when we want to control something, we must have the right arguments to justify that control.


Non-democratic governments use the same algorithms that are used to track online fraudsters to search for people who resist those governments. AI-based programming tools can create new operating systems for missiles, and they can be used to create malware for hackers who work for those governments.

We are living in a time when a single person can hold ultimate power in their hands. The Internet and the media are turning into tools that can project that power. There is also the possibility that hackers will capture the homepages of news agencies; the person who does that does not need to be a government chief. An ordinary hacker can capture and change information on government homepages. The problem is that we are waking up to a situation in which, in some countries, the media exists to support the government.

Another thing is that we face a situation in which some hackers operate under governmental control, meaning they have official permission to do their work. In countries where censorship isolates people behind a national firewall that allows only internal communication, the position of a government hacker offers free use of the Internet, a luxury for people who cannot even watch Western music videos.

We see many arguments against AI, but the biggest and most visible arguments are not the things we should actually fear. We should be afraid of things like AI-controlled weapons, and we must understand that robots and AI democratize warfare. The biggest countries are not always the winners. Ukraine would have lost to Russia many times over without robots.

We don't understand that the same drone system that delivers pizza can be used to drop hand grenades on enemy positions. That technology is deadly in the wrong hands. But is the thing we call a "threat" really the danger that terrorists can send a drone to drop grenades in a public place, or is it that we can no longer predict the winner of a conflict as easily as before? If we support the wrong side, that causes problems.

AI is a game changer in warfare, and that is why we must control it. In the same way, we should start to control advanced manufacturing tools like 3D printers. 3D printers can make guns. Those guns may not have the same quality as Western military weapons, but criminals can still use them.

And when we see the quality of Russian military armament, we might think that a 3D printer can make a gun of the same quality as the guns the Russian military uses. But is that really why we resist this technology? Or is it that if we transfer all practical work to robots, we no longer have human henchmen?

In this case, we must say that it is not very satisfying to be the boss of robots. Robots are tools. They are machines, which means that yelling at a robot is not the same as yelling at a human henchman. Robots don't care what kind of social skills people have. Yelling at a robot is the same as yelling at a drill or a wrench.



Another thing, when we talk about AI and algorithms, is that the Russian, Chinese, Iranian, and other non-democratic governments use the same algorithms that are used to track online fraudsters to search for people who dare to resist the government.

Then we must realize that automated AI-based coding tools make it possible to create the ultimate tools for hacking. Those tools can be used to create computer viruses that take nuclear power plants under control. Professional nuclear security experts say it is impossible to take a nuclear power plant under remote control, because there is always a local manual switch.

That switch drops the control rods in the reactor. But we must understand that if that emergency shutdown switch is located in the reactor hall, and the protective water layer that absorbs nuclear radiation is lost, turning the reactor off becomes impossible. So the switch must be outside the reactor room, where the operator can still reach it. Even a small error in the drawings can cause the emergency system not to operate as it should.
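To make that point concrete, below is a minimal, purely illustrative sketch of that kind of shutdown logic. The sensor names, limits, and readings are assumptions, not a real plant design; the only point is that the trip decision should have paths that do not require anyone to enter the reactor hall.

```python
# Minimal, hypothetical sketch of an emergency-shutdown (SCRAM) decision.
# All names, limits, and readings are illustrative assumptions, not a real
# plant design. The point: the trip path should not depend on anyone being
# able to enter the reactor hall.

from dataclasses import dataclass

@dataclass
class PlantState:
    coolant_level_m: float      # protective water layer above the core
    dose_rate_msv_h: float      # radiation level in the reactor hall
    manual_scram_pressed: bool  # switch in the control room, outside the hall

# Illustrative safety limits.
MIN_COOLANT_LEVEL_M = 4.0
MAX_DOSE_RATE_MSV_H = 1.0

def should_scram(state: PlantState) -> bool:
    """Drop the control rods if any independent trip condition is met."""
    automatic_trip = (
        state.coolant_level_m < MIN_COOLANT_LEVEL_M
        or state.dose_rate_msv_h > MAX_DOSE_RATE_MSV_H
    )
    # The manual switch is a separate path: it works even if sensors fail,
    # and it is reachable without entering the reactor hall.
    return automatic_trip or state.manual_scram_pressed

if __name__ == "__main__":
    state = PlantState(coolant_level_m=2.5, dose_rate_msv_h=0.2,
                       manual_scram_pressed=False)
    print(should_scram(state))  # True: low coolant trips the reactor automatically
```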

And that is the thing that makes AI the ultimate tool. AI can control processes and check that the people responsible for verifying that everything is done right are actually doing their jobs. It can search for weaknesses in those drawings. But if its databases are corrupted, that can turn AI into the worst nightmare we have ever seen.
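One ordinary defence against that corruption is to verify the integrity of the reference data before the AI uses it. Below is a minimal sketch, assuming a file-based drawings database and a known-good SHA-256 checksum; the file and function names are hypothetical, and a real system would also use signatures and access control.

```python
# Minimal sketch: refuse to use a reference database whose checksum does not
# match a known-good value. File names and checksums here are assumptions.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_drawings_db(path: Path, expected_sha256: str) -> bytes:
    """Return the database contents only if the checksum matches."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"Database may be corrupted or tampered with: {actual}")
    return path.read_bytes()
```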

We must control AI development better, but the arguments people hear are the wrong ones. The worst cases are the free online AI applications that can generate any program that people dare to ask for. This kind of AI-based system can turn into a virus or malware generator capable of infecting any system in the world. In some scenarios, a hacker who doesn't even know what system they are inside can cause nuclear destruction or even start a nuclear war.

If a hacker accidentally slips into a nuclear command system and thinks it is some kind of game, that can cause a situation where the system opens fire with nuclear warheads. One possible scenario is that the hacker simply crosslinks a computer game to a nuclear command system. Or the hacker accidentally adjusts the speed of a centrifuge that separates nuclear material for use in nuclear power plants. In that case, the system could produce fuel that is enriched beyond specification, and that could lead to a reactor meltdown.
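A common defence against that kind of manipulation is that the control software never passes an operator or network command straight to the hardware, but validates it against fixed limits first. The sketch below is a generic illustration with made-up limits, not a description of any real centrifuge controller.

```python
# Minimal, hypothetical sketch of setpoint validation in industrial control
# software. The limits and rate constraint are illustrative assumptions,
# not a real equipment specification.

SPEED_MIN_RPM = 0.0
SPEED_MAX_RPM = 10_000.0
MAX_STEP_RPM = 200.0   # largest allowed change per command

def validated_setpoint(current_rpm: float, requested_rpm: float) -> float:
    """Reject out-of-range requests and limit how fast the speed may change."""
    if not (SPEED_MIN_RPM <= requested_rpm <= SPEED_MAX_RPM):
        raise ValueError(f"Requested speed {requested_rpm} rpm is outside safe limits")
    # Clamp the change so a single command cannot jump to an extreme value.
    step = max(-MAX_STEP_RPM, min(MAX_STEP_RPM, requested_rpm - current_rpm))
    return current_rpm + step
```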

AI is the ultimate tool for making life easier, but the same tool is the ultimate weapon in the wrong hands. The ultimate tool can turn into the ultimate enemy. Yet when we look at the arguments against AI, the excuse is not that AI can create the ultimate cyberweapons or that AI can control armies. The argument is that AI takes jobs from bosses and does those jobs better than humans.

As many times before, privacy and other things like legislation are used against AI. Rules, prohibitions, and similar measures are artificial tools, and they are very weak tools if the only argument is that following them guarantees people's privacy. Privacy and data security are the arguments that push people back to paper dictionaries and books, because information is supposedly more secure when a person cannot use automated translation programs.

The fact is that AI requires control, but the argument must be something other than that prohibiting AI development or the use of AI tools protects the position of the human boss. The fact is this: things like privacy are small things compared with next-generation AI that can create software automatically. Privacy is important, but how private can our lives really be? We can already see things like whether a person is under guardianship just by looking at their ID papers.

Things like working days in the office are always justified by appealing to social relationships. But how many words do you actually say to other people during your working day? When we face new things, we must realize that nothing is black and white. Some things always cause problems, and new things always cause resistance. And of course, somebody can turn a food delivery robot into a killer robot by equipping it with a machine gun.

Those delivery tools give someone access to a customer's home address. But in the same way, if we use a courier service to bring food to us, we must give out our home address, and there is always the possibility that the food courier is a drug addict. That is always a data-security risk. Yet we don't worry about those things, because there is a human on the other side. And maybe that is what makes AI frightening: AI is nothing we can punish. We cannot mirror how good we are against robots.

Maybe the threat we see when we talk about robot couriers and AI is that we lose something sacred: we lose the object that is lesser than we are. Robots are like dolls. We can say anything we want to a robot, and the robot is always our henchman. That is one of the things that makes AI frightening. We think of AI as a henchman, and then what happens when we lose a chess game to it?


https://artificialintelligenceandindividuals.blogspot.com/

