
Does that mean AI understands?



Understanding is an interesting thing. We might feel that we know what understanding means, so let's take an example. In most countries, cars must drive on the right side of the road. We know we must do that because the regulations say so. But when an AI keeps a car on the right side, does that mean the AI understands why it should drive there?

There are, of course, countries with left-side traffic. When the GPS tells the AI that the vehicle is in Great Britain, the AI knows that it should keep to the left side, because that country drives on the left. These kinds of things are interesting. The AI's databases record that traffic is left-sided in certain countries. But does the AI even know what a country or a state is?

The AI knows the GPS coordinates of its position. It compares those coordinates to its database and concludes that it is in Great Britain. If we asked the AI where it is, it might answer, "I'm in Great Britain". But if we then checked the code, we would notice one thing: to the AI, Great Britain is simply the area covered by a certain range of GPS coordinates. When somebody asks where the AI is, those coordinates are matched against the database.

The database returns the answer "Great Britain". That database might contain many hierarchical structures: country, region, city, city district, and even streets and street addresses. Building an AI that can report its accurate location is not as hard as people might think. The AI loads the table "Great Britain" when it crosses the border. Suppose it must then find an address like 10 Downing Street, the official address of the prime minister. First, the AI must find the city where that address is, so it downloads the table that covers the area around London. It drives to London and then replaces that table with the London database.
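As a minimal sketch of that coordinate-to-country lookup, here is a toy version in Python. The table of rectangular bounding boxes is an illustrative assumption; a real navigation system would use precise border polygons, not rectangles:

```python
# A toy country lookup: map a GPS fix to a country record.
# The bounding boxes and table layout are illustrative assumptions.

COUNTRY_TABLE = [
    # (name, driving_side, (min_lat, max_lat, min_lon, max_lon))
    ("Great Britain", "left",  (49.9, 60.9, -8.2, 1.8)),
    ("France",        "right", (41.3, 51.1, -5.1, 9.6)),
]

def locate(lat: float, lon: float) -> str | None:
    """Return the name of the country whose box contains the GPS point."""
    for name, _side, (lat0, lat1, lon0, lon1) in COUNTRY_TABLE:
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return name
    return None

def driving_side(country: str) -> str:
    """Look up which side of the road traffic uses in a country."""
    for name, side, _box in COUNTRY_TABLE:
        if name == country:
            return side
    raise KeyError(country)

if __name__ == "__main__":
    here = locate(51.5034, -0.1276)   # roughly central London
    print(here)                       # -> Great Britain
    print(driving_side(here))         # -> left
```

To the program, "Great Britain" is nothing but the label attached to a box of coordinates, which is exactly the point made above.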

Then it knows in which city district it can find that address, and it keeps swapping the tables for more and more accurate versions. If people watched that operation, the process would look a little like a zooming satellite image: at first the system uses large-area images, which then become more accurate and cover smaller areas. But if that AI drives a robot car, it would not use satellite images at all. It uses GPS points to find the address.
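A sketch of that table-swapping, with a hypothetical TABLES dictionary standing in for the downloadable databases; the levels and coordinates are illustrative only:

```python
# A toy hierarchical address resolver. Each level of the hierarchy is a
# separate table, loaded only when the level above has been resolved,
# which is the "zooming" described in the text.

TABLES = {
    "Great Britain": {"London": (51.5072, -0.1276)},              # country -> cities
    "London":        {"Westminster": (51.4975, -0.1357)},         # city -> districts
    "Westminster":   {"10 Downing Street": (51.5034, -0.1276)},   # district -> addresses
}

def resolve(path: list[str]) -> tuple[float, float]:
    """Walk the hierarchy, swapping in a finer table at each step."""
    table_name = path[0]
    coords = None
    for key in path[1:]:
        table = TABLES[table_name]   # load the current-level table
        coords = table[key]          # find the next level inside it
        table_name = key             # ...and descend into it
    return coords

if __name__ == "__main__":
    print(resolve(["Great Britain", "London", "Westminster", "10 Downing Street"]))
```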

But suppose we drove our car to the front of 10 Downing Street and asked where we are. An AI that is connected to the GPS, and whose camera perhaps sees the street plate, might say: "In front of 10 Downing Street, and that is the official home of the prime minister of Great Britain". The thing is that the AI would find that answer in its database. And with the right algorithm, it can tell us lots of things about 10 Downing Street and the prime minister.

It simply searches for things like Wikipedia pages on those topics and then transforms the text into speech. That means the AI does not know anything about what it reads. The AI can search the net by chaining together the information about the address. First, it might search for 10 Downing Street. There it finds the words "prime minister" and "home".

Then it places the data about the prime minister after the 10 Downing Street information, and searches for phrases like "current holder of that position". So the AI connects three databases, 10 Downing Street, the prime minister, and the personal data of the current prime minister, into a good and reasonable-looking whole. But the fact is that the computer does not understand what it says.
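A toy sketch of that chaining, with a hypothetical KNOWLEDGE store standing in for Wikipedia and the other databases the text mentions:

```python
# Start from an address record, follow its links to related records, and
# assemble the summaries into a sentence for a speech synthesizer. All
# records here are illustrative stand-ins for fetched web content.

KNOWLEDGE = {
    "10 Downing Street": {
        "summary": "10 Downing Street is the official home of the prime minister",
        "links": ["prime minister"],
    },
    "prime minister": {
        "summary": "the prime minister is the head of government",
        "links": ["current holder"],
    },
    "current holder": {
        "summary": "the office has a current holder",
        "links": [],
    },
}

def assemble(topic: str) -> str:
    """Follow links from record to record, concatenating the summaries."""
    parts = []
    while topic is not None:
        record = KNOWLEDGE[topic]
        parts.append(record["summary"])
        links = record["links"]
        topic = links[0] if links else None
    return ". ".join(parts) + "."

if __name__ == "__main__":
    print(assemble("10 Downing Street"))
```

The printed sentence reads sensibly, yet nothing in the program models what any of it means; the records are only strings joined by links.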

The situation is similar to a regular Western person reading something like Hebrew. We can read Hebrew if we have a phonetic transcription: if the text is written out in Western letters, we can read it very easily.

We can pronounce the words correctly, but without a translation we don't understand what we are saying. That is one thing we must realize when we talk about AI. The AI can connect things like Wikipedia pages and read them aloud. It can form a reasonable-looking whole. But does it understand? A person who drives on the streets knows what happens if they do otherwise: breaking the regulations causes problems. That knowledge of consequences is the thing that we call understanding.
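A toy transliteration table makes the analogy concrete. The mapping below is heavily simplified (real Hebrew pronunciation needs vowel points), but it shows how a mechanical substitution can produce readable sound without any model of meaning:

```python
# A toy transliteration table: a few Hebrew letters to rough Latin phonetics.
# The point is that the mapping yields pronounceable text with zero meaning
# to a reader who does not know the language.

HEBREW_TO_LATIN = {
    "ש": "sh", "ל": "l", "ו": "o", "ם": "m",
}

def transliterate(text: str) -> str:
    """Replace each Hebrew letter with its phonetic Latin counterpart."""
    return "".join(HEBREW_TO_LATIN.get(ch, ch) for ch in text)

if __name__ == "__main__":
    print(transliterate("שלום"))   # -> "shlom": pronounceable, meaning unknown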


https://artificialintelligenceandindividuals.blogspot.com/
