Sometimes when we use AI, we get answers that have no logic at all, answers that simply make no sense. That makes researchers think about how to make AI more trustworthy, or rather, how to give AI the ability to produce more correct answers.
When we ask about something like mathematics, the AI is a tool that gives the right answers. If we ask what the Pythagorean theorem is, or anything else where the AI can pick precise information out of a simple question, it is nearly invincible. Likewise, if we want the AI to calculate the area of a right triangle and we give it the lengths of the legs (the catheti) along with the order to use the Pythagorean theorem, it will not make mistakes. But when we ask something else, we run into the fact that the AI does not actually think.
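As a small illustration of the kind of exact, well-defined task where the machine cannot go wrong, here is a minimal Python sketch (the function name and example values are mine, not from any particular system) that takes the two legs of a right triangle, applies the Pythagorean theorem to get the hypotenuse, and also returns the area:

```python
import math

def right_triangle(a: float, b: float) -> tuple[float, float]:
    """Given the two legs (catheti) a and b of a right triangle,
    return the hypotenuse and the area."""
    hypotenuse = math.sqrt(a**2 + b**2)  # Pythagorean theorem: c = sqrt(a^2 + b^2)
    area = a * b / 2                     # area of a right triangle from its legs
    return hypotenuse, area

print(right_triangle(3.0, 4.0))  # (5.0, 6.0)
```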
Then we must ask: what is the difference between connecting data from different sources and deep knowledge? Deep knowledge means that the system knows what the words mean. We should realize that when we see familiar words, words we know and perhaps use every day, we know what they mean. But for a computer, those things are not so clear. When we think about the word "grass", we all know what that word means.
But if we must describe what the word "grass" means to a computer, we can see that it is not as easy as we think. Grass is green in summer and brown in autumn, but then we must explain what green, autumn, and summer mean. In practice, the AI can react in the right way if we show it images that portray grass: it can detect that an image matches the word "grass".
The image acts like a trigger when there is a match between it and some database entry. But then we must realize that the word "grass" still does not mean anything to the computer. If we go into the database that contains the word "grass" and change it to "house", the AI will say that an image portraying grass portrays a house. This kind of system is an extended version of face recognition, where multiple images are stored in a database.
Those databases hold numbers and images. When an image matches, it activates a certain number, and that number points to a database entry that contains a certain word. "Grass" might have the serial number 146, and if an image matches, the computer sends a query to table number 146. This means the computer, or the AI, knows nothing about the meaning of the word. It just outputs the word that is stored on that card.
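Here is a minimal Python sketch of that lookup idea. The serial number 146 comes from the text above; the "fingerprint" strings, table names, and the exact-match step are invented for illustration (a real classifier uses learned features, not an exact lookup), but the point is the same: swap the word on the card and the same image suddenly "portrays" something else.

```python
# Table mapping serial numbers to words (the "cards").
label_table = {146: "grass", 147: "house"}

# Table mapping stored image fingerprints to serial numbers.
image_index = {"fingerprint_of_grass_photo": 146}

def recognize(fingerprint: str) -> str:
    """Return the word stored for the matching image, or 'unknown'."""
    serial = image_index.get(fingerprint)      # image match -> number
    return label_table.get(serial, "unknown")  # number -> word on the card

print(recognize("fingerprint_of_grass_photo"))  # grass

# Change the word on "card" 146 and the same image now "portrays" a house:
label_table[146] = "house"
print(recognize("fingerprint_of_grass_photo"))  # house
```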
And that is the Achilles heel of AI. The AI gives its best results with exact, well-sorted information, as in mathematics, but it has problems with spoken words. Dialect speech is far from the literary language, and that limits the use of voice for giving commands to such a system. Still, it is possible to make AI-based systems that can follow orders given in dialect.
Such a system only requires information about the user's dialect, plus a dialect wordbook that it uses to transform dialect words into the correct literary language. The system can then translate those words into English, which is used to give commands to the system; commands given in languages other than English are translated into English before being sent to the control units.
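A minimal Python sketch of that pipeline, assuming a simple word-for-word dialect wordbook and a fixed command table (all the dialect entries, phrases, and command names here are invented for illustration, not taken from any real voice-control product):

```python
# Step 1: a "dialect wordbook" that maps dialect words to literary language.
dialect_wordbook = {
    "gonna": "going to",
    "lemme": "let me",
    "turn't": "turn",
}

# Step 2: a table that maps normalized English phrases to control-unit commands.
command_map = {
    "turn on the lights": "CMD_LIGHTS_ON",
    "turn off the lights": "CMD_LIGHTS_OFF",
}

def normalize(utterance: str) -> str:
    """Replace dialect words with their literary-language equivalents."""
    words = [dialect_wordbook.get(w, w) for w in utterance.lower().split()]
    return " ".join(words)

def to_command(utterance: str) -> str:
    """Normalize the utterance, then look up the command sent to the control unit."""
    return command_map.get(normalize(utterance), "CMD_UNKNOWN")

print(to_command("turn't on the lights"))  # CMD_LIGHTS_ON
```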
https://www.quantamagazine.org/how-embeddings-encode-what-words-mean-sort-of-20240918/