When we create AI, we face one fact: AI is much more than some kind of killer robot. AI is the next big step in the world of learning neural networks, and that is something we must accept. The reason people develop technical equipment is that technology makes our lives easier.
And it makes our lives safer, if the systems are built as they should be and act as they should. The problem with the people who want to limit or stop AI development is that a week later those same people announce that they are starting their own AI project to compete with the existing AI systems.
ChatGPT is one of the most advanced AIs in the world. The thing that brings ChatGPT close to what is called a "universal AI" is that the system can, at least in theory, connect itself with other software, like CAD programs. It can search for data on the Internet, and then use that data to create CAD drawings and plans for CAM (Computer-Aided Manufacturing) platforms.
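To make that idea concrete, here is a minimal sketch of such a pipeline. Everything in it is hypothetical: the function names and the fetch-generate-export steps are stand-ins for whatever interfaces a real integration would use; none of this is a real ChatGPT, CAD, or CAM API.

```python
# Hypothetical pipeline: language model -> web data -> CAD model -> CAM plan.
# All functions are illustrative stubs, not real APIs.

def fetch_reference_data(query: str) -> dict:
    """Pretend to search the Internet for design constraints (stubbed)."""
    return {"query": query, "max_length_mm": 120, "material": "aluminium"}

def generate_cad_model(spec: dict) -> dict:
    """Pretend the AI turns a text spec plus web data into CAD geometry."""
    return {"part": "bracket", "length_mm": spec["max_length_mm"], "holes": 4}

def export_to_cam(model: dict) -> str:
    """Pretend to translate the CAD model into a CAM toolpath description."""
    return f"toolpath for {model['part']}: drill {model['holes']} holes"

if __name__ == "__main__":
    spec = fetch_reference_data("mounting bracket for a small motor")
    model = generate_cad_model(spec)
    print(export_to_cam(model))
```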
When we say that ChatGPT is almost a universal AI, we must realize that ChatGPT can do many things, but it cannot control robots. There is, however, a possibility that if a digital twin of ChatGPT is made in which those limitations are removed, that kind of system could take control of robots. The point is that a ChatGPT-type AI requires lots of data to develop further.
The data that users deliver to ChatGPT is used to develop and test its digital twin. I don't know how the ChatGPT developers work, but theoretically the ideal AI development system has three cores. The first is the user-interacting core, which gathers information for the development work. The second is the digital twin that the developers manipulate.
And the third core is the control system that compares the user-interacting core and the development core, or their program code. That third core is the code warehouse from which the system can return to an error-free state if some kind of malicious code appears. If there are unauthorized changes in the code that makes up the AI, the system reports them to its supervisors. The third core is also used to restore the digital twin if harmful code is found.
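As a rough illustration of the third core's role, here is a minimal sketch, assuming each core is just a directory of source files. The directory names, hashing, comparison, and restore steps are simplified placeholders, not a description of how any real AI vendor works.

```python
# Sketch of the three-core idea: a user-facing core, a development "digital twin",
# and a control core that holds known-good reference hashes and a clean copy
# for rollback. Purely illustrative; the layout and checks are hypothetical.

import hashlib
import shutil
from pathlib import Path

def fingerprint(core_dir: Path) -> dict:
    """Hash every source file in a core so unauthorized changes stand out."""
    return {
        str(p.relative_to(core_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(core_dir.rglob("*.py"))
    }

def detect_tampering(core_dir: Path, reference: dict) -> list:
    """Return the files whose hashes differ from the control core's reference."""
    current = fingerprint(core_dir)
    return [name for name, digest in current.items() if reference.get(name) != digest]

def restore_core(core_dir: Path, clean_copy: Path) -> None:
    """Roll the core back to the error-free copy stored in the control core."""
    shutil.rmtree(core_dir)
    shutil.copytree(clean_copy, core_dir)

if __name__ == "__main__":
    control_store = Path("control_core/clean_digital_twin")  # hypothetical paths
    twin = Path("digital_twin")
    reference = fingerprint(control_store)
    changed = detect_tampering(twin, reference)
    if changed:
        print("Unauthorized changes detected:", changed)  # "tell the supervisors"
        restore_core(twin, control_store)
```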
But there is a big difference between an AI that operates other software in the digital world and an AI that operates physical robots at street level. Many unexpected things can happen on the streets. When researchers program things like fuzzy logic or uncertainty into a system, they program multiple precise cases, and that requires lots of program code.
In the same way, machines can react only to situations that are programmed into their code. This is why unexpected things are dangerous. There are lots of things that happen in everyday life that we don't even notice, but if those things are not included in the program code, they can cause a catastrophe.
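To show why covering uncertainty with hand-written cases gets heavy, here is a minimal fuzzy-logic-style sketch for a single decision: how hard a street robot should brake given the distance to an obstacle. The membership ranges and the two rules are invented for illustration; a real controller would need many more variables, rules, and tuning.

```python
# Minimal fuzzy-style braking decision for a street robot (illustrative only).
# The membership functions and rules are invented examples.

def near(distance_m: float) -> float:
    """Degree to which the obstacle counts as 'near' (1.0 at 0 m, 0.0 at 5 m)."""
    return max(0.0, min(1.0, (5.0 - distance_m) / 5.0))

def far(distance_m: float) -> float:
    """Degree to which the obstacle counts as 'far' (complement of 'near')."""
    return 1.0 - near(distance_m)

def braking_force(distance_m: float) -> float:
    """Blend two rules: IF near THEN brake hard (1.0), IF far THEN brake lightly (0.1)."""
    w_near, w_far = near(distance_m), far(distance_m)
    return (w_near * 1.0 + w_far * 0.1) / (w_near + w_far)

if __name__ == "__main__":
    for d in (0.5, 2.0, 4.5, 6.0):
        print(f"obstacle at {d} m -> braking force {braking_force(d):.2f}")
```

Even this toy example needs explicit membership functions and rules for one sensor and one action; covering every everyday surprise on a real street multiplies that code many times over.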
https://scitechdaily.com/mastering-uncertainty-an-effective-approach-to-training-machines-for-real-world-situations/