
Humans should be at the center of AI development.


"Experts advocate for human-centered AI, urging the design of technology that supports and enriches human life, rather than forcing humans to adapt to it. A new book featuring fifty experts from over twelve countries and disciplines explores practical ways to implement human-centered AI, addressing risks and proposing solutions across various contexts." (ScitechDaily, 50 Global Experts Warn: We Must Stop Technology-Driven AI)


AI is the ultimate tool for handling things that behave predictably. Things like planetary orbits and other mechanical processes that follow natural laws are easy for AI. AI can also feel human; it can even speak with a certain accent. And that is not very hard to program.

It just requires an accent wordbook: the AI can transform grammatically standard text into text with a certain accent, and then feed that data to the speech synthesizer. The accent mode follows the same rules as language-translation programs. The accent wordbook is also needed in the other direction, to normalize spoken commands into standard grammar so that the system understands them.
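The wordbook idea above can be sketched very simply: a lookup table that rewrites standard spellings into accent-flavored spellings before the text goes to the speech synthesizer. This is a minimal illustration, and every entry in the table is an invented example, not data from a real system.

```python
# A minimal sketch of an "accent wordbook": a lookup table that rewrites
# standard spellings into accented spellings before speech synthesis.
# All wordbook entries are invented examples.
ACCENT_WORDBOOK = {
    "thinking": "thinkin'",
    "going": "goin'",
}

def apply_accent(text: str, wordbook: dict) -> str:
    """Replace each word found in the wordbook with its accented form."""
    words = text.split()
    return " ".join(wordbook.get(w.lower(), w) for w in words)

print(apply_accent("I was thinking about going home", ACCENT_WORDBOOK))
# -> "I was thinkin' about goin' home"
```

A real accent mode would of course also need pronunciation rules, not just spelling substitutions, but the translation-table principle is the same.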

The problem with spoken commands is that most people don't speak grammatically correctly in everyday life. Computers require that the user gives commands in precisely the right way, which means the user must use precise grammar. In AI-based language models, the system might accept multiple phrasings for each command, but the user must still pick one of those choices. And even if the AI reacts to accents, the user must use an accent that is on the list.
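The point about multiple accepted phrasings can be shown with a small sketch: each command has a fixed list of recognized phrasings, and anything outside the list is rejected, forcing the user to rephrase. The intents and phrasings below are invented examples, not a real voice-assistant API.

```python
# A minimal sketch of command matching with several accepted phrasings
# per intent. Utterances outside the lists are rejected, which is why
# the user must choose one of the offered forms.
COMMANDS = {
    "lights_on": {"turn on the lights", "lights on", "switch the lights on"},
    "lights_off": {"turn off the lights", "lights off"},
}

def match_command(utterance: str):
    """Return the matching intent name, or None for unrecognized phrasing."""
    normalized = utterance.strip().lower()
    for intent, phrasings in COMMANDS.items():
        if normalized in phrasings:
            return intent
    return None  # the user must rephrase

print(match_command("Turn on the lights"))  # -> lights_on
print(match_command("make it bright"))      # -> None
```

Modern language models relax this by mapping free-form speech onto intents statistically, but the accepted-choices boundary still exists; it is just fuzzier.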

Users create most AI problems themselves. If we ask an AI to predict the moment of our death, we get what we asked for. When we search only for negative things like death and violence, the screen fills with those things. So is that kind of negative answer the user's fault or the AI's?

People also expect too much from these systems. They want the AI to write a thesis for them, and that kind of use causes ethical problems. In those cases, the AI acts like a ghostwriter, and that use is strictly prohibited. Presenting another's text as your own is called plagiarism.

AI is an excellent tool for collecting sources for a thesis, but the person should already know the topic for which the AI is used as a tool. And they should write their texts themselves; otherwise the text can contain many topical errors.

AI is a good tool for computer programming, but it's not so good when it must act as a therapist. If people want to get bad answers from an AI, and using it turns into masochism, that is their problem. If people want to generate Nazi-soldier images using AI, we must realize that there are already lots of authentic Nazi-soldier images on the net. So why should the AI make that kind of image?

Those things are problematic. AI can create many things, like the cyborg chicken above this text. But why should it recreate historical characters? I once tried to make an AI generate a photorealistic version of Vincent van Gogh's "Starry Night" painting, and the AI refused. There would be no problem with AI art if there were a mention that it was made using AI.

AI plagiarism is also easy to prevent. The AI must just keep records, or databases, of the material it has created. There, clients such as social media channels can check whether a text or image was created using AI, and then attach a mention like "made by AI". This kind of plagiarism detection is already in use in high schools, and has been for over 15 years. So can this type of plagiarism detection be used with AI?
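The record-keeping idea could work roughly like this sketch: the generator stores a fingerprint (here, a hash) of everything it creates, and a client later checks submitted content against those records. This is an assumption about how such a registry might look, not a description of any existing service.

```python
# A minimal sketch (hypothetical, not an existing service) of a registry
# where an AI records everything it generates, so that clients such as
# social media sites can check content against it.
import hashlib

class GenerationRegistry:
    def __init__(self):
        self._hashes = set()

    def record(self, content: bytes) -> None:
        """Called by the AI system for every text or image it produces."""
        self._hashes.add(hashlib.sha256(content).hexdigest())

    def was_generated(self, content: bytes) -> bool:
        """Called by a client to check whether content came from the AI."""
        return hashlib.sha256(content).hexdigest() in self._hashes

registry = GenerationRegistry()
registry.record(b"This essay was written by an AI.")
print(registry.was_generated(b"This essay was written by an AI."))  # True
print(registry.was_generated(b"A human wrote this one."))           # False
```

Note that an exact hash only catches verbatim copies; a practical system would need fuzzy matching or watermarking to survive small edits, which is where this gets hard in practice.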

https://scitechdaily.com/50-global-experts-warn-we-must-stop-technology-driven-ai/
