AI hallucinations are a growing problem for the media. But why doesn't AI mark those deep fakes as "AI created"?
AI created the image above. AI can create deep fakes, and that fuels the discussion about AI-created hallucinations. The problem is that the system doesn't mark those images as AI-created.
Deep-fake images created with AI are a growing problem. An AI-created deep fake is no different from a Photoshop deep fake, but AI creates those images faster and puts the capability in the hands of people who lack Photoshop skills. And the main problem is that the AI will not mark those images as deep fakes.
There is another problem. Deep fakes and similar things did not seem to be a problem before social media and AI put them in front of large audiences. Some people say that the biggest problem with AI-created images is the creator of those deep fakes, not the deep fake image itself.
When we talk about deep fakes and other misuse of generative AI, we forget that there have always been students who copied engineering or other works during their studies. Those things are not new; AI simply brought that technology into the hands of almost every student.
Plagiarism and fakes did not seem to be a problem before AI came into public use. After that, if we follow the newspaper headlines, there have been only problems with AI. Yet creating images with AI is no different from publishing photographs or drawings.
When we write articles or other texts, or publish something ordinary, we don't need to follow the rules of drawing and writing contests. Outside those contexts, we are free to do anything we want, as long as we follow the law and good manners. But if we use deep fakes to make false news, we act immorally.
If we present a deep fake as reality while knowing it is fake, we lie. This is the problem with deep fakes. They can be used to denigrate people and to undermine the authenticity of images. Fakes slipped into surveillance-camera footage can be used to destroy ordinary people's reputations.
https://bigthink.com/the-future/can-we-stop-ai-hallucinations/