Midjourney, DALL-E, DeepAI: programs that create photorealistic images with artificial intelligence are fueling a flood of fakes on social media. Which pictures are real and which are not? Here are some tips for telling them apart.
Is Putin being arrested here? No, the image is a fake, recognizable by several image errors, such as a sixth finger or a dissolving helmet visor
It has never been so easy to create deceptively real images: an internet connection and a tool that works with artificial intelligence are all it takes. Within seconds, photorealistic images are created that many of us perceive as real. That is why they spread so quickly on social networks and are often used deliberately for disinformation.
Just a few of the recent examples that went viral: Vladimir Putin allegedly being arrested, Donald Trump also allegedly being arrested, or Elon Musk allegedly holding hands with Mary Barra, CEO of General Motors.
These are all AI images showing events that never happened. Photographers, too, have published alleged portrait photos that turned out to be AI-generated.
This alleged photo of Elon Musk and Mary Barra was generated by an AI
Earthquakes that never happened
Events such as alleged spectacular car chases or arrests of celebrities like Putin or Trump can be verified fairly quickly by consulting reputable media sources. Images of people who are less well known are more problematic, says AI expert Henry Ajder in an interview with DW.
“And it's not just generated images of people that can spread disinformation,” explains Ajder. “We've seen people create events that never happened, like earthquakes.”
This is what happened in the case of an alleged severe earthquake said to have shaken the Pacific Northwest in 2001. But this earthquake never happened; the images shared on Reddit are AI-generated.
In view of such images, it is becoming increasingly difficult to tell what really happened and what didn't. But just as to err is human, AI makes mistakes too.
For now, that is, because AI tools are evolving at a rapid pace. Currently (as of April 2023), programs such as Midjourney, DALL-E and DeepAI have particular trouble with images that show people. Here are six tips from the DW fact check team on how to spot such manipulation.
1. Zoom in and examine closely
Many AI-generated images appear real at first glance. The programs can create photorealistic images that often only reveal themselves as fakes on closer inspection. That's why the first tip is: take a close look. Look for the highest-resolution version of the image available and zoom in on the details. Inconsistencies, errors or duplicated elements that went unnoticed at first glance become visible in the enlargement.
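For technically inclined readers, the zoom can also be done programmatically. Below is a minimal Python sketch, assuming the Pillow imaging library; the file name "suspect.jpg" and the crop coordinates are illustrative placeholders. It crops a region of interest, such as a hand, and enlarges it without interpolation, so that artifacts are not smoothed away.

```python
# Minimal sketch: enlarge a detail of an image for close inspection.
# Assumes the Pillow library (pip install pillow); "suspect.jpg" and the
# crop coordinates are hypothetical placeholders for this example.
from PIL import Image

img = Image.open("suspect.jpg")

# Region of interest as (left, upper, right, lower) pixel coordinates,
# e.g. around a hand, where AI generators often make mistakes.
box = (400, 300, 600, 500)
detail = img.crop(box)

# Upscale the crop 4x with nearest-neighbour resampling so that
# artifacts stay sharp instead of being blurred by interpolation.
zoomed = detail.resize((detail.width * 4, detail.height * 4), Image.NEAREST)
zoomed.save("suspect_detail.png")
```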
2. Find the origin of the image
If you're unsure whether an image is real or generated by an AI, try to find out more about its origin. Sometimes other users share their findings in the comments below the image, which can lead to the source or the first posting of the image.
A reverse image search can also help: upload the image to tools such as Google reverse image search, TinEye or Yandex. This often leads to further information about the image and sometimes its origin. If the search results already include fact checks by reputable media, these usually provide clarification and context.
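If the image has already been published somewhere online, a reverse image search can also be launched directly from its web address. Here is a minimal Python sketch; the query-URL patterns for TinEye and Google Lens are assumptions based on their public search pages and may change over time.

```python
# Minimal sketch: open reverse image searches for an image that is already
# online. The query-URL formats below are assumptions based on the public
# search pages of TinEye and Google Lens and are not official APIs.
import webbrowser
from urllib.parse import quote

image_url = "https://example.com/suspect.jpg"  # hypothetical image URL

search_urls = [
    f"https://tineye.com/search?url={quote(image_url, safe='')}",
    f"https://lens.google.com/uploadbyurl?url={quote(image_url, safe='')}",
]

for url in search_urls:
    webbrowser.open(url)  # opens each search in the default browser
```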
3. Pay attention to body proportions
Are the body proportions of the people depicted correct? It is not uncommon for AI-generated images to have inconsistencies in proportion: hands can be too small or fingers too long. Or the head and feet don't match the rest of the body.
Take this fake, for example: in this picture, Vladimir Putin is supposedly kneeling before Xi Jinping. The rear shoe of the kneeling person alleged to be Putin is disproportionately large and wide, and the calf of the same leg appears elongated. The person's half-covered head is also very large and out of proportion to the rest of the body. More about this fake in our fact check.
4. Watch out for typical AI errors
Currently, the main source of errors in AI image programs such as Midjourney or DALL-E is the hands. Again and again, people in these images have a sixth finger, like the police officer to Putin's left in the picture at the top of this article. Or in this case:
Here, Pope Francis is supposedly posing in a designer jacket. The images went viral, even though the pope appears to have only four fingers in the right-hand image and unusually long fingers in the left-hand one. The photos are fake. Other common errors in AI images involve teeth (people have too many of them), strangely deformed glasses frames, or ears with unrealistic shapes, as in the fake image of Xi and Putin mentioned above. Reflective surfaces such as helmet visors also cause problems for AI programs; sometimes they seem to dissolve, as in the alleged arrest of Putin.
AI expert Henry Ajder warns, however: “The current version of Midjourney still makes errors like those in the pope image, but it is much better at generating hands than previous versions. The direction is clear: we won't be able to rely on the programs making such mistakes for much longer.”
The DW image analysis shows several anomalies in the viral photo of Putin's alleged kneeling before Xi: ears, shoe and hands are deformed and not authentic. These details strongly indicate manipulation by AI
5. Does the image look artificial and smooth?
Midjourney, in particular, creates many images that appear idealized, that is, too good to be true. Follow your gut feeling here: can such a perfect, aesthetic picture of flawless people really be real?
“The faces are too pure. The textiles that are shown are also too harmonious,” explains Andreas Dengel, Managing Director of the German Research Center for AI, in a DW interview. People's skin is often smooth and free from any irritation, and their hair and teeth are also immaculate.
In reality, this is usually not the case. Many images also have an artistic look that even professional photographers can hardly achieve in studio shoots and subsequent image processing.
AI programs apparently tend to create idealized images that look perfect and are meant to please people. This is precisely a weakness of these programs, because it is what makes some fakes recognizable to us.
6. Examine the image background
Sometimes the background of an image gives away the manipulation. Objects there can appear deformed, street lamps for example. In a few cases, AI programs clone people and objects and use them twice. It is also not uncommon for the background of AI images to be blurred. But even this blurring can contain errors, for example when the background is not merely out of focus but artificially blurred, as in this picture, which supposedly shows an upset Will Smith at the Oscars. This picture, too, is a fake.
Conclusion: Many AI-generated images can still be exposed as fakes with a little research. But as the technology improves, such errors are likely to become rarer.
Can AI detectors, such as those available on Hugging Face, help uncover such manipulation? From what we have seen, the detectors provide clues, but no more. The experts we interviewed advise against relying on them, as the tools are not yet mature: real photos are declared fake, and vice versa.
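For readers who nevertheless want to see what such a detector reports, here is a minimal sketch using the Hugging Face transformers library. The model ID below is just one example of a community-contributed detector and stands in for whichever model you choose; in line with the experts' caveat, its scores should be read as hints, not verdicts.

```python
# Minimal sketch: querying an AI-image detector hosted on Hugging Face.
# Assumes the transformers and Pillow packages (pip install transformers pillow);
# "suspect.jpg" is a hypothetical local file, and the model ID is just one
# example of a community detector. Treat the scores as clues, not proof:
# such detectors misclassify in both directions.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="umm-maybe/AI-image-detector",  # example/placeholder model choice
)

for result in detector("suspect.jpg"):
    print(f"{result['label']}: {result['score']:.2%}")
```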
The detector apps are currently not always able to reliably answer the question of what is real and what is not. It's a “technological race” with artificial intelligence, says Antonio Krüger, AI researcher at Saarland University, in a DW interview. And he adds: “I think we have to get used to the fact that you really can't trust any picture on the internet.”