Starting today, you will no longer be able to look at any photo online without asking yourself, “Is this real?”
Since the release of AI image generators like DALL-E 2 and Midjourney, we have seen hundreds of photos of celebrities in unlikely situations circulating across the internet.
All of these images were fake. More alarmingly, they were all created in seconds from relatively simple text prompts.
For security professionals, the rise of AI-generated photos means analysts must apply greater diligence to every type of online investigation. From now on, examining any image means asking the question: real or fake? Every single time.
Here are some tricks for detecting fake AI-generated images that work most of the time (at least for now).
Because generating realistic-looking hands is a challenging task for AI, looking at the hands of people in a photo can help analysts determine whether an image is fake. Most AI image generators rely on patterns learned from pre-existing datasets. Hands, however, are complex, with unique shapes and contours that are difficult to simulate. As a result, AI-generated hands often look unnatural, with awkward positioning or unrealistic proportions.
[Figure: AI-generated hands often look unnatural, with awkward positioning or unrealistic proportions. Image created by DALL-E 2.]
Analysts can also look for inconsistencies in hand placement or lighting. AI-generated images often show unnaturally perfect symmetry, and the hands may appear too well lit compared with the rest of the image. By examining the hands in a photo, analysts can often identify telltale signs of AI generation, such as repeating patterns or unnaturally smooth textures.
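Some of these texture cues can even be screened for programmatically. Below is a minimal sketch, assuming Pillow and NumPy are installed, that flags images with an unusually high share of very smooth regions. The window size, variance threshold, and file name are illustrative assumptions, not calibrated values, and a high score is a hint rather than proof.

```python
# Minimal sketch: flag unnaturally smooth textures by measuring the
# fraction of image tiles with very low pixel variance.
import numpy as np
from PIL import Image

def smooth_fraction(path: str, window: int = 8, threshold: float = 20.0) -> float:
    """Return the fraction of tiles whose pixel variance falls below the threshold."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    h, w = gray.shape
    tiles = [
        gray[y : y + window, x : x + window]
        for y in range(0, h - window, window)
        for x in range(0, w - window, window)
    ]
    variances = np.array([t.var() for t in tiles])
    return float((variances < threshold).mean())

if __name__ == "__main__":
    # "suspect.jpg" is a placeholder path; a high fraction of very
    # smooth tiles is only one signal to weigh, not proof of a fake.
    print(f"smooth tiles: {smooth_fraction('suspect.jpg'):.1%}")
```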
Looking closely at background details such as reflections, people, signs, and billboards can help analysts determine the authenticity of an image. For example, reflections in a photo may appear distorted or inconsistent if they have been generated artificially. Similarly, people, signs, and billboards in the background of an AI-generated image may appear blurred or pixelated, as AI algorithms may prioritize the main subject of the image over the details in the background.
Moreover, analysts can look for patterns or repeating elements in the image. Once again, AI image generators rely on pre-existing datasets or templates, which can produce repeating patterns or textures that rarely occur in real photographs. By examining the background details of an image, analysts can often identify these patterns and inconsistencies, providing evidence that the image may be AI-generated.
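For repeating patterns specifically, a quick autocorrelation check can help. The sketch below, again assuming Pillow and NumPy, computes the image's autocorrelation from its power spectrum; a strong secondary peak away from zero lag suggests a periodic texture. The central-mask size and any score cutoff are assumptions for illustration.

```python
# Rough sketch: detect repeating textures via autocorrelation.
# Strong peaks away from zero lag suggest periodicity.
import numpy as np
from PIL import Image

def repetition_score(path: str) -> float:
    """Return the strongest autocorrelation peak outside the zero-lag region (0..1)."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    gray -= gray.mean()
    # Wiener-Khinchin: autocorrelation is the inverse FFT of the power spectrum.
    spectrum = np.fft.fft2(gray)
    autocorr = np.fft.ifft2(np.abs(spectrum) ** 2).real
    autocorr = np.fft.fftshift(autocorr)
    autocorr /= autocorr.max()  # zero-lag peak is now exactly 1.0
    cy, cx = np.array(autocorr.shape) // 2
    # Mask out the central peak, then measure the strongest remaining peak.
    autocorr[cy - 10 : cy + 10, cx - 10 : cx + 10] = 0.0
    return float(autocorr.max())

if __name__ == "__main__":
    # Scores near 1.0 indicate strong periodicity; treat as a hint only.
    print(f"repetition score: {repetition_score('suspect.jpg'):.2f}")
```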
Reverse image search is another method analysts use to determine whether an image is genuine or AI-generated. This can be accomplished with search engines such as Google, Bing, and Yandex, or with specialized tools like TinEye. Analysts simply upload the image they want to search, and the engine returns a list of web pages that contain the image or similar images. By examining the results, analysts can determine whether the image was generated by an AI tool: AI-generated images are likely to appear on websites or social media channels that promote or discuss AI, such as technology forums or machine learning research sites.
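This lookup can also be scripted when many images need triage. In the sketch below, the endpoint, authentication scheme, and response shape are all hypothetical placeholders standing in for whatever reverse-image-search API you have access to; consult your provider's documentation (TinEye, for instance, offers a commercial API with its own documented interface) for the real parameters.

```python
# Hedged sketch: automate a reverse image search against a provider API.
# Endpoint, auth header, and JSON shape are HYPOTHETICAL placeholders.
import requests

REVERSE_SEARCH_URL = "https://api.example-reverse-search.com/search"  # hypothetical
API_KEY = "your-api-key"  # hypothetical credential

def reverse_search(image_path: str) -> list[str]:
    """Upload an image and return URLs of pages where it (or a near match) appears."""
    with open(image_path, "rb") as f:
        response = requests.post(
            REVERSE_SEARCH_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # Hypothetical response shape: {"matches": [{"page_url": "..."}, ...]}
    return [m["page_url"] for m in response.json().get("matches", [])]

if __name__ == "__main__":
    for url in reverse_search("suspect.jpg"):
        print(url)  # AI-focused forums among the results are a useful signal
```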
Facial features are another frequent giveaway that you're looking at an AI-generated image. Eye contact, for example, is difficult for AI algorithms to simulate realistically, so AI-generated images often have an unnatural or inconsistent gaze. Analysts can also look for other signs of inconsistency, such as the quality of the image, the lighting, and the resolution. By scrutinizing the image in detail, analysts can identify discrepancies that may indicate the image is not genuine.
In addition to examining eye contact, analysts can also look for other indicators of an AI-generated image. These can include inconsistencies in hair and skin texture, which can be difficult for AI algorithms to replicate, as well as flaws in facial expressions, which may appear stiff or unnatural.
By verifying the source of an image, analysts can often determine whether they're looking at a genuine photograph. For example, if an image is claimed to have been taken by a particular photographer or at a particular location, analysts can investigate the image's metadata to verify its authenticity. Genuine photographs usually carry camera EXIF data (make, model, timestamps), while AI-generated files typically carry little or none. If the EXIF data reveals inconsistencies or discrepancies, this may indicate that the image is an AI-generated fake.
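Checking EXIF data is straightforward to automate. The minimal sketch below uses Pillow's getexif() to dump whatever tags a file carries; the file name is a placeholder, and an empty or sparse result is a useful, though not conclusive, signal.

```python
# Minimal sketch: dump EXIF metadata with Pillow. AI generators typically
# write little or no camera metadata, so a sparse result is a hint.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict[str, str]:
    """Return EXIF tags as a human-readable name -> value mapping."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, str(tag_id)): str(value) for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = dump_exif("suspect.jpg")
    if not tags:
        print("No EXIF data found - consistent with (but not proof of) a generated image.")
    for name, value in tags.items():
        print(f"{name}: {value}")  # check Make, Model, DateTime against the claimed source
```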
In addition to verifying the source of an image, analysts can also examine the context in which the image was shared. For example, if an image is shared on social media by a user with a history of sharing misleading or fake content, this may raise suspicions about the authenticity of the image. Similarly, if an image is shared as part of a broader disinformation campaign, analysts may be more likely to suspect that the image is an AI-generated fake. By examining the source and context of an image, analysts can build a more comprehensive understanding of its authenticity, helping them distinguish between real and fake images.
The increasing use of AI-generated images in various domains raises serious concerns about the potential misuse of these images for fraudulent or malicious purposes. As OSINT analysts and corporate security teams, it is crucial to be able to identify fake AI-generated images to avoid any adverse impact on the organizations we serve.
It is important to stay vigilant and keep up with the latest developments in this field to remain one step ahead of those who seek to use fake AI images for nefarious purposes. By following the tips in this article, you can improve your ability to spot these fakes and protect your organization from potential threats.