The recent surge in AI-generated images has raised concerns about whether real photographs can still be distinguished from synthetic ones, highlighting the need for more advanced detection tools. Researchers in Italy analyzed various AI models designed to identify fake images and found them broadly effective, but the study also revealed an ongoing “arms race” between image generators and detection methods. The findings, published in the May-June issue of IEEE Security & Privacy, underscore the importance of developing more sophisticated tools that keep pace with evolving generative AI.
The researchers identified two types of clues that hint at whether an image is AI-generated: “high-level” artifacts, or defects, that are obvious to the human eye, and “low-level” artifacts, subtle traces unique to the generator that created the image. The study tested 13 detection models against thousands of images and found that they were generally effective at spotting the defects and generators they had been trained to find. The models could also flag some AI-generated images from generators they weren’t specifically trained on, highlighting the value of combining a variety of models to detect fake images.
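To make the idea of “low-level” artifacts concrete, here is a minimal, hypothetical sketch of fingerprint-style detection: it extracts a high-frequency noise residual from an image and correlates it with a reference residual from a known generator. This is illustrative only and is not the method used in the study; the function names are invented, and real detectors use learned denoisers rather than the simple box filter assumed here.

```python
import numpy as np

def noise_residual(img):
    """Estimate the high-frequency noise residual of a grayscale image
    by subtracting a local-mean (3x3 box blur) version. A stand-in for
    the learned denoisers real detectors use."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 3x3 box blur computed from shifted views of the padded image
    blur = sum(
        padded[1 + dy : h + 1 + dy, 1 + dx : w + 1 + dx]
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
    ) / 9.0
    return img - blur

def fingerprint_similarity(residual, reference):
    """Normalized cross-correlation between a residual and a reference
    generator fingerprint; values near 1 suggest the same generator."""
    a = residual - residual.mean()
    b = reference - reference.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0
```

In this sketch, an image carrying a generator’s characteristic noise pattern would score a noticeably higher similarity against that generator’s reference fingerprint than an unrelated image would, which is the intuition behind low-level detection.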
Luisa Verdoliva, the study’s lead researcher, notes that human discretion remains key to detecting fake images and emphasizes the importance of seeking information from reputable sources. The findings carry significant implications for the development of fake-image detection tools and underscore the need for continued research in this area.