Comparing the Ability of AI and Humans to Recognize AI-Generated Images
As AI technology evolves, AI-generated images have grown increasingly realistic, challenging human perception. This study compares the ability of humans and AI models to identify AI-generated images. Participants classified 80 images as real or generated across four categories: animal, architecture, landscape, and vehicle. The results indicate that both humans and machines recognize real images more reliably than generated ones. Machines outperform humans at identifying real images, but individual machine accuracies vary more widely than individual human accuracies.

Introduction
The proliferation of AI image generation, fueled by rapid advances in AI technology, has gained traction across industries, alongside related applications such as chatbots. By December 2023, an estimated 77% of companies were exploring or using AI technology (Roller, 2023). Prominent AI image generation models such as Dall-E, Stable Diffusion, and Midjourney have emerged, raising concerns about potential copyright infringement and the deceptive potential of AI-generated images. A 2022 study found that participants achieved only 48.2% accuracy, roughly chance level, in distinguishing real from AI-synthesized faces (Nightingale, 2022).
This study compares the efficacy of AI-generated-image detectors and human judgment in distinguishing between real and realistic AI-generated images. Both humans and machines identify real images more reliably than generated ones, but machines achieve notably higher accuracy on authentic images.
Methods
The study used 80 images, split evenly between real and generated across four themes: animal, architecture, landscape, and vehicle. Real images were sourced from reputable news outlets such as CNN and BBC to ensure their authenticity. AI-generated images were produced with Google Gemini, a model released in December 2023. Participants classified these images as real or generated in randomized order. Respondents were recruited through Prolific, an online research-participation platform.
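To make the scoring procedure concrete, the sketch below shows one way a participant's 80 classifications could be scored against ground truth, broken down by label and by category. This is a minimal illustration under assumed data structures, not the study's actual analysis code; the function name and input formats are assumptions.

```python
# Minimal sketch of scoring one participant's responses against ground
# truth. Illustrative only; not the study's actual analysis code.
from collections import defaultdict

def score_responses(responses, ground_truth, categories):
    """responses and ground_truth map image_id -> "real" | "generated";
    categories maps image_id -> "animal" | "architecture" | "landscape" | "vehicle".
    Returns accuracy overall, per label, and per category."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image_id, truth in ground_truth.items():
        guess = responses.get(image_id)
        # Tally this answer toward the overall, per-label, and per-category bins.
        for key in ("overall", truth, categories[image_id]):
            total[key] += 1
            correct[key] += (guess == truth)
    return {key: correct[key] / total[key] for key in total}
```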

Additionally, the study evaluated four AI-generated-image detection tools, Illuminarty, SightEngine, Is it AI, and Hive Moderation, on their accuracy in distinguishing between real and generated images.
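Although each detection service exposes its own interface, the evaluation reduces to the loop sketched below. The `detector` callable is a hypothetical stand-in for any one of the four tools, assumed to return a probability that an image is AI-generated; the 0.5 decision threshold is likewise an assumption, not a documented setting of any of these services.

```python
# Hedged sketch of the detector evaluation loop. `detector` is a
# hypothetical stand-in for a tool's interface (Illuminarty, SightEngine,
# Is it AI, or Hive Moderation), assumed to return a probability in [0, 1]
# that the image is AI-generated.

def evaluate_detector(detector, labeled_images, threshold=0.5):
    """labeled_images: iterable of (image_path, label), label in {"real", "generated"}.
    Returns per-label accuracy for one detection tool."""
    correct = {"real": 0, "generated": 0}
    total = {"real": 0, "generated": 0}
    for path, label in labeled_images:
        # Classify as generated when the tool's score clears the threshold.
        prediction = "generated" if detector(path) >= threshold else "real"
        total[label] += 1
        correct[label] += (prediction == label)
    return {label: correct[label] / total[label] for label in total}
```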
Results
Survey
The survey results revealed a mean participant accuracy of 56.2%, with higher accuracy on real images (68.4%) than on generated ones (44.0%); because the image set was split evenly, the overall figure is simply the average of the two, (68.4% + 44.0%) / 2 = 56.2%. Accuracy also varied by category, with architecture highest at 59.7%.

AI-Generated Image Detectors
The AI tools achieved an overall accuracy of 71.3%, recognizing real images (96.9%) far more reliably than humans did. Given the balanced image set, this implies an accuracy of roughly 2 × 71.3% − 96.9% ≈ 45.7% on generated images, closely in line with human performance (44.0%).
Discussion
Variation in Machines' Performance
The AI tools exhibited wider performance variation than humans, particularly in identifying generated images. The four tools, Illuminarty, SightEngine, Is it AI, and Hive Moderation, displayed markedly different accuracies, suggesting differences in their underlying detection capabilities.
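One way to make this variability claim concrete is to compare the standard deviation of accuracies across tools with that across participants, as in the sketch below. The accuracy values shown are made-up placeholders for illustration, not figures reported by this study.

```python
# Illustrative only: the accuracy lists below are hypothetical
# placeholders, not results from this study. The point is the
# comparison of spread (standard deviation), not the values themselves.
from statistics import mean, stdev

tool_accuracies = [0.55, 0.65, 0.78, 0.87]           # hypothetical per-tool
participant_accuracies = [0.52, 0.55, 0.56, 0.58]    # hypothetical per-person

print(f"tools:        mean={mean(tool_accuracies):.2f}, sd={stdev(tool_accuracies):.2f}")
print(f"participants: mean={mean(participant_accuracies):.2f}, sd={stdev(participant_accuracies):.2f}")
```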

Humans' Perception of Reality
Human perception plays a vital role in distinguishing real from AI-generated images. While some generated images can fool human judgment by closely mimicking natural elements, contextual cues and background details often help humans correctly identify authentic images.
Limitations of Text-to-Image Models
One significant limitation of text-to-image generation models is their difficulty in capturing the intricate details and contextual information present in real images. These shortcomings can serve as cues that help humans distinguish generated images from real ones.