Meta, the parent company of Facebook and Instagram, is struggling to accurately detect social media posts created or manipulated with artificial intelligence (AI). Despite Meta’s earlier commitment to label such posts, reports from TechCrunch and PetaPixel indicate that the labeling system is flagging real photographs as AI-generated. The false positives appear to stem from editing tools such as Adobe’s Generative Fill, which can remove unwanted objects from images; even routine operations like cropping seem to embed metadata that trips Meta’s AI detectors.
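To illustrate the kind of signal a labeling pipeline could key on, here is a minimal sketch that scans an image file for embedded provenance markers. The marker strings are assumptions drawn from the IPTC digital-source-type vocabulary and C2PA content credentials; Meta has not published its actual detection logic, and a production system would parse structured XMP/JUMBF metadata rather than raw bytes.

```python
import sys

# Illustrative sketch only: scan a file's raw bytes for provenance markers
# of the kind photo editors are reported to embed. The marker strings are
# assumptions; Meta's actual detection logic is not public.
AI_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC digital-source-type value for AI media
    b"c2pa",                     # C2PA content-credential manifest label
]

def find_ai_markers(path: str) -> list[str]:
    """Return any known AI-provenance markers found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [marker.decode() for marker in AI_MARKERS if marker in data]

if __name__ == "__main__":
    hits = find_ai_markers(sys.argv[1])
    if hits:
        print("Possible AI-provenance metadata found:", ", ".join(hits))
    else:
        print("No known AI-provenance markers found.")
```

A byte scan like this would flag any file where an editor wrote such metadata, whether or not AI touched the pixels, which is consistent with the false positives the reports describe.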
Representatives for Meta have not yet responded to requests for comment. The episode raises questions about how much responsibility social media companies bear for helping users judge the authenticity of accounts and posts. With AI tools now widely available and easy to use, distinguishing genuine content from manipulated content has become increasingly difficult.
People passing off others’ work as their own, or altering content to misrepresent it, is nothing new. In today’s AI-driven era, however, such deception spreads faster and with less effort; industry observers have even identified AI-powered social media accounts that masquerade as real people. Meta’s efforts are part of a broader industry push: OpenAI, Apple, TikTok, Google, Microsoft, and Adobe have also announced measures to combat the spread of AI-generated or manipulated content.
Despite these efforts, accurately identifying AI-created or AI-manipulated posts remains hard, and is getting harder. The term “slop” has emerged to describe the growing flood of AI-generated posts, and media experts warn the problem is likely to worsen as the United States approaches the presidential election in November 2024.