Google will begin disclosing images created or edited by artificial intelligence in search results, adding labels to such images to provide more transparency.
The change comes after Google partnered with major companies in the Coalition for Content Provenance and Authenticity (C2PA), which aims to combat the spread of misleading information online. Let’s take a look at the details of Google’s plan to flag AI-generated content and the questions surrounding it.
Is Google’s hunt for AI a sideshow?
In the coming months, Google will begin tagging AI-generated and edited images in search results. This will be done using metadata from the Content Credentials standard, which includes information such as when, where, and how the image was created. Users will be able to see this tag through Google’s “About This Image” feature, which can be accessed by clicking on the three dots above an image in search results.
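Content Credentials like those described above are embedded in the image file itself, as a C2PA manifest stored in a JUMBF container. As a rough sketch only (nothing here reflects Google's actual pipeline), a naive check for the C2PA marker in raw image bytes might look like this:

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Heuristic check: C2PA manifests live in a JUMBF box whose
    description carries the label "c2pa". A real parser would walk
    the JPEG/JUMBF box structure; this just scans the raw bytes."""
    return b"jumb" in data and b"c2pa" in data

# Synthetic byte strings mimicking a JPEG with and without a manifest
with_manifest = b"\xff\xd8" + b"....jumb....jumd" + b"c2pa" + b"\xff\xd9"
plain_image = b"\xff\xd8" + b"no provenance here" + b"\xff\xd9"

print(has_c2pa_marker(with_manifest))  # True
print(has_c2pa_marker(plain_image))    # False
```

This only spots the marker; genuine Content Credentials verification checks the manifest's cryptographic signature chain, which dedicated tools such as the open-source c2patool handle.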
The idea behind the move is to give users a way to trace the source of images and make it easier to spot AI-generated content. The move comes after Google joined other tech companies such as Amazon, Adobe, and Microsoft in developing the latest Content Credentials guidelines. Together, these companies hope to increase transparency and reduce the spread of fake or misleading images.
But while Google’s initiative is a step in the right direction, it’s worth noting that the label won’t be immediately obvious. Users will need to actively search for information by clicking on the “About this Image” section, which may make the feature less effective than some might expect.
How will users access AI information?
The new function can be found in Google Lens and Android’s “Circle to Search” feature, and will indicate whether an image was generated or altered using artificial intelligence. Because the label is not displayed directly, many users may never learn the tool exists. The extra steps mean some people will never confirm whether an image was generated by AI unless they already know about the feature, and even those who do may find the process cumbersome, particularly if they expect AI-produced images to be marked more clearly.
Despite this limitation, Google’s partnership with the C2PA could still prove valuable, as the Content Credentials standard offers a broader framework for tracing the origin of digital content. Not every AI developer has adopted it, however. Some organizations, such as Black Forest Labs, have chosen not to include it, which could complicate efforts to trace images generated by their models.
Concerns about effectiveness: Is Google doing enough?
There is also a bigger problem: AI-generated images are becoming increasingly difficult to detect. A University of Waterloo study found that 39% of people cannot tell the difference between AI-generated and real images. If this trend continues, Google’s labeling system may not be enough to help users confidently identify AI-generated content.
Moreover, other tech companies are not setting a good example. Meta, for example, recently moved tagging information to a less visible position in posts. This raises the question of whether tech giants are truly committed to making AI-generated content more transparent, or whether they are only taking minimal steps.
Only a few cameras, such as the Leica M11-P and Nikon Z9, come with built-in Content Credentials. This means that unless photographers and developers choose to adopt these tools, the system will not be as effective as it could be. Transparency around AI-generated content relies heavily on cross-industry collaboration, and so far not everyone seems equally invested.
At the same time, the rise of AI-generated imagery has raised new concerns about misinformation. From deepfakes to AI-generated nude photos, the potential for harm is clear. While Google’s new labeling system won’t solve all these problems, it is a small step towards reducing the spread of misleading content online.
Image credits: Furkan Demirkaya / Ideogram AI