Adobe’s Content Authenticity Initiative

By David Butow

A bright spot in the emerging AI/photo dynamic is Adobe’s Content Authenticity Initiative (CAI). The basic mission of the CAI is the development of software, blockchains and open-source systems that allow anyone with a computer to establish the provenance and digital history of an image, from the moment it is taken in the camera to what you see on the screen generations later.

The sophistication of the technology far surpasses current metadata, and it serves two main purposes: 1) establishing who made the image in the first place, and 2) recording what changes to the file may have taken place after it was initially captured. There is an additional, major benefit of the technology, the “Do Not Train” option, that I’ll explain later in this article.
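To make the idea concrete, here is a toy sketch of a tamper-evident edit history, which is the core concept behind Content Credentials. This is not Adobe’s implementation or the actual C2PA format; real systems use certificate-based cryptographic signatures rather than the shared-key stand-in below, and the field names here are invented for illustration.

```python
# Toy sketch of the provenance idea behind Content Credentials.
# NOT the real C2PA format or SDK -- just an illustration of a
# tamper-evident edit history using only the standard library.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-camera-or-editor-key"  # stands in for a real certificate

def sign(payload: bytes) -> str:
    """Stand-in for a cryptographic signature (real systems use PKI, not HMAC)."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def capture(image_bytes: bytes, author: str) -> list[dict]:
    """Create the first history entry at the moment of capture: who made it."""
    entry = {
        "action": "captured",
        "author": author,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    entry["signature"] = sign(json.dumps(entry, sort_keys=True).encode())
    return [entry]

def record_edit(history: list[dict], new_bytes: bytes, action: str) -> list[dict]:
    """Append an edit record that chains to the previous entry's signature."""
    entry = {
        "action": action,
        "previous_signature": history[-1]["signature"],
        "content_hash": hashlib.sha256(new_bytes).hexdigest(),
    }
    entry["signature"] = sign(json.dumps(entry, sort_keys=True).encode())
    return history + [entry]

def verify(history: list[dict], final_bytes: bytes) -> bool:
    """Check every link in the chain, then check the chain matches the file."""
    for i, entry in enumerate(history):
        unsigned = {k: v for k, v in entry.items() if k != "signature"}
        if sign(json.dumps(unsigned, sort_keys=True).encode()) != entry["signature"]:
            return False
        if i > 0 and entry["previous_signature"] != history[i - 1]["signature"]:
            return False
    return history[-1]["content_hash"] == hashlib.sha256(final_bytes).hexdigest()

raw = b"...raw sensor data..."
edited = b"...cropped and toned..."
history = capture(raw, author="David Butow")
history = record_edit(history, edited, action="crop+tone")
print(verify(history, edited))   # True: the history matches the file
print(verify(history, b"fake"))  # False: the file no longer matches its credentials
```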

There may be a touch of irony here as the company that developed much of the technology that enables people to digitally manipulate photos has launched an extensive effort to help determine the veracity and provenance of images, but I think the effort is well-intentioned and extremely important.

No company is better positioned to do this, and the CAI is spearheaded by Santiago Lyon, formerly the head of photography at the Associated Press. (For those of you who don’t know exactly what the AP is, it’s a U.S.-based non-profit collective of international media organizations and I don’t think you could find a more trustworthy source of written and visual information about contemporary events.) 

Listen to a conversation between Lyon and Nikita Roy on the Newsroom Robots podcast.

CONTENT CREDENTIALS

You can read more about the Content Authenticity Initiative here, but to summarize, Adobe is enlisting the cooperation of camera manufacturers, software companies, media organizations, open-source developers and others to create standardized tools and techniques that enable anyone with a computer to determine where a picture file came from and how much, if at all, it’s been manipulated.

As an individual photographer, you can become a free member of the CAI, which gives you access to the latest tools, information, and instruction.

This technology will work not just on still photographs but on audio and video files as well. I recently watched an Adobe webinar on the subject and noticed that the attendees included editors from major television news organizations. You can see how important this subject is to legitimate news media, which need to verify content that originates outside their own teams, such as video and imagery from freelancers and members of the general public.

Already, companies are springing up that let you simply drag a picture into a window to learn whether or not the image is AI-generated. The New York Times recently published an article testing how well some of those sites work. Adobe’s effort is comprehensive in the sense that this information can be embedded and gleaned at every step of the process, beginning with the taking of the photograph. The goal is for these digital imprints to become a global standard, much as the JPEG format is. It gives you a little faith in humanity when you see these collective efforts taking shape.
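If you want to try this yourself, the CAI publishes an open-source command-line tool, c2patool (github.com/contentauth/c2patool), that prints any Content Credentials embedded in a file. Here is a minimal Python wrapper around it; it assumes c2patool is installed and on your PATH, and that the bare invocation prints the manifest store as JSON (its default behavior as I understand it).

```python
# Sketch: check a file for embedded Content Credentials by shelling out to
# the CAI's open-source c2patool. Assumes the tool is installed and on PATH.
import json
import subprocess

def read_credentials(path: str) -> dict | None:
    result = subprocess.run(
        ["c2patool", path],          # basic usage: print the manifest store
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return None                  # no credentials found, or file unreadable
    return json.loads(result.stdout)

manifest = read_credentials("photo.jpg")  # hypothetical file name
if manifest is None:
    print("No Content Credentials embedded in this file.")
else:
    print("Found credentials:", json.dumps(manifest, indent=2)[:400])
```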

DO NOT TRAIN

CAI tech is relevant specifically to photographers in another very interesting way that relates directly to AI. Photographers can imprint their images with a “Do Not Train” option so that their pictures, once they are out in the digiverse, can’t be used by AI systems as references or composites. The importance of giving photographers the ability to maintain some control over their pictures is huge, and that alone is reason to get familiar with the platform.
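Under the hood, this maps to a training-and-data-mining assertion in the C2PA specification that underpins the CAI. The sketch below shows roughly what that assertion looks like inside a manifest before it is signed into the file; the field names follow my reading of the spec and the claim_generator value is a hypothetical placeholder, so treat this as illustrative rather than authoritative.

```python
# Sketch of a "Do Not Train" assertion as defined (approximately) by the
# C2PA spec's training-and-data-mining assertion. Illustrative only; a real
# workflow would sign this manifest into the image via c2patool or an SDK.
import json

do_not_train = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            "c2pa.ai_training":            {"use": "notAllowed"},
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.data_mining":            {"use": "notAllowed"},
        }
    },
}

manifest = {
    "claim_generator": "example-photo-workflow/0.1",  # hypothetical tool name
    "assertions": [do_not_train],
}
print(json.dumps(manifest, indent=2))
```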

Beyond that, it’s still unknown to what extent Adobe and its partners can shape how people outside the industry absorb and evaluate news pictures and other imagery, and how intuitive it will be to sort the real from the invented. It’s an uphill battle, but one well worth fighting.

The image above is a screenshot from an Adobe video about the CAI.
