AI & Fauxtojournalism Part 1

By David Butow

In the last few months there have been many examples of AI imagery, including fauxtojournalism, that have drawn attention not just inside the photo community but also in the mainstream press. Several are described below. For a more thorough discussion of what this might mean for photography, journalism and the public trust, please check out our other posts on this site.

Each example below is different, but together they point in the same direction. There are obvious threats to the photographic profession, and more broadly to the public’s trust in imagery and online information in general, though not in every case. As the technology and its implementation evolve, it makes sense to start a discussion about the pros and cons of the different ways AI can be used. With many of these examples, I think there is already room for reasoned disagreement.

• In the late summer of 2021 I was in Perpignan, a town in the south of France, watching the nightly projections at Visa Pour l’Image, the largest, and perhaps most serious, yearly photojournalism festival in the world. If you love photojournalism, either as a practitioner or an observer, you should try to make the trip there at least once. There are a couple of dozen hanging shows in different venues, and for a few nights in a row in the large town square, in front of thousands of people, the festival projects well-produced presentations of news pictures and photo essays, mostly work made the previous year.

One of the projections I saw was by Magnum photographer Jonas Bendiksen, called The Book of Veles. The series depicted a medium-sized town in North Macedonia that was a center for online “fake news” production of the sort Russia engaged in in the 2010s. Bendiksen’s pictures showed mostly young people in the former Yugoslav republic hanging out, working on computers and just generally looking a bit lost. I thought the people looked a bit off, but Bendiksen is a good photographer and his pictures often have a quality where things look weird, so nothing made me suspicious.

A couple of weeks after the festival, Bendiksen revealed that he had created the “people” in the pictures using computer-game character software and had digitally inserted them into scenes from Veles that he actually photographed himself. The festival put out a statement of apology and hands were wrung. Bendiksen went on to explain that he had duped not just the festival organizers but also his fellow photographers who’d seen the work ahead of time.

He’d created fake Facebook profiles for the “human” subjects and subsequently published a book including text created by AI, and the whole thing took on a very experimental, meta quality. It was a social and political commentary that preceded, by a year or so, the widespread use of AI-generated imagery.

If Bendiksen had been relatively unknown, or known primarily as an art photographer, I think there would have been great interest but little controversy. But the fact that he was well known as a photojournalist and documentarian whose work stood alongside that of his fellow members of Magnum, an agency started by true OGPJs after World War II, made many people think he’d stepped way out of line.

Bendiksen said he was hoping that at some point around the time of the projection someone would figure out the hoax, but it didn’t happen. He did not reveal the true nature of the project until it was rooted out by someone on the web doing a deep dive a week or two later. Many people were critical of what he did, but others, including some inside the profession, embraced it as an artistic and inventive warning about the new types of deception on the horizon.

I’ve never met Bendiksen, and initially I was angry that I and others, especially the festival organizers, had been duped. I found his initial explanations of when he was planning to reveal the truth a bit squirrelly, and I think if you’re concerned about fauxtojournalism, engaging in it and keeping the secret to yourself as long as possible is a questionable way of making a statement. However, I have changed my mind somewhat about the work. The full scope of the project, which he went on to publish as a book, is a very clever and fascinating commentary on modern forms of myth-making and deception.

The extent to which “The Book of Veles” did “damage” to photojournalism is up for debate, because since then there has been an explosion of other similar, if less extensive, examples.

• Eliot Higgins, founder of Bellingcat, a serious site about the intersection of the internet, media, security, politics and related topics, created a widely seen series of fake AI-generated news photos on Twitter the week of Donald Trump’s arraignment in New York in March. While the pictures look “real,” they are so satirical and funny that they are clearly not meant to be taken seriously. I think they’re great because he wasn’t trying to fool anyone and was immediately transparent about what he did.

• Another example of new uses of AI-generated media is in political campaign ads. This is not journalism, of course, but the ads use the aesthetics of documentary-style imagery to create an impression that something is real. Campaign ads have never been pillars of truthful or ethical communication, so this is not shocking, but you have to wonder how fast and far this is going to go, since the technology must seem like a godsend to campaign creatives and it’s a space with basically zero guardrails. Here, in an anti-Biden video spot, the producers used AI-generated imagery to illustrate their own version of reality.

• GOP hopeful Ron DeSantis’ campaign used AI to fake his rival Donald Trump’s voice in a new ad. It’s a little murky because Trump actually wrote the words (in a social media post) that the fake voice speaks. Nevertheless, quoting Alex Isenstadt at POLITICO: “Political ads have used impersonation before, and the Trump-generated voice in the Never Back Down ad does not sound entirely natural. Still, the spot highlights what could be the next frontier of campaign advertising: The use of AI-generated content to produce increasingly difficult to identify, so-called deepfakes.”

• The U.S. Senate Subcommittee on Intellectual Property just held its second hearing on AI. The focus was on efforts to protect the copyrights and broader interests of photographers, musicians and other artists whose work is in the digital ecosystem. One of the primary tools of that control is the “Do Not Train” tag that can be attached to files; this is something creators can start using now (a rough sketch of what such a tag can look like follows below). The committee witnesses included executives from Adobe and other tech companies, as well as a scholar and an artist. It was cool to see that the hearing was chaired by Delaware Senator Chris Coons, who wrote a TIME magazine article about my picture of Senator Jeff Flake, taken during another Senate hearing a few years ago.

You can watch a video of the hearing here.
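
For the technically curious, here is a rough idea of what a “Do Not Train” tag can look like under the hood. This is a minimal sketch, assuming the tag is expressed as the “training and data mining” assertion from the C2PA standard that underlies Adobe’s Content Authenticity Initiative; the exact field names are my reading of the draft spec, not a definitive reference, and the cryptographic signing and file-embedding steps that real tooling performs are omitted.

    import json

    # A minimal sketch of a C2PA-style "training and data mining" assertion.
    # Field names follow my reading of the C2PA draft spec (an assumption);
    # real tooling, such as the Content Authenticity Initiative's SDKs,
    # would sign this and embed it in the image file itself.
    do_not_train_assertion = {
        "label": "c2pa.training-mining",
        "data": {
            "entries": {
                "c2pa.ai_generative_training": {"use": "notAllowed"},
                "c2pa.ai_training": {"use": "notAllowed"},
                "c2pa.ai_inference": {"use": "notAllowed"},
                "c2pa.data_mining": {"use": "notAllowed"},
            }
        },
    }

    # Print the assertion as JSON, just to show what travels with the file.
    print(json.dumps(do_not_train_assertion, indent=2))

The point is simply that the opt-out is machine-readable metadata attached to the image; whether AI companies will honor it is the open question the hearing circled around.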

• Read more about “Do Not Train” and the broader mission of Adobe’s Content Authenticity Initiative

• Catchlight conversation panel on AI’s impact on photography with Pamela Chen, Jonas Bendiksen, Hany Farid and Alexey Yurenev

• Truepic, one of the new companies making software to help secure digital provenance: http://truepic.com

• Vox article on the various online efforts to distinguish AI-generated images from real photography
