What Does Reality Look Like?

By David Butow

Since photography’s inception about 200 years ago, people have discussed and argued over the extent to which it depicts reality. You could fill a bookshelf or your computer’s hard drive with all the words written on the subject, so I won’t rehash all those ideas here. Safe to say, there’s such a range of photographic attitudes, artistic approaches and techniques that there are no universal answers to these questions. And of course the idea of “truth” is so nebulous that trying to depict it means aiming at a fuzzy, moving target.

Looking forward, I think AI opens the possibility of completely new ways of viewing “reality” that go way beyond the traditional concept of photography, both still and motion. While creating pictures mostly on the computer seems to be in conflict with photojournalism, I am leaving open the possibility of cameras recording images of scenes and having those images combined with computational techniques that may reveal “reality” in new and still “true” ways.

This post is going to get a little trippy and may seem off topic, but maybe you’ll find something interesting.


To me, still photographs are basically this: non-moving images captured from a single viewpoint that depict something in the photographer’s exterior field of view. What makes them “real” is that the camera is acting kind of like a person’s eye: it takes in light and records it. The image depicts something “out there” that other people on the scene besides the photographer would agree was present.

The resulting photograph sort of reproduces that view but turns it into two dimensions instead of three (or four, if you include time). The photographer might shoot tight, or very wide, or otherwise modify the view relative to how people normally see in order to capture something in a way that looks particularly unusual or compelling, but we still say the picture is depicting reality.

Stephen Wilkes has been doing interesting time-lapse work in the last few years in which you see the scene gradually transition from day to night or vice versa. Those images record, let’s say, two hours’ worth of time and put it into a single image. So you could say they compress time, which is sort of the opposite of what photography normally does, which is expand time. That “decisive moment” picture that we look at for five minutes lets us experience a single moment in a way that’s not fleeting, not the way we would experience it if we’d been present at the scene. That picture allows us to linger in a singular moment.

Even if you include, say, VR headsets, GoPros, holograms or drones, what we’re talking about is recreating a visual “reality” as it appears to a human: a fixed, single point of view perceiving objects out there in three-dimensional space. (In the case of a drone image we might say casually it’s a bird’s-eye view, but I bet that to a bird the scene looks a lot different. How, I have no idea; it probably depends on what type of bird.)


During COVID lockdown downtime I started reading and listening to stuff about spacetime and quantum mechanics. I knew little about the subject before, and the more I read the more confused I became in some ways, but one thing I learned for sure: the “reality” we think we see bears little to no resemblance to what reality is from a physics perspective. Modern physics gives us, for the moment, the most accurate description of nature, but many of those descriptions are extremely counterintuitive because they are far removed from the way our senses perceive and shape our experience.

We see the world the way we do because of evolution. The “field of view” that we experience, and even our perception of time, are essentially constructs for survival. What we see is primarily a function of what is happening in our brains, not what is out there. Our brain receives the electrical impulses from our retinas and constructs an image based in large part on our “priors,” our past impressions of what things are supposed to look like.

(Some of you may be familiar with this subject and are nodding along; others might think it sounds like total nonsense, but a web search will explain things better than I can. Check out scientist Andy Clark on the Sam Harris podcast.)


How I think this applies to photography and AI is this: the technology will expand our ability to visualize reality and allow us to communicate those perceptions in new ways. Just over 100 years ago, around the time Albert Einstein was working on the Special Theory of Relativity, which basically explains how mass, time and space are connected, Pablo Picasso was developing cubism.

Cubism was such a radical departure from earlier painting in part because it didn’t seem to reflect how things actually looked. But Picasso was “seeing” in a way no one had before, or had thought to paint, much like Einstein “saw” time and space as intertwined. Einstein’s breakthrough, built in part on the work of scientists before him, reshaped the Western notion that time was independent of events and space. Scientists now understood there was no “universal” time, no absolute way objects or events would appear to multiple observers. It all depended on their speed and individual point of view.

Picasso’s portraits, and those of other cubists after him, gave an impression of what people looked like from different angles, and perhaps at different times, but we see all of that at once because his pictures were still and two-dimensional.

In 2021, photographer Christopher Morris used a fascinating technique in Washington, DC, using an iPhone app designed for architects. The series is called “Fractured America,” and the images – which work as stills and gif-type videos – are sort of cubist and Matrix-y in nature. They don’t look like the photographs we’re used to, but you can’t argue they aren’t “real.”

Another example is my friend Brandon Tausik’s gifs, which are videos but seem more like still pictures that move subtly. Like Christopher Morris’ pictures, they don’t reflect exactly how we normally see, but they are real because they show people and scenes that existed; those subjects were not generated inside Brandon’s head or on the circuit board of a computer.

Just as the advent of motion pictures broadened people’s idea of how to capture and perceive reality, AI will continue that dynamic. As that plays out, we’ll be forced to revisit the question of which kinds of images depict “reality” and which don’t. What might make certain types compatible with photojournalism, as opposed to fauxtojournalism, is the use of techniques that draw not on existing imagery or styles or memes or people’s imaginations, but that record and interpret strictly what is happening in front of, or around, the camera(s) in original moments, in new and exciting ways.

That’s keeping it real.

Photograph above from Myanmar of monks praying on New Year’s Day 2013, part of the project Seeing Buddha. Taken with a Leica M4 on color negative film; the edges of the negative were deliberately scratched with an X-Acto knife. ©David Butow
