AI has transformed the ethics around deceptive images from a slippery slope into a trap door

June 9, 2023 3:33 am
The rise of images created by artificial intelligence has created existential risks for photography as we know it. (Eric Thomas)

A question that I overheard this week seemed innocent enough: Should a business owner use artificial intelligence to create photographic-looking images of himself for publicity?

Imagine the image he is aiming for.

Portraits of our friendly business owner — let’s call him Albert — presenting his project plan in a trendy sun-drenched conference room. Pixels arrange to show his clients as immaculately manicured and sharply dressed. A gallery of images on his website portrays a cornucopia of clients staring back with corporate admiration at Albert, the insightful consultant.

Albert wants “artificial images.” His shortcuts and deceptions fall short of our agreed definition of a photograph. And Albert is not alone in wanting this shortcut.

His marketing images would spring from increasingly popular and numerous websites that use millions of photos to learn through artificial intelligence. These image generators are quickly solving the few glitches that made their fakeries identifiable as frauds.

Notice that I refuse to call these photographs — because Albert’s images would not be. If “photo” means capturing “light” and “graph” means representing that light as “writing,” then photographs must rely on a camera, if not a photographer. Albert has eliminated both.

As an instructor of visuals and photojournalism at the University of Kansas, even before the widespread use of artificial intelligence, I tried to update my presentations about photojournalism ethics each semester when a new deceptive image made big news.

Martha Stewart’s head plopped onto a model’s body. A war photographer who combined two photos and hoped that readers wouldn’t notice. The University of Wisconsin admissions department’s blunder of creepily and clumsily inserting the head of a Black student in a football game crowd to show off the campus’ purported diversity.

For each student who strives to work in the media, I repeat one word as the litmus test during these case studies in photo ethics: deception.

Would a reasonable reader be deceived by the image?

If the audience would be deceived, your image is ethically compromised. If the photograph represents the truth, then you should be comfortable publishing it.

My firebrand insistence on the sanctity of images isn’t needed for many students. They arrive at my classroom on the first day wishing they could more often trust what they see, whether in fashion magazines or war photography.

During class, after I inform them that a particular image is a fake, students scan the pixels on the screen at the front of the room and struggle to find the specific deceptions that photographers made with their software.

“Is it the grainy spot near her ankle?”

As they point toward the deceptive image, I worry about something bigger. Just like the public, my students are doubting everything.

“Is it the yellow-ish car in the background?”

The slippery slope of visual deception is an old one.

“Is it the baby on the picnic blanket?”

For well over 100 years, photographers faked images by dangling chestnuts on strings, fogging photographic prints to create the appearance of ghosts and using digital photography software.

Now, arriving at the end of that slippery slope, artificial intelligence threatens to send us falling through a trap door. Photographs may soon be seen as post-truth images.

My classroom lectures beg for something different. I’ve considered our discussions about deceptive images to be vital for my journalism and media students, the young people who will create photos, videos and graphic designs in their professional work.

I had faith that reaching them was a key way to avoid deception in the media.

But here comes Albert.

By creating artificial images, Albert would save money by not hiring a photographer. He would perhaps save some time by using artificial intelligence to do the creative work. And he could direct the scene with a precision fueled by fantasy (“Let’s swap out the yellow vase in the background for an Aztec piece of art.”).

Would the emotional appeal that I make to my students work on Albert? Would it help to tell Albert that each time you erode the public’s trust in the media with deceptive images, it spoils trust more widely?

Do we trust that Albert will thoughtfully weigh the value of a quick and easy marketing image against the cost of a future in which all visuals lack the weight of photographic proof?

We need to persuade Albert to resist his specific and seemingly insignificant deception. And then we need to persuade others — accountants, financial advisers and interior designers who would be similarly tempted.

In previous semesters, I told my young journalists they would be fired by their employers for faking a news photograph. Ethical standards in the media wielded that fierce consequence.

But I think that norm is endangered. The three journalists who heard Albert’s question essentially shrugged and gave him the green light.

They might reassure themselves with the distinction that Albert is not a journalist. For journalists, it’s different, they might say.

But is it? Armed with artificial intelligence, he is just as capable of creating the images for his website as my talented photojournalism students are.

The alluring power of artificial intelligence arrived at a landmark last week. Tech gurus around the world who lead AI efforts compared the potential might of the technology to the dangers of pandemics and nuclear war. In those ways, we worry about the cataclysmic effects of artificial intelligence. Albert’s decision about his marketing images seems less urgent.

However, we are facing the end of photography as, more or less, a signifier of truth. If artificial images can masquerade as photos, then the most deceptive moment of photography may be right underneath us: a trap door that is about to open.

If it hasn’t already.

Eric Thomas directs the Kansas Scholastic Press Association and teaches visual journalism and photojournalism at the University of Kansas. Through its opinion section, Kansas Reflector works to amplify the voices of people who are affected by public policies or excluded from public debate. Find information, including how to submit your own commentary, here.



Eric Thomas directs the Kansas Scholastic Press Association, a nonprofit that supports student journalism throughout the state. He also teaches visual journalism and photojournalism at the William Allen White School of Journalism and Mass Communication at the University of Kansas in Lawrence. He lives in Leawood with his wife and two children.