Arts & Culture

How AI sees war photos


The 3rd Infantry Division in Baghdad, Iraq, on April 6, 2003.

Photograph by Christopher Morris/VII


Shorenstein fellow wants to deploy tech to preserve the visual record. An image from the front lines in Iraq provides a test.

Artificial intelligence poses threats to photojournalists on issues ranging from copyright to misinformation. But Emmy Award-winning visual storyteller Kira Pollack sees potential for the technology to help photographers preserve their legacy and create a record of how the world looked before AI.

Pollack, the Walter Shorenstein Media & Democracy Fellow at the Shorenstein Center on Media, Politics and Public Policy, spoke with the Gazette in this edited conversation. 


You were the creative director at Vanity Fair as AI first entered the mainstream. What were your initial thoughts about AI’s potential to impact photography as a craft and an industry? 

My immediate reaction was that we were entering an entirely new era of image-making — one that posed an existential threat to photography as we know it. On one hand, the sophistication of fabricated images was terrifying; on the other, it promised an explosion of creativity. At Vanity Fair, I was commissioning highly produced fashion images of public figures and gritty journalistic pictures from the front lines. As generative AI tools and large language models began to rapidly evolve, and fake images started circulating widely, alarm bells were sounding across the photojournalism community. In my experience, every time a new technology disrupts photography, the most important thing is to understand it — learn how it works, how it might be harnessed for good, and how to protect against its potential harms.

Your fellowship at the Shorenstein Center is focused on addressing a very specific challenge for photojournalists, so tell me about the problem you identified. 

One of the greatest challenges facing photojournalism today is the fate of its archives. When people hear the word “archive,” they often picture dusty boxes — and their eyes glaze over. But to me, archives are living, breathing bodies of work that tell the visual history of our world. Having been on the front lines of assigning that work — as director of photography at Time, deputy photo editor at The New York Times Magazine, and most recently at Vanity Fair — I know the extraordinary material these archives contain. Over the course of their careers, photojournalists amass hundreds of thousands of images, and I’d estimate that 95 percent have never been seen or published. A finite number of professionals documented the defining events of our time, and their images are often the only visual record we have. Many of the world’s greatest photojournalists are still alive and able to contextualize their life’s work — yet we’re at real risk of losing it.

At a moment when we urgently need to preserve these photographs before the era of AI further distorts our sense of visual truth, we also need to explore how AI itself might help. Can these tools help us catalog, organize, and contextualize this vital work to make it discoverable? Can we do it ethically, without exposing these images to unauthorized training or misuse? What are AI’s shortcomings? That’s the core of my research: how to use AI to help us see — at scale — without compromising the integrity of what we see.

“I want to understand where this technology is headed, where it falls short, and whether it can serve the core values of photography: truth, authorship, and memory.”

How have you started experimenting with this so far? 

Working with photojournalist Christopher Morris and engineer Gregor Hochmuth, we’ve conducted nearly a dozen case studies using images from Morris’ archive. In one study, we selected a range of stories — including the U.S. invasion of Iraq, Jan. 6, and the Yugoslav wars — and asked AI to evaluate the images. While AI can easily identify simple visuals — a cat, a car, a person — the real test is whether it can interpret the layered complexity of conflict photography. In one striking example, the AI analyzed a photo from the U.S. invasion of Iraq and correctly identified the action as a house raid, the setting as a residential building, the make of the soldiers’ guns, and the emotions on the civilians’ faces — using nuanced language like “appears nervous” or “appears apprehensive.” It even assessed the composition, lighting, and symbolism. We were stunned that it could extract such specific and accurate insights with so little context.

We’re also exploring questions of authorship and legacy. Archives should be more immersive and dynamic — there’s a depth of narrative and intent from the photographer that goes far beyond captions or keywords. We’re experimenting with how AI might help bring that storytelling to the surface.
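For readers curious what such an experiment can look like in practice, here is a minimal sketch of how an off-the-shelf vision-language model might be asked to describe a single archival frame. The library, model name, prompt wording, and file path below are illustrative assumptions, not the actual pipeline Pollack, Morris, and Hochmuth used.

```python
# Minimal sketch: asking a vision-language model to describe one archival photo.
# Assumptions (not the team's actual workflow): the OpenAI Python SDK, the
# "gpt-4o" model name, and the prompt and file path are illustrative only.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def describe_frame(image_path: str) -> str:
    """Return a cautious, catalog-style description of one archival image."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")

    prompt = (
        "Describe this news photograph for an archive catalog. Note the "
        "apparent action, setting, visible equipment, and the apparent "
        "emotions of the people shown. Use hedged language such as "
        "'appears' when the image alone cannot confirm a detail."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical file name, used only to show how the function is called.
    print(describe_frame("archive/iraq_2003_raid.jpg"))
```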

Kira Pollack.

Photo by Peter Hapak

You see a potential for AI to help photographers preserve their vast archives of work. How do you square that use of AI with this other side of the conversation, where there are real concerns about the erosion of trust in what’s real? 

I see these as two distinct conversations with some overlap — like a Venn diagram. One part of the conversation focuses on generative AI’s ability to create photorealistic images without a camera or lens. In today’s relentless breaking news environment, where images spread rapidly on social media without gatekeepers, this can be a dangerous mix that erodes public trust. Another concern is copyright — specifically, the risk of photographers’ work being scraped and used to train AI models without consent. This raises urgent questions about ownership, authorship, and protection.

The work I’m doing exists in a third circle of that Venn diagram. It’s about using AI not to generate or exploit images, but to preserve, organize, and surface real photojournalism. These archives are vast, often inaccessible, and under threat — both physically and digitally. I’m exploring whether AI can help responsibly unlock this material at scale, while safeguarding the photographer’s intent, rights, and legacy. It’s about using the technology to reinforce visual truth — not replace it.

What are your hopes for the Shorenstein Fellowship?

My hope is to use this time not just to examine the technology itself, but to engage deeply with the larger questions it raises for photography and journalism. What makes the Shorenstein Center so unique is the opportunity to be in dialogue with people across disciplines — technologists, ethicists, journalists, policymakers — who are thinking critically about the future and the values we carry forward.

I’m not coming to this work as a technologist. I come from journalism, having worked closely with photojournalists around the world and led teams that helped shape how the public sees history as it unfolds. Rather than getting swept up in the momentum of the tech world, I want to understand where this technology is headed, where it falls short, and whether it can serve the core values of photography: truth, authorship, and memory. Ultimately, I hope to bring these insights back to the photojournalism community — to help ensure we’re not just reacting to change but helping to shape it.