When charities use AI images, the public stops talking about the cause
University of East Anglia, School of Global Development.
Charities have always relied on images to do what words alone cannot: make a distant crisis feel immediate, personal, urgent. A photograph of a child in a flooded village. A close-up of hands cracked from labor. These images are the emotional infrastructure of giving. They turn abstract statistics into individual stories that compel people to act.
As budgets tighten and production pressures mount, a growing number of charities and NGOs have turned to generative AI to produce their campaign visuals. The appeal is obvious: AI is faster, cheaper, and sidesteps the ethical complexities of photographing vulnerable people. But a new study from the University of East Anglia suggests this shortcut may be undermining the very thing it is meant to support.
The conversation shifts
The report, titled Artificial Authenticity, analyzed 171 AI-generated images and more than 400 public comments surrounding campaigns from 17 organizations, including Amnesty International, Plan International, the World Health Organization, and WWF. The central finding is striking: when AI images appear in charity communications, the humanitarian cause effectively disappears from the conversation.
Of the comments analyzed, 141 focused on AI ethics and authenticity concerns. Another 122 critiqued the technical execution and visual quality of the images. Only 80 comments, fewer than one in five, actually engaged with the humanitarian issue the campaign was trying to address.
"Charities exist because people care about other people," said co-author David Girling from UEA's School of Global Development. "The moment when audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk."
Disclosure does not neutralize backlash
One might expect that transparently labeling AI images would protect organizations from criticism. The data suggests otherwise. Even though 85% of the images in the study were captioned as AI-generated, disclosure did not prevent backlash. And where images were not labeled, the problem took a different form: audiences adopted what the researchers call an "investigative tone," devoting their attention to whether the visuals were artificial rather than evaluating the charity's work.
The backlash was particularly pointed when the medium contradicted the message. WWF Denmark, for instance, faced criticism for using energy-intensive AI tools to promote environmental sustainability. The public response was not sympathetic.
The dignity paradox
There is a genuine ethical argument for AI imagery in charitable contexts. Photographing or filming vulnerable people for campaign purposes can be exploitative. AI-generated visuals could protect beneficiaries from being re-traumatized. Some organizations explicitly frame their use of AI in these terms.
But the study reveals a tension the sector has not resolved. Donors often reject synthetic images, prioritizing their own desire for an "authentic witness" over the beneficiary's right to privacy. The result is a paradox: the more ethical choice for the subject may be the less effective choice for fundraising.
What the study recommends
The researchers do not argue that charities should abandon AI entirely. Instead, they propose guardrails: working with AI providers to develop sector-specific tools with built-in bias detection and stereotype alerts, and co-creating imagery with local communities who can generate prompts and approve final visuals for accuracy and cultural appropriateness.
Whether these recommendations are practical at scale, given the budget constraints that drove charities toward AI in the first place, remains an open question. Training staff in ethical prompt engineering and establishing community review processes add cost and complexity that partly offset AI's efficiency advantage.
Nearly 70% of the AI images analyzed were designed to appear photorealistic. Poverty was the dominant theme, accounting for just under a third of the images (51 of 171) and often featuring children; environment and human rights themes followed. The study does not measure whether AI imagery led to fewer donations, a question that would require different methods. What it does document is a clear shift in audience attention from the cause to the tool, a shift that no charity communications team would deliberately choose.