
Generative AI and the Trust Challenge: Journalism in 2025

  • Writer: Felipe Palavecino
  • Sep 12
  • 4 min read

Updated: Sep 13



Every technological disruption in journalism has carried both promise and peril. The arrival of radio, television, and the internet expanded reach but also raised concerns about accuracy, fairness, and control. In 2025, journalism faces another such inflection point: the rise of generative artificial intelligence (AI).


AI systems can draft text, generate visuals, and summarize data at unprecedented speed. They also risk eroding the very foundation of journalism—public trust. According to the Reuters Institute Digital News Report 2025, only 40% of people globally say they trust most news, while surveys from Pew Research Center show that two-thirds worry specifically about AI-generated misinformation.


This article examines the paradox: how generative AI is simultaneously reshaping newsrooms and undermining audience confidence. It argues that the future of journalism will be defined less by whether AI is used than by how it is disclosed, supervised, and integrated into strategies to rebuild trust.


Behind the Scenes: Where Audiences Are Comfortable


The public does not view AI as inherently problematic. In fact, surveys show higher levels of acceptance when AI is used for backstage tasks:


  • Transcribing interviews (see the sketch after this list).

  • Assisting with headline suggestions.

  • Supporting research or brainstorming questions.
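
To make the backstage idea concrete, the sketch below automates the first task, interview transcription, using the open-source Whisper model. The model size and file name are illustrative assumptions; the point is that the output is raw material for a journalist, not publishable copy.

```python
# A minimal transcription sketch using the open-source Whisper model.
# Assumptions: `pip install openai-whisper` and ffmpeg are available,
# and "interview.mp3" is a hypothetical local recording.
import whisper

model = whisper.load_model("base")           # small model; larger ones trade speed for accuracy
result = model.transcribe("interview.mp3")   # returns a dict with the full text and timed segments

print(result["text"])                        # raw transcript; quotes still need human verification
```

Nothing in this step touches what readers see: a reporter still selects, verifies, and contextualizes every quote.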


In qualitative studies conducted across Europe, the U.S., and Asia, participants acknowledged that such applications could make journalism more efficient without undermining credibility. These uses are invisible to audiences, leaving human judgment front and center.


This mirrors historical transitions: spellcheck tools and analytics dashboards were once “new technologies,” and are now accepted as routine infrastructure. The key difference with generative AI is scale: the speed at which backstage tasks can expand into front-stage content generation.


The Red Line: Content Creation and Credibility


Comfort levels shift dramatically when AI moves from support to creation. According to the Reuters Institute’s six-country survey:


  • Only 15–23% of respondents feel comfortable with news created mostly by AI, even with human oversight.

  • Comfort rises to over 40% when humans lead and AI plays a secondary role.


The distinction is subtle in practice but critical in perception. Audiences may not know how much of an article was drafted by AI, but they want reassurance that journalists, not algorithms, remain in control.


Concerns are especially acute in politics, crime, and local reporting, where trust is fragile and stakes are high. By contrast, soft news topics—sports, entertainment, arts—generate more tolerance for AI involvement. The challenge is that newsrooms cannot easily segment workflows by topic without risking blurred standards.


Labeling, Disclosure, and Transparency


If audiences distrust AI-driven content, can transparency mitigate that skepticism? The evidence suggests yes, but with limits.


Across surveys, a majority of respondents support labeling content produced or assisted by AI. Yet there is little consensus on what should be labeled. Some demand disclosure for any AI involvement, while others focus only on full content generation.


Practical experiments provide clues:


  • The Associated Press has begun labeling AI-assisted images.

  • The BBC has tested disclosure notes when AI tools assist in transcription or translation.

  • Smaller digital outlets in Germany and the Nordics have piloted “AI tags” in bylines.


The emerging consensus is that audiences want visibility, not technical detail. A simple phrase—“This article was produced with the assistance of AI tools and reviewed by an editor”—may be enough to preserve credibility.
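
One way to operationalize that phrase is to treat disclosure as structured metadata rather than free text, so labels stay consistent across a site. The schema and function below are a hypothetical sketch, not any outlet's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    """Hypothetical metadata a CMS could attach to each story."""
    tools_used: list[str] = field(default_factory=list)  # e.g. ["transcription", "headline suggestions"]
    drafted_by_ai: bool = False    # did AI generate body text?
    human_reviewed: bool = False   # has an editor signed off?

def disclosure_label(d: AIDisclosure) -> str:
    """Render the simple, non-technical note audiences say they want."""
    if not d.tools_used and not d.drafted_by_ai:
        return ""  # no AI involvement, nothing to disclose
    note = ("Parts of this article were drafted with AI tools"
            if d.drafted_by_ai
            else "This article was produced with the assistance of AI tools")
    if d.human_reviewed:
        note += " and reviewed by an editor"
    return note + "."

print(disclosure_label(AIDisclosure(tools_used=["transcription"], human_reviewed=True)))
# -> This article was produced with the assistance of AI tools and reviewed by an editor.
```

Because the label is generated from metadata, the wording cannot drift from one desk to another, which addresses the lack of consensus on what should be labeled, at least within a single organization.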


Trust in a Fragmented Information Ecosystem


The trust challenge extends beyond AI. Global audiences are already fragmented, skeptical, and fatigued by the constant flow of information. Generative AI adds another layer of uncertainty.


  • News avoidance is rising: people deliberately consume less news to protect their mood.

  • Influencers are seen as both more authentic and more dangerous than publishers, complicating authority.

  • Platforms increasingly serve information directly through AI summaries, bypassing publishers altogether.


Within this ecosystem, AI-generated content risks becoming another accelerant of distrust—especially if errors or “hallucinations” go unchecked. The reputational cost of a single mistake by AI could outweigh the efficiency gains from dozens of routine tasks.


Strategic Implications for Publishers


Generative AI is not optional. Over 70% of newsrooms worldwide already use it for tasks ranging from editing to personalization. The question is how to align its use with long-term credibility. Several imperatives stand out:


  1. Maintain Human Oversight. AI can accelerate workflows, but final editorial responsibility must rest with human journalists. Every published output needs a human in the loop (a minimal enforcement sketch follows this list).

  2. Adopt Clear Labeling Standards. Transparency is non-negotiable. Whether through disclaimers, bylines, or footnotes, audiences must know where AI is involved.

  3. Tread Carefully on High-Stakes Topics. Publishers should exercise caution in politics, crime, and local news, where skepticism is highest. AI may be better deployed in softer beats or as a supplementary layer (translation, accessibility).

  4. Invest in Media Literacy. News organizations can lead public education about what AI does and does not do, demystifying the technology and setting expectations.

  5. Differentiate Through Trust. In an ecosystem where content is abundant and algorithms proliferate, the competitive edge is credibility. Transparency, authorship, and accountability become brand assets.
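
To illustrate how the first two imperatives could be enforced rather than merely encouraged, here is a hypothetical publishing gate; the Draft class and publish function are assumptions for the sketch, not a real CMS API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    body: str
    ai_assisted: bool                   # was generative AI used anywhere in the workflow?
    approved_by: Optional[str] = None   # editor who signed off, if anyone

def publish(draft: Draft) -> str:
    """Block AI-assisted copy without a named approver (imperative 1)
    and append a disclosure note when AI was involved (imperative 2)."""
    if draft.ai_assisted and draft.approved_by is None:
        raise PermissionError("AI-assisted draft blocked: no editor has signed off.")
    note = ("\n\nThis article was produced with the assistance of AI tools "
            "and reviewed by an editor." if draft.ai_assisted else "")
    return draft.body + note

print(publish(Draft("Match report ...", ai_assisted=True, approved_by="J. Editor")))
```

The gate is trivial, but the design choice matters: oversight becomes a property of the pipeline rather than of individual diligence.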


Conclusion: Trust as the True Battleground


Generative AI represents both a disruption and a mirror. It forces journalism to confront long-standing tensions between efficiency and credibility, innovation and tradition, reach and trust.


Audiences are not rejecting AI outright. They are demanding boundaries, disclosure, and human judgment. Newsrooms that respect these demands can integrate AI without eroding credibility. Those that chase efficiency at the expense of trust risk undermining the very foundation of their value.


The trust challenge is the defining issue of journalism in 2025. Publishers who respond with transparency, humility, and accountability will not only survive the AI disruption—they will position themselves as irreplaceable anchors of credibility in an era of uncertainty.
