
How AI Is Changing Modern Journalism in 2025

Five years ago, the idea of artificial intelligence writing the news might have sounded like science fiction. Today in 2025, it's not only real—it's becoming a core part of how journalism works. From drafting headlines to analyzing massive data sets, AI is reshaping how stories are discovered, written, and shared.

But the transformation isn't what many predicted. AI hasn't replaced journalists—it's become a polarizing tool that some newsrooms embrace while audiences remain skeptical.

Real Newsrooms, Real AI: Who's Actually Using It

Major news organizations are already deploying AI in daily operations. The Guardian, Washington Post, Financial Times, and BBC have all integrated AI tools into their workflows, according to a comprehensive Columbia Journalism Review study of 35 news organizations.

The Associated Press has used AI to automate sports recaps and financial reports since 2014, freeing reporters to pursue more complex stories. Bloomberg developed its own AI model, BloombergGPT, specifically trained on financial data to assist with market coverage. The Financial Times now offers an AI chatbot called "Ask FT" that answers reader questions using archived articles.
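Automated recaps of this kind are typically template-driven: structured data (scores, dates, top performers) is slotted into pre-written narrative patterns. The AP has not published its exact pipeline, so the template and field names below are illustrative assumptions, not its actual system; this is only a minimal sketch of the general approach.

```python
# Hedged sketch of template-driven story generation, in the style of
# automated sports recaps. Template wording and field names are
# illustrative assumptions, not the AP's actual system.

def generate_recap(game: dict) -> str:
    """Fill a fixed narrative template with structured game data."""
    template = (
        "{winner} defeated {loser} {winner_score}-{loser_score} "
        "on {date}. {top_player} led {winner} with {top_points} points."
    )
    return template.format(**game)

# Hypothetical game record for illustration.
game = {
    "winner": "Lakeside", "loser": "Riverton",
    "winner_score": 78, "loser_score": 65,
    "date": "Friday", "top_player": "J. Alvarez", "top_points": 24,
}
recap = generate_recap(game)
```

The appeal for newsrooms is that once the template is written, each new box score yields a publishable paragraph with no reporter time spent on routine phrasing.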

But adoption isn't universal. Most major news outlets haven't introduced audience-facing AI features like chatbots or automated summaries, with 60% of readers reporting they don't regularly see AI tools on news sites.

How AI Actually Speeds Up Newsrooms (And Where It Doesn't)

AI is genuinely transforming back-end operations. One UK news organization described how AI-assisted archive systems help them quickly locate footage during breaking news—like celebrity deaths—that used to require hours of manual searching.
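The core of such archive systems is similarity search: rank archived items by how closely they match a breaking-news query. Production systems use learned embeddings, but the idea can be sketched with simple bag-of-words overlap; everything below, including the sample archive entries, is an illustrative stand-in rather than any outlet's real tooling.

```python
# Hedged sketch of AI-assisted archive search: rank archive entries by
# similarity to a breaking-news query. Real systems use learned text
# embeddings; bag-of-words cosine similarity stands in for them here.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Represent text as a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query: str, archive: list[str]) -> list[str]:
    """Return archive entries ranked by similarity to the query."""
    q = vectorize(query)
    return sorted(archive, key=lambda doc: cosine(q, vectorize(doc)),
                  reverse=True)

# Hypothetical archive metadata for illustration.
archive = [
    "interview footage celebrity gala 2019",
    "parliament budget debate coverage",
    "celebrity death obituary archive footage",
]
results = search("celebrity death footage", archive)
```

The time saved comes from replacing hours of manual keyword browsing with a ranked shortlist an archivist can verify in minutes.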

Financial journalism has seen dramatic efficiency gains. Several outlets now use AI to automatically analyze and extract key points from financial statements, letting reporters focus on contextual reporting rather than data entry.
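The extraction step can be as simple as pulling labeled dollar amounts out of earnings text so the reporter starts from structured figures rather than retyping them. Newsroom tools reportedly use LLMs or dedicated parsers; the regex below is a deliberately simplified stand-in, and the sample statement is invented for illustration.

```python
# Hedged sketch: extract headline figures from earnings text so a
# reporter starts from structured numbers instead of manual data entry.
# A simple regex stands in for the LLM-based extraction real tools use.
import re

def extract_figures(text: str) -> dict:
    """Find labeled dollar amounts like 'revenue of $4.2 billion'."""
    pattern = r"(revenue|net income|profit)\s+of\s+\$([\d.]+)\s+(million|billion)"
    return {label.lower(): f"${amount} {unit}"
            for label, amount, unit in re.findall(pattern, text, re.IGNORECASE)}

# Hypothetical earnings-release sentence for illustration.
statement = ("The company reported revenue of $4.2 billion, "
             "up 8% year over year, and net income of $310 million.")
figures = extract_figures(statement)
```

Even this crude version illustrates the division of labor the article describes: the machine handles the rote lookup, and the journalist supplies the context around the numbers.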

However, the promise that "AI gives journalists more time to do journalism" remains largely unproven at scale. Lisa Rinehart, who studied the AP's AI initiative for local news, notes that "that hasn't really been proven" at large outlets producing thousands of stories daily. Smaller newsrooms with dozens of weekly stories see clearer benefits.

What AI Can't Do: The Limits of Automated Journalism

AI excels at synthesizing existing information but struggles with original reporting. As NBC 12 sports journalist Jake Garcia explained, "ChatGPT is really good at taking existing information and whittling it down," but "where it's not good at and where I think we need to be super cautious is creating original content."

Investigative journalism remains firmly in human territory. A German investigative reporter told researchers: "How will the exclusive stuff we want to find out be in any kind of AI? It isn't. That's not where you find information that ultimately gives us exclusives."

AI also can't replicate the judgment calls that define quality journalism—deciding what's newsworthy, assessing source credibility, or determining when public interest justifies publishing sensitive information.

The Trust Problem: Readers Remain Skeptical

Public comfort with AI in news is surprisingly low. Only 12% of people feel comfortable with fully AI-generated news, compared to 62% who trust entirely human-made content, according to Reuters Institute research across six countries.

Readers are most accepting of behind-the-scenes uses: 55% are comfortable with AI editing spelling and grammar, and 53% accept AI translation. But comfort drops sharply for front-facing applications—only 19% are comfortable with AI-generated presenters or authors, and just 26% accept AI-created images when real photos aren't available.

The distrust has merit. Apple recently suspended its AI-generated news notification feature after it falsely claimed murder suspect Luigi Mangione had killed himself, incorrectly attributing the claim to the BBC.

The Deepfake Threat Is Real

AI-generated misinformation is undermining trust in legitimate journalism. In early 2024, a France 24 journalist was targeted with a deepfake that manipulated both his voice and article headline, distorting his reporting on President Macron's visit to Ukraine.

When an AI-generated image falsely showed Princes William and Harry embracing at King Charles's coronation, news outlets had to report on the fake image itself—illustrating how AI forces journalists to spend time debunking synthetic media rather than reporting original stories.

The Economics: Will AI Save or Sink Newsrooms?

News organizations face brutal economics: declining advertising revenue, reduced subscriptions, and competition from digital platforms. Some executives view AI as a potential lifeline for efficiency. Others worry it will accelerate job losses.

Research from Germany found that participants trusted outlets using AI-generated news significantly less than those using trained journalists—and this trust gap was even wider for political coverage. Lower trust could further erode already-declining subscription revenue.

The Associated Press warns that failing to adapt to AI could repeat the mistakes newspapers made when they were slow to embrace the internet and "totally mishandled how to monetize it."

How Journalists Actually Use AI Today

When journalists do use AI, it's primarily for practical back-office tasks: transcribing interviews, translating stories, checking spelling and grammar, and processing large data sets.

These uses align with what audiences find acceptable—tools that support human journalism rather than replace it.

The Coverage Gap: Journalism About AI Is Lacking

Ironically, journalism about AI itself needs improvement. Experts at a Reuters Institute conference criticized current AI coverage as "incomplete, focusing too much on hype, and not delving deeper into the issues."

Coverage tends to swing between extremes—either euphoric about AI's potential or fearful of its risks. Missing are nuanced stories about AI's real-world impact on specific communities, its environmental costs, or how it's reshaping power dynamics between tech companies and news organizations.

What Comes Next: Transparency and Standards

The journalism industry is developing ethical frameworks for AI use, but key questions remain unresolved: when should outlets disclose that AI contributed to a story, who is responsible for verifying AI outputs before publication, and how much human oversight should be required?

Only 33% of people believe journalists routinely check AI outputs before publication—a perception gap that newsrooms need to address through transparency.

The Bottom Line: AI as Tool, Not Replacement

AI is undeniably changing journalism in 2025, but not in the revolutionary way early predictions suggested. It's proving valuable for specific, mundane tasks—data processing, transcription, translation—while falling short at the core of journalism: original reporting, source cultivation, ethical judgment, and storytelling.

The technology won't replace journalists, but it's forcing the profession to evolve. Newsrooms that use AI transparently, maintain human oversight, and prioritize trust will benefit. Those that cut corners or prioritize efficiency over accuracy will further erode public confidence in an already struggling industry.

As one researcher put it, "The relationship between a journalist and AI is not unlike the process of developing sources." AI may be knowledgeable, but it's not free of bias—it needs to be contextualized, verified, and qualified by human judgment.

The future of journalism still relies on people. They've just got some very smart—and very imperfect—tools on their side.
