
Apple Slammed for Sending Inaccurate AI News Notifications

By James Morales

Key Takeaways

  • Apple recently launched a new iOS feature that summarizes news articles using AI.
  • The AI summaries have come under fire for misconstruing news stories.
  • Journalists and news outlets have called for Apple to suspend the feature.

In November 2024, Apple launched a new iPhone feature that uses AI to summarize news stories into bitesize notifications.

But barely a month after the Apple Intelligence update shipped, the feature has come under fire for sending users inaccurate notifications that completely misinterpret the news items they are meant to summarize.

Apple’s AI Blunder

The first reports of Apple’s inaccurate AI summaries emerged just days after the feature was launched.

In one instance, the AI falsely claimed the BBC had reported that Luigi Mangione, the man arrested following the murder of health insurance CEO Brian Thompson in New York, had shot himself.

The misrepresentation prompted a complaint from the organization, which said the notifications undermined readers’ trust in BBC News.

“It is essential to us that our audiences can trust any information or journalism published in our name,” a spokesperson said, adding that “that includes notifications.”

In another instance reported by ProPublica journalist Ken Schwencke, the feature mischaracterized a story about Israeli Prime Minister Benjamin Netanyahu, erroneously alleging that he had been arrested.

Turn off AI Summaries, Demand Journalists

Journalists have been among the most vocal critics of the feature, accusing Apple’s AI of undermining their work by distorting key details.

Geoffrey Fowler, a columnist for The Washington Post, said the mixups were “wildly irresponsible” and called for Apple to “turn off summaries for news apps until it gets a bit better at this AI thing.”

Other journalists and news outlets have also demanded that Apple disable the feature until the accuracy problems are ironed out.

The National Union of Journalists (NUJ) in the U.K. urged Apple to remove the feature before it caused any more damage.

“AI-generated summaries falsely attributing information risk harm to the reputation of journalists reporting ethically,” commented NUJ General Secretary Laura Davison. “The public must not be placed in a position of second-guessing the accuracy of news they receive.”

Hallucinations Still a Problem

The issue of AI “hallucinations”—a term used to describe instances where AI generates false or nonsensical information—remains a persistent challenge for the industry.

Novel techniques like Retrieval Augmented Generation (RAG) can help minimize the amount of false information AI models output. However, as the Apple case demonstrates, methods of automatically fact-checking AI-generated content remain imperfect.
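To illustrate what RAG-style grounding means in practice, here is a minimal, hypothetical sketch in Python: a toy keyword retriever picks the most relevant source passages, and the prompt for the summarizer is built so the summary can only draw on that retrieved text. The function names, the toy corpus, and the keyword-overlap scoring are illustrative assumptions, not Apple's implementation; a production system would use a vector search index and a real language-model call in place of the printed prompt.

```python
# Minimal RAG sketch (illustrative only): ground a news summary in retrieved
# source text instead of letting the model summarize from memory alone.
from collections import Counter


def retrieve(query: str, articles: list[str], k: int = 2) -> list[str]:
    """Rank source articles by simple word overlap with the query and keep the top k."""
    q_words = Counter(query.lower().split())

    def overlap(doc: str) -> int:
        d_words = Counter(doc.lower().split())
        return sum(min(count, d_words[w]) for w, count in q_words.items())

    return sorted(articles, key=overlap, reverse=True)[:k]


def build_grounded_prompt(query: str, articles: list[str]) -> str:
    """Assemble a prompt that restricts the summary to the retrieved source text."""
    sources = retrieve(query, articles)
    context = "\n\n".join(f"SOURCE {i + 1}: {text}" for i, text in enumerate(sources))
    return (
        f"{context}\n\n"
        f"Write a one-sentence notification summarizing '{query}', "
        "using only facts stated in the sources above."
    )


if __name__ == "__main__":
    corpus = [
        "BBC News: Police arrested a suspect in connection with the shooting.",
        "Sports desk: The local team won its third straight match on Sunday.",
    ]
    print(build_grounded_prompt("suspect arrested after shooting", corpus))
```

Even with this kind of grounding, a model can still paraphrase the retrieved text incorrectly, which is why automated fact-checking of the final output remains the hard part.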

As criticism mounts, it remains unclear whether Apple will pause or overhaul the controversial feature.

For now, users are left questioning the reliability of their notifications, and journalists are left wondering if AI is ready to step into the newsroom at all.


James Morales

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.