Key Takeaways
A leading journalism body has called for Apple to remove its newly released AI feature from devices after it created a misleading headline.
Reporters Without Borders has urged the tech giant to scrap Apple Intelligence, highlighting growing concern that the emerging technology could spread misinformation.
However, as more and more AI companies sign high-profile deals with newspapers and media companies, AI's role in the delivery of news continues to grow.
Released last week, Apple Intelligence uses AI to summarize notifications, aiming to give users a more efficient overview of their incoming alerts.
Concerns began when the feature delivered a false notification to users claiming the BBC had published an article saying murder suspect Luigi Mangione had shot himself.
The claim was untrue, and the BBC subsequently filed a complaint with the iPhone maker. The U.K. broadcaster said it had contacted Apple to “raise this concern and fix the problem” but did not confirm whether it had received a response.
Following the incident, journalism group Reporters Without Borders called for the feature to be fully removed from Apple devices.
The group said generative AI services were still “too immature to produce reliable information for the public.”
Vincent Berthier, head of technology at Reporters Without Borders, said that “facts can’t be decided by a roll of the dice” and added:
“RSF [Reporters Without Borders] calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility and a danger to the public’s right to reliable information on current affairs.”
Apple Intelligence has also reportedly had misinformation issues with other publications, including The New York Times.
The AI service inaccurately summarized three articles, with one summarized headline claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested.
The issue comes as AI firms become an increasingly important part of the operations of some newspapers and media outlets.
In May, News Corp partnered with OpenAI, granting the AI company access to content from its publications, including The Wall Street Journal and The Times of London, to enhance AI models like ChatGPT.
The deal’s financial terms remain undisclosed, but sources suggested it could be worth around $250 million.
“Together, we are setting the foundation for a future where AI deeply respects, enhances, and upholds the standards of world-class journalism,” OpenAI CEO Sam Altman said during the deal announcement.
Similarly, in December 2023, Axel Springer, the publisher of Business Insider, announced it was forming a strategic partnership with OpenAI.
These deals provide a feeding ground for AI firms’ data-hungry systems. The sheer volume of data is crucial to developing large language models.
Leading chatbot systems have reportedly been trained on digital text collections totaling as many as three trillion words.
Axel Springer’s CEO, Mathias Döpfner, who previously warned that AI threatened to “replace” journalists, said the deal would “explore the opportunities of AI-enabled journalism.”
For newspapers, the partnerships provide millions of dollars at an especially difficult time for money-making in the industry, along with a new way to streamline the delivery of news.
The collaborations aim to reduce costs and enhance the quality of journalism by freeing up reporters to focus on investigative and in-depth stories.
However, these arrangements have raised ethical concerns regarding job displacement, data privacy and the potential influence of AI-generated content on journalistic integrity.
In a more dramatic and direct impact on the industry, the rise of AI-generated websites masquerading as reputable news sites is worrying some experts.
Wired previously reported on an entrepreneur who bought a number of abandoned news sites, filled them with AI-generated content, and reaped substantial ad revenue.
These sites, filled to the brim with clickbait headlines and AI-generated “slop,” are often built on the domains of defunct news outlets, tricking some readers into believing the original owners are still publishing.
U.S.-based NewsGuard has identified more than a thousand AI-generated websites “operating with little to no human oversight.”
NewsGuard’s McKenzie Sadeghi said these websites are often “low-quality clickbait farms publishing a lot of content about celebrities, entertainment and politics.”
AI is undoubtedly already impacting the news business on a large scale. Whether it becomes a danger or an asset to journalism depends on how responsibly it is developed and implemented, as well as the ethical standards upheld by those who choose to use it.