Executives from Amazon, Google, and Meta have been branded “pirates” after allegedly dodging questions about their use of Australians’ private and personal information.
A damning report from an Australian Senate committee has called for stronger rules surrounding Big Tech’s use of citizen data in AI training, including a new economy-wide “AI act.”
The bipartisan committee’s inquiry found that Big Tech AI chatbot developers have committed “unprecedented theft” against Australia’s creative industry.
The inquiry said that leading AI chatbots like OpenAI’s ChatGPT and Google’s Gemini should automatically be deemed “high risk.”
It comes after the committee handed down its final report following nine months of hearings. The report made 13 recommendations on how Big Tech should approach AI training and data collection in the country.
Within the report, the Australian committee recommended implementing a new “AI act,” mirroring the path taken in the EU and U.K.
The committee has faced some backlash for this from the country’s leading banks and some tech companies, who believe it will stifle innovation.
The report also criticized the appearances of executives from Google, Meta, and Amazon, alleging they were purposefully vague when questioned on AI data scraping.
“When asked about how they use [personal or private] data to train their AI products, the platforms gave largely opaque responses,” the report reads.
When questioned whether audio captured by Alexa devices had been used in AI training, Matt Levey, Amazon’s head of public policy, reportedly said he would respond later but never did.
“This refusal to directly answer questions was an ongoing theme in the responses received from Amazon, Meta and Google,” the report added.
Labor senator Tony Sheldon, the committee’s chair, likened the Big Tech executives’ dodging of questions to “sitting through a cheap magic trick.”
“Plenty of hand-waving, a puff of smoke, and nothing to show for it in the end,” Sheldon said in a statement reported by The Sydney Morning Herald.
The committee chair slammed the tech giants as “pirates,” claiming they were “pillaging our culture, data, and creativity for their gain while leaving Australians empty-handed.”
“They want to set their own rules, but Australians need laws that protect rights, not Silicon Valley’s bottom line,” he added.
The inquiry report found that Australia’s creative workers were the most at risk from AI, which could significantly impact their livelihoods.
Evidence provided by the Australian Association of Voice Actors showed that members’ contracts granted companies the right to use their voices to create audiobooks with AI.
The Australian Society of Authors (ASA) welcomed the report and said it recognized “the pressing need for new whole-of-economy legislation to regulate AI and protect the livelihoods of Australian creators.”
“We applaud the Committee’s support of Australian authors and illustrators and their vital work,” ASA CEO Lucy Hayward said.
“What is at stake is not only the sustainability of author and illustrator careers in Australia, but the richness and diversity of Australian literature.”
AI systems train on massive datasets of text, images, music, and other forms of creative content, much of which can be sourced from copyrighted works without explicit permission from creators.
The report calls for developers of AI models to be transparent about their use of copyrighted works in training – and for all such work to be paid for.