With the advent of platforms like ChatGPT and Midjourney, the volume of AI-generated text and images circulating online has exploded, raising concerns over the spread of false information. Meanwhile, publishers are increasingly focused on securing their intellectual property rights in the face of AI models trained using content scraped from the web.
Contemporary AI impacts producers and consumers of digital media in different ways, but all parties are united by the need for greater trust and transparency. In an effort to counter AI-generated fake news and help content owners retain control of their digital assets, Fox Corporation and Polygon Labs have developed Verify – an open-source protocol that records the provenance and ownership history of text and images on-chain.
For each piece of content registered with Verify, the platform issues a digital token referencing the content itself alongside any additional metadata. Tokens can then be bound to smart contract licenses that let publishers set conditions for how the content is accessed. For example, the smart contract could specify rules governing the use of copyright-protected text and images.
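The registration flow described above can be sketched in a few lines. This is a hypothetical illustration, not Verify's actual API: the function and field names (`register_content`, `license_terms`, and so on) are invented here, and the real protocol records these tokens on-chain rather than in memory. The core idea is that a content fingerprint anchors a token carrying publisher metadata and machine-readable license conditions.

```python
import hashlib
from dataclasses import dataclass

# Illustrative sketch of Verify-style registration; names and structure
# are assumptions, not the protocol's real interface.

@dataclass
class ContentToken:
    content_hash: str    # fingerprint of the registered text or image
    publisher: str       # party that registered the content
    metadata: dict       # e.g. headline, author, publication date
    license_terms: dict  # conditions a smart contract license could enforce

def register_content(content: bytes, publisher: str,
                     metadata: dict, license_terms: dict) -> ContentToken:
    """Derive a content fingerprint and bind it to metadata and a license."""
    digest = hashlib.sha256(content).hexdigest()
    return ContentToken(digest, publisher, metadata, license_terms)

article = b"Example article body, exactly as published."
token = register_content(
    article,
    publisher="Example News Corp",
    metadata={"headline": "Example", "published": "2024-01-10"},
    license_terms={"ai_training": "paid", "reproduction": "attribution"},
)
```

Because the token stores only a hash rather than the content itself, anyone holding the original bytes can later re-derive the fingerprint and check it against the registered record.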
Verify’s developers conceive these programmable licenses as a way for publishers to manage how automatic web scrapers access their content. As well as helping to ensure text and images are properly referenced, Verify smart contracts could also create new commercial opportunities for media publishers whose digital content is routinely scraped for use as AI training data.
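One way a compliant crawler might honor such a license is to look up the registered terms before ingesting a page. The policy values below (`"allowed"`, `"denied"`, `"paid"`) and the lookup table are purely illustrative assumptions, not part of Verify's specification:

```python
# Hypothetical machine-readable license records keyed by URL; in practice
# a crawler would query these terms from the on-chain registry.
LICENSES = {
    "example.com/article-1": {"ai_training": "denied"},
    "example.com/article-2": {"ai_training": "paid", "fee_usd": 0.05},
    "example.com/article-3": {"ai_training": "allowed"},
}

def may_scrape(url: str, budget_usd: float) -> bool:
    """Decide whether a crawler may ingest this URL as training data."""
    terms = LICENSES.get(url)
    if terms is None:
        return False  # no registered license: assume no permission
    policy = terms["ai_training"]
    if policy == "allowed":
        return True
    if policy == "paid":
        return terms["fee_usd"] <= budget_usd  # pay-per-use access
    return False  # "denied" or unrecognized policy

print(may_scrape("example.com/article-2", budget_usd=0.10))  # True
```

The "paid" branch is where the commercial opportunity lies: instead of a binary allow/deny, the license can price access, turning scraping into a revenue stream.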
With firms like OpenAI crawling huge swathes of the internet to feed their AI models’ insatiable appetite for content, smart contracts that automatically manage access rights offer media outlets like Fox a new way to generate revenue from digital assets.
The emergence of tools like Verify reflects a pattern of online publishers staking a claim to the wealth generated by AI developers whose products are often built using huge databases of copyright-protected material.
For example, the New York Times recently sued OpenAI for using its articles to train the large language models (LLMs) that power ChatGPT.
Accusing the technology firm of seeking to “free-ride” on the Times’ journalism, the lawsuit could potentially reshape the news industry’s relationship with AI developers. As the two companies prepare for a legal showdown, the case poses the question of whether or not using copyrighted material to train AI models should be considered fair use.
Of course, tracking the ownership and control of digital media isn’t just important for publishers looking to protect their business interests and intellectual property rights. It is also a concern for readers, listeners and viewers who want to identify whether a given source is trustworthy or not.
Blockchain platforms that use digital certificates to track goods as they change hands from one party to another have been used to verify the authenticity and provenance of everything from coffee beans to artworks, leveraging the immutability of on-chain records to increase transparency and trust.
Expanding the concept to digital media, Verify was developed against the backdrop of an online mediascape that has been rocked by the rise of fake news and AI-generated content in recent years. Using the new tool, “readers will know for sure that an article or image that purportedly comes from a publisher in fact originated at the source,” a Polygon blog post explained.
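The reader-side check implied by that quote is simple in principle: re-hash the content you received and compare it against the fingerprint the publisher registered. The sketch below is an assumption about the general mechanism, not Verify's actual client code:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute the content fingerprint a publisher would register."""
    return hashlib.sha256(content).hexdigest()

def is_authentic(fetched: bytes, registered_hash: str) -> bool:
    """True only if the fetched copy matches the registered record."""
    return fingerprint(fetched) == registered_hash

original = b"An article exactly as the publisher released it."
registered = fingerprint(original)  # stands in for the on-chain record

assert is_authentic(original, registered)                  # untouched copy passes
assert not is_authentic(original + b" edit", registered)   # any alteration fails
```

Because cryptographic hashes change completely under even a one-byte edit, this check catches both outright fabrications and tampered copies of genuine articles.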
Ultimately, such tools connect users’ need to verify the source and authenticity of content they discover online with the interests of publishers seeking more control over the use and distribution of their intellectual property. As AI continues to transform the digital realm, blockchains could play an important role in the emerging economy of authenticity, where original works of human creativity are increasingly valuable.