In late June, music became the latest frontier in the battle between AI companies and creators of original content. Reports emerged that major music labels, including Sony Music Entertainment, Warner Records, and Capitol Records, had filed copyright infringement suits against the music generation services Suno and Udio. The labels claim that artists' work was used to train the AI models in violation of IP law, and are seeking damages of up to $150,000 per song.
These allegations might be new to the music industry, but AI companies now employ armies of lawyers to handle such suits. The emergence of ChatGPT in late 2022 coincided with the rise of generative image tools, prompting a legal backlash from artists, writers, and news organizations, including Getty Images, the New York Times, and comedian and writer Sarah Silverman. Even open-source developers have sued GitHub, alleging that their code was copied and republished without the attribution its open-source licenses require.
One rule for them
Given the nascent nature of the technology and the fact that these cases are setting judicial precedents that could stand for decades, the legal arguments are likely to rumble on for some time to come. Much of that time will probably be spent debating fine-grained points about the precise similarities and differences between a human-created input and an AI-generated output.
However, from the consumer perspective, tech companies are leaving far broader ethical questions hanging in the air. These companies jealously – yet understandably – guard their algorithms as commercially sensitive trade secrets, but appear to have no boundaries when it comes to consuming and commercializing protected intellectual property.
As the legal fights become more high profile, involving more brands and influential names with loyal followings, choosing to engage with AI companies becomes more of an ethical decision. Anyone who uses AI models from a company in dispute with creatives over the use of their IP is arguably complicit in the infringement of artists' rights. From a legal perspective, the extent to which users could be held liable for using an AI-generated output remains relatively untested. If Joe Public uses AI to create images or text that replicate a piece of original work so closely that the original is clearly identifiable, is Joe Public responsible for spotting the possible infringement and refraining from using that work for commercial gain?
Regulation is not a panacea
Regulators are only just beginning to grapple with such questions. The EU AI Act has been lauded for its pragmatism, but it approaches AI from the perspective of risk and fails to address the IP problem, so other solutions are needed. Most of the lawsuits hinge on relatively straightforward demands from content creators: proper compensation and attribution for their work, to which they are entitled under existing IP law.
The transparency and crypto-economics of blockchain technology offer an elegant way to address the problem. One example is droppLink, a solution developed by droppGroup that enables operators of established AI models to tokenize them through its platform. Doing so provides tracking, tracing, and provenance of model activity, and creates an incentive for IP owners to offer their work to AI companies under commercial terms, since they will be compensated automatically via droppLink's system of smart contracts.
The platform operates at three levels in the generative AI process. A Proof of Generation layer validates the authenticity of the training data and the AI output itself, leveraging a decentralized validator network. This authentication feeds into the process of compensating the creator for using their work as an input. The data genesis layer automates the tokenization of AI training data and creates an immutable on-chain record of data origin and usage. Finally, each output is tokenized as a unique digital asset, and the payments to creators are executed automatically.
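The flow described above — register training data with its origin, validate which inputs fed a given output, then pay creators automatically — can be sketched in miniature. The following Python is an illustrative toy only: the class and method names (`ProvenanceLedger`, `tokenize_data`, `record_generation`) are hypothetical and do not reflect droppLink's actual API, and a real system would run on-chain with cryptographic validation rather than in-memory dictionaries.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical sketch; all names are illustrative, not droppLink's real interface.

@dataclass
class LedgerEntry:
    asset_id: str
    creator: str
    royalty: float  # amount owed to the creator each time the asset is used

class ProvenanceLedger:
    """Toy stand-in for an on-chain record of data origin, usage, and payouts."""

    def __init__(self) -> None:
        self.entries: Dict[str, LedgerEntry] = {}
        self.usage_log: List[str] = []
        self.balances: Dict[str, float] = {}

    def tokenize_data(self, asset_id: str, creator: str, royalty: float) -> None:
        # "Data genesis" step: register a piece of training data
        # together with its origin and commercial terms.
        self.entries[asset_id] = LedgerEntry(asset_id, creator, royalty)

    def record_generation(self, used_assets: List[str]) -> None:
        # "Proof of Generation" step: log which registered inputs contributed
        # to an output, then credit each creator automatically -- the
        # smart-contract analogue of droppLink's payment layer.
        for asset_id in used_assets:
            entry = self.entries[asset_id]
            self.usage_log.append(asset_id)
            self.balances[entry.creator] = (
                self.balances.get(entry.creator, 0.0) + entry.royalty
            )

ledger = ledger = ProvenanceLedger()
ledger.tokenize_data("song-001", "artist_a", 0.05)
ledger.tokenize_data("song-002", "artist_b", 0.05)
ledger.record_generation(["song-001", "song-002"])  # output drew on both songs
ledger.record_generation(["song-001"])              # output drew on one song
print(ledger.balances)
```

The key design point is that payment is a side effect of provenance: because every generation event names its inputs, compensation needs no separate invoicing step.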
Tech companies seeking to proactively address the ethical issues around content scraping have an opportunity to differentiate themselves by bringing their activities on-chain using tools like droppLink. Fair compensation for creators, along with transparency about how content and data are used, also reduces legal liability while demonstrating a commitment to ethical practice to regulators and lawmakers. It provides a more solid legal and moral foundation for the future development, promotion, and adoption of AI.
All images are generated by Eray Eliaçık/Bing