Deepfakes and Intellectual Property: The Disinformation Dilemma in Publicity Rights

What is a deepfake?

Deepfakes – artificial intelligence-driven tools capable of creating hyper-realistic digital replicas of individuals – have ushered in a novel set of challenges for intellectual property and the right of publicity. Historically, actors and public figures were safeguarded by their “right of publicity” (ROP), a patchwork of statutes and common law that prevents the unauthorized use of an individual’s likeness, voice, or name. ROP, however, varies state by state and is generally confined to commercial applications. Moreover, ROP’s intertwining with First Amendment rights adds another layer of complexity: creators of deepfakes cannot be barred merely for fabricating content – there must be an additional violation, such as defamation or trademark infringement.


Industry examples

Similar to other intellectual property rights, individuals can transfer or license their rights of publicity, sometimes even posthumously, leading to cases where deceased celebrities reappear in media through CGI or holographic representations. Yet, the rapid advancement of AI tools, like deepfakes, exposes a vulnerability. Unauthorized and realistic simulations of public figures are now accessible to anyone, posing significant challenges to the established IP framework.


Contract law remedies

Some actors, such as Keanu Reeves, have begun incorporating contract clauses that forbid digital manipulation of their performances. However, these contractual measures must be revised and redrafted often to keep pace with the onslaught of unsanctioned deepfakes. Victims, facing limited remedies under contract law, frequently turn to internet service providers (ISPs) to have manipulated content removed. ISPs rely on tools such as user reporting systems, automated detection, moderator reviews, DMCA takedown notices, and appeal processes.


Social media response

Platforms like YouTube, TikTok, and Meta have implemented strict policing policies. YouTube requires disclaimers for altered content; TikTok bans forgeries that “distort the truth of events” or cause harm unless the content is “clearly disclosed” or labeled as AI-generated; and Meta continues to invest in deepfake detection tools to verify content authenticity. However, without overarching federal regulation, there is growing concern that deepfakes will soon become indistinguishable from authentic content, eluding detection by these platforms.


Legislative call to action

In response to the challenges posed by deepfakes, some advocates argue that stronger regulation is necessary. Such legislation might establish a federal standard that emphasizes personal autonomy while still allowing for limited financial exploitation – shifting the balance away from a purely economic focus.


Category: Intellectual Property