A celebrity’s worries about the public no longer end with paparazzi or leaked scripts and songs. These days, they lose sleep over the thought of waking up to find their voice narrating a product they never endorsed, speaking a language they don’t know, or starring in a video they never acted in (what a nightmare, eh?).
For now, the focus is on voice acting: advanced AI voice cloning tools like ElevenLabs, Respeecher, and PlayHT can create a realistic version of a celebrity’s voice from less than a minute of source audio.
These clones replicate emotion, cadence, inflection, and even a speaker’s hesitation patterns with ease. What used to be the domain of high-end Hollywood studios is now available to anyone with a credit card and a voice sample.
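To make the low barrier to entry concrete, here is a minimal sketch of the workflow these services expose. It is built around a hypothetical stand-in class, not any vendor’s actual SDK; every name in it is invented for illustration.

```python
# Illustrative sketch only: HypotheticalVoiceCloner is a stand-in, not
# the real API of ElevenLabs, Respeecher, or PlayHT. It shows the shape
# of the typical workflow: upload a short sample, get a voice ID, then
# synthesize arbitrary text in that voice.
from dataclasses import dataclass, field
import uuid


@dataclass
class HypotheticalVoiceCloner:
    """Stand-in for a commercial voice-cloning client."""
    api_key: str
    voices: dict = field(default_factory=dict)

    def clone_voice(self, name: str, reference_audio: bytes) -> str:
        # Real services fit a voice model here; a sample of under a
        # minute is often enough for a convincing clone.
        voice_id = str(uuid.uuid4())
        self.voices[voice_id] = name
        return voice_id

    def synthesize(self, voice_id: str, text: str) -> bytes:
        # Real services return `text` spoken in the cloned voice,
        # complete with emotion, cadence, and inflection.
        return f"<audio of {self.voices[voice_id]} saying: {text}>".encode()


client = HypotheticalVoiceCloner(api_key="demo-key")
voice_id = client.clone_voice("demo-voice", reference_audio=b"45s of interview audio")
fake_ad = client.synthesize(voice_id, "I fully endorse this product.")
```

The point of the sketch is how little the caller needs: a short audio file and two method calls stand between a voice sample and words the speaker never said.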
Celebrities do not find these cloning tools amusing, and many are lawyering up. What feels like tech magic to regular users looks more like identity theft to public figures, whose voices are part of their brand and livelihood.
The creative nightmare called voice cloning
In 2024, Scarlett Johansson threatened legal action after OpenAI released a ChatGPT voice assistant that resembled her too closely without her involvement. She wasn’t the first to push back, and she won’t be the last: over the last 18 months, multiple A-list names have publicly drawn the line on voice cloning.
When an AI version of him was used to promote a dental plan in 2023, Tom Hanks was quick to speak out and let the world know that he had “nothing to do with it.” AI voice cloning can be so realistic that a fake Drake and The Weeknd song created by AI gained millions of streams before being pulled over copyright concerns. The track was a hit, but neither artist had given consent, and neither profited from its virality.
Taylor Swift is another singer who has been a victim of the voice cloning trend. Shortly before the release of her album “The Tortured Poets Department” last year, some so-called leaked songs made their way across social media.
By the time the album was actually released, it was undeniably clear to everyone who had fallen for the hoax that the “leaked” tracks were AI-generated.

How celebrities are fighting back
The fact that these clones emulate emotional states and speech disfluencies makes the technology attractive for media creators but legally volatile when used without consent. During the 2023 SAG-AFTRA strike, protections around voice likeness were a key issue. The union pushed for contract language that would bar studios from using AI-generated performances (especially posthumous ones) without consent and compensation. As part of the final agreement, actors now have a degree of control over how their vocal identity is used or recreated.
This isn’t just about celebrity ego. When a brand, movie trailer, or political campaign can feature someone’s cloned voice without their involvement, their reputation and income are directly at risk. And because these tools are public-facing, it’s not just major studios — anyone can misuse a voice.
Are there any control efforts?
Platforms like ElevenLabs have tried to bring in licensing frameworks. Its Voice Library allows verified voice actors to license their voices for commercial use; in this creator-forward system, users retain rights to their voices, and each request for use is tracked and monetized.
Similarly, Replica Studios has developed a paid synthetic voice service, where developers can purchase performances from a curated library of approved voices.
But enforcement remains weak. Even with Terms of Service that prohibit unauthorized use, these platforms largely rely on self-policing. If someone uploads and clones a celebrity voice without consent, it often requires manual reporting or legal escalation to remove it.
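For a sense of what such a framework implies in code, here is a toy sketch of a consent-and-usage ledger. It is illustrative only: names like ConsentRegistry are invented, and this does not reflect how ElevenLabs or Replica Studios actually implement licensing.

```python
# Toy consent-and-tracking layer a voice-licensing marketplace implies.
# All names are illustrative, not any platform's real implementation.
from dataclasses import dataclass, field


@dataclass
class VoiceLicense:
    voice_id: str
    owner: str
    allowed_uses: set            # e.g. {"audiobook", "game"}
    fee_per_use: float
    usage_log: list = field(default_factory=list)


class ConsentRegistry:
    def __init__(self):
        self._licenses = {}

    def register(self, lic: VoiceLicense):
        self._licenses[lic.voice_id] = lic

    def request_use(self, voice_id: str, requester: str, use: str) -> bool:
        """Grant use only if a license exists and covers this use case."""
        lic = self._licenses.get(voice_id)
        if lic is None or use not in lic.allowed_uses:
            return False  # no consent on file: deny the request
        # Consent exists: record who used the voice, for what, at what fee.
        lic.usage_log.append((requester, use, lic.fee_per_use))
        return True


registry = ConsentRegistry()
registry.register(VoiceLicense("v-001", "Jane Actor", {"audiobook"}, 0.02))
assert registry.request_use("v-001", "studio-x", "audiobook")        # licensed
assert not registry.request_use("v-001", "studio-x", "political_ad")  # denied
```

The hard part, as the enforcement gap above suggests, isn’t the ledger itself; it’s that nothing forces an uploader to go through it.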
Your favorite celebrity, reimagined as your AI sidekick?
There’s growing interest in using cloned voices not just in static media but in interactive systems. AI companions, virtual advisors, and “relationship bots” are becoming a commercial niche. Already, services like Candy AI and Character AI are using text-to-speech to power deeply personalized voice chats. Some even integrate features detailed enough to simulate expressive conversations, and it’s not hard to imagine a version where the voice of a celebrity, legally licensed, powers one of these systems.
This opens up a potential new market for celebrity voice licensing: character-driven voice work that lives outside of film, music, or traditional endorsements. It’s a step toward turning voice into a persistent, licensable identity layer.
But for that to scale legally, platforms would need secure attribution, smart contracts, consent management, and watermarking standards. Currently, none of those exist at an industry-wide level, and until there’s a clear framework, the use of famous voices in AI companions remains an ethical and legal gray zone.
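As an illustration of the watermarking idea, here is a minimal sketch of one naive approach: least-significant-bit embedding in 16-bit PCM audio. Real watermarking proposals are far more robust to compression and editing; this only demonstrates the principle of carrying a license ID inside the signal itself. The function names and the LIC-2025-0042 identifier are invented for the example.

```python
# Naive least-significant-bit (LSB) watermark on 16-bit PCM audio:
# one simple way to embed a traceable license ID in synthetic speech.
import numpy as np


def embed_id(samples: np.ndarray, license_id: str) -> np.ndarray:
    """Hide license_id's bits in the LSB of successive samples."""
    bits = np.unpackbits(np.frombuffer(license_id.encode(), dtype=np.uint8))
    marked = samples.copy()
    # Clear each target sample's lowest bit, then write one ID bit into it.
    marked[: len(bits)] = (marked[: len(bits)] & ~1) | bits
    return marked


def extract_id(samples: np.ndarray, n_chars: int) -> str:
    """Read the embedded license ID back out of the LSBs."""
    bits = (samples[: n_chars * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode()


# One second of fake 16 kHz audio standing in for synthesized speech.
audio = np.random.randint(-32768, 32767, 16_000, dtype=np.int16)
marked = embed_id(audio, "LIC-2025-0042")
assert extract_id(marked, len("LIC-2025-0042")) == "LIC-2025-0042"
```

An LSB mark like this is inaudible but trivially destroyed by re-encoding, which is exactly why industry-wide standards, rather than one-off schemes, are the missing piece.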

Voice as intellectual property
For the average person, voice is part of self-expression. For celebrities, it’s something more: an asset, a brand, a battleground. As AI tools get better at imitation, we are forced to confront what ownership means in the digital age.
Can your voice be rented? Can it be stolen?
What’s clear is that synthetic speech isn’t going away. The question now is who gets to profit from it and under what terms. Whether through licensing marketplaces, federal legislation, or collective bargaining, voice is becoming less of a human trait and more of a commodity. In that light, celebrity hesitation isn’t paranoia. It’s good business.