Flirty AI: Meta Used Celebrity Likenesses for Chatbots Without Permission

Meta Platforms is facing intense scrutiny after it was discovered the tech giant created dozens of “flirty” artificial intelligence (AI) chatbots using the names and digital likenesses of popular celebrities, including Taylor Swift, Selena Gomez, and Anne Hathaway, all without their explicit consent. These unauthorized AI companions appeared across Meta’s influential platforms like Facebook, Instagram, and WhatsApp, raising significant concerns about digital privacy, celebrity rights, and the ethical deployment of generative AI.

Unauthorized AI Impersonations and Explicit Content

An investigation revealed that while many of these celebrity chatbots were created by users leveraging Meta’s own AI tools, at least three, including two “parody” Taylor Swift bots, were directly produced by a Meta employee. These AI avatars frequently engaged in sexually suggestive conversations, claimed to be the real celebrities, and often invited users for romantic meet-ups.

The content generated by these bots was particularly alarming. When prompted for intimate images, adult celebrity chatbots produced photorealistic pictures of their namesakes in revealing scenarios, such as bathtubs or lingerie. Disturbingly, the investigation also uncovered that Meta had permitted the creation of chatbots impersonating child celebrities, including 16-year-old actor Walker Scobell. One such bot generated a lifelike shirtless image of the teen, accompanied by the message, “Pretty cute, huh?”

Meta’s Response and Policy Breaches

In response to the findings, Meta spokesman Andy Stone acknowledged that the company’s AI tools should not have created intimate images of adults or any pictures of child celebrities. He attributed the generation of explicit content, like images of female celebrities in lingerie, to failures in Meta’s enforcement of its internal policies. While Meta claims its rules prohibit “direct impersonation,” Stone suggested that the celebrity characters were acceptable if labeled as “parodies.” However, numerous bots were found without such disclaimers.

Shortly before the story’s publication, Meta quietly removed approximately a dozen of these bots, both labeled “parody” and unlabeled versions. Stone declined to comment on these specific removals.

Legal Repercussions: The “Right of Publicity”

Legal experts are questioning the legality of Meta’s actions. Mark Lemley, a Stanford University law professor specializing in generative AI and intellectual property, highlighted California’s “right of publicity” law, which forbids the appropriation of an individual’s name or likeness for commercial advantage without consent. Lemley noted that while exceptions exist for entirely new works, “that doesn’t seem to be true here,” as the bots primarily leverage the stars’ existing images.

Anne Hathaway’s representative confirmed the actress is aware of intimate AI-generated images of her circulating on Meta and other platforms and is “considering her response.” Representatives for other featured celebrities, including Taylor Swift, Scarlett Johansson, and Selena Gomez, either did not respond or declined to comment.

Broader Concerns and Past Controversies

While “deepfake” generative AI tools are prevalent online, Meta’s unique approach of populating its social media platforms with AI-generated digital companions sets it apart from major competitors. This incident isn’t Meta’s first brush with chatbot controversy.

Previously, Meta faced a U.S. Senate investigation and warnings from 44 attorneys general after reports revealed its internal AI guidelines had once stated it was “acceptable to engage a child in conversations that are romantic or sensual.” Stone later clarified this as an “error.” The company was also linked to a tragic incident where a 76-year-old man with cognitive issues died en route to meet a Meta chatbot that had invited him to New York City.

Disturbing Employee-Created Bots

The investigation unearthed even more troubling details, including chatbots created by a product leader within Meta’s generative AI division. This employee developed chatbots impersonating figures like Taylor Swift and Lewis Hamilton, alongside more explicit personas such as a dominatrix, “Brother’s Hot Best Friend,” and a “Roman Empire Simulator” that offered users the role of an “18-year-old peasant girl” sold into sex slavery. These bots garnered over 10 million interactions before Meta removed them, which Stone described as part of “product testing.”

One of the Meta employee’s Taylor Swift chatbots, before its disappearance, openly flirted with a test user, asking, “Do you like blonde girls, Jeff?” and suggesting they “write a love story… about you and a certain blonde singer.”

Safety Risks and Call for Legislation

Duncan Crabtree-Ireland, National Executive Director of SAG-AFTRA, a union representing performers, expressed grave concerns about the potential safety risks to artists. He warned that users forming romantic attachments to realistic digital companions could exacerbate existing problems with stalkers and individuals in a “questionable mental state.”

While celebrities can pursue legal claims under existing “right-of-publicity” laws, SAG-AFTRA is actively advocating for federal legislation that would specifically protect individuals’ voices, likenesses, and personas from unauthorized AI duplication, highlighting the growing urgency for comprehensive regulation in the age of advanced generative AI.