TikTok has removed a series of controversial AI-generated videos featuring a fictional character named “Josh,” who made racially charged statements about the Canadian job market and immigration. The platform cited violations of its community guidelines on the clear disclosure of synthetic media, rather than the inflammatory content itself.
The videos depicted what appeared to be a young white man, “Josh,” lamenting the difficulty of finding employment in Canada. In one widely circulated segment, he claimed he couldn’t secure a job at Tim Hortons because “people from India have taken them all,” even suggesting he was asked if he spoke Punjabi during an application process. Tim Hortons has expressed significant frustration and concern over these deceptive videos.
Further deepening the deception, another video showed “Josh” questioning Canada’s immigration policies, asking why so many people are admitted when job opportunities are scarce. These clips are part of a growing, unsettling trend known as “fake-fluencing,” in which companies create fictional AI personas to subtly or overtly promote products or services as if endorsed by a real individual.
The Company Behind the Deception
The company responsible for creating “Josh” is Nexa, an AI firm specializing in software for recruitment. Divy Nayyar, Nexa’s founder and CEO, admitted to creating the “Josh” persona to “have fun” with the sentiment that “Indians are taking over the job market.” He intended for the character to resonate with young, unemployed individuals. Some of the videos even subtly featured Nexa logos, which Nayyar described as “subconscious placement” of advertising.
Marketing experts, however, have strongly condemned the campaign. Markus Giesler, a marketing professor at York University, labeled it “highly, highly problematic and highly, highly unethical.” He noted that such polarizing storytelling is typically associated with extremist groups, making its use by a company to attract clients unprecedented and irresponsible.
Advanced AI: Harder to Detect
These deceptive videos were made easier to produce by advances in AI technology, most notably Google’s Veo video-generation software, along with other tools. The latest iteration, Veo 3, released in May, generates videos from text prompts with a level of realism far surpassing previous versions. Earlier tell-tale signs of AI, such as extra fingers or unnatural movements, are now rare. The audio is often indistinguishable from human voices, with synchronized lip movements that were once a major challenge for AI video generators.
Despite the sophistication, some astute TikTok users did identify the videos as AI-generated in the comments. However, many others believed “Josh” was a real person and engaged with his racist messaging, some even receiving replies from the fake character defending his claims. Marvin Ryder, an associate professor of marketing at McMaster University, initially believed the character was real, highlighting the alarming potential for future fakery to become undetectable. “How are we as consumers of social media… supposed to discern reality from fiction?” he questioned.
TikTok’s Stance on AI Disclosure
TikTok did not comment on the inflammatory nature of the videos’ content. Its policy requires AI-generated content that depicts realistic scenes or people to be clearly marked with a label, caption, watermark, or sticker. The platform determined that Nexa’s “Josh” videos, despite a faint Google Veo watermark, lacked sufficient clarity regarding their artificial origin. TikTok clarified that it does not automatically label all AI-generated content; creators are expected to provide the disclosure.
The incident underscores a growing concern among experts like Professor Giesler, who warns that the ease of creating realistic videos with hateful messages through AI is a problem that will intensify. He emphasized that such an “irresponsible utilization of emotional branding tactics” should not be condoned, pointing to the urgent need for robust ethical guidelines and platform enforcement in the rapidly evolving landscape of online deception and digital avatars.