Startling claims have emerged from an unredacted court filing, alleging that Meta, the parent company of Facebook and Instagram, maintained a lenient “17-strike” policy for accounts involved in sex trafficking. This revelation comes from Vaishnavi Jayakumar, Meta’s former head of safety and well-being, whose testimony suggests the tech giant repeatedly prioritized user engagement over critical safety measures.
According to Jayakumar’s deposition, accounts engaged in the “trafficking of humans for sex” were allegedly permitted 16 violations for “prostitution and sexual solicitation” before facing suspension. “That means that you could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” she stated. Jayakumar described this as a “very high strike threshold” by “any measure across the industry,” with lawyers claiming internal documentation corroborates this policy.
The unredacted filing also brings to light other disturbing accusations. It claims that Instagram initially “did not have a specific way” for users to report child sexual abuse material (CSAM). When Jayakumar raised this critical issue “multiple times,” she was reportedly told it would be “too much work to build” such a reporting mechanism and to review the resulting reports.
These allegations surface amidst Meta’s escalating legal and regulatory challenges concerning child safety on its platforms. The unredacted filing is part of a substantial lawsuit filed against Meta, TikTok, Google, and Snapchat by numerous school districts, attorneys general, and parents. The lawsuit contends that these platforms are contributing to a “mental health crisis” through their “addictive and dangerous” nature. Notably, Meta CEO Mark Zuckerberg previously stated his belief that there is “no causal connection” between social media use and teen mental health.
Further accusations in the filing underscore a pattern of Meta downplaying platform harms in favor of boosting engagement metrics. In 2019, Meta reportedly considered making all teen accounts private by default to prevent unwanted messages, but allegedly rejected the proposal after its growth team determined it would “likely smash engagement.” While Meta eventually began placing teens on Instagram into private accounts last year, the earlier decision highlights a potential conflict of interest.
The lawsuit also claims that Meta researchers discovered hiding “likes” on posts would make users “significantly less likely to feel worse about themselves.” Despite this, the company reportedly reversed these plans after finding it was “pretty negative to FB metrics.” Similarly, Meta is accused of reinstating beauty filters in 2020, even after internal findings showed they were “actively encouraging young girls into body dysmorphia,” allegedly fearing a “negative growth impact” if the filters were removed.
In response to these serious allegations, Meta spokesperson Andy Stone issued a statement: “We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture.” Stone asserted that “The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens — like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences.”