Meta Accused of Silencing Internal Study Linking Facebook Use to Mental Health Decline


Explosive new court filings allege that Meta, the parent company of Facebook,
deliberately shut down an internal research project that revealed a clear
link between Facebook usage and deteriorating mental health. These
unsealed documents paint a picture of a company that allegedly dismissed
its own findings while downplaying significant risks to its younger users,
particularly teenagers.

Project Mercury: Unearthing Undesirable Truths

In 2020, Meta initiated an internal research effort dubbed “Project Mercury.”
This ambitious study, conducted in collaboration with market research firm
Nielsen, involved a survey-based experiment where selected Facebook users
were asked to deactivate their accounts for one week. The newly unsealed
discovery documents reveal a striking outcome: participants who took a break
from Facebook reported notably lower levels of depression, anxiety,
loneliness, and negative social comparison.

However, instead of publishing these critical findings or commissioning
further investigation, Meta’s leadership reportedly terminated the project.
Internal correspondence, now cited in a sweeping lawsuit brought by numerous
U.S. school districts, indicates that the company questioned the study’s
validity. Critics within Meta, as revealed by the documents, attributed the
results to what they termed “prevailing negative press” surrounding social media.

Internal Dissent and Tobacco Industry Parallels

The internal response to Project Mercury’s termination was not unanimous.
At least one researcher involved in the study reportedly argued internally
that the survey provided strong evidence of a causal link between Facebook
use and social comparison stress. Disturbingly, internal discussions even
drew parallels between Meta’s handling of this sensitive data and the
historical efforts of the tobacco industry to obscure the health risks
associated with cigarette use.

The lawsuit further claims that Meta deliberately withheld these findings
from Congress. During this period, the company maintained that it possessed
no definitive evidence of harm to its teenage users, especially girls –
a position starkly contradicted by its own internal research.

Beyond the Buried Study: Allegations of Negligence in Design

The court filing extends its accusations beyond the suppressed study,
delving into intricate technical descriptions of Meta’s product design and
safety policy decisions. It outlines how many youth safety tools were
allegedly engineered in ways that rendered them largely ineffective or
rarely used. Internal tests of more restrictive features, designed to
protect younger users, were reportedly blocked due to concerns that they
might negatively impact crucial user-growth metrics.

Meta also stands accused of maintaining an exceptionally high “strike threshold”
for serious offenses, including sexual exploitation. This policy allegedly
required users to be flagged more than a dozen times before being permanently
removed from the platform.

Moreover, the documents detail how engagement-optimization algorithms,
specifically tailored for teens, reportedly amplified the spread of content
with detrimental psychological effects. This included material directly
linked to serious issues like body dysmorphia and eating disorders. Attempts
by safety engineers to mitigate these risks were, according to the filing,
deprioritized, with leadership allegedly framing inaction as a “business necessity.”

Even concerning privacy, internal emails reportedly show Meta executives
discussing how increasing privacy settings for teenage live video streams
could help retain youth engagement, potentially at the expense of parental
or educational oversight.

Meta’s Defense: Methodological Flaws and Effective Safeguards

Meta spokesperson Andy Stone has vehemently denied the allegations that the
company intentionally concealed negative findings. Stone stated that the
Facebook deactivation study was abandoned due to “methodological flaws,”
not because of its conclusions.

Speaking to Reuters, Stone emphasized that Meta has dedicated years to
developing parental controls and teen safeguards, describing the resulting
safety technology as “broadly effective.” Meta contends that the documents
referenced in the complaint are being taken out of context and that its
motion to strike them pertains to privacy and redaction procedures, not to
an attempt to suppress evidence.

Wider Litigation: A Multi-Platform Challenge

The plaintiffs, which include U.S. school districts, are targeting Meta
alongside Google, Snap, and TikTok, accusing them of systematically
downplaying known risks to young users. The filings also reference internal
communications suggesting that several companies sponsored third-party
child advocacy groups that then publicly endorsed the platforms'
safety practices.

This information, released in a Northern California federal court filing,
is part of a much larger multidistrict litigation effort. Plaintiffs allege
that these major platforms not only engineered addictive, engagement-driven
algorithms but also provided less robust safety protections to U.S. teens
compared to children in other countries.

For instance, internal presentations at TikTok reportedly described offering
“spinach”—healthier, more educational content—to children on its Chinese
app, Douyin, while American children received “opium,” a label for hyper-engaging
and potentially harmful content. TikTok disputes both the accuracy
and characterization of these comments.

Similarly, Snapchat’s internal emails reportedly acknowledged that features
like “Snapstreaks,” which incentivize daily communication, effectively
cultivated compulsive behavior among teens.

Precedent-Setting Case Looms

Despite mounting legislative pressure, the defendant platforms continue to
argue that their algorithmic feeds and youth-targeted features are designed
with safety in mind and that their policies are informed by external experts.
Major platforms, including Meta and Google, are currently challenging new
California regulations that would restrict algorithmic recommendations to
minors without explicit parental consent.

The outcome of these complex cases could establish significant legal
precedents for how internal research is disclosed and regulated across
the entire tech industry. With the next hearing scheduled for January 26
in the U.S. District Court for the Northern District of California,
regulators, researchers, and industry observers are closely watching to see
how much of Silicon Valley’s internal decision-making may soon be forced
into public view.