A recent incident involving Substack’s recommendation algorithm has ignited a heated debate about the platform’s content moderation policies and its perceived tolerance of extremist material. Users were reportedly shocked to receive push notifications promoting a neo-Nazi newsletter, prompting questions about the platform’s underlying systems and its commitment to “free speech.”
Algorithmic ‘Error’ or Inevitable Outcome?
The controversy centers on a push notification sent to various Substack users, encouraging them to subscribe to “NatSocToday.” This newsletter explicitly describes itself as “a weekly newsletter featuring opinions and news important to the National Socialist and White Nationalist Community.” Disturbingly, the notification prominently displayed the newsletter’s swastika logo, causing immediate alarm and confusion among recipients. Many users expressed bewilderment, with one exclaiming, “wtf is this? Why am I getting this?” and promptly blocking the content.
Substack’s response was swift, albeit familiar. A company spokesperson issued a statement to User Mag, claiming, “We discovered an error that caused some people to receive push notifications they should never have received. In some cases, these notifications were extremely offensive or disturbing. This was a serious error, and we apologize for the distress it caused.”
The Algorithm Doesn’t Lie: Unpacking the ‘Accident’
However, critics argue that such algorithmic “errors” are rarely random; they are a window into what a system has learned. Recommendation algorithms are designed to surface content that performs well on engagement metrics: subscriber counts, likes, comments, and growth trajectories. When content, however offensive, consistently hits those indicators, the algorithm learns to treat it as successful and to promote it accordingly.
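To make that mechanism concrete, here is a minimal sketch of a content-blind engagement ranker. Everything in it is an assumption for illustration: the signal names, the weights, and the `Newsletter` fields are invented, since Substack has not published its ranking model. The structural point it demonstrates is that no part of the scoring function inspects what a publication says, only how audiences respond to it.

```python
from dataclasses import dataclass

@dataclass
class Newsletter:
    # All fields are illustrative; the real feature set is not public.
    name: str
    subscribers: int
    likes: int
    comments: int
    weekly_growth: float  # fraction of new subscribers gained this week

def engagement_score(n: Newsletter) -> float:
    """Hypothetical ranking score: a weighted sum of engagement signals.

    The weights are invented. The key property is that nothing here
    reads the content itself, only the audience's reaction to it.
    """
    return (
        0.4 * n.subscribers
        + 2.0 * n.likes
        + 3.0 * n.comments
        + 5_000.0 * n.weekly_growth
    )

# A content-blind ranker promotes whatever engages, so an offensive
# newsletter with strong numbers outranks a benign one with weak numbers.
candidates = [
    Newsletter("quiet_recipes", 300, 12, 3, 0.01),
    Newsletter("extremist_weekly", 746, 400, 90, 0.08),
]
for n in sorted(candidates, key=engagement_score, reverse=True):
    print(f"{n.name}: {engagement_score(n):.1f}")
```

Fed figures like those reported in this incident, a scorer of this shape ranks the extremist newsletter first on engagement alone, and any notification pipeline sitting downstream of it would do exactly what users observed.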
Evidence from the incident supports this view. The “NatSocToday” newsletter, despite its controversial nature, boasts 746 subscribers and hundreds of likes on Substack Notes. Furthermore, users who clicked on the initial promotion were subsequently recommended “related content from another Nazi newsletter called White Rabbit,” which has amassed over 8,600 subscribers and is reportedly featured on Substack’s “rising” leaderboard. This suggests that these publications are not merely existing on the platform but are actively thriving and gaining traction within its ecosystem.
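The “related content” behavior users described also has a simple plausible mechanism. One common approach, assumed here purely for illustration since Substack has not disclosed its method, is co-engagement counting: if many readers interact with both publication A and publication B, B becomes a “related” recommendation for readers of A. The pairing is driven entirely by overlapping audiences, never by subject matter.

```python
from collections import Counter
from itertools import combinations

# Hypothetical interaction log: which publications each user clicked.
# All names and data are invented for illustration.
user_clicks = {
    "user_1": {"pub_a", "pub_b"},
    "user_2": {"pub_a", "pub_b"},
    "user_3": {"pub_a", "pub_c"},
}

# Count how often each pair of publications shares a reader.
co_engagement: Counter = Counter()
for pubs in user_clicks.values():
    for pair in combinations(sorted(pubs), 2):
        co_engagement[pair] += 1

def related(pub: str) -> list[str]:
    """Publications most often co-engaged with `pub`, strongest overlap first."""
    scores = {
        (b if a == pub else a): count
        for (a, b), count in co_engagement.items()
        if pub in (a, b)
    }
    return sorted(scores, key=scores.get, reverse=True)

# Readers of pub_a overlap most with pub_b, so pub_b is recommended first.
print(related("pub_a"))  # ['pub_b', 'pub_c']
```

Under a scheme like this, a cluster of mutually engaged extremist readers is enough to chain one such newsletter to the next, which matches the reported experience of users who clicked the initial promotion.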
Substack’s Content Stance Under Scrutiny
This incident brings Substack’s long-standing content moderation philosophy into sharp focus. As early as April 2023, Substack CEO Chris Best notably declined to explicitly state whether the platform would prohibit racist content. By December of the same year, the company had reportedly doubled down, confirming its intention to host and monetize newsletters propagating neo-Nazi ideologies. Critics argue that this stance effectively signaled a “Nazis Welcome” policy, inevitably shaping the platform’s content landscape.
The core of the current criticism lies in the distinction between passively hosting content and actively promoting it. While Substack previously defended its approach by claiming to provide infrastructure for broad discourse, push notifications and algorithmic recommendations transcend mere hosting. They represent active editorial decisions—choices about which content deserves amplification and which users might be interested in it. In essence, these actions move beyond facilitating speech to actively endorsing and propagating specific narratives.
While the First Amendment protects Substack’s right to make such editorial choices, including promoting controversial content, observers contend that the platform should stop pretending these promotions are accidental. When a platform cultivates an environment where neo-Nazi content flourishes, its algorithms will naturally reflect that success. The current situation reveals that Substack is not merely saying “Nazis welcome” but, through its systems, effectively stating, “we believe other people will also appreciate this content.”
The End of Pretense
This episode is the logical culmination of Substack’s permissive content policies. A platform that markets itself as a refuge from “censorship” for extremist viewpoints, and that actively helps their creators monetize, cannot credibly express surprise when its internal systems recognize and amplify the success of that content. The company’s apologies, however sincere, do little to alter what its algorithms have revealed: a platform that embraces and enables controversial content will inevitably see its systems drive more engagement toward those very sources.
Ultimately, a platform’s reputation is forged not just by what it tolerates, but profoundly by what it chooses to actively promote. The public, in turn, retains its own freedom of speech and association, with every right to critically examine and potentially withdraw support from platforms whose content decisions conflict with their values.