UK:  X’s design and policy choices created fertile ground for inflammatory, racist narratives targeting Muslims and migrants following Southport attack 

Source: Amnesty International

Social media platform X, formerly Twitter, played a central role in the spread of false narratives and harmful content that contributed to racist violence against Muslim and migrant communities in the UK, following the tragic murder of three young girls in the town of Southport, Amnesty International has established in a technical explainer published today.

A technical analysis of X’s open-source code (or publicly available software) reveals that its recommender system (or content-ranking algorithms), which drives the “For You” page, systematically prioritizes content that sparks outrage and provokes heated exchanges, maximizing reactions and engagement, without adequate safeguards to prevent or mitigate harm.

“Our analysis shows that X’s algorithmic design and policy choices contributed to heightened risks amid a wave of anti-Muslim and anti-migrant violence observed in several locations across the UK last year, and which continues to present a serious human rights risk today,” 

said Pat de Brún, Head of Big Tech Accountability at Amnesty International.

On 29 July 2024, three young girls – Alice Dasilva Aguiar, Bebe King and Elsie Dot Stancombe – were murdered, and 10 others injured, by 17-year-old Axel Rudakubana. Within hours of the attack, misinformation and falsehoods about the perpetrator’s identity, religion, and immigration status flooded social media platforms, and were prominent on X. 

Amnesty International’s analysis of X’s open-source recommender algorithm uncovered systemic design choices that favour contentious engagement over safety. 

X’s own source code, published in March 2023, shows that its algorithmic ranking system may prioritize falsehoods, irrespective of their harmfulness, and surface them in timelines more quickly than verified information. X’s “heavy ranker” model – the machine-learning system that decides which posts get promoted – prioritizes “conversation”, regardless of the nature of the content. As long as a post drives engagement, the algorithm appears to have no mechanism for assessing its potential to cause harm – at least not until enough users report it. These design features provided fertile ground for inflammatory racist narratives to thrive on X in the wake of the Southport attack.
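For illustration, the dynamic described above can be reduced to a minimal sketch of engagement-weighted scoring, written here in Python. The signal names and weights below are assumptions chosen for the example, not the values in X’s published code, and the real heavy ranker is a trained machine-learning model rather than a fixed formula; the point of the sketch is simply that no harm-related input enters the score.

```python
# Minimal sketch of engagement-weighted ranking. The signal names and
# weights are illustrative assumptions, not the figures in X's source code.

ENGAGEMENT_WEIGHTS = {
    "p_reply": 13.5,                       # predicted probability the viewer replies
    "p_retweet": 1.0,                      # predicted probability of a repost
    "p_like": 0.5,                         # predicted probability of a like
    "p_author_engages_with_reply": 75.0,   # reply that draws the author back in
}

def score_post(predictions: dict[str, float]) -> float:
    """Rank a candidate post purely by predicted engagement.

    Note what is absent: there is no input describing whether the post is
    false, hateful or otherwise harmful. A post that provokes a storm of
    replies scores highly regardless of its content.
    """
    return sum(ENGAGEMENT_WEIGHTS[signal] * predictions.get(signal, 0.0)
               for signal in ENGAGEMENT_WEIGHTS)

# An inflammatory falsehood that provokes replies can outrank a sober,
# verified post that is merely liked.
inflammatory = {"p_reply": 0.20, "p_retweet": 0.05, "p_like": 0.05,
                "p_author_engages_with_reply": 0.10}
verified_news = {"p_reply": 0.02, "p_retweet": 0.05, "p_like": 0.15}

print(score_post(inflammatory))   # ~10.3
print(score_post(verified_news))  # ~0.4
```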

 
Further analysis of the system also uncovered built-in amplification biases favouring “Premium” (formerly Blue) verified subscribers, whose posts are automatically promoted over those of ordinary users (a mechanism sketched below). Before authorities had shared an official account of events, false statements and Islamophobic narratives about the incident began circulating on social media. Hashtags such as #Stabbing and #EnoughisEnough were used to spread claims falsely suggesting the attacker was a Muslim and/or an asylum-seeker.
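The amplification bias referred to above can be illustrated, again in Python, as a flat boost layered on top of the base ranking score. The multiplier values and function name are assumptions chosen for the example, not the figures in X’s published code.

```python
# Minimal sketch of a subscriber boost applied after base ranking.
# The multipliers below are assumptions for illustration only.

PREMIUM_BOOST_FOLLOWED = 4.0      # assumed boost when the viewer follows the author
PREMIUM_BOOST_NOT_FOLLOWED = 2.0  # assumed boost for out-of-network posts

def apply_premium_boost(base_score: float,
                        author_is_premium: bool,
                        viewer_follows_author: bool) -> float:
    """Scale a post's ranking score upward if the author pays for Premium.

    The boost is applied uniformly: it does not depend on whether the post
    is accurate, so a paying account spreading a false claim gets the same
    head start as any other subscriber.
    """
    if not author_is_premium:
        return base_score
    multiplier = (PREMIUM_BOOST_FOLLOWED if viewer_follows_author
                  else PREMIUM_BOOST_NOT_FOLLOWED)
    return base_score * multiplier

# A Premium account's post doubles or quadruples its score relative to an
# identical post from an ordinary user.
print(apply_premium_boost(10.0, author_is_premium=True, viewer_follows_author=False))   # 20.0
print(apply_premium_boost(10.0, author_is_premium=False, viewer_follows_author=False))  # 10.0
```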

An account on X called “Europe Invasion”, known to publish anti-immigrant and Islamophobic content, posted shortly after news of the attack emerged that the suspect was “alleged to be a Muslim immigrant”. That post garnered over four million views. Within 24 hours, posts on X speculating that the perpetrator was a Muslim, a refugee, a foreign national, or had arrived by boat had amassed an estimated 27 million impressions.

The Southport tragedy occurred in the context of major policy and personnel changes at X. Since Elon Musk’s takeover in late 2022, X has laid off content moderation staff, fired trust and safety engineers, disbanded Twitter’s Trust and Safety advisory council, and reinstated numerous accounts previously banned for hate or harassment, including that of Stephen Yaxley-Lennon, a far-right activist better known as Tommy Robinson.