Imagine a world where the Nazis had lacked a powerful propaganda tool like the newspaper Der Stürmer. Could they have perpetrated large-scale genocide without media fuelling their hatred? Landmark international criminal cases, from the prosecution of Der Stürmer’s publisher at Nuremberg to the “Media Case” at the Rwanda Tribunal, have concluded that the media can play a significant role in enabling atrocity crimes.
Let’s fast forward to our modern world, where social media has become the primary medium for sharing, amplifying, and debating our collective voices, fears, opinions, and hopes. In 2023, social media platforms are at the centre of global communication. Just as in the past, we must ask ourselves whether social media companies should be held accountable for the hate speech proliferating on their platforms, particularly in the context of war and conflict.
This blog presents three compelling arguments on why social media companies should be held responsible for hate speech on their platforms, especially in the context of conflict and war, emphasizing the pressing need for change.
1. Lack of a Blueprint for Accountability in Times of Conflict
One significant challenge we face today is the absence of a clear, comprehensive legal framework dictating how social media companies should adapt their content policies during times of war and conflict.
Without such guidance, these companies have been able to profit from advertising while largely turning a blind eye to content’s potential to incite violence. Moderation, it seems, takes a back seat to profit. The consequences of this accountability gap are poignantly demonstrated by the Rohingya genocide, in which Meta played a pivotal role.
2. The Influence of Leadership Changes on Platform Behavior
The second reason to hold social media companies accountable for hate speech in conflict and war is to recognise the evolution and significance of media platforms in our society. Throughout history, words and information have been crucial in inciting harm before physical violence occurs.
Social media operates differently from traditional media: it transcends national borders, which offers a window of hope. Because the leadership of these platforms sits outside the confines of dictatorial states, their contribution to violence can be curtailed before it spirals into mass atrocity.
In the summer of 2023, the International Criminal Court (ICC) took a significant step by acknowledging its role in addressing social media’s significance in international crimes. Karim Khan KC, the Prosecutor of the ICC, articulated a new approach that recognises the impact of technology on contemporary international crimes.
Khan’s statement highlights the need to explore pathways for holding social media executives accountable under the Rome Statute. The ICC’s recognition that private-sector actors can incite, and profit from, war through social media is a significant step toward holding these companies and their leaders responsible.
An illustrative example of leadership-induced shifts in social media platforms is Twitter’s response to Russia’s war in Ukraine. Under former CEO Jack Dorsey, Twitter introduced policies to curb hate speech and extremist content, reducing the presence of divisive and inciting material relating to the war. In its policy statement, the platform acknowledged its responsibility toward society.
However, when Elon Musk took over as CEO, he initiated policy reversals and technological changes that eroded these efforts. Content quality on the platform, rebranded as X, deteriorated, fostering hate speech and extremism, and Musk’s actions against organisations that monitor hate speech worsened the situation. The platform became a hub for divisive content and saw a rise in state-led disinformation campaigns by Russia, China, and Iran; Musk’s policy changes amplified Russian propaganda during the conflict by undermining previous restrictions on disinformation. This example highlights the significant influence individual leaders within global businesses can have on international crime and global stability.
3. Setting a Crucial Precedent for the Tech Industry
The final point emphasises the importance of setting a precedent for the tech industry. Holding social media companies accountable would send a strong message to other tech firms that develop and design technologies with significant societal impacts.
As we look ahead, social media may be our last chance to set a moral and legal compass for private companies before even more transformative technologies, such as generative Artificial Intelligence (AI) and deepfakes, take hold.
The Urgent Necessity of Accountability
It is more critical than ever that we hold social media companies accountable for hate speech, specifically in the context of conflict and war. While legislation such as the UK Online Safety Bill and the EU Digital Services Act seeks to impose duty-of-care obligations on platforms, these duties must be expanded to cover preventing international crimes, such as genocide and war crimes, from being facilitated through social media platforms.
The absence of a blueprint for accountability, the influence of leadership changes, and the need to set a precedent for the tech industry all underscore the pressing necessity for change. Holding social media companies accountable for hate speech is not just a good idea for global society; it is a necessity. We need to act quickly to compel social media companies to act responsibly before this opportunity is lost.
- Read: From Capture to the Court: The Use of Digital Technology in Advancing the Pursuit of Justice for International Crimes
- Read: Why Artificial Intelligence is Already a Human Rights Issue
- Read: Mental Autonomy and Technology: A Cross-disciplinary Approach to Protecting Freedom of Thought and Opinion
- Read: Use of Artificial Intelligence by the Judiciary in the Face of COVID-19