A recent Human Rights Watch (HRW) report accuses Meta of “systemic and global” censorship that particularly targets pro-Palestinian voices on Facebook and Instagram. The report documents more than a thousand instances in which Meta removed content or imposed suspensions or permanent bans on accounts supporting the Palestinian cause. This development is worrying: the censorship exacerbates the challenges Palestinians already face, impeding their ability to share information on social media and to shape global perspectives on Israel’s actions.
The crux of the issue lies in Meta’s Dangerous Organizations and Individuals (DOI) policy, which has been criticised for banning vaguely defined categories of speech on Facebook. As the report highlights, enforcement of this policy has suppressed posts supporting major Palestinian political movements, hampering discussion of Israel and Palestine.
A Context-Sensitive Approach: Shifting from National Lists to the ICCPR
While the report raises concerns about the policy’s ambiguous wording [page 2], a more substantial issue, not directly addressed, is that Facebook crafted the policy in alignment with the US-designated list of Foreign Terrorist Organizations (the “US list”). By adopting that list’s designations and restrictions wholesale, Facebook effectively implements US foreign-policy determinations on its platform. This compromises its status as a neutral platform and subjects it to the dynamics of international politics, particularly those driving the creation and maintenance of the US list, thereby raising concerns about its impartiality in facilitating free speech on the open internet.
To address these concerns, a potential solution is to ground the DOI policy in Article 19 of the International Covenant on Civil and Political Rights (ICCPR), thereby basing speech restrictions on a universal standard rather than on a specific country’s list. Article 19(3) permits restrictions on expression only where they are provided by law and necessary for a legitimate aim, such as respect for the rights of others or the protection of national security or public order. The Covenant’s inherent flexibility makes it well suited for adoption by Facebook: it would define the conditions under which restrictions may be imposed, in contrast to the current practice of restricting specific content categories derived from the US list.
For instance, Hamas is currently designated as a terrorist entity on the US list, and reliance on this designation has led, and may continue to lead, to the removal of posts related to the organisation. The HRW report, citing a report by Business for Social Responsibility, highlights the heightened risk that Palestinians in Gaza, which is governed by Hamas, inadvertently violate Meta’s policies through perceived associations or expressions of support [page 31]. Because Hamas is the governing entity, any content linked to it (a subject Palestinians naturally discuss) may be labelled inflammatory, as instances cited in the report demonstrate, even when the content itself is neutral in nature.
An ICCPR-based policy, on the other hand, could set conditions for restricting content, such as direct incitement to violence, hate speech, or genuine threats to public safety. This shift would replace a rigid classification tied to a national list with a more context-sensitive application of content moderation. In any event, Facebook cannot block neutral content related to Hamas, as it has done; the Oversight Board (Meta’s appeal body responsible for reviewing content decisions) confirmed this position in its Shared Al Jazeera post decision.
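To make the contrast concrete, the sketch below caricatures the two approaches in a few lines of Python. It is purely illustrative: the entity list, the decision rules, and the example post are invented here and do not reflect Meta’s actual enforcement systems.

```python
# Illustrative sketch only: hypothetical logic contrasting list-based and
# condition-based moderation. Nothing here reflects Meta's real systems.

US_LIST = {"hamas"}  # invented stand-in for the US Foreign Terrorist Organizations list

def doi_style_decision(post: str) -> str:
    """List-based rule: any reference to a designated entity is restricted,
    regardless of context, so neutral reporting is swept up as well."""
    if any(entity in post.lower() for entity in US_LIST):
        return "remove"
    return "keep"

def iccpr_style_decision(incites_violence: bool, is_hate_speech: bool,
                         threatens_public_safety: bool) -> str:
    """Condition-based rule modelled on Article 19(3): restriction turns on
    what the speech does, not on which entity it mentions."""
    if incites_violence or is_hate_speech or threatens_public_safety:
        return "remove"
    return "keep"

# A neutral news-style post that merely mentions the governing entity:
post = "Hamas announced a new curfew in Gaza City today."
print(doi_style_decision(post))                   # "remove": over-blocking
print(iccpr_style_decision(False, False, False))  # "keep": context-sensitive
```

The point of the sketch is structural: under the first rule the outcome is fixed by the designation alone, while under the second it depends on an assessment of the speech itself.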
Grounding the policy in the Covenant would not be an unprecedented approach; the Oversight Board already applies Article 19 in every decision it renders. Should the tech giant disregard the practices set forth by its own Oversight Board, which are the product of years of collaboration with civil society, such non-compliance would signify the company’s failure to fulfil the human rights obligations it committed to in its Corporate Human Rights Policy.
Facebook’s Departure from Neutrality
In its routine operations, Facebook has consistently protected inflammatory pro-Israel content while disproportionately limiting neutral Palestinian voices. This departure from neutrality raises concerns. While Facebook attributes these actions to faulty algorithms, Meta cannot evade its human rights responsibility toward its users. In Delfi AS v Estonia [156-159], the European Court of Human Rights held that where an automatic word-based filter fails to effectively prevent objectionable content, that failure can itself justify imposing liability on the service provider. Meta’s acknowledgment that it relied on a faulty algorithm therefore does not absolve it of the obligation to safeguard human rights.
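The fragility of such word-based filtering is easy to demonstrate. The toy filter below (the blocklist and example posts are invented for illustration) shows how matching on isolated words simultaneously misses harmful phrasing and blocks innocuous speech, which is why reliance on it offers no safe harbour.

```python
# Illustrative sketch only: a naive word-based filter of the kind discussed in
# Delfi AS v Estonia. The blocklist and posts are invented for illustration.

BLOCKLIST = {"attack", "destroy"}  # assumption: a simplistic word blocklist

def should_block(post: str) -> bool:
    """Block a post if it contains any blocklisted word as an exact token."""
    return bool(set(post.lower().split()) & BLOCKLIST)

# Under-blocking: an inflected form slips past the exact-token match.
print(should_block("The convoy was attacked near the border"))    # False

# Over-blocking: a plainly neutral sentence trips the filter.
print(should_block("we must destroy the stigma around illness"))  # True
```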
Conclusion
Kate Klonick labels Facebook and its counterparts “private, self-regulating entities”, evolving from mere platforms into governance systems with impactful content moderation rules. Given Facebook’s control over global discourse, history will have its eyes on Meta, whose decisions weigh on the delicate balance between freedom of expression and responsible governance. In this complex terrain, Meta must navigate with precision, recognising that its influence extends beyond algorithms and platforms to shape the interconnected fabric of our world.