Human rights group accuses Meta of restricting pro-Palestine speech
By Clare Duffy, CNN
New York (CNN) — Human Rights Watch on Thursday accused Meta of repeatedly removing or restricting content supporting Palestine or Palestinian human rights even when it did not violate the social media giant’s rules.
The report raises concerns about examples of “peaceful” pro-Palestine content that it said the company removed even though the content did not violate any policies. The group also calls on Meta to change, or share more information about, several policies and moderation decisions, including its handling of government takedown requests and when it makes “newsworthiness” exceptions to leave up content that violates its policies.
“Meta should permit protected expression, including about human rights abuses and political movements, on its platforms,” Human Rights Watch said in the report, and urged Meta to consistently enforce its policies for all users.
While Human Rights Watch broadly characterized the content that was removed, the group offered limited specifics on the hundreds of posts it said were removed or restricted, and it did not provide screenshots.
The group said it identified more than 1,000 pieces of pro-Palestine content that it claims did not violate Meta’s rules but that were restricted or removed during October and November 2023.
That included posts with images of injured or dead bodies in Gaza hospitals and comments saying, “Free Palestine” and “Stop the Genocide.” In another instance, the group said a user tried to post a comment that consisted of nothing more than a series of Palestinian flag emojis and was met with a warning from Instagram that her comment “may be hurtful to others.”
The report does not claim that pro-Palestine supporters faced more over-enforcement than other groups.
Meta said in a statement that the Human Rights Watch report doesn’t reflect its efforts to protect speech related to the Israel-Hamas conflict.
“This report ignores the realities of enforcing our policies globally during a fast-moving, highly polarized and intense conflict, which has led to an increase in content being reported to us,” Meta said in the statement provided by spokesperson Ben Walters.
“Our policies are designed to give everyone a voice while at the same time keeping our platforms safe,” the statement said. “We readily acknowledge we make errors that can be frustrating for people, but the implication that we deliberately and systemically suppress a particular voice is false. Claiming that 1,000 examples — out of the enormous amount of content posted about the conflict — are proof of ‘systemic censorship’ may make for a good headline, but that doesn’t make the claim any less misleading.”
Ongoing scrutiny of moderation around the war
Thursday’s report is just the latest scrutiny that Meta and other social media companies have faced over their handling of content related to the Israel-Hamas war.
It follows a decision by Meta’s own Oversight Board earlier this week to overturn the company’s original decision to remove two videos related to the conflict that the board said showed important information about human suffering on both sides of the issue.
Those critiques came after Meta and other platforms came under fire earlier in the conflict for failing to remove potentially harmful or misleading content, highlighting the balancing act the company must strike: removing enough content quickly to prevent potential harms, without over-enforcing its rules in a way that infringes on free expression.
Adding to the challenge is the contentious nature of the conflict, where there isn’t always agreement on what constitutes harm. For example, the Thursday report from Human Rights Watch criticized Meta’s removal of some comments and posts with the slogan “from the river to the sea, Palestine will be free” — a phrase that some see as a call for a Palestinian state and coexistence between Israelis and Palestinians, but others view as antisemitic, anti-Israel and potentially violent.
Human Rights Watch also took issue with Meta’s inclusion of Hamas in its Dangerous Organizations and Individuals policy, based on the United States government’s designation of the group as a terrorist organization, saying that the company should instead rely on “international human rights standards.”
“The US list includes political movements that have armed wings, such as Hamas and the Popular Front for the Liberation of Palestine,” the report states.
Hamas has been responsible for significant, bloody violence over the years against Israelis and its opponents in Gaza. Its attack on October 7 claimed the lives of more than 1,200 people. Hamas is designated as a terrorist organization by the United States, the European Union and Israel.
The Thursday report says of Meta’s Dangerous Organizations and Individuals policy that, “understandably, the policy prohibits incitement to violence. However, it also contains sweeping bans on vague categories of speech, such as ‘praise’ and ‘support’ of ‘dangerous organizations,’ which it relies heavily on the United States government’s designated lists of terrorist organizations to define.”
“The ways in which Meta enforces this policy effectively bans many posts that endorse major Palestinian political movements and quells the discussion around Israel and Palestine,” the report said, adding that Meta should publish the full list of groups and people covered by the policy.
Meta in August updated the Dangerous Organizations and Individuals policy to allow for references to those groups and people in the context of social and political discourse.
The company also plans to roll out a revised version of the policy in the first half of next year, following a review of its definition of “praise” of dangerous organizations, it said in September.
Human Rights Watch conducted its review by soliciting emails from Facebook and Instagram users with screenshots and other evidence of their content being removed or restricted.
Reports came from more than 60 countries in a number of languages, primarily English, and most “carried a diversity of messages while sharing a singular characteristic: the peaceful expression in support of Palestine or Palestinians,” according to the group. Human Rights Watch said it excluded cases where it could not substantiate the claims of unjustified removal or where the content could be “considered incitement to violence, discrimination, or hostility.”
Among the concerns raised in the report is a critique of Meta’s “heavy reliance” on automation to moderate content. The group said it received reports, for example, of users’ pro-Palestine comments being removed automatically and marked as “spam.”
Meta’s Oversight Board this week also called out the company’s use of automated systems to moderate content — it found that two videos related to the Israel-Hamas war, which it said showed important information about human suffering on both sides of the conflict, had initially been removed unnecessarily by those automated tools.
Meta restored the videos after the board decided to review them, prior to this week’s decision.
“Both expression and safety are important to us and the people who use our services,” the company said in a blog post earlier this week.
Meta also told CNN in October that it had established “a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation,” and that it was coordinating with third-party fact checkers in the region.