Facebook has struggled to quash abusive content on its platforms in India

Facebook dithered in curbing hate speech, anti-Muslim content in India: Report

A damning new report based on leaked company documents has once again laid bare Facebook’s role in spreading hate speech, propaganda and inflammatory posts, particularly against Muslims, in India, and the social media giant’s reluctance to address the problem. That reluctance persisted even as Facebook’s own employees cast doubt on the company’s motivations and interests, the report says.

The documents highlight Facebook’s constant struggle to quash abusive content on its platforms in India, the company’s largest growth market, according to the AP report. The report draws on disclosures made to the US Securities and Exchange Commission and provided to the US Congress in redacted form by the legal counsel of former Facebook employee-turned-whistleblower Frances Haugen. The redacted versions were obtained by a consortium of news organisations.

The BJP has been credited with leveraging the platform to its advantage during elections. Last year, The Wall Street Journal raised questions over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the ruling party.

According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali as priorities for “automation on violating hostile speech”. Yet it did not have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.

“Hate speech against marginalised groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.

The documents show that ahead of India’s 2019 general election, a Facebook employee wanted to understand what a new user in India saw on their news feed if all they did was follow pages and groups recommended solely by the platform itself. The employee created a test user account in February 2019 and kept it live for three weeks, a period during which a suicide bombing in Kashmir killed 40 Indian soldiers.

In the note, titled ‘An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages’, the employee, whose name is redacted, said they were “shocked” by the content flooding the news feed, which “has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore”.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.

The memo, which was shared with other employees, exposed how Facebook’s own algorithms and default settings played a part in spurring such harmful content. The employee noted that there were clear “blind spots”, particularly in “local language content”.

The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them”.

“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.

The leaked documents also reveal the scale of anti-Muslim propaganda, especially by hardline Hindu groups, on both Facebook and WhatsApp, which Facebook owns. From the 2020 Delhi riots to ‘coronajihad’, the platforms have been used to spread anti-Muslim propaganda and spur attacks on India’s largest minority group.

In August last year, The Wall Street Journal published a series of stories detailing how Facebook had internally debated whether to classify a hardline Hindu lawmaker close to the BJP as a “dangerous individual” after a series of anti-Muslim posts from his account.

The documents reveal that the leadership dithered on the decision, prompting concerns among some employees. One wrote that Facebook was designating only non-Hindu extremist organisations as “dangerous”.
