Facebook dithered in curbing hate speech, anti-Muslim content in India: Report
A damning new report based on leaked company documents has once again laid bare Facebook's role in helping spread hate speech, propaganda and inflammatory posts in India, particularly against Muslims, and the social media giant's reluctance to address the problem, even as its own employees cast doubt on the company's motivations and interests.
The documents highlight Facebook's constant struggles in quashing abusive content on its platforms in India, the company's largest growth market, according to the AP report, which is based on disclosures made to the US Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of former Facebook employee turned whistleblower Frances Haugen. The redacted versions were obtained by a consortium of news organisations.
India's ruling Bharatiya Janata Party (BJP) has been credited with leveraging the platform to its advantage during elections. Last year, The Wall Street Journal raised questions over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the ruling party.
According to the documents, Facebook saw India as one of the most "at risk countries" in the world and identified both Hindi and Bengali as priorities for "automation on violating hostile speech". Yet it didn't have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.
"Hate speech against marginalised groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online," a company spokesperson said.
The documents show that in February 2019, ahead of that year's general election, a Facebook employee wanted to understand what a new user in India saw on their news feed if all they did was follow pages and groups recommended solely by the platform itself. The employee created a test user account and kept it live for three weeks, a period during which a suicide bombing in Kashmir killed 40 Indian soldiers.
In the note, titled "An Indian Test User's Descent into a Sea of Polarizing, Nationalistic Messages", the employee, whose name is redacted, said they were "shocked" by the content flooding the news feed, which "has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore".
"Following this test user's News Feed, I've seen more images of dead people in the past three weeks than I've seen in my entire life total," the researcher wrote.
"Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?" the researcher asked in their conclusion.
The memo, which was shared with other employees, exposed how Facebook's own algorithms or default settings played a part in spurring such content. The employee noted that there were clear "blind spots", particularly in "local language content".
The Facebook spokesperson said the test study "inspired deeper, more rigorous analysis" of its recommendation systems and "contributed to product changes to improve them".
"Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages," the spokesperson said.
The leaked documents also reveal the scale of anti-Muslim propaganda, especially by Hindu hardline groups, on both Facebook and WhatsApp, which Facebook owns. From the 2020 Delhi riots to "coronajihad", the two platforms have been used to spread such propaganda and spur attacks on India's largest minority group.
In August last year, The Wall Street Journal published a series of stories detailing how Facebook had internally debated whether to classify a Hindu hardline lawmaker close to the BJP as a "dangerous individual" after a series of anti-Muslim posts from his account.
The documents reveal that the leadership dithered on the decision, prompting concerns from some employees. One wrote that Facebook was only designating non-Hindu extremist organisations as "dangerous".