Facebook has struggled to quash abusive content on its platforms in India

Beheadings, anti-Pak posts: What a test user of Facebook in India unearthed


On February 4, 2019, a Facebook researcher set up a test user account to experience the social media platform as a person living in Kerala would, and to see how the site’s algorithms work in the company’s biggest overseas market. She was in for a shock.

After joining groups, watching videos and following pages suggested by Facebook, the researcher found her feed, within a span of 21 days, strewn with hate speech, fake news and extremely violent content. This included photos of beheadings, morphed images of Indian air strikes against Pakistan and clips of violence. The researcher was surprised to see a fake news item claiming “300 terrorists died in a bombing in Pakistan” in a group named ‘Things that make you laugh’, which she had joined on Facebook’s recommendation.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher commented in her report, calling the experience an “integrity nightmare”.

The findings were documented in an internal Facebook report from February 2019, titled ‘An Indian test user’s descent into a sea of polarizing, nationalistic messages’. The report is among a trove of documents collected by former Facebook employee Frances Haugen, who alleged that the company claimed to be fighting hate, violence and misinformation while actually promoting them. Haugen also testified to these allegations before a US Senate sub-committee.


These internal documents, called ‘The Facebook Papers’, have been obtained by a consortium of news organisations including The New York Times and the Associated Press.

The documents include scores of studies and memos written by Facebook employees grappling with the platform’s effects in India, hampered by an utter lack of local expertise and resources.

According to the New York Times, the papers revealed how bots and fake accounts connected to India’s ruling party and Opposition members “were wreaking havoc on national elections.”

“They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on ‘meaningful social interactions’ or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic,” NYT said.

India, with 340 million users, is Facebook’s largest market. Despite this, the social media company did not have enough resources, or expertise in India’s 22 official languages, to check the problems of fake news and “anti-Muslim posts” that its platform had helped spread in the country.

This is partly because 87 per cent of Facebook’s global budget for classifying misinformation is set aside for the United States alone, leaving just 13 per cent for the rest of the world. This is despite the fact that North American users constitute only 10 per cent of Facebook’s daily active users.

Facebook has outsourced oversight of content on its platform in India to contractors from companies like Accenture – a tie-up that is clearly not working, if the internal reports are to be believed.

“We’ve invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” Facebook spokesperson Andy Stone has claimed.

“As a result we’ve reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.05 per cent. Hate speech against marginalised groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” Stone added.

What the test user unearthed

The researcher, who created the user profile on February 4, 2019, during a research team’s visit to India, initially called Facebook a “pretty empty place” and noted in her report that the quality of the platform’s Watch and Live tabs was “not ideal”.

The researcher said the video service is usually unable to gauge what the user wants and instead “seems to recommend a bunch of softcore porn.”

Around February 11, the user started exploring popular posts and pages recommended by Facebook, including the official pages of the BJP and BBC News India. Three days later, on the day 40 jawans were killed in a terror attack in Kashmir’s Pulwama, the researcher found the feed inundated with anti-Pakistan hate speech, images of beheadings and a graphic showing preparations to burn a group of Pakistanis.

The researcher’s report said that nationalist messages, chest-thumping over India’s purported air strikes in Pakistan, morphed photos of bombings and a doctored photo of a newly-wed soldier killed in the attack were doing the rounds on the feed.

Many of the groups the researcher had joined had tens of thousands of members, and most of the hateful content was posted in Hindi, one of the country’s official languages. The groups did not necessarily post the content their titles suggested.

“After 12 days, 12 planes attacked Pakistan,” one post claimed. Another, titled ‘Hot News’ and posted in the group ‘Things that make you laugh’, reported the purported death of 300 terrorists in a bomb blast in Pakistan.


“These groups become perfect distribution channels when they want to promote bad content within short period of time,” NYT quoted the researcher as saying in the report.

“The admins of these groups tended to take a lax position/hands-off attitude towards ensuring that the content shared in the group was on a particular topic of focus, and allowed users to freely post whatever they found interesting/wanted to share,” the report said.

According to a memo by the research team after the trip, one of the requests to Facebook by Indian users was to “take action on types of misinfo that are connected to real-world harm, specifically politics and religious group tension.”

Another internal report, titled ‘Indian Election Case Study’, said that by April 2019, ahead of the General Elections, Facebook had taken a series of steps to thwart the flow of misinformation and hate speech in India. These, according to the report, included adding more fact-checking partners, increasing the amount of misinformation it removed, and drawing up a list of politicians who were exempted from fact-checking.

However, another report – ‘Adversarial Harmful Networks: India Case Study’, from March 2021 – found that several of the problems seen during the 2019 polls still persisted. It described a deluge of posts that spoke disparagingly of Muslims, some even comparing them to “pigs” and “dogs”, along with misinformation claiming that the Quran teaches men to rape their female family members.

The report said much of the material doing the rounds on Facebook promoted the RSS, the fountainhead of the ruling BJP. Facebook was aware that it needed to improve its ‘classifiers’ – the automated systems that detect and remove posts containing violent and inciting language – to curb the proliferation of such harmful posts on its platform. The company, however, was hesitant to designate the RSS as a dangerous organisation because of “political sensitivities” that could affect its operations in India, the NYT said, quoting the report.

The research shows the utter lack of control Facebook has over its platform in India, even though the company invested nearly $6 billion in 2020 in a partnership with Mukesh Ambani’s Reliance Industries.

Facebook, however, has said that the “exploratory effort” of test accounts like the one mentioned above led to a “more rigorous analysis of our recommendation systems, and contributed to product changes to improve them.”

“Our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.

The company claims to have removed groups that repeatedly share misinformation, ranked all content from such groups lower in the News Feed, and limited how widely their notifications reach users.
