Facebook's lack of control on fake news, hate messages lighting the matchstick

While Facebook has bowed to governments in other countries after its platform helped fuel violence and riots, its lapses in content moderation have yet to raise alarm in India.

What do the Bangalore riots in August, the Delhi riots in February, the violent anti-CAA protests in Assam (2019), the Kasganj violence in Uttar Pradesh (2018), the Muzaffarnagar riots in Uttar Pradesh (2013 and 2017), and the Basirhat (Baduria) riots in West Bengal (2017) have in common?

All of them played out on social media, fanned by violent, hate-mongering and communally insensitive posts, particularly on Facebook. WhatsApp, the messaging platform owned by Facebook, added to the woes by not doing enough to stop the spread of hate content and the mobilisation of violent mobs during the riots.

Since social media companies have strict guidelines on what can be posted on their platforms, one would expect hate messages and objectionable posts to invite quick action.

But a recent Wall Street Journal report, quoting current and former Facebook employees, said the company went soft on Hindutva elements and members of the ruling BJP over controversial posts which, if curbed, could have helped contain the violence.

In the article titled ‘Facebook Hate-Speech Rules Collide With Indian Politics’, the publication reports how the company’s top executive opposed a proposal to ban controversial politicians and their posts and looked the other way.

The Journal reported that the top executive said punishing violations by BJP workers “would damage the company’s business prospects in the country”. In this regard, a case has been filed and FIR registered against Ankhi Das, Facebook’s Director, Public Policy, India, South and Central Asia, and two other company officials, for allegedly hurting religious sentiments.

The report has sparked a full-blown political debate on the role of Facebook in inciting riots and favouring the ruling party in India. A parliamentary committee on Information Technology headed by Congress leader and MP Shashi Tharoor has now decided to pull up the company.

Facebook has, however, refuted the allegations, saying the company enforced a strict policy on hate speech “without regard to anyone’s political position or party affiliation”. “While we know there is more to do, we’re making progress on enforcement and conduct regular audits of our process to ensure fairness and accuracy.”

However, these statements are mere hogwash. The instances of Facebook meddling in the politics of various countries, tilting the political discourse in favour of ruling parties, and failing to act on hate speech and messages are all too well known to the world, and India is no exception.

Bengaluru riots

In the case of the Bengaluru riots, a derogatory and religiously insensitive post by an aspiring politician (a BJP ideologue) angered a section of society, which gathered at the local police station to file a complaint. When the police delayed action for five to six hours, the mob turned violent and attacked the police, who in turn opened fire, killing four people.

At the root of it was a verbal duel on social media between one Naveen and Firoz Pasha, a member of the political outfit Social Democratic Party of India (SDPI), both of them Facebook friends, over controversial anti-Muslim posts made after Prime Minister Narendra Modi laid the foundation for the Ram temple in Ayodhya on August 5.

Police stand next to the charred remains of a vehicle vandalised during the violence in Bengaluru | Photo: PTI

Pasha had forwarded the controversial post to his fellow SDPI workers on WhatsApp to highlight the insensitive post so as to mobilise people to file a complaint against Naveen.

While Facebook took down the derogatory post and suspended Naveen’s account, the action came quite late, after the police had stepped in and the violence had already erupted.

Besides Facebook, similar unfiltered hate messaging could be seen in the comment sections of the YouTube channels of regional TV media, which ran continuous updates on the incident. While the news channels claimed they could not monitor thousands of comments, YouTube, owned by Google, took no action.

Often, action came only after the misinformation and hate messages had gone viral.

Soon after the Bengaluru riots, some jumped in to claim that Naveen’s post was a reaction to a derogatory post by a Muslim man, Adyar Basheer, targeting Hindu gods and Prime Minister Modi. However, that post was fake and had nothing to do with the incident. Although Adyar Basheer’s post was taken down, derogatory posts with morphed pictures of Hindu gods remain available on Facebook to this day.

The state government has now said that it will hold discussions with the tech giants on how best to filter hate content and avoid adding to the administration’s concerns.

Similar modus operandi

The Bengaluru riots were a repeat of the Basirhat violence in West Bengal in 2017. Those riots too began with a derogatory post against Muslims, police inaction, and the failure of social media platforms to delete the post.

Similarly, a fake video of two youths being killed in a Gulf country, circulated on social media, triggered the Muzaffarnagar riots in 2013.

In the case of the Kasganj riots, a Facebook user had warned the police six days before Republic Day in 2018 about a controversial post that could spark a Hindu-Muslim riot: a Hindu man had posted that Chief Minister Yogi Adityanath wanted Muslims to leave for Pakistan, and that he would sponsor their tickets. Both communities then spread venomous messages online, charging up the atmosphere and leading to full-fledged violence.

During the anti-CAA protests in Delhi, the gunman who opened fire at protesters outside Jamia Millia Islamia University on January 30 had identified himself as ‘Rambhakt Gopal’, posted all his activities on Facebook, and even gone live on the platform minutes before the incident, which triggered panic in the national capital.

Yet again, no lessons were learnt. And perhaps that’s precisely what suits Facebook.

Riots in other countries

In other countries too, Facebook’s inaction has cost lives and property. After the anti-Muslim riots in Sri Lanka in 2018, the country was forced to declare a state of emergency and block access to Facebook. After the investigation pointed fingers at the company, Facebook apologised to the government, acknowledging that a video falsely purporting to show a Muslim restaurateur admitting to mixing “sterilization pills” into the food of Sinhala-Buddhist men had gone viral and sparked violence.

“We deplore this misuse of our platform. We recognize, and apologize for, the very real human rights impacts that resulted,” Facebook had said in a statement then. It went on to hire content moderators with local language skills to implement its anti-hate speech strategy to keep abusive content from spreading.

Similarly, after the deadly mosque attack in New Zealand last year, carried out by a white supremacist who opened fire on worshippers and claimed 50 lives, Facebook said it was tightening its live video streaming rules.

A documentary by Al Jazeera shows how Facebook handles extreme content. Graphic images are often allowed to remain on the site, and pages with large followings and high engagement can be shielded, their content protected even when it contains hateful messages. The company justified this in the name of free speech.

“If you start censoring too much, then people lose interest in the platform. It’s all about making money at the end of the day,” a moderator explains to the reporter.

The problem is deep-rooted. While Facebook has bowed down in other countries, its content moderation failures have yet to raise alarm in India. Even though the company claims to have deleted millions of fake accounts, hate posts and insensitive content, the platform continues to do a lot of damage.

Between April and June this year, Facebook took down around 1.5 billion fake accounts (nearly half its user base), 1.4 billion spam posts and 22.5 million pieces of hate speech, among other content. And from October 2018 to March 2019, the company said it removed 3.39 billion fake accounts, twice the number detected and removed in the preceding six months.

But there’s a catch. The Facebook algorithms that detect such hate content are largely restricted to English. Regional-language hate speech bypasses the algorithms and remains on the platform. A Time magazine report indicates that of the 22 official languages of India, only four — Hindi, Bengali, Urdu and Tamil — are covered by Facebook’s algorithms.

The rise of social media in India also corresponds with the rise of the Narendra Modi-led Bharatiya Janata Party (BJP) across the country.

Following the Delhi Riots, The Federal had highlighted how BJP forged ahead to push its communal agenda through WhatsApp groups at the local (mandal) level.

Similarly, before the 2019 Lok Sabha elections, Facebook singled out the Congress, directly stating that it had removed pages linked to the party for coordinated inauthentic behaviour in India, while making no mention of accounts relating to other leading political parties, including the BJP.

The social media giant said in a statement that it took down 687 Facebook pages and accounts linked to individuals associated with the IT cell of the Indian National Congress (INC), and 15 pages related to the Indian IT firm Silver Touch, which works for the BJP.

BBC research shows that nationalism was the driving force behind fake news in India. The survey also showed that right-wing networks in India are much better organised than left-wing networks, pushing nationalistic fake news further.

A 2013 research paper by a PhD student at Northwestern University School of Law highlighted that the very qualities of social media that enable unprecedented political progress were being used to incite criminal civil disorder. The paper concluded that it was critical for governments to employ strategies to respond quickly to such threats without compromising open internet policies.

Facebook changing political discourse

The allegation that Facebook favours the Indian government and the right-wing BJP is nothing new. Globally, the company has come under criticism for enabling users to meddle with elections in the US, Europe and Africa. New research has found that YouTube, Tumblr, Instagram, Facebook and Twitter were all leveraged to spread propaganda.

Australia’s privacy regulator took Facebook to court over the Cambridge Analytica scandal. The Australian Information Commissioner alleged that Facebook had seriously infringed the privacy of more than 300,000 Australians by leaving personal data exposed, to be sold and used for political profiling.

In the UK, the company was fined £500,000, the maximum allowed under the law, for a “serious breach” of privacy laws. The British political consulting firm Cambridge Analytica had harvested the personal data of millions of Facebook users without their consent to run political campaigns in different countries.

It was a similar story with the 2016 US election: the US levied a record $5 billion fine on Facebook over the same issue.

Back home, Cambridge Analytica was in talks with the Congress before the 2014 election. But the fact remains that both the BJP and the Congress, besides the Janata Dal (United), were customers of Cambridge Analytica’s India partner, the Ghaziabad-based Ovleno Business Intelligence (OBI). The company’s website still carries these details.

Analysts feel there’s still a long way to go, and that social media companies need to be quick and effective in controlling fake and hate-filled posts.

“Social media companies need to work on different levels — identifying the source, framing clear community guidelines and the action that can be taken, supporting third-party fake news buster initiatives, letting users quickly validate an item online, and more,” Prasanto K Roy, a technology policy consultant, had said.