‘Despite staff alerts, Facebook ignored hate content in India during polls’
Senior exec insisted all was well in India, documents reveal; company admits its AI tools couldn’t identify hate speech in vernacular languages
Amid charges of allowing fake news and hate content on the social media platform, Facebook is now facing fresh accusations of ignoring red flags from its staff in this regard.
Between 2018 and 2020, Facebook staff pointed to several instances of “polarising nationalistic content”, “fake or inauthentic” messages and “misinformation”, but these were largely ignored, said an Indian Express report. The concerns were raised by Facebook employees with a mandate for content oversight, it added.
At an internal review meeting in 2019, the California-based company’s then Vice President Chris Cox brushed off the red flags, saying he found “comparatively low prevalence of problem content” on Facebook.
The social media giant came in for much flak over its alleged role in tacitly supporting the ruling party during the 2019 general election in India. It has now emerged that two internal reports were filed in early 2019, pointing to India-related ‘problem content’ on the platform. After the election results were out, in August that year, a third report acknowledged that Facebook’s artificial intelligence (AI) tools did not pick out problem content because they could not “identify vernacular languages”.
However, Cox is alleged to have said: “Survey tells us that people generally feel safe. Experts tell us that the country is relatively stable.” This was revealed in the minutes of the review meeting.
The latest findings came to light when documents were submitted under a disclosure procedure to the US markets regulator, the Securities and Exchange Commission (SEC). These, in turn, were passed on to the US Congress by the legal counsel of Frances Haugen, a former Facebook employee and whistleblower. A global media consortium reviewed the redacted versions of the documents submitted to the US Congress.
The India-related findings come less than a month after Facebook admitted that critical parts of its platform seemed to be hardwired for spreading misinformation and divisive content. This was also revealed by internal documents, which showed Facebook found it difficult to curb problem content in the developing world and was hesitant to censor right-wing news organisations in the US.
Haugen went public in early October with hundreds of internal documents she had collected against her former employer, which suggested the American internet major prioritised “profits over user safety and security”.