
Unmasking of Facebook: letting problematic content slip past its radar


Facebook [now Meta] may claim to be highly proactive against hate speech, deploying sophisticated technology to identify and pull down such content. But news reports emerging from a whistleblower’s documents tell a totally different story: that Mark Zuckerberg’s Facebook sidelined, ignored or was slow to react to deep-seated problems that were regularly red-flagged by the rank and file in the organisation.

For example, a recent report revealed that Facebook’s artificial intelligence (AI) tools were simply unable to detect hate speech in certain languages, such as Assamese, and that Facebook staffers and external researchers had raised this problem in the run-up to the 2021 elections in the state. But Facebook slow-walked the process and got around to developing a “demotion” process for key hate words in Assamese only in 2020.

These news reports stem from a cache of internal documents disclosed by the legal counsel of former Facebook employee and whistleblower Frances Haugen to the United States Securities and Exchange Commission (SEC) and provided to Congress in redacted form.

The redacted versions received by Congress are being systematically reviewed by a consortium of global news organisations, including The Wire and The Indian Express.

These documents show that Facebook, despite its regularly avowed good intentions, appears to have dragged its feet in addressing real harms the social network has magnified or even created. For example, one report pointed out that failing to contain borderline content could be serious, since such content “gets more interaction and higher distribution than benign” posts. An April 15, 2019 memo stated the need for “intervention to counteract perverse incentive”.


Already accused of “choosing profits over people”, Facebook was asked how it manages the conflict of having to pull down borderline content that draws high engagement. A Meta Platforms spokesperson said that the distribution of borderline content is reduced through trained AI systems. (Facebook has been rebranded as Meta Platforms Inc.)

Quoting the spokesperson, an Indian Express report said the company’s “proactive detection tech” identifies likely hate speech or violence and incitement, and significantly reduces the distribution of this kind of content. To avoid the risk of problematic content going viral and inciting violence ahead of an election, the company reduces the content’s distribution until it is determined to be violative of its policies, the spokesperson said.

Meta also claimed to have increased the number of hours it spends on reducing inflammatory speech on its social media platforms.

Communal conflict in India

Another set of internal documents, titled “Communal Conflict in India”, showed that while Hindi and Bengali hate speech content shot up in March 2020, Facebook’s action rate on (Bengali) content slumped, having “fallen off almost completely since March (2020) driven by lower action rate amid an increase in reporting rates”.

At the same time, however, action taken on English-language posts rose sharply. The internal report also showed that since June 2019, actioned hate content per daily active user in India had increased fourfold, particularly for English content. Facebook had said last year that the company proactively detects hate speech, and that nearly 97 per cent of the hate speech content it removes is caught before anyone reports it.

But news reports said the company shows low action rates on reported content. According to the spokesperson, quoted by IE, this is largely because the company finds that the reported content does not violate any of its policies, or because the reporter has not provided enough information to locate the content they are attempting to report.


Other reasons, the spokesperson added, are that the company’s policies do not allow it to take action on the reported content, or that the reporter is “writing to us regarding a dispute between themselves and a third party which Facebook is not in a position to arbitrate”.

Facebook in denial mode

The recent revelations clearly show that despite red flags concerning its operations in India raised internally between 2018 and 2020, Facebook was slow to react, even though the company was told of everything from a “constant barrage of polarising nationalistic content” and “fake or inauthentic” messaging to “misinformation” and content “denigrating” minority communities.

Reports said Facebook had, over the last two years, red-flagged an increase in “anti-minority” and “anti-Muslim” rhetoric as “a substantial component” of the 2019 Lok Sabha election campaign.

But according to an AP report, despite explicit alerts by staffers at an internal review meeting in 2019, Chris Cox, then Vice-President of Facebook, found a “comparatively low prevalence of problem content (hate speech, etc)” on the platform.

When the company discovered that a group of news organisations was working on reports based on these internal documents, Facebook tweeted from its public relations “newsroom” account earlier this month.

“A curated selection out of millions of documents at Facebook can in no way be used to draw fair conclusions about us”, said the tweet.

In a prepared statement, Facebook said: “At the heart of these stories is a premise which is false. Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie.”

“The truth is we’ve invested $13 billion and have over 40,000 people to do one job: keep people safe on Facebook.” For Mark Zuckerberg, who has so far been unable to address stagnating user growth and shrinking engagement for Facebook, problems only continue to mount.
