Researchers suggested that social media platform designers create "building codes" informed by scientific evidence to minimise online harm and safeguard user well-being | Representative photo

Facebook's architecture sabotaged its misinformation policies: Scientists

Despite Facebook's algorithm adjustments and content removal efforts to combat vaccine misinformation, the platform's architecture continued to pose challenges, said researchers.


Facebook’s core design sabotaged the social media giant’s efforts to combat misinformation running rife on the platform, scientists analysing its misinformation policies said.

The platform’s architecture pushed back even when Facebook tweaked its algorithms and removed content and accounts to combat vaccine misinformation, the researchers at the US George Washington University found.

Engagement with anti-vaccine content did not decline despite Facebook's significant efforts to remove such content during the COVID-19 pandemic, according to the study published in the journal Science Advances.

The scientists said these consequences resulted from what the platform is designed to do: enabling community members to connect over common interests, which include both pro- and anti-vaccine persuasions.

“(Facebook) is designed to allow motivated people to build communities and easily exchange information around any topic,” said David Broniatowski, lead study author and an associate professor of engineering management and systems engineering. “Individuals highly motivated to find and share anti-vaccine content are just using the system the way it’s designed to be used, which makes it hard to balance those behaviours against public health or other public safety concerns,” said Broniatowski.

In the anti-vaccine content that remained on the platform, links to off-platform, low-credibility sites and "alternative" social media platforms grew in number, the researchers said. This remaining content also became more misleading, containing sensationalist false claims about vaccine side effects that were often too new to be fact-checked in real time, they found.

Further, anti-vaccine content producers were found to be more efficient in leveraging the platform than pro-vaccine content producers as they effectively coordinated content delivery across pages, groups, and users’ news feeds, even though both groups had large page networks.

"Collateral damage", in the form of some pro-vaccine content being removed under the platform's policies, and the increasingly politicised and polarised vaccine-related discourse, could also have contributed, the study said.

Broniatowski pointed out that the discussion about social media platforms and artificial intelligence governance largely revolves around either content or algorithms. “To effectively tackle misinformation and other online harms, we need to move beyond content and algorithms to also focus on design and architecture,” Broniatowski said.

“Removing content or changing algorithms can be ineffective if it doesn't change what the platform is designed to do. You have to change the architecture if you want to create that balance (anti-vaccine behaviours against public health concerns),” Broniatowski added.

Social media platform designers could develop a set of “building codes” for their platforms informed by scientific evidence to reduce online harms and ensure users’ protection, the researchers said.

(With agency inputs)
