Instagram algorithm found promoting paedophile networks, content: Report
Popular image-sharing platform Instagram’s algorithms allegedly recommend child sexual abuse content to users and promote networks of paedophiles who create and sell such content, according to a Wall Street Journal report.
Twitter owner Elon Musk shared a screenshot of the WSJ post on Wednesday (June 7) with the caption, “extremely concerning”. Instagram’s parent company Meta has reportedly told the WSJ that it was “reviewing its internal processes”.
The joint investigation by WSJ and researchers at Stanford University and the University of Massachusetts Amherst reportedly found that these paedophile accounts are promoted using explicit hashtags like #pedowhore, #preteensex, and #pedobait.
The hashtags reportedly take users to accounts that offer to sell paedophilic content, including videos of children harming themselves or committing acts of bestiality, the researchers said.
According to the report, some accounts allow buyers to “commission specific acts” or arrange “meet ups”.
According to the WSJ report, the platform “helps connect and promote a vast network of accounts openly devoted to the commission and purchase of underage-sex content”.
“The Meta unit’s systems for fostering communities have guided users to child-sex content,” the report added.
How it works
The researchers reportedly set up a test account and viewed content shared by these networks; Instagram immediately began recommending similar accounts to follow, they said.
Canadian social media influencer and activist Sarah Adams described to the WSJ how Instagram’s recommendation algorithm works, drawing on her own experience.
Adams said she learned of an Instagram account called “Incest Toddlers” in February, when one of her followers flagged it. The account hosted an array of “pro-incest memes”. Adams did nothing more than report the page to Instagram.
However, that brief interaction was reportedly enough for Instagram’s algorithm to start recommending the “Incest Toddlers” account to visitors of her own page.
Media reports quoted Alex Stamos, head of Stanford’s Internet Observatory and former chief security officer at Meta, as saying that the social media giant should be alarmed that merely three academics with limited access could find such a huge network.
“I hope the company reinvests in human investigators,” Stamos was quoted as saying.
The Stanford investigators also ran the same search on Twitter, finding “128 accounts offering to sell child-sex-abuse material” — reportedly less than a third of the number they found on Instagram.
The WSJ report also stated that many Instagram users received a pop-up notification warning that certain searches might produce results containing “images of child sexual abuse”.
The pop-up reportedly gave users the choice to either “get resources” on the topic or “see results anyway”. The report said Meta has since disabled the “see results anyway” option, but it is not clear why it was ever offered in the first place.
Steps by Meta
Media reports said Meta has since restricted the use of “thousands of additional search terms and hashtags on Instagram” and has “set up an internal task force to investigate these claims and immediately address them”.
The company said it took down 27 networks between 2020 and 2022 for spreading abusive content on its platforms, disabled more than 490,000 accounts in January alone for violating its child safety policies, and blocked more than 29,000 devices between May 27 and June 2 for policy violations.
Meta also said it hires law enforcement specialists and collaborates with child safety experts to ensure its screening methods are up to date.
However, in April, a group of law enforcement agencies, including the FBI and Interpol, had reportedly warned that Meta’s plans to expand end-to-end encryption on its platforms could “blindfold” it from detecting child sexual abuse content.
(With agency inputs)