Explained: Twitter's good, bad bots and Elon Musk's demand for 'proof'


Elon Musk’s $44 billion deal to buy Twitter is temporarily on hold as the world’s richest person has raised concerns over fake accounts and bots on the platform.

On May 13, the Tesla CEO tweeted, “Twitter deal temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users.” The next day, he posted on Twitter, “The bots are angry at being counted.”

Musk was responding to Twitter’s filing, in which the company stated that fewer than 5% of users on its platform are fake or spam accounts.

He claimed that fake or spam accounts make up more than 20% of Twitter’s users, and said that company CEO Parag Agrawal had not shown “proof”. “20% fake/spam accounts, while 4 times what Twitter claims, could be *much* higher. My offer was based on Twitter’s SEC filings being accurate. Yesterday, Twitter’s CEO publicly refused to show proof of <5%. This deal cannot move forward until he does,” Musk tweeted on May 17.

Last month, during an interview at the TED 2022 conference in Vancouver, British Columbia, Musk said his top priority is to eliminate bots from Twitter. “A top priority I would have is eliminating the spam and scam bots and the bot armies that are on Twitter. They (scam bots) make the product much worse. If I had a Dogecoin for every crypto scam I saw, we’d have 100 billion Dogecoin.”

What are bots?

Bots, in simple terms, are automated accounts. They are used to send bulk messages, retweet posts, run campaigns, and more.

“There’s a lot of understandable confusion and we need to do a better job of explaining ourselves. In sum, a bot is an automated account — nothing more or less,” according to Twitter.

It added, “Going back a few years, automated accounts were a problem for us. We focused on it, made the investments, and have seen significant gains in tackling them across all surfaces of Twitter. That doesn’t mean our work is done.”

“At its heart, a bot is a piece of code that mimics human interaction online,” Tamer Hassan, CEO of Human Security, which specialises in bot detection, was quoted as saying by CBS News.

What are good bots?

“Not all bots are bad. Some are delightful. Bots actually come in all shapes and sizes, and chances are, you’re already following one that you like. Like a COVID-19 bot that alerts you to vaccine availability in your area, an earthquake bot that alerts you to tremors in your region, or an art bot that delivers a colorful dose of delight on your timeline. How these bots are represented on Twitter is almost as important as what they do for their followers,” explains Twitter.
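
The “good bots” Twitter describes are simple by design: they watch a data feed and post formatted alerts. As an illustration only (the `Quake` record, feed, and function names here are hypothetical, and a real bot would post through the Twitter API rather than return strings), an earthquake-alert bot of the kind quoted above could be sketched as:

```python
from dataclasses import dataclass

# Hypothetical event record; a real alert bot would pull these from a
# public feed such as a seismic-survey API.
@dataclass
class Quake:
    magnitude: float
    region: str
    depth_km: float

def format_alert(q: Quake) -> str:
    """Turn one quake event into the short alert text a bot would tweet."""
    return (f"Earthquake alert: M{q.magnitude:.1f} near {q.region}, "
            f"depth {q.depth_km:.0f} km. #earthquake")

def run_once(events: list[Quake]) -> list[str]:
    # The bot's whole "loop": filter new events, format, (then post).
    return [format_alert(q) for q in events if q.magnitude >= 3.0]
```

The magnitude threshold is the only judgment the bot makes; everything else is mechanical formatting, which is why such accounts are easy to run and, per Twitter’s labels, easy to declare as automated.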

Oliver Stewart, the lead researcher on Twitter’s Identity and Profiles team, further elaborates, “There are many bots on Twitter that do good things and that are helpful to people. We wanted to understand more about what those look like so we could help people identify them and feel more comfortable in their understanding of the space they’re in.”

Stewart’s team found that people considered content more trustworthy when they knew more about who was sharing it — starting with whether that account is human or automated.

To help address the issue of bots, Twitter has rolled out new labels that identify bots with an “automated” designation in their profile, an icon of a robot, and a link to the Twitter handle of the person who created the bot.

“Not only are we just labeling these bots, we’re also saying: this is the owner, and this is why they’re here,” said Stewart. “Based on the preliminary research that we have, we hypothesise that that’s going to create an environment where you can trust those bots a lot more.”

Twitter said it is not wrong to have automated accounts. “It’s not inherently wrong to have an automated account on Twitter; obviously automated accounts don’t have to be terrible. There was a vaccine bot that was really popular in New York,” said Dante Clemons, the senior product manager tasked with creating and testing these labels.

The labels themselves don’t mark bots as good or bad; they simply signal that an account is automated. “If it’s compliant with Twitter’s rules, we’re OK with it being on the platform. For the ones that are noncompliant, we’re already actively doing the work to remove those off Twitter,” she said.

What is prohibited on Twitter?

Automation can also be a powerful tool in customer service interactions, where a conversational bot can help find information about orders or travel reservations automatically. This is incredibly useful and efficient for small businesses, especially at a time of social distancing, said Twitter.

The following, however, are prohibited.

  • Malicious use of automation to undermine and disrupt the public conversation, like trying to get something to trend
  • Artificial amplification of conversations on Twitter, including through creating multiple or overlapping accounts
  • Generating, soliciting, or purchasing fake engagements
  • Engaging in bulk or aggressive tweeting, engaging, or following
  • Using hashtags in a spammy way, including using unrelated hashtags in a tweet (aka “hashtag cramming”)
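
Some of these prohibitions lend themselves to crude automated checks. The toy heuristics below are not Twitter’s enforcement logic (all thresholds and function names are invented for illustration); they just show how “hashtag cramming” and “bulk tweeting” can be expressed as measurable signals:

```python
def hashtag_count(tweet: str) -> int:
    # Count words that look like hashtags (a "#" followed by text).
    return sum(1 for word in tweet.split() if word.startswith("#") and len(word) > 1)

def looks_like_hashtag_cramming(tweet: str, max_hashtags: int = 5) -> bool:
    # Crude proxy: a tweet stuffed with hashtags is one spam signal.
    return hashtag_count(tweet) > max_hashtags

def looks_like_bulk_tweeting(timestamps: list[int],
                             window_s: int = 60,
                             max_in_window: int = 20) -> bool:
    # Sliding-window check over posting times (in seconds): too many
    # tweets inside any one window suggests automated bulk posting.
    ts = sorted(timestamps)
    for i in range(len(ts)):
        j = i
        while j < len(ts) and ts[j] - ts[i] <= window_s:
            j += 1
        if j - i > max_in_window:
            return True
    return False
```

Real systems combine many such signals with account history and private data, as Twitter’s own description of its detection work (below) makes clear.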

How does Twitter detect bots/spam/fake accounts?

Twitter publishes data on removals of fake accounts every six months in the Twitter Transparency Report.

The social media giant said it detects roughly 25 million accounts per month suspected of being automated or spam accounts. In the second half of 2020, it said, it deployed 143 million anti-spam challenges to accounts, which helped bring spam reports — those coming from people who flag Tweets as spam — down by about 18% from the first half of the year.

Twitter has an entire enforcement team dedicated to tracking down these accounts and banning them. But it’s not as simple as blanket-banning all automated accounts.

“We use ‘anti-spam challenges’ to confirm whether a human is in control of an account we suspect is engaging in platform manipulation. For example, we may require the account holder to verify a phone number or email address, or complete a reCAPTCHA test. The ‘spam reports’ reflected in these numbers are an aggregate of reports from people who use Twitter after receiving an interaction (e.g., a follow, mention, or Direct Message) from a suspected spam account,” Twitter said.
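
Twitter’s description amounts to a gate: suspected accounts are escalated from no challenge, to a captcha, to phone or email verification. A minimal sketch of such a policy, assuming a hypothetical suspicion score in [0, 1] (the thresholds and names are invented, not Twitter’s):

```python
from enum import Enum, auto

class Challenge(Enum):
    NONE = auto()
    CAPTCHA = auto()
    PHONE_OR_EMAIL = auto()

def pick_challenge(suspicion: float, has_verified_contact: bool) -> Challenge:
    """Toy policy: low suspicion passes freely, moderate suspicion gets a
    captcha, and high suspicion without a verified phone number or email
    must complete contact verification."""
    if suspicion < 0.3:
        return Challenge.NONE
    if suspicion < 0.7 or has_verified_contact:
        return Challenge.CAPTCHA
    return Challenge.PHONE_OR_EMAIL
```

The design goal Agrawal describes later applies here too: the gate must be strict enough to stop automation yet loose enough that real people rarely hit it.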

Twitter’s ‘4 truths’ about bots

The company posted “four truths” about bots.

“First truth: Don’t assume an account with a peculiar name must be a bot. Second truth: Some people just tweet a whole lot (like hundreds of Tweets a day), but it doesn’t mean they are bots. Third truth: Real people have opinions that you will disagree with — it doesn’t mean they are part of a grand manipulation scheme. Fourth truth: Seeing doesn’t always lead to believing,” it stated.

What Twitter CEO said on bots

Agrawal posted a series of tweets to explain bots and how they are eliminated.

“First, let me state the obvious: spam harms the experience for real people on Twitter, and therefore can harm our business. As such, we are strongly incentivized to detect and remove as much spam as we possibly can, every single day. Anyone who suggests otherwise is just wrong.

“Next, spam isn’t just ‘binary’ (human / not human). The most advanced spam campaigns use combinations of coordinated humans + automation. They also compromise real accounts, and then use them to advance their campaign. So – they are sophisticated and hard to catch.

“Some final context: fighting spam is incredibly ‘dynamic’. The adversaries, their goals, and tactics evolve constantly – often in response to our work! You can’t build a set of rules to detect spam today, and hope they will still work tomorrow. They will not,” he wrote.

He said that the platform suspends half a million spam accounts every day. “We suspend over half a million spam accounts every day, usually before any of you even see them on Twitter. We also lock millions of accounts each week that we suspect may be spam – if they can’t pass human verification challenges (captchas, phone verification, etc).”

Explaining the challenges in identifying bad bots, he wrote, “The hard challenge is that many accounts which look fake superficially – are actually real people. And some of the spam accounts which are actually the most dangerous – and cause the most harm to our users – can look totally legitimate on the surface.

“Our team updates our systems and rules constantly to remove as much spam as possible, without inadvertently suspending real people or adding unnecessary friction for real people when they use Twitter: none of us want to solve a captcha every time we use Twitter.

“Now, we know we aren’t perfect at catching spam. And so this is why, after all the spam removal I talked about above, we know some still slips through. We measure this internally. And every quarter, we have estimated that <5% of reported mDAU (monetizable daily active users) for the quarter are spam accounts.

“Our estimate is based on multiple human reviews (in replicate) of thousands of accounts, that are sampled at random, consistently over time, from ‘accounts we count as mDAUs’. We do this every quarter, and we have been doing this for many years.”
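
The procedure Agrawal describes — human review of thousands of randomly sampled mDAU accounts each quarter — is a standard proportion estimate. With made-up numbers (the 9,000/360 figures below are purely illustrative, not Twitter’s), the point estimate and a 95% normal-approximation margin of error look like this:

```python
import math

def spam_estimate(n_reviewed: int, n_spam: int, z: float = 1.96) -> tuple[float, float]:
    """Point estimate and normal-approximation margin of error for the
    spam share, from a random sample of human-reviewed accounts."""
    p = n_spam / n_reviewed
    margin = z * math.sqrt(p * (1 - p) / n_reviewed)
    return p, margin

# Illustrative numbers only: reviewing 9,000 accounts and finding 360
# spam gives a 4.0% estimate with a margin of about 0.4 percentage
# points, so the upper end of the interval stays below 5%.
p, m = spam_estimate(9000, 360)
```

This is presumably what Agrawal means by error margins giving “confidence in our public statements”: with samples of that size, the interval around the estimate is narrow relative to the 5% threshold.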

According to Agrawal, they use both public and private data to catch spam accounts.

“Each human review is based on Twitter rules that define spam and platform manipulation, and uses both public and private data (eg, IP address, phone number, geolocation, client/browser signatures, what the account does when it’s active…) to make a determination on each account.

“The use of private data is particularly important to avoid misclassifying users who are actually real. FirstnameBunchOfNumbers with no profile pic and odd tweets might seem like a bot or spam to you, but behind the scenes we often see multiple indicators that it’s a real person.

“Our actual internal estimates for the last four quarters were all well under 5% – based on the methodology outlined above. The error margins on our estimates give us confidence in our public statements each quarter,” he said.

Defending Twitter’s estimates of less than 5% of fake accounts, Agrawal said they “don’t believe” that the exercise to identify the accounts can be performed externally. “Unfortunately, we don’t believe that this specific estimation can be performed externally, given the critical need to use both public and private information (which we can’t share). Externally, it’s not even possible to know which accounts are counted as mDAUs on any given day.”
