Government talking points are being magnified through thousands of accounts during politically fraught times. Silencing people on Twitter is only part of a large-scale effort by governments to stop human rights activists and opponents of the state from being heard.
On June 6, 2017, the world woke up to another political crisis in the Middle East: Saudi Arabia, the United Arab Emirates, Bahrain and Egypt had cut diplomatic ties with Qatar and imposed a land, sea and air blockade on the country.
They cited several reasons for breaking ties, but the main accusations the blockading countries laid out centred on allegations that Qatar supported “terrorism” and was working on “destabilising the region”, accusations Doha has consistently denied.
What many did not know is that some of the groundwork for the blockade had already been laid on social media platforms like Twitter.
An online propaganda battle, which started in the months before the GCC Crisis, continues to this day, Al Jazeera has found.
Bots predated the blockade
Using a combination of data collection and language processing, Al Jazeera has analysed more than 2.3 million tweets from almost 2,400 accounts, sent between June 2017 and October 2018.
The analysis found that bots – Twitter accounts that are either fully or partially automated to amplify certain messages, hashtags or opinions – play a significant role in the online conversation about the blockade.
“In the two months before the Gulf Crisis started, a network of Twitter accounts was set up specifically to have anti-Qatar messages in their bios,” Marc Owen Jones, assistant professor of Middle East Studies at Hamad Bin Khalifa University in Doha and fellow of the Exeter Institute of Arab and Islamic Studies, told Al Jazeera.
Jones has spent the last several years researching how Twitter in the Middle East, and in the Gulf region especially, is being manipulated to spread propaganda.
“The methodology that I use to identify these bots is by looking at account creation dates,” Jones said.
“If you see a huge amount of accounts created on the same day tweeting on the same topic and they don’t really interact with people, you can be almost certain they’re bots,” he said.
“A lot of the obviously fake accounts we saw in the Qatar case were ones … [where] the combination of the creation date and anti-Qatar messages in their bios suggest that [a] network was specifically set up against Qatar,” he told Al Jazeera.
If you see a huge amount of accounts created on the same day tweeting on the same topic and they don’t really interact with people, you can be almost certain they’re bots.
MARC OWEN JONES, HAMAD BIN KHALIFA UNIVERSITY
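Jones’ heuristic can be sketched in a few lines of code. The snippet below is an illustrative Python sketch, not Jones’ or Al Jazeera’s actual tooling: the account records, field names and cluster threshold are all hypothetical, chosen only to show the idea of flagging accounts that were created on the same day and push the same hashtag.

```python
from collections import defaultdict

# Hypothetical tweet records: (account_id, account_creation_date, hashtag).
# Real data would come from the Twitter API; this sample is invented.
tweets = [
    ("a1", "2017-04-02", "#boycott_qatar"),
    ("a2", "2017-04-02", "#boycott_qatar"),
    ("a3", "2017-04-02", "#boycott_qatar"),
    ("a4", "2016-11-20", "#boycott_qatar"),
    ("a5", "2017-04-02", "#weather"),
]

def suspicious_accounts(tweets, min_cluster=3):
    """Flag accounts created on the same day that all push the same hashtag."""
    clusters = defaultdict(set)
    for account, created, hashtag in tweets:
        clusters[(created, hashtag)].add(account)
    flagged = set()
    for accounts in clusters.values():
        if len(accounts) >= min_cluster:
            flagged |= accounts
    return flagged

print(sorted(suspicious_accounts(tweets)))  # ['a1', 'a2', 'a3']
```

In practice, researchers combine a signal like this with others Jones mentions, such as whether the accounts ever interact with other users, before concluding an account is automated.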
Jones’ research on Twitter manipulation during the Gulf Crisis showed a vast network of bots that tried to popularise certain hashtags, sent out fake news, and disseminated propaganda before and shortly after the start of the blockade.
“The goal of these networks is not just to manipulate trends but to get real people to adopt these hashtags. This is something we saw a lot in the GCC Crisis, where some hashtags were started by bots.”
He added that prominent Twitter influencers in Saudi Arabia and the UAE later tweeted about the bot-created subjects, which were then picked up by “real people”.
Al Jazeera’s analysis found that as time progressed and the blockade was forgotten by the majority of the world, bots were still being created to increase the reach of political tweets.
Whereas Jones’ research mostly found accounts that sent messages backing the blockading countries, Al Jazeera’s analysis also found bots amplifying support for Qatar.
This meant that tweets with Arabic hashtags about Qatar’s Emir Sheikh Tamim bin Hamad Al Thani, including “Tamim, our glory”, and “Rest assured of prosperity and goodness”, quoting Al Thani, saw thousands of retweets by fake accounts, greatly increasing their reach on Twitter.
On the side of the blockading countries, hashtags calling Qatar’s leadership “Gaddafi of the Gulf” and “Zionists of Qatar” saw thousands of retweets among fake accounts.
The extensive use of retweets by so-called influencers, popular Twitter users with a large number of followers, is one of the most important weapons these fake accounts have.
Bots on both sides of the blockade
Alexei Abrahams, a fellow investigating information manipulation on social media in the Gulf region at the University of Toronto’s Citizen Lab, told Al Jazeera that using bots to retweet messages is a low risk, high reward endeavour.
“From a programming perspective, it’s trivial to create fake accounts and retweet what other people say,” Abrahams told Al Jazeera.
“If you’re a regime, you can, with a budget, create tonnes of fake accounts that can tweet to support the regime, either by tweeting certain hashtags to make them trend, or by retweeting what a state official said to make them seem more popular than what they really are,” he said.
The bot accounts Al Jazeera identified matched the behaviour Abrahams described: large numbers of retweets and hardly any interaction with other accounts.
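That behavioural signature lends itself to a simple screen. The function below is a minimal, hypothetical sketch of such a heuristic; the thresholds are illustrative assumptions, not the criteria Al Jazeera or Abrahams actually used.

```python
def looks_automated(n_tweets, n_retweets, n_replies,
                    retweet_ratio=0.9, reply_ratio=0.05):
    """Heuristic: an account that mostly retweets and almost never
    replies to other users fits the amplification-bot profile."""
    if n_tweets == 0:
        return False
    return (n_retweets / n_tweets >= retweet_ratio
            and n_replies / n_tweets <= reply_ratio)

# A retweet-heavy account with almost no replies is flagged ...
print(looks_automated(1000, 980, 2))    # True
# ... while a conversational account is not.
print(looks_automated(1000, 300, 150))  # False
```

A single metric like this produces false positives (some real users mostly retweet), which is why researchers such as Jones pair it with creation-date clustering and bio analysis.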
Based on Al Jazeera’s analysis, accounts like those of Saud al-Qahtani and Turki al-Sheikh, both advisers to the Saudi royal family, saw large numbers of retweets by identified bots, increasing the reach of their hashtags and tweets.
Al-Qahtani, one of the people allegedly involved in the Jamal Khashoggi murder, has been dubbed the “lord of the flies”, a reference to “the flies”, the nickname pro-Saudi accounts have received online.
According to an investigation by independent research organisation Bellingcat, al-Qahtani visited several websites in recent years where he “sought to manufacture engagement activity on major social media platforms”.
A similar pattern of bot behaviour was found among the fake accounts retweeting people who could be seen as supporting Qatar during the blockade, with Qatari royal family member Joaan bin Hamad Al Thani and prominent Qatari businessman Adel Ali Bin Ali retweeted by large numbers of automated accounts.
Other noteworthy accounts seeing large numbers of retweets by automated accounts were news channels such as Al Jazeera and Saudi News 50.
“Although we see an overarching political agenda behind these campaigns, it’s really hard to pinpoint who exactly is behind them,” Jones said.
However, creating a large number of bots requires resources, said Mona Elswah of the Computational Propaganda Project at Oxford University.
“Anyone who wants to manipulate content, so it can be a regime, political parties, anyone. They hire people who can create bots, in some cases they hire foreign bot creators. They aren’t necessarily politically involved themselves, but they know how to write the code to create the bots,” she told Al Jazeera.
Some bots in the data Al Jazeera collected showed signs of this, with tweets containing pornography and advertisements indicating that the person or people behind the network had been hired.
“When you think of bots, you have to consider that in order to build or hire a collection of bots, you need to have money, and that’s not something all countries have like the Gulf countries do.”
In the case of bots influencing online conversation about the GCC Crisis, Al Jazeera did not manage to identify the individuals, parties, groups or countries behind the accounts, making it impossible to definitively say who created them.
You don’t necessarily try to change other people’s opinions. You try to change their perception of other people’s opinions.
ALEXEI ABRAHAMS, CITIZEN LAB
According to Abrahams, the majority of the tweets were not intended to convince anyone, but were instead used to magnify the supposed reach of certain opinions.
“Most Middle Eastern and North African countries are run by authoritarian regimes. As a result, they suffer from a lack of reliable election or opinion poll data. This raises the question: how do people know if their leaders are popular?” he said.
As a result, these types of propaganda techniques are used, either by governments themselves or by their supporters, to make those governments seem more popular than they are.
“You don’t necessarily try to change other people’s opinions. You try to change their perception of other people’s opinions.”
Abrahams also added that another important motivation is simply sowing doubt about what is real and what is not.
“There is a book about Russian propaganda called Nothing is True and Everything is Possible, which argues that the main goal [of this type of propaganda] is to create doubt,” he said.
In an emailed response to Al Jazeera, Twitter said it was doing as much as possible to deal with the issue.
“Twitter’s number one priority is to improve the health of the public conversation. Part of this work involves tackling spam and automated activity that disrupts a person’s experience on the service. To this end, we have expanded our rules and invested in better tools to help us stay ahead of malicious actors,” the company said in response to two pages of questions asked by Al Jazeera.
“We are now identifying and challenging between 8m and 10m spammy, automated accounts every single week, asking for additional details such as a phone number or email address to authenticate them. Overall, 75 percent of these accounts are failing to pass these challenges and are ultimately suspended. We’re committed to building on this progress and continuing to prioritise conversational health,” the statement said.
Despite Twitter’s attempts, however, conversation about the blockade of Qatar is continuously being hijacked by bots on both sides.
In many ways, Twitter is not a meaningful platform in the Middle East, because it’s so inundated with spam, fake accounts and propaganda.
MONA ELSWAH, OXFORD UNIVERSITY
Many of the accounts identified by Al Jazeera are still online and continue to send out tweets, muddying online debates.
For Jones, that is reason enough to lose some hope in Twitter’s usefulness as a serious discussion platform for the Gulf region.