Twitter this week said that it removed an account that pretended to be Antifa -- the anti-fascist movement President Trump has claimed is an instigator of ongoing protests surrounding the police killing of George Floyd -- but was actually the handiwork of a white power group.
Since Russia leveraged social media to mount an influence campaign during the 2016 presidential election, social media platforms like Facebook and Twitter have scrutinized accounts more closely to weed out those that aren't what they claim to be.
The matter underscores the quicksand the social media platform often finds itself in, balancing free speech rights while shutting down fake accounts or fact-checking the president’s politically motivated tweets, according to two cybersecurity academics who have been studying how difficult fraudulent accounts can be to detect.
The white supremacist group claiming to be Antifa had reportedly created numerous fake accounts that called for violence in white neighborhoods.
“Fake account detection is a complicated business,” Christopher Whyte, a professor at Virginia Commonwealth University, told SC Media. Companies like Twitter “naturally don't publicize their exact methods because doing so would be tantamount to providing a ‘ways to get around a detection filter’ playbook,” he added.
Closely examining a group’s content activity, along with what it decides to retweet, usually makes its modus operandi obvious.
“The social nature of the data being looked at allows not only for detection of accounts that are obviously fake under some metric or other, but also detection of more sophisticated frauds that are embedded within a community of such behavior on the platform,” Whyte said. An algorithm is applied across the entire social network, weighing signals such as clusters of coordinated behavior and links to fake news to assign a probability that an account is fake, he added.
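At a very high level, that kind of scoring can be thought of as a weighted combination of behavioral signals squashed into a probability. The short Python sketch below is purely illustrative: the feature names, weights, and example values are hypothetical stand-ins for the sort of signals Whyte describes, since, as he notes, the platforms do not publicize their actual detection methods.

```python
# Illustrative sketch only -- not Twitter's method. Feature names and
# weights are hypothetical; a real system would learn them from labeled data.
import math
from dataclasses import dataclass


@dataclass
class AccountSignals:
    burst_posting_rate: float      # share of posts made in tight bursts
    retweet_ratio: float           # retweets as a share of all activity
    fake_news_link_share: float    # share of shared links flagged as fake news
    cluster_coordination: float    # timing/content similarity to a suspicious cluster


WEIGHTS = {
    "burst_posting_rate": 1.8,
    "retweet_ratio": 0.9,
    "fake_news_link_share": 2.4,
    "cluster_coordination": 2.1,
}
BIAS = -3.0  # keeps ordinary accounts well below the suspicious range


def fake_probability(signals: AccountSignals) -> float:
    """Combine weighted signals into a probability that an account is fake."""
    score = BIAS + sum(WEIGHTS[name] * getattr(signals, name) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))  # logistic squash to [0, 1]


if __name__ == "__main__":
    suspicious = AccountSignals(0.9, 0.95, 0.7, 0.8)
    ordinary = AccountSignals(0.1, 0.3, 0.0, 0.05)
    print(f"suspicious account: {fake_probability(suspicious):.2f}")  # ~0.94
    print(f"ordinary account:   {fake_probability(ordinary):.2f}")    # ~0.08
```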
“Twitter doesn’t want to become the arbiter of truth,” said Sal Aurigemma, University of Tulsa associate professor of computer information systems, adding that, going back to the Arab Spring and Russian election interference, Twitter has grappled with the issue of people pretending to be who they are not.
Hashtags and the use of stock photos sometimes give away fraudulent groups’ modus operandi. It’s not lost on Twitter that its platform can be taken advantage of for nefarious purposes. But Twitter, like Facebook, remains skittish about outright blocking Trump, despite his often inaccurate communiqués over the network, for fear of economic retaliation.
“If they piss off the government, there might be legal ways that prevent the platforms from existing or making money,” Aurigemma said.
To wit, President Trump last week signed an Executive Order encouraging a change to Section 230, the federal law that provides liability protections to social media firms, because he believes the platforms are biased against conservative voices. The EO seeks to task the Federal Communications Commission (FCC) with examining the reach of the current law and to give the Federal Trade Commission (FTC) the authority to handle political bias complaints.
Vijaya Gadde, Twitter head of legal, policy and trust, tweeted on May 27 that “No one person at Twitter is responsible for our policies or enforcement actions. We are a team with different points of view and we stand behind our people and our decisions to protect the health of the public conversation on our platform.”
The president recently took umbrage after Twitter fact-checked his tweet contending that mail-in voting is fraught with fraud. (Trump didn’t provide any evidence, nor does he think twice about retweeting clearly unvetted propaganda.)
Trump’s stature as a world leader allows him to be treated differently from other users, according to Aurigemma. “It’s not [Twitter’s] job to put a world leader in his place,” he said.