
Are ‘bad bots’ weaponising data centres to spread fake news?


Bad bots accounted for 20 percent of all web traffic last year, according to new research from Distil Networks published today.

With bots making up 40 percent of web traffic, the finding that half of them were ‘bad’ is a cause for concern. That concern deepens when you discover the same research found 75 percent of them to be advanced enough to load JavaScript or other external resources, hold onto cookies and achieve persistence by randomising IP addresses, headers and user agents.
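
To make that evasion concrete, here is a minimal Python sketch of the header and user-agent randomisation described in the research; the user-agent strings and the fetch helper are illustrative assumptions, and truly advanced bots would additionally execute JavaScript through a headless browser and rotate source IPs through proxy pools.

```python
# A minimal sketch of header/user-agent randomisation, assuming the
# third-party 'requests' library. The user-agent strings are illustrative.
import random
import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/603.3.8",
    "Mozilla/5.0 (X11; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0",
]

session = requests.Session()  # "holds onto cookies" across requests

def fetch(url):
    # Randomise identifying headers on every request, so a filter keyed
    # to a static fingerprint (same user agent each hit) gets no signal.
    headers = {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": random.choice(["en-US,en;q=0.9", "en-GB,en;q=0.8"]),
    }
    return session.get(url, headers=headers, timeout=10)
```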

Distil Networks researchers also reveal that 60 percent of all these bad bots originated from data centres, with Amazon topping the bad bot market share for the third year in a row, responsible for 16 percent of all bad bot traffic.

We know that bots can include things such as search engine crawlers, whose indexing processes are necessary to keep product, service and site details up to date. But what, exactly, does a bad bot do?

SC Media UK asked Stephen Singam, managing director for research at Distil Networks, to define exactly what a bad bot is. “When you look at bad bots”, Singam told us, “it's important to differentiate between those that use data for business advantage and those that break the law through fraud and account theft.”

There is a well-known legal grey area inhabited by bots that scrape data from sites without permission and reuse it, lifting details such as pricing information or inventory levels. For some retailers this is fine, as it can help customers; where the motive is to gain a competitive edge, however, such bots are not so welcome.

“The truly ugly bots”, Singam continues, “undertake criminal activities, such as fraud and outright theft.” One example is account hijacking, where a breach disclosing significant numbers of account details is met with a bot swarm feeding that data into login pages, uncovering live accounts to compromise wherever password re-use exists.
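
Defending against this swarm is partly a numbers game. As a hedged illustration (not Distil's method), the sketch below flags source IPs that attempt an improbable number of distinct usernames in a short window, one common signal of credential stuffing; the thresholds and the log-record shape are assumptions.

```python
# A minimal sketch of one way defenders spot credential stuffing: flag
# source IPs attempting many *distinct* usernames in a short window.
# WINDOW_SECONDS, MAX_DISTINCT_USERS and the record shape are assumptions.
from collections import defaultdict

WINDOW_SECONDS = 300
MAX_DISTINCT_USERS = 20  # a human rarely tries 20 accounts in 5 minutes

def flag_stuffing_ips(login_attempts):
    """login_attempts: iterable of (timestamp, source_ip, username) tuples."""
    recent = defaultdict(list)  # ip -> [(timestamp, username), ...]
    flagged = set()
    for ts, ip, user in sorted(login_attempts):
        # keep only this IP's attempts inside the sliding window
        recent[ip] = [(t, u) for t, u in recent[ip] if ts - t <= WINDOW_SECONDS]
        recent[ip].append((ts, user))
        if len({u for _, u in recent[ip]}) > MAX_DISTINCT_USERS:
            flagged.add(ip)
    return flagged
```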

“Bad bots might also click on ads hosted on ‘fake news’ sites”, Singam told SC Media UK, continuing, “the fraudsters will own both publication and bot traffic, and get paid out as part of advertising networks that are operating programmatically.” By the time the advertiser audits their traffic, of course, the fraud has been committed and the proceeds cashed out.

And talking of fake news, which is much in the real news courtesy of President Trump and his Twitter tantrums, SC wondered if bad bots could be behind the fake news campaigns on social media that are designed to influence political opinion.

“Anecdotally, bots can be used to amplify a ‘voice’ in social media to make it appear that more people share that opinion”, Singam agrees, adding that bots are “just automated scripts completing their allotted task.” That task could be scraping airline ticket prices every ten seconds, running millions of stolen credentials against websites or reposting social media posts.
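
As a toy example of such an allotted task, the snippet below polls a fare page every ten seconds; the URL is invented for illustration, and a real scraper would parse prices out of the response rather than just fetching it.

```python
# A toy "allotted task": poll a fare page every ten seconds.
# The URL is hypothetical; a real scraper would parse prices from the HTML.
import time
import requests

FARE_URL = "https://airline.example.com/fares/LHR-JFK"  # hypothetical

for _ in range(360):  # an hour of ten-second polls
    response = requests.get(FARE_URL, timeout=10)
    print(response.status_code, len(response.text))  # parse prices here
    time.sleep(10)  # far faster than any human would check
```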

Singam admits that it's a very complex environment, so tracking the political organisations or even lone-wolf actors behind such social media campaigns isn't always easy. Projects such as Political Bots, run by the team at the Oxford Internet Institute at the University of Oxford, exist to track news and gather evidence of bots being used in this way.

Paul Fletcher, cybersecurity evangelist at Alert Logic, told SC that one of the reasons bots are used for distributing fake news is that they are easy to configure. “Bots can be designed to search and replay specific types of text strings or other data”, Fletcher explains, continuing, “if a bot gets a hit on what it's programmed to look for then the pre-configured response kicks in.”
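
A hedged sketch of the trigger-and-replay pattern Fletcher describes might look like the following; the trigger strings and canned responses are invented for illustration, not taken from any observed campaign.

```python
# Sketch of a trigger-and-replay bot: scan incoming text for the strings
# it is programmed to look for, then fire a pre-configured response on a hit.
import re

CANNED_RESPONSES = {  # hypothetical triggers and replies
    r"election fraud": "Everyone is talking about this! Share if you agree.",
    r"breaking:": "Confirmed by multiple sources. Spread the word!",
}

def bot_reply(post_text):
    for pattern, response in CANNED_RESPONSES.items():
        if re.search(pattern, post_text, re.IGNORECASE):
            return response  # the pre-configured response kicks in
    return None  # no hit, the bot stays silent

print(bot_reply("BREAKING: something happened"))  # -> canned response
```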

Mikhail Sosonkin, a senior security researcher at Synack, also knows a bit about bad bots and told SC that they can be used to draw an audience to a fake news website or social media campaign. “One of the methods is for bots to maintain ‘exciting’ conversations on services like Reddit or Facebook”, Sosonkin explains. “You need to use bots for that in order to look like a legitimate group of people that is distributed across the web. These operations have to be covert influence because they can't work if people know where they are coming from.”

Although we do know where they are coming from: data centres such as those operated by Amazon. Indeed, it might be said that data centres are effectively being weaponised by bot activity when you consider that 60 percent of such activity originates in the cloud.

This doesn't surprise Sosonkin at all. “Data centres provide cover for these activities”, he explains, “because it is hard to block out an entire data centre and all of its legitimate users.” As Sosonkin points out, “bots need to be distributed, cost effective and reliable: all features provided by cloud services and data centres.”
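
To see why that cover works, consider the blunt alternative sketched below: matching source IPs against published cloud address ranges. The CIDR ranges shown are illustrative, not Amazon's actual allocations; the point is that any block built on this signal alone also takes out every legitimate customer in the same range.

```python
# Why blanket data-centre blocking is blunt: this matches *any* IP in the
# listed cloud ranges, legitimate services included. Ranges are illustrative.
import ipaddress

DATACENTRE_RANGES = [
    ipaddress.ip_network("52.0.0.0/11"),   # illustrative cloud range
    ipaddress.ip_network("13.52.0.0/16"),  # illustrative cloud range
]

def is_datacentre_ip(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DATACENTRE_RANGES)

print(is_datacentre_ip("52.1.2.3"))     # True: blocked, bot or not
print(is_datacentre_ip("203.0.113.9"))  # False: passes
```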

So how can we, as an industry, get on the case and stop bad bots? If, as the research suggests, a fifth of all web traffic is generated by these bots, isn't there a case to be made that the security industry is failing the public at large?

“There are technologies available to combat human error and keep people safe from malicious activity online”, says Simon Crosby, CTO and co-founder at Bromium, in conversation with SC, “but no solution to date has been able to successfully prevent internet trolls and bots from misleading members of the public.”

Indeed, in a post just after the recent US presidential election, Facebook's Mark Zuckerberg recognised the issue and called it complex, both technically and philosophically. “While social media platforms struggle with the challenge of verifying sources without limiting freedom of speech”, Crosby concludes, “we must rely on trust to reduce the impact of targeted misinformation…”.
