
Bots may be skewing your website analytics

Google Analytics on a computer screen. (“Google Analytics on Computer Screen” by bluefountainmedia is licensed under CC BY 2.0)

When businesses try to quantify the damage caused by malicious bot activity, they might calculate lost ad revenue as the result of click/ad fraud, or tally up costs related to preventing or experiencing a data scraping or credential-stuffing attack.

But a new survey-based report from Netacea suggests that one hidden cost of bots is sometimes overlooked: skewed website traffic analytics that result in companies making ill-informed marketing and merchandising decisions.

“It’s a side effect of bot activity,” said Andy Still, CTO at Netacea, in an interview with SC Media. “The bots are stealing data, they’re credential-stuffing… and I think the businesses… [are] trying to stop the negative objective that the bots [are] trying to do. So therefore, what gets overlooked is whilst the bot is doing that, it's creating altered data… that businesses then make decisions on.”

“It’s only when you start to look at your underlying data, and you see unusual usage patterns, that you realize that maybe that data is not as dependable as you thought it was. And then you look back and think, actually, that's because a lot of that data has been generated as a result of unwanted activity.”

That’s why marketing and security teams must work together, such that when the former discovers a bot problem, the latter takes responsibility to fix it, the report concludes — adding: “If marketing teams base their strategies on flawed data, can they have any chance of success?”

Of 440 surveyed businesses across the U.S. and U.K., 56% told Netacea that bots have negatively affected their data analytics, resulting in a minor financial impact, while another 12% said the impact was moderate in severity.

For instance, around 55% said that they have ordered new stock incorrectly due to bots artificially inflating their sales numbers, and a little over 50% said they ran special promotions based on what turned out to be data tainted by bots. Additionally, roughly 55% said that bots have “burned through our online budget, resulting in wasted investments in marketing activity.”

According to Still, many businesses conduct A-B testing on their websites, trying out different variants of user experiences or journeys to see which ones generate the best customer response — “then making marketing decisions based on popularity of particular products or particular types of products.” But if bots are significantly interfering, companies might be “making marketing decisions based on what bots are wanting to do, which is clearly not the same as what humans want to do.”

When that happens, “then they will be making a website that is harder for customers to use, and therefore they'll be losing business in that way,” he continued.

One example that Still encountered while at Netacea involved a company that offered its customers price comparisons for insurance quotes. This business trialed two different experiences for receiving these quotes. The first user journey had customers enter all their data into a single form; the second involved more of a site wizard experience that took users through a sequence of steps. “And they were using analytics to determine which was the most successful of those,” he said.

As it turns out, the majority of humans preferred the wizard approach, but because bots were using the single form, the company mistakenly thought the single form was more popular, and thus made the form the more prominent method for obtaining a quote. “Once we started engaging with them, and actually removing the bot traffic from their site, it became obvious that humans actually preferred the wizard kind of approach,” and they reversed course, he said.
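To make the stakes concrete, here is a minimal, self-contained sketch of how unfiltered bot sessions can flip the conclusion of an A/B test like the one Still describes. The session counts, the Session structure, and the form/wizard labels are invented for illustration only; nothing here reflects Netacea's actual tooling or the company's real data.

```python
# Hypothetical illustration: how unfiltered bot sessions can flip an A/B test.
# All numbers and labels below are made up for this sketch; real bot detection
# is far more involved than a ground-truth flag on each session.

from dataclasses import dataclass

@dataclass
class Session:
    variant: str        # "single_form" or "wizard"
    converted: bool     # did the session produce a quote request?
    is_bot: bool        # ground truth we normally would not have

def conversion_rate(sessions, variant):
    subset = [s for s in sessions if s.variant == variant]
    return sum(s.converted for s in subset) / len(subset) if subset else 0.0

# Made-up traffic: bots hammer the single form, humans prefer the wizard.
sessions = (
    [Session("single_form", True, True) for _ in range(400)] +    # bot "conversions"
    [Session("single_form", True, False) for _ in range(50)] +
    [Session("single_form", False, False) for _ in range(150)] +
    [Session("wizard", True, False) for _ in range(120)] +
    [Session("wizard", False, False) for _ in range(80)]
)

for label, data in [("raw", sessions),
                    ("bots removed", [s for s in sessions if not s.is_bot])]:
    print(label,
          "single_form:", round(conversion_rate(data, "single_form"), 2),
          "wizard:", round(conversion_rate(data, "wizard"), 2))
```

On the raw numbers the single form looks like the clear winner; strip out the bot sessions and the wizard comes out ahead — the same reversal the insurance-comparison company experienced once the bot traffic was removed.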

For companies that want to be more aware of the cost of skewed web traffic, Still offered a key recommendation: “The first step, I think, is not necessarily trusting the analytics will be correct. And if it doesn't seem right, intuitively, then look into it in more detail. Start to analyze it… maybe validate it with some real users. But dig into those stats in a bit more detail.”

Drill down into that next level of data, Still continued. You might find, for example, that certain traffic patterns are all coming from a particular geographical region in a way that doesn’t add up. That could be a clue of malicious bot activity.

“Often with bots… they're easy to identify,” he said. “If something looks unusual, then look at what tooling there is to get more information. Validate before you make big decisions based on data that you're not comfortable with.”
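As a rough illustration of the kind of drill-down Still recommends, the sketch below groups sessions and conversions by region and flags any region that sends a large share of traffic but almost none of the conversions. The counts, region codes, and thresholds are all assumptions made up for this example, not a real analytics API or a prescribed detection rule.

```python
# Minimal sketch of a region-level drill-down: flag regions whose share of
# sessions is wildly out of line with their share of conversions.
# The data and the 10%/5x thresholds are invented for illustration.

from collections import Counter

sessions_by_region = Counter({"US": 40_000, "UK": 25_000, "DE": 3_000, "VN": 55_000})
conversions_by_region = Counter({"US": 1_200, "UK": 800, "DE": 90, "VN": 15})

total_sessions = sum(sessions_by_region.values())
total_conversions = sum(conversions_by_region.values())

for region, count in sessions_by_region.items():
    traffic_share = count / total_sessions
    conv_share = conversions_by_region[region] / total_conversions
    # A region sending lots of traffic but almost no conversions is worth a look.
    if traffic_share > 0.10 and conv_share < traffic_share / 5:
        print(f"{region}: {traffic_share:.0%} of traffic, "
              f"{conv_share:.1%} of conversions -- investigate")
```

The same idea extends to other dimensions worth segmenting on — user agent, time of day, referral source — before any major marketing or merchandising decision is made on the numbers.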

Bradley Barth

As director of multimedia content strategy at CyberRisk Alliance, Bradley Barth develops content for online conferences, webcasts, podcasts and video/multimedia projects — often serving as moderator or host. For nearly six years, he wrote and reported for SC Media as deputy editor and, before that, senior reporter. He was previously a program executive with the tech-focused PR firm Voxus. Past journalistic experience includes stints as business editor at Executive Technology, a staff writer at New York Sportscene and a freelance journalist covering travel and entertainment. In his spare time, Bradley also writes screenplays.
