Bot Threats
March 23, 2023

Invalid Traffic Policy

Today, very few companies have an established policy for automated traffic. Often they are not even aware of the issues surrounding bot traffic. Given that bots now account for around 50% of all internet traffic, it’s hard to ignore the size of the problem and the potential implications of getting this wrong.

VerifiedVisitors allows you to take control over all bot traffic, and decide exactly who gets access to your valuable data. VerifiedVisitors provides the discovery to allow you to find out exactly which bots are hitting your site, and more importantly, why.

Once you know the nature of the automated traffic, we provide a recommendation engine that guides you through the best policy for each class of bot. You decide on your security policies once; our ML engine then applies them, adapting as the bot threat changes.

Invalid Traffic Policy Manager

Why should we care about these bots and develop policies for them?

Let’s take some examples. There are thousands of bots that businesses are letting onto their sites every day.

Infrastructure Bots

Many bots target IT infrastructure to understand the full tech stack and all the components used. In many cases this is harmless data: which webserver, Content Distribution Network (CDN), or e-commerce platform you run isn’t exactly a state secret.
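To get a sense of how little effort this kind of fingerprinting takes, here is a minimal sketch, offered purely as an illustration rather than any particular crawler’s code, that reads the stack details a server volunteers in its response headers (the URL and the bot’s user-agent string are placeholders):

```python
from urllib.request import Request, urlopen

# Response headers that commonly leak stack details: the web server, CDN,
# framework, and sometimes exact version numbers.
REVEALING_HEADERS = ("Server", "X-Powered-By", "Via", "X-Generator", "CF-RAY")

def fingerprint(url: str) -> dict:
    """Collect whatever stack information the server volunteers in its headers."""
    request = Request(url, headers={"User-Agent": "example-fingerprint-bot/0.1"})
    with urlopen(request, timeout=10) as response:
        return {name: response.headers[name]
                for name in REVEALING_HEADERS
                if response.headers[name] is not None}

# Typical output might look like {'Server': 'nginx/1.18.0', 'X-Powered-By': 'PHP/7.4.3'}
print(fingerprint("https://example.com"))
```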

Legitimate commercial services such as BuiltWith then package the data up, allowing sales and marketing teams to precisely target domains with the exact spec and build they have solutions for. It can be helpful to the entire supply chain: sellers get precise targeting, and buyers get solutions that, god forbid, they actually need.

However, on the illegitimate side, you can easily see the opportunity for hackers, who can target known infrastructure vulnerabilities. They can launch illegal bots to quickly and easily find vulnerable versions and weak tech stacks across the web. Of course, this is just another reason to ensure we’re always updating software and have robust version controls in place, but we all know that’s not always the reality.

Bots can extract very detailed information right down to specific releases and versions. Often these generic crawlers will hijack an existing common user agent string, pretending to be a legitimate search or media crawler.
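One common way to unmask that kind of impersonation, shown here purely as an illustration rather than as VerifiedVisitors’ own detection logic, is the reverse-and-forward DNS check that Google documents for verifying genuine Googlebot traffic:

```python
import socket

def is_genuine_googlebot(ip_address: str) -> bool:
    """Check whether a visitor claiming to be Googlebot really is one.

    A bot that merely copies Googlebot's user-agent string will fail this
    check, because its IP won't reverse-resolve to a Google-owned host.
    """
    try:
        # Reverse DNS: the source IP should map to googlebot.com or google.com.
        hostname, _, _ = socket.gethostbyaddr(ip_address)
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward DNS: the hostname must resolve back to the same IP,
        # otherwise the PTR record itself could be forged.
        return ip_address in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        # No PTR record, or the lookup failed -- treat the claim as unverified.
        return False

# A request whose User-Agent says "Googlebot/2.1" but whose source IP fails
# this check is almost certainly an impostor.
print(is_genuine_googlebot("66.249.66.1"))
```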

Archival / Historical Changes to Websites

Bots such as the Wayback Machine from archive.org record changes to your website over time.

Most people have used the Wayback Machine. It’s fun to see what that first Yahoo! page from 2001 looked like, track changes in design, and see how the internet has evolved. It’s also helpful to track the progress of websites, and to see how they pivot and change their marketing and brand according to market conditions.

Yahoo First Home Page

Sophisticated bots are now looking to detect how policies, prices, definitions and other text change over time, and to capture evidence of the change that can then be used. This particularly affects governments, NGOs and others in charge of public policy. Recent examples are numerous: bots have been tracking the definitions of ‘vaccinated’ and ‘natural immunity’, for example, as they have changed on health and government websites. Public service update, or smoking gun? The issue here is that evidential proof of a change that just ‘appeared’ without notice can be made to look ‘underhand’. All the evidence is then documented, with dates, showing the changes made and open to interpretation.

Most of the time, businesses don’t know what the bots are actually doing. Once a business understands how its own data is being used, it is often absolutely not OK with it, and actively wants to block access.

VerifiedVisitors has over 30 categories of bots, each with its own recommendations for your sites. You can simply accept the recommendations or decide on your own policy for each category.

VerifiedVisitors allows you to set a security policy that gets applied according to the actual threats and risks on each of the sites that policy is tied to. As the risk changes, the policy adapts to cover it. This is all automated for you: all you need to do is set the policy once in the command and control console, and VerifiedVisitors does the rest.

Now that you have one set of clear policies, it’s much easier to manage your security at the policy layer. You have one simple set of security standards that you can update centrally as the risks change over time, and VerifiedVisitors dynamically applies the actual rules at each endpoint, automatically.
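Conceptually, the policy layer boils down to a mapping from bot category to action that every endpoint consults. The sketch below illustrates that idea in Python; the category names, actions, and the decide() helper are invented for the example and are not VerifiedVisitors’ actual configuration format or API:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"            # let the bot through untouched
    RATE_LIMIT = "rate_limit"  # allow, but throttle
    CHALLENGE = "challenge"    # require the visitor to prove itself
    BLOCK = "block"            # deny outright

# One central policy: bot category -> action. The categories here are
# invented for illustration only.
POLICY = {
    "verified_search_engine": Action.ALLOW,
    "archival_crawler": Action.CHALLENGE,
    "infrastructure_scanner": Action.BLOCK,
    "unclassified": Action.CHALLENGE,
}

def decide(bot_category: str) -> Action:
    """Resolve the action for a classified visitor; unknown categories get challenged."""
    return POLICY.get(bot_category, Action.CHALLENGE)

# Every endpoint consults the same central policy, so changing POLICY once
# changes enforcement everywhere.
print(decide("archival_crawler"))        # Action.CHALLENGE
print(decide("infrastructure_scanner"))  # Action.BLOCK
```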

Policy applied. Problem solved.

Photo by Rob Wicks on Unsplash
