Bot detection tools to prevent account takeover (ATO), API abuse and data mining: stop the bots from stealing your data for good

The Importance of Bot Detection Software in Digital Security


In this comprehensive guide, we delve into the world of bot detection software and its crucial role in safeguarding your online business from the threats posed by malicious bots. We will explore the key features, benefits, and implementation of bot detection software and tools, helping you gain an edge over the competition and protect your digital assets effectively.

Understanding Bots and Their Impact

What are Bots?

Bots are automated software scripts that perform tasks over the internet. Over 50% of all internet traffic is automated, so the volumes involved are gigantic. Some of these bots are beneficial, such as search engine bots that index websites for search results, and uptime checkers. However, many are malicious bots specifically designed to exploit vulnerabilities and cause harm. Finding them can be like finding the proverbial needle in a haystack.

The Impact of Malicious Bots

Bot Attack Matrix
The Bot Attack Matrix, from Simple to Highly Sophisticated Bots

Looking at the bot matrix above, bots range from simple bots that reuse generic scripts time and time again, through customised bots that are targeted at a particular site, to highly sophisticated bots that are not only targeted but also take extensive steps to avoid detection.

Generic scripts can be stopped fairly easily. These bots crawl in massive volumes, attacking well-known paths like /wordpress/admin hoping to get lucky with the site admin credentials, along with millions of other obvious vulnerability scans. These bots are scripted, and may even come from data centres or display other obvious signs that they are automated. Since the scripts don't change, they are very easy to block using a proof of work or a simple CAPTCHA.
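Proof of work is cheap to issue and verify but expensive for a bot fleet to solve millions of times over. A minimal hashcash-style sketch; the difficulty value and nonce format are illustrative, not any particular vendor's scheme:

```python
import hashlib
import itertools

DIFFICULTY = 3  # leading zero hex digits required (illustrative value)

def solve_challenge(nonce: str, difficulty: int = DIFFICULTY) -> int:
    """Brute-force a counter so sha256(nonce:counter) has the required prefix.
    Trivial for one real browser; costly across millions of scripted requests."""
    for counter in itertools.count():
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return counter

def verify(nonce: str, counter: int, difficulty: int = DIFFICULTY) -> bool:
    """Server-side check: a single hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

counter = solve_challenge("session-abc")
assert verify("session-abc", counter)
```

The asymmetry is the point: verification is one hash, while solving takes thousands of attempts on average, which adds up fast for high-volume scripted traffic.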

More sophisticated scripts disguise their origin.

Pretending to be a legitimate user agent is one commonly used method. No one wants to block search engine traffic, and it's all too easy to slide into a whitelist by faking a user agent string. Verifying each and every bot is a major task, and it's not made easier when the published range of IPs is actually wrong.
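Major search engines publish a two-step DNS check for verifying their crawlers: reverse-resolve the claimed IP, confirm the hostname sits on the engine's domain, then forward-resolve that hostname and confirm it maps back to the same IP. A sketch, with the resolver functions injectable so it can be demonstrated without network access; the domain suffixes shown are Google's published ones, and other engines have their own:

```python
import socket

def verify_search_bot(ip,
                      allowed_suffixes=(".googlebot.com", ".google.com"),
                      reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                      forward=lambda host: socket.gethostbyname_ex(host)[2]):
    """Reverse-then-forward DNS verification of a claimed search engine bot."""
    try:
        host = reverse(ip)                   # step 1: reverse DNS on the source IP
    except OSError:
        return False
    if not host.endswith(allowed_suffixes):  # step 2: hostname must be on the engine's domain
        return False
    try:
        return ip in forward(host)           # step 3: forward DNS must map back to the same IP
    except OSError:
        return False

# Offline usage example with fake resolvers standing in for real DNS:
genuine = verify_search_bot("1.2.3.4",
                            reverse=lambda ip: "crawl-1-2-3-4.googlebot.com",
                            forward=lambda host: ["1.2.3.4"])
spoofed = verify_search_bot("5.6.7.8",
                            reverse=lambda ip: "bot.attacker.example",
                            forward=lambda host: ["5.6.7.8"])
```

A faked user agent string fails step 2, and a faked reverse DNS record fails step 3, since the attacker does not control the forward zone for the engine's domain.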

Mobile devices are very useful to attackers, as mobile networks route all their subscribers through the same gateways, so it's impossible to block an IP, a range, or an entire ASN without blocking potentially millions of legitimate mobile users. Mobile server farms use actual devices, often rigged up to a moving panel that triggers the accelerometer in each device, fooling fingerprint technology into thinking the device is moving in a human way.

It's the customised bots that are often the most malicious, and they can lead to various detrimental consequences.

These malicious bots, such as web scrapers, content spammers, and DDoS bots, can wreak havoc on your website, leading to data breaches, customer fraud, and revenue loss. Ever more sophisticated bots are a growing concern, as they threaten the integrity of online interactions and are becoming increasingly difficult to stop.

Bots can impersonate users, take over accounts, engage in fraudulent activities, and wreak havoc on websites. To safeguard against these threats, the use of bot detection tools has become paramount.

Recognising the opportunity, Cybercrime as a Service (CaaS) vendors have started to package up many of the common features needed to deploy sophisticated bots on command, with no skills necessary. To avoid detection, these CaaS providers have access to botnets or shared devices, all on domestic IPs. This allows bots to disguise their origins, rapidly rotate IP addresses, and easily pass device fingerprinting checks. Why? The bots originate from real devices on real domestic IPs, so they will pass regular device fingerprint checks.

These vendors offer increasingly sophisticated bots that can pass CAPTCHAs, fake mouse trails accurately enough to pass as a real user, and can be scaled across millions of devices to avoid detection.

With the advent of Generative AI tools, intelligent agents can be 'programmed' automatically with no coding required, putting sophisticated bot agent technology in the hands of anyone with a PC. Combined with a large proxy platform that uses millions of domestic IPs and real machines, these represent a whole new potential threat. Next, we will explore the significance of bot detection tools and how they can help protect your online assets.

A device fingerprint is a unique identifier built from hardware, software and digital provenance signals.
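A naive version of such a fingerprint can be sketched as a hash over a canonical, sorted set of client attributes. The attribute names below are illustrative; real systems combine thousands of signals:

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a canonical (sorted-key) serialisation of client attributes
    into one short, stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

attrs = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "fonts": ["Arial", "Helvetica"],
    "webgl_renderer": "ANGLE (Intel)",
}
fp = device_fingerprint(attrs)

# Any single changed attribute yields a completely different fingerprint:
changed = device_fingerprint({**attrs, "timezone": "America/New_York"})
```

This also shows the weakness discussed above: a bot running on a real device, or replaying a captured attribute set, produces a perfectly ordinary fingerprint.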


Now that we have seen how bots are becoming increasingly sophisticated, it is clear that traditional security measures are often insufficient to detect and mitigate their activities. This is where bot detection tools come into play.

Enhanced Security: Bot detection tools provide enhanced security measures specifically designed to identify and counter these malicious bots.

Protection from Data Scraping: Data scraping bots are notorious for extracting valuable information from websites without permission. Bot detection tools can effectively block such attempts, safeguarding your data.

Preventing Account Takeovers: Bots often attempt to take over user accounts through brute force attacks or credential stuffing. Bot detection tools can identify these activities and prevent unauthorized access.

Mitigated Fraud and Abuse: Fraudulent activities, such as credential stuffing and account takeovers, can severely impact your business reputation and financial stability. Bot detection software helps identify and prevent such fraudulent activities, safeguarding your customers and brand image.

Mitigating DDoS Attacks: Distributed Denial of Service (DDoS) attacks involve overwhelming a website's servers with traffic from multiple sources, rendering it inaccessible. The most aggressive 'amplification' attacks force servers to respond to millions of requests simultaneously, simply overwhelming the website. These attacks can be massive, but they are effectively brute-force attacks. The very largest require massive infrastructure to load-balance the attack across many servers and absorb the threat on a global basis. Effectively, only AWS, Cloudflare and Google Cloud can absorb these massive attacks. Pick bot detection software vendors that partner with these DDoS providers to ensure you have the best DDoS protection.

Smaller, concentrated DDoS attacks using customised bots targeted at your website are a distinct threat, and often avoid detection because they fly under the radar on a global basis. If they are all targeted at your credit card gateway, for example, you can spend hours offline, and the upstream payment provider may simply take you offline, no questions asked. It's important to have the massive global protection layer as well as more local, granular protection for these targeted DDoS attacks.

Accurate Analytics: Bots can skew your website's analytics, making it challenging to obtain accurate data on user behaviour. Bot detection software filters out bot-generated traffic, allowing you to make data-driven decisions based on reliable information.

Blocking Bots at the network edge


Investing in bot detection software offers several invaluable benefits for your online business:

1. Improved Website Security

Bot detection software acts as a digital shield, protecting your website from unauthorized access and potential cyberattacks. It identifies and blocks harmful bots, ensuring that your sensitive data remains secure.

2. Enhanced User Experience

Malicious bots can slow down your website, leading to a poor user experience. Many sites punish legitimate users by putting in place extensive two-factor authentication, hard-to-use CAPTCHAs, pointless rate limits across the entire website, and other obstacles for all users, regardless of who they are. By employing bot detection software, you can optimize your website's performance and provide a seamless browsing experience for all your visitors.

3. Decreased Server Maintenance, lower costs

Bot spikes can cause all sorts of maintenance issues. Sudden volumes of traffic can overwhelm servers, or cause issues as your elastic compute auto-scales and fires up new nodes, causing additional expense and increased maintenance. Verifying bots using log data is time-consuming and fraught with identity issues. Malicious bots often impersonate common bots that you do want to allow on your site. Many sites have whitelisted bots disguised as legitimate services, only to have their entire website crawled, or worse.

Using ML to combine thousands of factors that make up a fingerprint


Step 1: Assessing Your Bot Detection Needs

Before selecting a bot detection software, it's essential to understand your specific requirements. You may not be aware that bots are actually hitting your website. VerifiedVisitors offers a free 30-day bot audit so users can run an audit in passive mode, and assess the real issues.

Does the Bot Service sit-inline?

Anything that sits in between your customer and your services is a potential risk. What happens if the bot service fails? Does the bot service use a reverse proxy? Does this conflict with other reverse proxy services? What happens when the reverse proxy fails?

How long does the bot service take to make its decisions? Too long and you will affect the speed and performance for all your users, which you definitely don’t want to do. Too short, and you have to question what exactly is going on under the hood. What kind of checks can be made in under 2 milliseconds?

Step 2: Researching Bot Detection Solutions

Conduct thorough research to explore the various bot detection software available in the market. Many of the first bot detection utilities used simple fingerprints to detect bots. Later generations used machine learning over log analysis to predict bots from their behaviour as they crawl the website. Both have massive flaws.

Fingerprints don’t work when botnets of real devices are used, and they can also be faked. Fingerprints don’t work on your API either, as all the traffic is automated.

Log-based analysis means, by definition, that bots are already hitting your website. What we don’t want is a detailed report of the attack after the event.

VerifiedVisitors combines user behaviour and fingerprinting with Machine Learning to provide a secure and accurate platform that enhances the customer experience for your loyal and verified visitors, but makes life impossible for the bots, at the network edge, before they even hit your site.

Step 3: Choosing the Right Vendor

Select a reputable and experienced vendor that aligns with your business goals. Do they have a publicly available status page, that shows the history of downtime or latency issues?

Read customer reviews and testimonials to gauge their performance and customer support.

Step 4: Integration and Testing

Ease of integration into your current stack is a major factor. Can it be integrated seamlessly in the cloud, or does it require a custom integration, or even custom hardware on site? Once you've chosen a bot detection software, integrate it into your website and conduct rigorous testing. Monitor its performance and effectiveness in identifying and mitigating bot threats.

Step 5: Continuous Monitoring and Updates

Bot threats are ever-evolving, so it's crucial to maintain continuous monitoring and keep the software updated. Regularly review its performance and make necessary adjustments to stay ahead of potential threats.

Step 6: Accuracy and Effective Measurement

Many bot software tools boast of high, if not 100%, accuracy in bot detection while minimizing false positives. This is interesting, as many Cybercrime as a Service (CaaS) operators boast they can avoid detection on 99% of all sites. Many vendors boast about their false positive rate (the number of times they say it’s a bot but it’s really a human), but totally fail to tell you the false negative rate (the number of times they say it’s a human, but it’s really a bot). With a false positive, a human has to be authenticated, which can cause inconvenience. Misidentifying a bot as human allows the bot access, and it most likely won’t ever be picked up, with potentially dangerous consequences.
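The distinction is easy to see with a small confusion-matrix calculation. The counts below are invented for illustration: a tool can report well over 99% overall accuracy while still letting 10% of bots through:

```python
def error_rates(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """tp: bots correctly flagged; fp: humans wrongly flagged as bots;
    tn: humans correctly passed; fn: bots wrongly passed as human."""
    fpr = fp / (fp + tn)  # false positive rate: humans inconvenienced
    fnr = fn / (fn + tp)  # false negative rate: bots let through
    return fpr, fnr

# Illustrative traffic sample: 100,000 requests, 1,000 of them bots.
fpr, fnr = error_rates(tp=900, fp=10, tn=99000, fn=100)
accuracy = (900 + 99000) / 100000  # 0.999 -- yet fnr is 0.10
```

Because genuine humans vastly outnumber bots in most samples, headline accuracy is dominated by true negatives, which is exactly why the false negative rate is the number to ask for.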

Ensure your bot vendor has a playback feature in test mode, so you can verify the blocking rules that would be applied, and pick up both the false positives and false negatives by analysing the results.

Hardware, software, digital provenance and behaviour all make up the fingerprint


Real-Time Monitoring

Bot detection tools offer real-time monitoring, allowing immediate responses to potential threats. Log-based monitoring alone will not be able to monitor and respond to threats in real time.

Does the bot vendor learn from your traffic?

Most vendors apply global rules to every client instance, regardless of the actual traffic pattern on your site. Many simple, very high-frequency bots can be easily detected because they hit many client sites. Sophisticated bots, tailored to a particular high-value website, will not have been seen anywhere else. Applying global detection rules, for example on rapid, repeated behaviour that can’t be human, may work on 99% of sites, but fail on yours if you make extensive use of APIs and multiple simultaneous page requests as part of your normal flow. Many vendors say they offer machine learning based services, but don’t actually use ML to learn from your traffic patterns. They may detect 99% of all bots, but that means they can miss the 1% of bots written specifically to hit your site.

Dynamic Rules V. Static Rules

We’ve all blocked an IP address only to have the same users rotate IPs to avoid detection. Static rules, which simply apply fixed IPs, user agents or other variables, quickly become unmanageable and are out of date the moment they are written. Tracing back why we blocked an IP or range a few years ago becomes impossible. Look for a vendor with dynamic rules that are applied based on your policies and update automatically according to the actual threat type. This is a massive time saver.

Behavioural Analysis

These tools analyse user behaviour, identifying patterns that deviate from human interactions using logs and other behavioural data. These are usually very good indicators of bot behaviour. Sophisticated bots will attempt to hide in the overall mix of user behaviour, but many simpler bots just hit the endpoints they need over and over again. If a bot truly evades capture, but never challenges any of your critical paths or infrastructure, then the bot may have avoided detection, but has ultimately failed to achieve anything.
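A simple behavioural signal of this kind is endpoint concentration: the share of a session's requests that hit its single most-requested path. The threshold values below are illustrative:

```python
from collections import Counter

def endpoint_concentration(paths: list[str], min_requests: int = 20) -> float:
    """Fraction of requests hitting the session's most-requested path.
    Simple bots hammer one endpoint; human sessions spread across pages.
    Sessions too short to judge score 0.0."""
    if len(paths) < min_requests:
        return 0.0
    counts = Counter(paths)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(paths)

bot_like = ["/api/price"] * 48 + ["/login", "/login"]
human_like = ["/", "/products", "/products/42", "/cart", "/checkout"] * 5
```

On its own this is crude, and as the text notes, a site whose normal flow involves many simultaneous API calls would need the model learned from its own traffic rather than a global cutoff.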

Play Back Monitoring

Before applying blocking rules, it’s critical that the bot management software is able to play back the results of its rules without actually affecting your current traffic. The playback should provide a detailed analysis of the traffic that would be blocked, the reason, and logs so you can verify whether the cause was indeed valid. This ensures that the rules are valid and correct, and any false flags can be picked up before the integration goes live and valid traffic is blocked. The playback feature should give you the ability to export logs for detailed analysis.

API Monitoring

API monitoring is impossible for fingerprint-based bot detection vendors, as all API traffic is automated and the fingerprint makes no difference. This is a great way to test whether your vendor has real behavioural machine learning based on the unique traffic pattern hitting your API; it’s impossible to do otherwise.

Verified Bot Database

Legitimate bots are often faked by malicious hackers. Blocking major search engines or uptime checkers isn’t going to win you any favours at your workplace, so there is always a tendency to whitelist well-known user agent strings. However, whitelisting these user agents can be seriously harmful, as a fake bot is then able to crawl your entire site unchallenged. With thousands of legitimate bots, this becomes a major issue. Ensure your bot management tool has an effective way of managing these bot services.

Mitigation List

Ensure your vendor has a comprehensive log of all traffic blocked or mitigated, along with the root cause, breakdown of the traffic type and digital provenance. Without this visibility there is no accountability.

IP Rate Limiting

Be wary of vendors offering rate limiting. Although it can be useful as a catch-all in certain circumstances, essentially you are degrading the performance of your services overall because you can’t detect the actual abusers of the service. Rate limiting isn’t so much a feature as a flaw.
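For reference, the mechanism in question is usually a token bucket or something similar. The sketch below shows why it is blunt: every client behind one IP, such as a mobile carrier gateway, draws from the same budget, so heavy abuse and heavy legitimate use look identical. The rate and burst values are illustrative:

```python
class TokenBucket:
    """Classic per-key token bucket. Blunt as a bot defence: all clients
    sharing one key (e.g. one gateway IP) share the same token budget."""

    def __init__(self, rate_per_sec: float = 2.0, burst: int = 10):
        self.rate = rate_per_sec
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.last = None  # timestamp of the previous request

    def allow(self, now: float) -> bool:
        """Refill tokens for elapsed time, then spend one if available."""
        if self.last is not None:
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=1.0, burst=2)
```

Once the bucket is empty, the limiter cannot tell a scraper from a thousand legitimate users behind the same gateway; it simply throttles them all.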


Given we know bots can pass CAPTCHA, what’s the point of having it? Only a small percentage of bots currently pass CAPTCHA, and it is useful for measuring false positives: how many humans passed the CAPTCHA?

Combining Methods with ML is far more effective.


In conclusion, bot detection software is a crucial tool for safeguarding your online business from the rising threat of malicious bots. By investing in this cutting-edge technology, you can enhance website security, improve user experience, and protect your valuable data from cybercriminals. Remember to assess your specific needs, research available solutions, and choose a reliable vendor to ensure successful implementation. Stay vigilant, and your business will thrive in the face of ever-changing digital challenges.

Zero Tolerance at the Network Edge


VerifiedVisitors protects all your endpoints - APIs and websites across the hybrid cloud - with no software to install, and decisions made in milliseconds. Adding zero tolerance at the network edge greatly increases your overall security footprint, preventing bot attacks and fraud before they can do harm.