Detecting New Account Bot Fraud

What is New Account Fraud, and how to prevent it.

New Account Fraud

Everyone is excited. Over the past week the business has seen massive growth in new account registrations. Marketing has smashed all its quarterly OKR targets. The CEO is taking the credit, again. The CFO is waiting for these new accounts to convert into paid accounts, and has the upsell ratio all set up and ready to go in Excel. Investors have been informed in hushed tones. Even the founder is smiling, for once.

However, the tech team is suspicious. Something doesn’t look quite right, but they aren’t sure, and keep their heads down. The tech team has put in a sophisticated CAPTCHA, one of the new puzzles, uniquely generated each time, that only humans can solve. The accounts seem unique; they have valid IDs and even mobile numbers. What can possibly go wrong?

It turns out that the marketing department can’t put their finger on the referral source of the new registrations. They thought it must have been the new TikTok video, but that’s not what the data shows. “Word of mouth”, mumbles the Chief Marketing Officer.

Understanding the Anatomy of New Account Fraud

New account fraud, a sophisticated form of cybercrime, involves the creation of fake accounts with the intent to deceive and exploit. These deceptive practices compromise the integrity of online platforms, leading to financial losses and reputational damage for businesses. They often start with a significant increase in registrations or, more subtly, with a sudden re-activation of dormant accounts.

So what is going on?

The company isn’t large enough to support a dedicated fraud team, but the tech team’s suspicions proved correct. The increase in registrations was caused by automated bot attacks, from sophisticated bots capable of bypassing their swanky new puzzle CAPTCHA protection. Although they still couldn’t be sure, the tech team took the precaution of performing a manual audit on a sample of the new registrations. They couldn’t connect with a single real customer during the audit.

Impact of New Account Bot Fraud on Business

How do Bots exploit new accounts to commit fraud?

New account fraud is committed for a whole variety of reasons depending on the nature of the accounts and the business. Here are some of the major ones that we have seen at VerifiedVisitors. 

  1. Free SaaS services and promotions: nearly all SaaS offerings have a free tier or a limited-time offer of some sort. It’s a common business model to entice customers with a free offering and upgrade them over time. Usually that free tier has important restrictions, such as usage caps. Creating a thousand fake accounts lets hackers pool your fair-usage allowance into one gigantic account. You might be offering a generous 1 gigabyte of storage, only to find that you’re now forking out for a terabyte, and none of the users will ever convert. The bot scammers may simply be taking advantage of a special offer, coupon or discount, with no intention of ever upgrading.

  1. Access to member areas for additional data, services and content: a well-known international newspaper found that legitimate paid subscribers had signed up manually to a content subscription service, giving them access behind the paywall. These accounts were then taken over by bots that systematically data mined the entire content and republished the whole service in China. The bots distributed the load across many accounts, staying within normal usage patterns, and did not stand out until a major audit investigation was launched.

  1. Social media and other networking sites: fake accounts represent a serious challenge to the integrity of many of these platforms. In fact, the social app IRL (ironically named after the acronym for “In Real Life”) was shut down because 95% of its “20 million” users were in fact fake. The CEO was suspended for misconduct, and the investors, duped by the platform’s massive growth rates, lost $150 million. VerifiedVisitors operates Bot Audits, allowing investors and others to quickly determine the true nature of the traffic. Elon Musk recently shared that X was trialling paid registration for all users in selected markets to try to stop fraud and the spread of malicious information on the platform.

  1. Dating sites are rife with fake accounts: bots start conversations with thousands of people. Most people don’t respond, so the bots make it easy to target the vulnerable users of these platforms, who are then passed on to real-life fraudsters: highly manipulative scammers who exploit lonely and vulnerable people, gaining trust over time before a big financial scam.

  1. Privileged account access: once the bots have got through to customer accounts, they are behind the firewall and other security defences and can try to obtain privileged access or admin rights. Once inside the domain, they may be able to exploit weaknesses in the platform from within, as a ‘legitimate’ account. For example, on a poorly architected or older site, it’s relatively easy to use a SQL injection to access other accounts once the attacker understands the account creation convention. Some sites still use sequential account IDs.

  1. Spam and fake reviews: everyone has come across fake reviews and obvious attempts to manipulate consumer opinion. Creating fake accounts is a critical first step, as these accounts can be used to verify identity. For example, it’s now common practice for vendors to ask for social media accounts to verify identity; fake accounts are created to farm exactly these synthetic identities.

  1. Identity theft: hackers use stolen credentials from “fullz” attacks (full packages of a victim’s personal data) to register for a new loan, credit card or other source of cash equivalent. This is the most blatant fraud type, and relies heavily on the acquisition of accurate data on the target victim, usually collected by bots.
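The sequential account ID weakness mentioned in the privileged-access point above is cheap to close. Here is a minimal sketch in Python (sqlite3 is used purely for illustration): random, unguessable IDs stop an attacker enumerating neighbouring accounts, and parameterized queries keep user input out of the SQL text entirely.

```python
import secrets
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, email TEXT)")

# Unguessable, non-sequential account IDs stop enumeration of neighbours.
account_id = secrets.token_urlsafe(16)

# Parameterized query: user input is bound as data, never spliced into SQL.
conn.execute("INSERT INTO accounts (id, email) VALUES (?, ?)",
             (account_id, "user@example.com"))

# A string-built query such as f"... WHERE id = '{user_input}'" is the
# injectable anti-pattern that the placeholder form replaces.
row = conn.execute("SELECT email FROM accounts WHERE id = ?",
                   (account_id,)).fetchone()
```

The same two habits (random IDs, bound parameters) apply regardless of database or framework.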

How do the hackers achieve New Account Fraud?

Bots operate at massive scale and are now inexpensive to run, even at very large volumes. Most people can’t comprehend the scale of the bot traffic. The growth in Bots-as-a-Service (BaaS) platforms means that anyone with zero experience or programming skills can create and deploy bots using millions of residential proxies that are very hard to detect.

For the hackers it's essentially a numbers game. Some of the dating app scandals have seen wealthy widows and older people scammed out of their entire life savings. If the attackers can get a 0.001% return on their bot farms, they are still making a massive profit. The cost of launching attacks is pretty much free and the scale is massive, so the hackers have a good return on investment (ROI) opportunity. The attacks carry on.

Fraud Protection Today

Simple bots are fairly easy to stop. However, the latest generation of bots using Generative AI and sophisticated proxy platforms is making life much harder. Common bots are stopped at the WAF layer, using IP reputation analysis, or signature fingerprints from previous attacks. This means that the vast majority of bots can be easily prevented. The problem is with the custom bots that are targeted at your website or domain. Although they will be a small percentage, these are the ones that cause all the damage, so it’s misleading to look at numbers alone.

Bots routinely bypass CAPTCHA and other puzzles using human CAPTCHA farms.

We can see in the diagram how simple bots using generic scripts are easily detectable. However, as we proceed up the Y axis to highly customisable bots, they become much more human-like and hide the traces of their digital provenance far more effectively. Here we can see advanced bots passing CAPTCHA and mimicking real user mouse trails to avoid detection. Using mobile proxies makes them harder still to spot: blocking a mobile gateway risks blocking hundreds of thousands of legitimate mobile users, and the mobile-farm devices are often real, which means they will pass a simple fingerprint test.
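One concrete behavioural signal in this space is the mouse trail itself. The toy sketch below (the function name and tolerance are our own, and real detection models use far richer telemetry) illustrates the idea: scripted cursors often move in perfectly straight lines, whereas real hands jitter.

```python
def mouse_trail_suspicious(points, tol=1e-6):
    """Flag a trail whose points are perfectly collinear - a common tell of
    scripted cursor movement. Purely illustrative; not a production check."""
    if len(points) < 3:
        return True  # too short to show any human jitter
    (x0, y0), (x1, y1) = points[0], points[1]
    dx, dy = x1 - x0, y1 - y0
    # Cross-product collinearity test against the first segment.
    return all(abs(dx * (y - y0) - dy * (x - x0)) <= tol for x, y in points[2:])
```

A bot replaying a recorded human trail would of course pass this, which is exactly why single signals are never enough.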

Why traditional Bot Protection fails:

❌  Traditional IP reputation services - these fail for all but the most persistent dumb bots, known botnets, or rogue data centers. Blocking by country (e.g. Russia and China) is futile; the bots just rotate to a non-blocked country of origin.

❌  Old-school fingerprint detection - these techniques worked well against most bots, but fail against botnets, mobile and residential proxies using actual devices, or very sophisticated emulation of the fingerprint parameters.

❌  CAPTCHA, annoying puzzles and other challenges - these old-school methods are easily bypassed by human CAPTCHA farms, and in some cases by AI and image recognition. The latest puzzles, which get harder if you fail them, are perhaps the most annoying UX aberration since Microsoft Clippy. However, CAPTCHA will defeat most bots, so it's definitely worth having as part of your overall bot prevention strategy.

✅  Adding 2FA, such as mobile verification, is a significant deterrent. If your new account creation process can sustain the drop in registrations from legitimate signups who can’t be bothered, it is always a wise thing to do. Just bear in mind that hackers using mobile proxies can automate passing the 2FA as well, without even resorting to SMS spoofing or other more advanced techniques. Adding more sophisticated MFA, such as a mobile authenticator app, provides a comprehensive level of account security, and if your user base can support it, it’s an obvious way to go. However, for most B2C plays it’s too much of a user burden: bear in mind that enforcing rigid verification pushes that burden onto every one of your users.
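For reference, an authenticator-app code is just RFC 6238 TOTP: an HMAC over a shared secret and the current 30-second time counter, truncated to a short numeric code. A self-contained Python sketch (a server would compare this against the submitted code, typically allowing a window of ±1 time step):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time counter, dynamically truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Note what the scheme actually proves: possession of the secret. A bot farm that registered the account and enrolled the secret itself passes this check trivially, which is exactly the caveat above.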

✅  Old-school auditing is still one of the most effective ways to discover potentially fraudulent accounts and should definitely be done on a regular basis. However, it is time-consuming, manual, and happens AFTER the accounts are created. Sorting out the fake accounts can be a huge administrative burden.

✅  Monitoring your key login and registration statistics is critical. Detailed information from audit logs can greatly help in finding and tracing anomalous patterns manually. For example, recording last login dates, identifying dormant accounts, or flagging other suspicious cohorts will all help. The hackers won’t know whether you get two new registrations a day or 2,000, so anomalies in new account registrations are likely to stand out. Identifying new account fraud begins with recognising irregularities in account creation patterns: rapid, mass registrations from a single IP address or the use of disposable email addresses are red flags that demand immediate attention.

✅  Effective detection also hinges on monitoring user behaviour post-registration. Often the bots will never log in again; sometimes they log in excessively at random times in the middle of the night. What is likely is that their post-registration behaviour will be markedly different from that of your legitimate new registrations.
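As a minimal illustration of the registration-statistics monitoring described above, a simple z-score check over daily signup counts will catch the crude "two a day suddenly becomes 2,000" spike. The names and threshold here are our own, and a real deployment would use a seasonality-aware baseline rather than a flat mean:

```python
import statistics

def registration_spike(daily_counts, threshold=3.0):
    """Flag the latest day's signup count if it sits more than `threshold`
    standard deviations above the historical mean (simple z-score check)."""
    history, latest = daily_counts[:-1], daily_counts[-1]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero variance
    return (latest - mean) / stdev > threshold
```

Even a check this naive, run daily, turns a silent bot registration wave into an alert the same day rather than a surprise at audit time.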

How Does VerifiedVisitors Protect Against New Account Bot Fraud?


VerifiedVisitors uses AI to build a dynamic cohort risk model by learning from your website traffic. We apply zero trust at the network edge to identify bot traffic BEFORE it has a chance to cause damage.

Cohort Risk Model

On the far left we have the high-risk areas: known automated traffic hitting potentially vulnerable paths such as your user logins and registrations. On the far right we have the Verified Visitors: your legitimate repeat visitors, plus the traffic that is managed or blocked.

Dividing the traffic dynamically into cohorts means that we can treat each potential risk in a different way. This has major benefits. Known risky behaviour is identified and acted on earlier. Known legitimate customers are trusted, but verified on a constant basis.  Meanwhile, as new rules are created, our AI is learning more and more about the nature of your traffic.

✅  Known users we’ve seen before, who have a unique virtual ID, are trusted but constantly verified. This means we don’t inconvenience legitimate users unless their behavioural signature changes significantly. This is where additional 2FA can be applied intelligently: for example, if a user changes their country of login or another major factor, the additional security checks won’t be seen as arbitrary and random, and may actually be welcomed.

✅  Setting a managed good-bot list enables VerifiedVisitors to manage all the good bots and verify that they are genuine. Setting the good-bot list also defines the bots that aren’t on the list: they are either fakes or unwelcome, and global rules can easily be applied accordingly.

✅  Known vulnerable paths and exploits targeted by bot traffic are subject to dynamic rules to block or prevent access.

✅  Finally, we have a cohort of “likely automated” traffic in the grey area. Our AI analysis is not conclusive: for example, the traffic may have failed some behavioural checks and have a slightly defective signature. This cohort can then be subject to direct targeting to determine its true origin. For example, VerifiedVisitors has a challenge page, which gives us a couple of seconds to collect additional telemetry and will often resolve the bot-or-not question very quickly; alternatively, the site can fall back on a CAPTCHA or other validation method. The percentage of “likely automated” traffic varies heavily between sites, and the site owner usually has a very clear idea of what is acceptable. Over time the AI learns from each interaction and takes labelled data from the active verifications, which dramatically reduces the grey zone.
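The cohort split described above can be caricatured as a decision function. This is purely illustrative: the actual VerifiedVisitors model is learned from traffic rather than hand-written, and every signal name below is invented for the sketch.

```python
def assign_cohort(signals):
    """Toy cohort assignment loosely mirroring the model described above.
    All signal names are hypothetical; a real system scores learned features."""
    if signals.get("known_bad_fingerprint") or signals.get("hits_vulnerable_path"):
        return "known-automated"    # dynamic rules: block or manage at the edge
    if signals.get("verified_virtual_id"):
        return "verified-visitor"   # trusted, but continuously re-verified
    if signals.get("failed_behaviour_checks", 0) > 0:
        return "likely-automated"   # challenge page collects extra telemetry
    return "unclassified"
```

The value of the split is that each cohort gets a proportionate response: hard blocking for the left-hand cohorts, zero friction for verified visitors, and active verification only for the grey zone.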

AI Based Adaptive Learning Security

Our AI platform continuously learns and applies adaptive security measures for each cohort. This ensures that any deviations from normal user behaviour trigger immediate responses, thwarting potential fraud attempts before they can do damage.

More Information:

VerifiedVisitors Demo

To see how our AI platform safeguards against new account fraud using a multi-faceted approach, see our demo here:

Account Take-Over (ATO)

To see how our AI platform protects you from Account Take-Over (ATO) attempts, see our article on Account Take-Over here:

Dormant Sleeper Accounts

To see how the VerifiedVisitors AI platform protects dormant sleeper accounts that are reactivated, please see our article here on fake account creation.

Get Protected now and access your free trial!

To see how the VerifiedVisitors AI platform can protect your accounts, please head to our free trial.

Frequently Asked Questions

What is New Account Fraud?

Hackers use stolen personal details, such as name and address, contacts, and even credit card and banking details, to create new accounts, seemingly from a legitimate identity. They then seek to monetise the new accounts in a variety of ways. For example, they may use an account to log in to a retail site and use up bonus points that have significant retail value. Consumer-facing businesses can’t afford to disrupt the user flow too much, and are thus subject to these types of attack.

How can New Account Fraud be Stopped?

VerifiedVisitors has a unique AI platform for classifying visitors according to their risk profile. Combining zero trust at the network edge with strong 2FA and other authentication for just the tiny percentage of suspicious users stops the fraud while letting verified visitors pass with no additional friction.

How prevalent is New Account bot fraud?

New Account bot fraud is on the rise, with a significant increase in automated account creation across social media, dating, gaming, and sports betting sites, as well as financial services / loan specialist sites.

Are there specific challenges for small businesses in combatting bot fraud?

Businesses without a CISO or a fraud team have it tough. VerifiedVisitors offers an AI platform and a Virtual CISO tool that use AI to enforce some of the best practices a full-time dedicated team would implement.