Measuring Risk From Bots

What exactly is a "safe space"? It's a term you hear thrown about all over the place, in the workplace and at colleges. We all know what a safe space should be. But is it? How can we measure that risk and ensure it's really safe? How can we keep measuring it as the threats change? Maybe it was a safe space once, but is it safe now, for me, at 3:00 pm today at low tide?

So how do we evaluate the risk to your business from automated traffic?

At VerifiedVisitors, we're constantly looking at ways not only to identify risk, but to mitigate it dynamically and automatically, in an open and demonstrable way, to protect your endpoints.

That means there is no concept of a 'safe space'.

We start from the premise that on the Internet you can't trust anything or anyone. Instead, we verify. Constantly. But how do we do this, and how do we show the verification?

The famous rhyming Russian proverb doveryay, no proveryay ("trust, but verify"), popularised in the West by President Reagan and repeated ad nauseam during the nuclear arms reduction talks, worked in the enormous Soviet bureaucracy simply because humans need some element of trust to co-operate and function. Responsible people with integrity, allowed to get on with things and verified over time: that seems like a good model for a society to follow.

It turns out it's a good model for machine learning to follow as well.

Verify Constantly, Trust Occasionally.

At VerifiedVisitors we constantly verify all the traffic hitting your endpoints. Our detectors use machine learning to assess these threats according to the actual behaviour of each visit, as shown below.

VerifiedVisitors Risk Dashboard
Bot Risk Surface Area and Behavioural Analysis
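To make the idea concrete, here is a minimal sketch of behaviour-based risk scoring over per-visit features. The feature names, toy training data, and model choice are illustrative assumptions, not VerifiedVisitors' actual detectors.

```python
# Illustrative sketch: score a visit's behaviour with a small classifier.
# Feature names, toy data, and thresholds are assumptions for illustration.
from sklearn.ensemble import GradientBoostingClassifier

# Each visit summarised as:
# [requests_per_minute, unique_paths, avg_think_time_ms, headless_signals]
toy_visits = [
    [300.0, 180, 5.0, 3],    # fast, broad, no think time: bot-like
    [220.0, 90, 12.0, 2],    # bot-like
    [4.0, 6, 2400.0, 0],     # slow, narrow, human-like
    [7.0, 11, 1800.0, 0],    # human-like
]
toy_labels = [1, 1, 0, 0]    # 1 = automated, 0 = human

model = GradientBoostingClassifier().fit(toy_visits, toy_labels)

def risk_score(visit_features: list[float]) -> float:
    """Probability that this visit is automated, between 0 and 1."""
    return model.predict_proba([visit_features])[0][1]

print(risk_score([250.0, 150, 8.0, 2]))   # high score: likely a bot
print(risk_score([5.0, 8, 2000.0, 0]))    # low score: likely human
```

In practice a real detector would use far richer behavioural features and continuous retraining, but the shape of the problem is the same: turn each visit into features, then score it.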

We then display the threats by risk area, across hundreds of endpoints, so you can identify and visually verify the risk types. As the machine learning sees more traffic over time, it learns from that traffic, accepts labelled data from our customers who flag key paths and known vulnerabilities, and applies rules. It learns and gets better. It starts to trust, based on the repeated verifications.
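A hedged illustration of how customer-flagged paths and explicit rules could sit alongside a learned behaviour score; the function, names, and threshold here are hypothetical, not our actual rule engine.

```python
# Hypothetical sketch: explicit customer rules take precedence,
# then the behavioural score decides.
def classify_visit(path: str, user_agent: str, behaviour_score: float,
                   flagged_paths: set[str], allowed_bots: set[str]) -> str:
    if path in flagged_paths:          # customer-flagged key path / known vulnerability
        return "challenge"
    if user_agent in allowed_bots:     # customer-verified good bot
        return "allow"
    if behaviour_score > 0.8:          # illustrative learned threshold
        return "block"
    return "allow"

# Example usage with made-up values
print(classify_visit("/login", "curl/8.0", 0.92,
                     flagged_paths={"/admin"}, allowed_bots={"Googlebot"}))
```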

The graphs above show the entire risk surface area of all visitors across all endpoints. The areas of high risk are clearly visible, colour coded and displayed. Where we see repeat visitors that are proven to be human, or known bots that are allowed and verified, they are displayed in green as 'trusted'. They are still verified dynamically, but they can now safely go into the trusted bucket for this particular visit.

This means we can learn to trust our verified visitors over time, based on their past behaviour.
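As a small sketch of "Verify Constantly, Trust Occasionally": a visitor only enters the trusted bucket after repeated successful verifications, any failure resets that history, and trust is only granted for the current visit. The threshold and structure are illustrative assumptions.

```python
# Illustrative sketch: trust is earned from repeated verifications,
# lost on any failure, and re-checked on every visit.
from collections import defaultdict

TRUST_THRESHOLD = 5  # assumed number of consecutive clean, verified visits

verified_history: dict[str, int] = defaultdict(int)

def record_verification(visitor_id: str, passed: bool) -> None:
    if passed:
        verified_history[visitor_id] += 1
    else:
        verified_history[visitor_id] = 0   # any failed check resets trust

def is_trusted_this_visit(visitor_id: str) -> bool:
    # Trust reflects past behaviour, but only applies to this visit.
    return verified_history[visitor_id] >= TRUST_THRESHOLD
```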

Clicking on the chart icon flips the view so you can see the detailed traffic chart for each threat type over time. This can be really useful when looking at trends, traffic patterns or particular behaviours.

Verify Constantly, Trust Occasionally.

Photo by jonathan romain on Unsplash
