How does zero trust apply at the network edge?

Zero-trust frameworks are very helpful in addressing a fatal flaw in security thinking: the presumption of a 'safe space' protected by a magical perimeter, or unicorn shield.

Focusing on continuous verification, access controls and breach assumption is a much better model for security.

What does this mean at the network edge? How can you possibly verify traffic continuously, apply access controls, and assume breach for traffic that hasn't even logged in? Even if you could, wouldn't it just slow your entire site down?

At VerifiedVisitors, we're constantly looking at ways of continually verifying visitor traffic.

The famous rhyming Russian proverb doveryay, no proveryay ("trust, but verify"), popularised in the West by President Reagan and repeated ad nauseam during the nuclear arms reduction talks, worked in the enormous Soviet bureaucracy simply because humans need some element of trust to co-operate and function. Responsible people with integrity, allowed to get on with things and verified over time, seem like a good model for society to follow.

It turns out it's also a good model for machine learning to follow.

Verify Constantly, Trust Occasionally.

At VerifiedVisitors we constantly verify all the traffic hitting your endpoints. Our detectors use machine learning to assess these threats according to the actual behaviour of each visit, as shown below.

VerifiedVisitors Risk Dashboard

We then display the threats by risk area, across hundreds of endpoints, so you can identify and visually verify the risk types. As the machine learning sees more traffic over time, it learns from that traffic, accepts labelled data from our customers who flag key paths and known vulnerabilities, and applies rules. It learns and gets better. It starts to trust, based on repeated verifications.
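To make the idea concrete, here is a minimal, hypothetical sketch of how explicit customer rules on flagged paths might sit alongside a learned behavioural risk score. The path names, rule labels and thresholds are all illustrative assumptions, not the actual VerifiedVisitors implementation.

```python
# Hypothetical sketch: customer-labelled paths carry explicit rules,
# and the learned model's risk score decides everything else.
# All names and thresholds below are illustrative assumptions.

path_rules = {
    "/login": "always-challenge",   # customer-flagged key path
    "/api/export": "block-bots",    # customer-flagged known vulnerability
}

def classify(path: str, model_risk: float) -> str:
    """Apply explicit customer rules first, then fall back to the model score."""
    rule = path_rules.get(path)
    if rule == "always-challenge":
        return "challenge"
    if rule == "block-bots" and model_risk >= 0.5:
        return "block"
    # No rule fired: rely on the learned behavioural score alone.
    return "block" if model_risk >= 0.9 else "allow"

decisions = [
    classify("/login", 0.1),       # rule wins despite a low model score
    classify("/api/export", 0.7),  # flagged path plus elevated risk
    classify("/blog", 0.2),        # ordinary path, model decides
]
```

The design choice this illustrates: labelled data and rules act as hard overrides on sensitive paths, while the model handles the long tail of ordinary traffic.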

In the graphs above, we show the entire risk surface area of all visitors across all endpoints. The areas of high risk are clearly visible, colour-coded and displayed. Where we see repeat visitors that are proven to be human, or known bots that are allowed and verified, they are displayed in green as 'trusted'. They are still verified dynamically, but can now safely go into the trusted bucket for this particular visit.

This means we can learn to trust our verified visitors over time, based on their past behaviour.
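One way to picture "Verify Constantly, Trust Occasionally" is a trust score that only accumulates through repeated successful verifications, and that never exempts a visit from being verified. This is a hedged sketch under our own assumptions (the class name, thresholds and update amounts are invented for illustration), not the actual scoring used by VerifiedVisitors.

```python
from dataclasses import dataclass

# Illustrative constants -- not real VerifiedVisitors parameters.
TRUST_THRESHOLD = 0.8   # score needed to land in the 'trusted' bucket
GAIN = 0.2              # trust gained per successful verification
PENALTY = 0.5           # trust lost on a failed verification

@dataclass
class Visitor:
    visitor_id: str
    trust: float = 0.0  # everyone starts untrusted: verify first, trust later

def record_verification(v: Visitor, passed: bool) -> str:
    """Update trust after a verification and return the bucket for THIS visit."""
    if passed:
        v.trust = min(1.0, v.trust + GAIN)
    else:
        v.trust = max(0.0, v.trust - PENALTY)
    # Trust is advisory only: the visit was verified either way.
    return "trusted" if v.trust >= TRUST_THRESHOLD else "unverified"

v = Visitor("repeat-human-123")
buckets = [record_verification(v, passed=True) for _ in range(5)]
```

Note the asymmetry: trust builds slowly over repeated verified visits but drops sharply on a single failure, which is the "occasionally" in the slogan.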

Clicking on the chart icon flips the view, so you see the detailed traffic chart for each threat type over time. This is really useful for spotting trends, traffic patterns or particular behaviour.

Verify Constantly, Trust Occasionally.

Photo by Brett Jordan on Unsplash
