Biometrics for bots? Orwellian Dystopia
"All technology is in itself morally neutral - these are just powers that can be use for well, or ill." Aldous Huxley, 1958.
For those of you who haven't heard of "World ID", it's an attempt to create a universal human ID based on sophisticated iris scans taken by a device called 'the Orb'. The scans are converted into unique digital identifiers, which in turn are used to secure your blockchain-based cryptocurrency assets via Worldcoin. Orb operators have already conducted 'field trials', scanning eyeballs for cash in various countries across the globe.
If this sounds like someone has read Brave New World and used it as a how-to guide, let's remain neutral for now. First, let's examine who the powers behind it are, what they themselves say about it, and how they propose to use it. Is it for well, or could the unintended consequences be for ill?
Reading through the Worldcoin website, the reasons for the World ID are set out very clearly, although soaked in technobabble:
- Limiting the number of accounts each individual can create to protect against online attacks from multiple pseudonymous identities generated by a single attacker (aka sybil attacks)
- Preventing the dissemination of realistic looking/sounding AI-generated content intended to deceive or spread disinformation at scale
So, to paraphrase, the purpose of the World ID is this:
- Limiting each individual to just one account, or maybe two, to prevent distributed, account-based attacks.
- Preventing AI based content bots from broadcasting widespread 'disinformation', whatever that is.
Let's take "sybil attacks" first. The name 'Sybil attacks' is based on the story of Sybil, a young woman who allegedly created 16 separate personalities. This multiple personality disorder was written up and sensationalised in the eponmyous book "Sybil" as well as TV mini-series by Sybil's psychiatrist, Dr. Cornelia Wilbur, who profited as a result. Dr Wilbur lobbied to have multiple personality disorder included in the DSM (Diagnostic and Statistical Manual) allowing it to be diagnosed, leading to hundreds of thousands of new cases, and revenue streams.
One slight problem with the book: it wasn't true. Dr Wilbur, as part of the 'treatment', had given Sybil intravenous barbiturates, encouraged her fantasies, and even had Sybil read up on multiple personality disorders.
One slight problem with "Sybil bot attacks" - distributed accounts are just another form of account-take-over attack, and there are many different ways of defending against them, even if launched from powerful botnets. It is true that AI/ML detection that relies on behavioural tracking of the majority of legitimate traffic to detect the outliers can be fooled by these types of attack. The curve has effectively been inverted - there is no 'normalized' behavioural data to inference from. However, this essentially is a data labelling issue which is addressable.
The idea of gating all internet access behind a universal log-in should be truly disturbing to everyone. Sybil attacks of this kind are comparatively rare - are we seriously proposing to scan eyeballs around the world to prevent them?
Turning to the second point - preventing AI-based content bots from spreading disinformation. This divides into two issues. First: how do we prevent bots from posting content? The answer is that current bot protection already has many ways to do this, from simple CAPTCHAs to fingerprinting, additional verification, and behavioural analysis. We all know the problems with CAPTCHAs, but advanced edge-of-network detection can block the vast majority of bots. If we want to go further and verify that the human is also the account holder, we can apply various forms of two-factor authentication inside the account or application itself. Stop the bot, and you stop the content.
Next is AI content generation itself. No one has yet developed a reliable way of detecting whether text was generated by ChatGPT. And defining disinformation means claiming to know the truth: no one has developed a reliable way of detecting whether text is true, either. Even when the 'facts' themselves aren't in dispute, the interpretation of facts produces a polyphony of voices. How is the Orb going to prevent a legitimate, verified user from posting ChatGPT content that may or may not be misinformation?
The claim that the Orb is the best way of preventing malicious bots is a fantasy. It's just not true.
So what's the real purpose?
Clearly, linking human verification via a biometric to a cryptocurrency gives the holders of the digital keys huge power. Who are those holders, and where are the keys kept? Worldcoin has been backed by a who's who of notable investors, including Sam Bankman-Fried. The Worldcoin Foundation is based in the Cayman Islands. The Financial Times reports that Worldcoin is close to closing another $100 million in additional funding, at a valuation north of $1 billion.
Verify Constantly, Trust Occasionally.
Photo: open-source Orb diagram