The Turing test isn't won by machines, it's lost by humans

The Turing test is meant to evaluate a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. A human evaluator is asked to converse with two conversational agents - one a human, the other a machine - and determine which is the machine.

Just as many academic tests and datasets aren't representative of the real world, however, neither is the Turing test. With a simple modification, the Turing test goes from impossible to trivial in real-world settings.

In the Turing test, the human evaluator is aware that one of the conversational agents is a machine. With this single statement, we've already lost any resemblance to reality. The Turing test warns the evaluator. The real world doesn't give us this luxury.

Humanity has already failed the Turing test at a fundamental level.

Knowledge that you're being tested ahead of time can be a significant cheat. You wouldn't trust a restaurant if they needed advance notice to pass a health inspection. If the manner in which you act in everyday life is entirely different to what it would be under test conditions, such a test is not reflective of your behaviour in real life.

The real world is composed of pickpockets, not magicians

In the real world, our human evaluator loses two vital defences. First, they're given no indication they're about to be tricked. Second, they aren't granted the time or inclination to fully interrogate the interaction.

We can compare this to the difference between a magician and a pickpocket.

When a magician takes the stage, we're already aware they're going to trick us. Even when we heed that warning, a competent magician can make us believe we've seen the impossible. When a magician makes their assistant float in mid-air, we don't start questioning the laws of physics, we start questioning our powers of observation. Magicians bend reality rather than break it. We know this. As such, we don't change our opinion or logic regarding gravity upon seeing evidence presented by them.

A pickpocket uses a subset of the magician's techniques but with a very different aim. With relatively minor suggestions they can direct your actions and attention in a very specific way, one you'd never intentionally allow given full information. In crafting that fog of illusion they can then readily exaggerate and exploit your vulnerabilities. That fog may hold you for just an instant, long enough to slip a wallet from your bag, or even for years. A snake oil salesman or fraudster is just a pickpocket with a longer gameplan.

From this, we can see exactly how the Turing test falls apart in the real world. The Turing test, like the magician, aims to entertain, stretching our definition of what's possible whilst returning us to reality at the end. Pickpockets, their adversarial counterparts in the real world, exploit your vulnerabilities towards a specific goal, leaving you lost in that fog of illusion.

Side note: If you want to see a magician misdirecting and tricking you whilst explaining how this applies to security, watch the hacker known as "Alex".

The Blind Turing Test

Can a human evaluator tell the difference between a human and a machine when interacting in a standard real-world setting? In such a setup the human evaluator may receive zero warning and may be unable to fully interrogate the agent or the duplicitous material they receive. Even if a previous evaluator put in the time and effort to reveal the truth, that information may not have been relayed to new participants.

Disentangling truth from fiction is not a task humans are well equipped for - or one that we even deem generally necessary. For much of our lives, our defences are down. It doesn't require a sophisticated conversational magic trick to deceive you, it just requires a minor nudge. Thus, we're vulnerable to crafted conversations in the right context, just as we are to pickpockets when we enter their domain.

Note: Whilst I coined the phrase "blind Turing test" for the setup above, if you know of an existing phrase for this or for similar setups, please tell me :)

Misinformation is a virus

Viral content is aptly named. Just as with normal content, misinformation can spread like wildfire on social media. We have seen monstrous imagined stories grow in hours. Some of these have been from simple misunderstandings whilst others have been unsophisticated jabs or clickbait. In the past year we've had no shortage of examples, ranging from political propaganda to exaggerated news headlines.

As soon as misinformation captures the imagination of the crowd and acquires a groundswell of support, you'll see it morph and evolve. This moving target can be impossible to stop. Attempts to fight back against such misinformation are akin to facing down an inferno with a bottle of water.

All of this is a result of normal misunderstanding or traditional malice.

The way in which we react to, share, and process information as it spreads across social media can already be destructive. We adopt positions with little thought or analysis and hold to them with fierce aggression.

What happens as we continue failing the blind Turing test in the real world? What happens when adversarial agents stoke the fire and spread misinformation with a specific underlying purpose? Adversarial agents are perfectly suited to build and maintain a fog of war to keep misinformation aloft.

In computer security, the attack surface is vast and the cost of launching attacks is low. Attacks don't need to be sophisticated to catch victims given how large and variable the ecosystem is. While some within the community may have worked to be resilient and secure, nothing can be done for the vast majority of those who are vulnerable.

Misinformation attacks now have the same vast attack surface. Exploits can be deployed at scale and crafted specifically for vulnerable humans. Misinformation and propaganda can reach far behind enemy lines or through our traditional walls of defence. While these adversarial agents may not be sophisticated, the scale at which they're deployed means they will find vulnerable humans.

As this misinformation spreads, traversing the social graph from bot (adversarial agent) to human, it can become more and more resilient over time. Adversarial agents can provide support and social validation to infected humans. Infected humans can apply more complex (even if flawed) reasoning and logic to support the misinformation they now believe.
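To make that dynamic concrete, here's a toy sketch in Python - not a model of any real network. It simulates a random social graph where a small set of bots never abandon the misinformation, and where humans only shake the belief once no infected neighbour remains to validate it. The graph construction and every parameter are invented purely for illustration.

```python
import random

# Toy SIS-style sketch of misinformation spreading over a random social
# graph. Every parameter and the graph model itself are invented purely
# for illustration -- this is not a claim about any real network.
random.seed(0)

N_HUMANS, N_BOTS = 1000, 50
SPREAD_P = 0.05    # chance an infected neighbour convinces you per step
RECOVER_P = 0.10   # chance an infected human tries to shake the belief
AVG_DEGREE = 8

nodes = list(range(N_HUMANS + N_BOTS))
neigh = {n: set() for n in nodes}
for n in nodes:
    # Wire each node to random partners; bots sit in the same graph.
    for _ in range(AVG_DEGREE // 2):
        m = random.choice(nodes)
        if m != n:
            neigh[n].add(m)
            neigh[m].add(n)

def is_bot(n):
    return n >= N_HUMANS

# Bots start (and stay) "infected" with the misinformation.
infected = set(range(N_HUMANS, N_HUMANS + N_BOTS))

for step in range(51):
    newly_infected, recovered = set(), set()
    for n in nodes:
        if n in infected:
            # Humans only recover when no infected neighbour remains to
            # supply social validation; bots never recover at all.
            if not is_bot(n) and random.random() < RECOVER_P:
                if not any(m in infected for m in neigh[n]):
                    recovered.add(n)
        else:
            # Each infected neighbour is an independent chance of infection.
            for m in neigh[n]:
                if m in infected and random.random() < SPREAD_P:
                    newly_infected.add(n)
                    break
    infected = (infected | newly_infected) - recovered
    if step % 10 == 0:
        humans = sum(1 for n in infected if not is_bot(n))
        print(f"step {step:2d}: {humans} humans hold the misinformation")
```

Even in this crude setup, a persistent minority of bots is enough to keep the belief circulating among humans indefinitely: any human wired to a bot can never fully recover, because the social validation never stops.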

At this stage you're no longer arguing against a bot, you're arguing against a human twisted into aligning themselves with a fictitious team or narrative.

We as a society still trust by default more than we should in the digital domain. We still modify our thinking by trusting sources we shouldn't. We place too much signal in the soft influence and soft logic relayed by "humans", all nudging us quietly and consistently. We still grant trust and authority to the wrong sources and assume there is truth in what we hear repeated at scale.

Misinformation is a virus and we've just passed the asymptomatic period of this digitized disease.


There's no solution, only a sober warning

I have no clean solution for you. I have no readily available way to defend yourself. All I can say is that if you think you aren't vulnerable to this, you're wrong. Everyone in humanity is now part of this blind Turing test. The difficulty of this game is going to increase rapidly. Your opponents are armed with your personal information, your preferences, your affiliations, and can deploy millions of calculations each day to tailor attacks against your psychological vulnerabilities, convincing you to act in their favour. Regardless of your politics, regardless of your profession, you should be asking how you're vulnerable and how you could defend yourself. A fundamental truth we need to confront is that billions of dollars in both infrastructure and information gathering have been invested in systems that can rapidly be turned against our own interests.

Any system or network that has been optimized for advertisements has been implicitly optimized for spreading misinformation.

I personally don't think humans are capable of defending themselves against the more sophisticated and targeted attacks that are likely to come. I'm not entirely sure how to reinforce the defences we have or craft the defences we'll need.

We're well past the era of mass advertising. When the next campaign launches, whether propaganda or product, we'll see a shift in targeting. It'll no longer be targeting a demographic, it'll be targeting just you.

Thanks to Alex Hogue for feedback on the draft and Ross Edwin Thompson for the pickpocket graphic.