Musings on intelligence: what is real, and what is artificial?

It is clear that we are going to have to deal with artificial intelligence and its various impacts on our industry and wider economy. But to label something as artificial implies that we know what the real thing is. Do we mean human intelligence? And if we do, does that mean crows, chimps and dolphins are not intelligent? Or is intelligence something bigger and more abstract, such as evolved, cultural intelligence? This latter thought hints at a difference between the individual and the collective: an ant or a bee may not clear my hurdle for intelligence, but an ant colony or a bee swarm would. And having decided what the real thing is, can we adequately define it?

The label ‘artificial’ may also send us down an unnecessary dead-end – what if the future real thing is some combination of natural and artificial? Or, what if artificial becomes the real thing? We will let that particular thought drop for now.

As for defining intelligence, I defer to David Krakauer, president of the Santa Fe Institute, who has defined intelligence as “making hard problems easy”. He also defined stupidity as “making easy things hard” (and talks of other categories including genius, ignorance and being wrong). I really like these definitions for a number of reasons: they are short and use short words, they are medium-free (ie indifferent between biological neurons and printed circuits), and they give wide scope for exploration.

I have already suggested above that culture could be a form of intelligence – by laying down behavioural rules, or norms if you prefer, culture can make hard problems easy by showing us how to choose or behave in a given situation. So it seems that there are multiple forms of intelligence, not all of which are obvious under casual observation. What about embodied intelligence (morphological computation)? It is possible to build a purely mechanical machine, based on human geometry (ie pelvis and two legs), that will walk on a treadmill – suggesting that evolution has found a design that can perform a sophisticated function without the need for external computation. Or consider the performance of top athletes, who make hard things look effortless. We could call it skill, or we could call it movement intelligence. Essentially their hours of training can be thought of as creating a set of reflexes that fire with precise timing to achieve the desired result. But movement intelligence may require language intelligence, a conjecture advanced by John Krakauer (David’s brother) of Johns Hopkins University. The top athlete has a coach providing language-based instruction. Conversely, there are no videos of monkeys juggling on YouTube – perhaps the movement intelligence behind juggling can only be learned through language: “this is what you need to do first…”.

Going back to reflexes, they are by definition involuntary. They are too fast for us to be able to think about them. And therefore they can show us the limits of knowledge. Experiments have shown that you can give subjects the necessary knowledge – such as that the handlebars of the bicycle have been reversed, or that the mouse has been adapted to move the cursor up and down when the mouse is moved left and right – but it is of no use to them. Apparently it takes months to retrain bike-riding reflexes. All very interesting, but we should get back on track – and I would like to return to the thought I dropped earlier, about combining natural intelligence (whatever that is) and artificial intelligence (whatever that is).

If you subscribe to the Pablo Picasso school of thought – “computers are useless, they can only give you answers” – then, in effect, you believe in the cognitive outsourcing model. Under this model we give computation problems to a computer on the basis that it can perform them faster, cheaper, and probably more accurately than we can – just like any good outsourcing arrangement. Nothing much of interest is implied by this model. There is no transformative leap in our, human, intelligence, just some solid productivity improvements. This is possibly why artificial intelligence can be viewed as threatening. As the machines advance faster than we can, what if they start to know things that we don’t?

However, an alternative model is available – the cognitive transformation model. This model states that as we internalise new cognitive technologies we change the range of thoughts we can think. So computers, under this model, become a medium for expanding and spreading cognitive technologies. Artificial intelligence then becomes less threatening, as it can be viewed as offering us more powerful cognitive technologies which, in time, we will internalise – giving us more powerful ways of thinking (and allowing us to design more powerful artificial intelligence, and so on). Now that would be real intelligence.

To me, the cognitive transformation model offers optimism and hope – as an alternative to the dark march towards the technological singularity (the point at which machines can design better machines than we can, and therefore take charge) – and therefore I would like to believe it’s true. But hope is not a strategy. And the extraordinary pace of development in artificial intelligence makes understanding intelligence a very practical question. The good news is that there are many bright minds studying intelligence in academia. The bad news, according to David Krakauer, is that stupidity is the single biggest threat to mankind – and no one is studying that.