Humans will not become obsolete amid the rise of the machines

If your investment portfolio has high exposure to geopolitical risks you probably followed closely the Trump-Kim summit that took place in Singapore in June 2018.

Perhaps you searched for similar historical events (e.g. the Iran denuclearisation deal or even the Nixon-Mao meeting in 1972) and their impact on asset returns. Until recently, it would likely have taken one of your highly paid research analysts quite a few days to gather and process the relevant information.

Today, it can be done by a smart machine in just a few minutes. It takes even that long only because you still need to pick from a series of drop-down menus to specify your needs. The astonishing recent advances in machine capabilities are described in a recent New York Times article.

The advantage of machines over humans in processing tasks such as this one is obvious. Compared with the advancement of computational power, the evolution of human intelligence is an extremely slow process, so the gap is only going to widen.

If the machines are that good and continue to get better, are humans still going to be needed in the institutional investment decision-making process?

The answer is a resounding yes. Let me unpack why this is the case.

Humans and machines actually have complementary strengths.

Humans are constrained by biological limits. We have limited memory. We get tired easily. It takes physical effort for us to compute. As Daniel Kahneman pointed out in his book “Thinking, Fast and Slow”, we can easily compute 5+8 while walking but try, say, 56 x 7 next time. You will probably stop walking. Machines don’t have these problems.

While both humans and machines can be biased (if the input, or the people who wrote the algorithm, are biased), one of the key strengths of machines over humans is that they make perfectly consistent decisions when given identical input. Humans, on the other hand, make inconsistent decisions even from the same set of inputs, driven by both internal factors (our mood and emotional state) and external ones (distracting information irrelevant to the decision, such as the weather).

On the other hand, machines can’t develop common sense (at least not yet). They don’t have contextual knowledge about which problems require solving. They can’t think outside the box. They are better at discovering correlations than at identifying causality. They have narrow intelligence as opposed to humans’ general intelligence. A Go-playing algorithm, no matter how good it is, is useless at driving a car.

So where does this leave institutional investment decision making?

Investment is basically about understanding and dealing with an unknown future. As in the geopolitical risk assessment exercise with which this post begins, understanding the future in practice normally starts from digging into the past in order to uncover similar patterns.

This is where machines really excel. Machine learning algorithms are capable of gathering data and recognising patterns on their own, painting a much more complete picture of the past. If there is a story in the past data, machines will find it for you quickly and cost-effectively.
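As a toy illustration of this pattern-matching idea (using entirely hypothetical return data and a deliberately simple distance measure; real systems would draw on far richer features such as news text and macro data), a machine could scan a past return series for windows that resemble a recent one:

```python
# Toy sketch: find historical return windows that "rhyme" with a recent one.
# The data below is hypothetical and the Euclidean distance is a stand-in
# for whatever similarity measure a real system would learn.

def distance(a, b):
    """Euclidean distance between two equal-length return windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def most_similar(history, window, top_n=2):
    """Slide over `history` and return the start indices of the
    `top_n` past windows closest to `window`."""
    k = len(window)
    scored = [
        (distance(history[i:i + k], window), i)
        for i in range(len(history) - k + 1)
    ]
    scored.sort()
    return [i for _, i in scored[:top_n]]

# Hypothetical daily returns (%) around past "summit-like" events.
past_returns = [0.1, -0.4, 0.9, 0.8, -0.2, 0.1, 1.0, 0.7, -0.3, 0.2]
recent = [0.9, 0.8, -0.2]  # the recent pattern we want to match

print(most_similar(past_returns, recent))  # → [2, 6]
```

The two indices returned point at the historical stretches most like the recent one, which is all the machine can offer: candidate "rhymes", not a verdict on whether they will repeat.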

If only the past were a reliable predictor of the future.

To make the leap from past to future, three questions need to be asked, and this is where the role of humans comes into play:

  1. Is the environment stable enough so the future will resemble the past?
  2. Do we understand the causal factors of the forces in play?
  3. Will our prediction of the future affect the future itself (the technical term here is reflexivity)?


With the “rhymes” discovered by machines, humans are left with an arguably even harder question: to what extent does history repeat itself?

Most of the time there are no easy answers to these questions. But another key strength we enjoy as humans is that we are capable of recognising the limits of our own intelligence. We can come up with sensible strategies even in the absence of complete knowledge about the future. That’s why we diversify, for example.
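The logic behind diversification can be shown with a back-of-the-envelope calculation. Under the strong simplifying assumption of uncorrelated assets with equal volatility (a textbook idealisation, not a description of real markets), the variance of an equal-weight portfolio of n assets is the single-asset variance divided by n:

```python
# Minimal sketch of why diversification works, assuming uncorrelated
# assets with equal volatility -- a deliberate idealisation.

def portfolio_variance(sigma, n):
    """Variance of an equal-weight portfolio of n uncorrelated assets,
    each with standard deviation `sigma`."""
    weight = 1.0 / n
    # Cross-terms vanish because the assets are uncorrelated,
    # leaving n * weight^2 * sigma^2 = sigma^2 / n.
    return n * (weight ** 2) * sigma ** 2

single = portfolio_variance(0.2, 1)   # one asset:  0.04
spread = portfolio_variance(0.2, 10)  # ten assets: 0.004

print(single, spread)
```

Spreading the same capital over ten such assets cuts the portfolio variance tenfold: a sensible strategy that requires no forecast of which asset will do well.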

Circling back to the debate between humans and machines, I envisage a human-machine partnership that is more powerful and effective than either humans or machines alone. The discussion shouldn’t be about humans versus machines. It should be about achieving synergy between the two types of intelligence. The concept of collective intelligence does not need to remain within the boundary of human intelligence.

How should we split our roles in this partnership?

It looks like Pablo Picasso had already given us an answer decades ago: “They (machines) are useless. They can only give you answers.”