Fundamental limits to prediction

(This is post 1 of 5 on the ‘Limits to prediction’ ACtioN (Applied Complexity Network) meeting organised by the Santa Fe Institute (SFI) and hosted by Willis Towers Watson on 9 September 2016.)

David Krakauer, the president of SFI, opened the meeting by speaking about the fundamental limits to prediction. Scientists are getting better at predicting the future, but prediction remains an inherently difficult problem, and there is good reason to believe that we will eventually face some fundamental limits. Shortly before the ACtioN meeting, SFI hosted a workshop that brought together researchers working on the mathematical, algorithmic, and practical aspects of prediction across a wide range of fields, with the aim of understanding these limits.

A classic example of where prediction faces fundamental challenges is chaotic systems. The evolution of a chaotic system is, by definition, very sensitive to its initial conditions. Krakauer used the weather as an example: forecasts beyond a window of just a few days are incredibly difficult (in fact, no better than using a historical average) because trajectories that start very close together diverge dramatically over time. In this case, the exponential divergence of the dynamical system beats the exponential growth of computational power.

Krakauer’s view is that the most fundamental limit to prediction is in fact human imagination. He cited the Dirac equation as a prime example. Dirac’s equation simplified reality but also predicted negative-energy states, which were clearly at odds with the then-current understanding of reality. The subsequent discovery of the positron (the positively charged antiparticle of the electron) validated the equation and changed our understanding of reality.

Krakauer also spoke about the “no free lunch theorem”: because no algorithm is completely assumption-free, there cannot be a universally best algorithm; averaged over all possible problems, no algorithm outperforms any other.
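The sensitivity to initial conditions mentioned above can be sketched with a simple numerical example. The logistic map and the specific starting values below are illustrative choices of mine, not examples from the talk:

```python
# Illustrative sketch: sensitivity to initial conditions in the
# chaotic logistic map x -> r * x * (1 - x) with r = 4.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a tiny perturbation of the start

# The gap between the two trajectories grows roughly exponentially
# until it saturates at order 1 -- two indistinguishable starting
# points end up in completely different states.
for t in (0, 10, 20, 30, 40):
    print(t, abs(a[t] - b[t]))
```

The same qualitative behaviour is what makes weather forecasting beyond a few days so hard: any measurement error in the starting state, however small, is amplified until the forecast is no better than a historical average.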
A specialised algorithm tailored to a specific problem will always be able to beat a general-purpose algorithm on that problem (this is mathematically provable). The implication for investment is that the search for an optimal investment strategy that works in all environments is destined to fail; specialist contextual knowledge about each specific environment is critical to the solution strategy.

Krakauer does not believe big data can solve all the problems associated with prediction. He suggested that the benefit of additional data saturates at a certain point, beyond which progress must rely on better models and better theories. This lends support to TAG’s approach in advancing the complexity framework as a foundation for better theory in the investment world. The complex and reflexive nature of the investment landscape significantly limits the power of purely empirical methods, even with increased range and depth of data-sets. Our view is that big data will have a significant impact only if we can link the step-up in data sources with a step-up in explicit models of reality. If big data is applied to shallow models of reality, we will run into serious data-mining problems and contribute only minor understanding to the field.
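The no-free-lunch point can be made concrete with a toy comparison. The two forecasting rules and the synthetic series below are hypothetical illustrations of mine, not examples from the meeting; the point is only that each rule wins in the environment it is specialised for:

```python
# Illustrative sketch: no single forecasting rule dominates across
# environments (a toy version of the no-free-lunch idea).
def persistence(history):
    # Predict that the next value equals the last one (suits trends).
    return history[-1]

def mean_reversion(history):
    # Predict a return toward the running mean (suits oscillation).
    return sum(history) / len(history)

def mse(forecaster, series, warmup=3):
    # Mean squared one-step-ahead forecast error over the series.
    errs = [(forecaster(series[:t]) - series[t]) ** 2
            for t in range(warmup, len(series))]
    return sum(errs) / len(errs)

trending = [0.1 * t for t in range(20)]        # steady drift upward
oscillating = [(-1) ** t for t in range(20)]   # flips around zero

# Each rule beats the other only in "its" environment.
print(mse(persistence, trending), mse(mean_reversion, trending))
print(mse(persistence, oscillating), mse(mean_reversion, oscillating))
```

Persistence wins on the trending series and loses badly on the oscillating one, and vice versa for mean reversion; neither is best everywhere, which is the practical content of the theorem for strategy selection.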