AI’s transformative moment in investing: promise, pressure, and the reality gap

Artificial Intelligence (AI) is no longer an abstract future concept; it is already influencing how investment organisations operate. Yet, as highlighted in the Thinking Ahead Institute’s member forum discussion last year, the conversation around AI is characterised by a growing tension: while AI is positioned as a transformative force, its practical use across the industry still lags behind the rhetoric. Participants acknowledged AI’s potential, but also the uncertainty, constraints, and risks that make its adoption far more complex than the hype suggests.

AI is a big and often confusing topic. It is easy to get lost in jargon and hype, so it helps to focus on a few simple realities behind the buzz.

Despite the name, AI is not artificial in its impact – it is real and already having real effects across industries and society. It is everywhere, increasingly feeling like the axis around which much of the world is turning. While AI is impressive and powerful, it is not truly intelligent in the human sense. It can analyse, synthesise, and predict, but it cannot judge, contextualise, or prioritise based on values. What it offers is computational power; what it lacks is discernment – something that requires critical thinking, cost–benefit assessment, and a deep understanding of consequences.

Many organisations now feel an implicit pressure to adopt AI. Choosing not to engage risks falling behind peers or appearing out of step. While the sense of urgency may be more emotional than evidential, it is nonetheless real. The accelerating pace of change amplifies this feeling, even when the immediate business case remains uncertain.

However, at the discussion forum, there was realism about the gap between how much AI is talked about and how much is actually implemented. The conversation around AI often runs ahead of reality, creating hype that does not always match lived experience. Still, most participants remained cautiously optimistic. Polling showed a general belief that AI will deliver a net positive impact over time, provided it is used thoughtfully and with clear human responsibility.

Beneath the optimism, there are also some meaningful concerns, several of which merit deeper attention.

Key challenges highlighted by the forum participants were:
  • Potential uncertainty in decisions – AI can analyse huge datasets, but without proper human oversight it can also introduce new risks, including misplaced confidence and hidden biases.
  • Ethical and regulatory concerns – issues such as transparency, accountability, and data privacy are becoming more complex. Regulators are struggling to keep up with rapid innovation.
  • Cybersecurity threats – AI strengthens both defences and the sophistication of cyberattacks, so firms need stronger cyber protections.
  • Impact on skills and talent – some fear AI may weaken core learning and analytical ability, especially among younger professionals, and reduce early-career development opportunities.
  • Wider societal risks – concerns include misinformation, environmental impact, and the growing influence of large tech firms.

One of the strongest messages from the discussion was the role of humans in a future shaped by increasingly capable AI. While many agreed AI should act as a support tool rather than a replacement, participants also acknowledged the difficulty of maintaining meaningful human oversight as systems become more powerful. This introduces a fundamental paradox: if AI becomes better at discerning patterns in complexity than humans, how do humans remain both responsible and relevant?

Exploring this tension further, the discussion recognised that human judgement will matter most precisely where AI is strongest – high‑dimensional, data‑dense environments. Humans will need to shift from being the primary analysts to being the evaluators of machine‑generated insights. This is a far more demanding role, requiring new skills, new governance models, and a greater focus on interpretability and resilience. Retaining human authority in the decision loop will not be easy; it will require deliberate design, not just aspiration.

Amid the uncertainty, one conclusion stood out: AI offers real opportunities for productivity, efficiency, and insight, but adopting it responsibly demands more than technological investment. It requires a clear articulation of ethical boundaries, a culture that values judgement over speed, and a governance framework that protects against over‑delegation to systems that cannot understand consequences.

The most important takeaway from the forum may be that AI’s transformational potential will only be realised if organisations treat it as a tool, not a replacement for human judgement. Doing this well will require ongoing dialogue, thoughtful experimentation, and a willingness to slow down and reflect even as technology accelerates.

The conversation has begun – and its aim is to shape a future where humans remain firmly in the loop, not by default, but by design.

Anastassia Johnson is a researcher at the Thinking Ahead Institute at WTW.