ARTICLE
5 December 2025

We Need To Protect The Humanity Of Business From AI

Rob Hornby
AlixPartners
Contributor

AlixPartners is a results-driven global consulting firm that specializes in helping businesses successfully address their most complex and critical challenges.

We are told that the rise of an unmatched AI superintelligence is now inevitable. It will bring about either a utopia or a dystopia, and all we can do is watch the unfolding events. I profoundly disagree and believe that business has a decisive role to play in shaping the post-AI future.

These deterministic narratives emerge from a heady mix of radical tech culture and a string of "isms", including rationalism, determinism, utilitarianism1, materialism, transhumanism, cosmism2, and brainism3 – not to mention commercial interest. This worldview relies on a faith commitment that AI will bridge the feasibility gap between current reality and a point of AI hyper-acceleration and convergence called the technology singularity.

I will refer to this fusion of disparate perspectives as another "ism": AI futurism, because it represents a sophisticated worldview, not just a casual opinion. Despite its fringe origins, this viewpoint has now become a widely accepted assumption in public discussions of AI.

AI 2027

A fascinating illustration of AI futurism, as well as its alternative, is presented by the AI 2027 scenario-based report from several leading AI forecasting researchers, published in April of this year. It outlines two possible trajectories: (1) a "transformative" scenario, in which artificial general intelligence (AGI) drives extraordinary productivity and scientific breakthroughs but also severe social and economic disruption; and (2) a "managed" or "slowdown" scenario, where improved governance and safety coordination temper the pace of development and mitigate the associated risks.

According to the report, we may soon witness the emergence of "Agent-1" or the "Superhuman Coder", marking a significant step change in agentic capability that is particularly good at accelerating AI research. In this scenario, rapid acceleration of algorithmic progress brings AGI at the end of 2027 – hence the report's title.4 Based on its compute growth assumptions, current frontier models should be one or more orders of magnitude larger than GPT-4, and on a trajectory to reach roughly 1,000 times that scale by 2027.

Instead, we have seen only moderate technical gains from leading AI labs this year, summed up by the underwhelming reception of GPT-5. Progress in agentic technologies has been consequential, but very few businesses have released anything sophisticated into the wild. In fact, firms continue to face difficulties advancing AI initiatives beyond the pilot stage more generally ("pilot purgatory"), and concerns have mounted over a potential AI sector bubble, with some recent market corrections.

Nonetheless, substantial foundational investments in compute infrastructure are underway (with the leading players mostly making pledges to one another), contributing to gradual decreases in inference costs and positioning the sector for long-term scalability.5 Consistent with the report's expectations, AI-related geopolitical tensions have also intensified, particularly around access to the most advanced chips.

On 22nd November, a few weeks after I had completed the first draft of this article, a note was added to the AI 2027 website to clarify that AGI by 2027 was always a modal forecast and not the authors' medians, which were longer; all chronological predictions have now been updated to later years. This is technical forecasting speak for, "you never understood our timescales, and they were wrong anyway, so we are making another attempt". However, I appreciate their willingness to update the AI 2027 project and will not criticise anyone for sincere and informed attempts to bring clarity amidst such widespread uncertainty.

Having lived at the messy intersection of technology and reality for so many years, I am not surprised that some aspects of the original forecast are progressing more slowly, which buys us time. However, the revisions do not necessarily mean that the ideas underpinning AI 2027 are wrong, even if they are deliberately hyperbolic. Most importantly, I believe the authors are right in identifying society as the most significant lever in deciding ultimate outcomes.

We have been here before

The Enlightenment of the seventeenth and eighteenth centuries ushered in the so-called "Age of Reason". Newton reimagined the universe as a rational and predictable machine, while Descartes separated mind and matter (dualism), opening the way for a similarly mechanistic view of humans. History itself came to be seen as an unstoppable march towards progress.

However, the Romantic movement arose to challenge this view, insisting that human beings are not biological automata but emotional, embodied, and desiring creatures. Figures like Wordsworth, Coleridge, and Beethoven reaffirmed the importance of imagination, beauty, nature, and the transcendent. Their legacy has endured.

After the Second World War, the technocratic vision of progress, momentarily shattered by the atomic devastation of Hiroshima and Nagasaki, intensified again under the pressures of the Cold War. Emerging fields like cybernetics, systems theory, and, yes, early artificial intelligence promised control and certainty, recasting humanity's future once again in technological terms.

Yet, in reaction, the 1960s counterculture emerged, embracing mysticism, psychedelic exploration, ecological awareness, and struggles for civil and human rights. In parallel, postmodern thinkers began to question the "grand narratives" of modernity, replacing faith in scientific progress with a search for meaning, identity, and interpretation.

People do not like to be handed their fate

History tells us that humans have typically found ways to avoid being boxed in by totalitarian ideologies in which reason, science, and technology alone are supposed to dictate their destiny. Given that AI futurism is firmly rooted in this tradition, can we see any signs of a similar rebellion? Although there is no cohesive movement in evidence, there is a rebellion of thought – and that is usually a precursor to action.6

Firstly, meta-modernism is on the rise, a philosophical idea that oscillates between modernist rationalism and postmodernist deconstruction. It provides a path to embracing complexity and contradiction without forcing everything into a neat single theory. Think Buffy and Ted Lasso for how this translates into media. The crucial point is that meta-modernism refuses to swallow ideologies whole. Instead, it treats them as human myths to be explored sincerely, but also with irony and caution. This is a healthy and deflationary approach to AI.

We have also witnessed the resurgence of Aristotle's pre-modern virtue ethics after it spent 350 years in the intellectual wilderness.7 Shannon Vallor argues for a "technomoral" future shaped not just by ethical trade-offs but also by embedded moral virtues and human flourishing. Nigel Shadbolt and Roger Hampson apply these ideals to AI behaviour itself in "As If Human". Virtue ethics challenges AI futurism on the grounds that wisdom is more valuable than knowledge, and good character is more desirable than superintelligence.

A third challenge comes from 4E cognition – the view that real intelligence is embodied, embedded, enactive, and extended. This collection of theories proposes that human thought emerges through our bodies, environments, and relationships, and not in isolation or abstraction. Today's AI systems mimic cognitive reasoning but do not participate in the world they model. Until they do, 4E exponents argue, talk of human-level understanding remains premature.

What has this got to do with business?

Although not intuitive, these counter-theories resonate with modern business culture. Most companies are very human entities. Like meta-modernism, they are rational but pragmatic, and rarely ideological; in common with virtue ethics, they usually establish shared values to guide behaviour; and, in the spirit of 4E cognition, they cater for people in holistic ways by promoting wellbeing, putting art on the walls, creating social spaces, and celebrating success. The definition of a good company now encompasses human-centric attributes that extend well beyond the functional and analytical.

This is a far cry from the cold algorithmic vision of business in AI futurism, in which human labour is entirely replaced. Utopians will argue we can all enjoy these human pursuits at our leisure (literally). But who will oversee our businesses? Corporate leadership is about outcomes and results, but it also relies on trust, morality, judgment, and sometimes sacrifice. Since AI cannot emulate these qualities, what does a business devoid of distinctive humanity look like? Certainly nothing that I would want to exist.

So, can we avoid what some tell us is inevitable? The AI 2027 report addresses that question in its second "managed" scenario. To achieve it, the authors recommend scaling up alignment research, increasing global governance, developing monitoring and safety technologies, mitigating competitive dynamics between labs and nations, and investing in societal preparedness and moral reflection.

I do not hold out much hope for global governance or slowing down the AI race in the current geopolitical context. However, research on alignment, automated monitoring, local governance, and societal adaptation has more potential.

Translating these ideas onto an explicit business platform for like-minded corporate leaders, I suggest we:

  1. Realise business has considerable agency as AI customers, employers, contributors to the economy, stakeholders in social stability, and entities with existing brands and media reach.
  2. Think through what kind of post-AI future we want and begin to articulate it clearly. This must embrace efficiency gains and respect shareholder interests, but can also follow an augmentative path in which work is designed in such a way that humans can flourish.8
  3. Engage with think-tanks, business schools, and umbrella organisations to develop research-led (rather than ideology-led) paths to a post-AI future.
  4. Use our leverage to call for measurable progress on AI safety, ethics, and sustainability on the part of providers. These goals should be made a key factor in commercial decisions about AI partnerships.
  5. Challenge hubristic pronouncements on all-powerful AI when we are struggling to move so many initiatives out of pilot and into scaled use.
  6. Argue for proportionate, expertly crafted, efficient and localised regulation that does not weigh companies down with ineffective bureaucracy.
  7. Protect all the human factors that distinguish the very best companies from the rest.

Conclusion

If the AI 2027 authors are even approximately correct, then I believe business leaders need to advocate for the "managed" scenario, avoiding the absolutist vision of AI futurism. This "slowdown" option is reminiscent of the protopia proposed by Kevin Kelly of Wired magazine as far back as 2014. He envisioned an incremental, messy, yet realistic transition to a better, technology-enabled future. That is what we have always achieved, and I believe we can and should do it again.

It is time for business leaders to start shaping the AI future we want.

Footnotes

1 Of the effective altruism variety.

2 The view that humans will unite with a universe evolving toward higher intelligence and self-awareness.

3 The perspective that the mind, thought, consciousness, and personhood can be fully explained by brain activity alone.

4 AGI is left undefined in the document.

5 Although there are growing concerns about the limitations of scaling as a way of improving LLMs.

6 Neither Romanticism nor Postmodernism was actually very cohesive even at their peaks.

7 The revival of virtue ethics began in the 1950s but has been applied to technology only recently.

8 I am in the minority who believe we may see a net gain in jobs overall, just as we did in the Industrial and Internet Revolutions. However, so far, neither job displacement nor creation has reached the expected level.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
