
Where AI Stops Being a Tool Story

Matt Hogan | April 13, 2026

Team reviewing data on a tablet with overlaid code and system visuals, representing human oversight of AI-driven workflows and autonomous systems.


If the first pillar is about helping people work more effectively, and the second is about redesigning workflows so software can carry more of the operational burden, the third pillar is where the discussion changes again.

This is the point at which AI stops being mainly a story about assistance and starts becoming a story about systems.

That shift matters because it is where many of the more dramatic claims about AI tend to be aimed. It is where people begin talking about digital labor, autonomous execution, self-managing systems, and organizations that can scale output without scaling headcount in the familiar ways. Some of that language is inflated. Some of it is merely early. But underneath it there is a real strategic question: what happens when software no longer just helps people do work, and no longer just supports workflows, but begins to execute bounded outcomes under human oversight?

That, to me, is what the third pillar is really about.

 

When AI Becomes a Systems Question, Not a Tool

That is also why the third pillar should be approached with more seriousness than enthusiasm.

The easiest mistake at this stage is to jump from visible progress in assistance and workflow automation to the assumption that autonomy is simply the next obvious step. It is not. Or at least, not in the casual way it is often discussed. Autonomy changes the design problem. It changes the governance problem. And it changes the economic logic of how the organization creates value.

At the assistant stage, the question is whether an individual can work faster. At the workflow stage, the question is whether work can move more coherently through a system. At the autonomy stage, the question becomes whether the system itself can be trusted to carry out meaningful portions of an outcome without requiring a person to manually drive every step.

That is a much higher bar.

It is also why I think the third pillar has to be understood in two related but distinct ways. The first is internal autonomy: systems inside the company that can monitor, decide, act, and escalate within bounded domains. The second is external product strategy: the degree to which AI changes what the company offers, how that offer is differentiated, and where future value creation comes from.

Those two directions are connected, but they are not identical. And I think a lot of the confusion around AI strategy comes from collapsing them into a single idea.

Autonomy Changes the Design, Governance, and Economic Model

Internally, the attraction of autonomy is fairly clear. If the first two pillars are executed well, there will already be pieces of the company in which software is assisting people and supporting workflows. The next natural step is to ask whether some bounded outcomes can be executed with less direct human involvement.

In engineering, that may eventually include parts of failure diagnosis, regression handling, remediation suggestions, or infrastructure response. In support, it may mean systems that can triage, retrieve context, generate a proposed resolution, and only escalate when confidence is low or risk is high. In go-to-market functions, it may mean systems that assemble intelligence, monitor signals, generate recommendations, and trigger the next step under defined rules.

None of that is science fiction. But none of it should be confused with full replacement of human decision-making either.
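To make the support example concrete, the core pattern is a simple routing rule: act autonomously only when confidence is high and risk is low, and escalate everything else. Here is a minimal Python sketch of that pattern; the thresholds, the ticket shape, and the tag names are hypothetical illustrations, not an existing Liferaft system.

```python
# A minimal sketch of the "act or escalate" pattern described above.
# Everything here is a hypothetical illustration: the thresholds, the
# Proposal shape, and the tag names are assumptions, not a real API.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85                          # below this, a human takes over
HIGH_RISK_TAGS = {"billing_dispute", "security", "legal"}

@dataclass
class Proposal:
    ticket_id: str
    resolution: str
    confidence: float                            # model-reported confidence in [0, 1]
    tags: set[str]

def route(p: Proposal) -> str:
    """Execute autonomously only inside the bounded, low-risk region."""
    if p.tags & HIGH_RISK_TAGS:
        return f"escalate:{p.ticket_id}"         # risk is high
    if p.confidence < CONFIDENCE_FLOOR:
        return f"escalate:{p.ticket_id}"         # confidence is low
    return f"auto-resolve:{p.ticket_id}"         # bounded autonomous action

print(route(Proposal("T-101", "Reset sync token", 0.93, {"sync"})))
print(route(Proposal("T-102", "Refund duplicate charge", 0.97, {"billing_dispute"})))
```

The interesting design decision is not the rule itself but where the boundary sits, and who is allowed to move it.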

McKinsey’s recent work on what it calls the “agentic organization” is useful here, not because its language should be adopted wholesale, but because it captures the scale of the shift accurately. Its argument is that we are moving toward a model in which humans and AI agents increasingly work side by side in the execution of business processes, and that the companies that benefit most will be those that redesign roles, workflows, and structures around that reality rather than treating AI as an overlay on the old model. That is a useful provocation, but it also implies something important: autonomy is not merely a technical capability. It is an organizational design choice.

That is one reason I do not think the third pillar should be treated as an immediate rollout plan. It is better understood as a horizon that should shape the architecture of what we build now.

If that sounds conservative, I think it is actually the opposite. The fastest way to make autonomy strategically irrelevant is to approach it as a sequence of disconnected experiments. The more ambitious move is to build the first two pillars in a way that preserves the option of the third. That means cleaner interfaces. Better workflow observability. Stronger evaluation. Better identity and authorization models. Clearer intervention points. More reliable handling of exceptions. These things are not always exciting to talk about, but they are what determine whether autonomy becomes economically useful or remains a collection of impressive demos.
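As one illustration of what "clearer intervention points" and "better workflow observability" can mean in code, here is a rough sketch in which every step emits a structured event and designated steps require explicit approval before the system proceeds. The event shape and the approve callback are assumptions made for the sake of the example.

```python
# A sketch of one "option-preserving" property from the list above:
# every workflow step emits a structured event, and designated steps
# require explicit approval before the system proceeds. The event
# fields and approve() callback are illustrative assumptions.
import json
import time
from typing import Callable

def observed_step(name: str, needs_approval: bool = False):
    """Wrap a step so it is auditable now and delegable later."""
    def wrap(fn: Callable):
        def run(*args, approve: Callable[[str], bool] = lambda _: True, **kw):
            if needs_approval and not approve(name):
                raise PermissionError(f"intervention point: {name} not approved")
            result = fn(*args, **kw)
            print(json.dumps({"step": name, "ts": time.time(), "ok": True}))
            return result
        return run
    return wrap

@observed_step("apply_remediation", needs_approval=True)
def apply_remediation(host: str) -> str:
    return f"restarted service on {host}"

# Today a human supplies the approval; later, a policy function can take its place.
print(apply_remediation("web-01", approve=lambda step: True))
```

The point of structuring work this way now is that swapping a human approver for an automated policy later becomes a configuration decision rather than a rewrite.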

The emerging standards work points in the same direction. NIST’s AI Agent Standards Initiative is explicitly focused on trusted, interoperable, and secure agent ecosystems, and the associated work on agent identity and authorization makes the underlying point very clearly: once agents begin acting in real environments, the problem is no longer simply whether the model can reason. The problem is whether it can be identified, authenticated, authorized, constrained, audited, and governed in a way that makes the surrounding system trustworthy. In other words, autonomy is as much a control problem as a capability problem.
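A toy version of that control loop helps show how little of it depends on model quality. In the sketch below, every action an agent attempts is checked against a bounded set of scopes and recorded whether or not it is allowed. The identity shape, the scope names, and the in-memory audit log are all simplified assumptions, not any standard's actual schema.

```python
# A minimal sketch of the control problem described above: before an
# agent acts, the system asks who it is and what it is permitted to do,
# and records the answer either way. The AgentIdentity shape, scope
# names, and in-memory audit log are simplified assumptions.
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str
    scopes: frozenset[str]                       # bounded set of permitted actions

AUDIT_LOG: list[dict] = []

def authorize(agent: AgentIdentity, action: str, resource: str) -> bool:
    allowed = action in agent.scopes
    AUDIT_LOG.append({"agent": agent.agent_id, "action": action,
                      "resource": resource, "allowed": allowed})
    return allowed

triage_bot = AgentIdentity("triage-bot-7", frozenset({"read_ticket", "draft_reply"}))
print(authorize(triage_bot, "draft_reply", "T-101"))   # True: inside its bounds
print(authorize(triage_bot, "issue_refund", "T-102"))  # False: denied, and audited
print(AUDIT_LOG[-1])
```

Notice that nothing in this loop asks whether the model is smart. That is the sense in which autonomy is a control problem before it is a capability problem.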

That is why I think the third pillar is often discussed too casually. It is not merely the continuation of Pillar 2 with more confidence. It introduces a different level of consequence. A weak assistant wastes time. A weak workflow agent creates friction. A weak autonomous system can create operational risk at scale.

The research side of this is still catching up, but the direction is already visible. The academic and technical work on software agents is a good example. Systems like SWE-agent have shown that large language models, when paired with the right interfaces, can already navigate repositories, edit files, and run tests in bounded software-engineering environments. That matters. It tells us that the building blocks of autonomy are becoming technically plausible in real domains rather than only in toy examples. But it does not mean those systems can simply be generalized into production operating models without much stronger controls, observability, and governance than most companies currently have.
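The word "bounded" is doing real work in that sentence. In practice it often means the agent never gets a general shell, only a small set of named tools whose behavior the environment defines. The sketch below illustrates that pattern generically; it is not SWE-agent's actual interface, and the whitelisted commands are placeholders.

```python
# A generic sketch of a bounded tool surface: the agent chooses among
# named tools, and the environment defines what each one actually does.
# This is not SWE-agent's real interface; the commands are placeholders
# and assume git and pytest are available in the working directory.
import subprocess

ALLOWED = {
    "run_tests": ["pytest", "-q"],
    "list_files": ["git", "ls-files"],
}

def agent_tool(tool_name: str) -> str:
    """Execute one pre-approved command; anything else is refused."""
    if tool_name not in ALLOWED:
        raise PermissionError(f"tool '{tool_name}' is outside the agent's bounds")
    result = subprocess.run(ALLOWED[tool_name], capture_output=True, text=True)
    return result.stdout

# The model proposes a tool by name; it never composes raw shell commands.
print(agent_tool("list_files"))
```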

This is why the third pillar also has to be understood as a product question, not just an internal productivity question.

 

Building Toward Autonomy Without Breaking the System

A company like Liferaft should care about autonomous systems internally because they may eventually create meaningful operating leverage. But we should care about them externally because they influence what customers will expect and what differentiated products will increasingly require. The third pillar is the point at which AI starts to matter not only because it changes how we work, but because it changes what kind of company we can become.

That matters especially for organizations with a serious data science or innovation capability. In many firms, those teams are discussed as if they sit beside the operating model. I think that is increasingly the wrong way to view them. As AI capabilities mature, the groups closest to model behavior, retrieval, evaluation, orchestration, and productization become central to the company’s strategic direction. They are not just participating in the shift. They are one of the main reasons the shift matters.

This is also why I think the third pillar should include both autonomous systems of work and AI-driven products in the same conversation, even if they are governed differently. In both cases, the company is learning how much judgment can be delegated, how much structure is needed for trust, and what kinds of value emerge when software begins to do more than assist.

The challenge, of course, is that not every organization is equally ready for that step.

Microsoft’s 2025 Work Trend Index is helpful as a measure of where leadership thinking is moving, even if one should always read vendor research with a little discipline. Its core argument is that organizations are beginning to move toward what it calls the “Frontier Firm,” a model in which human-agent teams become common and digital labor expands capacity rather than merely serving as a feature layered onto existing work. Whether or not one adopts that language, the underlying point is important: leaders are increasingly being forced to think not just about AI as a tool, but about AI as a participant in how the firm creates value.

That, to me, is the real significance of the third pillar.

 

Why Autonomy Is Ultimately a Product and Strategy Question

The third pillar is not a promise that the company will become fully autonomous. It is not an argument that human expertise is fading into irrelevance. And it is not a suggestion that every process should now be delegated to an intelligent system. It is a recognition that once AI becomes capable enough to carry out bounded outcomes, the structure of the company becomes part of the strategic question in a different way.

Who supervises? Who intervenes? Which decisions are allowed to remain automated? Which are escalated? Which workflows should be autonomy candidates at all? What kind of evidence do we require before trusting a system with a greater share of execution? And perhaps most importantly, how do we use what we learn internally to shape differentiated external offerings rather than merely becoming a more efficient version of the company we already were?

Those are not tooling questions. They are leadership questions.

That is also why I think the third pillar should be approached with both ambition and restraint. Ambition, because the long-term gains may be substantial and because companies that ignore this horizon entirely will eventually find themselves reacting to it under pressure. Restraint, because autonomy without control is not leverage; it is exposure.

If Pillar 1 teaches the organization how to work with AI, and Pillar 2 teaches it how to redesign workflows around AI, then Pillar 3 tests whether the organization is mature enough to let systems carry a greater share of outcomes without losing trust, quality, or strategic clarity.

That is a much more demanding challenge than the earlier pillars.

It is also the point at which AI becomes unmistakably strategic.

Because in the end, the third pillar is not really about whether machines can act. We already know that, in bounded contexts, they increasingly can. The more important question is whether the company knows what to do with that capability — internally, operationally, and in the market.

That is where autonomy stops being a technology story.

And starts becoming a company story.

 


 

Matt Hogan

Chief Technology Officer

Matt brings a deep passion for data, machine learning, and engineering excellence, with a laser focus on achieving impactful outcomes through agile best practices and cutting-edge innovative solutions.