Written by Matt Hogan
If Pillar 1 is where most organizations first encounter AI, Pillar 2 is where the discussion becomes more consequential.
The first pillar is relatively easy to understand. People use AI to help them work faster. Engineers generate tests and first drafts more quickly. Marketers accelerate research and content development. Sales teams prepare more effectively. Support teams retrieve and synthesize information with less effort. That layer matters, and it produces visible gains quickly enough that it tends to dominate the conversation.
But it is also where many organizations stop too early.
The more important shift begins when AI is no longer only helping an individual complete a task, but starts to execute pieces of a workflow. That is a different kind of change. It affects not just the speed of an activity, but the structure of how work moves through the company. It alters where context lives, how handoffs happen, and how much of the work between teams is still reconstructive rather than continuous.
That is what I mean by agent-assisted workflows.
The reason this matters is fairly straightforward. In most organizations, a meaningful portion of friction does not come from a lack of effort inside a team. It comes from what happens between teams. Work is triaged, handed off, clarified, re-explained, reformatted, reviewed, and re-entered into systems in slightly different ways at each step. Context is lost. People reconstruct the same understanding multiple times. Delays accumulate not because no one is working, but because the workflow itself is labor-intensive.
Traditional automation addressed parts of this problem, but usually only when a process was already stable, highly structured, and narrow enough to encode directly. What makes this moment different is that agents can now operate in spaces that are less rigid than traditional automation could handle. They can interpret requests, reason across steps, retrieve information, invoke tools, maintain working context, and move a process forward without requiring each intermediate action to be manually orchestrated. McKinsey’s recent work on agentic AI describes this shift directly: the real prize is not simply to accelerate existing use cases, but to redesign business processes around agents rather than bolting agents onto legacy workflows.
That distinction is important because it is the difference between local productivity and workflow leverage.
A team can become faster without the company becoming faster. That is one of the more persistent misconceptions in AI strategy right now. If each group improves its own task execution by 15 or 20 percent but the handoffs between those groups remain fragmented, the enterprise-level gains will be disappointing. McKinsey’s 2025 state-of-AI work points in this direction quite clearly. Organizations are reporting real gains from AI, but the pattern increasingly suggests that the larger returns come when companies rewire workflows and operating models around AI rather than treating it as a loose collection of role-specific tools.
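The arithmetic behind this is worth making explicit. A small sketch with made-up numbers shows why a 20 percent task-level speedup can translate into a single-digit end-to-end gain when handoff and wait time between teams stays untouched (all figures here are illustrative assumptions, not data from the surveys cited above):

```python
# Illustrative only: hypothetical numbers showing why local task speedups
# translate poorly into end-to-end gains when handoff time dominates.

def end_to_end_days(task_days: float, handoff_days: float, task_speedup: float) -> float:
    """Total cycle time when each team's task time shrinks by `task_speedup`
    but the handoff/wait time between teams is unchanged."""
    return task_days / task_speedup + handoff_days

baseline = end_to_end_days(task_days=10, handoff_days=15, task_speedup=1.0)
improved = end_to_end_days(task_days=10, handoff_days=15, task_speedup=1.2)

print(f"Baseline cycle: {baseline:.1f} days")
print(f"With 20% faster tasks: {improved:.1f} days")
print(f"End-to-end gain: {1 - improved / baseline:.0%}")
```

With these numbers, a 20 percent improvement in task execution yields only about a 7 percent improvement in cycle time, because most of the elapsed time sits in the handoffs the speedup never touched.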
This is also where the current enthusiasm around agents should be handled with some discipline.
There is no question that agentic systems are becoming a serious area of enterprise focus. McKinsey’s 2025 survey reported that 23 percent of respondents said their organizations had already scaled some form of agentic AI system, while another 39 percent were experimenting with one. Microsoft’s 2025 Work Trend Index describes a similar direction of travel, arguing that organizations are moving toward hybrid human-agent teams and outlining a progression from AI as assistant, to AI as teammate, to AI as something closer to digital labor embedded in the enterprise. Gartner’s forecasts are even more assertive: by 2028, it expects at least 15 percent of day-to-day work decisions to be made autonomously, and in a separate forecast it projected that 40 percent of enterprise applications would include task-specific AI agents by 2026, up from less than 5 percent in 2025.
Those signals are meaningful. They tell us that this is no longer a fringe topic.
But the cautionary side of the picture is just as important. Gartner has also warned that more than 40 percent of agentic AI projects will be canceled by the end of 2027 because of unclear business value, rising cost, or inadequate risk controls. McKinsey’s field observations point to something similar: organizations are finding that agents can create real value, but only when they are introduced as part of a broader redesign of workflows, governance, data foundations, and team structure. The implication is not that the market is wrong about agents. It is that the market is still early enough that many companies are confusing capability demonstration with durable operational value.
That is why I think the starting point for Pillar 2 should not be agents. It should be workflows.
That may sound like semantics, but it is not. If the conversation begins with “Where can we use agents?” the result is often a search for demos. If it begins with “Which workflows are currently slowed down by repetitive analysis, poor handoffs, or manual reconstruction of context?” the result is more likely to be a coherent business case.
The difference matters because not every workflow should become agentic. Some are too low value to justify the overhead. Some are too risky. Some remain too dependent on judgment, trust, or nuance. But a surprising number of workflows sit in the middle. They are complex enough to be costly today, but structured enough that portions of them can be safely delegated.
In engineering, the examples are fairly easy to see. Build and test failure triage, PR preparation, regression-check support, and parts of incident classification all fit this pattern. In customer-facing functions, support triage, knowledge retrieval, escalation preparation, and routine follow-up drafting are similar. In GTM functions, research synthesis, lead enrichment, narrative preparation, and proposal assembly are obvious candidates. None of these use cases is particularly glamorous. That is precisely why they matter. Organizations rarely become more effective because they found one dramatic use case. They become more effective because they removed drag from the work that happens every day.
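To make the engineering case concrete, here is a minimal sketch of the kind of bounded, repetitive step an agent can own in failure triage: classifying a CI failure and deciding whether it needs a human. The categories, patterns, and field names are illustrative assumptions, not a real tool's interface:

```python
# A hypothetical sketch of bounded failure triage: classify a CI log excerpt
# and route anything unrecognized back to a human reviewer.
import re

TRIAGE_RULES = [
    ("flaky-infra", re.compile(r"timeout|connection reset|dns", re.I)),
    ("test-failure", re.compile(r"assert(ion)? ?(error|failed)", re.I)),
    ("build-error", re.compile(r"compile error|undefined reference|syntax error", re.I)),
]

def triage(log_excerpt: str) -> dict:
    """Classify a failure log and emit a handoff-ready summary stub."""
    for label, pattern in TRIAGE_RULES:
        if pattern.search(log_excerpt):
            return {"category": label, "needs_human": label == "test-failure"}
    # Anything unrecognized stays with a human reviewer by default.
    return {"category": "unknown", "needs_human": True}

print(triage("ERROR: connection reset by peer while fetching artifacts"))
```

The point is not the rules themselves but the shape of the delegation: the agent owns the repetitive classification, and the default path for anything novel is still a person.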
This is also why Pillar 2 should be thought of as a cross-functional pillar, not just a functional one.
Within teams, the gains are easier to see. Less manual triage. Faster preparation. More consistency. Better access to internal context. But the larger gains often sit across boundaries. A support case that carries cleaner context into Customer Success. A customer signal that reaches Product and Engineering with less distortion. A GTM insight that moves more directly from campaign analysis into sales enablement and field messaging. A product issue that arrives in engineering already classified, summarized, and contextualized rather than requiring a fresh reconstruction.
Those kinds of improvements rarely look dramatic in isolation, but collectively they can change how quickly the company moves.
This is why I do not think Pillar 2 should be described as “AI doing more work.” That framing is technically true, but strategically too vague. A better framing is that Pillar 2 reduces the amount of workflow effort spent on carrying context, restating context, and recovering context. In most organizations, that is a substantial hidden tax.
If that sounds a bit abstract, the emerging technical literature points in the same direction. Recent survey work on LLM-agent evaluation has emphasized that enterprise deployment requires more than simply checking whether an agent can complete a task. It requires attention to reliability, safety, tool use, planning, long-horizon performance, and the practical realities of operating inside systems with access control, compliance obligations, and real business consequences. A benchmark such as CLASSIC is helpful precisely because it evaluates enterprise agents across dimensions such as cost, latency, stability, and security rather than treating raw task accuracy as sufficient.
That matters because workflow systems fail differently from simple assistants.
An assistant that drafts a weak answer wastes a little time. An agent embedded in a workflow can create cost, delay, security risk, or operational confusion at a larger scale if it is poorly governed. This is why governance in Pillar 2 is not a secondary issue. It is part of the architecture.
NIST’s new AI Agent Standards Initiative is useful here because it captures where serious institutional thinking is heading. The emphasis is on trusted, interoperable, and secure agent ecosystems, not merely on model capability. Its accompanying work on AI-agent identity and authorization is even more telling. It implicitly treats agents as actors that must be identified, authenticated, authorized, monitored, and constrained if they are to operate safely on behalf of users or organizations. That is the right mental model. Once agents begin participating in workflows, they are no longer just software features. They become participants in the system.
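That mental model can be sketched in a few lines. The following is an illustrative toy, not any NIST-specified interface: an agent is a first-class actor whose every action passes through identification, authorization, and logging before it touches a workflow. All names and the policy structure are assumptions:

```python
# Illustrative sketch: treat an agent as an identified actor whose actions are
# authorized and audited, rather than as a software feature with open access.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_actions: frozenset  # explicit allow-list, everything else denied

@dataclass
class ActionGate:
    audit_log: list = field(default_factory=list)

    def execute(self, agent: AgentIdentity, action: str, run) -> str:
        """Authorize, run, and record an agent action; deny anything unlisted."""
        permitted = action in agent.allowed_actions
        self.audit_log.append((agent.agent_id, action, "allow" if permitted else "deny"))
        if not permitted:
            return f"denied: {agent.agent_id} is not authorized for {action!r}"
        return run()

gate = ActionGate()
triage_bot = AgentIdentity("triage-bot", frozenset({"classify_ticket", "draft_summary"}))
print(gate.execute(triage_bot, "classify_ticket", lambda: "ticket classified"))
print(gate.execute(triage_bot, "close_account", lambda: "account closed"))
```

Even this toy version captures the shift: the interesting question is no longer what the agent can do, but what it is permitted to do, and whether every action leaves a trail.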
This is also the point at which organizations often become either too aggressive or too timid.
The overly aggressive response is to assume that because a workflow can be partially delegated, it should be fully delegated. That is rarely the right starting point. The overly timid response is to reduce agents to a set of glorified assistants and never allow them to own enough of a process to generate meaningful leverage. That is equally limiting.
The more sensible posture is selective delegation. Let agents take on the portions of a workflow that are repetitive, reconstructive, and bounded. Preserve human ownership where judgment, exception handling, trust, or accountability are central. Expand only when the workflow is observable enough that performance, failure modes, and intervention points are understood.
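Selective delegation can even be written down as an explicit decision rule rather than left as a posture. The attributes and thresholds below are illustrative assumptions; the point is that delegation criteria can be made reviewable:

```python
# A hedged sketch of "selective delegation" as an explicit, reviewable rule.
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    name: str
    repetitive: bool      # done the same way, over and over
    bounded: bool         # inputs, outputs, and failure modes are enumerable
    judgment_heavy: bool  # trust, nuance, or accountability is central
    observable: bool      # performance and intervention points are measurable

def delegation_decision(step: WorkflowStep) -> str:
    if step.judgment_heavy:
        return "keep-human"  # preserve human ownership where judgment is central
    if step.repetitive and step.bounded:
        # Expand only when the workflow is observable enough to supervise.
        return "delegate" if step.observable else "pilot-with-oversight"
    return "keep-human"

steps = [
    WorkflowStep("classify incoming tickets", True, True, False, True),
    WorkflowStep("approve refund exceptions", False, False, True, True),
    WorkflowStep("draft escalation summaries", True, True, False, False),
]
for s in steps:
    print(f"{s.name}: {delegation_decision(s)}")
```

None of the thresholds here are canonical. What matters is that the criteria are explicit, so expanding an agent's scope becomes a deliberate decision instead of a drift.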
That kind of discipline may sound less exciting than the usual rhetoric around digital labor, but in practical terms it is far more valuable. It is how organizations move from demos to operating leverage.
And that, to me, is the real significance of Pillar 2.
Pillar 1 makes individuals more effective. Pillar 2 begins to make the organization itself more coherent. It changes how work moves, how context travels, and how many human steps are actually required for the company to get from an input to an output. It is where AI starts to affect not just productivity, but coordination.
That is why I think it deserves more attention than it currently gets.
Most companies still talk about AI as if the central question were how much faster a person can complete a task. That matters, especially at the start. But over time, the larger strategic question becomes how much of the workflow still depends on people manually bridging the gaps between systems, teams, and decisions. The organizations that answer that question well will end up with more than faster employees. They will end up with a more effective operating model.
That is the real promise of the second pillar.
And it is also what makes it more difficult than the first.