April 2026 · 6 min read

The New Management Problem Is Not Adoption

AI adoption is no longer the hard part. Deciding where it operates with low friction — and where it does not — is.

This post expands on a thread I posted on LinkedIn. It is the second in a series on what AI is actually changing in software engineering. The first post covers what happened to delivery when coding got faster.

The question has changed

A year ago, many engineering organizations were still debating whether to allow AI-assisted coding at all. That debate is largely over. Most teams have settled on "yes" and moved on.

The harder question — the one most teams have not fully answered — comes next. AI is now part of the normal development workflow. Output is rising. More code is being written, more pull requests are being opened, more ideas are becoming plausible implementations faster than before.

That is where management has to start making real decisions.

The new problem is not adoption. It is keeping speed aligned with judgment. And that requires something most "use AI to move faster" mandates do not include: a clear position on where faster execution is low risk, where review needs to get deeper, and where human judgment has to stay very close to the work.

Not all tasks carry the same risk

This is the distinction that is easiest to flatten and most expensive to ignore.

Using AI for boilerplate, repetitive refactors, test scaffolding, or first-pass documentation is one kind of task. The code is easy to review, easy to revert, and the consequences of a mistake are visible quickly.

Using AI for core abstractions, data model changes, security-sensitive paths, or behavior that creates long-term coupling is a different kind entirely. A mistake here does not show up in two days. It shows up in two months, touching ten downstream systems, by which point the original decision is long-merged and the author has moved on to something else.

The decision rule I find most useful is simple: time to discovery × blast radius.

A UI variant that is slightly wrong may be visible in two days and easy to revert. That is a short time to discovery and a contained blast radius. AI with light review is fine. A pipeline change that is slightly wrong may take two months to surface and touch ten downstream systems before anyone fully understands the damage. That is a long time to discovery and a wide blast radius. Human judgment needs to stay very close to that work.

[Figure: 2×2 matrix showing task types by time to discovery and blast radius, with four quadrants: accelerate, review carefully, watch closely, and keep humans in the loop]
The decision matrix. Time to discovery and blast radius together determine how much friction AI-assisted work needs. Most teams treat these four quadrants identically — which is where the cost accumulates.
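To make the rule concrete, here is a minimal sketch of the matrix as code. Everything in it is an illustrative assumption rather than a prescribed policy: the Change fields, the thresholds, and which axis combination lands in which quadrant would all be calibrated differently by each team. The point is only that two rough estimates are enough to pick a review posture.

```python
from dataclasses import dataclass
from enum import Enum


class Quadrant(Enum):
    ACCELERATE = "accelerate"                    # fast discovery, contained blast radius
    REVIEW_CAREFULLY = "review carefully"        # fast discovery, wide blast radius
    WATCH_CLOSELY = "watch closely"              # slow discovery, contained blast radius
    HUMANS_IN_LOOP = "keep humans in the loop"   # slow discovery, wide blast radius


@dataclass
class Change:
    description: str
    days_to_discovery: int   # rough guess: how long until a mistake would surface
    downstream_systems: int  # rough guess: how many systems a mistake would touch


# Illustrative thresholds; every team would draw these lines differently.
SLOW_DISCOVERY_DAYS = 14
WIDE_BLAST_RADIUS = 3


def classify(change: Change) -> Quadrant:
    slow = change.days_to_discovery > SLOW_DISCOVERY_DAYS
    wide = change.downstream_systems >= WIDE_BLAST_RADIUS
    if slow and wide:
        return Quadrant.HUMANS_IN_LOOP
    if slow:
        return Quadrant.WATCH_CLOSELY
    if wide:
        return Quadrant.REVIEW_CAREFULLY
    return Quadrant.ACCELERATE


# The two examples from the text: the UI variant and the pipeline change.
print(classify(Change("UI variant", days_to_discovery=2, downstream_systems=1)))
print(classify(Change("pipeline change", days_to_discovery=60, downstream_systems=10)))
```

Nobody needs to run this as an actual script. The value is that estimating the two numbers, however roughly, forces exactly the conversation a blanket "move faster" mandate skips.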

Why teams flatten the difference

The management message that gets transmitted through organizations is usually something like "use AI to move faster." That message is not wrong. It is incomplete.

When the message is simply "move faster," the predictable result is not just more output. It is more output without enough distinction between tasks where speed is safe and tasks where speed is expensive. The team hears "move faster" and applies it uniformly. Nobody is making a bad decision individually; everyone is responding rationally to the incentive they were given.

And for a while, this looks like success. Velocity improves. More gets merged. People appear more productive. The metrics move in the right direction.

The cost shows up later: in production incidents, in systems that have become harder to reason about, in review fatigue, and in technical debt that accumulated not through negligence but through speed applied without distinction.

The management job is not pushing adoption harder. It is deciding where speed should flow easily, where it needs more scrutiny, and where judgment has to stay firmly in the loop.

What the framework looks like in practice

The goal is not to build a formal policy with approval gates and checklists. Enforcement overhead tends to kill the benefits you were trying to capture. The goal is to build a shared mental model — one that engineers carry into their daily work and apply automatically.

That mental model has three zones.

Where speed can flow freely. Boilerplate, test scaffolding, repetitive refactors, documentation, first-pass implementations of well-specified tasks. Low time to discovery, contained blast radius. Review lightly, merge confidently, move on.

Where review needs to get deeper. New integrations, meaningful behavioral changes, anything that touches shared infrastructure or data contracts. Not blocked — but not treated as low-stakes. This is where the reviewer's job changes from "does it work" to "is this the right design."

Where human judgment has to stay close. Core abstractions, security-sensitive paths, data model changes, anything creating long-term coupling. AI can help with drafts and exploration. The decisions need a human who understands the full context and owns the outcome.

[Figure: horizontal diagram showing three zones from left to right: Flow (let speed run), Scrutinize (add friction), and Anchor (keep humans close), with example task types in each zone]
Three zones, not three policies. The goal is a shared mental model engineers carry into daily work — not a formal approval process that adds overhead without adding judgment.
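One lightweight way to keep that mental model visible without turning it into a gate is to suggest a zone on each pull request from the paths it touches. The sketch below is hypothetical throughout: the path patterns, the directory names, and the first-match-wins ordering are assumptions for illustration, and the output is a hint a reviewer can override, not an approval rule.

```python
from fnmatch import fnmatch

# Hypothetical path patterns mapped to the three zones, ordered from most to
# least sensitive so the riskiest match wins. Real patterns are team-specific.
ZONE_PATTERNS = [
    ("anchor",     ["*/auth/*", "*/migrations/*", "*/core/models/*"]),
    ("scrutinize", ["*/integrations/*", "*/api/contracts/*", "*/infra/*"]),
    ("flow",       ["*"]),  # everything else defaults to the flow zone
]


def suggest_zone(changed_files: list[str]) -> str:
    """Return the most sensitive zone that any changed file falls into."""
    for zone, patterns in ZONE_PATTERNS:
        if any(fnmatch(path, pattern) for path in changed_files for pattern in patterns):
            return zone
    return "flow"


# A PR mixing a test file with a data model change is anchored by its riskiest file.
print(suggest_zone(["src/tests/test_billing.py", "src/core/models/invoice.py"]))
```

The useful part is not the matching. It is that the patterns are written down where engineers can argue with them, which is much closer to a shared mental model than to a checklist.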

The compounding problem

One of the reasons this matters more now than it did two years ago is that AI makes plausible-looking output much easier to produce. A weak design that once took a day to implement can now be in a pull request in an hour. It looks finished. It probably passes tests. The reviewer, who is already looking at three other PRs today, approves it.

That is not a review failure. That is a system that was not set up to handle the new volume and pace. The reviewer did what they could with the time they had. The problem is upstream — in the absence of a shared understanding of which changes deserve more attention.

The teams that benefit most from AI will not be the ones that apply it most aggressively. They will be the ones that apply it most deliberately — with a clear sense of where the leverage is safe and where it needs to be earned.

Getting that right is not a technical problem. It is a management problem. And it starts with acknowledging that "use AI to move faster" is only half a strategy.

The next post in the series looks at how memory and repository-specific context change both the usefulness and the risk profile of AI-assisted development — and what that means for teams managing longer-running agent workflows.
