The AI taskforce problem: why most companies form them wrong

When a company decides AI is strategically important — and most have, by now — the standard response is to form a taskforce. Pull together a cross-functional group, assign someone to lead it, give it a mandate to figure out how AI fits into the business.

This is a reasonable instinct. It’s also frequently counterproductive, for a specific reason that has nothing to do with the people involved.

What the taskforce is usually formed to do

The typical AI taskforce mandate has two components: identify where AI can be applied, and develop standards for how the team uses it.

Both are legitimate goals. The problem is the sequencing. Most taskforces start with the second component before they’ve properly done the first — they develop usage guidelines and tool recommendations before they’ve diagnosed where the coordination friction actually lives.

The result is a set of AI usage standards that don't map to the team's specific problems: generic best practices, applied uniformly, that improve some workflows marginally and leave untouched the workflows where the real drag is concentrated.

The teams that get the most from AI aren’t the ones with the most comprehensive usage guidelines. They’re the ones that correctly identified the two or three places in their operations where AI could change the economics of how work gets done — and built specifically for those places.

The diagnostic step that most taskforces skip

Before any AI integration decision, there’s a prior question: where does the team’s accumulated knowledge currently live, and how accessible is it?

This sounds like an HR or knowledge management question. It’s actually a prerequisite for effective AI deployment. AI can only work with information that’s been made available to it. If the critical knowledge for doing your team’s work is locked in people’s heads, embedded in undocumented workflows, or scattered across a combination of email threads and tribal memory — AI will produce generic outputs that require as much effort to fix as to write from scratch.

The taskforce that skips this diagnostic is optimizing the interaction layer without addressing the infrastructure layer. You get better prompts producing mediocre outputs, because the underlying knowledge architecture hasn’t changed.

What the right formation sequence looks like

Map the knowledge landscape first. Where does accumulated institutional knowledge live? What does AI currently not have access to that would change what it could do? And where are the highest-friction points in how work gets done — the places where context loss, re-explanation, and coordination overhead concentrate?

Answer those questions before forming any standards. The standards should be a response to the actual terrain, not a generic framework applied to an unmapped surface.

The mandate I’d give an AI taskforce: spend the first month doing nothing but answering those questions. No tool evaluations. No usage policies. No AI days. Just a rigorous diagnostic of where the knowledge lives and where the friction is. Everything else follows from that.

This is also, not coincidentally, the shape of a Clarity Sprint. Five days. No deliverables on day one. A rigorous diagnosis before any recommendations.

Read about the Clarity Sprint