The Plausibility Trap: Why AI-Written Strategy Looks Right Until You Try to Use It

I use AI in strategy work most days. So when Angela Davis told me, sitting in her office in Dunedin, that the shortcut was dangerous, I went in ready to defend it.

I came out with a new appreciation for her view.

Angela has spent two decades in strategy. Psychology and sociology background, senior strategy roles across central government, tertiary education and the not-for-profit sector, now running her own consultancy and stepping into governance. She knows what good strategy looks like. She also knows what it costs an organisation when a strategy looks the part and quietly does nothing.

Her view on AI in strategy work is sharper than mine. And the more I think about it, the more I think she’s named something a lot of people are talking around.

The strategy that looks right at first glance

Here’s what Angela said when I asked her about AI-written strategy:

“It looks kind of good at first glance. It sort of sounds right. It’s got good strategy-sounding headings: vision, objectives, pillars, roadmaps, horizons, all these kind of useful things. The priorities are worded in very catchy-sounding ways. But then when you start interrogating it, it becomes quite obvious that this is not a strategy that anyone can actually implement.”

That’s the trap. AI isn’t producing bad strategy. It’s producing plausible strategy. The two are very different problems.

Bad strategy gets caught. Someone in the room says “this doesn’t make sense” and the document goes back. Plausible strategy slides through. It has the right shape and the right vocabulary. Nobody objects, because there’s nothing obvious to object to. It just quietly fails to do anything once the document leaves the room.

I’ve seen this in my own work. I’ve seen it in client work that landed in front of me. That phrase, “it sounds right”, is showing up more often in client conversations. It’s worth taking seriously.

Why it sounds right

The reason is structural, not accidental. Angela landed it cleanly when we got into the mechanics:

“AI is not a magic box that’s coming up with really innovative, new ideas. AI is a probability tool. This is what it’s seen on the internet, so this is more likely.”

That’s the whole game. A model trained on millions of strategies will, on average, produce something that looks like the average strategy. It can’t help it. The training data does the steering.

Which means the harder you push AI to “give me a strategy”, the more reliably it will give you the most common version of one. Same five pillars. Same three horizons. Same vision statement that could belong to any council, any university, any not-for-profit in the country. If your mandate already looks like every other council’s mandate, AI will compound that, not break it.

A good strategy is supposed to make a choice. Plausible strategy refuses to make one.

The truth isn’t on the internet

Then Angela said something I’ve been thinking about since:

“People don’t really put the truth on the internet. When you really discover the useful insights is when you actually have conversations with the manager who was implementing that strategy or doing that thing. You’ll hear all about the challenges that were faced, what really happened behind the scenes, some really exciting wins that they weren’t able to put in the report or some big challenges.”

If the most useful strategic intelligence in your sector lives in the heads of three to five people who’ve already tried the thing you’re about to attempt, then no amount of AI-assisted desktop research can substitute for the conversation. The reports are sanitised. The case studies are curated. The post-mortem nobody published is the one you actually need.

This is why Angela structures her discovery phase around at least three to five direct conversations, not around documents. It’s the part of strategy work AI cannot do. Not because AI is weak, but because the data isn’t there. The truth was never written down.

The flattening problem

There’s a second mechanic worth naming, and it’s the one I see most often in my own use. Angela described it with the kind of accuracy that comes from doing the work:

“Each time I ask it to make it shorter, it’s just flattening the insights down to this lowest common denominator. Sometimes if I’ve done the work and I’ve read the reports and I’ve had the conversation myself, I’m putting together quite interesting diverse concepts that create something new and it really sparks a whole new way of thinking in a new direction.”

This is the part of AI-assisted work nobody warns you about. Iterating with a model tends to regress your thinking, not sharpen it. Every “make it punchier” request shaves off a corner. Every “make it more concise” pulls toward the average. The unusual idea you started with, the one that might have moved the organisation, gets quietly smoothed away by the third or fourth pass.

The interesting strategy lives at the edges. AI defaults to the centre.

The test that breaks most AI-assisted strategy

There’s a simple way to know whether AI did too much of the work. Angela put it like this:

“That leader has to take that strategy and present it to the senior leadership team or to their boss or to a council. If they haven’t done the thinking or really been embedded in the process, that’s really hard to do. You’re sort of just presenting this thing that someone else made for you.”

Try defending it. Stand in front of the board with the strategy AI helped you write and field three sharp questions. If you can answer them, the AI was a tool. If you can’t, you’ve been helicoptered to the summit and you’re calling it a climb.

I learnt this lesson early. Human-to-human engagement first. AI used tactically to pinpoint where the real thinking happens, and to help me do the thinking rather than do it for me.

Where AI does earn its keep

I want to be careful here. None of this is an argument for keeping AI out of strategy work. I use it daily and I’m not stopping. The point is that AI is useful in narrow ways and dangerous in broad ones.

It’s useful for synthesising thirty pages of public consultation feedback into themes. It’s useful for stress-testing a draft against an opposing view you haven’t time to write yourself. It’s useful for the early discovery scan, the kind that surfaces what’s been published so you can spot what hasn’t been.

What it’s not useful for is replacing the thinking. Or the conversations. Or the choice that a real strategy actually makes. The pattern that works for me, and the one I see working for clients, is specialist plus AI. Deep expertise still doing the judgement work, AI taking out the friction. Reverse the pairing and you get plausibility without substance.

Naming the trap

The plausibility trap is this: AI will reliably produce strategy that passes the eye test, fails the implementation test, and degrades the longer you iterate with it. The cost isn’t a bad document. The cost is two years of wasted effort because nobody noticed the strategy was hollow until the milestones started slipping.

The fix isn’t to stop using AI. The fix is to stop trusting plausibility. Push back on documents that sound right. Insist on the conversations the model can’t have. Test whether the leader can defend the strategy without their notes. And keep the thinking in human hands.

That’s the part that matters. The map looks like the terrain. It isn’t.

Listen to the full conversation with Angela on The Cairn podcast.

Next

Humans First: how 100 managers wrote their own AI principles