Average Is The Enemy
LLMs are fantastic averaging and pattern-matching machines, but...
I keep coming back to pattern matching. Once you see it, you can’t unsee it.
Law is pattern matching. Playbooks, checklists, best practices. All of them are ways of saying: *here is a known structure, now recognize it and reproduce the expected response.* We’ve been training people to do this for centuries, and we’ve been calling it education. A college exam rarely asks you to create something genuinely new. It asks how closely you can approximate the pattern the instructor has in mind. The “right” answer is the most recognizable one, not the most interesting one.
We have, in other words, been training students to think like machines. Now the machines are here, and they’re better at it.
Of course GenAI is phenomenal at pattern matching. That’s not dystopian. That’s literally what these systems are built to do. Saying “AI is good at pattern matching” is about as alarming as saying a food processor can slice 100 kilograms of onions faster than a human, and without crying. Machines have always been better than people at certain narrow things: speed, consistency, recall. This is not Armageddon. It’s kitchen equipment.
The confusion starts when people conflate *being good at pattern matching* with *being good at creation*.
Doing very well on a test does not mean you can invent a field. Matching the expected answer does not mean you can design the question. And here’s the uncomfortable part: patterns are, by definition, averages. They are the center of mass of prior behavior. Even when we celebrate someone as “exceptional,” we’re usually celebrating a tight fit to a known template with a bit of polish.
And despite the soothing voice of public radio hosts, everybody cannot be above average. Statistics doesn’t care about your feelings.
Why This Matters for Strategy
You generally do not want an average strategy. The incumbents already have that one, plus scale, plus distribution, plus political capital. An AI that gives you a perfectly competent, generic SWOT is not threatening consultants so much as it is threatening bad consulting.
I saw a LinkedIn post recently where someone crowed that Claude can generate a SWOT “for free,” unlike a human analyst. Shortly after, the same person admitted they hadn’t actually read the details of the output. When they did, the realization dawned: it was fine. Perfectly fine. Smooth. Plausible. And utterly average.
That’s the trap. Pattern-matched outputs *feel* authoritative because they resemble what authority has looked like in the past. Bullet points. Clean phrasing. Familiar structure. But if everyone can generate the same competent average instantly, the value shifts elsewhere.
The Line Between Tool and Crutch
This is where we’ve been very deliberate with VSTRAT. We blend interactivity with GenAI. We use the model as a push, not a replacement for thought. It helps you explore, surface options, stress-test ideas. It does not claim to carry you across the finish line while you nap.
We could have sold more licenses by slapping a big button on the homepage that says “Generate Your Strategy.” Plenty of people do. We don’t, because we don’t believe it. Not philosophically, not practically, and not ethically. Strategy worth having requires judgment, constraint, tradeoffs, and sometimes the courage to reject the most obvious pattern.
The future isn’t pattern matching versus people. It’s pattern matching in service of people who know when average is the enemy.
And that distinction is going to matter a lot more than whether a SWOT costs zero dollars or zero seconds.