Better Models Make Strategic Systems More Necessary
Raw intelligence isn’t the bottleneck. Structure is.
People keep predicting that vertical AI systems like VSTRAT will be replaced by frontier models any day now. The irony is that the better the models get, the more necessary systems like VSTRAT become.
I’ve been hearing this claim for years. It still hasn’t happened.
That is not because frontier models aren’t powerful. They are. They offer extraordinary general intelligence and increasingly fluent guidance. But raw intelligence has never been the bottleneck in professional decision-making. Structure is.
We have seen this movie before.
MOOCs will replace universities. Elite lectures are now free or cheap from Harvard, MIT, and Stanford. Yet universities are thriving, charging $80,000 per year, with acceptance rates at historic lows. Content was never the scarce resource.
Blockchain will replace banks. It didn’t. At best, it may become infrastructure inside banks. That is the ceiling.
The death of programming. Predicted repeatedly since FORTRAN and COBOL. Each new abstraction promised fewer programmers. What actually happened was more software, more complexity, and more programmers.
AI will replace lawyers and doctors. I build legal expert systems, and I earned a law degree believing some version of this myself. After enough time in court, and enough exposure to LLMs making confidently wrong legal judgments, the reality is obvious: these are tremendous support tools, but they are not a substitute for judgment, responsibility, and accountability.
None of this denies real disruption. We are already seeing it in front-line customer service, basic content creation, and low-level analysis. AI is replacing these tasks because it is good enough for work where the cost of a minor error is low.
But strategy is not a low-level task. In strategy, being 90% right is often 100% wrong, because the remaining 10% is where the competitive ruin or breakthrough lies. Extrapolating the success of basic chatbots to higher-order work like strategic planning is a category error. Generating fluent output is not the same thing as producing sound judgment. A model can create a beautiful, confident deck. That does not make it equivalent to years of domain experience, constraint awareness, and responsibility for outcomes.
Here is the part most people miss: as models improve, unstructured use becomes more dangerous, not less. The better the output sounds, the easier it is to mistake confidence for correctness.
When you use a raw prompt for strategy, you aren’t actually strategizing. You are accessing an average of the internet’s opinions.
That is why systems like VSTRAT don’t disappear as models get better. They become more valuable. If frontier models are the engines, VSTRAT is the cognitive scaffold. It provides the necessary guardrails: intake, sequencing, framing, and the rigorous domain discipline required to find a Blue Ocean.
VSTRAT is not for teams looking to outsource their thinking to a chatbot. It is for leaders who want a better system to do their own thinking.
Tools are only as good as the structure governing their use. That truth hasn’t changed. And it isn’t changing anytime soon.


