If LLMs Could Do Strategy, Google Would Be Winning
Why the world's smartest models still struggle with the messiest part of business: people.
Can LLMs Do Everything? Especially Business Strategy?
Probably someday. Just not now. Pretending otherwise is where the trouble starts.
I have spent a long time studying how humans actually make strategy. The hard part is not analysis. It is people. Strategy is created by people, for people, in systems where incentives shift, information is incomplete, and everyone involved is at least a little self-interested, confused, or lying.
That makes strategy a moving target.
Look at the AI industry itself. Google has the most money, the most scientists, and a decade's head start. Could Google's own LLMs have generated a perfectly sensible strategy to dominate the market? Almost certainly.
And yet, here we are. Google is currently a distant third behind OpenAI and Anthropic. This isn’t a failure of intelligence; it’s a failure of, well, strategy.
The “Around the Corner” Problem
LLMs do well when the rules are stable. Physics, math, and well-bounded engineering. Business strategy does not fit that description because of reflexivity: people respond to incentives, ego, fear, and power. Models don’t just have to predict the world; they have to predict how the world reacts to being predicted.
I take Waymos in Austin all the time. They are engineering marvels at 60mph, but they are “idiots” at 0mph. They still struggle with the “last mile.” Just the other day, a Waymo couldn’t figure out the curb in front of Google’s own HQ. It waited to pick me up around the corner.
In business, “around the corner” is where the deal actually happens. It’s the quiet work, the hallway trust, and the “Kabuki theater” of the office being stripped away for the real conversation.
The “Rumpled Suit” Data Gap
Elite institutions are currently telling executives that LLMs can “do” strategy. They are mistaking a well-formatted deck for a winning move. This is the same error we see in law.
LLMs are “White Shoe” lawyers. They are trained on Supreme Court briefs and appellate theory, the top 0.01% of “prestige” data. But they have never “seen” the vast majority of law: the millions of public motions and messy transcripts sitting in clerks’ offices and behind the PACER paywall.
An LLM can write a beautiful brief, but it doesn’t know the “Rumpled Suit” reality. It doesn’t know that a case often settles in 24 hours not because of a legal argument, but because one side says: “Show me your tax returns to prove your damages.”
The AI can’t smell a bluff or sense tax evasion. It knows the law, but it doesn’t know the work: the reality of how things actually get done.
Our Approach
At VSTRAT, we build tools that map the science of strategy, things like value chains. But we are very careful to defer the final decision to the humans.
LLM output is seductive. These posts are hand-written and then LLM-edited; there is nothing wrong with using LLMs as a tool. But the order matters. Human first. Machine second. Then human review.
If OpenAI or Google want to “align” their models with reality, they should stop chasing AGI poetry and write a $50M check to the court system to vacuum up the PACER archives. They need to move the training data from the Ivory Tower to the courthouse curb. Business strategists need to do the same, except their data is even more opaque and less structured.
Until then, an LLM cannot create strategy that actually matters.
The strategists who think ChatGPT can replace them probably can be replaced. The rest of us still have work to do.