The Confidence Machine
AI keeps getting smarter. That's not the problem we think it is.
The Smartest Bouncer on Long Island
LLMs are becoming measurably more intelligent by the month. Benchmarks keep falling. Context windows keep growing. The demos keep getting more impressive. But intelligence alone isn’t quite the road to nirvana it’s been made out to be.
Consider the case of Chris Langan, a man repeatedly tested with a consistent IQ of almost 200. By most measures, one of the smartest human beings alive. He worked as a bar bouncer on Long Island for years before eventually moving to a farm in Missouri. Chris wasn’t dumb. He had staggering raw intelligence. What he lacked was structure: the frameworks, the mentorship, the institutional scaffolding that channels raw cognitive horsepower into something the world recognizes as achievement.
This is the dirty secret nobody in the AI industry wants to talk about.
The Consulting Paradox
Multiple studies, including a well-publicized experiment at BCG with Harvard researchers, have shown that LLMs can amplify the output of high-end strategy consultants. The results are real. But we conveniently ignore a critical detail: to become a high-end strategy consultant in the first place, that person likely endured years of rigorous training. Case method. Structured frameworks. Mentorship. Thousands of hours learning how to think about problems before they ever touched a keyboard.
The AI amplified their existing structure. It didn’t create it.
Hand that same LLM to someone without that training and you get something that sounds brilliant but is ultimately mediocre. Confident, articulate mediocrity. Which, if you think about it, is far more dangerous than obvious incompetence.
Computers Were Supposed to Teach
The idea that computers should provide structure, not just answers, is not new. It’s older than most of the people building today’s AI products. In the 1970s, the PLATO system at the University of Illinois offered interactive courseware, forums, and messaging decades before Netscape existed. Alan Kay’s group at Xerox PARC pioneered overlapping windows, mouse-driven interfaces, and bitmapped graphical displays primarily to teach kids. Not doctors. Not consultants. Kids. The premise was that young minds needed structure to unleash their inherent creativity and intelligence.
That vision, computers as structured thinking systems, has largely been forgotten. We got distracted by the internet, then social media, then mobile, then crypto, and now generative AI. Each wave promising transformation. Each wave mostly delivering new ways to consume content rather than new ways to think.
What I Saw in Missouri
I live in Austin, TX, and I go to a lot of pitch events. I’ve seen hundreds of presentations from founders, consultants, and MBAs backed by serious venture money. I recently had the privilege of visiting the Center for Transformative Technology at the University of Missouri, where students had used a structured strategy system (full disclosure: one I built, called VSTRAT) to prepare presentations.
What I saw stopped me cold.
These students articulated specific strategies and the marketing focus to enable them. They identified precise segments and explained which archetypes would appeal to those segments, and why. They mapped the pain points their offerings addressed to concrete opportunities. The full chain, from strategy through segmentation to execution, was presented coherently and defended under questioning.
I have seldom heard this expressed so well by professional, full-time, funded entrepreneurs. That’s putting it mildly; maybe never. And I see those presentations regularly.
Think about that for a moment. Who are you going to sell your thing to, and why do they care? That’s not an advanced question. It’s the question. And yet most pitches I see treat it as a box to check rather than a problem to solve. You get a slide that says “target market” with a giant TAM number, maybe a vague persona, and then the founder races ahead to the product demo because that’s the part they actually enjoy talking about. The specific chain of reasoning, from segment to archetype to pain point to opportunity, gets skipped. Not because these people are stupid. They’re often brilliant. But nobody forced them to do the hard, structured thinking that connects those dots before they opened PowerPoint.
These weren’t MBA candidates with five years at McKinsey. They were students who had spent hours inside a structured system that guided them through the process step by step. Naturally, a few asked the obvious question: couldn’t you do the same thing with ChatGPT?
Why the Answer Is No (and Why It Matters)
VSTRAT is not a chatbot in the normal sense. It doesn’t take a prompt and hand you a deck. We studiously avoid the “type a sentence and get a strategy” approach, and frankly, we have some disdain for the people selling that as a serious product.
Instead, the system guides people to create their own strategies using structured interactivity. You work through frameworks. You make choices. You confront tradeoffs. The AI embedded in the system is substantial, but its role is to augment your thinking, not replace it. Similarly, we avoid the unintelligent whiteboard approach where collaboration just means moving sticky notes around. People create individually inside the structure, then use collaborative whiteboards to synthesize and finish a strategy together.
This distinction matters enormously, and it’s the one the current AI discourse almost entirely ignores.
An LLM without structure can make a person feel brilliant. That feeling is a trap. It’s the Dunning-Kruger effect running on GPU clusters: the tendency to not know what you don’t know and to believe everything is easy because the output looks polished. The prose is clean. The bullet points are crisp. The confidence is absolute. And the strategy is hollow.
Guided Intelligence
We call this approach “guided intelligence,” and we think it represents the likely future of artificial intelligence. Or at least the version of it that actually helps people rather than flattering them.
The concept is simple but apparently difficult to execute: build systems that make people genuinely smarter, not just faster at producing things that look smart. Systems where the AI’s role is to structure your thinking, surface the tradeoffs you’d otherwise miss, and force you to confront the hard questions before you’ve committed to an answer. Not systems that skip straight to a polished deliverable and let you reverse-engineer a rationale.
We’ve been working on this problem for decades, long before the current hype cycle made everyone an AI expert overnight. And after watching group after group of students produce work that outclasses what I see from funded professionals, the conviction has only deepened.
Chris Langan didn’t need more intelligence. He needed structure. The same is true for most of us. And it’s certainly true for the organizations currently spending millions on AI tools that produce beautiful, confident, empty output.
The smartest bouncer on Long Island is a cautionary tale. Not about the limits of human intelligence, but about what we waste when we treat raw capability as a substitute for disciplined thinking. That waste is about to get industrialized unless we change course.


