The Great AI Flex That Isn't
Or: How I learned to stop worrying and focus on what actually works
There’s something peculiar happening in the conversation around AI, and it’s not what you think. Walk into a gathering of lumberjacks or chefs, people who don’t know RAM from a male sheep, and you’ll hear the same refrain: “Which model are you using?” “Is that GPT-4 or Claude?” “Have you tried the new Gemini update?”
Here’s the thing: most of these people are entirely full of shit.
They’re not computer scientists. They haven’t read the research papers. They couldn’t explain the difference between a transformer architecture and a convolutional neural network if their startup funding depended on it. Yet somehow, they’ve convinced themselves, and are inexplicably trying to convince you, that dropping model names makes them AI experts.
It’s as if we’ve collectively decided that memorizing car engine part numbers makes you a Formula 1 driver. And frankly, it’s not just ridiculous, it’s embarrassing.
The Great Infrastructure Blindness
Let me paint you a picture. When you click “buy now” on Amazon, a symphony of invisible systems springs into action. Your order flows through order management systems: maybe BigCommerce, maybe Shopify, most likely something proprietary Amazon built in-house. These systems query databases that could be Oracle, SQL Server, MySQL, or one of the newer NoSQL solutions. Behind the scenes, supply chain management systems coordinate with inventory tracking, payment processing, and logistics networks.
It’s a technological marvel involving dozens of interconnected systems, each representing millions of dollars in development and infrastructure costs. And here’s the kicker: I’ve never once wondered or been asked which database powers an Amazon purchase. Not once.
I have friends who work at these companies. I’ve been offered jobs at some of them. I could easily find out which systems they use, but until writing this piece, it never even occurred to me to ask. Why? Because it doesn’t matter. What matters is whether my package arrives undamaged, at the time promised, with the product I ordered at the price I expected.
Yet with AI, we’ve somehow convinced ourselves that knowing whether something runs on GPT-4, Claude, or Gemini is not just interesting trivia but essential information. We’ve turned infrastructure into identity.
The Expensive Club
To understand why this obsession is misplaced, it helps to understand the LLM landscape. There are only a handful of major players: OpenAI, Anthropic, Google (Gemini), DeepSeek, Meta (Llama), Mistral, xAI (Grok), and a few others. Notice something? That’s a remarkably short list for a technology that’s supposedly democratizing artificial intelligence.
The reason is simple: building a state-of-the-art LLM costs hundreds of millions of dollars. The compute requirements are staggering, the talent is scarce and expensive, and the data requirements are massive. This isn’t like building a mobile app where a few developers in a garage can compete with Apple. This is heavy industry masquerading as software.
Sure, there are countless smaller models. The open-source community has produced hundreds of variations, fine-tunes, and specialized versions. But here’s what we’ve learned from extensive testing: most of them are significantly inferior to the major models. Even lesser-known versions of the major models often underperform compared to their flagship variants.
This creates an interesting paradox. We obsess over which of the few truly capable models we’re using, while largely ignoring the fact that for most practical purposes, the differences between top-tier models are often marginal.
The Cleverness Trap
So why do people care so much about which LLM powers their AI application? I have a theory, and it’s not particularly flattering.
Much of this obsession stems from people trying to signal their technical sophistication. Dropping model names has become a form of intellectual peacocking, a way to demonstrate that you’re plugged into the cutting edge of AI development. “Oh, you’re still using GPT-4? I’ve been experimenting with Claude 3.5 Sonnet and the reasoning capabilities are remarkable.”
But here’s the uncomfortable truth: obsessing over model choice often reveals the opposite of technical sophistication. It suggests a focus on the wrong variables entirely.
The truly sophisticated approach is to focus relentlessly on outcomes. Does the system solve the problem you need solved? Does it produce high-quality output? Is it reliable? Is it fast? Is it cost- and energy-efficient? These are the questions that matter.
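To make that concrete, here’s a rough sketch of what outcome-first evaluation could look like in code. Everything in it is invented for illustration: `run_task` stands in for whatever AI system you’re judging, `is_acceptable` is whatever quality bar your problem demands, and the numbers are placeholders. Notice that nothing in the harness knows, or cares, which model is underneath.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class OutcomeReport:
    passed_quality: bool    # did the output meet your acceptance criteria?
    latency_seconds: float  # was it fast enough?
    cost_usd: float         # was it cheap enough?

def evaluate_outcome(
    run_task: Callable[[str], str],        # any AI system: one model, a pipeline, whatever
    prompt: str,
    is_acceptable: Callable[[str], bool],  # your domain-specific quality check
    cost_per_call_usd: float,              # whatever the vendor charges you
) -> OutcomeReport:
    """Judge a system purely by what it delivers, not by what powers it."""
    start = time.perf_counter()
    output = run_task(prompt)
    elapsed = time.perf_counter() - start
    return OutcomeReport(
        passed_quality=is_acceptable(output),
        latency_seconds=elapsed,
        cost_usd=cost_per_call_usd,
    )

# Hypothetical usage: the stand-ins below would be real calls in practice.
report = evaluate_outcome(
    run_task=lambda p: p.upper(),            # stand-in for a real AI call
    prompt="summarize this contract",
    is_acceptable=lambda out: len(out) > 0,  # placeholder acceptance test
    cost_per_call_usd=0.002,                 # placeholder price
)
print(report)
```

Swap in a different vendor tomorrow and the verdict logic doesn’t change, which is exactly the point.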
The Results Revolution
Think about every other successful technology adoption in history. When businesses adopted email, they didn’t agonize over whether the mail server ran on Microsoft Exchange or Lotus Notes. When companies moved to cloud computing, they cared about uptime, cost, and features, not whether the underlying infrastructure used Amazon’s EC2 or Google’s Compute Engine.
The pattern is consistent: successful technology adoption focuses on outputs, not inputs. It evaluates solutions based on what they deliver, not how they’re built.
This shift in thinking becomes even more important as AI applications become more complex. Today’s cutting-edge AI applications (including our own VSTRAT) often combine multiple models, use sophisticated prompt engineering, implement custom fine-tuning, and integrate with complex data pipelines. In this environment, the base model becomes just one component in a larger system.
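To illustrate, here’s a hedged sketch of what “the base model as just one component” can look like in practice. The `ModelClient` interface and the stub providers are inventions for this example, not any vendor’s real API; the point is that the surrounding pipeline depends on an interface, so the model behind it can be swapped without the rest of the system noticing.

```python
from typing import Protocol

class ModelClient(Protocol):
    """The only thing the application sees: text in, text out."""
    def complete(self, prompt: str) -> str: ...

class StubProviderA:
    # Hypothetical stand-in for one vendor's model.
    def complete(self, prompt: str) -> str:
        return f"[provider A] {prompt}"

class StubProviderB:
    # Hypothetical stand-in for a competing vendor's model.
    def complete(self, prompt: str) -> str:
        return f"[provider B] {prompt}"

def answer_customer(question: str, model: ModelClient) -> str:
    """One stage in a larger pipeline: retrieval, prompt engineering,
    and post-processing would wrap around this call in a real system."""
    prompt = f"Answer concisely: {question}"  # simplified prompt construction
    return model.complete(prompt).strip()

# The pipeline is identical no matter which model is plugged in.
for client in (StubProviderA(), StubProviderB()):
    print(answer_customer("Where is my package?", client))
```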
The Transparency Paradox
There’s an irony here worth noting. The very transparency that makes LLM obsession possible is what should make it irrelevant. When Amazon’s backend systems work seamlessly, we don’t think about them. When an AI application works well, the underlying model should be equally invisible.
The best AI applications feel magical not because we can identify which model powers them, but because they solve problems so effectively that the technology disappears entirely. The moment you’re thinking about the LLM instead of the output, something has gone wrong.
A Better Way Forward
None of this is to say that model choice never matters. For developers building AI applications, understanding the strengths and limitations of different models is crucial. For researchers pushing the boundaries of AI capabilities, these distinctions are fundamental to their work.
But for everyone else, the vast majority of people who simply want AI to solve problems, the model obsession is an off-putting distraction from what really matters: results.
Instead of asking “What model is this?” try asking “Does this work?” Instead of debating the merits of different architectures, focus on whether the output meets your needs. Instead of treating model knowledge as a status symbol, treat successful problem-solving as the real measure of AI sophistication.
The future belongs not to those who can name-drop the latest models, but to those who can harness AI to create real value. And that future is refreshingly model-agnostic.
What matters isn’t the engine under the hood. It’s whether you reach your destination.