The AI Agent Revolution Could Empty Your Wallet
We built AI to act for us. The question now is whether it still answers to us.
The newest buzzword may supercharge the oldest scam: crooks separating you from your money.
Agentic AI isn’t coming. It’s here. It’s when an AI acts as your agent, going out and doing things for you by talking to another AI. Your digital assistant books a restaurant reservation by calling the restaurant’s AI booking system. Your investment app rebalances your portfolio by communicating with trading platforms. That’s convenient until it isn’t.
For better or worse, we already do much of this, and often for worse. Ever noticed those odd charges on your bank account? Agentic AI could well produce more of them. I was just writing to VSTRAT co-founder Peter Zemsky that it would be helpful to have a system to decode those charges. That has nothing to do with VSTRAT, but it does, unfortunately, seem to be at least one potential future of AI.
History Doesn’t Repeat, But It Rhymes
Agentic AI looks a lot like something old: Electronic Data Interchange (EDI), where computers talked to each other to automate business processes. Banks talked to suppliers, suppliers to manufacturers, manufacturers to distributors, charlatans to your checkbook. No humans required.
EDI promised fewer errors, faster transactions, and lower costs. And usually delivered. Until it didn’t. When systems broke, chaos followed: imagine ordering wood screws and receiving 1,200 pairs of size-9 sneakers or a pallet of remote controls you never wanted instead.
The difference? EDI errors were about products. Agentic AI errors could be about your entire financial life. And those mistakes, like buying that penny stock, might not be entirely accidental.
The Real Danger Isn’t Skynet
The threat isn’t killer robots. It’s systems quietly deciding what you “meant” to authorize, and charging you for it. If an AI is given a very basic instruction set, maximize profit without breaking any laws, it may well comply, in ways that should be illegal but aren’t.
Your AI might “upgrade” your streaming plan because it thinks you’re watching more content, or enroll you in “premium fitness coaching” because you once mentioned wanting to get in shape. The next thing you know, you’re auto-billed monthly for things you never explicitly approved.
And when you protest? They’ll point to page 27 of an 81-page terms-of-service agreement buried in fine print. Under current rules, you have to fight to stop paying instead of them having to justify continuing to charge you. That imbalance is the real danger.
The Subscription Trap, Now on Steroids
We’re already living in a subscription nightmare. Companies make more money by charging you forever than by selling once. The average American pays for a dozen subscriptions, many long forgotten.
Now imagine that model multiplied by AI speed. Agentic systems could sign you up for “optimizations” in seconds: “smart” upgrades, “helpful” add-ons, all “for your convenience.” Their AIs will enroll you instantly, while you’ll still have to wade through human-designed mazes to cancel. We politely call them “dark patterns.” Fraud is probably a more accurate term.
Consider this scenario: You mention to your AI assistant that your internet feels slow during work calls. Within minutes, it’s contacted your ISP’s AI system, analyzed your usage patterns, and “helpfully” upgraded you to premium business fiber for an extra $89 monthly. The AI believed it was solving your problem. Your wallet disagrees, especially since the slowdown was caused by your ISP throttling your service to nudge your AI into buying an upgrade.
Or maybe your smart home AI notices you’re ordering takeout frequently and decides you need meal kit deliveries. It finds the “perfect match” for your dietary preferences and signs you up for a six-month commitment. After all, the terms clearly state that AI agents acting on behalf of customers constitute valid consent. In reality, you enjoy shopping for food and cooking.
The Speed Problem
Human decision-making has natural friction. We pause, consider, maybe sleep on big purchases. Agentic AI removes that friction. In the time it takes you to read this sentence, an AI could sign you up for seventeen different services, each justified by some fragment of data about your behavior or preferences.
The companies behind these systems understand this speed advantage perfectly. They’re building AI agents that can navigate their own sign-up processes faster than any human could cancel them. It’s essentially giving a pickpocket access to your bank account.
Control Must Be Immediate and Absolute
Here’s the rule that should be law: Every automatic charge, every one, must require your explicit, ongoing consent. If you block access to your funds, that should be final and unconditional, no matter what a terms-of-service says.
If you don’t pay, they stop charging. If you don’t consent, they stop charging. It’s your choice, not a robot’s or anybody else’s. That’s it. You should never need three phone calls, two emails, and a notarized letter to stop a charge made by a robot.
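The rule above is simple enough to write down in code. Here is a minimal sketch of a consent ledger that enforces it; all names here (`ConsentLedger`, `authorize_charge`) are hypothetical illustrations, not any real payment API:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """Tracks which merchants the user has explicitly authorized.

    Hypothetical sketch: the point is the rule, not the plumbing.
    """
    approved: set = field(default_factory=set)

    def grant(self, merchant: str) -> None:
        # Consent is granted explicitly, per merchant.
        self.approved.add(merchant)

    def revoke(self, merchant: str) -> None:
        # Revocation is final and unconditional: no terms-of-service
        # clause can re-add the merchant; only a new grant() can.
        self.approved.discard(merchant)

    def authorize_charge(self, merchant: str, amount: float) -> bool:
        # No standing consent means no charge, full stop.
        return merchant in self.approved
```

Note the asymmetry this design bakes in: granting and revoking are equally easy, one call each, which is exactly the property today’s subscription mazes lack.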
This isn’t far-fetched sci-fi. AI can do this now. Multiple companies are already testing systems where AI agents negotiate and execute contracts with other AI agents. The pilot programs are small, but the infrastructure is real.
If the balance of power isn’t fixed immediately, this convenience will be weaponized.
The Human Element
These systems are built by people, and not all people are all that honest. Trust should be earned continuously, not granted once and hidden behind a tome-sized contract.
The companies building agentic AI agents profit when those agents spend your money, not when they save it. Every “optimization” generates revenue. Every upgrade creates a commission. Every add-on service feeds the machine.
Unless we correct that incentive structure, we’re inviting digital muggers into our wallets, politely labeled as “services.”
The worst part? When things go wrong, if you can reach a human you’ll eventually be told it was a “miscommunication between AI systems” or an “optimization error.” Good luck getting your money back from either the robot or the person acting like one.
The Path Forward
Agentic AI can be transformative if it works for you, not on you.
That means hard stops, clear limits, and easy off-switches. It means explicit, revocable consent for every transaction. It means cancellation that’s as fast as sign-up, by law if necessary.
We need regulatory frameworks that put the burden of proof on companies, not consumers. If an AI agent makes a purchase on your behalf, the company should have to demonstrate clear, specific authorization for that exact transaction. Not for “services like this one” or “optimization activities” but for that specific charge.
We need standardized kill switches that work across all platforms, and the power to authorize every transaction without exception. And we need substantive consequences for companies that take shortcuts or cheat. We need due process for our wallets.
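“Authorization for that exact transaction” can also be made concrete. One hypothetical scheme, sketched below with assumed names (`authorization_token`, `user_secret`), binds the user’s sign-off to a specific merchant, amount, and description, so proof of consent for one charge proves nothing about any other:

```python
import hashlib

def authorization_token(merchant: str, amount_cents: int,
                        description: str, user_secret: str) -> str:
    """Hypothetical: the user signs off on one exact transaction,
    producing a token bound to merchant, amount, and description."""
    payload = f"{merchant}|{amount_cents}|{description}|{user_secret}"
    return hashlib.sha256(payload.encode()).hexdigest()

def charge_is_authorized(token: str, merchant: str, amount_cents: int,
                         description: str, user_secret: str) -> bool:
    # Burden of proof on the charger: the token must match this exact
    # transaction, not "services like this one" or "optimization
    # activities." Change any field and the check fails.
    return token == authorization_token(merchant, amount_cents,
                                        description, user_secret)
```

A real system would use proper digital signatures rather than a shared secret, but the policy point is the same: consent is per-transaction, verifiable, and non-transferable.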
And we need this now, before the infrastructure becomes too entrenched to change.
The Choice Is Ours
We don’t need to fear an AI uprising; we need to stop an economic one. We’re already at the top of the slide, and it’s a slippery slope down if we don’t act now.
This isn’t utopia or dystopia. It’s the here and now. The technology exists. The business models are forming. The only question is whether we’ll demand systems that serve us or systems that serve themselves.
The choice is ours, but only if we take it before we lose it.