The AI Panic: What I'm Actually Worried About
Practical concerns from production experience, not hype
Every CTO is being asked the same question:
"How is AI going to change everything?"
After 18 years of watching tech hype cycles, here’s my honest answer: I’m not worried about robots taking over. I’m worried about CTOs making expensive decisions based on fear and FOMO.
What I’m Not Worried About
Before diving into real concerns, let me clarify what doesn’t keep me up at night:
The Hype I'm Ignoring:
"AI will replace all developers":
I've heard this about every tool from IDEs to frameworks. Good developers adapt and become more productive."Everything will be automated":
Production systems still need humans who understand the business context."Traditional programming is dead":
I’ve been hearing “code is dead” for two decades. Still writing code."You need an AI strategy now or you'll be left behind":
Most companies need basic operational competency before they need an AI strategy.
What Actually Keeps Me Up at Night
1. The Decay of Critical Thinking
This one scares me the most from a leadership perspective: AI can make people—and teams—lazier when it comes to thinking through problems, and worse, validating AI-generated answers without scrutiny.
It's frightening how quickly this pattern is becoming normal. Not long ago, I worked at a company where the CEO encouraged the team to “stop thinking and just ask AI.” In my view, that’s the worst version of AI adoption—one where people shut down their own thinking and blindly follow the “AI genius.”
What I’m Seeing:
Requirements gathering shortcuts:
“The AI will figure out what users want.”Architecture decisions avoided:
“Let’s see what the AI recommends.”Problem-solving atrophy:
“Why think through this when AI can do it faster?”Validation failures:
Teams implement AI suggestions without testing edge cases or understanding failure modes.Legal and compliance misuse:
I’ve seen teams use AI to answer critical legal questions—and take those answers at face value.
2. The "Magic Box" Problem
This shows up in two dangerous ways:
a) Relying on AI the team doesn’t understand
Teams depend on AI systems they can’t debug, troubleshoot, or modify. I’ve consulted with companies where critical business logic was buried in AI models no one could explain—let alone fix.
b) Using AI to build things the team can’t maintain
This one is subtle but more dangerous. AI tools can generate code faster than teams can absorb it, leading to invisible technical debt. I’ve seen teams ship AI-generated microservices that work—until they don’t. Then they realize no one understands the generated architecture well enough to modify it safely.
3. A Security Ticking Time Bomb
AI models are surprisingly easy to attack via adversarial examples, model inversion, and data poisoning. As a CTO, I worry we’ll flip the switch on some AI feature without understanding the threat surface—especially when those models touch sensitive customer or operational data.
Specific Concerns:
Adversarial inputs:
Small, intentional changes to input that cause the model to misbehave.Model inversion attacks:
Bad actors reconstruct training data from model outputs.Data poisoning:
Contaminating training data to subtly manipulate future outputs.Prompt injection:
Manipulating AI through carefully crafted inputs.
MCP (Model Context Protocol) servers are a good example. They’re powerful tools for extending AI, but currently lack basic security frameworks. We’re connecting AI to internal databases and APIs without the same rigor we’d apply to traditional integrations.
The scary part isn’t the sophisticated attacks—it’s the basic security hygiene that gets skipped because “it’s just AI.” Teams treat AI as magic rather than as software that needs proper defenses.
Thinking About AI From First Principles
The three things that keep me up—critical thinking decay, “magic box” dependencies, and security blind spots—all stem from one root problem:
Teams stop applying first principles thinking when AI gets involved.
First Principle:
Technology should solve real problems—not create impressive demos.
This counters the critical thinking decay. I’ve sat through meetings where teams reverse-engineer use cases to justify being “AI-first.” If you strip away the AI language and can’t clearly explain the problem or the business value, you’re building a solution in search of a problem.
Second Principle:
All systems fail. Failure modes define system design.
AI systems don’t just fail—they fail opaquely. If your team can’t explain how an AI model works or what happens when it breaks, you’ve created an unmaintainable dependency. If your product goes down when the AI does, you’ve introduced a single point of failure you can’t even debug.
Third Principle:
All systems can be attacked. Security must be built-in, not bolted-on.
Most teams treat AI security as an afterthought—if they think about it at all. But adversarial prompts, data leaks, and injection attacks are not edge cases. They’re the expected consequences of shipping software that processes untrusted inputs without proper safeguards.
Working AI doesn’t excuse you from engineering discipline—it demands more of it. The teams that succeed will be the ones who keep asking the hard questions:
Are we solving the right problem?
Can we build and maintain this system long-term?
Have we secured it properly?
These aren’t the flashy conversations that end up on tech blogs. But they’re what keep your AI projects from turning into expensive messes.
What I'm Actually Doing
Does worrying about AI mean I’m avoiding it? Quite the opposite.
I actively experiment with AI as part of my workflow, automation, and tooling. There’s real opportunity here—as long as we treat AI with the same respect and discipline we apply to any engineering tool.
So far, I’ve found valuable use cases in:
Quick visual prototyping
Research and data collection
Reporting and summarization
Code reviews and PR documentation
The Meta-Point
The biggest risk with AI isn’t the tech—it’s decision-making under pressure and uncertainty.
CTOs feel like they need an “AI strategy” because everyone else is talking about it. But in reality, the best AI strategy might be:
Solve real problems really well.
Understand AI’s true capabilities.
Make deliberate, grounded choices based on business value—not hype.
The companies that will win with AI aren’t chasing impressive demos. They’re using AI to amplify their strengths. They understand their business, their systems, and their people—and use AI to make them better.
My prediction?
In three years, the most successful AI implementations will be boring. They’ll solve specific, measurable problems.
Not the flashiest demos.
Not the "AI-first" experiments.
Just focused, grounded, disciplined engineering.
Speaking of the current AI hype
I was once looking to change domains, and I remember my friend joking with me, back in the day, that Big Data was so hot, anybody who could spell Hadoop was getting hired. A few years later, I saw - from the outside - what it was like when Big Data was no longer hot.
We see hype cycles come and go, and I think it is a rite of passage for each software engineer to decide for themself what is real gold and what is fake glitter. Unfortunately, we are getting a lot of non-disclosed advertising of AI touting its legendary benefits, which is unethical, like “adding poison to honey” (Arabic expression).
I see the current AI hype mania as a Critical Thinking Test as well as an Integrity Test for individuals as well as organizations. I am certain we will live to see the benefits of LLMs and that these will become consistent and well understood, but we will also see all the AI-hype critical thinking and integrity test failures as well, and those are going to be enormously painful for the affected organizations.
Thank you for sharing the useful use cases - I’m always looking out to grow my awareness of when to use LLMs!
You mentioned two usecase:
* Research and data collection
* Reporting and summarization
These two reminded me of this: https://buttondown.com/maiht3k/archive/information-literacy-and-chatbots-as-search/
Here are my TL;DR notes of that page:
Why are LLMs not a good technology for information access?
A. [Unreliable results] LLMs are statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words. (The system being wrong, say, 5% of the time could be disastrous if people come to trust it as being correct)
B. [Sense-making is hindered, and problem solving muscles atrophy] Information literacy sense-making is undermined by getting "the answer" from a system, even if the hallucination problem were to be solved for high accuracy
1. Here are the usual sense-making actions a person carries out:
a. Refine the question
b. Understand how different sources speak to the question (and locate each source within the information landscape)
So getting "the answer" without this sense-making could mislead the user never to refine their question nor understand the information sources. Also, there are the same blindspots as web search that we need to keep in mind once we see the top results of a search query: 1) what about the low-ranked search results? 2) what about the results the search engine saw but excluded from the search results? 3) what about the data that is outside the corpus available to the search engine?
2. [Judging reliability of different information sources] You need to know the information sources that were used for the chatbot's answer, and judge the reliability of each (so the algorithm making some sporadic prioritisation and conclusion is suspect, because there's always missing context the chatbot doesn't know about)
3. [Missing out on advice from other people] Engaging with people in a discussion forum cannot be substituted by an algorithm
4. The chatbot output is manipulated to sound friendly and authoritative, which is misleading
C. Even with Retrieval Augmented Generation (RAG), it is difficult to detect when the summary you are relying on is missing critical information
D. Even with including links to sources, the system presents a set conclusion, which discourages the reader from following the links and making up their own mind