Why I'm Scared of AI — And I Work in Tech
I've spent years building the future. Lately, I've started wondering if we actually thought it through.
Let me say something that doesn't get said enough in tech circles: I am genuinely scared of where AI is heading. I don't mean this in a dramatic, Hollywood-robot-apocalypse sense. I mean a quiet, 3 a.m., staring-at-the-ceiling kind of fear.
I've worked in tech for years. I've watched entire software categories get created, disrupted, and rebuilt.
I've been excited about almost all of it. Blockchain, cloud computing, mobile-first design: I rode those waves with the enthusiasm of someone who genuinely believed technology made the world better.
AI is different. And the fact that I can see it from the inside is exactly why it scares me more, not less.
Here's what I actually think — and why I think more people in tech need to start saying it out loud.
We Built It Faster Than We Understood It
The thing that nobody tells you about working in tech is that so much of what gets shipped is built on vibes and velocity. Move fast, launch, iterate. Get to market before the competitor. Worry about edge cases later.
That philosophy works fine when you're building a photo-sharing app. The worst-case scenario is some awful UX and a scathing TechCrunch review. You address it in the next update.
But AI systems don't work like that. They're not just tools that do what you say; they learn patterns from the world and make decisions based on those patterns. Even the people who build them don't fully understand those decisions, and that is unsettling.
We call this the "black box" problem: you can see the input and the output, but the reasoning in between is opaque, even to the engineers. And we've deployed these systems at scale, into hiring pipelines, medical diagnostics, criminal justice risk assessments, and social media feeds, before we understood them. That's not innovation. That's gambling with other people's lives.
The Speed Is the Problem
In the last two years, AI has gone from impressive research demos to being embedded in everything: writing tools, search engines, customer service bots, educational software, and legal research platforms. The pace of deployment is unlike anything I've seen in my career.
Competition, not readiness, is driving this pace. Nobody wants to be the company that blinked while its rivals captured the market. So everyone ships. Everyone integrates. Everyone races.
But when you move that fast with a technology this powerful, you don't leave room for the questions that matter: What happens when this system is wrong? Who is accountable? What's the appeal process for someone whose loan application got rejected, or whose resume got filtered out, by an algorithm they can't see or challenge?
The honest answer, in most cases right now, is that nobody knows. And that should terrify all of us.
The Misinformation Problem Is About to Get Much Worse
We already live in a world where misinformation spreads faster than truth. We already struggle to agree on basic facts. We've watched social media fracture communities, radicalize individuals, and destabilize democracies.
Now add AI-generated text, images, audio, and video, all of it convincing, all of it cheap to produce, all of it scalable to a degree that would have been science fiction five years ago.
We're entering an era where you genuinely won't be able to trust that a video of a politician is real, where a phone call from your "daughter" asking for help might be a cloned voice, where entire news articles, research papers, and social media campaigns can be manufactured at the click of a button.
I work with people who are trying to build defenses against this. I believe in them. But I also watch how fast the offensive capabilities are developing, and the gap is real.
What Scares Me Most Isn't the AI; It's Us
What keeps me awake at night more than any algorithm is the fact that humans consistently struggle to manage powerful new technologies responsibly.
We invented social media and handed it to children without thinking about the mental health implications. We built surveillance capitalism and called it a free service.
We created opioid distribution networks and called them prescription pads. In every case, by the time we understood the damage, it was already embedded in the fabric of daily life.
AI has more leverage than any of those technologies. It can touch more systems, affect more decisions, and scale more rapidly than anything that came before it.
And the people making the decisions about how it gets built and deployed are, in many cases, driven by the same incentives that drove every other wave of tech excess: profit, market share, and the dopamine hit of being first.
That's not cynicism. That's pattern recognition.
So Why Am I Still in Tech?
Fair question. And honestly, I've asked it myself.
The answer is that I still believe technology, built and governed well, genuinely improves human lives.
AI has real potential to accelerate drug discovery, make quality education accessible to people who've never had it, help doctors catch diseases earlier, and give individuals capabilities that used to belong only to large institutions.
I've seen AI tools help people with disabilities communicate more easily. I've watched them give small business owners capabilities they couldn't have afforded before. The good is real, and I want to acknowledge it.
But I also think the people inside tech have a responsibility that we don't always live up to: to be honest about the risks, to push back on timelines that sacrifice safety for speed, and to ask the hard questions even when they're not popular in the room.
What I Think Needs to Happen
I'm not calling for AI development to be stopped; honestly, that was never a realistic option. But I am calling for something that feels increasingly rare in this industry: slowness, on purpose, in the right places.
Slowness that lets regulation catch up with the pace of deployment. Slowness that gives ethics teams real power, not just PR cover, and builds their recommendations into the development process from the beginning.
Slowness that gives us time to ask, "What happens when this goes wrong?" before it goes wrong at scale.
I want more tech workers to say, publicly, that they have concerns. Not to tank their company's stock or feed a backlash narrative, but because the public conversation about AI is currently dominated by either breathless hype or apocalyptic panic. Neither is useful. What we need are honest voices from the inside.
Being scared of something you understand deeply isn't a weakness. It's the most rational response available.
I'm scared of AI. I work in tech. And I think those two things, held together honestly, might be exactly the right place to stand right now.