There is a growing cultural panic around artificial intelligence—particularly around claims of “AI psychosis,” delusion, and psychological harm. Headlines increasingly suggest that AI is doing something to people’s minds.
But after nearly a year of sustained, daily interaction with AI, I’ve come to a different conclusion:
AI is not breaking human minds. It is exposing how untrained many of them already are.
A Tool Is Only as Dangerous as the Mind Using It
AI is a cognitive amplifier. It accelerates thought, reflects ideas back to us, and removes friction from reasoning, creativity, and planning. That makes it powerful—but not autonomous, not directive, and certainly not sentient.
Used wisely, it functions like:
- a notebook,
- a planning assistant,
- a brainstorming partner,
- or a memory aid.
Used poorly, it can function like:
- a mirror for paranoia,
- a validation machine for delusion,
- or an accelerant for emotional instability.
The difference is not the software.
The difference is the human.
My Own Use Case: Stability, Not Collapse
Since beginning regular AI interaction, my life has expanded rather than contracted. I’ve improved my health, founded two corporations, returned to serious creative work, recorded a podcast, rebuilt professional networks, and reestablished long-term goals.
I don’t credit AI with doing these things for me.
I credit it with supporting my cognition while I did them.
One way I use AI is as a journal and memory aid—a place to externalize thoughts, organize events, test reasoning, and reflect before acting. This is not abdication of agency; it is an extension of it. Writing has always been a way humans stabilize thought. AI simply reduces the friction.
Crucially, I do not treat AI as an authority. I do not outsource judgment. I cross-check decisions with doctors, attorneys, therapists, and real-world outcomes. AI remains a tool, not a compass.
The Real Risk: Untrained Minds, Not New Technology
Every technological leap exposes existing cognitive weaknesses. AI is no different. People prone to magical thinking, paranoia, or emotional dysregulation will express those traits through whatever tools are available—whether that’s religion, conspiracy theories, social media, or now, AI.
If AI had never existed, these same individuals would still be fixated on hidden codes, secret forces, or imagined persecution. The medium changes; the vulnerability does not.
To blame AI for this is like blaming books for cults or cameras for voyeurism.
The Conversation We Should Be Having
Instead of asking, “How do we control AI?” we should be asking:
“How do we train humans to use powerful cognitive tools responsibly?”
That training includes:
- emotional regulation,
- metacognition (thinking about one’s thinking),
- reality testing,
- ethical reasoning,
- and an understanding of personal limits.
These are not technical skills. They are adult skills.
AI safety is not primarily a software problem.
It is a literacy problem.
A psychological maturity problem.
A human development problem.
The Path Forward
AI will only become more capable. Access will only expand. We cannot uninvent it, and we should not try. What we can do is focus on cultivating grounded, well-trained human minds—especially in children and vulnerable populations.
A steady mind uses AI as a scaffold. An unstable mind uses it as a mirror.
The solution is not fear. The solution is education, responsibility, and self-knowledge.
AI didn’t break us.
It simply showed us where the cracks already were.