I see it every day.
Bright, capable students – sharp thinkers, ambitious professionals – turning to AI not as an assistant, but as an autopilot. They reach for ChatGPT before they reach for their own reasoning. They defer to algorithms before they trust their own analysis. They struggle to articulate thoughts that haven’t first been structured, refined, and handed back to them by an AI model.
It’s not just about convenience. It’s something deeper. A quiet but profound shift in mindset. And it worries me.
Not because AI is inherently dangerous. Not because I fear technology or resist change. But because I have spent my life in strategy, coaching, and teaching – helping people think better, analyze better, decide better. And I am witnessing, firsthand, what happens when thinking itself is outsourced.
We have always built tools to make life easier. The printing press freed us from the burden of manual transcription. The calculator saved us the tedium of long division. Google put the world's knowledge at our fingertips. But there is a difference between leveraging tools and surrendering to them. AI is not just another tool – it is a cognitive shortcut. Unlike a hammer or a telescope, it doesn't merely extend our physical or observational capabilities; it steps in to do the actual thinking for us. And therein lies the problem. Because thinking, like any skill, requires practice. If you stop engaging with complex ideas, if you no longer push yourself to reason through ambiguity, if you never let your mind wrestle with a challenge before seeking an answer, you begin to lose something fundamental.
I’ve had students struggle with original thought – not because they lack intelligence, but because they have become so accustomed to pre-packaged, AI-generated responses that forming their own perspective feels foreign, almost unnatural. I’ve seen young professionals hesitate when asked to form an opinion that isn’t validated by an algorithm. They are quick to Google, quick to summarize, quick to regurgitate. But slow, often painfully slow, to construct an argument from scratch.
The reliance is creeping in, quietly but steadily. And with it, an unsettling question: What happens when an entire generation loses the ability to navigate complex problems without AI scaffolding their thoughts?
We already see glimpses of it. I once asked a group of students how they would split a bill among five people, adjusting for tax and tip. It wasn’t a trick question, and yet their instinct wasn’t to estimate, to approximate, to think – it was to pull out their phones and let an app tell them the answer. Not one of them tried to work it out mentally, not even as a rough calculation. It was a small moment, but it revealed something significant. The first reflex wasn’t to think – it was to delegate thinking elsewhere.
Now, imagine a future where critical reasoning follows the same path. Where the ability to form independent, unassisted opinions erodes just as memorization has. Where complex decisions – about careers, ethics, relationships, even leadership – are increasingly shaped by external computation rather than internal reasoning. This isn’t a hypothetical. It’s happening. Slowly, invisibly, but undeniably.
And the real risk isn’t just reliance. It’s dependence.
I am not anti-AI. In fact, I use it. Quite extensively, I should admit. I recognize its value. But I also recognize its dangers – not in the dystopian, machines-will-take-over sense, but in the far more insidious way it is changing how we think, how we learn, how we process the world around us. AI should be a tool, not a crutch. And yet, with its unchecked, unplanned adoption, we may be headed in a direction where society no longer knows how to stand on its own.
There is a balance to be struck. We need to teach people – not just students, but professionals, leaders, decision-makers – to use AI without being used by it. To think before they search. To struggle with ideas before they refine them. To trust their own cognitive abilities before they default to machine-generated intelligence.
Because the true measure of progress isn’t just what we can automate. It’s also what we refuse to surrender.