
The most fascinating thing about artificial intelligence right now is not the technology itself, but how we are responding to it.
Walk into any conversation about AI and you’ll notice the pendulum swing violently between two extremes. On one side, it is hailed as the arrival of something akin to human-level intelligence, destined to transform everything overnight. On the other, it is brushed aside as nothing more than a clever trick dressed up with good marketing.
In my view, both reactions miss the real story unfolding before us.
The truth is less dramatic, but far more consequential. AI is advancing in ways that are subtle, uneven, and deeply worth paying attention to. It does not fit neatly into categories of either “world-changing” or “worthless.” What is happening is progress that doesn’t always announce itself with fireworks, but instead creeps into places we once assumed were the sole domain of human cognition. To ignore that is to misread history as it is being written.
What is at stake is the singular capacity that has always allowed humans to survive and thrive: the ability to make sense of the world by discerning patterns, connecting them to context, and building meaning. This is how we transformed raw information into knowledge, turned knowledge into expertise, and ultimately shaped it into wisdom. It is what allowed us to stay ahead of every other species, and ahead of every machine we have ever built. But in this moment, we risk losing that advantage, not because machines are suddenly more capable than us, but because we are choosing to respond with either blind awe or outright rejection rather than genuine curiosity.
Think of it this way. If you judge AI only by what it cannot yet do, you will always find reasons to dismiss it. If you see it only through the lens of hype, you will inflate its abilities to the point of fantasy. Both positions are comforting in their simplicity, but both are inadequate. The reality is that systems are beginning to show glimpses of something in between: not human wisdom, not consciousness, but adaptive intelligence that surprises even their builders. They are demonstrating that they can not only process what is known, but occasionally make leaps into the unknown. That is not trivial, nor is it the end of the world. It is something else entirely: a shift in how we think about thinking.
And yet, much of our public discourse still falls into a binary trap: AI is cast as either savior or scam. This is precisely the danger. The most significant breakthroughs in history have often been missed in their moment because they looked incremental rather than revolutionary. Electricity, the internet, even antibiotics were all initially underestimated, not because they were unimpressive, but because they did not fit the dramatic stories people wanted to tell about them. AI is following a similar path. We risk misjudging its trajectory because we expect either apocalypse or magic, when in reality what is happening is a steady, uneven, yet undeniable shift in the frontiers of what machines can do.
Already, we see evidence that the pace of change is outstripping the old forecasting models we once relied upon. Moore's Law, which for decades guided our expectations of computing progress, observed that transistor density doubled roughly every two years. Yet AI has begun bending that curve, not by making chips faster in the traditional sense, but by developing architectures and learning systems that extract disproportionately more intelligence from each generation of hardware. Breakthroughs in areas like transformer efficiency, retrieval-augmented generation, and emergent reasoning behaviors are arriving at a tempo that confounds the neat timelines we once trusted. What we are observing is not just incremental growth but a shift in how computational progress itself is measured.
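The difference a cadence makes is simple compound arithmetic, and a few lines make it concrete. This is a minimal sketch, not a measured trend: the two-year period is the classic Moore's Law figure, while the one-year period is a purely hypothetical faster cadence used for contrast.

```python
def growth_factor(years: float, doubling_period: float) -> float:
    """Growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# The classic two-year doubling predicts a 32x gain over a decade...
print(growth_factor(10, 2.0))  # 32.0

# ...while a hypothetical one-year doubling predicts 1024x over the
# same decade. Small changes in cadence compound into huge divergences,
# which is why old forecasting models break down so quickly.
print(growth_factor(10, 1.0))  # 1024.0
```

The point of the toy calculation is that exponential curves with slightly different exponents do not diverge slightly; they diverge enormously, and intuition calibrated to one cadence fails badly on another.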
This calls for a different kind of leadership. Not the leadership of cheerleaders or doomsayers, but of people willing to pause, observe, and make sense. Leaders who can hold complexity without rushing to reductionist labels. Leaders who can bring wisdom to a conversation dominated by noise. What is required now is not a race to declare victory or defeat, but a commitment to careful discernment, to seeing what is actually happening rather than what fits our preferred narrative.
In many ways, this is a test of us more than of the machines. Can we resist the urge to collapse into extremes? Can we cultivate the discipline to notice subtle patterns? Can we reclaim our role as sense-makers in an era when pattern recognition itself is being automated? These are the questions worth wrestling with, because our answers will determine whether AI becomes just another tool or something far more destabilizing.
We cannot outsource wisdom. That has always been humanity’s edge. If we lose that because we are too busy swinging between awe and dismissal, then the failure will not belong to the machines, but to us.
The task ahead is clear: to approach AI with humility, curiosity, and seriousness.
Not as believers or skeptics, but as sense-makers. Because the future will not be written by those who scream the loudest at either extreme, but by those who are willing to quietly, consistently, and thoughtfully make sense of the messy middle.