“Just Ask AI”

“Why read books when you can just ask AI?”

I keep hearing this. As someone who uses AI every day — for coding, research, writing, analysis — I understand the appeal. AI answers questions remarkably well.

But this framing misses what reading and thinking actually do. They don’t just move information from one place to another. They reshape how your mind processes the world. That reshaping — the slow change in how you see, connect, and judge — cannot be outsourced.

A view beyond the immediate — perspective changes everything

What many people want from AI is simple: skip the effort, avoid the discomfort, get the result. For routine, well-defined tasks — formatting documents, summarising reports, generating boilerplate — this works. AI handles the repetitive brilliantly.

But for anything that requires judgment, strategy, or creative decision-making, the same people end up frustrated. “This AI is useless.” The tool isn’t the problem. The expectation is.

How I Actually Use AI

My experience has been the opposite of what the headlines promise. AI hasn’t replaced my thinking — it has intensified it. Every meaningful conversation triggers questions I hadn’t considered, exposes assumptions I didn’t know I had, and forces me to articulate things I’d left vague.

The key difference is this: I don’t look for AI’s best performance. I look for its limits.

Most people test AI on its strengths — impressive demos — then apply it to their own problems, which rarely match those strengths, and conclude it’s broken. I do the opposite. I probe where it fails, where it hallucinates, where it gives confident-sounding nonsense. Once you map the ceiling, you can deploy it precisely.

Two roles, not one

Consultant. AI’s knowledge base is vast. It connects ideas across domains, surfaces perspectives I’d need weeks of reading to find. The “aha moments” that used to come from months of study now happen in concentrated bursts during a single conversation.

Operator. I’m a creator and strategist, but not an operator. I’ll obsessively prototype and iterate on a new idea — but once it becomes routine, the repetition drains me faster than the creative work. That’s not laziness; it’s a cognitive pattern I’ve had to accept. AI fills that role precisely — repetitive, reference-based, well-defined work is where it excels.

The mistake is collapsing these into one. AI is a superb operator. A useful consultant. It is not your thinker.

Recombination Is Not Understanding

AI is extraordinarily good at playing games humans have already designed — optimising business models, navigating social systems, solving quantifiable problems. It operates within our constructs faster and more thoroughly than any human.

But it cannot perceive the world through lived experience, generate genuinely new paradigms of understanding, or feel the texture of a problem before it has a name. This isn’t a limitation that more compute will solve. It’s structural.

Look at how AI actually achieves its breakthroughs:

In protein design, RFdiffusion (Nature, 2023) created proteins that don’t exist in nature — structures that violated conventional rules yet folded correctly when synthesised. ProGen generated functional enzymes with as little as 18% similarity to any natural protein. In rocket engineering, NASA’s AI-driven topology optimisation produced combustion chamber components with organic, bone-like geometries no engineer would draw — yet they outperformed conventional designs.

These results are beyond human intuition. But not beyond physics. AI explored a design space too vast for biological cognition and found novel combinations within existing principles. The principles themselves — the physics, the chemistry — still came from human understanding.

The pattern: AI doesn’t create new truths. It recombines known truths at a scale we can’t match. Recombination is powerful. But it is not understanding.

Connected networks — powerful at recombination, constrained by design

Here’s what makes this concrete: the major breakthroughs in neural networks didn’t come from making models more general. They came from constraining them. Convolutional networks restrict each neuron to a small image patch. Transformers selectively attend to relevant input, ignoring most of it. The Lottery Ticket Hypothesis (Frankle & Carbin, 2019) found you can prune 90%+ of connections with no loss in performance.
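
To make that concrete: below is a minimal sketch of magnitude pruning, the simple mechanism that Lottery-Ticket-style sparsity builds on. It keeps only the strongest connections and zeroes the rest. The layer shape, threshold, and numbers are invented for illustration, not taken from the paper.

```python
# Illustrative sketch of magnitude pruning: zero out the smallest 90% of
# weights and keep only the strongest connections. Shapes and numbers are
# made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256))          # a dense layer's weight matrix

prune_fraction = 0.9                            # drop 90% of connections
threshold = np.quantile(np.abs(weights), prune_fraction)
mask = np.abs(weights) >= threshold             # keep only the largest weights

sparse_weights = weights * mask
print(f"connections kept: {mask.mean():.1%}")   # roughly 10% of the original
```

The point of the sketch is the shape of the idea, not the numbers: a drastic constraint on which connections are allowed to exist, leaving performance to come from the few that remain.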

Unconstrained models can represent anything but learn nothing well. Constraints that match reality create focus. Focus enables depth.

The question that stays with me: should humans follow the same strategy? Narrow our perception until we’re optimised for a single domain? Become a very efficient function?

I don’t think so.

The World Is Larger Than Your Model of It

Here’s a fact that changed how I think about thinking.

Neuroscience research (Douglas & Martin, 2004) shows that only 5-10% of synapses on a cortical neuron come from external sensory input. The remaining 90-95% come from other neurons within the brain — internal feedback loops, predictions, associations.

The brain’s resting activity — the Default Mode Network discovered by Marcus Raichle — consumes the majority of its energy. Externally driven task activity adds comparatively little. Karl Friston’s Free Energy Principle goes further: perception is not passive reception. It’s active prediction. The brain generates reality from the inside out; sensory input serves as correction signals, updating the model when predictions are wrong.
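
A toy way to picture that loop (an illustration only, not Friston’s actual mathematics; the scenario and numbers are invented): the internal estimate changes only by a small fraction of the prediction error, so the model does most of the work and the senses merely nudge it.

```python
# Toy sketch of a prediction-error loop: the internal model carries the
# estimate, and the sensory signal only nudges it when predictions miss.
# Values are invented for illustration.
def update(belief, observation, learning_rate=0.1):
    prediction_error = observation - belief            # surprise: world vs. model
    return belief + learning_rate * prediction_error   # small correction

belief = 20.0                                  # internal estimate (say, room temperature)
for observation in [22.0, 21.5, 22.3, 21.8]:   # noisy sensory samples
    belief = update(belief, observation)
print(round(belief, 2))                        # the model drifts toward reality, slowly
```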

What you “see” is mostly what your brain predicted you would see. External reality is a calibration tool, not the main input.

This means something profound: most of your mental life is internal narrative, not external observation. We live inside our own stories and use the world to edit them.

The ocean of truth — always larger than what we’ve found

The gap between internal model and external reality is always there. The question is whether you know it, respect it, and actively work to close it — or let your model run unchecked.

The great minds who expanded human understanding shared one trait: they held their own models loosely. They treated the world with near-awe — not naively, but because they understood that reality is always more complex than any representation of it.

Newton, near the end of his life:

“I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.”

Feynman, in his 1974 Caltech commencement address:

“The first principle is that you must not fool yourself — and you are the easiest person to fool.”

This isn’t modesty. It’s a cognitive strategy rooted in how the brain actually learns.

The Small Self That Sees Further

The Dunning-Kruger effect (Kruger & Dunning, 1999) shows that low-competence individuals overestimate their abilities — because recognising incompetence requires the skills they lack. The inverse matters more: high-competence individuals maintain awareness of their gaps. That awareness is what keeps them learning.

Carol Dweck’s growth mindset research found something striking at the neural level: people who believe their abilities are fixed show reduced neural processing of corrective feedback — their brains literally absorb less information after errors (Moser et al., 2011). People who see themselves as works-in-progress process errors more deeply. Their brains stay open to the world’s corrections.

Kaplan, Gimbel & Harris (2016) went further: when identity-linked beliefs are challenged, the brain activates threat responses — the amygdala fires, analytical processing shuts down. The feeling of “I know” triggers the same neural circuits as physical safety. The brain resists new information the way a body resists a blow.

In Zen, this is called Shoshin (初心) — beginner’s mind:

“In the beginner’s mind there are many possibilities, but in the expert’s mind there are few.”
Shunryu Suzuki, 1970

Across neuroscience, psychology, and contemplative traditions, the pattern is consistent: a smaller self creates a larger aperture. When you stop filling the frame with certainty, you see more of what’s actually there.

This is what I mean by respect for the world. Not reverence in a religious sense — but recognising that reality is always richer, stranger, and more intricate than your current understanding. That recognition is the engine of learning. Without it, you stop updating. You become a closed loop — 90% internal narrative, 10% reality, and a shrinking willingness to let the 10% change the 90%.

The Danger of AI-Inflated Certainty

This is exactly what I’ve watched happen as AI becomes widespread.

People use AI and become more certain, not less. Confidence grows. Curiosity shrinks. AI reinforces your existing frame — ask it to confirm your view and it will. The output feels like validation, and validation feels like competence.

But competence built on AI answers without underlying understanding is hollow. It works until context shifts — then collapses, because no mental model was built to adapt.

When everyone has this hollow competence — when “good enough” output costs nothing — markets reveal the truth. Desktop publishing, stock photography, music production, app development: barriers drop, supply explodes, unit value collapses, the top 1% captures 90%+ of value. The winners are never those who merely used the tool. They brought what the tool couldn’t.

The question shifts from “what can AI do?” to “what can I do with AI that others can’t?”

That answer is never about the tool.

What Remains Yours

If AI commoditises output, what stays scarce?

Not information — AI has more. Not speed — AI is faster. Not pattern matching — AI sees patterns across more data than any human can.

What stays scarce:

  • Perception — noticing what matters before it has a name
  • Judgment — knowing which of ten AI options fits this situation
  • Taste — the irreducible sense of what feels true, elegant, or resonant
  • Self-awareness — understanding your cognitive patterns well enough to deploy yourself effectively
  • Willingness to be wrong — the prerequisite for learning anything genuinely new

These are functions of a mind that stays open, stays humble, and stays engaged with direct experience. A mind that treats the world as something to understand — not optimise. A mind that knows it is small, and sees further because of it.

AI handles the repeatable, the quantifiable, the well-defined. It surfaces knowledge you’d take years to find. It operates where you’d rather not.

But the thinking — the perception, the judgment, the thirst to understand things as they actually are rather than as you wish them to be — that stays yours.

In a world where everyone has the same AI, that’s exactly where your value lives.

The people who understand their own smallness will use AI to see further. The people who use AI to feel bigger will end up seeing less.

Technology should be like breathing air — invisible yet essential. But air doesn’t breathe for you. (Yes, a ventilator can. But if you’re on life support, you’ve got bigger problems than your AI strategy.)