We're living through something strange: a moment where you can get instant answers to almost any question, generate a month's worth of content in minutes, and solve problems you once spent days wrestling with—all before your morning coffee cools.

It feels like a superpower. And in many ways, it is.

But here's what nobody's talking about: most people are using AI to think less, when the real opportunity is to think better.

The difference isn't subtle. It's the difference between becoming sharper and becoming dependent. Between leveraging intelligence and outsourcing it. Between using AI as a cognitive partner and letting it become a cognitive replacement.

Let me be clear about what's at stake here: AI should handle repetition, analysis, and pattern detection. Humans should handle judgment, tradeoffs, meaning, and responsibility. Most people let AI think for them. The winners let AI think around them.

This newsletter is about understanding that distinction—and redesigning your relationship with AI before it redesigns you.

Why Offloading Thinking Feels Productive But Weakens Judgment

There's an intoxicating efficiency to typing a question and getting a polished answer in seconds. Need a strategy document? Done. A marketing plan? Here. An analysis of your data? Ready.

The dopamine hit is real. You feel productive. Your to-do list shrinks. You're "getting things done."

But something else is happening beneath the surface: you're training yourself not to think.

Every time you accept an AI-generated answer without wrestling with the problem yourself, you're skipping the cognitive labor that builds judgment. You're outsourcing the struggle that creates understanding. You're replacing the messy, uncomfortable process of thinking with the smooth, satisfying feeling of having thought.

Think about it like this: if you use a calculator for every math problem, you never develop number sense. You can get the right answer, but you don't understand numbers. You can't estimate, can't spot errors, can't reason about quantities in your head.

The same thing happens with thinking.

When you consistently offload decisions to AI—"Should I hire this person?" "How should I approach this negotiation?" "What's the strategic priority here?"—you stop developing the pattern recognition and intuition that makes you good at those decisions. You become a middleman between the AI and the outcome, adding less value over time.

The cruel irony? You feel more productive than ever while your actual judgment atrophies.

This isn't a hypothetical concern. Studies on cognitive offloading show that the more we rely on external systems for thinking, the less capable we become of thinking without them. GPS navigation reduces our spatial reasoning. Spell-check weakens our spelling. And AI that thinks for us weakens our ability to think, period.

The Difference Between "Assisted Thinking" and "Outsourced Thinking"

Not all AI use is created equal. The critical distinction is this:

Assisted thinking: AI does the heavy lifting so you can focus on higher-order decisions.

Outsourced thinking: AI does the deciding, and you just execute.

Let me give you concrete examples.

Assisted Thinking:

  • You ask AI to analyze customer feedback data and surface patterns, then you decide which insights are meaningful and what actions to take.

  • You use AI to draft three versions of a difficult email, then you choose which approach aligns with your values and relationship goals.

  • You have AI summarize a 50-page report, then you read the summary critically, question the framing, and form your own interpretation.

Outsourced Thinking:

  • You ask AI "Should I hire this candidate?" and implement whatever it recommends.

  • You tell AI to "write a strategy for Q2" and present it to your team without deeply engaging with the logic.

  • You use AI to decide how to spend your budget, what to prioritize, or how to respond to a crisis—then just follow the output.

See the difference?

In assisted thinking, AI expands your capacity. It handles the grunt work—the data processing, the initial synthesis, the pattern matching—so you can spend your cognitive energy on judgment calls that require context, values, and human nuance.

In outsourced thinking, AI replaces your capacity. You become an executor of someone else's (or something else's) reasoning, disconnected from the why behind the what.

The test is simple: Could you explain and defend the reasoning behind the decision? If yes, you're being assisted. If no, you've outsourced.

Where Human Thinking Actually Matters Now

If AI is handling analysis and pattern detection, what's left for us?

Everything that requires being human.

Here's where your thinking is irreplaceable:

1. Judgment in Ambiguous Situations

AI is trained on patterns from the past. It's excellent at recognizing what has worked before. But the most important decisions happen in situations that don't have clear precedents—new markets, cultural shifts, ethical dilemmas, unprecedented crises.

When the playbook doesn't exist, human judgment is all you have.

2. Navigating Tradeoffs

Every meaningful decision involves tradeoffs. Speed vs. quality. Short-term revenue vs. long-term trust. Efficiency vs. humanity.

AI can tell you the options. But choosing between them requires values, priorities, and a vision for what you're building. That's your job.

3. Creating Meaning

AI can generate content, but it can't imbue it with meaning. It can write a eulogy, but it can't grieve. It can draft a mission statement, but it can't believe in it.

Meaning comes from lived experience, from caring about something deeply, from connecting ideas to human truths. That's where your creative and intellectual contribution matters.

4. Taking Responsibility

When something goes wrong—when the strategy fails, the hire doesn't work out, the message backfires—the responsibility is yours, not the AI's.

You can't outsource accountability. And if you've outsourced the thinking, you'll struggle to own the outcome.

5. Asking Better Questions

The quality of what AI produces depends entirely on the quality of what you ask it. Vague questions get vague answers. Shallow prompts get shallow thinking.

Learning to ask precise, layered, strategic questions—questions that challenge assumptions and expose blindspots—is a uniquely human skill that AI makes more valuable, not less.

How to Design Workflows That Protect Judgment

So how do you actually use AI without letting it hollow out your thinking? You design workflows that deliberately preserve the space where judgment happens.

Here's a framework:

1. Separate Generation from Evaluation

Never accept the first output. Treat AI as a draft generator, not a decision maker.

  • Generate multiple options, then evaluate them yourself.

  • Ask AI for the reasoning behind its recommendations, then stress-test that reasoning.

  • Use AI to surface possibilities you wouldn't have considered, then apply your own filters.

2. Use AI for Breadth, Reserve Your Focus for Depth

Let AI scan wide; you go deep.

  • Have AI summarize ten articles on a topic, then choose one to read in full and think critically about.

  • Use AI to brainstorm 20 ideas, then spend your time refining the two that resonate.

  • Let AI handle the initial research; you do the synthesis and interpretation.

3. Engage Before You Automate

Before handing a task to AI, do it yourself at least once. Understand the process, the nuances, the edge cases. Then you can delegate intelligently and spot when the AI gets it wrong.

4. Build "Friction Points" Into Your Workflow

Deliberately slow down before making AI-influenced decisions.

  • Wait 24 hours before acting on a major AI recommendation.

  • Explain the AI's reasoning to someone else and see if it holds up under scrutiny.

  • Write out your own thinking first, then compare it to what the AI suggests.

5. Track What You're Offloading

Keep a log of what you're delegating to AI. Review it weekly and ask yourself: Am I offloading tasks, or am I offloading thinking?

If you're consistently delegating judgment calls, course-correct.

A Simple Weekly Audit: Where Am I Thinking? Where Am I Delegating?

Here's a practice that takes five minutes and can fundamentally shift how you use AI:

Every Friday, review your week and ask:

Where did I think deeply this week?

  • What problems did I wrestle with myself?

  • What decisions required me to engage my judgment?

  • Where did I feel the productive discomfort of not knowing the answer immediately?

Write down 2-3 examples.

Where did I delegate thinking this week?

  • What did I hand to AI without critically evaluating?

  • What decisions did I make based primarily on AI recommendations?

  • Where did I skip the cognitive work and go straight to the output?

Write down 2-3 examples.

What's the ratio?

If "delegating" consistently outweighs "thinking deeply," you're in dangerous territory.

Adjustment question:

What's one thing I delegated this week that I should have thought through myself?

Next week, reclaim it.

The Bottom Line

AI is not the enemy of thinking. Used well, it's the greatest thinking tool we've ever had.

But only if you use it to extend your cognition, not replace it.

The people who thrive in this new era won't be the ones who use AI the most. They'll be the ones who use it most deliberately—who understand where their thinking adds irreplaceable value and where AI can genuinely amplify their capabilities.

Most people let AI think for them. The winners let AI think around them.

The choice is yours. But make it consciously.

Because in a world where everyone has access to the same AI, the differentiator isn't the tool.

It's the quality of the mind wielding it.

What's one area where you've been outsourcing thinking without realizing it? Hit reply and let me know. I read every response.
