Why AI Can Make Your Attention Worse (And What to Do Instead)

The popular assumption that AI tools improve focus by handling busywork misses a critical problem: many AI workflows create new attention costs that outpace the savings. Here is the honest picture.

Let’s dispense with the standard narrative first: AI handles your busywork so you can focus on your most important thinking. Tools like Claude and others generate first drafts, summarize documents, structure your day, and answer questions that would otherwise require 20 minutes of research. That narrative is true. AI genuinely does reduce certain cognitive loads.

The problem is that this story is incomplete. And the part it omits is responsible for a real and growing phenomenon among knowledge workers: AI use that leaves people feeling more productive while their capacity for sustained, independent thought quietly degrades.


The Myth: AI Is Attentionally Neutral

The implicit assumption in most AI productivity advice is that AI is an attentionally neutral tool — something you pick up and put down without attention cost. You ask a question, it answers, you return to your work.

Gloria Mark’s research at UC Irvine consistently undermines this assumption. Her studies found that after an interruption, knowledge workers take an average of about 23 minutes to return to the original task at full cognitive engagement. The interruption doesn’t need to be long — a 90-second email check and a 90-second AI query have comparable recovery costs when they occur mid-session.

The research also found that approximately 44% of interruptions in knowledge work are self-initiated. People do not wait to be interrupted. They interrupt themselves, then feel as though something external disrupted their work.

An AI tool open in a background tab is a self-interruption machine. It responds faster than email, engages more than most notifications, and is easier to rationalize as productive because the output it generates is substantive. Each query feels like work. The compounded recovery cost is invisible until you try to measure actual deep-focus time.
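To make that invisible cost concrete, here is a back-of-envelope sketch. The ~23-minute recovery figure is Mark’s; the query count and per-query duration are hypothetical assumptions, and treating each recovery as fully independent overstates the cost when interruptions cluster:

```python
# Back-of-envelope estimate of the hidden cost of mid-session AI queries.
# RECOVERY_MINUTES is Gloria Mark's ~23-minute refocus figure; the other
# two numbers are hypothetical assumptions for illustration.

RECOVERY_MINUTES = 23   # average time to return to full engagement
QUERIES_PER_DAY = 6     # assumed mid-session AI queries per day
QUERY_MINUTES = 1.5     # the visible cost: ~90 seconds per query

visible = QUERIES_PER_DAY * QUERY_MINUTES      # 9 minutes/day
hidden = QUERIES_PER_DAY * RECOVERY_MINUTES    # 138 minutes/day

print(f"Visible query time:   {visible:.0f} min/day")
print(f"Hidden recovery cost: {hidden:.0f} min/day")
```

Even under these mild assumptions, the recovery cost exceeds two hours a day — roughly fifteen times the time the queries themselves take — which is why it never shows up in any log of “time spent using AI.”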


The Autocomplete Cognition Problem

Nicholas Carr made a broader version of this argument in The Shallows (2010). His concern was not AI specifically — it was the general principle that the cognitive tools we use shape the neural circuitry we develop.

Sustained reading of long-form text builds capacity for sustained analytical thought. Hyperlinked, fragmented, notification-interrupted reading builds a brain adapted for fragmentation. The research Carr cites on neuroplasticity suggests these are not just metaphors — the patterns of cognitive engagement we practice become the patterns our brains are best at.

The AI autocomplete version of this concern: if every analytical writing task gets completed with AI autocomplete, the cognitive exercise of holding a complex argument in working memory and wrestling it into clear prose never happens. That exercise is the mechanism by which analytical writing skills develop.

This does not mean autocomplete is harmful for routine writing. A status update or a meeting summary that AI drafts in 30 seconds rather than 5 minutes is genuinely freed cognitive capacity. The concern applies specifically to the writing that is simultaneously the thinking — analytical memos, strategic documents, problem formulations. For those tasks, using AI as the composer means bypassing the cognitive work that makes the output valuable.


The “Always-On” Companion Problem

A second pattern that degrades attention more subtly: using AI as a constant conversational companion throughout the day.

This use pattern is understandable. AI assistants are extraordinarily responsive, non-judgmental, and available at any hour. They provide the social and intellectual experience of thinking with someone without the coordination overhead of actual collaboration.

But Johann Hari’s framework in Stolen Focus is useful here: attention is not just a per-session resource. It is a capacity that is trained or degraded across weeks and months by the patterns of engagement we maintain. A brain that has access to an always-on thinking partner — one that never leaves the conversational ball in your court for long — may be building less capacity for extended solo thinking than a brain that regularly engages hard problems in silence.

The research on this specific question is newer and less settled than Mark’s interruption work or Carr’s neuroplasticity argument. But the mechanism is plausible and worth treating as a real risk, particularly for people who notice that extended solo thinking has become harder or less satisfying since they started using AI heavily.


The Four AI Attention Traps

Trap 1: The mid-session query. You are working on a difficult problem and hit a conceptual block. Instead of sitting with the discomfort — which is the experience of hard thinking, not evidence that you are stuck — you open AI and ask it to help. You get an answer, return to the work, and feel productive. But the discomfort you short-circuited was the productive kind: the cognitive tension that precedes an insight.

Not every mid-session block should be solved by sitting with it. Some are genuinely resource limitations where AI assistance is the right call. The question to ask: is this difficulty a productive cognitive challenge or a genuine information gap? If it is the former, the discomfort is the work.

Trap 2: Delegation creep. You start by using AI for routine communications. Then for meeting prep. Then for decision analysis. Eventually, you are running most of your cognitive output through AI in some form. Each individual delegation was reasonable. The aggregate is a workflow where you are primarily evaluating and approving AI-generated thinking rather than generating your own.

This is not always bad — evaluation requires real skill. But if you notice your independent thinking is becoming less rich, less original, or more reliant on AI prompts to initiate, delegation creep may be eroding a capacity you need.

Trap 3: The research spiral. AI makes it easy to keep asking follow-up questions. One query becomes a thread; a thread becomes a 45-minute research conversation. The output feels like learning. But there is a difference between learning that connects new information to existing understanding (which tends to be slow and uncomfortable, and to require extended reflection) and consuming AI-generated information that feels like understanding but does not go deep enough to change how you think.

Trap 4: Planning procrastination. AI makes planning feel so productive that some people use extensive planning conversations as a substitute for the actual work. If you have ever spent 40 minutes designing a perfect AI-assisted project structure and then done little of the actual project, you have encountered this trap. AI planning is only valuable when it reduces friction on execution. When it replaces execution, it is sophisticated procrastination.


The Honest Counter-Argument

All four traps are patterns of misuse, not inherent features of AI tools. The same arguments could have been made about Google search in 2005 (and some researchers did make them), about personal computers in the 1990s, and about calculators before that.

The people who develop expertise with any cognitive tool — who use it to extend their thinking rather than replace it — are not worse thinkers than those who avoided the tool. They are typically better at their domain while possessing the tool-specific skills the others lack.

The question is not whether to use AI. It is whether you are using AI in a way that expands or constrains your cognitive capacity over time. That question requires honest self-assessment that most AI productivity content does not invite you to conduct.


The Three Diagnostic Questions

If you are uncertain whether your current AI use is helping or hurting your attention, ask yourself these three questions honestly:

  1. In the last month, have you spent more or less time in extended, uninterrupted focus on hard problems compared to the period before you used AI heavily? (Time in sustained solo thinking is a leading indicator of attention capacity.)

  2. When you try to think through a difficult problem without AI assistance, does it feel harder than it used to? (Some friction is normal and healthy; a complete inability to initiate without AI is a concerning signal.)

  3. Has your writing without AI assistance — your independent first-draft writing — improved, stayed the same, or declined in clarity and depth since you started using AI for writing tasks? (This is harder to assess but important for anyone whose work depends on written analytical thinking.)

If the honest answers to these questions concern you, the intervention is simple but requires conviction: dedicated periods of AI-free focus work, daily, until the capacity rebuilds. Not as a permanent rejection of AI, but as the cognitive equivalent of the exercises you do to maintain physical capability.
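If you want the first diagnostic question to rest on data rather than recollection, a minimal sketch like the following is enough. The log values and the seven-day window are illustrative assumptions — log your own AI-free focus minutes per day and compare trailing averages:

```python
# Minimal sketch for tracking AI-free focus minutes per day, so that
# "is the capacity rebuilding?" becomes measurable rather than a feeling.
# The log data below is a hypothetical example.

def weekly_average(daily_minutes, window=7):
    """Trailing average of focus minutes over the last `window` days."""
    recent = daily_minutes[-window:]
    return sum(recent) / len(recent)

# Hypothetical log: minutes of uninterrupted, AI-free focus per day.
log = [20, 25, 20, 35, 40, 45, 50, 55, 60, 70]

baseline = weekly_average(log[:7])   # the first week of the experiment
current = weekly_average(log)        # the most recent week

print(f"First-week average:  {baseline:.0f} min/day")
print(f"Latest-week average: {current:.0f} min/day")
```

A rising trailing average is the signal that the capacity is rebuilding; a flat one suggests the AI-free sessions need to be longer or more protected.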


Start Here

Block one 90-minute focus session tomorrow with no AI tools open — not minimized, fully closed. Notice what happens in the first 20 minutes. The discomfort you feel is diagnostic: it tells you how dependent the current workflow has become on AI as a cognitive crutch.


Tags: AI attention, focus degradation, deep work, cognitive offloading, attention management

Frequently Asked Questions

  • Does using AI actually reduce my ability to focus?

    Not inevitably — but patterns of AI use that involve constant background checking, instant autocomplete for all thinking tasks, and AI-generated notifications can fragment attention and, over time, reduce the tolerance for sustained independent thought. The risk is real, but it depends heavily on how AI is used, not whether it is used.
  • Is autocomplete harmful to deep thinking?

    For low-stakes writing, no. For analytical writing where the act of composition is the thinking, autocomplete can short-circuit the cognitive process that makes the work valuable. The practical rule: use AI to improve thinking you have already done, not to replace thinking you have not yet done.
  • What is the difference between productive AI use and attention-draining AI use?

    Productive AI use front-loads structure and offloads low-cognition tasks in designated operational windows. Attention-draining AI use involves constant background querying during focused work, using AI for tasks that require your own deliberate thinking, and treating AI responses as interruptions that need immediate attention.