The Science of Time Leaks: What Research Actually Says About Attention, Interruption, and Recovery

What peer-reviewed research tells us about how and why time leaks — from Gloria Mark's interruption studies to the neuroscience of task-switching and cognitive recovery.

Most productivity writing treats the research on attention and interruption as settled. Cite a finding, build a framework, sell a system. The actual research is messier, more interesting, and more nuanced than the pop-science version suggests.

This article covers what the primary studies actually found, where the evidence is strong, where it’s contested, and what the practical implications are for anyone trying to understand why their time disappears.


Gloria Mark and the Cost of Interruption

Gloria Mark’s work at UC Irvine is the most frequently cited research on workplace interruption. Her observational studies, conducted across multiple organizations in the early 2000s and updated in subsequent years, involved researchers shadowing knowledge workers and recording their task-switching patterns in real time.

The finding that entered popular culture: after a significant interruption, workers took an average of 23 minutes and 15 seconds to fully return to the original task.

Several clarifications are worth making.

First, the 23-minute figure is a mean across a heterogeneous sample of interruptions and task types. It represents the statistical average, not a universal law. Some interruptions have lower recovery costs; some (particularly those that interrupt cognitively demanding creative work) have higher ones.

Second, Mark and colleagues distinguished between interruptions of different types. Not all interruptions are equivalent. An interruption from a trusted colleague with a relevant question has different cognitive properties than an ambient notification that triggers a checking behavior with no meaningful payoff. The recovery curve varies accordingly.

Third, and less commonly cited: Mark’s research also found that workers often interrupt themselves. Approximately 44 percent of interruptions in her observational data were self-initiated — the worker voluntarily switching tasks or checking a communication channel without any external trigger. This finding complicates the framing of interruptions as entirely external problems and suggests that any complete account of time leaks must address internal switching impulses as well as external triggers.

In later work, Mark studied the relationship between notification exposure and stress, finding that workers who had email notifications turned off during a period of the workday showed higher heart rate variability (an indicator of lower physiological stress) and reported greater ability to concentrate than those with notifications active. This study, while relatively small, provides physiological support for the attention cost of ambient notification presence.

The practical implication: Interruption costs are real, substantial, and extend well beyond the duration of the interrupting event. Both external and self-initiated interruptions contribute to the total cost, suggesting that interventions need to address internal as well as external triggers.


Jonathan Spira and the Economics of Information Overload

Jonathan Spira, formerly chief analyst at Basex, studied the economic costs of information overload in knowledge work. His research, published in Overload! (2011), estimated that unnecessary interruptions and the work required to recover from them cost the U.S. economy approximately $900 billion annually in lost productive capacity.

This figure has been widely cited and equally widely challenged — producing a precise dollar figure from behavioral estimates involves considerable methodological uncertainty. But the directional finding — that information overload imposes large economic costs — is consistent with the observational research and the self-reports of workers across industries.

Spira’s more specific and actionable findings:

  • Workers in interruption-heavy environments spend 28 percent of their workday recovering from interruptions
  • The most expensive interruptions are those that occur during complex cognitive work, where the reconstruction cost (rebuilding the mental context required to re-engage) is highest
  • Organizations underinvest in reducing interruptions partly because the costs are distributed invisibly — unlike a meeting that appears on a calendar, the recovery cost of an interruption has no timestamp

The last point is important for understanding why time leaks persist. They don’t appear in any standard measurement of how time is spent. A time audit that records calendar meetings and self-reported task time will entirely miss the recovery costs that interruptions generate. You can’t fix what you can’t measure — and the most expensive costs are the ones that don’t show up in any log.
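The invisibility of recovery costs can be made concrete with back-of-the-envelope arithmetic. The sketch below is illustrative only: the interruption count and per-interruption duration are assumed values, not figures from the cited studies; only the 23-minute recovery time comes from Mark's reported mean.

```python
# Illustrative arithmetic only. The first two values are assumptions for
# the example; the recovery time is Mark's reported average.

interruptions_per_day = 12   # assumed number of significant interruptions
visible_minutes_each = 2     # assumed duration of the interruption itself
recovery_minutes_each = 23   # Mark's average full-resumption time

visible_cost = interruptions_per_day * visible_minutes_each    # logged time
hidden_cost = interruptions_per_day * recovery_minutes_each    # unlogged time

print(f"Cost a time audit sees:   {visible_cost} min/day")
print(f"Recovery cost it misses:  {hidden_cost} min/day")
print(f"Invisible share of total: {hidden_cost / (visible_cost + hidden_cost):.0%}")
```

Under these assumptions, the unlogged recovery time exceeds the logged interruption time by more than an order of magnitude — which is exactly why a calendar-and-task-log audit understates the damage.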


Earl Miller and the Neuroscience of Task-Switching

Earl Miller, a neuroscientist at MIT’s Picower Institute for Learning and Memory, has published extensively on the brain mechanisms underlying attention and cognitive control. His work provides a neural basis for the behavioral costs documented by Mark and Spira.

The central finding: the human brain cannot focus on two cognitively demanding tasks simultaneously. What people experience as multitasking is neurologically a rapid sequential switching between attentional states, with a switching cost at each transition.

Miller’s research using electroencephalography (EEG) demonstrated that even brief delays — on the order of seconds — occur when the brain transitions between tasks. These delays reflect the neural work required to load and unload the task context: the representation of the task’s current state, goals, and relevant background knowledge held in working memory.

Working memory, the cognitive system that maintains this task context, has a capacity constraint of roughly four items at once (Nelson Cowan’s updated estimate, revising George Miller’s older “7 plus or minus 2” figure). Complex tasks quickly saturate working memory. When an interruption forces the brain to switch, the partially completed task context must be partially overwritten by the interrupting content — which is why reassembling the original context after the interruption takes time and effort.
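The overwriting mechanism can be caricatured as a fixed-size buffer. This is my toy illustration of the capacity constraint, not a model from Miller's or Cowan's work; the item labels are invented for the example.

```python
from collections import deque

# Toy sketch: working memory as a fixed-capacity buffer that evicts the
# oldest item whenever a new one is loaded. The four-item limit follows
# Cowan's estimate; everything else here is invented for illustration.

working_memory = deque(maxlen=4)

# Load the context of the task in progress.
for item in ["goal", "current step", "key variable", "next action"]:
    working_memory.append(item)

# An interruption loads its own content, evicting part of the task context.
for item in ["who is asking", "what they need"]:
    working_memory.append(item)

print(list(working_memory))
```

After the interruption, the earliest task items have been evicted; resuming the task means reconstructing them, which is the recovery cost the behavioral studies measure.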

The practical implication: Context-switch leaks aren’t a productivity myth. They’re a cognitive architecture constraint. Reducing switching frequency isn’t a preference; it’s working with the brain’s actual operational limits rather than against them.


Adrian Ward and the Mere Presence Effect

A 2017 study by Adrian Ward and colleagues at the University of Texas at Austin investigated whether smartphone presence affected cognitive capacity even when participants weren’t using their phones.

The experiment: participants completed cognitively demanding tasks while their phones were either on the desk face-down, in their pocket, or in another room. All participants were instructed to silence notifications and keep their phones out of their hands.

The result: participants whose phones were in another room performed significantly better on the cognitive tasks than those whose phones were on the desk or in their pocket. The difference held even when controlling for use — participants who never touched their phones during the experiment still showed the capacity reduction if the phone was visible.

Ward’s interpretation: maintaining awareness of the phone as a potential distraction source consumes working memory capacity. The phone doesn’t need to produce an interruption to impose a cognitive cost; its presence alone is sufficient.

This finding has implications beyond smartphones. Any item in your visual field that carries potential task-relevance — a second monitor showing email, a sticky note with a pending task, an open browser tab with a notification badge — may impose a similar ambient cognitive cost.

The study’s practical implication is among the most action-oriented in the interruption research literature: physical removal of distraction sources is more effective than behavioral management of responses to them.


Cal Newport and the Shallow Work Accumulation Effect

Cal Newport’s argument in Deep Work (2016) is not primarily an empirical claim but a synthesis of existing research into a practical framework. His contribution is the concept of shallow work accumulation: the idea that sustained exposure to interruption-heavy work environments doesn’t just cost time in each session, but gradually erodes the cognitive capacity for sustained concentration as a skill.

Newport draws on neuroplasticity research to argue that the brain adapts to the attentional patterns it repeatedly practices. A brain that spends most of its working hours in a state of fragmented attention — checking, responding, switching — becomes increasingly uncomfortable and less capable of the sustained deep focus that high-value creative and analytical work requires.

This claim is directionally consistent with attention research, though Newport acknowledges it’s more speculative than the direct interruption-cost findings. The neuroplasticity argument, if correct, changes the stakes: time leaks aren’t just stealing time in the present; they’re degrading capacity for high-value work over time.

The important caveat: Newport’s framing of deep work as an increasingly rare skill in an increasingly distracted workforce is a hypothesis, not a controlled finding. The argument is compelling and consistent with the research, but it should be held as a well-reasoned inference rather than an established empirical claim.


What the Research Doesn’t Prove

Intellectual honesty about this literature requires noting several limitations.

Replication: The behavioral research on interruptions faces some of the same replication concerns as other areas of social psychology. Mark’s observational findings are generally well-regarded, but the specific numerical claims (precisely 23 minutes) are averages across limited samples, not universal constants.

Ecological validity: Many studies are conducted in controlled or semi-controlled conditions that may not perfectly reflect real-world knowledge work. The generalizability of laboratory attention research to complex professional contexts involves assumptions.

Individual variation: The research describes population averages. Individual differences in attentional capacity, task type, and working context produce meaningful variation. The 23-minute recovery figure is an average; your experience may differ.

Causality: Much of the field research is observational. When Spira finds that workers in high-interruption environments report lower productivity, causality is unclear — highly demanding environments may create both high interruptions and high productivity pressure as joint outcomes.

None of these caveats reverse the directional conclusions. The case for reducing interruption frequency, protecting continuous work time, and addressing distraction sources is well-supported even accounting for methodological limitations. The numbers should be held loosely; the principles should be held firmly.


What the Research Recommends, Directly

Translating the research into practical terms:

Reduce interruption frequency. Even if the 23-minute recovery figure is imprecise, the basic finding that interruption recovery takes much longer than the interruption itself is robust. Fewer interruptions per work session produce disproportionate gains in cognitive output.
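The disproportionate part of the gain is easiest to see in a toy model of my own construction (not from the cited studies): an eight-hour day, evenly spaced interruptions, and a fixed recovery period after each one.

```python
# Toy model, my construction: evenly spaced interruptions across a workday,
# each costing a fixed recovery period before focus resumes.

WORKDAY = 480   # minutes in an 8-hour day
RECOVERY = 23   # minutes lost re-loading context after each interruption

for n in (0, 2, 6, 12):
    longest_block = WORKDAY / (n + 1)       # longest uninterrupted stretch
    total_usable = WORKDAY - n * RECOVERY   # minutes left after recovery
    print(f"{n:2d} interruptions: longest block {longest_block:5.1f} min, "
          f"usable time {total_usable} min")
```

Total usable time falls linearly with interruption count, but the longest uninterrupted block collapses much faster: in this model, six interruptions consume under a third of the day's minutes yet leave no stretch long enough for a 90-minute deep session. That asymmetry is why each interruption avoided is worth more than its recovery cost alone.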

Design environments, not responses. Ward’s mere-presence research supports environment design over behavioral management. Remove distraction sources; don’t just plan to resist them.

Protect cognitive load during complex work. Miller’s working memory research supports minimizing competing demands during high-complexity tasks. This means context switches are most expensive during the most valuable work — which is exactly when they most need to be eliminated.

Distinguish urgency from priority. Spira’s research on information overload consistently finds that most interruptions are assessed by receivers as less urgent after the fact than they seemed in the moment. Building the habit of deferred response (with a defined check-in structure) rarely produces the negative consequences that always-on availability is meant to prevent.

The science doesn’t provide a complete protocol. It provides the empirical grounding for why a protocol is necessary — and for why the leaks cost more than they appear to.

Frequently Asked Questions

  • Is the 23-minute recovery finding still considered valid?

    Gloria Mark's research on interruption recovery is frequently cited with varying levels of precision. Her studies documented that full resumption of complex cognitive work after significant interruption takes an average of around 23 minutes — but this figure represents an average across task types and individuals, not a fixed law. The core finding that interruption recovery takes substantially longer than the interruption itself is robustly supported across multiple studies. What's contested is the precise number, not the directional claim. For practical purposes, the implication — that frequent interruptions impose cognitive costs far larger than their duration suggests — is well-established.

  • Does multitasking really not work?

    The research on multitasking is specific: for tasks that compete for the same cognitive resources (attention, working memory), simultaneous performance produces worse results than sequential processing. Earl Miller's neuroscience research at MIT established that what we experience as multitasking is actually rapid task-switching, with a cognitive cost at each switch. The relevant question for time management isn't whether multitasking is possible but whether your current work patterns impose unnecessary switching costs that erode performance. Most knowledge workers find they do.

  • What does the research say about smartphones and productivity?

    Adrian Ward's 2017 study at the University of Texas is the most cited finding: smartphone presence on a desk reduces available cognitive capacity even when the phone is face-down and notifications are off. The mechanism appears to be cognitive — maintaining awareness of the phone's potential as a distraction source consumes working memory. This finding has practical implications for workspace design: removing the phone from visual range, rather than silencing it, is the more effective intervention.