The reason most focus tracking fails is not that people stop logging. It is that the single number they track cannot tell them what to fix.
You record a low productivity score on a Tuesday. Why was it low? Was it that you did fewer hours of deep work than planned? That you started sessions but kept abandoning them early? That your in-session attention was fragmented even during the hours you logged? The score cannot distinguish between these, so you are left guessing.
The Focus Dashboard is a three-metric framework that separates these problems so you can address each one directly.
The Three Pillars of the Framework
Metric 1: Deep Hours Per Day
What it measures: The total hours of genuinely demanding cognitive work you completed in a day — without interruption, on tasks that require real thinking.
Why it matters: Deep hours per day is your volume indicator. It answers the question “how much deep work am I actually doing?” independent of how focused you were during it.
This matters because volume and quality are separate problems. You can have excellent in-session focus and still do only 45 minutes of deep work because the rest of your day was consumed by reactive tasks. Tracking deep hours reveals whether the problem is structural (your schedule does not create deep work time) or behavioral (you have the time but are not using it for deep work).
How to measure: Self-report at the end of each session. Log start and end time. Count only sessions where you were doing cognitively demanding, high-stakes work without checking notifications or switching contexts. Do not round up generously — the goal is accuracy, not a number to feel good about.
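The end-of-session log can be reduced to a few lines of code. This is a minimal sketch, assuming a hypothetical log format of `(date, start, end)` tuples for completed sessions; sessions are assumed not to cross midnight.

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical log: one (date, start, end) entry per completed deep work session.
sessions = [
    ("2024-05-06", "09:00", "10:30"),
    ("2024-05-06", "14:00", "15:00"),
    ("2024-05-07", "09:15", "11:15"),
]

def deep_hours_per_day(sessions):
    """Sum session durations, in hours, for each logged day."""
    totals = defaultdict(float)
    for day, start, end in sessions:
        t0 = datetime.strptime(start, "%H:%M")
        t1 = datetime.strptime(end, "%H:%M")
        totals[day] += (t1 - t0).total_seconds() / 3600
    return dict(totals)

print(deep_hours_per_day(sessions))
# The sample log above works out to 2.5 hours on the first day, 2.0 on the second.
```

Whatever tool you use, the important property is the one the function enforces: only logged, bounded sessions count, so there is no room for generous rounding.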
Baseline: For most knowledge workers, 2–4 hours per day reflects strong deep work output. Anders Ericsson’s research on expert performance found that deliberate, high-intensity cognitive work rarely remains effective beyond roughly four hours per day; past that point, quality deteriorates. If you are consistently below 1.5 hours, the problem is structural.
Metric 2: Session Completion Rate
What it measures: The percentage of planned deep work sessions you complete fully — reaching the intended end time without aborting early due to distraction or disengagement.
Why it matters: Session completion rate is your environmental integrity indicator. It answers the question “when I allocate time for deep work, does that time actually happen?”
A session completion rate that stays below 60% indicates that your environment is fragmenting your work before cognitive momentum can build. This is typically an external problem — a notification that pulls you out, a colleague interruption, a meeting that runs long into your protected block — rather than an attentional problem.
Distinguishing environmental from attentional causes is the most practically useful thing the three-metric framework does. A low completion rate calls for environmental interventions: phone in another room, Slack status set to away, calendar blocks defended. Low in-session quality with a high completion rate calls for different interventions entirely.
How to measure: After each planned deep work session, record whether you completed it. “Completed” means you reached your intended end time while staying in the work. If you stopped early or left the session and returned multiple times, mark it incomplete. Divide completed sessions by total planned sessions each week.
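The weekly calculation is a single division. A minimal sketch, assuming a hypothetical week recorded as booleans (completed or not) per planned session:

```python
# Hypothetical week of planned sessions: True = completed, False = aborted early.
week = [True, True, False, True, False, True, True, True]

def completion_rate(outcomes):
    """Percentage of planned deep work sessions completed without early exit."""
    return 100 * sum(outcomes) / len(outcomes)

rate = completion_rate(week)
print(f"{rate:.0f}%")  # 6 of 8 sessions completed, i.e. 75%
```

Recording the outcome as a strict binary — no "mostly completed" — is what keeps the metric honest week over week.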
Baseline: Above 70% is functional. Below 55% for two or more consecutive weeks signals an environment worth auditing.
Metric 3: Distraction Count Per Hour
What it measures: The average number of times per hour you switch contexts — or feel the strong pull to do so — during a deep work session.
Why it matters: Distraction count per hour is your in-session attention quality indicator. It answers the question “even when I’m technically in a deep work session, how fragmented is my attention?”
This is the most granular of the three metrics and the most directly actionable. Gloria Mark’s research at UC Irvine on attention dynamics in knowledge work found that each context switch carries a recovery cost — in her studies, it took an average of over 20 minutes to return to the pre-interruption level of engagement after a significant switch. Even if you return quickly to your session after a two-minute distraction, you have likely lost significant cognitive depth.
How to measure: Keep a physical tally during sessions. A sticky note with tally marks next to your computer works well. Record every time you switch away from the task or feel a strong, sustained urge to do so. Divide the session’s tally by session length in hours.
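Transcribing the sticky-note tallies at the end of the week, the metric falls out directly. A minimal sketch, assuming hypothetical records of `(tally, hours)` per session; it computes per-session rates and a pooled average, which weights longer sessions more heavily:

```python
# Hypothetical session records: (distraction tally, session length in hours).
sessions = [(3, 1.5), (1, 2.0), (6, 1.0)]

def per_session_rates(sessions):
    """Each session's tally divided by its length in hours."""
    return [tally / hours for tally, hours in sessions]

def avg_distractions_per_hour(sessions):
    """Total distractions over total hours (longer sessions weigh more)."""
    total_tally = sum(t for t, _ in sessions)
    total_hours = sum(h for _, h in sessions)
    return total_tally / total_hours
```

The pooled average is the number to track weekly; the per-session list is worth a glance to spot one outlier session dragging the average.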
Baseline: Below two distractions per hour indicates strong in-session focus. Two to six is functional. Above six consistently suggests either environmental triggers that need to be eliminated or internal cognitive patterns worth examining — unclear task scope, ambivalence about the work, or anxiety that redirects into checking behaviors.
Reading the Dashboard as a System
The three metrics are individually useful but diagnostically powerful only when read together.
Consider what different three-metric profiles imply:
Profile A — Low deep hours, high completion rate, low distractions. When you do deep work, you are excellent at it. The problem is volume: something is consuming the hours that should be deep work time. Look at your schedule for meeting creep, reactive task accumulation, or under-protected deep work blocks.
Profile B — Adequate deep hours, low completion rate, low distractions. Your in-session attention is solid, but sessions are getting interrupted before they can run their full course. This is almost always an environmental protection problem. Your deep work blocks are visible to interruption — your phone is present, your calendar blocks are not defended, or colleagues know they can reach you.
Profile C — Adequate deep hours, high completion rate, high distractions. You are completing sessions and logging hours, but the work is fragmented and likely shallow. High distraction counts with a high completion rate suggest you are powering through sessions by force rather than depth. The work may be less cognitively demanding than you think it is, or you may be avoiding a specific difficult task by keeping yourself busy within the session.
Profile D — Low deep hours, low completion rate, high distractions. Everything is fragile. This pattern usually reflects a scheduling environment that has no protected structure for deep work. The first intervention is not behavioral — it is calendrical. Block the time before trying to improve what happens within it.
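The four profiles above can be sketched as a small lookup over weekly averages. This is an illustration, not a prescription: the cutoffs (1.5 deep hours, 70% completion, 6 distractions/hr) are taken from the baselines stated earlier, and weeks that fall between the profiles simply will not match one.

```python
def read_dashboard(deep_hours, completion_pct, distractions_per_hr):
    """Map weekly averages onto Profiles A-D from the article.
    Thresholds come from the stated baselines: <1.5 deep hours is low,
    >=70% completion is high, >6 distractions/hr is high."""
    low_hours = deep_hours < 1.5
    high_completion = completion_pct >= 70
    high_distraction = distractions_per_hr > 6

    if low_hours and high_completion and not high_distraction:
        return "A: volume problem - audit the schedule"
    if not low_hours and not high_completion and not high_distraction:
        return "B: protection problem - defend the blocks"
    if not low_hours and high_completion and high_distraction:
        return "C: depth problem - examine the work itself"
    if low_hours and not high_completion and high_distraction:
        return "D: no structure - block the time first"
    return "mixed profile - read the metrics individually"

print(read_dashboard(1.0, 85, 1.2))  # matches Profile A
```

The fall-through case is the honest one: most real weeks are mixed, and the profiles are reading aids, not diagnoses.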
The AI Layer: Weekly Pattern Detection
Once you have three weeks of data, AI analysis adds a layer of pattern detection that manual review cannot provide.
The core prompt for your weekly Focus Dashboard review:
Here is my Focus Dashboard log for the past week.
Each entry: date, deep hours, sessions planned vs. completed, avg distractions/hr.
Weekly context: avg sleep quality (1–3), meeting count, notable disruptions.
[paste log]
Please:
1. Calculate my three Focus Dashboard metrics for the week.
2. Compare them to my previous weeks if I've provided that data.
3. Identify the strongest environmental or scheduling condition correlated
with my highest-performing days.
4. Identify what changed on my lowest-performing days.
5. Tell me which of the three metrics shows the most variance — that is where
my leverage is likely highest.
The insight about variance is particularly useful. If your deep hours and distraction count are stable but your session completion rate swings from 40% to 85% week to week, your leverage is in understanding what controls session completion. That is where your attention and experimentation should go.
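If you want to sanity-check the AI's variance call yourself, one reasonable approach (an assumption of this sketch, not something the framework mandates) is the coefficient of variation — standard deviation divided by mean — which makes metrics on different scales comparable:

```python
from statistics import mean, pstdev

# Hypothetical weekly averages for each metric over five weeks.
weeks = {
    "deep_hours":      [2.1, 2.3, 2.0, 2.2, 2.1],
    "completion_pct":  [40, 85, 55, 80, 45],
    "distractions_hr": [2.5, 2.8, 2.4, 2.6, 2.5],
}

def highest_leverage_metric(weeks):
    """Rank metrics by coefficient of variation (stdev / mean).
    The metric that swings most relative to its own scale is where
    experimentation is likely to pay off first."""
    cv = {name: pstdev(vals) / mean(vals) for name, vals in weeks.items()}
    return max(cv, key=cv.get), cv

metric, scores = highest_leverage_metric(weeks)
print(metric)  # in this sample data, completion_pct swings most
```

In the sample data, deep hours and distractions are stable while completion rate oscillates between 40% and 85% — exactly the pattern the paragraph above describes.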
Beyond Time integrates this kind of structured weekly review directly into your planning workflow, so the Focus Dashboard analysis runs as part of your regular planning session rather than as a separate task.
Setting Baselines Before Setting Targets
One of the most common mistakes in focus tracking is setting targets before you know your baseline.
If you have never tracked deep hours and you set a target of four per day, you have no idea whether you are starting at 0.8 hours or 2.5 hours. The target may be wildly ambitious or uselessly easy. Either way, you will not know what improvement actually looks like.
Spend two weeks logging without targets. Observe your actual patterns. Then, with AI help, set targets that represent 15–20% improvement from your honest baseline — not aspirational numbers from productivity literature.
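The arithmetic of a 15–20% target is trivial, but writing it down keeps the number anchored to your baseline rather than to aspiration. A minimal sketch (the 17.5% default is simply the midpoint of the stated range):

```python
def set_target(baseline, improvement=0.175):
    """Target = baseline plus a 15-20% improvement (midpoint by default).
    For metrics where lower is better, such as distraction count,
    pass a negative improvement."""
    return round(baseline * (1 + improvement), 2)

# Two weeks of honest logging gave a 1.8 deep-hours/day baseline:
print(set_target(1.8))          # roughly 2.1 hours/day
print(set_target(4.5, -0.175))  # distraction target drops toward 3.7/hr
```

A baseline of 1.8 deep hours per day yields a target around 2.1 — modest on paper, but a real 17.5% gain, which is the point.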
Goodhart’s Law — “when a measure becomes a target, it ceases to be a good measure” — applies directly here. The moment you start managing your session log to hit a number, the log stops being useful. Targets should create behavioral change, not logging behavior change.
The One-Week Setup
Day 1–2: Define your deep work task categories. Set up your logging format.
Day 3–5: Log every session. Tally distractions. Rate quality. Do not analyze yet.
Day 6–7: Add weekly context (sleep, meetings, disruptions).
Week 2: Continue logging. Resist the urge to draw conclusions from one week.
Week 3: Run your first AI analysis. Look for one pattern that explains your lowest-performing sessions. Design one small test.
The framework only pays off if you are patient with it. The data you need for meaningful pattern detection takes time to accumulate. One distraction-count tally on a Tuesday is noise. Twelve sessions across three weeks is signal.
Begin your first session log in the next 24 hours. Label it with today’s date, and treat whatever number you get as information, not judgment.
Related: Complete Guide to Focus Metrics and AI · Why Focus Scores Are Misleading · Science of Focus Measurement
Tags: focus metrics, Focus Dashboard, deep work framework, session completion rate, distraction tracking
Frequently Asked Questions
What are the three metrics in the Focus Dashboard?
Deep hours per day (volume of cognitively demanding work), session completion rate (percentage of planned sessions completed without early exit), and distraction count per hour (frequency of attention switches during sessions).

Why track three metrics instead of one?
A single focus score collapses three distinct dimensions of performance — volume, session integrity, and in-session attention quality — into one number, which hides the specific nature of your focus problems and makes targeted improvement impossible.

How does AI fit into the Focus Dashboard?
AI performs weekly pattern detection across your three-metric log, identifying conditions correlated with your best and worst sessions. It handles multi-variable analysis across weeks of data, which human review cannot do reliably.

What is a good session completion rate?
A session completion rate above 70% indicates a functional focus environment. Below 55% consistently suggests environmental or scheduling issues — interruptions, unclear task scopes, or sessions placed at cognitively suboptimal times — rather than personal attentional deficits.