Nadia had been using RescueTime for two years before she accepted that it was not helping her understand her focus problems.
Her score fluctuated between 55 and 80 on a near-daily basis without any clear pattern she could act on. She had tried adjusting the application categories, adding distraction-block sessions, and reviewing her weekly scores every Sunday. None of it translated into clarity about why some weeks she produced excellent design work and other weeks she produced nearly nothing she was proud of.
“The score felt like a judgment,” she said. “Not a diagnosis.”
What follows is an account of the six weeks during which Nadia built a more honest picture of her own focus — and what it took to get there.
Baseline: What Was Actually Happening
Nadia is a senior product designer at a B2B software company. Her work splits roughly into three modes: exploratory design (divergent thinking, sketching, research, generating options), execution design (refining, prototyping, producing specs), and coordination (stakeholder reviews, design critique sessions, handoff conversations).
When she looked back at her RescueTime data, she noticed that her “worst” weeks — low scores, feeling scattered — were not obviously different from her “best” weeks in terms of which applications she was using. She spent similar time in Figma, in Notion, in her browser. The score was moving, but not in response to anything she could identify.
Her first hypothesis: the score was measuring busyness, not focus.
Version 1: Trying Pomodoro Tracking
Nadia’s first intervention was to add a Pomodoro timer and track completions. She planned six 25-minute intervals of design work per day and recorded how many she completed.
This lasted ten days before she recognized the problem: her best design work did not fit in 25-minute intervals. The exploratory mode — sketching, generating alternatives, making intuitive leaps — often needed 60 to 90 minutes before it reached productive depth. Interrupting it every 25 minutes for a break reset her thinking rather than refreshing it.
Her completion rate was high (85%), but she felt the completions were hollow. She was finishing intervals, not doing her best work.
Pomodoro tracking was giving her a metric that went up when she complied with a structure that did not fit her cognitive patterns.
Version 2: Switching to Qualitative Journaling
After abandoning interval tracking, Nadia tried a five-minute end-of-day reflection: what had she worked on, what felt good, what disrupted her?
She kept this up for three weeks. The entries were genuine and sometimes revealing — she noticed she was harder on herself on days when she had meetings in the morning, and softer on days that started with two hours of uninterrupted sketching. But she could not see any pattern across the three weeks when she read the entries back.
“I could tell what each day felt like,” she said. “I couldn’t tell what made Tuesdays different from Thursdays.”
The journal was generating data she could not analyze. What she needed was a way to turn narrative observations into something that could reveal trends.
Version 3: The Focus Dashboard Approach
In week four, Nadia tried a different structure. Instead of journaling about her whole day, she logged each deep work session specifically — three fields: task mode (exploratory, execution, coordination), distraction count during the session, and a quality rating from 1 to 3.
She also noted one piece of context at the start of each day: whether she had a meeting in the first three hours of her workday.
This produced a different kind of data. Less narrative, more comparable across sessions. She collected two weeks of this log — 34 sessions — before running her first AI analysis.
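A log at this level of granularity fits comfortably in a spreadsheet or a plain CSV file. As an illustration (the column names and rows below are hypothetical, not Nadia's actual file), each session becomes one row:

```
date,session_type,distractions,quality,morning_meeting
2025-03-03,exploratory,1,3,no
2025-03-03,execution,4,2,no
2025-03-04,coordination,2,2,yes
```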
Her prompt to Claude:
Here are 34 deep work session logs from the past two weeks. Each entry:
date, session type (exploratory/execution/coordination), distraction count,
quality rating (1–3). I also noted whether each day had a meeting in
the first three hours.
Please:
1. Compare my quality ratings and distraction counts across the three session types.
2. Tell me whether days with morning meetings show different patterns from
days without.
3. Identify the conditions most associated with my quality-3 sessions.
4. Identify the conditions most associated with my quality-1 sessions.
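The first two comparisons the prompt asks for are simple group averages, so they can also be reproduced locally. Here is a minimal Python sketch of that analysis, assuming the log lives in a hypothetical sessions.csv with the columns shown earlier:

```python
import csv
from collections import defaultdict

# Load the session log (columns assumed: date, session_type,
# distractions, quality, morning_meeting).
with open("sessions.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# 1. Average quality and distraction count per session type.
by_type = defaultdict(list)
for r in rows:
    by_type[r["session_type"]].append(r)
for stype, group in by_type.items():
    avg_q = sum(int(r["quality"]) for r in group) / len(group)
    avg_d = sum(int(r["distractions"]) for r in group) / len(group)
    print(f"{stype}: avg quality {avg_q:.1f}, avg distractions {avg_d:.1f}")

# 2. Days with a morning meeting vs. days without.
for flag in ("yes", "no"):
    group = [r for r in rows if r["morning_meeting"] == flag]
    if group:
        avg_q = sum(int(r["quality"]) for r in group) / len(group)
        print(f"morning meeting = {flag}: avg quality {avg_q:.1f} over {len(group)} sessions")

# 3 and 4. List the conditions around the best and worst sessions.
for target in (3, 1):
    conditions = [(r["session_type"], r["morning_meeting"])
                  for r in rows if int(r["quality"]) == target]
    print(f"quality-{target} sessions: {conditions}")
```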
The Pattern She Had Not Seen
The AI’s analysis surfaced two findings that Nadia had not detected in her own review.
Finding 1: Her exploratory design sessions had an average quality rating of 2.6 when they were her first session of the day and 1.4 when they came second, after an execution session. An execution session followed by an exploratory one was the worst sequence for her creative output.
This was the inverse of what she had been doing. Her instinct had been to “warm up” with execution work and then move to exploration. The data showed this was backwards: exploratory sessions needed to come first, when her thinking was freshest, before execution work consumed her cognitive resources.
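The same sequencing check can be run on any log of this shape. Here is a sketch of the position comparison, continuing from the rows loaded in the previous snippet and assuming sessions are recorded in chronological order within each day:

```python
from collections import defaultdict

# Group sessions by date, preserving within-day order, then compare
# exploratory quality by daily position (reuses `rows` from above).
days = defaultdict(list)
for r in rows:
    days[r["date"]].append(r)

first_slot, after_execution = [], []
for sessions in days.values():
    for i, r in enumerate(sessions):
        if r["session_type"] != "exploratory":
            continue
        if i == 0:
            first_slot.append(int(r["quality"]))
        elif sessions[i - 1]["session_type"] == "execution":
            after_execution.append(int(r["quality"]))

for label, group in (("first slot", first_slot),
                     ("after execution", after_execution)):
    if group:
        print(f"exploratory, {label}: avg quality {sum(group) / len(group):.1f}")
```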
Finding 2: Morning meetings correlated with a 0.8-point drop in average quality rating across the rest of the day — not just for the session immediately following the meeting, but for all sessions that day. She had known morning meetings disrupted her, but she had underestimated the duration of the effect.
The AI flagged both patterns with the appropriate caveat: “These correlations are based on 34 sessions. They are strong enough to be worth testing but not definitive.”
Redesigning the Week
Nadia made two structural changes based on the findings.
She moved all her exploratory design sessions to the first slot of her day — before checking email, before any synchronous communication. Execution sessions moved to mid-morning and afternoon.
She also negotiated with her manager to have Tuesdays and Thursdays free of meetings before noon. Mondays, Wednesdays, and Fridays could have morning meetings if necessary.
She continued logging at the same granularity through the following three weeks.
The AI’s second analysis, run at the end of week six, showed:
- Average quality rating for exploratory sessions up from 2.2 to 2.8
- Distraction count per hour down from 4.3 to 2.6
- No statistically meaningful change in execution session quality (which had been strong already)
“I’m not going to pretend the numbers are perfectly reliable,” Nadia said. “But the work feels completely different. The mornings when I sketch first are the ones where I actually surprise myself.”
What the Case Study Shows About Focus Tracking
Nadia’s process illustrates several things about how focus measurement works in practice.
Aggregate scores mask task-type variation. Her exploratory and execution sessions had completely different focus dynamics. A single daily score averaged those together and hid the contrast entirely.
Pattern detection across sessions requires more data than intuition provides. The finding that her work sequence mattered — exploratory first, execution second — was not something she could have reliably detected by reviewing her qualitative journal. It required numerical comparison across enough sessions to see the trend.
AI analysis is most useful as a hypothesis generator, not a verdict. The AI did not tell Nadia what to do. It surfaced patterns and named them clearly enough for her to design a test. The test confirmed the pattern with enough confidence that she made a permanent schedule change.
The logging habit is the core asset. Two weeks of honest session logs produced more useful information than two years of passive app tracking. The difference was not sophistication — it was specificity and honesty about what was actually happening in each session.
Beyond Time builds this kind of session-level logging directly into the planning workflow, making it easier to maintain the logging habit without managing a separate tracking system alongside your work.
The Lesson Worth Applying
Nadia’s breakthrough was not a new app or a clever framework. It was a decision to log specifically enough that the data could tell her something she did not already know.
If your focus problem feels diffuse and unfixable, the first question worth asking is: am I measuring at the right level of granularity? Not “how productive was today?” but “what was different about the sessions that went well versus the ones that did not?”
Start by logging your next five sessions with task type, distraction count, and quality rating. You do not need two weeks before you can start noticing — but you need more than one day before you can trust what you notice.
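If you want the lowest-friction version of that habit, a few lines of Python appending to a CSV are enough. A minimal sketch, reusing the hypothetical sessions.csv format from above:

```python
import csv
from datetime import date

# Append one row per deep work session; run this right after a session ends.
def log_session(session_type, distractions, quality, morning_meeting):
    with open("sessions.csv", "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), session_type,
                                distractions, quality, morning_meeting])

log_session("exploratory", distractions=1, quality=3, morning_meeting="no")
```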
Related: Focus Metrics Framework with AI · How to Measure Focus with AI · Why Focus Scores Are Misleading
Tags: focus tracking case study, designer productivity, deep work measurement, AI pattern detection, creative work focus
Frequently Asked Questions
Can creative workers like designers benefit from focus metrics?
Yes, but creative work requires careful metric design. Task-specific tracking — distinguishing exploratory design sessions from execution sessions — produces far more useful signals than aggregate focus scores, which cannot differentiate creative exploration from distracted browsing.
Why do designers often struggle with standard focus tracking approaches?
Standard focus tracking assumes linear, application-confined work. Designers often need to switch between analog and digital work, reference materials across many applications, and engage in exploratory browsing that is genuinely part of the creative process — all of which app-based tracking penalizes unfairly.
How long does it take to see useful patterns from focus logging?
Most people need three to four weeks of consistent logging before AI analysis can surface patterns that are genuinely predictive rather than incidental. Single-week patterns are often driven by external events rather than stable underlying conditions.