This is a composite portrait drawn from conversations with knowledge workers who’ve built sustained planned vs actual analysis practices. The details are specific enough to be useful; the person is illustrative rather than singular.
Profile: Maya is a project manager at a mid-sized product company, managing two parallel software development streams. Her days are a mix of async communication, coordination meetings, review work, stakeholder updates, and intermittent deep work — the planning and strategic thinking that’s nominally her most important function but constantly gets pushed by the coordination demands.
She has been doing planned vs actual analysis for eight months. Here’s how she actually does it.
The Problem That Started the Practice
Before Maya started tracking variance, she had a persistent but vague sense that her capacity estimates were wrong. She’d agree to deliverables — stakeholder reports, project plans, process documentation — that felt reasonable in the moment but consistently arrived late or required weekend work.
The standard diagnosis she applied to herself was “I need to be better at saying no.” She spent several months trying to protect her time more aggressively, declining more meetings, pushing back more on ad-hoc requests. It helped somewhat. But the deliverables were still late, and she still regularly found herself wondering at Thursday 4pm how the week had gotten away from her.
The insight came from a peer who managed a neighboring product area. He mentioned, almost in passing, that he tracked his actual time against his estimates every week. “I used to think I was bad at protecting my time. Turns out I was bad at estimating my time. Those are different problems.”
Maya tried it for two weeks on a lark. What she found permanently changed how she plans.
What the Data Showed in Week One
Maya’s initial setup was minimal: a text file with four columns — task, category, estimated minutes, actual minutes. She logged at the end of each day, 2–3 minutes, no timer toggling.
After five days, she ran a rough variance calculation. Her overall variance rate was 148% — tasks were taking 48% longer than she estimated on average.
The category breakdown was more revealing:
- Deep work (planning documents, strategic writing): 112% — roughly accurate
- Coordination meetings: 167%, roughly two-thirds longer than scheduled
- Async communication (email, Slack, review comments): 189% — nearly double her estimates
- Review and approval tasks: 141% — consistently over
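
If you want to reproduce this arithmetic yourself, it fits in a few lines of Python. The entries below are illustrative stand-ins rather than Maya's actual log (hers was a plain text file with the same four fields), and the variance rate here is defined as total actual minutes over total estimated minutes, which is one reasonable convention.

```python
from collections import defaultdict

# Illustrative log entries: (task, category, estimated_min, actual_min)
log = [
    ("Sprint planning doc", "deep work", 120, 135),
    ("Roadmap sync", "coordination meetings", 60, 100),
    ("Stakeholder update email", "async communication", 30, 58),
    ("Design doc review", "review and approval", 45, 63),
]

def variance_rate(entries):
    """Total actual minutes as a percentage of total estimated minutes."""
    estimated = sum(e[2] for e in entries)
    actual = sum(e[3] for e in entries)
    return 100 * actual / estimated

# Group entries by category for the per-category breakdown
by_category = defaultdict(list)
for entry in log:
    by_category[entry[1]].append(entry)

print(f"Overall variance rate: {variance_rate(log):.0f}%")
for category, entries in by_category.items():
    print(f"  {category}: {variance_rate(entries):.0f}%")
```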
Her instinct had been that meetings were where her time went. The data showed that async communication was the larger problem — she was estimating 30 minutes for tasks that consistently took 55–60 minutes. Across a typical week, this gap alone accounted for roughly two hours of unplanned time.
This was actionable in a way that “I need to be better at saying no” was not. She wasn’t failing to protect her time from meetings. She was systematically underestimating how long the communication overhead of her role actually took.
The Weekly Review Workflow
After the first two weeks, Maya formalized a weekly review. It runs every Friday between 3:30 and 4pm.
Step 1: Compile the week’s log (5 minutes). She pastes her five daily logs into a single text file, organized by day. At this point she’s not analyzing — just aggregating.
Step 2: AI-assisted variance analysis (10 minutes). She pastes the compiled log into Beyond Time’s AI assistant and runs a variance analysis. Her standard prompt has evolved over the months:
“Here’s my time log for the week. Calculate my overall variance rate, variance by category, and compare to my 8-week rolling average by category. Flag any task types where this week’s variance is significantly higher than my average, and identify any patterns worth noting. Also note any tasks that weren’t on my original plan and estimate what percentage of my actual hours went to unplanned work.”
The AI returns a structured report in about 30 seconds. Maya reads it, notes anything surprising, and writes two or three specific observations in a running Friday note.
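
The rolling-average comparison in that prompt is straightforward arithmetic. Here is a minimal sketch of it; the eight weeks of history, the current-week numbers, and the 20-point flag threshold are all illustrative assumptions, not values from Maya's data.

```python
from statistics import mean

# Variance rate (%) per category for the previous 8 weeks (illustrative)
history = {
    "async communication": [185, 192, 178, 190, 188, 181, 195, 186],
    "coordination meetings": [160, 172, 165, 158, 170, 168, 166, 171],
    "deep work": [110, 115, 108, 112, 118, 109, 113, 111],
}

# This week's variance rates (illustrative)
this_week = {
    "async communication": 214,
    "coordination meetings": 166,
    "deep work": 112,
}

FLAG_THRESHOLD = 20  # percentage points above the rolling average (assumed cutoff)

for category, rate in this_week.items():
    baseline = mean(history[category][-8:])  # 8-week rolling average
    delta = rate - baseline
    flag = "FLAG" if delta > FLAG_THRESHOLD else "ok"
    print(f"{category}: {rate}% vs {baseline:.0f}% avg ({delta:+.0f} pts) {flag}")
```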
Step 3: Two decisions (5 minutes). Based on the analysis, Maya makes two planning decisions for the following week:
- One estimate adjustment — a specific task type or default duration she’ll change based on the data
- One structural change — something about how she schedules or batches work to address the largest variance source
The decisions are small and specific. She doesn’t try to fix everything at once.
The Monthly Calibration
Once per month, Maya runs a longer review. She asks the AI to analyze her full month of data and calculate updated reference-class estimates for each category.
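
One plausible way to compute those reference-class estimates is to take, for each task type, the median ratio of actual to estimated time across the month. The sketch below assumes that approach; the task types and numbers are illustrative, and a mean or trimmed mean would work similarly.

```python
from collections import defaultdict
from statistics import median

# A month of illustrative entries: (task_type, estimated_min, actual_min)
month_log = [
    ("stakeholder update email", 30, 52),
    ("stakeholder update email", 30, 48),
    ("review and comment on doc", 45, 61),
    ("review and comment on doc", 45, 58),
    ("async team coordination", 20, 39),
    ("async team coordination", 20, 36),
]

# Collect actual/estimated ratios per task type
ratios = defaultdict(list)
for task_type, estimated, actual in month_log:
    ratios[task_type].append(actual / estimated)

# The calibrated multiplier is the median ratio for each task type
for task_type, rs in sorted(ratios.items()):
    print(f"{task_type}: {median(rs):.2f}x calibrated multiplier")
```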
Over her eight months of tracking, this process has produced a personal estimation table:
| Task type | My naive estimate | Calibrated estimate (multiplier) |
|---|---|---|
| 60-min meeting | 60 min | 75 min (1.25x) |
| Stakeholder update email | 30 min | 50 min (1.67x) |
| Review and comment on doc | 45 min | 60 min (1.33x) |
| Sprint planning doc | 2 hours | 2.5 hours (1.25x) |
| Async team coordination | 20 min | 38 min (1.9x) |
| Strategic planning work | 90 min | 100 min (1.11x) |
The deep work category is closest to 1.0 — her intuitive estimates for focused strategic work are reasonably accurate. The async communication tasks show the highest multipliers and the highest variance, which reflects the unpredictable complexity of coordination work.
When she’s estimating commitments now — “I can have that ready by Thursday” — she consults this table rather than estimating from scratch. A deliverable that requires 3 hours of review work and 1 hour of stakeholder communication gets estimated as 4 hours of nominal work plus calibrated buffers: actually more like 5.5–6 hours.
This is reference class forecasting applied to personal knowledge work. Rather than planning from how long a task seems like it should take, she plans from how long similar tasks have actually taken.
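
The arithmetic behind that 5.5 to 6 hour figure is just multiplication against the table. A minimal sketch, using the multipliers above; the helper function and plan structure are hypothetical:

```python
# Calibrated multipliers taken from the table above
calibration = {
    "review and comment on doc": 1.33,
    "stakeholder update email": 1.67,
}

def calibrated_hours(plan):
    """plan: list of (task_type, nominal_hours) pairs."""
    return sum(hours * calibration[task_type] for task_type, hours in plan)

# The deliverable from the example: 3 h of review work, 1 h of stakeholder communication
deliverable = [("review and comment on doc", 3.0), ("stakeholder update email", 1.0)]

print(f"Nominal: {sum(h for _, h in deliverable):.1f} h")        # 4.0 h
print(f"Calibrated: {calibrated_hours(deliverable):.1f} h")      # ~5.7 h
```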
What Changed Over Eight Months
The changes Maya reports are both behavioral and attitudinal.
Behavioral changes:
Deadline commitments are more reliable. She no longer agrees to deliverables based on optimistic estimates. Before committing to a Thursday deadline, she checks: given my historical data on this task type and my current week’s calendar, is Thursday actually feasible? More often than she expected, the honest answer is Friday — and Friday commitments made upfront land better than Thursday commitments missed.
Meeting scheduling changed. She now buffers every meeting with a 15-minute recovery block, which her variance data showed she needed consistently. This isn’t a rule she imposed on herself through willpower — it’s a structural response to data.
Async communication is batched rather than dispersed. Because the data showed this was her highest-variance category and her biggest time drain, she moved from responding to messages throughout the day to two or three fixed communication windows. Variance in this category dropped from 189% to roughly 140% — still high, but meaningfully more predictable.
Attitudinal changes:
The most significant shift is harder to quantify. Maya describes moving from a chronic background anxiety about her workload to something that feels more like informed navigation. She doesn’t feel like she’s always behind — she feels like she knows approximately how long things take and can plan accordingly.
“I used to think weeks were chaotic. Turns out weeks are actually pretty predictable if you’ve been measuring them. The chaos was mostly in my estimates, not in the actual work.”
She also reports that the practice has changed how she has capacity conversations with stakeholders. Instead of vague statements about being “slammed,” she can point to specific data: her coordination overhead has averaged 12 hours per week for the past three months, and her deep work availability is approximately 8 hours per week. These are different conversations — they anchor to data rather than feelings, which tends to produce more useful outcomes.
The Role of Beyond Time
Maya uses Beyond Time as her primary capture and analysis tool. The daily log happens in the app, and the variance calculations are automatic rather than manual.
The feature she relies on most is the pattern alert — when a task type shows variance significantly above her rolling average, the app flags it. This functions as an early warning system: if her communication overhead spikes one week, she knows before it compounds into a capacity crisis.
The other feature she uses regularly is the estimate suggestion. When she’s planning and assigns a duration to a task, Beyond Time shows her historical average for that task type alongside her estimate. This doesn’t override her judgment, but it surfaces the reference class data at exactly the moment she’s making an estimate — which is when it’s most likely to influence the decision.
Before using a dedicated tool, Maya did the same process with a text file and a weekly AI prompt. She estimates the tool saves her about 20 minutes per week in analysis time and improves data completeness because logging is embedded in her planning workflow rather than being a separate activity.
What This Practice Requires
Sustaining this for eight months required three things that aren’t obvious at the outset.
Consistent capture, not perfect capture. Maya’s logs aren’t exhaustive. She misses some tasks, rounds aggressively, and sometimes reconstructs rather than logging in real time. She estimates her data completeness at about 80%. That’s sufficient for pattern detection. The practice works through accumulation, not precision.
Protection of the Friday review. The weekly review is a fixed calendar commitment. In eight months, she’s skipped it twice — both times during vacation. Missing it during a busy work week is not an option she allows. The 30 minutes is the mechanism that turns data into insight.
Willingness to let the numbers change your mind. The uncomfortable part isn’t the tracking — it’s finding out that your confident estimates were wrong by 50% or more. Maya’s framing: the estimates were wrong before she started measuring. Measuring didn’t make her worse at planning; it made her errors visible. Visible errors can be fixed. Invisible errors cannot.
Starting Your Own Version
You don’t need Maya’s eight months of data to start getting value from this practice. You need five days of honest capture and one comparison session.
If your overall variance rate after five days is above 130%, you have a meaningful signal worth following. If certain categories show much higher variance than others, you have a specific target for calibration.
The full architecture — weekly AI-assisted reviews, monthly calibration, a personal reference-class table — can be built gradually. The foundation is just: plan, do, compare, update. Everything else follows from there.
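
In code terms, the starter version is only a few lines. The entries and field layout below are illustrative; the only fixed idea is comparing total actual time to total estimated time and checking it against the 130% threshold mentioned above.

```python
# Five days of illustrative entries: (task, estimated_min, actual_min)
five_day_log = [
    ("weekly report", 60, 95),
    ("team sync", 30, 55),
    ("focused writing", 120, 140),
]

estimated = sum(e for _, e, _ in five_day_log)
actual = sum(a for _, _, a in five_day_log)
rate = 100 * actual / estimated

print(f"Overall variance rate: {rate:.0f}%")
if rate > 130:
    print("Meaningful signal: your estimates are running well short of reality.")
```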
Related:
- The Complete Guide to Planned vs Actual Time Analysis
- Using Beyond Time for Planned vs Actual Analysis
Suggested tags: planned vs actual case study, project management, time estimation, AI planning, knowledge work
Frequently Asked Questions
- How does a project manager benefit from personal planned vs actual analysis?
Project managers often focus on tracking team capacity and project milestones while neglecting their own time. Personal planned vs actual analysis surfaces how much of their own week is consumed by unplanned coordination work, where their estimates for their own tasks are systematically off, and how their planning accuracy affects their ability to commit to deliverables. The practice also gives them direct experience with the calibration process, which makes them better at structuring the same analysis for their teams.
- Can planned vs actual analysis work for team planning, not just individual?
Yes, though the dynamics are more complex. Team-level planned vs actual requires consistent estimation practices across team members, agreed-upon task granularity, and psychological safety for honest variance reporting. The planning fallacy applies at the team level as well — Bent Flyvbjerg's research on large project estimation failures shows that organizations consistently underestimate project durations even with experienced teams. Reference class forecasting at the team level (using historical sprint velocities, project completion ratios) is the same principle applied at larger scale.