How a Strategy Consultant Built a Weekly Review That Survived 18 Months of Client Work

A case study of how one independent strategy consultant designed, broke, redesigned, and finally stabilized a weekly review practice — through three failed versions and a fourth that has run nearly unbroken for eight months.

This is a case study about getting a practice wrong twice before getting it right.

Sasha Brennan is an independent strategy consultant based in Edinburgh, ten years out of a Big Four firm. Her client work typically runs two to three simultaneous engagements — one large anchor client, one or two smaller ones. She invoices on deliverables, not hours, which means the week’s structure is entirely self-determined. No daily stand-ups, no manager checking in, no external accountability structures.

In that context, the weekly review isn’t a nice productivity habit. It’s load-bearing infrastructure. Without it, projects drift, business development falls behind client work, and the week-to-week experience becomes reactive — responding to what’s urgent rather than progressing what matters.

Sasha has been building and breaking her weekly review practice for three years. Here is the arc.


Version 1: The Comprehensive GTD Review (Months 1–4)

Sasha discovered GTD in her corporate years and ran a version of the GTD weekly review for most of that time. When she went independent, she brought the practice with her and built out the full system: physical inbox, three digital inboxes (email, Slack, a notes app), project list with next actions, waiting-for list.

The full GTD weekly review took 90 minutes on a good Friday afternoon. For the first four months of independent practice, most Fridays were good: few clients, light commitments, and Friday afternoons that were genuinely available.

Then a large client engagement started with Thursday delivery requirements. Thursday became the crunch day of the week, bleeding into Friday morning. The Friday afternoon review, which had always been possible in theory, became contested in practice. She moved it to Saturday morning three weeks in a row, then started skipping.

By month four, the review was happening roughly every three weeks. When it did happen, the system had accumulated enough backlog — stale tasks, outdated project statuses, unprocessed inboxes — that the review became a cleanup session rather than a strategic reflection. That felt demoralizing.

The diagnosis: The 90-minute design had no minimum viable version. When Friday afternoons became constrained, there was no alternative. The all-or-nothing design produced “nothing” as soon as conditions changed.


Version 2: The Sunday Reset (Months 5–10)

Sasha read Tiago Forte’s work on Second Brain and tried the Sunday Reset model — Capture, Organize, Review, Plan — using Notion as her knowledge management layer. She appreciated the information management emphasis because her consulting work generates a lot of reference material: frameworks, case studies, client research, industry notes.

The Sunday Reset initially solved the Friday problem. Sunday mornings were more consistently available, and the Capture-Organize phases felt productive in a way that the pure GTD collection phase hadn’t.

The problem emerged gradually. The Sunday Reset is focused on information management, but Sasha's most urgent challenges weren't informational; they were behavioral. She was consistently underinvesting in business development. Client work was crowding out the proposal writing and relationship maintenance that generate future client work. The Sunday Reset helped her organize what she knew; it didn't help her identify and fix the pattern of where her time was going.

By month seven, she was running a well-organized knowledge base and a business development pipeline that hadn’t moved in six weeks. The review felt productive but wasn’t changing the right things.

The diagnosis: Wrong system for the actual problem. The Sunday Reset is excellent for information management challenges. Sasha’s problem was behavioral — she needed a diagnostic review that identified why business development kept getting deferred and what specifically to change about her schedule.


Version 3: A Hybrid That Collapsed Under Its Own Weight (Months 11–14)

Sasha read more about weekly reviews and decided the answer was combining the best of each system. She built a hybrid: GTD collection phase, then Sunday Reset Organize step, then a personal Scrum retrospective for the analysis, then GTD-style project review, then weekly planning.

The combined review was thorough and well-designed. It took two hours. She ran it twice in the first month, once in the second, and then stopped.

The diagnosis: Scope creep through hybridization. Combining the strengths of multiple systems without trimming their redundancies created a review that was too comprehensive to be consistent. The habit couldn’t survive the time cost.


Version 4: The Redesign (Month 15 to Present)

After the third failure, Sasha took a different approach. Instead of designing the ideal review, she designed the minimum viable review — the smallest practice that would still produce the behavioral outputs she actually needed.

She identified her two non-negotiable outputs: one insight about where her time went relative to her priorities, and one specific behavioral commitment for the coming week. Everything else was secondary.

Her current weekly review runs in three phases, hard-capped at 40 minutes:

Phase 1: Sweep and Check (10 minutes)

Every Friday at 4:30pm, she opens her calendar and scans the week. Not comprehensively — just looking for two things: anything incomplete that needs to be recaptured, and any pattern in how the week actually unfolded versus how she’d intended it to go.

She uses Beyond Time (beyondtime.ai) to log her category data during the week — a 60-second end-of-day log that tags time as client delivery, business development, or administration. The weekly review shows her the split automatically, so she doesn’t need to reconstruct it from memory.
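
The arithmetic behind that split check is simple enough to sketch. Below is a hypothetical Python illustration — the field names and numbers are invented for this example, not Beyond Time's actual data model: daily tagged minutes are summed per category and compared against the 60/30/10 target.

```python
# Hypothetical sketch of the weekly split arithmetic.
# Category names, log structure, and minutes are illustrative only.
from collections import Counter

TARGET = {"client": 0.60, "bizdev": 0.30, "admin": 0.10}

# One entry per end-of-day log: (category, minutes)
week_log = [
    ("client", 380), ("bizdev", 40), ("admin", 30),   # Mon
    ("client", 420), ("admin", 20),                   # Tue
    ("client", 300), ("bizdev", 90),                  # Wed
    ("client", 460),                                  # Thu (crunch day)
    ("client", 200), ("bizdev", 60), ("admin", 40),   # Fri
]

# Sum minutes per category across the week
totals = Counter()
for category, minutes in week_log:
    totals[category] += minutes

week_total = sum(totals.values())
for category, target_share in TARGET.items():
    actual_share = totals[category] / week_total
    variance = actual_share - target_share
    print(f"{category:>7}: {actual_share:5.1%} "
          f"(target {target_share:.0%}, {variance:+.1%})")
```

In this invented week, client delivery lands far above target and business development far below — exactly the kind of variance the Analyze phase digs into.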

Phase 2: Analyze (15 minutes)

She pastes her week summary — category split, calendar, any notable events — into Claude and runs a specific prompt she’s refined over several months:

Here's my week data. I run a solo consulting practice with three categories:
client delivery, business development, and administration.
My target split is 60 / 30 / 10.

Actual split this week: [category data]
Notable events: [brief calendar summary]

Please identify:
1. Whether my split was close to or far from target, and what drove any significant variance
2. One specific pattern in how business development time got displaced this week
3. One scheduling change I could make next week to protect business development time better

Don't give me a list of suggestions — just the one highest-leverage change.

The output takes her two minutes to read and usually surfaces something she wouldn’t have named independently.
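
For illustration, the placeholders in that prompt can be filled mechanically from the week's numbers. This is a hypothetical sketch, not part of Sasha's actual tooling; the function name, field names, and example values are invented.

```python
# Hypothetical sketch: filling the review prompt from the week's data.
# build_prompt and its inputs are illustrative, not from any real tool.

PROMPT_TEMPLATE = """\
Here's my week data. I run a solo consulting practice with three categories:
client delivery, business development, and administration.
My target split is 60 / 30 / 10.

Actual split this week: {split}
Notable events: {events}

Please identify:
1. Whether my split was close to or far from target, and what drove any significant variance
2. One specific pattern in how business development time got displaced this week
3. One scheduling change I could make next week to protect business development time better

Don't give me a list of suggestions — just the one highest-leverage change.
"""

def build_prompt(split: dict, events: list[str]) -> str:
    # Render the split as "category XX%" pairs and join events with semicolons
    split_text = ", ".join(f"{k} {v:.0%}" for k, v in split.items())
    return PROMPT_TEMPLATE.format(split=split_text, events="; ".join(events))

prompt = build_prompt(
    {"client delivery": 0.86, "business development": 0.09, "administration": 0.05},
    ["Thursday deliverable crunch", "new client kickoff call Wednesday"],
)
print(prompt)
```

The point of keeping the template fixed and only swapping in the data is the same discipline the review itself uses: the questions don't change week to week, only the inputs do.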

Phase 3: Navigate (15 minutes)

She reads the AI output, adjusts for anything it missed or misread, and writes three sentences in her review log: this week’s win, the behavioral change she’s making, and the one client deliverable that needs the most attention next week.

Then she opens her calendar and makes the behavioral change visible. If the AI recommended protecting Wednesday morning for business development, she blocks it now, before she closes the session.

Total time: 38–42 minutes. She has missed one review in the past eight months, during a week of international travel across time zones, and she completed a 20-minute minimum viable version the following Monday.


What Changed Between Versions 3 and 4

Three structural differences explain why Version 4 has held when the others didn’t:

Hard time cap. The 40-minute ceiling means the review competes with almost nothing. A late meeting, a constrained Friday, a low-energy afternoon — none of these push the review below the minimum viable threshold.

Output-first design. Version 4 was designed backward: start with the two outputs that matter (behavioral pattern insight, specific commitment), then build the minimum process needed to produce them. Versions 1–3 were designed forward: start with comprehensive coverage, hope the useful outputs emerge.

Category tracking decoupled from review quality. In Versions 1–3, the review’s quality depended on the quality of other systems: the GTD infrastructure in Version 1, the Notion knowledge base in Version 2. Version 4’s quality depends only on calendar data, which is always available. The AI adds pattern analysis but is not required; Sasha can run the Analyze phase without it if needed.


Lessons for Independent Knowledge Workers

Design for adverse conditions, not ideal ones. The consultant’s week is structurally unpredictable — client emergencies, travel, deadline crunches. Any review design that requires a quiet 90-minute Friday will fail repeatedly in this context. Design for the hardest week of the year, not the average one.

Know what outputs you actually need. Sasha’s core insight was naming her two non-negotiable outputs (pattern insight and behavioral commitment) and ruthlessly cutting everything that didn’t serve them. A review that produces those two things in 40 minutes is better than a review that produces fifteen observations in 90 minutes and no behavioral commitment.

Use AI for the analytical phase, not the commitment phase. AI is genuinely useful for pattern recognition across category data and calendars. It’s not useful as a substitute for the human commitment step. Sasha’s AI prompt asks for one behavioral change, then she decides whether to accept it and makes it concrete herself. The AI does the analysis; the human makes the decision.

Block Friday at 4:30pm, answer three questions (win, pattern, commitment), and make one calendar change. That’s the minimum viable version. Start there.


Tags: weekly review case study, consultant productivity, weekly review systems, GTD weekly review, AI planning

Frequently Asked Questions

  • How do you maintain a weekly review when client work is unpredictable?

    The answer, as this case study shows, is designing a minimum viable version that doesn't depend on a quiet Friday afternoon. A review that runs in 20 minutes on a commute home on Thursday is more valuable than a 60-minute review that gets pushed to the following Monday. The minimum viable version needs to be so small that it can survive any week — travel, deadline, client emergency included.

  • Should a consultant's weekly review cover client projects and personal development separately?

    Most experienced practitioners recommend reviewing them in the same session but with a clear structural distinction. Client projects: current status, outstanding commitments, upcoming deliverables. Personal development: business development pipeline, skill building, relationship maintenance. Separating them into different sessions creates two habits to maintain instead of one; combining them in one session with clear sections creates one habit that covers both.