The Complete Guide to Weekly Review Systems (GTD, Bullet Journal, Agile & AI)

Every major weekly review system — GTD, Bullet Journal, the Scrum retrospective, Tiago Forte's Sunday Reset, and the AI-augmented review — explained, compared, and made immediately usable. The definitive starting point for any knowledge worker who wants their weeks to compound.

Most people end the week with unfinished business in their head.

Tasks that migrated from Monday’s list to Friday’s list without ever getting started. An email flagged for a response that somehow survived six days. A project with a looming deadline that spent the week competing with the urgent and losing. The same intention — “I need to think about this” — showing up on four consecutive days without resolution.

This isn’t a discipline failure. It’s a system failure. Specifically, it’s the absence of a weekly review — the one practice that converts raw weekly experience into deliberate intention before the next week begins.

This guide covers every major weekly review system in enough depth to be genuinely useful: GTD’s canonical approach, the Bullet Journal method, the Scrum retrospective adapted for personal use, Tiago Forte’s Sunday Reset, and the AI-augmented review that collapses the time cost without sacrificing the output. We’ll compare them honestly, cover the research on why structured reflection produces better outcomes than intention alone, and close with a framework for choosing the right system for your work.


Why Most People Skip the Weekly Review (And Why That’s Costly)

The research on structured reflection is clearer than most people realize.

A study by Giada Di Stefano and colleagues at Harvard Business School found that workers who spent 15 minutes at the end of each day reflecting on what they learned performed 23% better on a skills test after 10 days than workers who continued practicing without reflection. The mechanism wasn’t time — it was consolidation. Reflection converts experience into transferable insight; experience without reflection just repeats.

Adam Grant’s writing on reflection distinguishes it from rumination: reflection is purposeful, forward-looking, and produces actionable conclusions; rumination is circular, backward-looking, and produces emotional activation without resolution. The design of a weekly review system determines which you get. A well-structured review constrains rumination and forces reflection.

Peter Drucker, writing in The Effective Executive (1967), identified the weekly time audit as the single most reliable way for knowledge workers to understand how their time was actually spent versus how they believed it was spent. He was writing decades before digital distraction; the gap between believed and actual has almost certainly widened since.

The cost of skipping the weekly review is hard to see precisely because it’s distributed. A single missed review doesn’t produce a visible failure. The cost is cumulative: open loops compound, misaligned priorities persist, behavioral patterns that should have been corrected in week two are still running in week twelve.


The GTD Weekly Review: The Most Comprehensive System Available

David Allen introduced the weekly review in Getting Things Done (2001) as the “critical success factor” for the entire GTD methodology. Without the weekly review, GTD degrades into a sophisticated to-do list. With it, the system becomes genuinely self-correcting.

The Full GTD Weekly Review Structure

Allen’s weekly review has three phases:

Phase 1: Get Clear

  • Collect all loose papers, receipts, business cards, and physical inbox items
  • Process every email inbox to zero
  • Empty your head (a mindsweep: capture every open loop — commitments, ideas, concerns — onto paper)
  • Process and clear every note from the week

Phase 2: Get Current

  • Review your Next Actions lists for each context (at computer, phone calls, errands, agendas)
  • Review your calendar for the past week (anything undone that needs to be recaptured) and the next two weeks
  • Review your Waiting For list (outstanding delegations and commitments from others)
  • Review your Projects list — every project should have at least one next action in the system
  • Review relevant checklists

Phase 3: Get Creative

  • Review Someday/Maybe list — anything to move to active?
  • Review trigger lists to generate new thinking
  • Review the calendar further out for anything needing preparation

The GTD weekly review is comprehensive by design. It ensures every context, commitment, and project has been seen at least once in the past week. Nothing can silently drift.

GTD Weekly Review: Strengths and Limitations

The strength of the GTD weekly review is its completeness. If you do it properly, nothing falls through the cracks. The “trusted system” property of GTD — the reason it reduces background anxiety — depends entirely on the weekly review. Without it, you can’t trust your lists because you know they might be stale.

The limitation is time and setup cost. A proper GTD weekly review takes 60–90 minutes. It also requires a functioning GTD system (maintained projects list, current action lists, active inboxes). If your system is even partially broken, the review reveals the gaps but doesn’t automatically fix them, which can make a long session feel demoralizing rather than clarifying.

Best for: Practitioners who are already using GTD, or who process high volumes of incoming commitments and projects. Not a good first system if you’re starting from scratch.


The Bullet Journal Weekly Review: Analog, Flexible, and Personal

Ryder Carroll’s Bullet Journal method, formalized in his 2018 book The Bullet Journal Method, builds a weekly review rhythm directly into the system’s structure: the Weekly Log migration, the Weekly Reflection, and the Monthly Migration.

How the Bullet Journal Weekly Review Works

The Bullet Journal week ends with a migration: reviewing every incomplete task in the current week’s log and making a decision about each one. Every task gets one of four treatments: migrated forward (still relevant, copied to next week), scheduled (added to the Future Log for a specific date), delegated, or deleted.

The act of migration is the review. By requiring you to physically write out every task you’re carrying forward, the Bullet Journal imposes a cost on procrastination: each incomplete task must be deliberately re-committed to. Tasks you’re not willing to write out again get deleted. This friction is a feature.
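For readers who want the decision path spelled out, the four-way treatment can be sketched as a small decision function. This is our own illustration, not part of Carroll's method — the boolean flags and their ordering are assumptions about how a practitioner typically reasons through each task:

```python
from enum import Enum

class Fate(Enum):
    MIGRATE = "copy forward to next week's log"
    SCHEDULE = "add to the Future Log with a date"
    DELEGATE = "hand off; track as waiting-for"
    DELETE = "strike through; no longer relevant"

def migrate(task: str, still_relevant: bool, has_fixed_date: bool,
            can_be_handed_off: bool) -> Fate:
    """Decide the fate of one incomplete task at week's end."""
    if not still_relevant:
        return Fate.DELETE       # not worth rewriting by hand
    if has_fixed_date:
        return Fate.SCHEDULE     # belongs on a future date, not next week
    if can_be_handed_off:
        return Fate.DELEGATE
    return Fate.MIGRATE          # deliberately recommitted
```

The point of the analog version is precisely that you pay the copying cost by hand; the sketch only makes the branching explicit.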

Carroll also recommends a brief weekly reflection — typically a few sentences — about what you want to focus on, what didn’t work, and any observations from the week.

Bullet Journal Weekly Review: Strengths and Limitations

The migration system elegantly handles the most common problem with task lists: tasks that survive by inertia rather than relevance. The physical copying cost forces deliberate recommitment.

The limitation is context coverage. The Bullet Journal weekly review is task-focused but doesn’t systematically cover calendar, delegations, projects, or horizon-level goals the way GTD does. It’s also, by design, analog — it doesn’t play well with digital task environments unless you’ve intentionally built a hybrid system.

Best for: People who already use a Bullet Journal; analog-first thinkers; anyone who wants a migration-heavy approach that naturally kills zombie tasks.


The Scrum Retrospective Adapted for Personal Use

The Scrum retrospective is a team ceremony in Agile software development, conducted at the end of every sprint (typically one or two weeks). Its core format is consistent across Agile frameworks: What went well? What didn’t go well? What will we do differently?

Adapted for personal use, it becomes one of the most structured and diagnostic weekly review formats available.

A Personal Scrum Retro Structure

What went well this week? (Start here, not with what didn’t work. This is deliberate — research on positive psychology and motivation consistently shows that grounding in success before problem analysis produces better problem-solving. The practice prevents the review from becoming a self-criticism session.)

What didn’t go well, and why? (Diagnose at the system level, not the discipline level. “I didn’t finish the proposal” is not a useful retrospective insight. “I didn’t finish the proposal because I blocked three hours for it but one hour was consumed by an unplanned meeting and the remaining two hours weren’t enough” is actionable.)

What will I do differently next sprint? (This is the commitment step. One to three specific behavioral changes — new, concrete, schedulable. Not aspirations.)

Velocity check: In Scrum teams, velocity is the number of story points completed per sprint. In a personal context, you might track tasks completed, projects advanced, deep work hours logged, or whatever measure reflects your most important outputs. Velocity across four or five sprints reveals trends that any single week obscures.
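Tracking personal velocity needs nothing more than a list of weekly numbers. A minimal Python sketch, with hypothetical data (deep-work hours per sprint) — the half-versus-half comparison is just one simple way to read a trend:

```python
def velocity_trend(weekly_counts: list[float]) -> tuple[float, str]:
    """Average velocity plus a crude direction signal across recent sprints."""
    avg = sum(weekly_counts) / len(weekly_counts)
    mid = len(weekly_counts) // 2
    earlier = sum(weekly_counts[:mid]) / mid
    recent = sum(weekly_counts[mid:]) / (len(weekly_counts) - mid)
    direction = "rising" if recent > earlier else "flat or falling"
    return avg, direction

# Hypothetical: deep-work hours logged over five one-week sprints
avg, direction = velocity_trend([6, 7, 5, 9, 10])
```

Four or five data points is enough; the goal is to notice direction, not to build a dashboard.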

Personal Scrum Retrospective: Strengths and Limitations

The retro structure is excellent at identifying systemic problems and producing specific behavioral commitments. It originated in software teams where diagnostic rigor is valued, and that rigor transfers well to knowledge work.

Its limitation is that it doesn’t cover the administrative clearing that GTD and Bullet Journal emphasize — inboxes, open loops, calendar review. A personal Scrum retro is diagnostic and forward-planning; it’s not a processing system. Many practitioners combine it with a lighter version of the GTD collection phase.

Best for: People who work in Agile teams (the language and cadence are already familiar); engineers and analytical thinkers who want diagnostic rigor in their reviews; anyone who finds that their weekly reviews produce insights but not behavioral change.


Tiago Forte’s Sunday Reset: Capturing, Organizing, Reviewing

Tiago Forte, the creator of the PARA method (Projects, Areas, Resources, Archives) and author of Building a Second Brain (2022), advocates for a Sunday Reset practice that combines his Second Brain framework with a weekly review structure.

The Sunday Reset Structure

Forte’s Sunday Reset is organized around four activities:

Capture: Review all the inputs from the week — notes, screenshots, articles clipped, emails with useful information, ideas jotted in apps. Everything that came in during the week that belongs in your second brain.

Organize: Move captured items into the appropriate PARA locations. Projects (active things you’re working on), Areas (ongoing responsibilities with standards), Resources (topics of interest), Archives (inactive items). This is the organizational heartbeat of the PARA method.

Review: Check the status of active projects. Is each one progressing? Does anything need attention this week? Does anything need to be started, paused, or archived?

Plan: Based on the project review, set intentions for the coming week. What are the two or three things that matter most?

Sunday Reset: Strengths and Limitations

The Sunday Reset is explicitly knowledge management-oriented. It’s particularly valuable for people who deal with high volumes of information — researchers, writers, consultants — whose work involves accumulating and synthesizing external knowledge over time. The PARA organization step is something no other weekly review system covers explicitly.

The limitation is that it presupposes a Second Brain setup (Forte’s framework is built around Notion or similar digital notes tools). If you’re not using one, the Capture and Organize steps don’t have a home. It’s also less focused on task management and open loops than GTD.

Best for: Knowledge workers who deal heavily with information processing (researchers, writers, content strategists, consultants); people already using PARA; anyone whose review practice feels too task-focused and needs better information management.


The AI-Augmented Weekly Review: Speed Without Sacrifice

The most recent evolution in weekly review systems is the AI-augmented approach — using a language model to accelerate the analytical and planning steps that take the most time in traditional systems.

The core insight: most of the time cost in a weekly review comes from two activities that AI handles well. Pattern recognition across multiple weeks of data (humans are poor at spotting behavioral trends in their own experience; AI is fast and systematic). And structured synthesis of messy inputs (a brain dump of the week’s events is hard to reason from; a structured summary of wins, leaks, and open loops is immediately useful).

What an AI-Augmented Weekly Review Looks Like

The AI-augmented review has four phases, designed to fit within 30–45 minutes:

Collect (5 minutes): Dump everything into a single document or chat. Calendar events, task completions, anything open, anything notable. This doesn’t need to be organized — that’s the AI’s job.

Analyze (10 minutes): Paste your dump into your AI tool with a structured prompt. A reliable starting prompt:

Here is my week's data. Please:
1. Identify my three biggest wins
2. Identify the clearest pattern in what didn't get done (system-level, not discipline-level)
3. Surface any open loops I may have missed
4. Note any calendar patterns across this week vs. my stated priorities
5. Suggest one specific behavioral change for next week, based only on the evidence above

Week data:
[paste calendar + task dump here]

Reflect (10 minutes): Read the AI output, correct anything it missed or misread, and add your own interpretation. The AI identifies patterns; you assess whether those patterns matter.

Commit (5 minutes): Write three things: this week’s clearest win, next week’s single highest-stakes commitment, one behavioral shift from the pattern analysis.
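If your Collect-phase dump lives in plain lists, the Analyze-phase prompt above can be assembled by a few lines of code, so the review is one paste (or one API call) away. A minimal Python sketch — the section names and list shapes are our own convention, not part of any tool's format:

```python
ANALYSIS_PROMPT = """Here is my week's data. Please:
1. Identify my three biggest wins
2. Identify the clearest pattern in what didn't get done (system-level, not discipline-level)
3. Surface any open loops I may have missed
4. Note any calendar patterns across this week vs. my stated priorities
5. Suggest one specific behavioral change for next week, based only on the evidence above

Week data:
{week_data}"""

def build_review_prompt(calendar, done, open_loops, notes):
    """Flatten the Collect-phase dump into the structured Analyze prompt."""
    sections = [
        "CALENDAR:\n" + "\n".join(calendar),
        "DONE:\n" + "\n".join(done),
        "STILL OPEN:\n" + "\n".join(open_loops),
        "NOTES:\n" + "\n".join(notes),
    ]
    return ANALYSIS_PROMPT.format(week_data="\n\n".join(sections))
```

Paste the returned string into any chat-based AI tool, or send it through whatever API your tool exposes.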

Beyond Time (beyondtime.ai) is purpose-built for this workflow — its weekly review module accepts a free-form data dump and returns a structured Win/Leak/Shift summary with AI-generated implementation intentions you can schedule directly to your calendar.

AI-Augmented Review: Strengths and Limitations

The AI-augmented approach is the fastest path to consistent practice. By lowering the time cost (30 minutes instead of 60–90) and the skill cost (AI handles pattern analysis), it’s the most accessible entry point for someone starting from nothing.

The limitation is dependency and depth. If you lose access to your AI tool, the review stops working. And AI pattern analysis is only as good as the data you provide — a shallow dump produces shallow analysis. The GTD and Bullet Journal systems build organizational rigor into the review process itself; the AI-augmented approach relies on you to provide good data.

Best for: People starting from scratch; busy practitioners who need a 30-minute ceiling; teams that want to combine a personal review with an AI-powered planning session; anyone who has tried longer systems and abandoned them.


How to Choose the Right System

The question isn’t which system is best in the abstract — it’s which system you will actually do every week for the next six months.

Start with GTD if: You already use GTD, or you need comprehensive open-loop capture and have the time for a 60–90 minute weekly session.

Start with Bullet Journal if: You’re already a Bullet Journal practitioner, or you want a fundamentally analog review that kills zombie tasks through migration cost.

Start with Personal Scrum if: You work in an Agile environment, or your reviews produce insights but rarely change your behavior — the retrospective structure is specifically designed for behavioral commitment.

Start with Sunday Reset if: Your problem isn’t task management but information overload — you capture more than you can synthesize, and your reviews feel disorganized because you haven’t addressed your knowledge management layer.

Start with AI-augmented if: You’ve tried other systems and abandoned them, or your biggest constraint is time, or you’re starting fresh and want the fastest path to consistent practice.

One important note: these systems are not mutually exclusive. Many sophisticated practitioners combine them — a GTD collection phase to start, an AI analysis step in the middle, a retro-style commitment step at the end. The best review system is often an intentional hybrid.


The Weekly Review Framework We Recommend: The SCAN Method

Across all the systems above, four activities appear in every effective weekly review regardless of format. We call these the SCAN phases:

Sweep: Clear your inboxes, process loose notes, and capture any open loops still floating in your head. This is the GTD “Get Clear” function — it prevents the review from operating on incomplete data.

Check: Review your calendar for the past week and the next two weeks. Review your projects list. Review your waiting-for items. Make sure everything that matters is current and visible.

Analyze: Identify patterns. What got done? What got skipped, and why? What surprised you? This is where AI accelerates the traditional process — a prompt that asks for pattern analysis across your week’s data consistently surfaces insights that unaided reflection misses.

Navigate: Set your intentions for the coming week. Not an exhaustive task list — a directional commitment. What are the two or three outcomes that would make next week a success? What one behavioral change are you carrying forward from the analysis?

The SCAN method is compatible with any tool set and any time budget from 20 to 90 minutes. Strip it back to 20 minutes and you’re still doing Sweep (5), Check (5), Analyze (5), Navigate (5). Expand it to 60 minutes and you’re adding depth at each phase.


The Research Case for Weekly Reviews

Three bodies of evidence support the practice:

Reflection research: Di Stefano et al. (2014, Harvard Business School) showed that structured reflection after task performance consistently improves future performance compared to continued practice without reflection. The mechanism is learning consolidation — reflection converts experience into transferable principle.

Implementation intentions: Peter Gollwitzer’s extensive research on implementation intentions shows that specifying when, where, and how you will perform a behavior significantly increases follow-through compared to goal setting alone. A weekly review that ends with a scheduled commitment (“I will complete the proposal draft on Tuesday morning from 9–11am”) is activating this mechanism. One that ends with “I should get the proposal done next week” is not.

Retrospective practices in Agile teams: Research on agile practices by Torgeir Dingsøyr and colleagues found that teams conducting regular retrospectives improved their velocity and reduced defect rates over time compared to teams that did not. The personal equivalent — consistent self-evaluation followed by behavioral adjustment — produces compounding improvement in individual productivity by the same mechanism.


Common Mistakes That Undermine Weekly Review Systems

Reviewing the wrong things: Reviewing tasks without reviewing projects. Projects are the units of meaningful work; tasks are just the components. If your review focuses on task completion rates but never asks whether the right projects are being progressed, you’re optimizing at the wrong level.

No behavioral commitment: A review that ends with observations but no specific, schedulable behavioral change is a journaling session, not a planning system. The commitment step is not optional.

Reviewing too infrequently: Every two weeks is not a weekly review. The seven-day cycle matters because a week is the fundamental unit of knowledge work scheduling — the cadence of recurring meetings, the frame of project deadlines, the rhythm of most teams. A biweekly review operates on a different cycle and misses the compounding effect.

Perfect conditions dependency: Waiting until you have complete data, a quiet hour, or ideal energy. The review that happens in 20 minutes on a Friday afternoon is worth more than the perfect review that doesn’t happen because conditions aren’t right. Protect the habit before optimizing the session.


Your First Weekly Review: Where to Start

If you have no weekly review practice and want to start one this week, the minimum viable version is three questions:

  1. What was this week’s clearest win?
  2. What was this week’s biggest friction, and what caused it (system-level, not discipline-level)?
  3. What is the single most important thing I need to do differently next week?

Write the answers. Schedule the behavioral change. You have now done a weekly review.

From there, add the Sweep and Check phases as the habit stabilizes. Introduce AI analysis when you want pattern recognition across multiple weeks. Adopt whichever full system (GTD, Bullet Journal, Scrum retro) matches your existing tools and working style.

The only requirement is that it happens every week.

Pick a time in the next seven days, block 30 minutes, and ask yourself those three questions.



Tags: weekly review systems, GTD weekly review, productivity frameworks, personal retrospective, AI planning

Frequently Asked Questions

  • What is a weekly review system?

    A weekly review system is a structured practice — usually 30–90 minutes, done once per week — for processing open loops, evaluating progress, and planning the week ahead. Different systems (GTD, Bullet Journal, Scrum retro, AI-augmented) vary in structure and emphasis, but all share the same core function: converting raw experience into deliberate intention before the next week begins.

  • How long should a weekly review take?

    The GTD full weekly review can take 60–90 minutes; Tiago Forte's Sunday Reset runs about 60 minutes; an AI-augmented review can be done in 30–45 minutes because AI handles pattern-spotting and summarization. Most people find that a sustainable weekly review lands between 30 and 60 minutes. Anything longer than 90 minutes is a sign the system has scope creep.

  • Which weekly review system is best for knowledge workers?

    It depends on what you already use. If you're in GTD, the native GTD weekly review is the right fit. If you journal in a Bullet Journal, the migration and reflection built into that system already cover the core. If you work in Agile teams, a personal Scrum retrospective structure is familiar and portable. If you have no prior system, an AI-augmented review is the fastest path to a sustainable 30-minute practice. The best system is the one you actually do every week.

  • Do I need AI to run a weekly review?

    No — GTD practitioners were running rigorous weekly reviews decades before AI tools existed. AI accelerates two specific steps: pattern analysis (spotting trends across multiple weeks that humans miss) and friction-free capture (dictating or pasting messy notes and having them structured automatically). But a paper notebook and 45 honest minutes produce genuine results without any technology.

  • What is the GTD weekly review?

    David Allen's GTD weekly review consists of three phases: Get Clear (collect loose papers, process inboxes to zero, empty your head), Get Current (review action lists, calendar, waiting-for list, project list), and Get Creative (review the someday/maybe list and trigger new ideas). It's the most comprehensive of the major weekly review formats and takes 60–90 minutes when done properly.