The problem wasn’t that Nadia didn’t have goals. She had OKRs, a Q3 planning doc, a Notion page for each objective, and a Linear board full of tickets. She had more structured goal documentation than most people she knew.
The problem was that none of it talked to each other.
By week six of Q3, she’d completed 47 Linear tickets and made almost no progress on her most important objective. The tickets had been real work — legitimate, useful things. But they were reactive work, driven by Slack threads and engineering requests, not by the goals she’d set in July. And because nothing in her environment connected her daily task work to those goals, she hadn’t noticed the drift until the Q3 review was two weeks away.
What follows is a detailed account of how Nadia restructured her stack over a single weekend, what changed in practice, and what the system looks like a full quarter later.
(Note: Nadia is a composite persona built from real workflows. Her tool choices and outcomes represent patterns we see across product managers in mid-stage tech companies.)
The Diagnosis: Three Disconnected Systems
Before rebuilding, Nadia mapped what she actually had:
System 1: Goals (Notion). One page per OKR, with key results listed. Updated at the start of each quarter and rarely touched again. No connection to Linear, no connection to her calendar.
System 2: Tasks (Linear). The living system, updated constantly. Tickets created from Slack messages, engineering requests, customer feedback, and her own planning. Organized by project team, not by OKR. No goal tags.
System 3: Calendar (Google Calendar). Dense with meetings. Three or four “PM work” blocks per week, unlabeled. No indication of which goal each block served.
The AI component — Claude — was used occasionally for drafting, but never systematically for goal review. When she did use it, she started each session by trying to remember where things stood, which meant the quality of the conversation was limited by the quality of her memory.
Three systems. Zero connections. A goal review that required manually reconciling all three before she could answer a basic question.
The Redesign: Building the SSoT
The first decision was choosing the canonical source: where does goal truth live?
Nadia already had Notion — but she had it set up as a series of pages rather than a database. Pages are good for writing; databases are good for structured data and queries. She rebuilt her OKR tracking as a Notion database with these fields:
- Goal ID (Q3-01, Q3-02, etc.)
- Objective (the OKR objective text)
- Key Result (the specific measurable outcome)
- Progress % (a number field, manually updated weekly)
- Status (On Track / At Risk / Behind / Complete)
- This Week’s Milestone (a brief note on the specific advancement expected this week)
- Update Log (a text field with dated entries — her weekly reflections on progress)
The database view became her SSoT, the single source of truth for anything goal-related: progress, status, and current focus all lived here. The Notion pages she’d been using were archived. No duplicate records.
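For anyone who wants to stand up the same structure programmatically, here is a minimal sketch using the official notion-client Python SDK. The property names mirror Nadia's schema; the token, parent page ID, and exact property types are assumptions to check against Notion's current API reference.

```python
# Sketch: create an OKR tracking database via the Notion API (notion-client SDK).
# Assumes NOTION_TOKEN is set and PARENT_PAGE_ID points at an existing page the
# integration can access; both are placeholders.
import os
from notion_client import Client

notion = Client(auth=os.environ["NOTION_TOKEN"])

okr_db = notion.databases.create(
    parent={"type": "page_id", "page_id": os.environ["PARENT_PAGE_ID"]},
    title=[{"type": "text", "text": {"content": "Q3 OKRs (SSoT)"}}],
    properties={
        "Goal ID": {"title": {}},                      # Q3-01, Q3-02, ...
        "Objective": {"rich_text": {}},
        "Key Result": {"rich_text": {}},
        "Progress %": {"number": {"format": "percent"}},
        "Status": {"select": {"options": [
            {"name": "On Track"}, {"name": "At Risk"},
            {"name": "Behind"}, {"name": "Complete"},
        ]}},
        "This Week's Milestone": {"rich_text": {}},
        "Update Log": {"rich_text": {}},               # dated weekly reflections
    },
)
print("Created database:", okr_db["id"])
```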
The First Satellite: Linear
Reconnecting Linear to the SSoT required one structural change: adding a goal label system.
She created five Linear labels corresponding to her five OKRs: goal-q3-01, goal-q3-02, and so on. She spent 45 minutes retroactively tagging open tickets. Everything without a goal tag was moved to a “reactive” project — acknowledged as real work, but clearly separated from goal-advancing work.
Going forward, every ticket she created got a goal tag at creation. Tickets without goal tags were explicitly categorized as reactive — which made visible, for the first time, the ratio of reactive to goal-directed work each week.
The connection between Linear and Notion was initially manual: each Sunday, she filtered Linear by goal tag and counted completed tickets per goal. This took seven minutes. She noted it in the Notion update log with a date. The manual step was intentional — she wanted to see the data with her own eyes before automating it away.
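The seven-minute count is easy to do by hand, but it can also be scripted against Linear's GraphQL API once the labels exist. The sketch below is one way to do it: the endpoint and the issues query exist, but the exact filter shape and the choice to filter completion dates client-side are assumptions to verify against Linear's current schema.

```python
# Sketch: tally this week's completed, goal-tagged Linear issues via GraphQL.
# LINEAR_API_KEY is a personal API key read from the environment; the label
# filter shape is an assumption, so check Linear's schema before relying on it.
import os
from datetime import datetime, timedelta, timezone

import requests

QUERY = """
query GoalIssues($label: String!) {
  issues(filter: { labels: { name: { eq: $label } } }) {
    nodes { identifier title completedAt }
  }
}
"""

week_start = datetime.now(timezone.utc) - timedelta(days=7)

for label in ["goal-q3-01", "goal-q3-02", "goal-q3-03", "goal-q3-04", "goal-q3-05"]:
    resp = requests.post(
        "https://api.linear.app/graphql",
        json={"query": QUERY, "variables": {"label": label}},
        headers={"Authorization": os.environ["LINEAR_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    nodes = resp.json()["data"]["issues"]["nodes"]
    done = [
        n for n in nodes
        if n["completedAt"]
        and datetime.fromisoformat(n["completedAt"].replace("Z", "+00:00")) >= week_start
    ]
    print(f"{label}: {len(done)} completed this week")
```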
After four weeks, she built a Zapier flow: when a Linear ticket with a goal label is marked complete, append a row to a Google Sheet log with the ticket title, goal label, and completion date. A second Zap ran every Sunday evening and pulled that week’s log entries into a summary, which was emailed to her before Monday morning.
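The same roll-up the second Zap produces can also be scripted from the log itself. A minimal sketch follows, assuming the sheet has been exported as a CSV with ticket_title, goal_label, and completed_at columns (hypothetical names mirroring what the Zap writes).

```python
# Sketch: weekly roll-up of goal-tagged completions from a CSV export of the log.
# Column names (ticket_title, goal_label, completed_at) are assumptions; adjust
# them to whatever the real sheet uses.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

week_start = datetime.now() - timedelta(days=7)
per_goal = defaultdict(list)

with open("completion_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        completed = datetime.fromisoformat(row["completed_at"])
        if completed >= week_start:
            per_goal[row["goal_label"]].append(row["ticket_title"])

for goal, tickets in sorted(per_goal.items()):
    print(f"{goal}: {len(tickets)} completed")
    for title in tickets:
        print(f"  - {title}")
```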
She didn’t automate the Notion progress field update itself — she found that manually calculating the percentage was a five-minute exercise that kept her calibrated. Automation can make you passive about data you should be actively thinking about.
The Second Satellite: Google Calendar
Calendar integration required nothing more than a labeling convention.
Nadia went through her standing meetings and recurring blocks and added a goal tag to every event that was genuinely goal-directed. “PM Work” became “PM Work — Goal Q3-02: Activation Funnel.” “Deep work” became “Deep Work — Goal Q3-01: Retention Metric.”
Reactive meetings — engineering syncs, standups, reviews without a specific goal connection — stayed unlabeled. This made the goal-time allocation visible at a glance: she could look at her week and immediately see the ratio of goal-directed to reactive time.
The quantitative step was adding two time-log fields to her Notion database: each week, she recorded planned goal hours (from the calendar) and actual goal hours (the same calendar blocks, adjusted for the ones she’d canceled or shortened). The gap was consistent and uncomfortable: she’d planned more time on her top goal than she actually delivered, every week for the first six weeks.
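The planned-hours half of that log can be read straight off the calendar, because the goal tag lives in the event title. Below is a minimal sketch against the Google Calendar API; it assumes OAuth credentials have already been obtained through the standard google-auth flow and that goal-directed events carry a "Goal Q3-xx" tag in their summary, per Nadia's naming convention.

```python
# Sketch: sum planned goal hours per OKR from goal-tagged calendar events.
# Assumes `creds` was obtained via the standard google-auth OAuth flow (omitted)
# and that goal events contain "Goal Q3-xx" in the event title.
import re
from collections import defaultdict
from datetime import datetime, timedelta, timezone

from googleapiclient.discovery import build

def planned_goal_hours(creds):
    service = build("calendar", "v3", credentials=creds)
    now = datetime.now(timezone.utc)
    events = service.events().list(
        calendarId="primary",
        timeMin=(now - timedelta(days=7)).isoformat(),
        timeMax=now.isoformat(),
        singleEvents=True,
        orderBy="startTime",
    ).execute()

    hours = defaultdict(float)
    for event in events.get("items", []):
        match = re.search(r"Goal (Q3-\d{2})", event.get("summary", ""))
        start = event.get("start", {}).get("dateTime")  # all-day events are skipped
        end = event.get("end", {}).get("dateTime")
        if match and start and end:
            duration = (
                datetime.fromisoformat(end.replace("Z", "+00:00"))
                - datetime.fromisoformat(start.replace("Z", "+00:00"))
            )
            hours[match.group(1)] += duration.total_seconds() / 3600
    return dict(hours)
```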
This data became the anchor of her weekly review conversation with Claude. “I planned four hours on Goal Q3-01 this week and delivered 2.5. Here’s what took the time instead. Help me think through whether this represents a prioritization problem or a scope problem in how I’m thinking about this goal.”
The Third Satellite: Claude
The AI component required a prompt template, not an integration.
She designed a weekly review prompt with five elements, filled in from the Notion SSoT and the Linear completion log:
Weekly Goal Review — [Date]
GOAL STATUS (from Notion database):
[paste current status table]
COMPLETED THIS WEEK (from Linear log):
[paste goal-tagged completions]
TIME ALLOCATION (from calendar):
[planned vs. actual hours per goal]
CONTEXT (free-form):
[anything not captured in the above — blockers, surprises, decisions made]
QUESTIONS FOR THIS REVIEW:
1. Which goals are on track, at risk, or behind — and why?
2. Is the time I'm allocating to each goal appropriate for its priority level?
3. What is the single most important thing to advance Goal Q3-01 next week?
The template took 10 minutes to fill in from the SSoT. The resulting Claude conversation was 15-20 minutes. Total review time: 25-30 minutes. Total planning quality improvement: significant.
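Filling the template is a copy-and-paste exercise, but the whole review can also be scripted end to end. Here is a minimal sketch using the Anthropic Python SDK; the template text is Nadia's, while the helper function, the placeholder model string, and the pre-gathered input strings are illustrative assumptions.

```python
# Sketch: assemble the weekly review prompt from pre-gathered data and send it
# to Claude via the Anthropic Python SDK. ANTHROPIC_API_KEY is read from the
# environment; the model string is a placeholder, so substitute a current model.
import anthropic

TEMPLATE = """Weekly Goal Review — {date}

GOAL STATUS (from Notion database):
{status_table}

COMPLETED THIS WEEK (from Linear log):
{completions}

TIME ALLOCATION (from calendar):
{time_allocation}

CONTEXT (free-form):
{context}

QUESTIONS FOR THIS REVIEW:
1. Which goals are on track, at risk, or behind — and why?
2. Is the time I'm allocating to each goal appropriate for its priority level?
3. What is the single most important thing to advance Goal Q3-01 next week?
"""

def run_weekly_review(date, status_table, completions, time_allocation, context):
    client = anthropic.Anthropic()  # uses ANTHROPIC_API_KEY from the environment
    prompt = TEMPLATE.format(
        date=date,
        status_table=status_table,
        completions=completions,
        time_allocation=time_allocation,
        context=context,
    )
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use a current model name
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```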
When she tried Claude with Beyond Time (beyondtime.ai) to cross-check her time allocation data against the calendar logs she’d been tracking manually, the comparison confirmed what she’d suspected — her calendar planning was optimistic in a way that her actual time log didn’t support. She started building a 20% buffer into all goal-time estimates.
What Changed Over a Full Quarter
A full quarter in, the system has a different character than it did in the diagnostic phase.
The ratio of goal-directed to reactive work is visible every week. This alone changed her behavior — knowing the ratio is tracked makes her more deliberate about accepting new reactive tickets and declining or deferring meeting invitations that don’t serve a goal.
The weekly review takes 30 minutes instead of 75. The data-gathering step — which used to require visiting four apps and manually reconciling information — now takes 10 minutes because the SSoT is current and the logs are already organized.
The Claude conversations are qualitatively better. When the AI has current, structured goal data rather than a verbal summary from memory, the questions it asks are more specific and the analysis cuts closer to the real issue. She estimates that the AI’s usefulness roughly doubled when she moved from memory-based prompts to data-backed ones.
Progress numbers are honest. Because the SSoT is updated weekly with dated logs, she can’t rationalize “we’re basically on track” when the numbers say otherwise. The update log has a record of the last four weeks that’s hard to rewrite.
What Still Doesn’t Work Perfectly
The system has gaps worth acknowledging.
Goal-tagging discipline in Linear degrades when she’s under acute pressure. During a particularly intense two-week sprint, she stopped tagging new tickets, which produced a gap in the weekly log. The manual reconciliation step at the end of that fortnight took 30 minutes rather than seven.
The time allocation tracking is still manual, which means it depends on her updating the Notion log each week. When she misses a week, the data is gone.
The Zapier flow occasionally delays — she’s noticed some Linear completions appearing in the Google Sheet log 24-48 hours after completion rather than same-day. For weekly summaries this doesn’t matter; for anything more granular it would.
These are solvable problems. The more important observation is that the imperfect connected system is dramatically better than the perfect-looking but disconnected one she had before.
Spend 30 minutes this week converting your goal notes from a freeform page to a structured database with a progress field and a dated update log — that single change is the most important step in building a system that works.
Tags: product manager goal system, connected productivity stack, Notion Linear integration, AI goal review, case study
Frequently Asked Questions
Does this case study approach work for non-product managers?
Yes. The specific tools (Notion, Linear) are common in tech environments, but the structural approach — SSoT for goals, task manager as satellite, calendar as time-allocation record, AI assistant as review layer — applies to any knowledge worker role. The key decisions (where does goal truth live, how do tasks connect to goals, how does review get structured) are the same regardless of job title or industry. Substitute your task manager for Linear and your preferred AI for Claude, and the architecture transfers.
How long did it take to set up this system?
The initial setup (creating the Notion goal database, tagging existing Linear tickets, and writing the Claude review prompt) took about four hours spread across two evenings. The system ran in manual-update mode for the first four weeks, and the Zapier automation was added after that. The staged approach (structural first, automation second) meant the system was useful immediately rather than waiting for everything to be perfect before starting.
What were the biggest surprises from using this system for a quarter?
Two things stood out. First, the time-allocation data was humbling — the actual hours dedicated to the most important goal were consistently lower than the planned hours, by an average of about 40%. Before the connected system, this gap was invisible. Second, the Claude review sessions became the most valuable part of the workflow after the first month. Having the AI work from current data rather than remembered summaries changed the quality of the analysis significantly.