There is a version of OKRs that most teams encounter: write some Objectives in a spreadsheet, attach some numbers, review them at the end of the quarter. That version produces minimal value and tends to generate cynicism about goal-setting frameworks in general.
Then there is what Andy Grove and John Doerr actually built: a system designed around specific behavioral and organizational insights about how teams lose focus, how ambition gets eroded, and how strategy gets disconnected from execution.
This article explains the second version — the design logic behind each component of the framework, and why the details that seem optional are usually the ones that matter most.
Why the Objective and the Key Result Are Kept Separate
The most fundamental architectural decision in OKRs is the separation between the qualitative Objective and the quantitative Key Results. This isn’t just a formatting convention.
Grove’s insight, building on Peter Drucker’s Management by Objectives, was that teams routinely confuse direction with measurement. When you write “increase revenue by 30%” as a single goal, you’ve collapsed two distinct things: the strategic direction (grow) and the measurement of whether you got there (30%). That collapse has consequences.
First, numeric-only goals invite local optimization. If the target is “30% revenue growth,” the team will find the path of least resistance to 30% — even if that path undermines long-term strategy. A strong Objective (“become the preferred solution for enterprise customers expanding into Europe”) constrains the space of acceptable paths. Not all roads to 30% revenue qualify.
Second, qualitative Objectives create a shared language the team can use for everyday decision-making. Grove described this as the goal’s ability to act as a “decision filter.” When a team member is choosing between two projects, they can ask: which one more directly advances the Objective? That question only works if the Objective is a clear direction, not a number.
Key Results then provide the accountability that the qualitative Objective can’t. Without the numbers, the Objective remains an aspiration. With them, the team has an unambiguous answer to “did we get there?”
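The separation can be made concrete as a small data structure: the qualitative direction and the quantitative measures live in distinct fields, and only the Key Results carry scores. A minimal sketch in Python (the names and the example Objective are illustrative, not prescribed by the framework):

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str        # e.g. "Close 12 enterprise deals in the EU"
    target: float           # the unambiguous "did we get there?" number
    current: float = 0.0

    def score(self) -> float:
        """Progress toward target on the 0.0-1.0 scale, capped at 1.0."""
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    direction: str                                   # qualitative: the decision filter
    key_results: list[KeyResult] = field(default_factory=list)

obj = Objective(
    direction="Become the preferred solution for enterprise "
              "customers expanding into Europe",
    key_results=[
        KeyResult("Close 12 enterprise deals in the EU", target=12, current=8),
    ],
)
```

Note that the `direction` string has no score of its own; it exists to constrain which paths to the numbers count as acceptable.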
The Logic of the 70% Target
The most counterintuitive piece of OKR design is the scoring norm for aspirational goals. Doerr writes in Measure What Matters: “If you only set goals you know you can achieve, why set goals at all?”
The 70% expectation is built on a specific bet about human psychology and organizational incentives. Most goal-setting systems, implicitly or explicitly, treat anything below full achievement as failure. That treatment causes people to set goals they’re confident they can hit. The result is a goal-setting system that measures busyness rather than ambition.
Grove’s argument was that the expected completion rate for a stretch goal should be calibrated to produce genuine stretch. If a team consistently hits 100% on every OKR, the targets weren’t hard enough. If a team consistently hits 40%, the targets may be poorly calibrated or the team may be blocked by systemic issues that need addressing.
The 70% norm only works if it is actually honored — meaning teams that score 0.65 on an aspirational OKR are treated as having performed well, not as having narrowly escaped failure. Organizations that say “70% is fine” but then penalize teams for missing 1.0 produce the worst of both worlds: ambitious-looking goals that nobody actually commits to pursuing.
How the Commit vs. Aspirational Distinction Operates in Practice
This distinction is the most important nuance in the OKR framework, and the one most implementations fail to operationalize clearly.
Committed OKRs are operational commitments. The team is saying: we will deliver this. If something prevents delivery, that’s a significant problem requiring explanation and correction. Committed OKRs typically cover: shipping commitments, operational uptime, revenue targets with agreed contracts, compliance deadlines. These should score 1.0. A score of 0.7 on a Committed OKR is not “in the success zone.”
Aspirational OKRs are strategic bets. The team is saying: we believe this is the most important direction to pursue, and here is our best-calibrated estimate of how far we can go if we execute well. The 70% norm applies here.
In practice, a quarterly OKR set typically contains a mix of both types. An engineering team might have one Committed OKR (ship the migration by the deadline) and two Aspirational OKRs (improve test coverage, reduce mean time to recovery).
The critical operating step is labeling each OKR explicitly at the time it’s written. The label changes the grading conversation at the end of the quarter. Without it, teams either celebrate 0.65 when they should be concerned (treating everything as aspirational) or penalize 0.65 when it represents genuine stretch performance (treating everything as committed).
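The label can be encoded directly, so that the end-of-quarter interpretation of a score depends on the type chosen at writing time. A hypothetical sketch (the threshold values and return strings are assumptions, chosen to match the grading norms described above):

```python
from enum import Enum

class OKRType(Enum):
    COMMITTED = "committed"
    ASPIRATIONAL = "aspirational"

def interpret(score: float, okr_type: OKRType) -> str:
    """Grade a 0.0-1.0 score according to the OKR's declared type.

    Committed OKRs are pass/fail at 1.0; aspirational OKRs treat
    scores near the 70% norm as genuine success.
    """
    if okr_type is OKRType.COMMITTED:
        return "delivered" if score >= 1.0 else "miss: requires root-cause explanation"
    if score >= 0.6:
        return "in the success zone for a stretch goal"
    if score >= 0.4:
        return "below stretch expectations: check calibration"
    return "likely blocked or miscalibrated: investigate"

print(interpret(0.65, OKRType.ASPIRATIONAL))  # in the success zone for a stretch goal
print(interpret(0.7, OKRType.COMMITTED))      # miss: requires root-cause explanation
```

The same 0.65 or 0.7 score produces opposite conversations depending on the label, which is exactly why the label has to exist before the quarter starts.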
The Alignment Architecture: How Goals Connect Across Levels
OKRs are often described as a “cascading” system, which creates the wrong mental model. Cascading implies that company-level goals are simply broken down into team-level goals, and then again into individual goals. That is top-down compliance dressed up as goal-setting.
What Grove and Doerr actually describe is a bidirectional alignment system.
Top-down: Company leadership sets Objectives that represent the most important strategic priorities for the period. These create the frame within which team OKRs should make sense. Teams that draft goals without looking at company OKRs will produce locally sensible but strategically disconnected plans.
Bottom-up: Teams look at company OKRs and ask: “What does our team need to accomplish to make these company priorities succeed?” The team-level OKRs are not copies of the company OKRs — they are the team’s best articulation of their specific contribution. This distinction matters enormously. A sales team’s contribution to “become the leading platform for mid-market operations” looks entirely different from a product team’s contribution to the same Objective.
Doerr describes the target balance in mature OKR organizations as roughly 40% top-down, 60% bottom-up. The 60% represents team-initiated goals that emerge from proximity to customers, product, and technical reality — goals that leadership may not have anticipated but that clearly serve the strategic direction.
Cross-functional alignment is the third dimension. Some Objectives require contribution from multiple teams. An Objective owned by the product team might have Key Results that depend on contributions from engineering, design, and marketing. Making those dependencies explicit — and ensuring all relevant teams have aligned OKRs — is one of the most operationally valuable things the framework can produce.
The Weekly Check-In: Why It’s Not a Status Meeting
Grove was explicit in High Output Management that the review cadence is as important as the goals themselves. An OKR set that gets reviewed quarterly is a plan, not a management tool.
The weekly check-in has a specific purpose that gets lost when teams treat it as a progress report.
The purpose is blocker identification.
When a Key Result is tracking below its confidence target, the right question is not “why are you behind?” The right question is: “What is the specific thing preventing progress, and what does the team need to do about it this week?”
Grove described this as the operational difference between a manager who asks “how’s it going?” and a manager who asks “what do you need from me?” The first generates updates. The second generates action.
A functional weekly check-in runs in roughly 30 minutes and covers three questions per Key Result:
- What is the current metric?
- What is our confidence that we’ll hit the target?
- What is the biggest thing blocking progress?
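The three questions map naturally onto a simple weekly record, and flagging low confidence early is what makes week-4 course correction possible. A sketch (the field names and the escalation threshold are assumptions, not part of the framework):

```python
from dataclasses import dataclass

@dataclass
class CheckIn:
    key_result: str
    current_metric: float
    confidence: float     # 0.0-1.0 estimate that the target will be hit
    blocker: str

    def needs_escalation(self) -> bool:
        # Below ~50% confidence, the blocker becomes the meeting's
        # agenda item rather than a line in a status report.
        return self.confidence < 0.5

ci = CheckIn(
    key_result="Reduce mean time to recovery to 30 minutes",
    current_metric=55.0,
    confidence=0.4,
    blocker="On-call rotation understaffed",
)
print(ci.needs_escalation())  # True
```

The point of the structure is not the data capture; it is that a low-confidence entry with a named blocker converts "how's it going?" into "what do you need from me?"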
Teams that run this consistently find that most OKR-cycle failures are predictable by week 4 or 5 — early enough to course-correct. Teams that skip the check-in discover the failure in week 12, when there’s nothing left to do.
How Grading Works — and What to Do With the Scores
At the end of each cycle, each Key Result is scored on a 0.0–1.0 scale. The Objective score is typically the average of its Key Results, though some teams weight Key Results differently based on relative importance.
The score distribution across an OKR portfolio tells you something useful:
- Cluster of 0.8–1.0 scores on aspirational OKRs: Goals may have been conservative. Consider raising ambition in the next cycle.
- Cluster of 0.3–0.5 scores: Goals may have been unrealistic, or the team faced structural blockers that need addressing before the next cycle. Investigate before simply resetting.
- Wide variance (some 0.0, some 1.0): May reflect uneven prioritization. Teams often over-invest in the OKRs they’re confident about and under-invest in the ones that scare them.
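The three distribution patterns above can be automated as a rough portfolio diagnostic. The thresholds below are illustrative assumptions, not canonical values:

```python
from statistics import mean, pstdev

def diagnose_portfolio(aspirational_scores: list[float]) -> str:
    """Flag the common score-distribution patterns for aspirational OKRs."""
    avg = mean(aspirational_scores)
    spread = pstdev(aspirational_scores)
    if spread > 0.35:
        return "wide variance: check for uneven prioritization"
    if avg >= 0.8:
        return "clustered high: goals may have been conservative"
    if avg <= 0.5:
        return "clustered low: investigate structural blockers before resetting"
    return "roughly calibrated for stretch goals"

print(diagnose_portfolio([0.85, 0.9, 0.8]))   # clustered high
print(diagnose_portfolio([0.0, 1.0, 1.0]))    # wide variance
```

A diagnostic like this is a prompt for the retrospective conversation, not a verdict; the causal questions in the next paragraph still have to be asked by people.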
After grading, the retrospective conversation should answer: what drove the score? High scores driven by favorable market conditions are different from high scores driven by excellent execution. Low scores caused by a competitor disrupting the market are different from low scores caused by unclear ownership.
The retrospective output is an input to the next cycle’s goal-setting. This is the learning loop that makes OKRs more useful over time, rather than just a recurring administrative task.
How AI Tools Can Strengthen the Framework
Two of the most technically difficult parts of OKR implementation — writing good Key Results and maintaining weekly check-in discipline — are areas where AI assistance adds genuine value.
For Key Result quality, an AI assistant can perform the outcome-vs-activity test automatically. Paste your draft Key Results and ask: “Which of these describes an activity rather than a measurable outcome? Rewrite the activity-based ones as outcome-based Key Results.” The AI’s rewrite won’t always be better than your original, but the comparison will clarify the distinction in a way that abstract explanations often don’t.
For weekly check-ins, structured AI prompting can turn a vague blocker conversation into a specific action item. A prompt like: “Here is our Key Result target and current status. What are the three most likely causes of this gap, and what is the highest-leverage thing we could do this week to close it?” generates a starting point for the discussion rather than an open-ended catch-up.
Beyond Time (beyondtime.ai) integrates OKR tracking with weekly planning, so the check-in data flows into the following week’s priorities automatically — closing the loop between what the team committed to and what ends up on the calendar.
What the Framework Doesn’t Tell You
OKRs are a goal management system, not a strategy system. This is the most important limitation to understand.
The framework assumes you already know which direction to point. It helps you express that direction clearly, measure progress toward it, and maintain alignment across a team. It does not help you figure out which direction is right.
A company with a confused strategy will produce confused OKRs — beautifully formatted, precisely measured, and pointing at the wrong thing. The OKR framework will faithfully execute whatever strategic intent you pour into it.
Grove was aware of this limitation. He described OKRs as a “system of focus” — they clarify priorities and enforce choice between competing demands. But the judgment about what matters most has to come from somewhere else.
This means the most important work happens before the OKR-writing session begins: the strategic thinking about where the business is, where it needs to go, and where the highest-leverage interventions lie. That work is not replaced by OKRs. It is the prerequisite for OKRs producing anything useful.
The One Thing Most Implementations Skip
Most OKR implementations put significant energy into the goal-writing and almost none into the retrospective.
The quarterly retrospective — honest scoring, root-cause analysis of what drove results, and explicit lessons carried forward — is where the compounding value of OKRs is generated. Teams that run three or four rigorous retrospectives have a significantly better model of what they can actually achieve, what their real blockers are, and how to write Objectives that create genuine pull rather than formal compliance.
Teams that skip the retrospective cycle get one quarter of benefit, then repeat the same mistakes indefinitely.
At the end of your next OKR cycle, block two hours for a proper retrospective. Score every Key Result honestly. Identify one thing that worked and one thing that didn’t. Carry both into the next planning session.
That two-hour investment compounds.
Tags: OKR framework, OKR mechanics, how OKRs work, objectives and key results, goal setting systems, Andy Grove, John Doerr
Frequently Asked Questions
What makes OKRs different from regular goal setting?
OKRs separate direction (the Objective) from measurement (the Key Results) and build in an explicit grading philosophy that treats 70% achievement as success for stretch goals. Most goal-setting systems treat any result below 100% as failure, which tends to produce conservative goal-setting.
How often should you update OKRs?
OKRs are typically set quarterly and reviewed weekly. They can be revised mid-cycle if circumstances change materially, but should not be changed simply to protect scores.
What is the role of the weekly OKR check-in?
The check-in surfaces blockers early enough to address them. It's not a status report — it's a problem-identification session. Andy Grove argued that the cadence of the review is as important as the quality of the goals themselves.