The Intention Stack: A Three-Layer Framework for Intentional Living with AI

The Intention Stack maps intentional living across three layers—values, commitments, and daily choices—and shows exactly where AI adds leverage at each layer. Here's how it works in practice.

Every intentional living framework has the same problem: it tells you what to aim for without explaining why things break down between intention and action.

You clarify your values. You set ambitious commitments. Three weeks later, you’re back to operating on default. The gap isn’t a motivation problem. It’s a structural one: most frameworks don’t account for the fact that values, commitments, and daily choices operate on different timescales, respond to different pressures, and require different kinds of attention.

The Intention Stack addresses this by making the three layers explicit—and by assigning AI a specific, non-generic role at each one.


Why Three Layers?

The three-layer structure isn’t arbitrary. It maps to three distinct types of decisions that play out at different speeds and with different degrees of reversibility.

Values are slow and deep. They change over years, not weeks. They’re often partially unconscious until they’re violated. Clarifying them is an act of excavation more than invention.

Commitments are medium-speed. They should be stable enough to survive a bad week, but they do need revision when life circumstances change significantly—a new job, a child, a health event.

Daily choices are fast and volatile. They’re influenced by energy, environment, social pressure, and dozens of small frictions you probably haven’t designed deliberately. They’re where values either get expressed or abandoned in real time.

Most intentional living practice collapses these three layers into one. The result is that people try to manage daily choices with tools calibrated for values work, or try to resolve values conflicts by tweaking their schedule. Neither works well.

The Intention Stack keeps the layers distinct—and uses AI differently at each one.


Layer 1: Values

What belongs here

Values are qualities you care about intrinsically—not for what they produce, but because they reflect who you are or are becoming. Some examples that appear frequently across intentional living literature: integrity, autonomy, craft, connection, curiosity, presence, courage, growth.

Note what’s absent from that list: success, wealth, status, productivity. These are outcomes, not values. They can be the result of acting on values, but they can’t serve as the criteria for evaluating whether an action is worth taking.

The distinction matters because values-as-outcomes are infinitely expansive—there’s always more success to pursue, more wealth to accumulate. Values-as-qualities are satisfiable. You can have a day defined by genuine presence, by actual courage, by real craft. When the quality is present, the value is expressed.

Where AI helps at Layer 1

The most useful thing AI can do at the values layer is ask questions that surface values you’ve never put into words—particularly through the pattern of your emotional responses.

I'm going to describe three situations: one where I felt most out of alignment 
with myself, one where I felt an unexpected sense of pride, and one where I 
said yes to something and immediately regretted it.

[Describe the three situations.]

Based on these, what values do you infer I'm defending or violating in each case? 
Don't give me a values list to choose from—infer from the specifics.

The behavior-inference method is more reliable than self-report. When you describe a specific situation and your emotional response, you’re providing behavioral evidence. AI can surface patterns in that evidence that you’ve normalized or missed.

The values clarification output

Aim for three to five values, stated concisely. Not a paragraph—a word or short phrase. “Intellectual honesty.” “Sustained presence.” “Making things that are genuinely good.” The specificity matters. “Authenticity” is too abstract to act on. “Saying what I actually think even in uncomfortable rooms” is concrete enough to guide a decision.


Layer 2: Commitments

What belongs here

Commitments are the structured forms your values take in daily life. They’re not aspirations (“I want to be more present”) and they’re not goals with completion dates (“I will finish the project by October”). They’re durable, ongoing practices that express a value.

The commitment structure that works best is: I [specific recurring behavior] [in what context or timeframe] [because it expresses this value].

For example: “I maintain two uninterrupted morning hours for my most important project, three days per week, because this is how I express the value of craft.”

This format makes the commitment concrete enough to evaluate, and keeps the connection to the underlying value explicit—which matters when the commitment is under pressure and you’re tempted to rationalize dropping it.
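If you keep your commitments in a file or feed them into review prompts, the template above maps naturally onto a small data structure. A minimal sketch in Python; the class and field names are my own illustration, not part of the framework:

```python
from dataclasses import dataclass

@dataclass
class Commitment:
    """One Layer 2 commitment: a behavior, its context, and the value it expresses."""
    behavior: str  # specific recurring behavior
    context: str   # in what context or timeframe
    value: str     # the Layer 1 value it expresses

    def __str__(self) -> str:
        # Renders the article's template:
        # "I [behavior] [context] because this is how I express the value of [value]."
        return (f"I {self.behavior} {self.context} "
                f"because this is how I express the value of {self.value}.")

craft = Commitment(
    behavior="maintain two uninterrupted morning hours for my most important project",
    context="three days per week",
    value="craft",
)
print(craft)
```

Keeping the value as an explicit field, rather than burying it in prose, is what lets a later weekly scan check each commitment against the value it's supposed to serve.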

The common mistake: commitment inflation

The single most predictable failure mode at Layer 2 is having too many commitments. Once you have more than eight to ten active commitments across all life domains, you’ve recreated the overcommitment problem you were trying to solve.

Greg McKeown’s test from Essentialism is useful here: if it’s not a “hell yes,” it should probably be a “no.” Apply this to your commitments list. If you’re not genuinely prepared to defend a commitment when it conflicts with other demands, it’s not really a commitment—it’s an aspiration with better formatting.

Where AI helps at Layer 2

AI is most useful at Layer 2 for two things: designing commitments that are minimal but meaningful, and stress-testing them before they fail.

Here are my three core values: [list them].

For each value, I want to write one commitment that passes the following test: 
it's specific enough to evaluate, robust enough to survive a demanding week, 
and minimal enough that I'm not setting myself up for failure.

Draft one commitment per value. Then for each one, tell me: what's the most 
likely week-three failure mode? What rationalization will I use when I drop it?

The rationalization question is the key move. AI can anticipate the self-justifications you’ll reach for—“it was an unusually busy week,” “I’ll make it up next week,” “I technically did the spirit of the commitment”—before you use them. Having named the rationalization in advance makes it harder to deploy unconsciously.

Commitments need environmental support

A commitment held by willpower alone has a short half-life. Each commitment should have at least one environmental condition that makes keeping it the path of least resistance. This is the domain of behavioral economics, specifically Thaler and Sunstein's insight about choice architecture: defaults are disproportionately influential.

Ask AI to identify the lowest-friction environmental adjustment for each commitment:

My commitment is: [state it]. 

What's the single environmental, scheduling, or structural change that would 
make breaking this commitment slightly more effortful than keeping it? 
Think small—I'm not redesigning my life, I'm tipping the default.


Layer 3: Daily Choices

What belongs here

Layer 3 is where everything you’ve built at Layers 1 and 2 gets tested against reality. The daily choice layer includes: which tasks you do first, which meetings you accept, which requests you defer or decline, how you spend the transitions between structured activities, and what you do when the plan breaks down.

These choices can’t all be scripted in advance. But they can be reviewed, and the patterns in them can be surfaced quickly with AI.

The Weekly Layer 3 Scan

The most effective Layer 3 practice is a ten-minute weekly scan: not a journal, not a deep review, just a pattern check. Beyond Time's daily planning workflow supports this kind of quick alignment check between what you intended and what actually happened, lowering the friction of logging to the point where the practice can actually sustain itself.

The prompt for the weekly scan:

Here are my current commitments: [list them].

Here's a brief summary of how I spent my time this week: [honest summary—
3-5 sentences per commitment area].

Where did my daily choices align with my commitments? Where did I drift? 
Flag any pattern you see—especially if the same type of drift is appearing 
for the second or third consecutive week.
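If your commitments and weekly summaries already live in structured form, the scan prompt above can be assembled programmatically rather than retyped each week. A sketch under that assumption; the function name and argument shapes are hypothetical, not part of any tool:

```python
def build_weekly_scan_prompt(commitments, summaries):
    """Assemble the weekly Layer 3 scan prompt.

    commitments: list of commitment statements (strings)
    summaries: dict mapping each commitment area to an honest 3-5 sentence summary
    """
    lines = ["Here are my current commitments:"]
    lines += [f"- {c}" for c in commitments]
    lines.append("")
    lines.append("Here's a brief summary of how I spent my time this week:")
    for area, summary in summaries.items():
        lines.append(f"{area}: {summary}")
    lines.append("")
    lines.append(
        "Where did my daily choices align with my commitments? Where did I drift? "
        "Flag any pattern you see, especially if the same type of drift is "
        "appearing for the second or third consecutive week."
    )
    return "\n".join(lines)

prompt = build_weekly_scan_prompt(
    ["I maintain two uninterrupted morning hours, three days per week, "
     "because this expresses the value of craft."],
    {"craft": "Hit two of three deep-work mornings; Thursday was lost to meetings."},
)
print(prompt)
```

The payoff of templating the prompt is consistency: the question wording stays fixed week to week, so differences in the AI's answers reflect differences in your weeks, not in your phrasing.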

What to do with drift signals

Not all drift requires action. Some drift is situational: one bad week doesn’t indicate a systemic problem. The signal that matters is when the same type of drift appears for three or more consecutive weeks.

At that point, the question is diagnostic: is the commitment unrealistic? Is an environmental condition working against it? Or has something changed at the values layer—do you actually still care about this?

The answer determines the response. Unrealistic commitment → simplify it. Environmental problem → fix the friction. Values shift → update Layer 1 and rebuild from there.
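The decision rule in the last two paragraphs is mechanical enough to write down directly. A sketch in Python; the diagnosis labels are my own shorthand for the three cases above:

```python
def drift_response(consecutive_weeks: int, diagnosis: str) -> str:
    """Map a drift signal to a response, per the three-week rule.

    One or two weeks of drift is situational noise. Three or more consecutive
    weeks triggers the diagnostic, and the diagnosis determines the response.
    """
    if consecutive_weeks < 3:
        return "no action: situational drift"
    responses = {
        "unrealistic_commitment": "simplify the commitment",
        "environmental_problem": "fix the friction",
        "values_shift": "update Layer 1 and rebuild from there",
    }
    # An unrecognized diagnosis means the diagnostic itself hasn't been run yet.
    return responses.get(diagnosis, "run the diagnostic: which of the three is it?")
```

For example, `drift_response(2, "values_shift")` returns the no-action answer regardless of diagnosis, because the three-week threshold hasn't been crossed.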


How the Layers Work Together

The Intention Stack’s main contribution isn’t any single layer—it’s the vertical relationship between them.

Drift almost always flows downward: a gradual erosion at Layer 3 (daily choices) that slowly hollows out Layer 2 (commitments), until Layer 1 (values) becomes purely theoretical. You still say you value craft, but you haven’t done deep work in two months. You still say you value presence, but you haven’t had a phone-absent dinner in six weeks.

AI’s role in the stack is to make vertical drift visible before it reaches the values layer. By the time drift shows up as a values crisis—the feeling that your life doesn’t reflect what you care about—you’ve usually been drifting at the daily choice level for months.

The weekly Layer 3 scan interrupts this early. The monthly commitment review catches medium-term drift. The quarterly values review catches the slow shifts.

None of this requires hours of reflection. The whole maintenance practice runs on about twenty minutes per week once the initial layers are established.


Getting Started

Build the stack once, fully, in this order: values first, then commitments, then a weekly review cadence.

Don’t skip ahead to Layer 3. Designing a daily review practice before you have clear commitments produces noise, not signal. The commitment-to-daily-choice connection is what gives the review its diagnostic value.

This week, spend thirty minutes on Layer 1 using the behavior-inference prompt above. That’s the foundation everything else builds on.

Tags: Intention Stack, intentional living framework, AI planning, values commitments daily choices, life design

Frequently Asked Questions

  • What is the Intention Stack?

    The Intention Stack is a three-layer framework: values (what you care about intrinsically), commitments (the structured practices that express those values), and daily choices (the micro-decisions that either honor or erode commitments). AI is most valuable at detecting drift between layers.
  • How is the Intention Stack different from other intentional living frameworks?

    Most frameworks focus on one layer—essentialism on commitments, Stoicism on daily choices, Ikigai on values. The Intention Stack works across all three and explicitly maps where AI adds leverage at each level.
  • How often should I run the Intention Stack review?

    Daily choices benefit from a weekly ten-minute AI scan. Commitments should be reviewed monthly. Values need only a quarterly check—they change slowly but do change.
  • What if my values and commitments conflict with each other?

    That tension is data. It usually signals either an unrealistic commitment, an unarticulated higher value that's pulling in a different direction, or an environment that's systematically hostile to the commitment. AI can help you diagnose which.