The Complete Guide to Intentional Living Frameworks with AI

Most people pursue intentional living through willpower alone. This guide introduces the Intention Stack—a three-layer framework that uses AI to surface drift between your values, commitments, and daily choices.

Most people who pursue intentional living start in the wrong place.

They begin with habits—morning routines, evening rituals, weekly reviews. These aren’t bad practices. But a habit is downstream of something more important: a value. And if you never surface what you actually value, you can build an extremely consistent routine in the service of someone else’s definition of a good life.

This guide introduces the Intention Stack, a three-layer framework for aligning how you live with what you care about—and shows how AI can serve as a reflective partner at each layer. We’ll also cover the major intentional living traditions, what the research says about eudaimonic wellbeing, and the specific prompts that make AI-assisted reflection useful rather than performative.


Why Most Intentional Living Advice Misses the Point

The popular framing of intentional living borrows heavily from self-help productivity culture. It emphasizes morning routines, journaling cadences, vision boards, and annual goal-setting. These tools can be useful. But they treat the symptoms rather than the root cause of drift.

The root cause is this: most people have never clearly articulated what they value, separated from what they were taught to value, what their peers value, or what their current income structure requires them to value.

Mark Manson makes a pointed version of this argument in The Subtle Art of Not Giving a F*ck: choosing your values isn’t a feel-good exercise. It requires accepting that you cannot value everything, and that authentic values often conflict with socially comfortable ones. The person who says they value “family time” and “career achievement” equally, and schedules 60-hour work weeks, has not actually clarified their values. They’ve restated their conflict.

Greg McKeown, in Essentialism, frames the same problem differently: if you don’t deliberately choose what is essential, someone else will choose for you—and they will choose based on their priorities, not yours.

The research literature on wellbeing backs this up. Studies on eudaimonic wellbeing—the Aristotelian concept of flourishing through purposeful action—consistently distinguish it from hedonic wellbeing (pleasure and comfort). Research by Carol Ryff and others identifies autonomy, purpose, and personal growth as central to eudaimonic wellbeing, and finds that pursuing externally-defined success without clarity on internal values tends to undermine rather than support it.

The practical implication: before you design a system, you need to know what the system is for.


What Is the Intention Stack?

The Intention Stack has three layers, arranged in order of abstraction:

Layer 1: Values

These are the qualities you care about intrinsically—not because they produce outcomes, but because they reflect who you are or want to be. Integrity, curiosity, presence, craft, connection. Values are not goals. They don’t have completion dates or metrics. They function as criteria for evaluating whether a given choice is worth making.

Layer 2: Commitments

Commitments are the structured forms your values take in real life. If you value craft, one commitment might be: I maintain a weekly deep-work block for my most important project. If you value connection, a commitment might be: I have dinner with my family four nights per week, phone-absent. Commitments are specific enough to be acted on, but stable enough to survive a busy week without being renegotiated.

Layer 3: Daily Choices

These are the micro-decisions that either honor or erode your commitments—and by extension, your values. What you say yes to at 9 AM. Whether you open email before you’ve done your most important work. Whether you take the meeting that wasn’t in the plan. Daily choices are where intention either holds or evaporates.

The critical insight of the Intention Stack is that drift is almost always vertical. You don’t usually abandon your values consciously. You make a series of small, justified choices at Layer 3 that slowly erode the commitments at Layer 2, until the gap between what you say you care about and how you live becomes too wide to ignore.

AI is most useful at detecting this drift before it becomes a crisis.


The Five Major Intentional Living Traditions

Before building your own stack, it helps to understand the traditions that have shaped modern intentional living practice. Each has real strengths—and distinct blind spots.

Stoic Practice

Stoicism, particularly as interpreted by Ryan Holiday and others drawing on Marcus Aurelius and Epictetus, emphasizes three disciplines: desire (wanting only what is within your control), action (acting virtuously regardless of outcomes), and assent (judging events accurately, without catastrophizing).

The practical Stoic toolkit includes the memento mori practice (contemplating mortality to clarify what matters), the negative visualization exercise (imagining loss to increase appreciation), and the evening review (asking “what did I do well, what could I have done better?”).

Stoicism’s strength is its robustness. It works in bad circumstances, not just good ones. Its weakness as an intentional living framework is that it’s largely reactive—it helps you respond to life well, but it doesn’t actively help you design it.

Ikigai

The Japanese concept of ikigai—loosely, “reason for being”—is often represented in the West as a Venn diagram of four circles: what you love, what you’re good at, what the world needs, and what you can be paid for. The intersection of all four is your ikigai.

The framework is useful for career and purpose questions. It surfaces mismatches: people who love their work but can’t monetize it, or who are paid well for work they find meaningless. The research is somewhat murkier than popular presentations suggest—the original Japanese concept is less grandiose, closer to “everyday reasons to get up in the morning” than a life’s singular purpose.

Its practical weakness is that it can become another form of pressure: the search for the perfect four-way intersection can paralyze people who haven’t found it.

Minimalism

The minimalist tradition, associated with writers like Leo Babauta and the broader slow-living movement, focuses on subtracting what doesn’t serve you rather than adding systems. Its emphasis on voluntary simplicity—fewer possessions, fewer commitments, fewer sources of stimulation—creates mental space for what remains.

Minimalism is valuable as a corrective when someone has over-committed. It’s less useful as a primary framework, because it tells you what to remove but not what to pursue.

Essentialism

Greg McKeown’s Essentialism: The Disciplined Pursuit of Less is probably the most practically useful framework for knowledge workers. Its core idea: almost everything is non-essential, but social pressure, fear of missing out, and the inability to say no lead people to spread their energy across too many things. Essentialism requires deliberate selection of what matters and systematic elimination of everything else.

McKeown’s insight that “if you don’t prioritize your life, someone else will” is perhaps the most actionable formulation of the intentional living problem. The framework is excellent for commitment-setting (Layer 2 of the Intention Stack) but says relatively little about values articulation (Layer 1).

AI-Augmented Reflection

The newest addition to the toolkit, and the most misunderstood. AI is not a framework in the same sense as the above. It’s a reflective partner that can support any framework. Its specific strengths are: asking questions you haven’t asked yourself, identifying inconsistencies between your stated values and your actual behaviors, generating alternative perspectives on situations you’ve framed narrowly, and tracking your thinking over time.

Its weakness is that it will pattern-match to whatever framework you bring. If you ask AI to help you live intentionally without having done the values work, it will help you optimize efficiently toward whatever you already seem to want—which may not be what you actually need.


Building Your Intention Stack with AI

Here is how to build each layer using AI as a partner.

Layer 1: Clarifying Values

The values clarification exercise is the hardest part of the whole process, because values are often unconscious until they’re violated. One effective technique is the regret-minimization exercise, made famous by Jeff Bezos but drawing on research by Thomas Gilovich and Victoria Medvec showing that, over long time horizons, people tend to regret inaction more than action.

AI prompt for values clarification:

I want to identify my core values—not aspirational values, but the ones that actually 
drive my decisions and that, when violated, cause me real distress. 

I'll describe three moments: one where I felt most alive and aligned, one where I felt 
most uncomfortable with a decision I made, and one where I felt proud of a choice even 
though it was costly.

[Describe your three moments]

Based on these, what values do you infer I hold deeply? Then ask me three follow-up 
questions that would help distinguish which of these values I'd protect even when they 
conflict with each other.

This prompt works because it extracts values from behavior rather than aspiration. Most values-clarification exercises ask “what do you value?” People answer with whatever sounds admirable. This prompt asks AI to infer from your actual emotional responses, which is more reliable.

Layer 2: Setting Commitments

Once you have a working list of three to five values, commitments are how those values get structural form. The common mistake is creating too many commitments—one per value per life domain quickly produces an unworkable list.

AI prompt for commitment-setting:

Here are my three core values: [list them].

I want to translate each into one or two commitments—specific, durable practices 
that I can maintain even in a demanding week. A commitment should be concrete enough 
to check (I either did it or I didn't) but meaningful enough to actually honor the value.

For each value, help me draft one commitment. Then play devil's advocate: for each 
commitment I draft, what's the most likely way I'll rationalize abandoning it in 
week three?

The devil’s advocate turn is important. AI’s willingness to surface your likely failure modes is one of its most underused capabilities.

Layer 3: Aligning Daily Choices

The daily layer is where most intentional living practice focuses—and where it’s least effective, because micro-decisions can’t be scripted in advance. What you can do is design your environment and review cadence so that drift is visible quickly.

Beyond Time’s AI planning workflow is designed specifically for this kind of daily alignment check—it logs not just what you did, but what you intended, and surfaces the gap at your end-of-day review. That kind of friction-free daily tracking makes Layer 3 visible without requiring a lengthy reflection session.

AI prompt for daily alignment review:

Here are my three core values and two commitments per value: [list them].

Here's how I actually spent today: [brief summary].

Where did my choices align with my commitments? Where did I drift? For the drifts, 
what pattern do you notice—is this a one-off, or does it suggest a commitment that 
needs revision?

How AI Surfaces Drift Between Layers

Drift detection is where AI adds its most distinctive value to intentional living practice. Human self-assessment is distorted by consistency bias—we tend to rationalize our past choices as intentional, even when they weren’t. AI, given an honest account, can identify patterns we’d prefer not to see.

The most useful drift-detection prompts are longitudinal: they compare what you said you valued two months ago with how you’ve been describing your days since then.

I've been journaling my weekly reflections for the past six weeks. Here are the 
entries: [paste entries].

Without prompting from me, identify: (1) what I seem to be prioritizing based on 
my actual choices, (2) what I claim to value in my stated intentions, and (3) where 
the two diverge consistently. I want honest analysis, not reassurance.

The phrase “I want honest analysis, not reassurance” matters. AI will default to affirmation unless instructed otherwise. This instruction shifts the dynamic from validation to diagnosis.
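If you keep your weekly reflections as dated text, you can assemble this longitudinal prompt mechanically before pasting it into a conversation. The sketch below is one hypothetical way to do that—the function name, entry format, and file-free structure are illustrative assumptions, not part of any particular tool; the instruction text mirrors the prompt above.

```python
# Sketch: build the longitudinal drift-detection prompt from dated journal
# entries. Entry storage and naming here are hypothetical illustrations.
from datetime import date

DRIFT_INSTRUCTION = (
    "Without prompting from me, identify: (1) what I seem to be prioritizing "
    "based on my actual choices, (2) what I claim to value in my stated "
    "intentions, and (3) where the two diverge consistently. "
    "I want honest analysis, not reassurance."
)

def build_drift_prompt(entries: dict[date, str]) -> str:
    """entries maps each reflection date to that week's journal text."""
    # Present entries oldest-first so the model can see the trajectory.
    dated = [f"[{d.isoformat()}]\n{text}" for d, text in sorted(entries.items())]
    header = (
        f"I've been journaling my weekly reflections for the past "
        f"{len(entries)} weeks. Here are the entries:"
    )
    return "\n\n".join([header, *dated, DRIFT_INSTRUCTION])

if __name__ == "__main__":
    sample = {
        date(2024, 5, 10): "Said yes to two new projects. Deep-work block dropped again.",
        date(2024, 5, 3): "Spent most evenings on client work; skipped family dinner twice.",
    }
    print(build_drift_prompt(sample))
```

The point of scripting this is consistency: the instruction wording—including the “honest analysis, not reassurance” line—stays identical across reviews, so changes in the analysis reflect changes in your entries, not changes in how you asked.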


The Most Common Mistakes in Intentional Living Practice

Mistaking aspiration for values. “I value health” is often shorthand for “I feel guilty about not exercising.” Real values survive inconvenience and cost. Test yours by asking: would I defend this value if acting on it cost me something significant?

Treating commitments as goals. Goals have completion dates; commitments don’t. Once you’ve achieved a goal, you stop. Commitments are ongoing. Confusing the two leads to abandoning a commitment because you’ve “achieved” it.

Reviewing too infrequently. A monthly values review is too slow. Drift accumulates in days, not months. A brief weekly Layer 3 check—ten minutes, not an hour—is more effective than a quarterly deep dive.

Using intentional living as another optimization project. This is the trap Greg McKeown is most explicitly warning against: using essentialism techniques to become even more efficiently busy in service of goals you never examined. The question is not “how do I do this better?” It’s “is this worth doing at all?”

Csikszentmihalyi’s research on flow states is sometimes cited in intentional living contexts, but it’s worth being precise about what it shows. Flow—the state of deep engagement where the challenge matches your skill—is associated with high wellbeing, but it doesn’t require that the activity is aligned with your values. People report flow during activities they don’t find meaningful. The Intention Stack’s contribution is to direct that focused engagement toward things that actually matter to you.


A Note on the Limits of Frameworks

No framework is neutral. Each one encodes assumptions about what matters.

Stoicism reflects certain ancient Mediterranean assumptions about virtue and self-mastery. Ikigai, as popularized in the West, reflects certain assumptions about work as the primary arena for meaning—which is contested in the Japanese scholarship from which the concept comes. Essentialism reflects assumptions about individual agency that don’t translate cleanly to people with family care responsibilities or constrained economic choices.

The Intention Stack attempts to be framework-agnostic: it provides the structure (values → commitments → daily choices) without prescribing what your values should be. AI can help populate that structure honestly, but only if you bring intellectual honesty to the exercise.

The place to start isn’t a system. It isn’t a morning routine. It’s the harder question McKeown keeps returning to: if you could only do one thing with the next three months of your life, what would it be—and if you’re not doing that, why?


A Practical Starting Point

This week, take fifteen minutes and write down three moments from the past year: one where you felt most like yourself, one where you felt most out of alignment, and one where you made a difficult choice you’re still proud of. Paste those moments into a Claude conversation and ask it to infer your values from those three stories.

Don’t edit the output for palatability. Use whatever comes back as a starting hypothesis.

That’s Layer 1. The rest follows.



Tags: intentional living, AI life design, values clarification, Intention Stack, essentialism

Frequently Asked Questions

  • What is intentional living?

    Intentional living means making deliberate choices aligned with your core values, rather than defaulting to habit, social pressure, or convenience. It requires identifying what you actually care about, then designing your daily structure around those priorities.
  • How can AI help with intentional living?

    AI serves as a reflective partner—it can surface contradictions between your stated values and how you spend your time, suggest reframings, track drift over time, and help you articulate values you've never put into words. It doesn't set your values for you.
  • What is the Intention Stack?

    The Intention Stack is a three-layer framework: values (what you care about most deeply), commitments (the structured forms your values take), and daily choices (the micro-decisions that either honor or erode your commitments). AI is most useful at detecting gaps between layers.
  • Which intentional living framework is best?

    There is no single best framework. Stoicism-inspired practice works well for people who need a philosophical anchor; Ikigai suits those seeking a sense of purpose at work; essentialism fits high-achievers prone to overcommitment. The Intention Stack can incorporate elements of each.
  • How is intentional living different from productivity optimization?

    Productivity optimization asks how to do more. Intentional living asks whether the things you're doing are worth doing at all. The two can coexist, but optimization without intention often accelerates the wrong things.