AI for Major Life Decisions: Answers to the Questions People Actually Ask

An honest FAQ covering how AI can and can't help with high-stakes personal decisions — from the mechanics of prompting to the research behind the methodology.

These are the questions we see most often from people who are genuinely curious about using AI as a thinking tool for major decisions — but want honest answers rather than optimistic marketing.


About the Basics

Can AI actually help with a major life decision, or is this just hype?

It can help, within specific limits. The genuine value is in what researchers call “decision aid” functions: helping you articulate your values, surfacing considerations you haven’t examined, stress-testing your reasoning, and organizing your thinking. There’s good evidence that structured decision aids improve decision quality — and AI can function as an unusually flexible version of one.

What AI cannot do: know your specific context, feel what you feel, predict your future emotional states reliably, or substitute for the judgment of people who actually know you. Treating it as an oracle — asking “what should I do?” — produces coherent-sounding generic responses, not analysis of your situation.

What kind of decisions is this approach well-suited for?

Decisions that are high-stakes, involve multiple competing criteria, have long time horizons, or are emotionally charged enough to impair your own reasoning. Career pivots, relocation decisions, major financial commitments, relationship inflection points, educational choices.

It’s less valuable for decisions where you already have strong domain expertise (your pattern-recognition is a valid input) or for low-complexity reversible choices (the overhead exceeds the benefit).

Which AI platform should I use?

Any capable LLM — Claude, ChatGPT, Gemini — works. The quality of your prompts matters more than which platform you use. Assigning AI a specific role and providing detailed context produces better output than open-ended questions, regardless of platform.
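
If you're scripting rather than typing into a chat window, the same principle applies through any provider's API: put the role in a system message and your context in the user message. A minimal sketch using the OpenAI Python client (the model name, role wording, and context here are illustrative; the identical pattern works with Claude or Gemini):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The role goes in the system message. The platform matters less
    # than how explicitly the role is framed.
    role = (
        "Play devil's advocate. Make the strongest case against the "
        "decision I am leaning toward, without softening the arguments."
    )
    context = (
        "I'm a 38-year-old software engineer with 12 years in fintech, "
        "leaning toward a product management role at a health tech startup."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable model works
        messages=[
            {"role": "system", "content": role},
            {"role": "user", "content": context},
        ],
    )
    print(response.choices[0].message.content)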


About the Process

What’s the most common mistake people make?

Asking AI to decide. “What should I do?” prompts AI to play an oracle role it’s structurally unsuited for. The output will be coherent and plausible. It won’t be analysis of your situation — it will be pattern completion based on how similar prompts are typically answered.

The fix: assign a role. “Play devil’s advocate and make the strongest case against my current lean” produces genuinely different and more useful output than “what should I do?”

How do I start a session?

Don’t start with analysis. Start with an unfiltered description of the situation — including what you’re afraid of, what you want but feel you shouldn’t, and what feels genuinely undecidable. Then ask AI to interview you before analyzing anything. The questions AI generates often reveal more than the analysis that follows.
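
If you want to run the interview-first pattern programmatically, it is just a standing instruction plus a conversation loop. A minimal sketch, again assuming the OpenAI Python client (the instruction wording and model name are illustrative):

    from openai import OpenAI

    client = OpenAI()

    INTERVIEW_FIRST = (
        "Before analyzing anything, interview me. Ask one clarifying "
        "question at a time about what I'm actually trying to decide "
        "and what I care most about. Only analyze when I type ANALYZE."
    )

    messages = [{"role": "system", "content": INTERVIEW_FIRST}]
    while True:
        user = input("> ")
        if user.strip().lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        print(answer)
        # Keep the transcript so the interview stays coherent across turns.
        messages.append({"role": "assistant", "content": answer})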

How long should a decision session take?

A complete session covering all four roles in the Decision Thinking Partner framework takes 60–90 minutes. For major decisions, plan for two sessions — one initial, one follow-up after 24–48 hours. The second session often looks quite different once your thinking has had time to settle.


Should I use AI during the decision or only after it?

Both are legitimate. During the decision, AI helps clarify your thinking and stress-test your reasoning. After the decision, AI is useful for scenario simulation (mapping what implementation will require), assumption tracking (checking which of your planning assumptions hold up), and decision journaling (reflecting on what you’ve learned).
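
Assumption tracking in particular works with or without AI; the core discipline is writing each planning assumption down with a date to revisit it. A minimal sketch in Python (the fields and example entries are hypothetical, not part of any formal framework):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Assumption:
        claim: str            # what you believed when you decided
        check_by: date        # when to revisit it
        status: str = "open"  # open / held / broke

    # Hypothetical entries for a job-change decision.
    log = [
        Assumption("The new role is mostly product work, not firefighting",
                   date(2026, 3, 1)),
        Assumption("Savings cover six months if the startup folds",
                   date(2026, 1, 15)),
    ]

    for a in log:
        if a.status == "open" and date.today() >= a.check_by:
            print(f"Review: {a.claim}")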

What if the AI’s responses feel generic?

Add specificity to your prompts. AI produces generic output when it receives generic input. Instead of “I’m deciding whether to change careers,” try “I’m a 38-year-old software engineer with 12 years of experience in fintech, considering a move to a product management role at a health tech startup — the offer is X compensation, the team is Y size, and my main concern is Z.” The more context you provide, the more specific the output.


About the Research and Theory

Is there actual research behind this, or is it just theory?

The framework draws on several well-established research traditions:

Daniel Kahneman’s dual-process theory (System 1/System 2) provides the basis for why structured deliberation outperforms intuition on novel decision types. The evidence here is robust — decades of replicated research across cognitive psychology and behavioral economics.

Dan Gilbert’s work on affective forecasting documents systematic errors in predicting future emotional states — the “impact bias” and the “durability bias” — which argues for caution about trusting the felt sense of which option “seems better.” Gilbert’s findings have been replicated across multiple studies and contexts.

Gary Klein’s naturalistic decision-making research shows that expert intuition is domain-specific — reliable in familiar domains, unreliable in novel ones. Most major life decisions are structurally novel.

Thomas Gilovich and Victoria Medvec’s research on the temporal pattern of regret provides the empirical basis for the regret minimization approach.

The evidence for AI specifically as a decision aid is thinner — this is an active research area with promising early findings, but we’re extrapolating from established decision aid research rather than pointing to AI-specific outcome studies.

Is the regret minimization framework really evidence-based?

Yes, with appropriate caveats. The research on long-horizon regret — showing that regrets of omission (paths not taken) come to dominate regrets of commission (missteps along paths taken) over longer time scales — is well-replicated. Jeff Bezos’s formulation is a practitioner application of that research, not a novel insight.

The caveat: this is an aggregate finding. Individual regret profiles vary. And regret minimization should not override practical constraints, responsibilities to others, or genuine risk analysis. It’s one input, not a trump card.

What about ego depletion and decision fatigue? Should I time my decision sessions?

Decision fatigue — the degraded quality of decisions made after a long sequence of choices — is a real phenomenon, though Roy Baumeister’s original ego depletion model has had replication difficulties. The practical upshot holds: don’t run major decision sessions when you’re cognitively depleted. Schedule them in the morning or after genuine rest, and don’t make major commitments immediately after high-cognitive-load periods.


About AI’s Limitations

What does AI genuinely not know about my situation?

Quite a lot, unless you tell it. AI doesn’t know: your specific relationships and their dynamics, the actual culture of the organization you’re considering, your precise financial picture, the history behind your current emotional state, the quality of the people involved in your decision, or the accumulated tacit knowledge from your lived experience. All of these can be partially described in text — but the gap between description and knowledge remains.

This is why the rule “be as specific as possible” matters, and why AI-assisted decision-making complements rather than replaces the counsel of people who actually know you.

Can AI predict what will make me happy?

No — and this is one of the most important limitations to hold clearly. Dan Gilbert’s research on affective forecasting shows that humans are unreliable predictors of their own future emotional states. AI, which has no access to your specific psychological architecture and is pattern-matching from training data, is even less reliable. Any AI response that confidently asserts “this choice will make you happier” should be treated skeptically.

What AI can do is help you think about your values and what you’ve historically found meaningful — which is a different and more tractable question than predicting future happiness.

What if AI’s devil’s advocate arguments are wrong for my specific situation?

They sometimes are. AI generates the strongest general case against your type of decision, which may include arguments that don’t actually apply to your circumstances. Your job is to evaluate each argument against your specific situation, not accept it wholesale.

The value of the devil’s advocate prompt is not that AI’s arguments are necessarily correct. It’s that they force you to engage explicitly with the opposing case rather than merely acknowledging it exists.

Is there a risk of overthinking? Can AI sessions make decisions harder?

Yes. For people already prone to analytical paralysis, adding more structured analysis can entrench indecision rather than resolve it. Signs that this is happening: you’re in your third AI session on the same decision without any of the core questions resolving; you’re running the same roles repeatedly and getting the same outputs; you’re using AI sessions as a way to delay committing.

If you recognize this pattern, the right move is not more analysis. It’s setting a decision deadline, identifying the one or two genuinely unresolved questions, going to get that specific information, and returning to decide.


About Getting Started

What’s the simplest possible first step?

Open a conversation with any AI platform and type: “I’m facing a major decision I’ve been avoiding. Before we analyze anything, ask me questions to help clarify what I’m actually trying to decide and what I care most about.” Then describe the decision.

That’s it. The clarifying interview alone — before any analysis — often moves the decision further than days of private deliberation.

Should I do this alone or with my partner/family?

Both. AI sessions for major decisions work best as private individual sessions first — when your framing hasn’t been shaped by others’ reactions. After you’ve developed a clearer sense of your own thinking, bring that clarity to conversations with the relevant people in your life.

The combination is stronger than either alone: AI’s systematic structure plus the contextual knowledge of people who know you.

What if I’m still stuck after running the full framework?

Two possibilities. First: there may be information you don’t have yet that you need before you can decide. The adversarial pass or assumption check often reveals these gaps. Go get the information, then return.

Second: you may already know what you want but haven’t committed to it. If the framework hasn’t changed your lean, but you still won’t decide, the obstacle is probably not cognitive. It’s the discomfort of accepting what commitment requires — and no framework resolves that. That’s what the act of deciding is for.



Tags: AI for decisions FAQ, major life decisions, decision-making, AI thinking partner, life design

Frequently Asked Questions

  • Is AI reliable for major life decisions?

    AI is reliable as a thinking tool — for surfacing blind spots, stress-testing reasoning, and organizing information. It is not reliable as a decision-maker, because it lacks knowledge of your specific context, values, and lived experience. The reliability comes from the structure you apply, not from AI’s outputs.

  • What’s the most common mistake people make when using AI for decisions?

    Asking what they should do. This frames AI as an oracle and produces generic, pattern-matched output. Assigning AI a specific role — devil’s advocate, reversibility analyzer, regret minimizer — consistently produces more useful results.

  • Should I tell AI everything about my situation?

    The more relevant context you provide, the more specific and useful the output. You don’t need to share anything you’re not comfortable sharing — but generic inputs produce generic outputs. AI doesn’t store your conversation between sessions unless you explicitly use a memory feature.