Hard decisions carry a specific kind of discomfort. They resist resolution. You can think about them for days and still feel uncertain. And there’s a point in every prolonged decision process where you become tired of your own indecision and start looking for someone — or something — to just tell you what to do.
AI is available for that request at any hour. It will answer it. And that’s exactly the problem.
What Actually Happens When You Ask AI to Decide
When you frame a prompt as “which should I choose?” or “what’s the right decision here?”, you’re asking AI to do something it’s fundamentally unsuited for.
AI doesn’t know what you value. It doesn’t know your history, your relationships, your risk tolerance, your specific fears, or the texture of the life you’re actually living. It has no stake in your outcome. It can’t weigh the considerations that are invisible in text — the gut feeling, the tired feeling, the fear that’s been running below the surface for months.
What AI can do is generate a coherent, well-structured response that sounds like considered advice. It will identify pros and cons. It will apply general principles. It will often be technically accurate about the general landscape of the decision type. And it will present all of this with a confidence and fluency that can feel authoritative.
The problem is that the output is substantially generic. It reflects patterns in training data, not analysis of your specific situation. Two different people with superficially similar situations but very different values and contexts would receive responses that are largely interchangeable — and both would come away feeling that the AI “understood” their dilemma.
That’s not understanding. It’s pattern completion.
The Affective Forecasting Problem
Even setting aside AI’s limitations, the underlying intuition that someone else should resolve your hard decisions has a deeper problem.
Harvard psychologist Dan Gilbert spent decades studying affective forecasting — our ability to predict how future events will make us feel. His findings, documented in Stumbling on Happiness, are consistently humbling: humans are poor at this. We overestimate the emotional impact of both positive and negative outcomes. We overestimate how long feelings will last. We overestimate how much a particular choice will determine our happiness.
This means that the felt urgency driving you toward a decisive answer — the need to finally know which choice will make you happy — is based on a question your own brain can’t reliably answer. The emotional outcome of a major decision is less determined by the choice itself than by how you engage with its consequences.
Gilbert calls this the psychological immune system: humans have a remarkable capacity to rationalize, adapt, and find meaning in outcomes they didn’t choose. This doesn’t make choices irrelevant. But it does suggest that the gap between the best and second-best option is usually smaller than it feels from inside the decision.
The corollary for AI: if your own forecasting about your future emotional states is unreliable, AI’s forecasting about them — built on generic pattern-matching rather than knowledge of you — is even less reliable. “This choice will make you happier” is not a claim any AI should make with confidence.
The Self-Knowledge That Comes From Hard Decisions
There’s another argument against outsourcing major decisions that doesn’t get made often enough: the process itself is valuable.
Working through a major decision — sitting with the discomfort, confronting the trade-offs, understanding what you’re willing to lose and what you’re not willing to live without — produces self-knowledge. That knowledge doesn’t disappear after the decision is made. It informs the next decision, and the one after that.
People who consistently outsource their hard choices to others — whether to advisors, partners, or AI — tend to have a weaker relationship with their own values and preferences. They become less practiced at the act of judgment. They’re also more likely to second-guess themselves later: when an AI-recommended path runs into difficulties, it’s easy to doubt the choice in a way that wouldn’t happen if you’d owned the reasoning.
The discomfort of a hard decision is, in part, the feeling of your value hierarchy under pressure. Avoiding that discomfort also means avoiding the clarity it produces.
What AI Should Do Instead
The right relationship between AI and major decisions is not oracle to supplicant. It’s thinking partner to decision-maker.
Specifically, AI is most valuable when it’s assigned a constrained role rather than an open mandate.
As devil’s advocate: AI’s job is to argue the strongest possible case against your current lean. It doesn’t decide — it challenges. You decide what to do with the challenge.
As a question-generator: Before any analysis, AI asks you the questions that would clarify your actual priorities. The questions are often more valuable than the answers.
As a historical pattern surfacer: AI identifies what people in structurally similar situations have typically experienced, regretted, and found important in retrospect. It’s not predicting your future — it’s offering base rates for your consideration.
As a reversibility mapper: AI helps you think carefully about which aspects of a decision are genuinely irreversible, and which feel permanent but aren’t. This brings the perceived stakes back in line with the actual ones.
As a regret minimizer: AI helps you project to a future vantage point and ask which choice you’d be more likely to grieve. Not to get an answer — to get a temporal perspective that short-term anxiety obscures.
Notice that none of these roles involve AI making a determination. In every case, AI is generating input for your reasoning. You remain the one integrating that input, weighing it against your experience and values, and making the call.
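To make the constrained-role idea concrete, here is a minimal sketch of the devil’s-advocate pattern as a prompt template. The template wording and function names are illustrative assumptions, not a tested recipe; the point is simply that the instructions explicitly forbid the model from recommending a choice, so you get the challenge without the verdict.

```python
# A minimal sketch of the devil's-advocate role as a prompt template.
# Everything here (template wording, function names) is illustrative;
# adapt it to whatever model or chat interface you actually use.

DEVILS_ADVOCATE_TEMPLATE = """You are a devil's advocate, not an advisor.

My decision: {decision}
My current lean: {lean}

Argue the strongest possible case AGAINST my current lean.
Rules:
- Do not tell me what to choose or recommend either option.
- Do not soften the case with "but ultimately it's up to you" hedging.
- End with the three questions your argument suggests I haven't answered.
"""

def devils_advocate_prompt(decision: str, lean: str) -> str:
    """Build a prompt that constrains the model to challenging, not deciding."""
    return DEVILS_ADVOCATE_TEMPLATE.format(
        decision=decision.strip(),
        lean=lean.strip(),
    )

if __name__ == "__main__":
    print(devils_advocate_prompt(
        decision="Leave my stable job to join an early-stage startup",
        lean="I'm about 70% toward taking the startup offer",
    ))
```

The constraint is doing the work here: because the template rules out a recommendation, the strongest output the model can produce is a counterargument plus open questions, which is input to your reasoning rather than a substitute for it.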
A Note on the Seductiveness of AI Confidence
There’s a specific hazard worth naming: well-written AI outputs can feel more authoritative than they are.
When an AI response is eloquent, well-structured, and internally consistent, it activates credibility signals in the reader. It sounds like someone who knows. This is a feature of language model outputs that has nothing to do with accuracy or appropriateness to your specific situation.
The practical defense is simple: treat AI outputs as input to your thinking, not as conclusions. Read them the way you’d read a thoughtful opinion from someone who doesn’t know you well — with genuine interest, but without surrendering your judgment.
The question is never “does this output sound convincing?” It’s “does this argument apply to my actual situation, given what I know that AI doesn’t?”
When AI Advice Is Appropriate
This isn’t a case against using AI for any decisions. For decisions that are low-stakes, easily reversible, and don’t involve your core values — choosing a project management approach, picking a tool, formatting a document — AI recommendations are perfectly reasonable. The overhead of structured analysis far exceeds the benefit.
The principle is proportionality: the more a decision involves your values, relationships, career trajectory, or financial stability, the more you need to own the reasoning rather than delegate it.
Major life decisions are the mechanism by which you build your life. They deserve your best thinking. AI can make that thinking more rigorous. It can’t do the thinking for you — and you shouldn’t want it to.
Frequently Asked Questions
Is it ever OK to let AI make a decision for you?
For low-stakes, easily reversible choices — which book to read next, which format to use for a document — AI recommendations are fine. For decisions involving your values, relationships, career, or finances, AI should assist your reasoning, not replace it.

What’s wrong with asking AI ‘what should I do’?
When you ask AI to make a decision, it generates a plausible-sounding response based on patterns in its training data. It doesn’t know your actual values, your specific context, or the tacit knowledge embedded in your lived experience. The output can feel authoritative while being essentially generic.

What’s the difference between AI-assisted decision-making and AI deciding?
In AI-assisted decision-making, you use AI to improve the quality of your reasoning — surfacing blind spots, stress-testing your thinking, exploring implications. The judgment call remains yours. AI deciding means you’re treating AI’s output as the answer, substituting its pattern-matching for your considered judgment.