How One Person Used AI to Navigate a Career Pivot: A Case Study

A detailed walkthrough of how Soren — a senior product manager — used the Decision Thinking Partner framework to evaluate a major career change, with the actual prompts and what each one revealed.

Soren had been a senior product manager at a B2B software company for six years. He was good at it. He was compensated well. He was bored out of his mind.

The offer on the table was from an early-stage startup — CPO role, meaningful equity, a domain he found genuinely interesting. But it was also 40% lower base salary, no established team, and a founding team he’d only had two calls with. His partner was supportive but worried. His friends had opinions. His manager had made noises about a promotion.

He’d been turning it over for three weeks without getting anywhere.


Baseline: Where He Was Before the First Session

Soren’s initial framing, as he described it: “I think I should take it, but I’m scared, and I can’t tell how much of the fear is rational versus just risk aversion.”

This is a common starting position: a lean buried under anxiety, with the two things — the considered judgment and the emotional response — not yet separated.

His first attempt at structured thinking was a pro/con list. He’d made one, shared it with his partner, and found that they both just kept adding items without any of them feeling decisive. The list had no weighting, no framework, and no way to adjudicate between competing considerations.
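For readers curious what the missing weighting would look like mechanically, here is a minimal sketch of a weighted pro/con comparison. The criteria, weights, and scores are hypothetical illustrations, not Soren's actual list:

```python
# Hypothetical weighted comparison: each criterion gets an importance
# weight (1-5) and a score per option (-2 = strongly against, +2 = strongly for).
criteria = {
    # criterion: (weight, startup_score, stay_score)
    "domain interest":  (5,  2, -1),
    "base salary":      (4, -2,  1),
    "team familiarity": (4, -1,  2),
    "growth potential": (3,  2,  0),
}

def weighted_total(option_index):
    """Sum weight * score for one option (0 = startup, 1 = stay)."""
    return sum(weight * scores[option_index]
               for weight, *scores in criteria.values())

print("startup:", weighted_total(0))
print("stay:", weighted_total(1))
```

Even this toy version forces the two decisions an unweighted list avoids: how much each item matters, and how strongly each option satisfies it. The numbers are arbitrary; the discipline of assigning them is the point.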

He’d also asked a friend who’d made a similar move what they thought. The friend said: “Just go for it.” This was not useful.


Session 1: The Structured Interview

Soren opened his first AI session with a prompt he’d pulled from this site:

I'm facing a major career decision and I want to think it through carefully. Before we analyze anything, interview me about the decision. Ask questions to help clarify my actual priorities and what I'm taking for granted.

The AI’s first questions:

  • What specifically is boring you about your current role — the work itself, the domain, the team dynamics, or the level of impact you feel you have?
  • When you imagine the startup role going well at 18 months, what does that specifically look like?
  • What’s your financial floor — the actual minimum monthly income your household needs to function, versus what you’d like to have?
  • What does your partner’s worry specifically focus on? Is it the income reduction, the uncertainty, or something else?
  • Have you ever left a stable situation for something riskier before? What was that like?

Soren spent 40 minutes answering these. Several answers surprised him.

He realized that what he was bored by wasn’t the product management work itself — it was the specific domain (logistics software) and the company’s risk aversion. He’d been conflating domain boredom with career boredom, which mattered because the startup offer was in a domain he found intellectually interesting.

He also discovered, articulating it for the first time, that his financial floor was considerably lower than the offer’s base salary. The 40% cut felt alarming as a percentage, but it still cleared his household’s actual needs with margin.

What this phase produced: Separation of emotional framing from factual baseline. Soren discovered that two of his primary objections — domain fit and financial floor — were actually non-issues once he stated them precisely.


Session 1: Devil’s Advocate

After the interview, Soren ran the adversarial prompt:

I'm currently leaning toward taking this startup offer. I want you to make the strongest possible case against doing this — not generic startup risk warnings, but the most substantive, uncomfortable arguments specific to what I've told you.

The AI’s response included several points, but one stopped him cold: the founding team.

He’d had two calls with the CEO and CTO. Both went well. But he hadn’t spoken with the other two co-founders. He had no clear picture of the decision-making culture. He didn’t know if the CEO-CTO dynamic was functional under pressure. He’d been measuring the opportunity against his current job’s boredom, not against a fully characterized picture of what the opportunity actually was.

The AI framed it: “You’re evaluating a six-figure compensation reduction and major career risk based on roughly three hours of interaction with the team. That’s not enough data to evaluate the thing that will most determine whether this is a good move — whether you can work effectively with these people under stress.”

Soren called this “the moment the whole decision reframed.”

What this phase produced: Identification of the actual unresolved question. The decision wasn’t “startup vs. stable job.” It was “do I have enough information about this specific team to make a sound judgment?”


Between Sessions: Filling the Gap

Soren spent a week doing targeted information-gathering. He requested a call with both co-founders he hadn’t met. He asked the CEO for an introduction to one former employee who’d left the company. He had a direct conversation with his partner about the financial floor rather than a vague discussion about “risk.”

He returned to the AI for a second session only after he had more information.


Session 2: Reversibility Analysis and Regret Minimization

With the new information, Soren ran a reversibility analysis:

Assuming I take this offer, if it turns out to be wrong at 18 months, walk me through the reversibility spectrum. What can I undo? What would be hard to reverse? What would be permanently closed?

The key insight: the career trajectory was substantially more reversible than he’d assumed. Senior PM roles in B2B software are structurally plentiful; a CPO title at a startup, even an early-stage one, would not hurt future employment prospects. Financial savings required six months of disciplined spending adjustment — recoverable. The startup equity had value contingent on outcomes he couldn’t control, but its downside was simply zero — not negative.

The genuinely irreversible element, he realized, was the promotion conversation at his current company. If he declined the startup offer but the promotion didn't materialize, he'd be in the same role, knowing he'd passed on something, with his relationship to his manager subtly strained.

Then the regret minimization prompt:

Imagine I'm 72 looking back. From that perspective, which path am I more likely to grieve not having tried — and what risks that feel large now would look quite small from there?

The AI’s observation, drawing on what Soren had shared: “The salary reduction that feels significant at 36 is, from a 72-year-old’s vantage point, roughly two years of above-average but bounded compensation growth. The question is whether you grieve not having tried the role that actually interested you.”

What this phase produced: Recalibration of stakes. The financial risk was real but bounded and time-limited. The professional risk was largely positive even in failure scenarios. The emotional risk of not trying — the regret of omission — was the largest actual risk.


Version 1 Failure and Redesign: What Almost Went Wrong

Soren was ready to accept after the second session. Then his partner raised a concern he hadn’t surfaced in the AI conversations: they’d been discussing having a child in the next two years, and neither of them had thought about how a base salary cut would interact with parental leave and childcare costs in that scenario.

He went back for a third session, specifically focused on this:

I've been analyzing this decision without factoring in a specific constraint: we're planning to have a child in the next 18–24 months. Help me think through how this changes the reversibility and risk profile of the startup offer.

This session produced a much more nuanced picture of the financial timeline, the specific phase when risk would be highest, and what financial buffer would need to be in place before taking the offer to make the constraint manageable.

What this phase produced: The decision didn’t change. But the implementation conditions did. Soren negotiated a slightly higher base salary and a delayed start date to allow for additional savings runway.


Stable State: The Outcome

Soren accepted the offer. The process took approximately ten days from first session to commitment, with one week of targeted information-gathering between sessions.

He didn’t credit AI with making the decision. He credited it with clarifying the actual decision — which turned out to be different from the one he thought he was making.

The central move: the adversarial prompt revealed that his lean was based on insufficient information about the team. Filling that gap (the calls, the former-employee conversation) gave him the basis for genuine judgment. Everything after that was working through the risk calculus clearly.


Connecting the Decision to the Plan

Once Soren accepted, the challenge shifted from deciding to implementing. He needed a structured ramp plan: 90-day goals for his first quarter, a priority map for team-building, a monthly financial review to track against the adjusted budget.

He used Beyond Time to convert those high-level goals into a weekly structure. The transition from “I’ve decided” to “here’s what I’m doing this week” is where most implementation plans dissolve. Having the planning layer connected to the decision’s implications kept the execution concrete.


Three Lessons From Soren’s Process

1. The decision you think you’re making is often not the actual decision. Soren thought he was deciding “startup vs. stability.” He was actually deciding “do I have enough information about this team to make a sound judgment?” The AI-assisted process revealed the real question.

2. The adversarial prompt surfaces the thing you’re avoiding. In every case study we’ve documented, the devil’s advocate pass produces at least one insight the person had been skirting. For Soren, it was the team due diligence gap. For others, it’s a financial assumption they haven’t verified, a relationship impact they haven’t discussed, or a dependency they haven’t accounted for.

3. Information gaps are decision inputs. When the adversarial pass reveals that you don’t know something you need to know, the right response is not more analysis — it’s going to get the information. AI sessions are most valuable when they’re interrupted by real-world information-gathering.


Tags: career pivot, AI decision making, case study, life design, major life decisions

Frequently Asked Questions

  • Is this case study based on a real person?

    Soren is a composite persona based on patterns common to knowledge workers navigating career pivot decisions. The prompts, reasoning, and framework steps are representative of how the Decision Thinking Partner process plays out in practice.
  • How long did the full decision process take?

    Soren ran three AI sessions over roughly ten days, with time between sessions for new information-gathering. The total structured AI time was roughly three hours. The decision itself came about ten days after the first session.
  • What was the most valuable part of the AI-assisted process?

    Soren identified the adversarial prompting as highest-value — specifically because it surfaced a concern he'd been actively avoiding, which turned out to be the central issue he needed to resolve before committing.