From Vague to Actionable: A Prompt Engineering Case Study for Goal Setting

A detailed case study showing how one professional transformed a weak goal-setting process using the PROMPT Anatomy framework—including every prompt and output.

Every prompt, along with a summary of every response, is included below. The name and some details have been composited to protect privacy, but the prompts are real.


The Baseline: Where the Process Started

Layla is a product designer at a mid-size fintech company. She had been setting quarterly goals for two years—a mix of professional development and project targets—but found herself abandoning them by week five of each quarter. The goals felt real when she wrote them and abstract by the time she needed to act on them.

Her previous approach to AI-assisted goal setting:

Help me set good professional goals for Q4. I'm a product designer and want to grow in my career.

The output she received was, in her words, “completely reasonable and completely useless.” Five categories: improve your craft, build relationships, take on stretch assignments, document your work, seek feedback. All true. None specific. No action implied for Monday morning.

She had been blaming the AI.

The issue was the prompt.


Round 1: The Situation Audit (Before Any Goals)

The first change was to stop asking for goals immediately. Instead, she used an audit prompt to surface what was actually happening.

Prompt:

I want to set meaningful Q4 professional goals, but I'm going to do something different: before we set any goals, I want to understand my situation clearly.

Here's my context:
- Role: Senior product designer, 3 years in role, 5 total years in design
- Company: Mid-size fintech, ~300 people, Series C
- Current projects: Two in-flight: a consumer onboarding redesign (high visibility) and an internal tools redesign (lower visibility but technically interesting)
- Team: Works closely with two PMs, one engineering manager. Rarely interacts with the broader design team.
- Pattern I've noticed: I set goals each quarter that I abandon by week 5. The goals sound right but don't connect to anything I actually do.

Based on this, ask me 4 questions that would help you understand what's really going on before we set any goals. Make them the 4 questions most likely to change what goals you'd suggest.

What the AI asked:

The four questions were substantially better than anything Layla expected:

  1. When you say the goals “don’t connect to what you actually do”—what percentage of your work in a typical week is planned vs. reactive? And who is initiating the reactive work?

  2. You mentioned the internal tools project is lower visibility. Is low visibility a problem you’re experiencing practically (no one knows you’re doing good work) or a concern about future opportunities?

  3. Have your previous goals been outcome-based (something changes in the world) or activity-based (you do something)? What’s an example of each from last quarter?

  4. When you say “grow in my career”—what does the next role look like for you, and how far away does it feel?

These questions surfaced something Layla hadn’t articulated: she had been setting activity goals (attend more design crits, read two UX books per month) rather than outcome goals. And her “low visibility” concern was actually a concrete organizational problem—she had no relationship with the VP of Product, who made headcount decisions.


Round 2: Goal Generation with Full Context

After answering the four questions, she ran the goal generation prompt.

Prompt:

[Persona] I'm a senior product designer, 5 years experience, at a 300-person Series C fintech. My work is split roughly 60% reactive (responding to PM requests and design reviews) / 40% planned. The reactive work is largely driven by one PM who treats me as an embedded designer rather than a collaborator on direction.

[Resources] I have about 4 hours per week that I can direct intentionally. I have no budget for courses or conferences this quarter. I have a strong relationship with my immediate PM and EM but almost no relationship with the VP of Product, who makes headcount and promotion decisions.

[Objective] Generate 3 specific Q4 goals. Each goal should be an outcome (something that changes in the world) rather than an activity (something I do). Each should have a measurable 90-day outcome and a monthly checkpoint.

[Mode] Don't suggest goals that are entirely within my control in a vacuum. The most important goals for me are ones that require building visibility or relationship with people who matter for my career—even if that's uncomfortable.

[Parameters] 3 goals maximum. Q4 only (Oct-Dec). Professional domain. Each goal should be achievable in 4 hours/week of intentional focus.

[Tests] Before presenting, verify: Is each goal an outcome, not an activity? Is each goal achievable in 4 hours/week? Does at least one goal directly address the VP visibility gap? Revise any that fail.

Summary of what the AI produced:

The three goals were meaningfully different from the previous quarter’s:

  1. Visibility goal: Secure two instances of direct, substantive interaction with the VP of Product before Dec 31—either presenting work in a VP-level review or getting specific feedback on the onboarding redesign. (Monthly checkpoint: one VP touchpoint per month.) The AI flagged that “interaction” could be gamed, and proposed that the goal be specific about the nature of the interaction—it had to be substantive, not incidental.

  2. Ownership goal: Transition from embedded designer to design-direction collaborator on one current project by getting formal co-ownership of the design brief. Measurable: the EM and PM explicitly acknowledge her as the design-direction decision-maker for the project in a Slack channel or doc by Nov 15.

  3. Portfolio goal: Ship one piece of work this quarter with documented impact metrics—before/after numbers, not just a case study narrative. By Dec 31, her portfolio includes one project entry with quantitative outcomes.

All three goals were outcome-based, not activity-based. All three were measurable with explicit criteria. The first two directly addressed the organizational problems that her audit prompt had surfaced.


Round 3: The Pre-Mortem

Layla ran a pre-mortem on the goal she was most anxious about—the VP visibility goal.

Prompt:

Here's my Goal 1 from this session:

"Secure two instances of direct, substantive interaction with the VP of Product before Dec 31—presenting work in a VP-level review or receiving specific feedback on the onboarding redesign."

Run a pre-mortem. Assume it's January 5 and I failed to achieve this goal. List the 4 most likely reasons for failure, in order of probability, given everything I've told you about my situation. For each reason, give me one mitigation I can build into my plan in the next two weeks.

Summary of the AI’s output:

The four failure modes identified were:

  1. The onboarding redesign doesn’t reach a review milestone where VP attendance is expected (most likely)
  2. Layla waits for an organic opportunity instead of creating one (second most likely)
  3. She creates an interaction opportunity but the content isn’t substantive enough to be memorable
  4. The VP’s Q4 is dominated by a company priority that reduces availability

The mitigations were specific: identify in writing which Q4 design reviews the VP typically attends (this week), schedule a brief check-in with her PM to understand expectations for VP involvement on the onboarding project (this week), and prepare a 5-minute “design direction brief” she could offer to share if an opportunity emerged.

The key mitigation: stop treating this as a passive goal. The one action she could take this week was to ask her PM directly: “Is there a point in the onboarding project where we’d typically get VP design review input? I’d like to be there for that.”


Round 4: The Weekly Check-In Template

At the end of the session, Layla built a weekly check-in prompt she could reuse throughout the quarter.

Prompt:

My Q4 Goal 1 is: [insert goal]
My commitment this week was: [insert planned actions]
What actually happened: [insert actual actions and progress]

Tell me:
1. Whether this represents a planning problem (unrealistic plan), execution problem (knew what to do, didn't do it), or clarity problem (unclear what the next action was)
2. One specific change for next week—not "try harder," but a concrete adjustment to either the goal, the plan, or the environment

Keep it under 200 words.
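Because this check-in runs every week with only three fields changing, it is a natural candidate for a stored template. A sketch using Python's standard library (the function name and slot names mirror the bracketed placeholders above; everything else is illustrative):

```python
# Sketch: fill the weekly check-in template from three changing fields.
from string import Template

CHECKIN_TEMPLATE = Template("""\
My Q4 Goal 1 is: $goal
My commitment this week was: $commitment
What actually happened: $actual

Tell me:
1. Whether this represents a planning problem (unrealistic plan), execution problem (knew what to do, didn't do it), or clarity problem (unclear what the next action was)
2. One specific change for next week—not "try harder," but a concrete adjustment to either the goal, the plan, or the environment

Keep it under 200 words.""")

def weekly_checkin(goal: str, commitment: str, actual: str) -> str:
    # substitute() raises KeyError on any missing slot, so a half-filled
    # check-in never gets sent to the model by accident.
    return CHECKIN_TEMPLATE.substitute(goal=goal, commitment=commitment, actual=actual)
```

The weekly routine is then two steps: call the function with the week's three entries, and paste the returned string into a fresh chat.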

The value of the weekly check-in prompt is not just accountability—it’s diagnosis. Most goal failures aren’t execution failures. They’re planning failures (the plan was unrealistic) or clarity failures (the next action wasn’t concrete enough). The prompt forces that distinction, which changes the corrective response.


What Changed and Why

Layla ran this process at the start of Q4 and tracked it through December. By month three, she had achieved Goals 1 and 3 fully and made partial progress on Goal 2 (she got informal acknowledgment of design direction ownership but didn’t get the formal documentation she’d specified).

The post-quarter review she ran using the PROMPT Anatomy structure surfaced a pattern: she consistently achieves goals where the bottleneck is her own action and underachieves on goals where she needs someone else to change a process or documentation norm.

That finding became the input for her Q1 goal-setting session.

Beyond Time’s planning interface is built around structured goal-tracking that supports exactly this kind of quarterly iteration—if you’d rather work within a purpose-built environment than raw AI chat, it’s worth looking at beyondtime.ai.


Three Lessons from This Case Study

The audit prompt is the highest-leverage step. The questions the AI asked in Round 1 revealed the real problem—activity goals, invisible VP relationship gap—in a way that Layla’s own reflection hadn’t. Starting with goal generation skips the diagnostic work.

Constraints produce better goals than aspirations. The goals produced in Round 2 were better because they were calibrated to 4 hours per week and to the organizational context of a 300-person company at Series C. Unconstrained goal generation produces goals for an idealized version of your situation.

The pre-mortem changes the first week. The value of the pre-mortem prompt is not prediction—it’s action. The mitigation list gave Layla specific things to do in week 1, before the goal had any chance to become passive. This is what separates goals from wishes.


Your action for today: Run the Situation Audit prompt from Round 1 on your current domain. Don’t ask for goals yet. Just ask the AI to give you four questions that would change what goals it would suggest.

Tags: goal setting case study, prompt engineering, PROMPT anatomy, AI planning, professional goals

Frequently Asked Questions

  • What is the PROMPT Anatomy framework used in this case study?

    PROMPT Anatomy stands for Persona, Resources, Objective, Mode, Parameters, and Tests. It is a six-component structure for writing AI prompts that produce specific, calibrated goal-setting outputs.
  • How long does it take to run a PROMPT Anatomy goal session?

    The initial prompt takes 3-5 minutes to write. A full three-round iterative session—generation, evaluation, refinement—takes 15-20 minutes. The case study in this article used four rounds across two sessions.
  • Can this approach work for personal as well as professional goals?

    Yes. The case study focuses on professional goals, but the framework applies equally to health, learning, creative, and financial goal setting. The key is always specificity of context and constraints.
  • Do the prompts in this case study need to be used exactly as written?

    No. They're designed as templates. Replace the situational details with your own context. The structure matters more than the specific wording.
  • What made the biggest difference in the case study outcomes?

    Two things: adding real constraints to the Persona and Resources components, and adding the Tests component to force self-evaluation before the output was presented.