The Stalled Founder: Applying Motivation Science to Get a Product Unstuck

A detailed case study of a B2B SaaS founder who applied SDT diagnostics and expectancy-value planning with AI to move from six weeks of stalled development to consistent forward motion.

Tariq had been building his B2B SaaS product for 14 months when he hit the wall. Not a dramatic collapse — no product failure, no co-founder departure, no customer crisis. Just a gradual erosion of momentum that, six weeks in, had left him spending most days on email, administrative tasks, and unfocused product research rather than shipping features.

He was not lazy. He cared about the product. He had customers who needed what he was building. By every external measure, the business was worth working on. But the work itself had become aversive in a way he could not articulate.

This case study documents how Tariq applied motivation science diagnostics through a series of AI planning conversations to identify the specific failure modes and move from stalled to consistently productive within three weeks.

Tariq is a composite case based on common patterns in early-stage founders. Identifying details are illustrative, not from a single individual.


The Situation Before the Intervention

Tariq’s product was a workflow automation tool for operations teams at mid-market companies. He had five paying customers, promising early retention data, and a clear roadmap. Objectively, the signals were positive.

His daily experience was different. Each morning, he would open his laptop with the intention of working on product development and find himself three hours later having answered emails, read competitor updates, and done everything except the core development work. The avoidance was not complete — he was still working — but he had drifted from the high-leverage work he knew mattered.

His first instinct was the standard productivity response: stricter scheduling, time-blocking, accountability commitments to his peer founder group. None of it worked for more than a few days.


The Diagnosis: Running the SDT Three-Need Check

Tariq’s first AI conversation was a structured application of the SDT autonomy, competence, and relatedness diagnostic.

The prompt he used:

“I’ve been avoiding my most important development work for six weeks. I want to figure out which of the three SDT needs — autonomy, competence, or relatedness — is most frustrated right now. Ask me ten questions that would help identify which need is the problem and why. Be direct about what my answers suggest.”

The conversation produced three clear findings.

On autonomy: Tariq had started the company to build a specific product he believed in. Over 14 months, the day-to-day work had gradually shifted. Customer support requests, investor updates, administrative tasks, and sales conversations had expanded to fill his calendar. By the time he recognized the stall, the work that felt genuinely chosen — product architecture, core feature development, design decisions — occupied less than 25% of his working time. The rest felt imposed: not by any particular person, but by the accumulated weight of running a company.

The autonomy need was significantly frustrated. He had not chosen to become primarily an operator. The role had accumulated around him.

On competence: The features he needed to build next were at the edge of his technical capability. He was not incompetent — he had shipped a working product. But the next phase required a data pipeline architecture he had never built, and he was facing it without the gradual skill-building feedback loops that had carried him through earlier development.

When a task requires skills you do not yet have, with no clear path to acquiring them quickly, expectancy collapses. His avoidance of the development work was not laziness but a competence-expectancy mismatch: he did not believe, with reasonable confidence, that he would succeed.

On relatedness: Tariq was almost entirely isolated in his work. His customers were satisfied but remote. He had no co-founder. His peer founder group met every two weeks, but the conversations stayed shallow. He had not had a substantive conversation about what he was building — the actual product decisions and tradeoffs — with anyone who understood the domain deeply.

The relatedness finding surprised him. He had not connected his low energy to the isolation because isolation had been the constant condition of founding. It did not feel like a new problem. But its cost had compounded.


Intervention 1: Reclaiming Autonomy Through Role Redesign

The first intervention targeted the autonomy finding. Tariq used AI to audit his weekly time allocation and identify which tasks could be deferred, delegated, or systematically batched to free up blocks for chosen work.

The AI conversation:

“Here is how I spent last week: [pasted time log]. Help me identify which of these categories are genuinely mine — things I chose and would choose again — versus things that have accumulated without me deliberately deciding to do them. Then help me figure out what to do with the accumulated ones.”

The conversation identified three categories:

  • Genuinely chosen: product development, customer calls (strategic, not support), founder thinking and strategy
  • Accumulated but necessary: investor updates, accounting, some operational tasks
  • Accumulated and deferrable: non-urgent support requests, competitor research that was producing no decisions, unsolicited introductions he kept following up on

The action: batch the accumulated-but-necessary work into a single two-hour block on Fridays, stop following up on unsolicited introductions, and protect three daily 90-minute blocks for product development.

The critical move was not scheduling. Tariq had tried scheduling. The critical move was the autonomy framing: he was explicitly reclaiming the work he had chosen to do. The same blocks, framed as “now I am doing the thing I actually chose this job to do,” activated different motivational machinery than “now I am doing the thing on the calendar.”
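The audit itself is mechanical enough to sketch in code. This is a minimal illustration only: the task names, hours, and category assignments below are invented for this case, not a general classifier or anything Tariq actually ran.

```python
# Illustrative sketch of the weekly time-allocation audit.
# All task names, hours, and category keywords are assumptions for
# this example, not data from the case.

TIME_LOG = [
    ("product development", 6.5),
    ("support requests", 5.0),
    ("investor update", 2.0),
    ("competitor research", 4.0),
    ("strategic customer call", 1.5),
    ("unsolicited intros", 2.5),
]

# The three categories the AI conversation surfaced.
CATEGORIES = {
    "chosen": {"product development", "strategic customer call"},
    "necessary": {"investor update"},
    "deferrable": {"support requests", "competitor research", "unsolicited intros"},
}

def audit(log):
    """Sum hours per category so drift back toward accumulated work is visible."""
    totals = {name: 0.0 for name in CATEGORIES}
    for task, hours in log:
        for name, tasks in CATEGORIES.items():
            if task in tasks:
                totals[name] += hours
    return totals

totals = audit(TIME_LOG)
print(totals)
```

Run weekly against a real time log, a tally like this makes re-expansion of accumulated work visible before it becomes another six-week drift.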


Intervention 2: Rebuilding Expectancy Through Decomposition

The competence-expectancy problem required a different approach. The data pipeline work that Tariq was avoiding was not a single task but a poorly specified project that his brain had pattern-matched as “very hard and uncertain.”

The intervention: decompose the project until each next action felt achievable.

The AI conversation:

“I need to build a data pipeline that does [specific description]. I have not done this before. I am avoiding starting because it feels overwhelming and I am not sure I can do it. Help me break this project into stages, where each stage teaches me what I need to know for the next one. Each stage should be completable in two to three days, and I should feel like I’ve made genuine progress at the end of each.”

The result was a six-stage decomposition:

  1. Build a minimal working version that does the core thing, badly, without optimizing anything
  2. Write down every specific technical question the minimal version raised
  3. Build solutions to those questions, one at a time, with explicit research time budgeted per question
  4. Integrate the solutions into the minimal version
  5. Test with one real customer’s data
  6. Optimize based on what the test revealed

Each stage had a clear definition of done. Stage one could begin immediately, with existing skills, and would produce something real. The project had gone from a wall to a staircase.

This is not a productivity trick. It is the expectancy-value model applied: the overall project had low expectancy. The first stage had high expectancy. High-expectancy first steps produce the small wins that rebuild expectancy for subsequent stages — the mechanism Teresa Amabile and Steven Kramer identified as the progress principle.
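The mechanism can be made concrete with a toy calculation. The multiplicative form (motivation as expectancy times value) is the standard textbook version of the model; the specific numbers below are illustrative assumptions, not measurements from the case.

```python
# Toy expectancy-value sketch. The numbers are illustrative assumptions.

def motivation(expectancy: float, value: float) -> float:
    """Multiplicative expectancy-value: if either factor is near zero,
    motivation for the task collapses regardless of the other."""
    return expectancy * value

# The whole pipeline project: value is high, but expectancy has collapsed.
whole_project = motivation(expectancy=0.15, value=0.9)

# Stage one ("build a minimal version, badly"): same value, high expectancy.
first_stage = motivation(expectancy=0.85, value=0.9)

print(f"whole project: {whole_project:.3f}")
print(f"first stage:   {first_stage:.3f}")
```

The value term never changed — Tariq cared about the project throughout. Decomposition worked on the expectancy term alone, which is why the first stage felt approachable while the undifferentiated project did not.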


Intervention 3: Addressing Relatedness Through Deliberate Connection

The relatedness intervention was the simplest to design and the one Tariq was most resistant to. His instinct was that relatedness was a soft problem compared to the structural autonomy and competence issues. The AI conversation pushed back:

“You’ve addressed the autonomy and competence problems. But the SDT research is clear that relatedness is not a bonus — it’s a need with the same status as autonomy and competence. What specific form would meaningful connection around this work take for you? Not networking. Not accountability. Actual connection to people who care about what you’re building.”

Two answers emerged: a domain expert in data infrastructure whom he had met once at a conference and never followed up with, and a potential customer who had expressed deep interest in the product’s direction and was clearly capable of a substantive conversation.

He reached out to both. The conversations were not accountability relationships. They were genuine exchanges about the work, conducted with people who had relevant knowledge and genuine interest.

Within a week, he noticed a shift in how the work felt. Not dramatically — SDT research does not promise transformation — but the experience of building something that someone knowledgeable found interesting changed the texture of the work.


Three Weeks Later

By week three, Tariq had shipped the first stage of the data pipeline, had a clear plan for stages two through four, and had returned to working primarily on product development rather than accumulated operational tasks.

He used Beyond Time to track his weekly allocation against the role-redesign targets, which helped him notice when accumulated tasks were re-expanding and intervene before another six-week drift.

The change was not motivation in the fuel-tank sense. It was structural: three needs that had been frustrated were each being addressed directly. The motivation that returned was not manufactured — it was the natural state when the conditions for it are present.


What This Case Illustrates About Motivation Science

Several things worth noting from this case:

The problem was not character. Tariq was not lacking discipline, grit, or purpose. He had a specific structural mismatch between his psychological needs and his working conditions. Diagnosing the mismatch correctly was the intervention.

All three needs mattered. Addressing autonomy alone would not have been sufficient. The competence-expectancy problem would have regenerated avoidance. The relatedness deficit would have continued to drain energy. SDT’s insistence on all three needs as equally important is borne out in cases like this.

AI facilitated but did not supply. The AI conversations helped Tariq ask the right diagnostic questions and design specific responses. But the autonomy experience came from reclaiming his work, the competence experience from actually making progress on the pipeline, and the relatedness experience from genuine human connection. AI can point at what is needed; it cannot substitute for it.

The change was durable. Productivity tricks fail. Structural changes grounded in what the motivation research actually says tend to hold, because they address the mechanism rather than the symptom.


Tags: motivation science, case study, founder productivity, Self-Determination Theory, AI planning

Frequently Asked Questions

  • How does SDT apply to founders specifically?

    Founders often score high on autonomy in theory — they chose the work — but score lower in practice because the demands of running a business rapidly crowd out the parts of the work they intrinsically care about. The competence need is frequently undermined by working at the edge of skills without adequate feedback loops. And relatedness often suffers because founding is isolating, particularly in the early stages.
  • What is the most common motivation failure mode for early-stage founders?

    Based on SDT research applied to entrepreneurial contexts, the most common pattern is not low value (founders usually care deeply about their work) but collapsed expectancy combined with autonomy erosion. The founder still cares about the outcome but no longer believes the current approach will get there, and the day-to-day work has drifted toward operational tasks that feel imposed rather than chosen.
  • Can AI tools actually help with motivation or just with task management?

    The distinction matters. AI tools used purely for task capture and scheduling are task management tools. AI tools used to facilitate diagnostic conversations — about why something stalled, which need is frustrated, whether the plan is realistic — are operating as motivation-science tools. The same AI can do both, but the framing of the conversation determines which function it serves.