Case Study: How One SaaS Founder Went from 1 to 4 Features Per Month with AI Planning

How Priya Nambiar, a B2B SaaS founder, restructured her planning system with AI and quadrupled her shipping velocity in nine weeks without working more hours.

Priya Nambiar had not shipped a meaningful product feature in six weeks.

This was not a technical problem. Priya, a former senior engineer at a Series B fintech, was building Docthread — a B2B SaaS product that helps legal ops teams manage contract review workflows. She knew how to build. The product was technically sound. The market was real.

The problem was time. Or more precisely, the problem was where her time was going.


The Situation Before

Priya ran Docthread solo for the first eight months, then hired one full-time engineer, Marcus, at month nine. Adding a team member changed the nature of her work in ways she had not fully anticipated.

Before Marcus: she spent roughly 60% of her time building, 30% talking to customers, and 10% on operational overhead. The calendar was sparse enough that she could protect long morning build sessions without active effort.

After Marcus: she was the bottleneck for almost every decision. Architecture reviews, PR reviews, product questions, priority calls, customer calls where technical depth was needed. The 1:1s, the async clarifications, the “quick 15-minute syncs” that reliably ran to 30 minutes. By month eleven, she was averaging three to four hours of interruption per day.

Shipping one feature per month had been a reasonable pace when she was solo and still learning the market. At month eleven, with a paying customer base of twelve companies and two enterprise prospects in a late-stage sales process, it was not enough.

“I knew the problem,” she said later. “I was spending all day helping Marcus and talking to customers and doing everything except the actual product work. But I had no system for making it stop.”


What the Calendar Audit Showed

Priya started her AI planning experiment by running a simple calendar audit — categorizing every meeting and time block from the previous two weeks into Build, Sell, and Operate.

The results were specific and uncomfortable.

Actual allocation (weeks 1–2 of audit):

  • Build: 18%
  • Sell: 29%
  • Operate: 53%

Three hours of her average day were going to internal coordination, PR reviews, brief clarification calls with Marcus, administrative tasks, and investor touchpoints. The 53% Operate figure was not the result of any single decision. It was the accumulated weight of dozens of small commitments, each individually reasonable, none individually significant.

The Sell figure was reasonably healthy given her stage — she was mid-funnel on two enterprise deals and needed to be present. The 18% Build figure was the problem. Eighteen percent of a nine-hour workday is roughly 97 minutes of actual product work, and little of it came as a single uninterrupted block.

Target allocation for her stage:

  • Build: 45%
  • Sell: 30%
  • Operate: 25%

The gap between actual and target Build time — 18% versus 45%, or 27 percentage points — came to roughly 2.4 hours of a nine-hour day. That was where the missing features were.
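
The audit arithmetic is simple enough to script once the blocks are labeled. Here is a minimal sketch in Python, assuming you have already exported and hand-categorized each calendar block; the sample entries are illustrative, not Priya's actual calendar:

```python
from collections import defaultdict

# Hand-labeled calendar blocks: (category, minutes).
# Sample data is illustrative only.
blocks = [
    ("Build", 45), ("Sell", 60), ("Operate", 30),
    ("Operate", 90), ("Build", 50), ("Operate", 15),
]

# Sum minutes per category, then report each as a share of the total.
totals = defaultdict(int)
for category, minutes in blocks:
    totals[category] += minutes

total = sum(totals.values())
for category in ("Build", "Sell", "Operate"):
    print(f"{category}: {100 * totals[category] / total:.0f}%")
```

The value is not the computation, which is trivial. It is the forced act of labeling every block honestly before the percentages appear.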


The Diagnosis: Three Root Causes

Running a follow-up AI prompt on the audit data surfaced three specific structural causes, not generic advice.

Root cause 1: No protected morning blocks.

Priya’s calendar had no recurring maker blocks. Marcus had learned, from evidence, that mornings were available for questions. His questions were not unreasonable. But each one cost Priya not just the time of the question but the re-entry cost of returning to deep focus afterward.

Root cause 2: Synchronous defaults for asynchronous problems.

Most of the internal coordination that was consuming Priya’s time did not require her real-time presence. Architecture questions could be answered via voice memo or written response. PR reviews could happen during a defined review window rather than immediately on request. The default was synchronous because synchronous was the path of least resistance — for Marcus, not for Priya.

Root cause 3: No decision-making rubric for Marcus.

Because Marcus had no clear framework for what he could decide independently versus what needed Priya, the default was to ask Priya. This was rational from Marcus’s perspective — better to check than to make a wrong call. But it created a coordination tax that fell entirely on Priya’s time.


The Interventions

Over three weeks, Priya implemented three changes, each directly addressing one root cause.

Intervention 1: The 9–12 maker block.

Priya blocked 9am–12pm Monday through Thursday as a recurring calendar event labeled “Product — no interruptions.” She communicated this to Marcus: during this window, she was unavailable for synchronous questions. Questions could go into a shared async document. She reviewed and responded at 12:05pm.

The first two weeks were imperfect. Marcus sent two Slack messages during the morning blocks. Priya answered them. But by week three, the pattern held. The habit had established itself.

Intervention 2: A weekly async architecture review.

Instead of fielding architecture questions throughout the week, Priya introduced a written async review: every Friday, Marcus sent a brief document describing any architectural questions or decisions he faced. Priya reviewed it Sunday evening and recorded a voice response. Technical questions that previously consumed three to four small synchronous interruptions per week were consolidated into one 25-minute async exchange.

Intervention 3: A simple decision rubric for Marcus.

Priya used an AI prompt to draft a one-page decision framework for Marcus — a rubric specifying which categories of decisions Marcus could make independently, which needed written async input from Priya, and which genuinely required synchronous discussion. The framework had four categories, each with two or three examples.

Creating this document took 40 minutes. Its effect was immediate: the volume of inbound synchronous questions dropped roughly 60% in the first week it was in use.
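
The case study does not reproduce the rubric itself, so the sketch below is entirely hypothetical: the category names, routing levels, and examples are invented to show the shape such a one-page framework might take, expressed here as data.

```python
# Hypothetical decision rubric, expressed as data for illustration.
# The categories, routing, and examples below are invented; the case
# study does not reproduce Priya's actual one-page document.
RUBRIC = {
    "implementation details": {
        "route": "decide independently",
        "examples": ["internal refactors", "library choice within the approved stack"],
    },
    "customer-facing behavior": {
        "route": "written async input",
        "examples": ["error message wording", "changing a default setting"],
    },
    "schema and public API changes": {
        "route": "written async input",
        "examples": ["new database columns", "modifying a public endpoint"],
    },
    "architecture and irreversible choices": {
        "route": "synchronous discussion",
        "examples": ["new service boundaries", "data migration strategy"],
    },
}

def route(category: str) -> str:
    """Return how a decision in the given category should be handled."""
    return RUBRIC[category]["route"]

if __name__ == "__main__":
    print(route("customer-facing behavior"))  # -> written async input
```

Whatever the exact categories, the mechanism is the same: the engineer checks the rubric before checking with the founder.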


The Results at Nine Weeks

Priya ran a new calendar audit at the nine-week mark to compare against the baseline.

Actual allocation (weeks 9–10):

  • Build: 41%
  • Sell: 33%
  • Operate: 26%

The Build figure had more than doubled, from 18% to 41%. The Operate figure had dropped from 53% to 26%. Sell had remained stable.

The product outcomes followed directly:

Before (months 8–10 baseline):

  • Features shipped per month: 1.1 (average)
  • Customer-requested features in backlog: 14
  • Engineering cycle time for medium feature: 3–4 weeks

After (weeks 9–10 measurement):

  • Features shipped per month: 4 (month 2 of new system)
  • Customer-requested features in backlog: 8
  • Engineering cycle time for medium feature: 1.5–2 weeks

The enterprise sales process, which had been stalled partly because a key integration feature was delayed, closed three weeks after the feature shipped.


What the AI Planning System Did (and Did Not Do)

It is worth being precise about the AI’s contribution in this case.

AI did not write code. It did not make product decisions. It did not directly ship features.

What AI contributed:

  1. The initial calendar audit, which made the allocation problem visible in numbers rather than feelings
  2. The root cause analysis prompt, which surfaced structural causes rather than symptoms
  3. The decision rubric draft for Marcus, which Priya reviewed and modified, and which she estimated saved her three hours compared to writing from scratch
  4. The weekly Triangle audit, which Priya ran every Sunday for the nine-week period to track her progress and catch any drift before it accumulated

The AI-assisted planning system took roughly 25 minutes per week to run. The time savings it produced — from reduced synchronous interruptions alone — were estimated at 2–2.5 hours per day.

The math is straightforward: roughly 25 minutes invested per week bought back ten or more hours across a five-day week. The harder part was the initial diagnosis: being willing to look honestly at where the time was actually going, rather than where it felt like it was going.


Using Beyond Time for Ongoing Triangle Monitoring

In week six of the experiment, Priya switched from running the Triangle audit manually to using Beyond Time, which pulled calendar data automatically and produced the Build/Sell/Operate breakdown without requiring her to paste in calendar blocks each week.

The primary value was consistency. The Sunday planning session dropped from 20 minutes to 8 minutes, which was enough of a reduction to eliminate the “I’ll do it later” temptation that had occasionally caused her to skip the weekly audit during high-pressure weeks.

The planning behavior that produced results — the 9–12 maker block, the async defaults, the decision rubric — was not dependent on any specific tool. What the tool provided was the weekly accounting that made drift visible quickly enough to correct before it accumulated.


The Most Important Number

Priya’s most important metric was not features per month, even though that was the outcome that mattered for the business.

The most important number was 41% — her Build allocation at week nine. Features per month is a lagging indicator. It reflects decisions made three weeks ago. Calendar allocation is a leading indicator. It tells you what results are coming.

A founder who sees their Build allocation dropping week over week can intervene before shipping velocity drops. A founder who only tracks shipping velocity learns about the problem after the pipeline has already been empty for three weeks.

That is the case for weekly planning data over outcome data alone: it gives you time to act.


Your action: Run a calendar audit on last week. Calculate your Build, Sell, and Operate percentages. If your Build allocation is below 35%, identify the single largest Operate time block and ask: could this be made asynchronous?
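
If you want to make that last step mechanical, here is a minimal sketch, reusing the hand-labeled block format from the audit sketch earlier; the entries are again illustrative:

```python
# Hand-labeled blocks: (category, description, minutes). Illustrative only.
blocks = [
    ("Operate", "weekly all-hands", 60),
    ("Sell", "prospect demo", 50),
    ("Operate", "PR review pings", 45),
    ("Operate", "investor update drafting", 90),
]

# Pick the single largest Operate block by duration.
largest = max(
    (b for b in blocks if b[0] == "Operate"),
    key=lambda b: b[2],
)
print(f"Largest Operate block: {largest[1]} ({largest[2]} min). Could it be async?")
```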


Tags: founder case study, AI planning for founders, founder productivity, maker time, shipping velocity

Frequently Asked Questions

  • Is this case study based on a real person?

    Priya Nambiar is a composite fictional character created to illustrate real patterns we see across founder planning transformations. The specific metrics, company details, and planning interventions are representative of outcomes achievable with the approach described, not a verbatim account of one individual. The planning workflows and prompts are real and applicable.

  • Can AI planning really increase shipping velocity?

    AI does not write code or make product decisions. What AI-assisted planning does is protect the time and cognitive conditions necessary for technical work to happen. The primary mechanism is simple: founders who ship more features typically have more uninterrupted maker time. AI planning's contribution is identifying what is consuming maker time, restructuring the calendar to protect it, and maintaining the weekly discipline to prevent calendar drift from re-establishing itself.

  • How long does it take to see results from AI-assisted founder planning?

    Most founders notice improvements in focus and daily outcomes within one to two weeks. Shipping velocity changes take longer to measure — typically four to six weeks before the pattern is clear enough to be meaningful. The case study describes results at the nine-week mark, which is a reasonable timeline for structural changes to show up in product outcomes.