Research on Motivation and AI: What the Evidence Actually Shows

A research digest covering what empirical studies say about AI's impact on motivation, goal pursuit, and intrinsic engagement — including where the evidence is strong, where it is preliminary, and what questions remain open.

The intersection of AI and motivation is generating both enthusiasm and concern in productivity research communities. The enthusiasm comes from AI’s potential to reduce friction, support goal-setting, and provide personalized feedback at scale. The concern comes from possible risks to autonomy, skill development, and intrinsic motivation.

What does the actual evidence show?

What follows sorts the evidence into what is established, what is preliminary, and what remains substantively open. The honest answer involves more uncertainty than most technology commentary acknowledges.


The Evidence Base: What We Are Working With

Direct empirical research on AI planning tools and motivation is limited. The tools are new, longitudinal studies take years, and the field of AI-augmented productivity is still primarily in the phase of usage studies and short-term experiments rather than extended naturalistic research.

What we have instead is a body of adjacent evidence that can be applied carefully:

  • Automation and autonomy research: how task automation affects workers’ sense of agency and engagement
  • AI tutoring and learning: effects of AI-assisted learning on motivation and skill development
  • Feedback systems research: effects of AI-generated feedback on intrinsic motivation
  • Goal technology studies: research on apps and digital tools designed to support goal pursuit
  • The large AI productivity literature: studies on AI assistance and output quality, with some attention to engagement effects

Each requires inference to apply to AI planning tools specifically, but the inferences are grounded.


Finding 1: AI Assistance Improves Output but Raises Questions About Engagement

A 2023 study by Dell’Acqua and colleagues at Harvard Business School examined the effects of AI assistance on knowledge worker performance. Participants who had access to AI tools produced higher-quality work, faster, across the task domains studied. The productivity improvements were real and substantial.

But the study also documented a pattern that points to skill atrophy risk: participants who relied heavily on AI assistance performed worse when the AI was removed, suggesting that the assistance reduced the practice required to develop independent capability.

From an SDT and expectancy-value perspective, this raises a specific concern: if AI assistance reduces the experience of developing competence (because the AI is doing the competence-requiring work), the competence need may go unmet even as output quality improves. You can produce better work while becoming less capable of producing it without assistance.

This is not a reason to avoid AI tools. It is a reason to be deliberate about which tasks you want AI to assist with and which you want to develop independently. Competence experience comes from doing the hard parts — not from producing good outputs via AI assistance.

What this means in practice: Use AI to assist with tasks outside your core skill-development goals. For the work where you are trying to grow, use AI as a coach (help me understand what I am doing wrong, help me identify the next skill to develop) rather than as a producer (do this for me).


Finding 2: Goal-Tracking Technology Has Mixed Effects

A substantial research literature on goal apps and digital tracking tools suggests effects that are more mixed than their marketing implies.

Milkman and colleagues have documented that commitment devices and goal-tracking tools can improve follow-through, particularly for goals where the person has a clear intention but faces akrasia — doing something other than what they intended. This is a real benefit.

But research on goal visualization and tracking also shows a counterproductive pattern: detailed mental simulation of reaching a goal can reduce motivation to pursue it, because the brain experiences a partial satisfaction from the simulation itself. Oettingen’s WOOP research (Wish/Outcome/Obstacle/Plan) found that fantasy about success without contrasting it with present obstacles consistently reduced goal pursuit compared to implementation-intention-based planning.

The implication for AI-generated goal plans: AI tools that emphasize vision and progress narrative may be providing satisfying cognitive experiences that do not translate to action. Tools that convert goals into specific next actions with concrete obstacles pre-identified are more likely to produce the implementation-intention effects that goal-setting research supports.

What this means in practice: When using AI for goal planning, push toward specificity and obstacle pre-commitment rather than motivational narrative. “What is the first concrete action?” and “What will most likely prevent me from doing it?” are more useful AI prompts than “Help me visualize achieving this goal.”
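The prompting pattern above can be made concrete as a small template builder that follows Oettingen’s WOOP structure (Wish/Outcome/Obstacle/Plan) instead of asking for visualization. This is a minimal sketch; the function name and wording are illustrative, not taken from any particular AI tool.

```python
# Sketch of an implementation-intention prompt builder.
# The structure follows WOOP (Wish/Outcome/Obstacle/Plan); the exact
# wording here is an illustrative assumption, not a validated protocol.

def woop_prompt(goal: str) -> str:
    """Build an AI prompt that pushes toward concrete actions and
    pre-identified obstacles rather than motivational narrative."""
    return (
        f"My goal: {goal}\n"
        "1. What is the single first concrete action I can take this week?\n"
        "2. What obstacle is most likely to prevent me from doing it?\n"
        "3. Draft an if-then plan: 'If <obstacle occurs>, then I will <response>.'\n"
        "Do not describe how success will feel; focus only on actions and obstacles."
    )

print(woop_prompt("finish a draft of the methods section"))
```

The design choice is deliberate: the final instruction explicitly suppresses the outcome-fantasy framing that the goal-visualization research associates with reduced follow-through.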


Finding 3: AI Feedback Can Support or Undermine Intrinsic Motivation Depending on Framing

Research on feedback and intrinsic motivation consistently distinguishes two feedback types:

Informational feedback: feedback that provides information about performance quality and how to improve. This supports the competence need and has been shown to strengthen intrinsic motivation for tasks people find interesting.

Controlling feedback: feedback that emphasizes external standards, comparison to others, or performance relative to a threshold. This undermines intrinsic motivation by shifting the perceived locus of causality from internal to external.

AI can deliver either type. Productivity tools that tell you how many tasks you completed relative to yesterday, or rank your productivity against a metric, are providing controlling feedback. AI planning conversations that help you understand what made your best work sessions effective and how to replicate that are providing informational feedback.

The research on AI tutors in educational settings is instructive here. Studies on systems like Khan Academy’s AI tutor and similar platforms consistently find that AI feedback framed around growth and understanding outperforms AI feedback framed around performance metrics for long-term learning outcomes. This finding generalizes to motivation: informational AI feedback supports intrinsic motivation; evaluative AI feedback risks undermining it.

What this means in practice: When reviewing your work with AI, use prompts that generate informational feedback. “What worked well about how I approached that task, and what could I do differently next time?” produces different output than “Rate my productivity this week.” The former is informational; the latter activates external evaluation.
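The informational/controlling distinction can be applied as a quick self-check before sending a review prompt to an AI assistant. The toy heuristic below uses simple keyword matching; the marker lists are illustrative assumptions, not a validated instrument from the feedback literature.

```python
# Toy heuristic for screening your own review prompts. Substring
# matching is deliberately crude (e.g. "rate" would match "operate");
# it is a self-check sketch, not a classifier from the research.

CONTROLLING_MARKERS = {"rate", "rank", "score", "compare", "percentile"}
INFORMATIONAL_MARKERS = {"what worked", "how could", "what would", "understand"}

def feedback_framing(prompt: str) -> str:
    """Label a prompt 'controlling', 'informational', 'mixed', or 'unclassified'."""
    text = prompt.lower()
    controlling = any(word in text for word in CONTROLLING_MARKERS)
    informational = any(phrase in text for phrase in INFORMATIONAL_MARKERS)
    if controlling and informational:
        return "mixed"
    if controlling:
        return "controlling"
    if informational:
        return "informational"
    return "unclassified"

print(feedback_framing("Rate my productivity this week"))            # controlling
print(feedback_framing("What worked well about my approach today?")) # informational
```

A prompt that comes back "controlling" is a candidate for reframing toward process and improvement before you send it.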


Finding 4: Automation Can Undermine Autonomy Experience Even When It Preserves Choice

Research on automation and control in human factors psychology (a field studying how humans interact with automated systems) has documented a phenomenon called “automation complacency” and a related one called “loss of meaningful agency.”

The key finding: people can technically retain the ability to make choices while experiencing a loss of meaningful agency if automated systems make the choices obvious, constrained, or pre-determined. The mere presence of choice is not sufficient for the autonomy experience — the choice must feel genuinely open and consequential.

Applied to AI planning tools: if an AI generates a detailed daily schedule and you follow it, you have technically chosen to follow it. But the autonomy experience — which SDT research shows is what matters, not just behavioral choice — may be minimal. The plan feels like the AI’s plan, not yours.

This is a design problem as much as a usage problem. AI planning tools that present outputs as recommendations to be modified rather than prescriptions to follow, that ask for your judgment before finalizing a plan, and that explicitly frame planning as the user’s decision are doing something motivationally different from tools that generate and present complete plans.

What this means in practice: After any AI-generated plan, spend 60 seconds explicitly modifying at least one element — not because the modification improves the plan, but to activate the experience of authorship. An AI-generated plan you have consciously adopted as yours is motivationally different from one you have passively accepted.


Finding 5: Social Comparison Features Reliably Reduce Intrinsic Motivation for Non-Competitive Tasks

Research by Lam and colleagues, and related work in educational motivation research, consistently shows that social comparison features (leaderboards, rankings, comparisons to averages) reliably improve performance in explicitly competitive contexts and reliably undermine intrinsic motivation for collaborative or individual skill-development tasks.

Many AI productivity tools include social proof elements, streak comparisons, and community performance displays. These design choices have known motivational effects. For users trying to develop competence in complex, long-horizon work — writing, coding, strategic thinking — these features are likely to activate the external regulation end of the SDT motivation continuum.

The research implication is direct: disable or ignore social comparison features in AI planning tools when using them for complex knowledge work that you care about intrinsically. They are not neutral decorations.


What the Research Does Not Yet Tell Us

Several important questions remain open:

Long-term effects of AI-assisted planning on autonomous goal-setting capability. Does regular use of AI for goal planning and decision-making impair the development of independent planning skills? Short-term studies suggest caution; longitudinal evidence is limited.

Individual differences in AI and motivation interactions. Most studies report average effects. There is preliminary evidence that people with higher need for autonomy are more susceptible to the autonomy-undermining effects of highly prescriptive AI tools, while people with lower baseline self-regulation benefit more from AI structure. The field has not yet characterized these interactions systematically.

The relatedness substitution question. SDT identifies relatedness as a core need, and AI cannot fully satisfy it. But can AI tools do anything useful for relatedness — by helping people identify who their work serves, or by facilitating connections with others? Some evidence from AI coaching research suggests that AI conversations can improve social goal clarity, if not social connection itself. This is an active area.

Whether AI planning tools produce the same performance effects in field settings as in controlled studies. Most productivity AI research uses short, controlled tasks. Complex knowledge work over months or years is a different environment, and transferring findings across that gap requires caution.


Holding the Uncertainty Productively

The research on motivation and AI supports some design principles with reasonable confidence (informational over controlling feedback, autonomy-supportive framing, goal specificity over goal visualization) and leaves others genuinely open.

The appropriate response to this state of evidence is not to avoid AI planning tools but to use them with the theory in view: as tools that can support the conditions for intrinsic motivation when used carefully, and that risk undermining those conditions when used carelessly.

The question to keep asking: is this AI interaction supporting my sense of autonomy, competence, and connection — or is it replacing one of those experiences with a substitute?

If you can answer that question honestly and regularly, the research is doing what applied science is supposed to do.



Tags: motivation research, AI and motivation, Self-Determination Theory, intrinsic motivation, productivity science

Frequently Asked Questions

  • Is there direct research on AI tools and motivation?

    Direct research specifically on AI planning tools and motivation is limited — the field is young. Most relevant evidence comes from three adjacent areas: research on automation and autonomy (how offloading tasks to systems affects motivation), AI tutoring and learning motivation research, and studies on feedback systems and intrinsic motivation. The findings are applicable but require inference across contexts.
  • Does AI assistance with tasks reduce intrinsic motivation for those tasks?

    Possibly, under specific conditions. Research by Dell'Acqua et al. (2023) on AI assistance in professional contexts found performance improvements but also raised questions about skill development and engagement. The overjustification effect literature predicts that replacing autonomous effort with AI-generated outputs could reduce intrinsic interest in tasks the person previously found engaging. This remains an active empirical question.
  • What does the evidence say about AI-generated feedback and motivation?

    Feedback quality matters more than source. Research on feedback and motivation (consistent with competence need in SDT) shows that informational feedback — feedback that helps you understand your performance and how to improve — supports intrinsic motivation, while controlling feedback undermines it. AI can deliver informational feedback at scale, which is a genuine motivational benefit — provided the feedback is accurate and process-focused rather than purely evaluative.