A surprising amount of what circulates as “goal science” is either misattributed, overstated, or stripped of the conditions that make it meaningful.
This isn’t a minor issue. When someone applies a distorted version of a finding — believing that writing a goal makes it 42% more likely to happen, or that habits take 21 days, or that SMART goals are the scientifically validated approach — they’re often putting faith in a method that doesn’t do what they think it does. Then, when it fails, they assume the failure is personal.
Here are the five most commonly misread findings in goal science, and what the research actually says.
Misread 1: “Writing Down Goals Makes You 42% More Likely to Achieve Them”
You’ll find this statistic in blog posts, LinkedIn carousel slides, and productivity books. It usually traces back to Gail Matthews.
What the study actually showed: Matthews, a psychology professor at Dominican University of California, conducted a study published in 2015. She divided 267 participants (self-selected professionals who responded to a recruitment email) into five conditions based on how they engaged with their goals: thought about but didn’t write them; wrote them; wrote them and formulated action commitments; sent their goals and commitments to a friend; and sent weekly progress reports to a friend.
The group that wrote down their goals and used action commitments plus weekly accountability achieved significantly more than the group that only thought about goals. The 42% figure reflects the comparison between these groups.
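To see what that comparison does and doesn’t say, here’s the arithmetic behind a “42% more” headline, sketched in a few lines of Python. The group scores are hypothetical, chosen only to show how a relative difference between two group means becomes a headline percentage; they are not Matthews’s actual data.

```python
# Hypothetical group means on a goal-achievement rating scale.
# Illustrative numbers only -- not Matthews's actual data.
thought_only = 4.3    # goals thought about but never written down
full_protocol = 6.1   # written goals + action commitments + weekly accountability

relative_gain = (full_protocol - thought_only) / thought_only
print(f"Relative difference: {relative_gain:.0%}")   # -> 42%

# The same gap, stated absolutely, sounds much less dramatic:
print(f"Absolute difference: {full_protocol - thought_only:.1f} scale points")
```

The relative framing sounds far more impressive than the absolute one, even though both describe the same gap; any “X% more likely” claim is only as meaningful as the comparison behind it.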
Why it’s typically overstated: The study wasn’t comparing “writing goals” to “not having goals at all.” It was comparing written, structured, accountable goal engagement to vague, unwritten goal thinking. The effect likely includes the commitment device effect (writing as a form of commitment), the implementation intention effect (action commitments), and the social accountability effect (sending progress to a friend) — not just the writing per se.
The study also hasn’t been replicated at scale, relied on a self-selected sample, and reported the 42% difference for a specific operationalization that bundled multiple techniques. Using this statistic as though it’s a law of goal setting is not supported by the study.
What you should believe instead: There is solid evidence that specificity, commitment devices, and accountability improve goal outcomes. Matthews’s study is consistent with this. But the 42% figure is a single-study claim, and the mechanism is probably not “writing alone.”
Misread 2: “SMART Goals Are Scientifically Proven”
SMART — Specific, Measurable, Achievable, Relevant, Time-bound — is presented in training programs, management literature, and productivity books as a research-validated framework.
Where SMART actually came from: George Doran, a consultant, introduced the acronym in a 1981 article in Management Review titled “There’s a S.M.A.R.T. Way to Write Management’s Goals and Objectives.” It was a mnemonic for management practice, not a research-derived framework.
The problem with the “A”: The “Achievable” or “Realistic” criterion — depending on whose version you use — directly contradicts one of the best-replicated findings in goal-setting science. Locke and Latham’s research, spanning more than 400 studies, shows a near-linear relationship between goal difficulty and performance up to the capability ceiling. Goals calibrated to be “achievable” or “realistic” tend to produce less effort and lower performance than goals calibrated to be genuinely difficult.
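To picture why that matters, here’s a toy model, purely illustrative and not fitted to Locke and Latham’s data, in which performance tracks goal difficulty linearly until it hits the performer’s capability ceiling:

```python
def expected_performance(goal_difficulty: float, capability: float) -> float:
    """Toy model of the difficulty-performance relationship: performance
    rises roughly linearly with goal difficulty until the goal exceeds
    what the person can actually do, then flattens. Arbitrary units."""
    return min(goal_difficulty, capability)

CAPABILITY = 80  # the performer's ceiling, in the same arbitrary units

for difficulty in (30, 50, 80, 100):  # "safe" targets up through very hard ones
    performance = expected_performance(difficulty, CAPABILITY)
    print(f"goal difficulty {difficulty:>3} -> expected performance {performance}")
```

Under this toy model, a comfortably “achievable” goal set below the ceiling leaves performance on the table; only a goal at or beyond the ceiling elicits full effort.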
SMART’s achievability criterion was designed to prevent unrealistic management targets from demoralizing employees. That’s a reasonable organizational concern. But it is not the same thing as maximizing individual goal performance.
What you should believe instead: SMART provides useful prompts for making a goal concrete. “Is this specific?” and “how will I measure it?” are good questions. But SMART is a heuristic, not a science. Calling it scientifically proven overstates its origins. And applying the “A” criterion to personal goals may actively reduce your performance targets below where they should be.
Misread 3: “It Takes 21 Days to Form a Habit”
This is probably the most widely repeated incorrect claim in self-help culture.
Where the 21-day figure came from: It traces to Maxwell Maltz, a plastic surgeon who published Psycho-Cybernetics in 1960. Maltz observed, informally, that patients seemed to take roughly 21 days to adjust to changes in their appearance — a changed nose, an amputated limb. This was a clinical observation about body image adaptation, not a study of habit formation.
At some point, “21 days” migrated from Maltz’s clinical observation to a general claim about how long it takes to form any habit. The mechanism of that migration is unclear, but it was almost certainly accelerated by self-help books that found the round number satisfying.
What the actual research shows: Phillipa Lally and colleagues published the most rigorous study of habit formation timing in the European Journal of Social Psychology in 2010. They tracked 96 participants performing a new behavior daily for up to 84 days and assessed automaticity over time. Results: new habits took between 18 and 254 days to form, with an average of 66 days, and the distribution was heavily skewed. Simple behaviors (drinking a glass of water at lunch) automatized faster; complex behaviors (running 15 minutes before dinner) took much longer.
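Lally and colleagues modeled each participant’s self-reported automaticity as an asymptotic curve: fast gains early, then a long flattening toward a plateau, with a habit counted as formed when the curve reached 95% of that plateau. The sketch below uses that general shape; the rate constants are invented for illustration and are not fitted values from the paper.

```python
import math

def automaticity(day: float, plateau: float, rate: float) -> float:
    """Asymptotic habit curve: automaticity climbs quickly at first,
    then levels off as it approaches its plateau."""
    return plateau * (1 - math.exp(-rate * day))

def days_to_95_percent(rate: float) -> float:
    """Days until the curve reaches 95% of its plateau -- the criterion
    for a habit counting as formed. Solves 1 - exp(-rate * t) = 0.95."""
    return -math.log(1 - 0.95) / rate

# Invented rate constants -- not fitted values from Lally et al. (2010).
simple_rate = 0.10    # e.g., drinking a glass of water at lunch
complex_rate = 0.02   # e.g., running 15 minutes before dinner

print(f"Simple behavior:  ~{days_to_95_percent(simple_rate):.0f} days")   # ~30
print(f"Complex behavior: ~{days_to_95_percent(complex_rate):.0f} days")  # ~150
```

Small differences in the rate constant translate into months of difference in time-to-plateau, which is how one study design can honestly report a range from 18 to 254 days.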
What you should believe instead: There is no single timeline for habit formation, and the 21-day figure has no basis in habit research. Most behaviors people want to make habitual will take longer than three weeks; some will take two to three months. This matters practically: people who believe habits form in 21 days often give up at day 22, when the behavior still requires deliberate effort, concluding that they’ve failed rather than that the estimate was wrong.
Misread 4: “Positive Visualization Helps You Achieve Goals”
Visualization is recommended in sports psychology, performance coaching, and countless productivity frameworks. The research picture is more complicated.
What the research actually shows: Gabriele Oettingen’s research program, developed across the 1990s and 2000s, tested the effects of positive fantasy (imagining a positive future in detail) on goal-directed behavior. The finding is consistent across studies: pure positive visualization — imagining success without confronting obstacles — reduces goal-directed behavior rather than increasing it.
The mechanism: positive fantasies partially satisfy the desire they represent. Imagining a successful outcome reduces the motivational gap between current state and desired state, which reduces the energy available for actual pursuit.
The nuance: Visualization is not uniformly bad for goal pursuit. The research that supports it — particularly in sports psychology — tends to involve process visualization (imagining the specific actions and behaviors required, not just the positive outcome) or mental contrasting (combining outcome visualization with obstacle identification, as in Oettingen’s WOOP). These are fundamentally different from the positive fantasy that self-help culture recommends.
What you should believe instead: Visualizing a successful outcome in isolation may reduce your motivation to pursue it. Visualizing the process by which you’ll achieve it is different and better-supported. Mental contrasting — Oettingen’s WOOP — produces reliable improvements in goal-directed behavior by combining outcome visualization with honest obstacle identification.
Misread 5: “Goal Setting Is Always Beneficial”
Calling this a misread sounds counterintuitive, but the research on goal setting includes important caveats that popular accounts tend to omit.
What the nuanced research shows: Lisa Ordóñez and colleagues published a 2009 paper in Academy of Management Perspectives titled “Goals Gone Wild,” which argued that goal setting has systematic negative side effects that are underweighted in standard accounts. Their review identified cases where goal setting led to increased risk-taking, unethical behavior (to hit targets), tunnel vision (focusing narrowly on measured metrics at the expense of unmeasured ones), and reduced intrinsic motivation.
Locke and Latham responded with a rebuttal, and the exchange is worth reading because both sides have valid points. The resolution isn’t “goal setting is bad” — it’s that goal setting works in conditions where the right goals are set and where the measurement system captures what actually matters. Specific, difficult goals for the wrong outcomes — or with metrics that inadequately capture the actual objective — can produce harmful performance patterns.
What you should believe instead: Goal setting is one of the most reliably effective behavioral interventions in organizational psychology. But it requires three things to be done well: the right goal (connected to what genuinely matters), appropriate difficulty calibration, and a measurement system that captures the actual outcome rather than a proxy. “Goals Gone Wild” is worth keeping in mind as a check on whether your specific goals meet these criteria.
The Pattern Behind the Misreadings
These five examples share a structure: a real finding gets simplified, de-conditionalized, and repeated until it sounds like a rule. Then the rule gets applied without the conditions that made the original finding meaningful.
The solution isn’t skepticism about goal science — the core findings, particularly from Locke and Latham and Gollwitzer, are genuinely robust. The solution is reading primary sources or reliable summaries that preserve the caveats, and calibrating your confidence in proportion to the quality of the evidence.
When someone tells you that writing goals makes them 42% more likely to happen, ask: compared to what? Under what conditions? In what sample? Those questions usually reveal that the finding, while directionally right, is being applied far beyond what the study warrants.
Related:
- The Complete Guide to the Science of Goal Achievement
- 5 Evidence-Based Goal Approaches Compared
- The Latest Research on Goal Achievement
- What the Science Says About Setting Goals with AI
Tags: goal science myths, SMART goals myth, 21-day habit myth, goal setting research, Matthews study
Frequently Asked Questions
- Is the 42% statistic about writing down goals accurate?
The statistic comes from Gail Matthews's 2015 study at Dominican University of California. It's a real study, but it's routinely overstated. Matthews compared written goals to goals that were only thought about, not to a true baseline. The sample was self-selected professionals who volunteered. The study hasn't been replicated at scale. The direction of the finding (written goals outperform unwritten ones) is plausible and consistent with commitment device research, but “42% more likely to achieve your goals” is a far stronger claim than the study supports.
- Were SMART goals really proven by research?
No. SMART goals were introduced by George Doran in a 1981 Management Review article as a management heuristic — not a research-derived framework. The 'A' (Achievable or Realistic) criterion directly contradicts Locke and Latham's finding that difficult goals outperform easy ones. SMART provides useful prompts for goal specification, but it isn't 'scientifically proven,' and calling it so misrepresents how it was developed.
- Does it take 21 days to form a habit?
No. The 21-day figure is a misreading of a 1960 observation by plastic surgeon Maxwell Maltz, who noticed that patients took roughly 21 days to adjust to a changed body image — a completely different phenomenon from habit formation. Phillipa Lally's 2010 study in the European Journal of Social Psychology, the most rigorous study on habit formation timing, found that new habits took 18 to 254 days to form, with an average of 66 days. The range matters: some behaviors automatize quickly, some take months.