NPS When You Have 200 Users: A Sample-Size Guide
At a response rate of 20%, a 200-user SaaS gets 40 NPS responses, which carries a margin of error of roughly plus or minus 22 points at a 95% confidence level. That means a reported NPS of 35 could actually be 13 or 57, and you would have no way to tell from the data.
TL;DR:
- NPS collapses an 11-point scale into three buckets, which makes it statistically noisy at small sample sizes.
- Below 1,000 users, most NPS movements are noise, not signal.
- Use CSAT, the Sean Ellis PMF test, or direct outreach to power users instead.
The 11-Point Scale Problem
NPS is noisy at small N because it throws away most of the information a Likert scale contains.
The scoring rule is the issue. Net Promoter Score takes an 11-point scale (0 to 10), collapses it into three buckets (Detractors 0-6, Passives 7-8, Promoters 9-10), and then subtracts the percentage of Detractors from the percentage of Promoters. The output ranges from -100 to +100. The problem: a score of 6 and a score of 0 are treated identically, and a jump from 8 to 9 changes the final number dramatically. At small sample sizes, the random variation in how individual users round their own satisfaction to an integer swamps the underlying signal.
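The scoring rule above fits in a few lines of Python (the scores are made-up examples, not real data):

```python
def nps(scores):
    """Net Promoter Score from raw 0-10 ratings: % Promoters minus % Detractors."""
    n = len(scores)
    promoters = sum(s >= 9 for s in scores)   # 9-10
    detractors = sum(s <= 6 for s in scores)  # 0-6
    return 100 * (promoters - detractors) / n

# A 6 and a 0 both count as detractors, so they move the score identically.
# Shifting one respondent from 8 to 9 moves them from passive to promoter:
# a 10-point swing in the final number at n = 10.
print(nps([8] * 10))       # all passives
print(nps([9] + [8] * 9))  # one respondent moves up a single notch
```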
Fred Reichheld and his colleagues at Bain, who created NPS, never intended it for startup-scale samples. Their original work used tens of thousands of responses across Fortune 500 customer bases. Applying the same methodology to 40 responses is not a miniature version of the same analysis. It is a different statistical regime.
Margin of Error, Calculated
The math is more discouraging than most founders realize when they first run it.
Here is the rough calculation. For a proportion near 50% (the worst case), the 95% margin of error is 1.96 times the square root of (p times (1-p) divided by n), which works out to roughly 98/√n percentage points for the proportion of Promoters alone. NPS subtracts two proportions, so the variance roughly doubles and the margin grows by a factor of √2. A usable approximation is:
MoE_NPS ≈ 140 / √n
Plug in your sample size and the number is sobering:
| Users | Response rate | Responses (n) | MoE on NPS (points) |
|---|---|---|---|
| 200 | 20% | 40 | ±22 |
| 500 | 20% | 100 | ±14 |
| 1,000 | 20% | 200 | ±10 |
| 2,000 | 20% | 400 | ±7 |
| 20,000 | 20% | 4,000 | ±2.2 |
The rough rule to remember: until you clear 400 responses, any quarterly NPS movement under 10 points is inside the noise band. You cannot tell if the product got better or worse.
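The table above can be reproduced directly from the worst-case approximation (both proportions near 50%, so the variance of the difference is roughly double that of a single proportion):

```python
import math

def nps_margin_of_error(n, z=1.96):
    """Approximate 95% margin of error on an NPS score, in points.
    Worst case: Promoter and Detractor proportions near 0.5, so the
    variance of their difference is about twice p*(1-p)/n at p = 0.5."""
    return z * math.sqrt(2 * 0.25 / n) * 100

for n in (40, 100, 200, 400, 4000):
    print(f"n={n}: ±{nps_margin_of_error(n):.1f} points")
```

Running it recovers the ±22 at 40 responses and ±7 at 400 from the table, which is where the 140/√n shorthand comes from (1.96 × √2 × 50 ≈ 139).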
Why Benchmarks Lie at This Scale
The industry benchmarks you see quoted are almost all collected from large panels.
A "good" SaaS NPS is often quoted as 30, and a "world-class" NPS as 50. Those numbers come from vendor surveys of hundreds of companies with thousands of respondents each. They are not noise-adjusted and they are not normalized for product category. Worse, they say nothing about the lopsided-sample problem that kills early-stage survey validity: the users who respond first are usually your biggest fans.
For a micro-SaaS, early respondents are your loudest 5% of users. Your "NPS of 60" is really "NPS of 60 among the 5% of users who like you enough to open an email." Extrapolating from there to your whole base is the kind of mistake that leads founders to double down on the wrong segment.
3 Better Signals Below 1,000 Users
At this scale, the right question is not "what is our NPS" but "what will tell us if the product is working."
Three instruments are more useful than NPS below 1,000 users.
1. 5-point CSAT on specific interactions. After a user completes a key action (exports a report, runs their first campaign), show a single 1-5 satisfaction question. The 5-point scale has less noise than an 11-point scale, and tying it to a specific action gives you actionable feedback. CSAT above 4.2 is healthy. Below 3.8 is a red flag.
2. Sean Ellis PMF test. Ask "How would you feel if you could no longer use this product?" with three options: Very disappointed, Somewhat disappointed, Not disappointed. The 40% threshold on "Very disappointed" is a better PMF signal than any NPS number, because it measures dependency rather than sentiment.
3. Qualitative interviews with top 10% by usage. Take the users in your 90th percentile of sessions per week and ask them three questions: what nearly stopped you signing up, what would you miss most, what problem did this replace. Fifteen of these conversations will teach you more than 400 NPS responses.
Feedbask's in-app widget can run all three patterns. The point is not the tool, though. The point is to stop optimizing for a number that is mostly sampling error.
When to Switch On NPS
NPS becomes statistically useful somewhere between 1,000 and 2,000 monthly active users.
The pragmatic threshold is when your margin of error drops below 10 points, which happens around 200 responses per survey period. At a 20% response rate, that is 1,000 active users. Below that, running NPS is not harmful as long as you do not read the number too closely, but it is an opportunity cost: every minute you spend analyzing 40 responses is a minute you are not talking to a power user.
When you do switch it on, a few implementation details matter. Survey the same cohort each quarter so comparisons are apples to apples. Exclude users under 30 days old, because they are still learning the product. Publish the raw response distribution, not just the score, so you can see which bucket moved.
How to Run a Sean Ellis Test in 10 Minutes
The Sean Ellis test is the single highest-ROI survey for a pre-1K user product.
The full protocol is documented in detail by Ellis and Hiten Shah at First Round Review, but a minimum viable version takes about ten minutes to deploy.
The question: "How would you feel if you could no longer use [Product]?" with three options, Very disappointed, Somewhat disappointed, Not disappointed, plus an open text field "What's the main benefit you get from [Product]?"
Target: active users who have been signed up for at least two weeks and have used the core feature at least twice. Anything shorter and they are still in onboarding.
Analysis: segment by "Very disappointed" users. Read their text answers for the pattern in how they describe the benefit. That language is your positioning. The 40% threshold matters, but even at 25% you can learn who your product works for.
Here is what a minimal Feedbask setup looks like for this, embedded as a survey widget trigger after a user hits a usage milestone:
```html
<script async src="https://feedbask.com/embed.js"
        data-project="YOUR_PROJECT_ID"
        data-widget="survey"
        data-survey="pmf-sean-ellis"
        data-trigger="event:core_action_completed_v2"></script>
```
A fuller writeup of how to configure event triggers is in /docs.
FAQ
But the big SaaS blogs all say NPS is important. Big SaaS has tens of thousands of customers. The methodology is not wrong, it is the wrong tool for your stage. The noise floor is too high at low N.
Can I just survey more often to get more responses? No. Surveying quarterly with 40 responses has the same statistical power as weekly with 40 responses. What matters is total unique responses per period, not frequency.
What if I weight responses by MRR? Revenue-weighted NPS is a reasonable variation for B2B SaaS but the variance problem is actually worse, because a handful of large accounts can swing the number single-handed. At small N, the signal gets noisier, not cleaner.
Is CSAT really better? For specific interactions, yes. CSAT measures a single event, which is easier to act on than a product-wide vibe. The 5-point scale is also less noisy than 11 points at low N.
How do I present this to a board or investor who expects NPS? Report CSAT plus the Sean Ellis percentage plus qualitative themes. If they insist on NPS, report it with confidence intervals. A line that reads "NPS 35 ± 22 at 95% CI" is more professional than a naked number.
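If you do report NPS with a confidence interval, the interval can be computed from the raw scores. A minimal sketch using the normal approximation (the variance term accounts for the fact that the Promoter and Detractor proportions are negatively correlated):

```python
import math

def nps_with_ci(scores, z=1.96):
    """NPS from raw 0-10 scores, with an approximate 95% margin of error.
    Normal approximation on the Promoter-minus-Detractor difference:
    Var(P - D) = (p + d - (p - d)^2) / n for multinomial proportions."""
    n = len(scores)
    p = sum(s >= 9 for s in scores) / n   # promoter share
    d = sum(s <= 6 for s in scores) / n   # detractor share
    nps = (p - d) * 100
    moe = z * math.sqrt((p + d - (p - d) ** 2) / n) * 100
    return nps, moe

score, moe = nps_with_ci([10] * 22 + [8] * 5 + [4] * 13)  # made-up sample
print(f"NPS {score:.0f} ± {moe:.0f} at 95% CI")
```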
When should I stop using the Sean Ellis test? Once you are past PMF and growing at a predictable rate, it matters less. At that point, switch to NPS for trend tracking and retain CSAT for interaction-level insight.
If you want a feedback stack designed for the pre-1K user stage, start free at feedbask.com/auth. The widget handles CSAT, PMF, and NPS out of the box, so you can switch signals as your user base grows.