How to Prioritize Feature Requests (Revenue-Weighted, Not Vote-Counted)
A 2024 ProductPlan survey found that 67% of product teams use vote count as the primary ranking signal on their public feedback board, while only 18% weight requests by the paying status or plan size of the requester. That gap is why so many small SaaS teams spend a quarter building something 400 free users asked for, only to watch their three largest customers churn over a separate feature they mentioned once in a Slack DM.
TL;DR:
- Vote counts reward loud users, not paying customers.
- RICE and MoSCoW are decent starting points but ignore dollar value.
- A simple revenue-weighted score can reorder your roadmap in a single afternoon.
Why Vote Count Fails
Vote count measures popularity, not willingness to pay.
On a public feedback board, the user who leaves the most votes is usually the user with the most spare time. That is rarely your highest-paying customer. For most SaaS products, the distribution of votes per user is heavily skewed toward free tier and trial users, because they are the ones most actively shopping. Paying customers are busy using the product, not browsing the roadmap.
A concrete failure mode looks like this. Your free tier is 5,000 users, your paid tier is 200. If 3% of free users vote on a given request and 5% of paid users vote on a different request, the free-tier request wins 150 to 10, even though the paid-tier request represents far more revenue and retention risk. Vote count by itself will always reward the larger, louder segment.
This is not an argument for ignoring feedback from free users. It is an argument against using raw vote totals as your sort key. The fix is a weighting function that accounts for how much revenue each voter represents.
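To see the skew in numbers, here is a minimal sketch of the example above. The segment sizes and vote rates are the same illustrative figures; the $100 average paid MRR is an assumed number, not data from a real product.

```python
# Illustrative figures from the example above: a large free tier out-votes
# a small paid tier even when the paid request carries the revenue.
free_users, paid_users = 5_000, 200
free_vote_rate, paid_vote_rate = 0.03, 0.05  # share of each segment that votes
avg_paid_mrr = 100                           # assumed average paid MRR, $/month

free_votes = free_users * free_vote_rate     # 150 votes for the free-tier request
paid_votes = paid_users * paid_vote_rate     # 10 votes for the paid-tier request

print(f"Free-tier request: {free_votes:.0f} votes, $0 MRR behind it")
print(f"Paid-tier request: {paid_votes:.0f} votes, "
      f"${paid_votes * avg_paid_mrr:,.0f} MRR behind it")
```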
Existing Frameworks: RICE, MoSCoW, Kano
Most teams already use a scoring framework. The common three each have a specific blind spot.
RICE stands for Reach, Impact, Confidence, and Effort: Score = (Reach × Impact × Confidence) / Effort. It is useful because it normalizes for effort, but "Reach" is usually measured in users rather than dollars, which reintroduces the vote-count problem.
MoSCoW sorts items into Must, Should, Could, and Won't. It works for release planning but is not a ranking framework. Everything tends to end up in Should.
Kano categorizes features as Basic, Performance, or Delight. Useful for deciding what to build long term but slow to apply to individual requests.
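For concreteness, RICE reduces to a one-line calculation. The sketch below uses made-up inputs; the point is that Reach is counted in users, so a crowd of free users can out-score a handful of large accounts.

```python
def rice_score(reach_users: float, impact: float, confidence: float, effort: float) -> float:
    """Classic RICE: (Reach x Impact x Confidence) / Effort, with Reach in users."""
    return (reach_users * impact * confidence) / effort

# Made-up inputs: 150 mostly-free users out-score 10 large accounts,
# because the formula never sees the dollars behind each request.
print(rice_score(reach_users=150, impact=2, confidence=0.8, effort=2))  # 120.0
print(rice_score(reach_users=10, impact=3, confidence=0.8, effort=2))   # 12.0
```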
A quick comparison:
| Framework | What it optimizes for | What it misses |
|---|---|---|
| RICE | Effort-adjusted reach | Dollar value per user |
| MoSCoW | Release scope | Quantitative ranking |
| Kano | Long-term category fit | Near-term revenue impact |
| Votes | User demand signal | Who is paying what |
None of these are wrong. They just need a revenue dimension bolted on before they drive a roadmap that keeps paying customers happy.
The Revenue-Weighted Impact Score
The simplest useful weighting is a two-step formula that takes minutes to set up in a spreadsheet.
Here is the core formula:
Score = (Sum of Requester MRR × Impact) / (Reach × Effort)
Where:
- Sum of Requester MRR is the total monthly recurring revenue of the customers who requested the feature.
- Impact is your subjective 1-5 estimate of how much this moves the needle per customer.
- Reach is how many future customers also benefit (1-5 scale, not raw count).
- Effort is estimated person-weeks (1-5).
Notice what this does differently. A request from three customers paying $500/month each produces a numerator of $1,500 times Impact. A request from 100 free-tier users produces a numerator of $0 times Impact, which collapses the whole score to zero.
You can soften this if that feels too aggressive. A common variation assigns a nominal $1-$5 "shadow MRR" to free users to reflect their conversion probability. The exact coefficient depends on your free-to-paid conversion rate. If 5% of free users convert at an average of $30/month, each free requester is worth roughly $1.50 in expected MRR. Plug that in and free requests still count, but correctly discounted.
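If you would rather compute this in a script than a spreadsheet, a minimal sketch looks like the following. The field names are placeholders, and the default shadow MRR of $1.50 assumes the 5% conversion at $30/month from the paragraph above; swap in your own numbers.

```python
from dataclasses import dataclass

@dataclass
class FeatureRequest:
    name: str
    requester_mrr: float  # summed MRR of paying requesters, $/month
    free_requesters: int  # number of free-tier requesters
    impact: int           # 1-5, how much it moves the needle per customer
    reach: int            # 1-5, how many future customers also benefit
    effort: int           # 1-5, estimated person-weeks

def revenue_weighted_score(req: FeatureRequest, shadow_mrr: float = 1.50) -> float:
    """Score = (sum of requester MRR x Impact) / (Reach x Effort).

    Free requesters are counted at shadow_mrr each, here $1.50,
    i.e. a 5% free-to-paid conversion at $30/month."""
    weighted_mrr = req.requester_mrr + req.free_requesters * shadow_mrr
    return (weighted_mrr * req.impact) / (req.reach * req.effort)
```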
Case Study: 5 Requests, Ranked Two Ways
The quickest way to see the difference is a side-by-side ranking.
Imagine a fictional B2B SaaS with five open feature requests this quarter. Here is what the backlog looks like:
| Feature | Requesters | Votes | Paying | MRR of requesters | Impact | Effort |
|---|---|---|---|---|---|---|
| A: Dark mode | 120 | 340 | 8 | $240 | 2 | 2 |
| B: Slack integration | 18 | 55 | 14 | $1,400 | 4 | 3 |
| C: CSV export to S3 | 4 | 9 | 4 | $1,600 | 5 | 2 |
| D: Mobile app | 300 | 410 | 12 | $360 | 3 | 5 |
| E: SSO (SAML) | 3 | 6 | 3 | $2,700 | 5 | 4 |
Ranked by vote count, the order is D, A, B, C, E. Dark mode and a mobile app rise to the top because free users are loud.
Now apply the revenue-weighted score. For this example, Reach is scored 1-5 to reflect how broadly each feature benefits future customers:
| Feature | Numerator (MRR × Impact) | Denominator (Reach × Effort) | Score |
|---|---|---|---|
| A | $240 × 2 = 480 | 5 × 2 = 10 | 48 |
| B | $1,400 × 4 = 5,600 | 3 × 3 = 9 | 622 |
| C | $1,600 × 5 = 8,000 | 2 × 2 = 4 | 2,000 |
| D | $360 × 3 = 1,080 | 5 × 5 = 25 | 43 |
| E | $2,700 × 5 = 13,500 | 1 × 4 = 4 | 3,375 |
The ranking flips to E, C, B, A, D. SSO and CSV export, which looked like niche low-vote items, now top the list because they unlock real revenue. Dark mode and mobile drop because the requesters are mostly not paying.
A good sanity check: the revenue-weighted list should agree with your gut on 3 out of 5 items. If it disagrees on more than that, either your MRR data is stale or your Impact scoring is drifting.
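If you want to check the arithmetic, the two rankings above fall out of a few lines of code. The rows are copied straight from the tables, with Reach taken from the second table.

```python
# (name, votes, requester MRR, impact, reach, effort) copied from the case-study tables
requests = [
    ("A: Dark mode",         340,  240, 2, 5, 2),
    ("B: Slack integration",  55, 1400, 4, 3, 3),
    ("C: CSV export to S3",    9, 1600, 5, 2, 2),
    ("D: Mobile app",        410,  360, 3, 5, 5),
    ("E: SSO (SAML)",          6, 2700, 5, 1, 4),
]

by_votes = sorted(requests, key=lambda r: r[1], reverse=True)
by_revenue = sorted(requests, key=lambda r: (r[2] * r[3]) / (r[4] * r[5]), reverse=True)

print("By votes:  ", [r[0][0] for r in by_votes])    # ['D', 'A', 'B', 'C', 'E']
print("By revenue:", [r[0][0] for r in by_revenue])  # ['E', 'C', 'B', 'A', 'D']
```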
When Revenue Weighting Breaks Down
This framework is not bulletproof. It has three known failure modes.
First, product-led growth products where today's free user is tomorrow's paid team. If your conversion funnel is real, assigning shadow MRR to free users is mandatory; otherwise you will under-build for the segment that funds your next year.
Second, table-stakes features that do not fit impact scoring. Security, compliance, and accessibility features often score low on the formula because they have diffuse impact and no specific MRR attached. Carve out a fixed 20% of engineering capacity for these and do not run them through the formula at all.
Third, platform work and technical debt. The formula punishes large-effort items with no immediate customer signal. Run these on a separate track. A reasonable split is 60% revenue-weighted features, 20% table-stakes, 20% platform.
Acknowledging these three categories keeps the framework honest. It is a ranking tool for customer-facing features, not a universal decision function.
Your Priorities Spreadsheet
A working priorities spreadsheet has seven columns. Build it in whatever tool you use.
| Column | Type | Notes |
|---|---|---|
| Feature | Text | Short name, link to the original request |
| Requesters | List | Names or IDs of customers who asked |
| Total MRR | Number (USD/month) | Summed MRR of requesters, including shadow MRR for free users |
| Impact | Integer 1-5 | Subjective, how much it moves the needle |
| Reach | Integer 1-5 | How many future customers also benefit |
| Effort | Integer 1-5 | Estimated person-weeks |
| Score | Formula | (Total MRR × Impact) / (Reach × Effort) |
Two rules keep it usable. Re-score monthly because MRR and churn move. Review the top 10 with your team before you commit to them, because any formula can be gamed by whoever enters the numbers.
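If your backlog lives in a CSV export rather than a live sheet, a few lines of pandas reproduce the Score column and the sort. The file name and column headers below are assumptions; match them to whatever your feedback tool exports.

```python
import pandas as pd

# Assumed export with the seven columns from the table above.
df = pd.read_csv("feature_requests.csv")

df["Score"] = (df["Total MRR"] * df["Impact"]) / (df["Reach"] * df["Effort"])
top10 = df.sort_values("Score", ascending=False).head(10)
print(top10[["Feature", "Total MRR", "Score"]])  # review these with the team
```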
If you want the request tracking and MRR weighting in one place instead of a standalone sheet, Feedbask's public roadmap and feature voting tool track requester identity and let you pull plan data into your prioritization flow.
FAQ
Should I show MRR weighting to customers on the public roadmap? No. Customers should see status (requested, planned, in progress, shipped) but not your internal scoring. Otherwise the loudest users will engineer their way into higher scores.
What if I do not know each requester's MRR? Start with the largest buckets: which plan tier they are on. Even "Free, Starter, Growth" as a 0/1/3 weighting is a massive improvement over raw votes.
How does this interact with RICE? Think of it as RICE with a revenue-weighted Reach term. You are replacing "how many users want this" with "how much revenue wants this." The Effort and Confidence terms still apply.
Does this hurt new customer acquisition? Only if you never build anything for free users, which is why the 20% table-stakes carve-out matters. The formula prevents free-tier requests from dominating, not from appearing.
How often should I re-rank? Monthly for the top of the list, quarterly for the full backlog. Anything more frequent becomes theater.
What if two customers request the same feature but phrase it differently? Merge the requests and sum their MRR. The whole point is to capture the true demand weight. Most feedback tools, including Feedbask, let you merge requests and carry over voters and requester history automatically.
Ready to sort your backlog by revenue instead of noise? Sign up free at feedbask.com/auth and pull your feature requests, roadmap, and customer data into one view.
