Interpretation Basics

This tutorial shows how to read results from each survey type and translate them into clear product decisions.

What You'll Know When You're Done

  • How to read results from all six survey types
  • What signals to look for (and what to ignore)
  • When you have enough data to make a decision, and when you need more

The goal is to move from raw responses to confident next steps.

Reading Results by Survey Type

FastPoll

FastPoll gives you a quick signal. Here's how to read it:

  • Clear winner (one option above 50%) — Strong signal. You can act on this with confidence.
  • Split opinions (options within 10% of each other) — Weak signal. You might need a follow-up survey with more specific options, or more responses.
  • One option at near-zero — That option is dead. Drop it from consideration.

See the FastPoll reference for details on response data format.
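
If you want to automate the first pass, here is a minimal sketch of those thresholds in code. The {option: vote count} input is an assumed shape for illustration, not the actual FastPoll response format; check the reference above for the real schema.

```python
# Minimal sketch of the FastPoll reads above. The {option: vote count}
# input is an assumed shape, not the real FastPoll response format.

def read_fastpoll(votes: dict[str, int]) -> str:
    total = sum(votes.values())
    if total == 0:
        return "no data yet"
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    top_option, top_votes = ranked[0]
    top_share = top_votes / total
    if top_share > 0.5:
        return f"clear winner: {top_option} ({top_share:.0%})"
    if len(ranked) > 1 and (top_votes - ranked[1][1]) / total < 0.10:
        return "split opinions: follow up with more specific options or more responses"
    return f"leading option: {top_option}, but no majority yet"

print(read_fastpoll({"Dark mode": 62, "Offline sync": 30, "Themes": 8}))
# -> "clear winner: Dark mode (62%)"
```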

UserChoice

UserChoice uses Conjoint Analysis to reveal which product trade-offs matter most. Look for:

  • Preference strength — How decisive are users? A 70/30 split is a clear preference. A 52/48 split means both options are roughly equal in their eyes.
  • Consistency across comparisons — If option A beats B, and B beats C, but C beats A, your users don't have a clear mental model. Simplify the options.
  • Segment differences — Power users and new users often prefer different things. If your data shows a split, check whether it follows segment lines before concluding your users have no clear preference.

See the UserChoice reference for details on preference data.
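
The consistency check lends itself to a quick script. Below is a minimal sketch that flags preference cycles; the list of (winner, loser) pairs is an assumed input shape, not the actual UserChoice preference data.

```python
# Minimal sketch of the consistency check above: flag preference cycles
# such as A beats B, B beats C, C beats A. The (winner, loser) pairs are
# an assumed input shape, not the actual UserChoice preference data.
from itertools import permutations

def has_preference_cycle(pairwise_winners: list[tuple[str, str]]) -> bool:
    beats = set(pairwise_winners)
    options = {name for pair in pairwise_winners for name in pair}
    return any(
        (a, b) in beats and (b, c) in beats and (c, a) in beats
        for a, b, c in permutations(options, 3)
    )

print(has_preference_cycle([("A", "B"), ("B", "C"), ("C", "A")]))  # True
print(has_preference_cycle([("A", "B"), ("B", "C"), ("A", "C")]))  # False
```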

PricePoint

PricePoint uses Van Westendorp to find your pricing sweet spot. The key numbers:

  • Too cheap threshold — Below this price, people question quality. Pricing below it costs you credibility.
  • Too expensive threshold — Above this price, you lose buyers. Pricing above it costs you conversions.
  • Optimal price range — The zone between "too cheap" and "too expensive" where most users are comfortable. This is where you should price.

See the PricePoint reference for details on pricing analysis output. For deeper methodology context, read the Van Westendorp pricing guide.

["IMAGE - Example PricePoint results showing the Van Westendorp price sensitivity meter with too-cheap, too-expensive, and optimal price range zones highlighted."]
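
To make the thresholds concrete, here is a simplified sketch of the underlying idea: find the prices that most respondents call neither too cheap nor too expensive. It is an illustration only; PricePoint's actual analysis follows the Van Westendorp method, and the (too cheap, too expensive) answer pairs below are an assumed input shape.

```python
# Simplified sketch of the idea behind the PricePoint thresholds: find
# the price range that most respondents call neither "too cheap" nor
# "too expensive". This is an illustration, not the Van Westendorp
# calculation PricePoint actually runs, and the (too_cheap, too_expensive)
# answer pairs are an assumed input shape.

def acceptable_price_range(answers: list[tuple[float, float]], step: float = 1.0):
    acceptable = []
    price = min(cheap for cheap, _ in answers)
    ceiling = max(expensive for _, expensive in answers)
    while price <= ceiling:
        too_cheap = sum(price <= cheap for cheap, _ in answers) / len(answers)
        too_expensive = sum(price >= expensive for _, expensive in answers) / len(answers)
        if too_cheap < 0.5 and too_expensive < 0.5:
            acceptable.append(price)
        price += step
    return (min(acceptable), max(acceptable)) if acceptable else None

# Each pair: (price the respondent called too cheap, price they called too expensive)
answers = [(5, 25), (8, 30), (6, 20), (10, 35), (7, 28)]
print(acceptable_price_range(answers))  # (8.0, 27.0) for this toy data
```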

FeaturePriority

FeaturePriority uses Kano, MaxDiff, and Bradley-Terry to sort your backlog. What to look for:

  • Must-have features — These consistently rank high. If you don't build them, users will be disappointed. Non-negotiable.
  • Nice-to-have features — These rank in the middle. Build them when you have capacity, but don't prioritise them over must-haves.
  • Don't-bother features — These consistently rank low. Building them wastes dev time on things users don't care about.
  • Divided opinions — If a feature ranks high for some users and low for others, it might be a segment-specific need. Worth investigating, not dismissing.

See the FeaturePriority reference for ranking data details. For Kano methodology context, read the Kano model guide.

["IMAGE - Example FeaturePriority results showing features sorted into must-have, nice-to-have, and don't-bother categories with ranking scores."]
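
If you want to reproduce the bucketing on exported data, here is a rough sketch. The per-respondent 0-100 scores and the cut-off values are illustrative assumptions, not the actual FeaturePriority output or thresholds.

```python
# Rough sketch of the bucketing above: sort features by average score and
# flag divided opinions. The per-respondent 0-100 scores and the cut-offs
# are illustrative assumptions, not the actual FeaturePriority output.
from statistics import mean, pstdev

def bucket_features(scores: dict[str, list[float]]) -> dict[str, str]:
    buckets = {}
    for feature, ratings in scores.items():
        average, spread = mean(ratings), pstdev(ratings)
        if spread > 30:
            buckets[feature] = "divided opinions: check segments"
        elif average >= 70:
            buckets[feature] = "must-have"
        elif average >= 40:
            buckets[feature] = "nice-to-have"
        else:
            buckets[feature] = "don't bother"
    return buckets

print(bucket_features({
    "SSO login": [90, 85, 95, 80],   # consistently high: must-have
    "CSV export": [55, 60, 45, 50],  # middle of the pack: nice-to-have
    "Dark mode": [95, 10, 90, 15],   # loved by some, ignored by others
}))
```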

OpenFeedback

OpenFeedback collects qualitative responses. Numbers won't help you here. Patterns will:

  • Recurring themes — If five different people mention the same pain point, that's a real problem, not an outlier.
  • Sentiment — Are responses mostly positive, negative, or mixed? The overall tone tells you whether users are happy, frustrated, or indifferent.
  • Specific suggestions — "The checkout is confusing" is more actionable than "I don't like it." Look for responses that point to specific things you can fix.

See the OpenFeedback reference for response data format.
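
Reading the responses yourself is the main work, but a quick keyword count can help surface recurring themes in a large batch. The theme keywords below are made-up examples; treat this as a rough first pass, not a substitute for proper review.

```python
# Rough first pass at spotting recurring themes by keyword. The theme
# keywords are made-up examples; this complements reading the responses
# yourself, it doesn't replace it.
from collections import Counter

THEMES = {
    "checkout": ["checkout", "payment", "cart"],
    "performance": ["slow", "lag", "loading"],
    "onboarding": ["confusing", "tutorial", "sign up"],
}

def count_themes(responses: list[str]) -> Counter:
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(word in lowered for word in keywords):
                counts[theme] += 1
    return counts

responses = [
    "The checkout is confusing",
    "Checkout keeps failing on mobile",
    "Pages are slow to load",
]
print(count_themes(responses))  # "checkout" recurs: likely a real pattern
```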

Reaction

Reaction surveys measure how people feel about content, designs, or copy. Quick reads:

  • Strong positive reactions — The content lands. Ship it.
  • Mixed reactions — Something's off but not broken. Test a variation before committing.
  • Strong negative reactions — Don't ship this version. Rework the content before rollout.

See the Reaction reference for reaction data format.
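
Here is a minimal sketch of those three reads, assuming reactions arrive as positive / neutral / negative labels. Both the input shape and the cut-offs are illustrative assumptions, not the actual Reaction data format.

```python
# Quick sketch of the three reads above. Assumes reactions arrive as
# "positive" / "neutral" / "negative" labels with illustrative cut-offs;
# see the Reaction reference for the real data format.

def read_reactions(labels: list[str]) -> str:
    if not labels:
        return "no data yet"
    positive = labels.count("positive") / len(labels)
    negative = labels.count("negative") / len(labels)
    if positive >= 0.7:
        return "strong positive: ship it"
    if negative >= 0.5:
        return "strong negative: rework before rollout"
    return "mixed: test a variation before committing"

print(read_reactions(["positive", "positive", "negative", "positive", "neutral"]))
# -> "mixed: test a variation before committing"
```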

General Interpretation Principles

  • Wait for enough responses. Thirty responses is a reasonable minimum for most surveys. Five responses is an anecdote, not data.
  • Look for patterns, not outliers. One angry response doesn't mean your product is broken. Ten similar complaints probably do.
  • Combine quantitative and qualitative. Numbers tell you what's happening. Open feedback tells you why.
  • Consider timing. Responses collected right after a product launch may reflect excitement or frustration that fades. Check back after a week.
  • Share with your team. Different people notice different patterns. A designer might spot UX issues in feedback that a PM would miss.
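
For a sense of why response counts matter, here is the standard 95% margin-of-error calculation for a poll share at different sample sizes. It illustrates the first principle above; it is not a rule any of the survey types applies for you.

```python
# Illustration of the "wait for enough responses" principle using the
# standard 95% margin of error for a poll share. The 30-response guideline
# comes from the text above, not from this calculation.
import math

def margin_of_error(n: int, share: float = 0.5) -> float:
    return 1.96 * math.sqrt(share * (1 - share) / n)

for n in (5, 30, 200):
    print(f"{n:>3} responses: roughly ±{margin_of_error(n):.0%}")
# ±44% at 5 responses, ±18% at 30, ±7% at 200: more responses, tighter read.
```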

Next Steps