Shipping SaaS on gut feel is an expensive habit. You don’t need more opinions—you need quick, credible signals that your idea solves a real job, for a real segment, at a price they’ll actually pay. If you came searching for Tomer Sharon’s Validating Product Ideas and its lean research mindset, you’re on the right track: the goal is to learn fast, cheaply, and continuously, so you can stop debating and start deciding.
This guide turns that mindset into action. You’ll get 10 lean research tactics tailored for SaaS, each with what it is, how to run it, the key questions it answers, pass/fail thresholds, and tools to try. We’ll borrow Sharon’s core questions to structure your plan, then walk through practical moves—centralizing user feedback, customer discovery interviews (JTBD), smoke-test landing pages and fake doors, micro‑polls, paid pilots and LOIs, concierge/Wizard‑of‑Oz MVPs, five‑user usability tests, pricing experiments, and instrumentation for activation and retention. Along the way, you’ll see how to turn scattered feedback into a prioritized roadmap with tools like Koala Feedback. Let’s replace hunches with evidence and move your idea from “interesting” to “investable.”
1. Centralize and prioritize user feedback with Koala Feedback
When signals live in docs, tickets, and DMs, you get anecdotes—not evidence. Centralizing feedback turns scattered opinions into a single backlog you can sort, dedupe, and score. It’s a fast, low-cost way of validating product ideas while building trust through a transparent roadmap.
What it is
A lightweight system for capturing every request in one place and mapping it to themes, demand, and status. Koala Feedback gives you a branded Feedback Portal, automatic deduplication and categorization, voting and comments, prioritization boards, and a public roadmap with customizable statuses.
How to run it
Start small and make it the front door for requests so patterns emerge quickly.
Create your portal: Add your domain, logo, and categories that mirror product areas.
Route inputs: Pipe support tickets, sales notes, and in‑app prompts to Koala; point users to submit and vote.
Deduplicate and tag: Let Koala auto‑merge duplicates; refine categories as themes solidify.
Prioritize: Group by boards (e.g., Onboarding, Billing, Integrations) and stack‑rank by demand and fit.
Communicate: Publish a public roadmap; update statuses to close the loop.
Questions to answer
What do people need right now?
Who’s asking (prospect vs. customer, segment, plan)?
Do people want the product or feature enough to push for it?
How do people describe the pain in their own words?
Pass/fail signals
Pass: Clear concentration of votes/comments on a few themes; buyer‑language in comments; repeat requests from target segments; engagement rises after status updates.
Fail: Flat voting, fragmented themes with no pattern, requests from non‑target users, silence after roadmap updates.
Tools to try
Koala Feedback: Feedback Portal, dedupe/categorization, voting, prioritization boards, public roadmap.
Google Forms or in‑app micro‑prompts: To capture quick inputs you funnel into Koala.
2. Build a lean research plan using Tomer Sharon's questions
Before you start testing tactics, decide what you’re trying to learn. Tomer Sharon’s Validating Product Ideas approach gives you a compact question set that turns vague bets into focused experiments—perfect for validating product ideas in SaaS without burning time or cash.
What it is
A one‑page plan that uses Sharon’s core questions to sequence quick, cheap studies and set pass/fail thresholds. It aligns your team on what to test first, which method to use, and what “good” looks like.
How to run it
Identify your top three riskiest assumptions, then map each to a method and a metric. Stage tests from lowest to highest cost. Predefine sample sizes and success gates (e.g., run until you log ~100 conversion events; target pre‑orders at 10–20% or convert up to 5% of waitlists).
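If it helps to make those gates concrete, here’s a minimal sketch (in TypeScript) of a pre-registered pass/fail check. The Gate and Result shapes and the sample numbers are illustrative assumptions, not part of Sharon’s method; the point is simply to write thresholds down before the test runs.

```typescript
// Pre-registered gates: write the thresholds down before you run the test.
interface Gate {
  minEvents: number;          // sample-size gate, e.g. ~100 logged conversion events
  minConversionRate: number;  // success gate, e.g. 0.10 for the low end of pre-orders
}

interface Result {
  events: number;       // opportunities logged so far (visits, sign-ups, etc.)
  conversions: number;  // e.g. pre-orders, or waitlist sign-ups that became buyers
}

function evaluateGate(result: Result, gate: Gate): "pass" | "fail" | "keep-testing" {
  if (result.events < gate.minEvents) return "keep-testing"; // too little data to call
  const rate = result.conversions / result.events;
  return rate >= gate.minConversionRate ? "pass" : "fail";
}

// 14 pre-orders from 112 events is 12.5%, which clears a 10% gate.
console.log(evaluateGate({ events: 112, conversions: 14 }, { minEvents: 100, minConversionRate: 0.1 }));
```

Returning “keep-testing” below the sample-size gate protects you from calling a winner on thin data.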
Questions to answer
Use Sharon’s spine to focus your plan: What do people need? Who are the users? How do people solve it today? What is the workflow? Do people want the product? Which design generates better results? How do people find stuff? How will we find participants?
Pass/fail signals
Pass if you can answer each question with evidence from the right audience and hit your pre‑set thresholds; signals converge across methods (e.g., interviews, landing tests, usability). Fail if results are flat, fragmented, or miss targets—even after iterating once.
Tools to try
Koala Feedback for centralizing signals and prioritization; Google Forms/SurveyMonkey for quick surveys; Google Trends and Exploding Topics for demand signals; a prototyping tool for design tests; a simple doc or board to track assumptions, methods, and metrics.
3. Run customer discovery interviews (jobs-to-be-done)
When you’re validating product ideas, nothing beats hearing the story of the last time a customer tried to make progress and hit friction. Jobs‑to‑be‑Done (JTBD) interviews replace feature wishlists with real switch moments, constraints, and desired outcomes—evidence you can ship against.
What it is
A semi‑structured interview that centers on the “job” a user is hiring a solution to do. Instead of pitching, you unpack the timeline, triggers, alternatives, and outcomes. This method dovetails with Tomer Sharon’s questions—What do people need? Who are the users? How do they solve it today?—and turns anecdotes into actionable insight for validating product ideas.
How to run it
Keep it light, fast, and focused on recent behavior, not hypotheticals. Recruit current users, churned users, and qualified prospects who recently “hired” or “fired” a solution.
Define your assumption: segment, job, and suspected constraints.
Recruit from your list, sales pipeline, and Koala Feedback voters/commenters.
Write a short guide: timeline, triggers, alternatives, selection criteria, definition of success.
Interview: avoid pitching; probe for specifics, quotes, and artifacts (screenshots, spreadsheets).
Synthesize: cluster pains and desired outcomes; tag themes; map to opportunities and risks.
Feed insights into Koala Feedback so themes, demand, and language roll into your roadmap.
Questions to answer
Ground the conversation in a recent event and get concrete.
What triggered you to look for a solution—what changed?
What job were you trying to get done, and why now?
What did you try before (workarounds, competitors)? What broke?
How did you choose? What were must‑haves vs. nice‑to‑haves?
What does “success” look like in your words?
What would make this a non‑starter (security, compliance, integrations)?
How do you budget and justify spend on this problem?
Pass/fail signals
Pass: Repeated, vivid switch stories from your target segment; clear top‑3 pains and outcomes; buyers articulate decision criteria in their own words; concrete next steps (pilot, internal intro, deeper eval).
Fail: Vague interest, no recent attempts to solve, problem not prioritized, satisfaction with current workaround/incumbent, feedback dominated by non‑target users.
Tools to try
Koala Feedback: Recruit engaged voters, log verbatims, tag themes, and link insights to roadmap items.
SurveyMonkey/Google Forms: Short screeners to qualify participants.
Recording/transcription + a simple spreadsheet/board: Capture quotes, timestamps, and themes for fast synthesis.
4. Launch a smoke-test landing page and fake-door CTA
A smoke test puts your message in front of real prospects and asks them to act—before you build. A “fake-door” CTA (“Start free trial,” “Request demo,” “Pre‑order”) captures intent and gives you fast, cheap evidence for validating product ideas without code.
What it is
A single‑purpose landing page that promises the core value, pairs it with one primary CTA, and records conversions (clicks, emails, or payments). If the CTA is a fake door, route clickers to a waitlist or “coming soon” page and learn from their responses and language.
How to run it
Keep the surface area small and instrument everything. Drive targeted traffic in short bursts so you can iterate copy and offer quickly. A minimal click‑tracking sketch follows these steps.
Define the offer: One segment, one job, one outcome. Draft three headline/value prop variants.
Drive traffic: Email your list, run small paid tests, and post in relevant communities.
Close the loop: Thank sign‑ups, ask a 1‑question micro‑poll, and funnel verbatims into Koala for synthesis.
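As a sketch of the “instrument everything” advice above, here’s what minimal fake‑door tracking can look like in the browser (TypeScript). The /api/events endpoint, the #start-trial selector, and the event name are placeholders for whatever your analytics stack expects.

```typescript
// Fake-door CTA instrumentation. Placeholders: the /api/events endpoint,
// the #start-trial button, and the event/variant names.
function trackFakeDoorClick(variant: string): void {
  const payload = {
    event: "fake_door_cta_clicked",
    variant, // which headline/value-prop variant this page is running
    source: new URLSearchParams(location.search).get("utm_source") ?? "direct",
    ts: Date.now(),
  };
  // sendBeacon survives the navigation that follows the click.
  navigator.sendBeacon("/api/events", JSON.stringify(payload));
  location.assign("/coming-soon"); // route clickers to the honest waitlist page
}

document.querySelector<HTMLButtonElement>("#start-trial")
  ?.addEventListener("click", () => trackFakeDoorClick("variant-b"));
```

sendBeacon is used here because a regular fetch can be cancelled by the page navigation the click triggers.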
Questions to answer
Your goal is to gauge demand strength, message‑market fit, and who is converting.
Does this segment click and sign up? Why now?
Which message wins (problem vs. outcome vs. feature)?
Will anyone pre‑order or place a deposit?
What price anchor keeps interest?
Which channel produces qualified sign‑ups at acceptable cost?
Pass/fail signals
Use pre‑set thresholds and small, decisive tests, then iterate or pivot.
Pass: You log ~100 conversion events; pre‑orders convert in the 10–20% range; a pre‑launch waitlist later converts at up to ~5% into buyers; consistent CTA engagement from your target segment.
Fail: Flat or highly variable results across variants; no concentration by segment; poor follow‑through from waitlist to buyer even after iteration.
Tools to try
A simple page builder or your CMS: Create the landing and “coming soon” page; Shopify’s password‑protected prelaunch page works for quick tests.
Crowdfunder / Waitlist apps: Collect paid pre‑orders or build a queue you can later convert.
Analytics: Track events and cohorts (Shopify Analytics or your web analytics stack).
Koala Feedback: Capture micro‑poll replies and waitlist comments, dedupe themes, and tie insights to your roadmap.
5. Collect short surveys and in-app micro-polls
Interviews tell you why; surveys and micro‑polls tell you how many. A few crisp questions, delivered at the right moment, can quantify pains, surface objections, and capture buyer‑language you can route into Koala for prioritization—fuel for validating product ideas without heavy lift.
What it is
Lightweight, context‑aware questions that run inside your app or on a landing page, plus short follow‑up surveys to qualified users. The goal is to measure problem salience, willingness to pay, and must‑have requirements—fast and with minimal bias.
Micro‑polls: Single question with optional “Why?” free‑text.
Pulse surveys: 3–5 questions to a defined segment.
Trigger points: On exit, after task completion, or post‑support.
How to run it
Pin down the single decision you’re trying to make, then instrument the smallest set of questions that inform it. Keep copy neutral, trigger on behavior, and close the loop by acting on what you learn. A behavior‑triggered poll sketch follows these steps.
Draft 1–3 questions (Likert or multiple‑choice) plus one open “Why?”.
Pipe verbatims to Koala, tag themes, act, and communicate status.
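Here’s the behavior‑triggered pattern as a minimal TypeScript sketch. The export‑abandonment trigger, the /api/micro-polls endpoint, and the question copy are all assumptions, and window.prompt is a crude stand‑in for a small one‑question modal.

```typescript
// One-question micro-poll fired by behavior, not a timer.
interface MicroPollAnswer {
  question: string;
  choice: string;
  why?: string;   // optional free text: the buyer language you want verbatim
  userId: string;
}

async function submitMicroPoll(answer: MicroPollAnswer): Promise<void> {
  await fetch("/api/micro-polls", {        // placeholder collection endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(answer),
  });
}

// Ask only after the relevant event; never interrupt the task itself.
function onExportAbandoned(userId: string): void {
  const choice = window.prompt("What stopped you from exporting? (format / size limit / speed / other)");
  if (!choice) return; // declining must always be fine
  const why = window.prompt("Why? (optional)") ?? undefined;
  void submitMicroPoll({ question: "export_abandoned", choice, why, userId });
}
```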
Questions to answer
Anchor questions to Tomer Sharon’s spine so each response maps to a decision.
Problem: “What’s the most frustrating part of X right now?”
Alternatives: “How do you solve this today?”
Willingness to pay: “What feels like a fair starting price—and why?”
Pass/fail signals
Pass: Clear theme concentration, consistent buyer‑language from target segments, and metric lift after shipping fixes (e.g., higher CTA engagement or demo requests).
Fail: Low response, scattered themes, answers dominated by non‑targets, and no behavioral improvement after iteration.
Tools to try
Koala Feedback: Centralize responses, dedupe, categorize, and tie to a public roadmap.
Google Forms / SurveyMonkey: Quick pulses to lists or cohorts.
In‑app prompts: Simple, event‑based micro‑polls routed into Koala for synthesis.
6. Validate demand with paid pilots, deposits, or LOIs
The cleanest way to prove willingness to pay is to ask for money—or a credible commitment. Paid pilots, refundable deposits (pre‑orders), and letters of intent (LOIs) turn “sounds cool” into contracts, cash, or calendar time. They’re powerful for validating product ideas because they force prioritization on the buyer’s side and give you concrete success criteria.
What it is
A low‑risk commercial test that exchanges value before the full product ships. Options include a time‑boxed paid pilot, a refundable deposit to reserve access, or a lightweight LOI spelling out price, scope, timeline, and success metrics. Treat deposits like pre‑orders—an established signal for demand before launch.
How to run it
Start with a narrow offer and clear outcomes, then make committing dead simple.
Define a 30–60 day pilot: scope, outcomes, success metrics, price anchor, and refund/cancellation terms.
Create lightweight paperwork: a one‑page pilot brief and an LOI template; add a payment link for deposits or pilot fees.
Target the right prospects: your ICP from discovery calls, waitlists, and engaged voters/commenters from Koala Feedback.
Run a focused sales loop: discovery → proposal → ask for a pilot fee or deposit; if not feasible, request an LOI with a start date.
Instrument the funnel: track offers_made, pilots_won, deposits, LOIs, cycle_time, and reasons_lost. Pipe pilot feedback into Koala and update your public roadmap to close the loop.
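A tiny tracker makes that last step concrete. In this TypeScript sketch the field names mirror the metrics above; the data shapes and sample numbers are illustrative.

```typescript
// Funnel tracker for the commercial test.
interface PilotFunnel {
  offersMade: number;
  pilotsWon: number;
  deposits: number;
  lois: number;
  cycleTimesDays: number[]; // proposal-to-commitment, one entry per win
  reasonsLost: string[];    // verbatim objections to route into Koala
}

function summarize(f: PilotFunnel) {
  const commitments = f.pilotsWon + f.deposits + f.lois;
  const avgCycleDays =
    f.cycleTimesDays.reduce((a, b) => a + b, 0) / Math.max(f.cycleTimesDays.length, 1);
  return {
    commitRate: commitments / Math.max(f.offersMade, 1), // commitments per offer made
    avgCycleDays,
    topObjections: f.reasonsLost.slice(0, 3),
  };
}

console.log(summarize({
  offersMade: 20, pilotsWon: 3, deposits: 2, lois: 2,
  cycleTimesDays: [9, 14, 21], reasonsLost: ["security review", "no budget until Q3"],
}));
```

Even a tracker this small keeps the pass/fail call objective: commit rate, cycle time, and top objections come straight from the data.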
Questions to answer
Will economic buyers allocate budget now—and for what outcome?
What scope and timeline reduce risk enough to say yes?
What price anchors feel credible and fair?
What legal, security, or integration blockers appear?
Pass/fail signals
Pass: Multiple paid pilots or deposits from your target segment; LOIs with clear scope/price/timeline; short cycle time from proposal to commitment; pre‑orders converting in the 10–20% range or waitlists converting up to ~5% when deposits open.
Fail: Verbal interest without payment or LOI; procurement stalls with no next steps; commitments from non‑ICP buyers; high refund or churn after pilot.
Tools to try
Koala Feedback to recruit engaged prospects, capture pilot feedback, and communicate status on a public roadmap.
Payment links/invoicing to collect deposits or pilot fees.
E‑signature for LOIs and one‑page pilot briefs.
A simple CRM or spreadsheet to track conversion metrics and cycle time.
7. Prototype with concierge or Wizard-of-Oz MVPs
When code is the bottleneck, simulate the outcome by doing the work manually. Concierge and Wizard‑of‑Oz MVPs let you deliver the promised value with humans and lightweight tools, giving you fast evidence for validating product ideas before you automate anything.
What it is
A concierge MVP is a high‑touch, manual service that achieves the user’s desired outcome. A Wizard‑of‑Oz MVP looks automated on the surface, but humans do the heavy lifting behind the scenes. Both expose real workflows, pricing levers, and deal blockers without writing full product code.
How to run it
Scope it to one segment and one job, then instrument every step. Treat it like a time‑boxed pilot with explicit success criteria and a clear price anchor or deposit.
Define the job/outcome and guardrails (data, privacy, turnaround time).
Build the “front stage” (simple form or demo UI) and your “back stage” (scripts, spreadsheets); a minimal front‑stage sketch follows this list.
Recruit from discovery calls, your waitlist, and Koala Feedback voters.
Deliver the outcome manually; narrate trade‑offs and capture friction.
Ask for a next‑step commitment (paid pilot, refundable deposit, or LOI) and log all insights.
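To make the front‑stage/back‑stage split concrete, here’s a minimal Wizard‑of‑Oz sketch using Node’s built‑in http module. The /api/generate-report endpoint, the queue file, and the 30‑minute ETA are assumptions; the point is that the user sees a normal product response while a human works the queue.

```typescript
// Wizard-of-Oz "front stage": looks automated to the user; a human
// operator works the queue file behind it.
import { createServer } from "node:http";
import { appendFileSync } from "node:fs";

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/api/generate-report") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // Back stage: append the job to a file a human works through.
      appendFileSync("woz-queue.jsonl", JSON.stringify({ body, ts: Date.now() }) + "\n");
      res.writeHead(202, { "Content-Type": "application/json" });
      // The user sees a normal async product response, not the manual process.
      res.end(JSON.stringify({ status: "processing", etaMinutes: 30 }));
    });
  } else {
    res.writeHead(404).end();
  }
}).listen(3000);
```

Because the response is honestly asynchronous, you can quote a turnaround a human can actually meet, then shorten it as your back stage improves.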
Questions to answer
Center on behavior, not opinions, and tie findings back to scope and willingness to pay.
What steps are essential vs. skippable to deliver the outcome?
Where do errors, delays, or handoffs occur?
What integrations or data sources are truly required?
Will targets commit budget or a deposit after seeing value?
Pass/fail signals
Pass if users return, escalate, or pay; fail if the outcome doesn’t matter enough to warrant commitment or if delivery dependencies make the job non‑viable.
Pass: Repeat usage or purchase intent from the target segment; agreement to a paid pilot, deposit, or LOI; shorter turnaround times over iterations; clear must‑have requirements emerge.
Fail: One‑and‑done trials; “nice to have” feedback without budget; blockers you can’t mitigate (security, compliance, missing data).
Tools to try
Use simple, scrappy tools for speed and route all learnings into your feedback system.
Koala Feedback: Recruit engaged users, tag pains/outcomes, and reflect status on a public roadmap.
Forms + spreadsheets: Collect inputs and run the back office.
Scheduling + payment links: Make commitments (sessions, deposits) effortless.
8. Run five-user usability tests on clickable prototypes
Before you ship code, put your flow in front of five target users and watch them try to accomplish a real task. Clickable prototypes surface confusing copy, dead‑ends, and missing affordances in an afternoon—one of the fastest ways of validating product ideas and de‑risking your MVP user experience.
What it is
A lean usability study using a high‑fidelity clickable prototype. You give participants realistic tasks (e.g., “connect billing,” “create first project”), observe where they struggle, and capture both behavioral and verbal data to fix the biggest blockers before development.
How to run it
Keep it scrappy, consistent, and focused on one core job per session.
Pick the task and flow: One segment, one primary outcome.
Build the prototype: Link screens for the happy path plus a few realistic detours.
Recruit five targets: From discovery calls, your waitlist, or Koala Feedback voters.
Moderate neutrally: Ask them to think aloud; don’t coach. Record screen + audio.
Score and synthesize: Track task success rate (successes / attempts), time‑to‑first‑action, and critical issues; tag themes and feed them into Koala.
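If a scoring helper is useful, here’s a minimal TypeScript sketch. The Session shape is an assumption, and the median is the crude middle value a five‑user test gives you.

```typescript
// Scoring a five-user test from session logs.
interface Session {
  participant: string;
  succeeded: boolean;           // completed the task unaided
  secondsToFirstAction: number; // hesitation before the first meaningful click
  criticalIssues: string[];     // blockers observed, tagged for synthesis
}

function score(sessions: Session[]) {
  const successes = sessions.filter((s) => s.succeeded).length;
  const times = sessions.map((s) => s.secondsToFirstAction).sort((a, b) => a - b);
  return {
    taskSuccessRate: successes / sessions.length,      // successes / attempts
    medianTimeToFirstAction: times[Math.floor(times.length / 2)],
    issues: sessions.flatMap((s) => s.criticalIssues), // feed these into Koala
  };
}
```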
Questions to answer
Your aim is to prove users can reach value quickly—and see where they can’t.
Where do users hesitate, backtrack, or abandon?
Which labels, steps, or settings create errors?
Can they complete the task without help?
What copy or UI change would remove the next blocker?
Pass/fail signals
Make a call after one iteration; then retest quickly.
Pass: Majority complete the task unaided; sharp drop in critical issues; faster time‑to‑first‑action; participants can explain the value in their own words.
Fail: Repeated stalls at the same step; reliance on moderator hints; contradictory mental models across participants; no improvement after fixes.
Tools to try
Use simple tooling; invest the effort in observation and iteration.
A prototyping tool: Build the clickable flow you need to test.
Screen recording + calendar/VC: Run and capture remote sessions.
A simple tracker: Log issues, severity, and fixes.
Koala Feedback: Centralize usability findings, dedupe themes, prioritize fixes, and communicate status on your public roadmap.
9. Test pricing and packaging with lean methods
Pricing and packaging feel like late‑stage decisions, but for SaaS they’re core to validating product ideas. Lean price tests trade hypotheticals for small, reversible commitments—pre‑orders, pilot fees, and plan choices—so you learn what buyers will actually accept and why.
What it is
A sequence of quick experiments that gauge willingness to pay and which “value fences” (tiers, limits, add‑ons) resonate with each segment. You compare anchored price points and simple plan grids, then watch real behavior, not surveys alone.
How to run it
Start with one ICP and one outcome, then ladder from intent to commitment.
Anchor on a page: Create landing variants with two or three price anchors and a simple three‑tier grid. Track PriceConv% = orders / unique_visitors and PlanMix% = plan_orders / total_orders; a computation sketch follows this list.
Ask for commitment: Offer a refundable deposit/pre‑order or a 30–60 day paid pilot at the anchored price.
Package the value: Test which features belong in Core vs. Pro vs. Add‑on by prompting prospects to pick a plan for a real task; capture objections in free‑text.
Use sales calls deliberately: Present two anchored options and listen for the trade‑offs they choose, then ask for a pilot fee or LOI.
Close the loop: Push verbatims and objections into Koala Feedback; update your roadmap if packaging needs to shift.
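The two metrics from the first step compute directly from counts. Here’s a minimal TypeScript sketch with illustrative plan names and numbers.

```typescript
// Computing PriceConv% and PlanMix% from simple counts.
interface PricingTest {
  uniqueVisitors: number;
  ordersByPlan: Record<string, number>; // e.g. { core: 12, pro: 5 }
}

function pricingSignals(t: PricingTest) {
  const totalOrders = Object.values(t.ordersByPlan).reduce((a, b) => a + b, 0);
  const planMix = Object.fromEntries(
    Object.entries(t.ordersByPlan).map(([plan, n]) => [plan, n / Math.max(totalOrders, 1)])
  );
  return {
    priceConv: totalOrders / t.uniqueVisitors, // PriceConv% = orders / unique_visitors
    planMix,                                   // PlanMix% = plan_orders / total_orders
  };
}

// 17 orders from 400 visitors: priceConv ≈ 0.0425; planMix shows concentration.
console.log(pricingSignals({ uniqueVisitors: 400, ordersByPlan: { core: 12, pro: 5 } }));
```

A concentrated planMix (one clear winner) without a collapse in priceConv is exactly the pass signal described below.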
Questions to answer
Which price feels credible for the promised outcome—and why?
What feature gates make a higher tier feel worth it?
Monthly vs. annual: which cadence wins for this segment?
What procurement, security, or seat constraints cap the deal?
Pass/fail signals
Pass: Deposits or pre‑orders convert in the 10–20% range; a pre‑launch waitlist later converts up to ~5% when pricing is revealed; plan selection concentrates (clear winner) without tanking overall interest; buyers accept a pilot fee at the anchored price.
Fail: Interest collapses when price is shown; scattered plan selection with “none fit me” objections; verbal enthusiasm without deposits, fees, or LOIs—even after one iteration.
Tools to try
Koala Feedback: Centralize price/packaging objections, dedupe themes, and reflect changes on your public roadmap.
Landing/CMS + analytics: Ship variants and track conversions.
Survey tools: Quick pulses to rank features by tier and capture “why.”
10. Instrument MVP analytics for activation and retention
If interviews and smoke tests tell you what people say, activation and retention show what they do. For validating product ideas, your MVP needs just enough instrumentation to prove users reach first value and come back. Keep the model lean, measure the same way every week, and close the loop with qualitative feedback.
What it is
A minimal event schema and a few core metrics that track the journey from sign‑up to “aha,” ongoing use, and churn. You don’t need a data warehouse—just consistent events, cohort views, and a way to tie numbers to user comments so you know what to fix next.
Qual + quant loop: Trigger micro‑polls at drop‑offs and route verbatims into Koala.
How to run it
Define the one job your MVP must deliver, make that the activation event, and instrument from day one. Run compact cohorts and iterate until signals stabilize. A minimal schema‑and‑metrics sketch follows these steps.
Set the activation event: e.g., “imports first dataset” or “sends first invoice.”
Set gates: Run small tests until you log ~100 conversion events; compare cohorts before/after changes.
Diagnose with context: Trigger a one‑question micro‑poll at the biggest drop‑off; ship a fix; re‑measure.
Operationalize learning: Pipe themes into Koala Feedback, update statuses on your public roadmap, and notify users.
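Here’s a minimal sketch of the schema and two metrics in TypeScript. The event names follow the examples above; the strict day‑7 window and everything else are assumptions to adapt to your own definitions.

```typescript
// Minimal event schema plus activation and day-7 retention from raw events.
interface AppEvent {
  userId: string;
  name: "signed_up" | "first_value" | "session_started";
  ts: number; // epoch milliseconds
}

const DAY = 24 * 60 * 60 * 1000;

function activationRate(events: AppEvent[]): number {
  const signups = new Set(events.filter((e) => e.name === "signed_up").map((e) => e.userId));
  const activated = new Set(events.filter((e) => e.name === "first_value").map((e) => e.userId));
  let hit = 0;
  for (const u of signups) if (activated.has(u)) hit++;
  return hit / Math.max(signups.size, 1);
}

function d7Retention(events: AppEvent[]): number {
  const signupAt = new Map(
    events.filter((e) => e.name === "signed_up").map((e) => [e.userId, e.ts] as [string, number])
  );
  let retained = 0;
  for (const [user, t0] of signupAt) {
    // Strict day-7 window: a session on the seventh day after sign-up.
    const cameBack = events.some(
      (e) => e.userId === user && e.name === "session_started" &&
             e.ts - t0 >= 7 * DAY && e.ts - t0 < 8 * DAY
    );
    if (cameBack) retained++;
  }
  return retained / Math.max(signupAt.size, 1);
}
```

Measuring the same way every week matters more than the exact window you choose; pick a definition and keep it stable across cohorts.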
Questions to answer
Are users reaching first value quickly? What blocks them?
Which steps correlate with retention or upgrade?
Which segment/cohort retains, which churns, and why (in their words)?
What change moves activation or D7 retention the most?
Pass/fail signals
Pass: Clear movement in activation after fixes; repeat usage in the 10–15% range before scale; cohorts stabilize or improve; qualitative feedback aligns with the metrics.
Fail: Most sign‑ups never hit first_value; D7/D30 retention collapses; events are missing or inconsistent; no improvement after an iteration.
Tools to try
Your product analytics + simple cohort sheets: Track events, funnels, and retention.
In‑app micro‑polls: Capture “why” at drop‑offs and route to Koala.
Koala Feedback: Centralize verbatims, dedupe themes, prioritize fixes, and communicate progress on a public roadmap.
Putting it all together
Lean validation is a loop, not a launch event. Centralize signals, frame your riskiest assumptions with Tomer Sharon’s questions, then march through interviews, smoke tests, micro‑polls, paid pilots, concierge/WoZ runs, five‑user tests, pricing checks, and MVP analytics. Set gates up front—log ~100 conversion events, look for pre‑orders in the 10–20% range, convert up to ~5% of waitlists, and seek early repeat usage in the 10–15% band. If a test misses the mark after one iteration, cut it or change the segment; if it clears, double down.
Make the loop visible so customers help you steer. Open a shared backlog, publish a lightweight roadmap, and close the loop every time you ship. If you want a simple way to operationalize the whole flow, set up your feedback portal, prioritization boards, and public roadmap with Koala Feedback and turn scattered opinions into confident product bets.