
What Is the MoSCoW Prioritization Method? Steps & Examples

Lars Koole
·
October 9, 2025

MoSCoW prioritization is a simple, no‑nonsense way to decide what gets built first. It groups work into four buckets—Must have, Should have, Could have, and Won’t have (this time)—so teams can agree on what’s non‑negotiable and what can wait for a later release. Used for a defined timebox or budget, it replaces vague “high/medium/low” labels with clear delivery expectations: a missing Must sinks the release, Shoulds add strong value but can slip, Coulds are contingency, and Won’ts set boundaries to curb scope creep.

This guide shows you exactly how to use the method in agile and product management. You’ll learn its origins, the decision rules for each category, when MoSCoW fits best, and how to run a step‑by‑step workshop. We’ll cover allocation and timeboxing (including the common 60/20 guidance), pairing MoSCoW with scoring models and data, handling dependencies and risk, stakeholder facilitation tips, common pitfalls, and real‑world examples with templates you can copy. We’ll also walk through applying MoSCoW in Koala Feedback and how it stacks up against frameworks like RICE, ICE, and Kano.

Origins of the MoSCoW method and where it fits in agile

MoSCoW was created by software development expert Dai Clegg while at Oracle and later formalized in the Dynamic Systems Development Method (DSDM) handbook. It spread quickly beyond engineering into project management and business analysis because it gives cross‑functional teams a shared, practical language for priority. Rather than arguing over 1–N rankings, stakeholders align on delivery promises that map to real constraints like time and budget.

In agile, MoSCoW shines wherever timeboxing is used. DSDM fixes time, cost, and quality, then negotiates features—MoSCoW is the mechanism for that negotiation. It defines a Minimum Usable SubseT (MUST) for a viable release, lets Shoulds and Coulds add value as capacity allows, and uses Won’ts to prevent scope creep. You can apply it at multiple levels—project, release/increment, and sprint/timebox—so teams protect deadlines while keeping flexibility to optimize value.

MoSCoW categories and decision rules

The MoSCoW prioritization method turns vague “priority” debates into delivery commitments. Classify each item by asking: what happens if this isn’t delivered in this timeframe? If the release becomes non-viable (or illegal/unsafe), it’s a Must. If a painful but acceptable workaround exists, it’s a Should or Could. Used in timeboxed planning, these categories define the Minimum Usable SubseT (MUST) that guarantees a viable release while making room for value-add work and explicit de-scopes.

  • Must have: Non‑negotiable for a viable solution; without it, you’d cancel or delay the release. Covers legal/safety/compliance and core functionality. Defines the Minimum Usable SubseT (MUST). A Must must not depend on a non‑Must.

  • Should have: Important but not vital. Omission may hurt user experience or efficiency, but a temporary workaround exists. Often deferred without breaking the release.

  • Could have: Desirable, lowest impact if omitted. Forms the main contingency pool and is dropped first if time/budget slip.

  • Won’t have (this time): Explicitly out of scope for this timeframe. Manages expectations and prevents scope creep; may be reconsidered later.

When torn between Should vs Could, agree on objective thresholds in advance (e.g., number of users affected, revenue at stake). You can also split acceptance criteria by priority, for example: "service restoration Should happen within 4 hours and Must happen within 24 hours."

When to use the MoSCoW prioritization method

Use the MoSCoW prioritization method when you need clear trade‑off decisions under real constraints—time, budget, or capacity—and a shared promise about what will (and won’t) ship. It replaces fuzzy “high/medium/low” labels with delivery commitments, helping teams preserve deadlines while protecting a viable Minimum Usable SubseT and managing expectations across stakeholders.

  • Timeboxed planning: Releases, increments, or sprints where time is fixed and features must flex (per DSDM).
  • Tight budgets or headcount: When you must select the highest‑value scope the team can actually complete.
  • Skill or capacity limits: When available expertise constrains what can be built this cycle.
  • Competing priorities: Portfolio clashes or parallel initiatives that force explicit trade‑offs.
  • Defining an MVP/MUST set: Early in a project to establish a viable baseline and defer non‑essentials.
  • Mid‑flight replanning: When risk or scope creep threatens the date and you need controlled de‑scoping.
  • Organization‑wide alignment: To involve stakeholders and set expectations with Won’t‑have (this time) items.
  • Multi‑level roadmapping: Applying priorities at project, release, and timebox to keep plans coherent.

How to run a MoSCoW workshop step-by-step

A well-run MoSCoW workshop turns a messy backlog into clear delivery commitments. Timebox the session, bring the right stakeholders, and ground the conversation in the actual constraints of your release or project. Use this repeatable agenda to apply the MoSCoW prioritization method with consistency and speed.

  1. Frame the scope and constraints: define the timeframe, budget/capacity, and business objectives.
  2. Assemble candidates: bring decomposed items with context (users impacted, value, risks, effort ranges, and any supporting feedback or data).
  3. Align on decision rules: agree what qualifies as a Must (“cancel or delay without it”), how to separate Should vs Could, and that a Must cannot depend on a non‑Must.
  4. Silent review and pre‑sort: let participants scan items and surface likely Musts to focus discussion.
  5. Debate by exception: for each proposed Must, ask “What happens if it’s not delivered?” and “Is there a workaround?” Move items down if workarounds exist.
  6. Classify decisively: place each item into Must, Should, Could, or Won’t (this time) and record the rationale and assumptions.
  7. Tag dependencies: note upstream/downstream links and flag any Musts that rely on non‑Musts for immediate correction.
  8. Prioritize acceptance criteria where needed: set mixed thresholds (e.g., “Should restore in 4 hours; Must in 24 hours”).
  9. Sanity‑check capacity: ensure the emerging Must set looks deliverable within the timebox; defer details of allocations to the next step.
  10. Publish and review: share the prioritized list with stakeholders, highlight Won’ts to manage expectations, and schedule reviews at each timebox/increment or when new work appears.

Setting allocations and timeboxing (the 60/20 guidance)

Timeboxing only works if you cap how much “Must” you plan. DSDM guidance is clear: keep Must‑have effort to no more than 60% of the timeframe’s capacity and set aside around 20% for Could‑haves as contingency; the remainder goes to Should‑haves. Won’t‑haves are excluded from effort calculations. This spread protects the Minimum Usable SubseT and gives you room to absorb unknowns without slipping dates. If you push Musts above 60%, risk rises unless estimates are very accurate, the approach is well understood, the team is performing, and external risks are low.

  • Quantify capacity: Convert the timebox into effort (e.g., points, days, or person‑weeks).
  • Cap Musts: Plan Must_capacity ≤ total_capacity × 0.60. A Must must not depend on a non‑Must.
  • Reserve contingency: Allocate Could_capacity ≈ total_capacity × 0.20 as your first drop line if pressure mounts.
  • Fill with Shoulds: Use the remaining capacity for high‑value Should‑haves that improve outcomes without jeopardizing the date.
  • Apply at every level: Set allocations for project, increment/release, and each timebox to keep plans coherent.
  • Monitor and rebalance: Track burn‑down/burn‑up; if Musts expand, immediately down‑scope Coulds (then Shoulds) rather than extending the timebox.

Example: with 100 units of capacity, target ≤60 for Musts, ~20 for Coulds, and ~20 for Shoulds. Reassess these percentages at each review point as new information emerges.
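The allocation arithmetic above is simple enough to sketch in a few lines. This is a minimal illustration of the 60/20 split, not a tool the DSDM handbook prescribes; the function name and return shape are our own.

```python
def moscow_allocation(total_capacity: float) -> dict:
    """Split a timebox's capacity per the DSDM 60/20 guidance:
    Musts capped at 60%, ~20% reserved for Coulds as contingency,
    and the remainder available for Shoulds."""
    must_cap = total_capacity * 0.60
    could_reserve = total_capacity * 0.20
    should_cap = total_capacity - must_cap - could_reserve
    return {"must": must_cap, "could": could_reserve, "should": should_cap}

print(moscow_allocation(100))  # {'must': 60.0, 'could': 20.0, 'should': 20.0}
```

If your team estimates in story points or person-days, pass that total in directly; the percentages are review-point defaults, not fixed laws.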

Pairing MoSCoW with scoring models and data

The MoSCoW prioritization method is great for setting delivery promises, but it gets sharper when you reduce subjectivity with a scoring model and real data. Use a simple, consistent rubric—weighted scoring, value vs. complexity, ICE, RICE, Kano, or opportunity scoring—to quantify value and cost, then use MoSCoW to translate scores into timeboxed commitments.

  • Recommended flow:

    • Score the backlog with your chosen model using agreed criteria.
    • Apply MoSCoW decision rules to set Must/Should/Could/Won’t for the timeframe; sort within each bucket by score.
    • Add a confidence check; high uncertainty can lower priority or trigger a discovery spike.
  • Data to bring to the room:

    • Users affected and segments: vote counts and request volume (e.g., from Koala Feedback), weighted by strategic segments.
    • Business impact: revenue/ARR influenced, retention risk, competitive parity.
    • Risk and obligations: legal/compliance/SLA flags.
    • Cost to deliver: effort/complexity estimates and key dependencies.
    • Confidence level: strength of evidence behind assumptions.

This pairing preserves MoSCoW’s clarity while anchoring decisions in measurable impact and capacity constraints.
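To make the recommended flow concrete, here is a small sketch of scoring-then-bucketing with invented backlog items and a made-up confidence threshold. The RICE scores, item names, and the 0.6 cutoff are illustrative assumptions, not outputs of any real scoring session.

```python
# Hypothetical backlog: each item carries a pre-computed RICE score,
# a confidence level, and a MoSCoW category set via the decision rules.
backlog = [
    {"name": "SSO (SAML)", "rice": 420, "confidence": 0.9, "moscow": "must"},
    {"name": "Audit log filters", "rice": 310, "confidence": 0.8, "moscow": "should"},
    {"name": "Usage analytics", "rice": 280, "confidence": 0.5, "moscow": "should"},
    {"name": "Dark mode", "rice": 150, "confidence": 0.7, "moscow": "could"},
]

LOW_CONFIDENCE = 0.6  # assumed threshold: below this, flag a discovery spike

def sequence(items):
    """Group items by MoSCoW bucket, then sort within each bucket by score."""
    buckets = {"must": [], "should": [], "could": [], "wont": []}
    for item in items:
        buckets[item["moscow"]].append(item)
    for bucket in buckets.values():
        bucket.sort(key=lambda i: i["rice"], reverse=True)
    return buckets

for bucket, items in sequence(backlog).items():
    for item in items:
        flag = " (needs discovery spike)" if item["confidence"] < LOW_CONFIDENCE else ""
        print(f"{bucket}: {item['name']}{flag}")
```

The key design point: the score decides order *within* a bucket, while the MoSCoW rules alone decide which bucket an item lands in.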

Stakeholders, roles, and facilitation tips

MoSCoW works best when the right people are making the right calls. Before you start, name decision owners, clarify an escalation path, and give everyone the same rules for Must, Should, Could, and Won’t. Keep the discussion grounded in the timebox and business objectives; require clear rationale for every promotion into the current timeframe.

  • Business Sponsor: Funds the initiative; arbitrates escalations and scope trade‑offs.
  • Business Visionary: Owns vision/ROI; explains and defends why something is a Must.
  • Business Ambassador: Brings the user view; empowered for day‑to‑day decisions.
  • Project Manager + Business Analyst: Enforce timebox rules; capture rationale, dependencies, and assumptions.
  • Solution Development Team: Estimate effort, surface risks; ensure no Must depends on a non‑Must.

Facilitation tips

  • Start from Won’ts: Promote items only with evidence and consensus.
  • Use the “cancel/stop deployment?” test to validate true Musts.
  • Pre‑agree thresholds that separate Should vs Could (e.g., users affected).
  • Cap Musts at ≤60% effort; reserve ~20% Coulds as contingency.
  • Review every timebox/increment; publish the rationale and explicit Won’ts to manage expectations.

Dealing with constraints, dependencies, and risk

Real projects are constrained by time, budget, skills, and cross‑team commitments. The MoSCoW prioritization method gives you the language for trade‑offs; the execution comes from how you expose dependencies, protect the release from risk, and pre‑plan what drops when pressure mounts. Treat constraints as first‑class inputs to your timebox, and make dependency and risk management a standing part of every MoSCoW review.

  • Make constraints explicit: Quantify time, budget, capacity, and key skill gaps. Keep Must effort ≤60% and reserve ~20% Coulds as contingency to absorb unknowns.
  • Enforce the dependency rule: A Must must not depend on a non‑Must. Either promote the dependency to Must, split the work to isolate the Must acceptance, or defer.
  • Map dependencies early: Maintain a simple dependency matrix; tag upstream/downstream links on every item and flag cross‑team or vendor handoffs.
  • Sequence by risk, not comfort: Pull highest‑uncertainty items forward; timebox discovery spikes to reduce risk before committing full scope.
  • Auto‑classify obligations: Legal, safety, and compliance needs default to Must; capture the rationale to avoid backsliding.
  • Stabilize external dependencies: Secure service‑level commitments; if you can’t, treat them as risks and avoid placing them on the critical Must path.
  • Pre‑agree drop order: Within Shoulds/Coulds, rank the exact cut line to protect the Minimum Usable SubseT when estimates shift.
  • Track assumptions and triggers: Record the assumptions behind each classification and define triggers for re‑prioritization.
  • React without slipping dates: When risk materializes, re‑estimate and immediately de‑scope Coulds (then Shoulds) rather than extending the timebox.
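The dependency rule lends itself to an automated check. Below is a minimal sketch over an invented dependency map (item names and categories are hypothetical) that flags any Must with a direct non-Must upstream, so you can promote, split, or defer before the timebox starts.

```python
# Hypothetical dependency map: item -> (moscow category, upstream dependencies).
items = {
    "sso": ("must", ["sdk_core"]),
    "sdk_core": ("could", []),    # rule violation: a Must depends on a Could
    "promo_codes": ("could", []),
}

def must_dependency_violations(items):
    """Return (must_item, offending_dependency) pairs where a Must
    depends directly on an item that is not itself a Must."""
    violations = []
    for name, (category, deps) in items.items():
        if category != "must":
            continue
        for dep in deps:
            if items[dep][0] != "must":
                violations.append((name, dep))
    return violations

print(must_dependency_violations(items))  # [('sso', 'sdk_core')]
```

A fuller version would walk transitive dependencies, but even this direct check catches the most common planning mistake.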

Common pitfalls to avoid

Even simple frameworks fail when teams skip the basics. The MoSCoW prioritization method works best when you treat categories as delivery promises, not opinions, and keep them aligned with real constraints, evidence, and governance. Watch for these traps and you’ll preserve predictability without sacrificing value.

  • Overstuffing Musts: Exceeding the ~60% Must effort guidance leaves no contingency and raises failure risk.
  • Using MoSCoW as a rank order: It’s not 1–N; it’s a commitment to what will ship in this timeframe.
  • Skipping objective scoring: Inconsistent, subjective calls creep in without a simple scoring rubric.
  • Excluding key stakeholders: Missing perspectives leads to misclassified Musts/Shoulds and surprise escalations.
  • Letting bias win: Personal favorites distort categories when evidence is thin.
  • Ignoring dependencies: A Must that depends on a non‑Must breaks when pressure mounts.
  • Vague Should vs Could rules: Without clear thresholds (e.g., users affected), debates drag and decisions wobble.
  • Set‑and‑forget priorities: Failing to revisit categories each timebox invites scope creep and hidden risk.
  • Not capturing rationale: Decisions are hard to defend or adjust without recorded assumptions and evidence.

Real-world examples and templates you can copy

Seeing the MoSCoW prioritization method in context makes the rules easier to apply. Use these quick scenarios as patterns: anchor Musts to viability, legal, or safety; keep Shoulds valuable but deferrable; treat Coulds as contingency; and use Won’ts to set boundaries. Capture rationale, dependencies, and drop order so the team knows exactly what flexes when pressure hits.

  • SaaS release, enterprise push: Musts: SSO (SAML), export of personal data for compliance. Shoulds: audit log filters, usage analytics drill‑downs. Coulds: dark mode, in‑app tips. Won’t (this time): pricing page revamp. Rationale: missing SSO blocks deals; analytics has a workaround via CSV.
  • Mobile payments expansion: Musts: PCI compliance updates, core SDK integration. Shoulds: Apple Pay/Google Pay. Coulds: promo codes. Won’t: loyalty tiers. Dependency rule: promote any Must‑blocking SDK work to Must; leave promo codes as first drop.
  • Internal IT migration: Musts: SSO cutover, daily backups meeting RPO/RTO (e.g., Must within 24h, Should within 4h). Shoulds: role‑based access. Coulds: self‑service portal. Won’t: BI dashboards in this wave.

Use this one‑pager template in your planning doc or board:

Item | Category (M/S/C/W) | Why now? | Evidence | Dependencies | Effort (S/M/L) | Notes/Drop order
SSO (SAML 2.0) | M | Blocks enterprise contracts | 5 deals, compliance | IdP config | M | Protect; drop "dark mode" first

Copy the structure for every item, keep Must effort ≤60%, reserve ~20% for Coulds, and publish Won’ts to manage expectations.
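If you track the template in a tool rather than a doc, the same one-pager maps naturally to structured records. This sketch (field names and example rows are our own, not a Koala Feedback schema) shows how a pre-agreed drop order makes the cut line mechanical when estimates shift.

```python
# Hypothetical rows mirroring the one-pager template. Musts carry no
# drop_order because they are protected, never dropped.
rows = [
    {"item": "SSO (SAML 2.0)", "category": "M", "drop_order": None,
     "why_now": "Blocks enterprise contracts", "evidence": "5 deals, compliance"},
    {"item": "Dark mode", "category": "C", "drop_order": 1},
    {"item": "Audit log filters", "category": "S", "drop_order": 2},
]

def next_to_drop(rows):
    """Return the Should/Could row with the lowest drop_order
    (i.e., the first item cut when capacity tightens)."""
    droppable = [r for r in rows if r["category"] in ("S", "C")]
    return min(droppable, key=lambda r: r["drop_order"])

print(next_to_drop(rows)["item"])  # Dark mode
```

Coulds should generally occupy the lowest drop_order values, matching their role as the contingency pool that goes first.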

How to apply MoSCoW in Koala Feedback

Koala Feedback makes the MoSCoW prioritization method actionable by turning raw ideas into clear delivery commitments and a public plan. You’ll centralize feedback, classify it as Must/Should/Could/Won’t for a specific timeframe, and communicate decisions on a transparent roadmap—so users see what’s coming and why.

  1. Create a Prioritization Board with four lanes: Must, Should, Could, Won't (this time). Use it as the single sorting surface for your release or quarter.
  2. Pipe ideas from the Feedback Portal. Koala auto‑deduplicates and categorizes, so related requests roll up to the same card.
  3. Enrich every item: link user votes and comments, add a short rationale ("What happens if we don't ship this now?"), and note any compliance obligations. Capture obvious dependencies in the description.
  4. Classify with discipline: promote to Must only if the release isn't viable without it (or it's legal/safety critical). Keep Coulds as your contingency pool.
  5. Timebox the decision: tag the items you plan to attempt this cycle and move selected Must/Should work into "Planned" on the Public Roadmap. Keep Won'ts visible with a custom status like "Not planned (this time)" to manage expectations.
  6. Communicate and close the loop: publish updates as items move to "In Progress" and "Completed," and use comments to explain trade‑offs (e.g., when a Could drops to protect Musts). Revisit the board each cycle to rebalance based on new feedback and capacity.

This flow preserves a lean Minimum Usable SubseT, uses votes to anchor value, and leverages Koala’s public roadmap and custom statuses to keep everyone aligned.

MoSCoW vs other prioritization frameworks

MoSCoW isn’t a scoring model—it’s a commitment model. Where most frameworks rank ideas by “how valuable,” MoSCoW answers “what ships now” within a fixed timeframe and what drops first if pressure rises. That’s why it pairs well with numeric or research‑driven methods that reduce subjectivity before you translate scores into Must/Should/Could/Won’t.

  • MoSCoW vs Weighted Scoring: Weighted scoring ranks options against agreed criteria; MoSCoW converts those rankings into delivery promises for a specific timebox. Use scoring to inform, MoSCoW to commit.

  • MoSCoW vs Value vs Complexity (2×2): A 2×2 highlights quick wins and high‑value bets; MoSCoW protects deadlines by capping Musts and reserving contingency. Use 2×2 to shortlist, MoSCoW to finalize scope.

  • MoSCoW vs RICE/ICE: These produce a stack‑ranked backlog; MoSCoW creates buckets with explicit cut lines. Sort within each MoSCoW bucket by RICE/ICE to decide sequence.

  • MoSCoW vs Kano: Kano explains customer satisfaction dynamics; MoSCoW decides timing. Map Kano insights into MoSCoW (e.g., obligation items tend to be Musts), then schedule accordingly.

  • MoSCoW vs Eisenhower matrix: Eisenhower (urgent/important) is great for operational triage; MoSCoW is better for release planning and scope negotiation.

  • MoSCoW vs Buy‑a‑Feature/Opportunity scoring: These help discover what customers value; MoSCoW turns that evidence into a timeboxed plan and clear Won’ts to prevent scope creep.

Bottom line: use scoring and research to decide “best bets,” then use the MoSCoW prioritization method to make reliable, timeboxed delivery commitments stakeholders can trust.

Key takeaways

MoSCoW turns priority debates into delivery commitments. By fixing time and flexing scope, teams protect a viable release while managing expectations with explicit Won’ts. Follow clear decision rules, cap Must effort, pair with data, and revisit often to stay predictable. To make this workflow transparent from feedback to roadmap updates, use Koala Feedback to collect votes, classify Must/Should/Could/Won’t, and communicate changes without surprises.

  • Musts define the Minimum Usable SubseT; keep Must effort ≤60%.
  • Hold ≈20% as Coulds (contingency); the rest goes to Shoulds.
  • No Must may depend on a non‑Must—promote, split, or defer.
  • Pair with simple scoring (RICE/ICE/weighted) to cut bias.
  • Capture rationale, dependencies, and a clear drop order.

Collect valuable feedback from your users

Start today and have your feedback portal up and running in minutes.