What comes next?

Tim Parsons
May 17, 2023

How to prioritise, by Move78

Mastering the art of prioritisation is important for every business, but it’s especially critical for startups, where resources are always limited and a misstep can stall your progress.

The logical next step is not always obvious, and different perspectives can quickly turn into heated disagreements. This is where objective methods of prioritisation can be hugely helpful. Here are six prioritisation methods you can start using today.

Value vs. Effort Analysis

In basic terms, striking the right balance between potential value and required effort is the key to effective prioritisation. The simplest version of this is the Value vs Effort matrix, which focuses on plotting tasks in relation to each other in these terms:

  • A Big Bet (high-value/high-effort) item may be a new feature that customers have been requesting, but that will take significant time and effort to design and deploy.
  • A Quick Win (high-value/low-effort) task might be integrating a chatbot to manage straightforward customer inquiries: a quick job for developers that could save customer service teams many hours a week.
  • Fill-ins (low-value/low-effort) are tasks that need to be done at some point, but probably during a quiet week, or when more valuable items are blocked.
  • Thankless tasks (low-value/high-effort) should be avoided where possible.

The idea is to focus your team’s efforts on the high value items, favouring those accomplished with lower effort.
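As a sketch of how this plays out in practice, here’s a minimal quadrant classifier. It assumes each task is scored 1–10 on value and effort; the scores, the threshold of 5, and the task names are all hypothetical.

```python
# A minimal sketch of a value vs. effort matrix. Each task is scored 1-10
# on both axes; the scores, threshold, and task names are all hypothetical.

def quadrant(value: int, effort: int, threshold: int = 5) -> str:
    """Place a task in one of the four value/effort quadrants."""
    if value > threshold:
        return "Quick Win" if effort <= threshold else "Big Bet"
    return "Fill-in" if effort <= threshold else "Thankless Task"

# Work through quadrants in the order the advice above suggests:
# high value first, favouring lower effort.
PRIORITY = ["Quick Win", "Big Bet", "Fill-in", "Thankless Task"]

tasks = {
    "Chatbot integration":    (8, 3),  # (value, effort)
    "New requested feature":  (9, 8),
    "Tidy the internal wiki": (3, 2),
    "Migrate legacy reports": (2, 7),
}

for name, (value, effort) in sorted(
    tasks.items(), key=lambda t: PRIORITY.index(quadrant(*t[1]))
):
    print(f"{quadrant(value, effort):15} {name}")
```

Even a rough scoring pass like this forces the conversation away from gut feel: disagreements become arguments about a task’s value or effort score, which are much easier to resolve.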

While this is a useful starting point for prioritisation, it does paper over a lot of nuance. Tasks might have varying degrees of value and effort that don't always fit neatly into the quadrants, and overly focusing on Quick Wins can lead to the neglect of Big Bets that are vital to the company’s long-term success. In that sense, it’s important to distinguish between urgency and importance.

The Eisenhower Matrix

I have two kinds of problems, the urgent and the important. The urgent are not important, and the important are never urgent.
Dwight D. Eisenhower

Popularised in The 7 Habits of Highly Effective People, the Eisenhower Matrix sorts tasks into four quadrants based on importance and urgency.

Urgent & Important

These tasks demand immediate attention and have significant consequences if not addressed. For example, resolving a website outage or addressing a critical security breach. You should tackle these issues head-on, deploying all the resources at your disposal to resolve them promptly.

Important but Not Urgent

These tasks contribute to long-term success, but don't require any immediate action. Examples include developing features that will appeal to new audiences, or refining your company's positioning vs competitors. You should dedicate time to these items, and encourage team members to focus on them too, despite the lack of immediate pressure.

Urgent but Not Important

These tasks demand immediate attention but have a lower impact on your business goals. Non-essential meetings, admin, and most emails can be put into this category. These items should be delegated or streamlined, freeing up your time for more important things.

Neither Urgent nor Important

Tasks in this category don't contribute significantly to your success, and don't require immediate attention. Examples might include attending (most) Meetups or undertaking research with no foreseeable application. Aim to cull these as they emerge (where possible).

The Eisenhower Matrix is a helpful tool for differentiating actual value from busy work, but let's face it, getting different teams to agree on what's important can be challenging.


ABC and MoSCoW

ABC and MoSCoW are ranking systems that attach a status to tasks, sorting them into categories.

An ABC framework categorises tasks into three groups:

  • A-tasks are mission-critical and demand immediate attention; urgent bug fixes or essential feature improvements that directly shape user experience.
  • B-tasks are significant, but not urgent; refining existing features or implementing customer feedback, which are both valuable but can probably be deferred.
  • C-tasks are helpful, but can be put on hold; less pressing updates or nice-to-have features that won't make or break your product's success.

Similarly, the MoSCoW approach sorts tasks into four categories:

  • Must-haves are non-negotiable essentials; the core functionalities that define your product's value.
  • Should-haves are important but not critical; performance enhancements that boost user satisfaction but are not strictly indispensable.
  • Could-haves add value but can be deferred; delightful add-ons that can be postponed without jeopardising your product's core appeal.
  • Won't-haves are low-priority items that can be set aside; they may be interesting, but they don't align with your current goals.

Having teams rank tasks collectively using either method can both focus minds and unearth contrasting perspectives. Don’t be afraid of disagreement; give individuals a chance to state their point of view, and encourage them to build business cases to support their position. Done right, the team will emerge with a clearer picture of what’s truly important, and how that relates to individual and departmental priorities.


The RICE Framework

The RICE framework is a more formulaic incarnation of the systems outlined above. It’s generally used by larger teams, or where several competing priorities are in play with no obvious resolution.

The RICE approach evaluates tasks based on four criteria: Reach, Impact, Confidence, and Effort:

  • Reach gauges the number of customers (or potential customers) affected.
  • Impact assesses the potential influence on the business.
  • Confidence measures your certainty of the task's success.
  • Effort estimates the required time and resources.

Score each out of 10 (or 50, or 100; just keep it consistent). A change that affects all customers or prospective customers would score a 10 for reach. If you’re 80% sure it will work as intended, it gets an 8 for confidence, and so on.

You can then use the following formula to calculate the task’s RICE score:

RICE score = (Reach × Impact × Confidence) / Effort

This score gives teams a more principled foundation for making decisions. A given feature may be extremely promising in terms of reach, impact and confidence, but if implementing it will take months, some collection of lower-impact tasks may be a better use of time and energy. In that sense, RICE quantifies opportunity cost in a tangible way.
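Under the scoring scheme above, ranking a backlog by RICE takes only a few lines. A minimal sketch, using the standard formula (Reach × Impact × Confidence) / Effort; the task names and scores are hypothetical:

```python
# A sketch of RICE scoring, assuming each criterion is scored out of 10
# as described above. Task names and scores are hypothetical.

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

backlog = [
    # (task, reach, impact, confidence, effort)
    ("Checkout redesign", 10, 8, 6, 9),
    ("Support chatbot",    6, 5, 8, 3),
    ("Dark mode",          4, 3, 9, 2),
]

# Highest RICE score first.
for task, *scores in sorted(backlog, key=lambda t: rice_score(*t[1:]), reverse=True):
    print(f"{task:20} RICE = {rice_score(*scores):.1f}")
```

Note how the high-reach redesign (10 × 8 × 6 / 9 ≈ 53) ranks below two cheaper items once effort is divided out; that’s the opportunity-cost effect in action.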

Of course you can still disagree about the ranking (confidence levels are particularly subjective), but the point is that RICE forces teams to examine their assumptions in a more structured way, supporting their assertions with research, data, and alignment with strategic priorities.

Pareto Analysis (a.k.a. the 80/20 rule)

Lastly, the Pareto Principle (or the 80/20 rule) can be applied to any of the methods above. It works on the assumption that 20% of your input (business activity) produces 80% of the output (business value).

While the ratio isn't meant to be exact, the underlying idea of a wildly imbalanced relationship between input and output holds true across many domains.

It’s an important principle to bear in mind. Teams should endeavour to pinpoint the 20% of tasks responsible for 80% of the desired outcomes, ensuring their focus remains on what really matters.

Perhaps 80% of user satisfaction is driven by just 20% of a product's features. The product team should probably focus their efforts on enhancing and improving those features, or at least treat their usefulness as a base to build out from.

You may discover that 20% of your customers are driving 80% of your revenue. Focusing your efforts around the needs of that customer segment is likely to have a greater impact on your bottom line than adopting a more general approach.

If 80% of your customers are finding you via search engines, marketing should probably be dedicating more time to SEO than influencer outreach.
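The revenue example above is easy to check against your own data: sort customers by revenue and walk down the list until you cross the 80% line. A minimal sketch, with made-up customer names and figures:

```python
# A sketch of Pareto analysis on revenue. Customer names and revenue
# figures are hypothetical.

def pareto_head(revenues: dict[str, float], cutoff: float = 0.8) -> list[str]:
    """Return the smallest set of top customers covering `cutoff` of revenue."""
    total = sum(revenues.values())
    head, running = [], 0.0
    for name, rev in sorted(revenues.items(), key=lambda kv: -kv[1]):
        head.append(name)
        running += rev
        if running >= cutoff * total:
            break
    return head

revenues = {"Acme": 50_000, "Globex": 30_000, "Initech": 10_000,
            "Umbrella": 6_000, "Hooli": 4_000}

top = pareto_head(revenues)
share = sum(revenues[c] for c in top) / sum(revenues.values())
print(f"{len(top)}/{len(revenues)} customers drive {share:.0%} of revenue: {top}")
```

The same function works for features by usage, pages by traffic, or tickets by support time; swap in whatever input/output pair you care about.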


Prioritising effectively is hard, and there’s no single right way to do it. The frameworks above are really just useful ways of working through problems; how and when to deploy them will depend on the circumstances.

Need to move fast?

If you’re standing at a crossroads and need to make decisions quickly, a value/effort matrix will probably suffice. Try to quantify how you plot things (e.g. team time vs. additional users), and answers will start to emerge.

Feeling overwhelmed?

If you’re overwhelmed by tasks (and something probably has to give), the Eisenhower Matrix is your friend. Drop and delegate items as appropriate, then focus on the urgent-and-important quadrant. If after considered analysis every task is urgent and important, or there are so many that you’ll never actually get to the important-but-not-urgent work, consider scaling back your ambitions (or hiring some help).

Need to resolve team alignment?

ABC and MoSCoW really exist to resolve disagreements around MVP creation. They force teams to compare itemised user stories against set objectives for user experience, which naturally makes things less subjective. The must/should distinction often reflects the needs of early adopters vs. the wider commercialisation of a product; a shared understanding of that difference is often enough to find resolution.

Complex decision space?

RICE exists to deal with complex decisions: multi-feature apps, or websites that sell a wide range of products, are prime examples. It’s a useful system, but a lot of work to get right, so make sure you really need it. Also make sure the whole team has bought into both the system and the scoring method, or it’s likely to cause more disagreements than it solves.

Got analysis paralysis?

The Pareto Principle is especially useful for retrospectives. It’s often hard to predict what’s going to be valuable in the future, while in retrospect it can seem obvious. It’s important to evaluate your decisions based on what you knew at the time, rather than what you know now: if your wasted time was spent on experimental activity that could have worked but didn’t, that’s forgivable, and maybe even admirable; if it’s made up of tasks that were obviously unimportant or unlikely to work, that’s much more of a problem.

Applying 80/20 thinking at regular intervals helps teams avoid the kind of inertia that leads to predictable waste. Make sure your activities have clear outcomes attached to them, and regularly cull the ones that aren’t delivering.

Want to make better decisions? Get in touch!
