How to Implement Hypothesis-Driven Development


Think back to high school science class. Our teachers had a framework for helping us learn – an experimental approach based on the best available evidence at hand. We were asked to make observations about the world around us, then attempt to form an explanation or hypothesis to explain what we had observed. We then tested this hypothesis by predicting an outcome based on our theory that would be achieved in a controlled experiment – if the predicted outcome was achieved, we gained confidence that our theory was correct.

We could then apply this learning to inform and test other hypotheses by constructing more sophisticated experiments, and tuning, evolving, or abandoning any hypothesis as we made further observations from the results we achieved.

Experimentation is the foundation of the scientific method, which is a systematic means of exploring the world around us. Although some experiments take place in laboratories, it is possible to perform an experiment anywhere, at any time, even in software development.

Practicing Hypothesis-Driven Development [1] is thinking about the development of new ideas, products, and services – even organizational change – as a series of experiments to determine whether an expected outcome will be achieved. The process is iterated upon until a desirable outcome is obtained or the idea is determined to be not viable.

We need to change our mindset to view our proposed solution to a problem statement as a hypothesis, especially in new product or service development – the market we are targeting, how a business model will work, how code will execute and even how the customer will use it.

We do not do projects anymore, only experiments. Customer discovery and Lean Startup strategies are designed to test assumptions about customers. Quality Assurance is testing system behavior against defined specifications. The experimental principle also applies in Test-Driven Development – we write the test first, then use the test to validate that our code is correct, and succeed if the code passes the test. Ultimately, product or service development is a process to test a hypothesis about system behavior in the environment or market it is developed for.

The key outcome of an experimental approach is measurable evidence and learning. Learning is the information we have gained from conducting the experiment. Did what we expect to occur actually happen? If not, what did and how does that inform what we should do next?

In order to learn we need to use the scientific method for investigating phenomena, acquiring new knowledge, and correcting and integrating previous knowledge back into our thinking.

As the software development industry continues to mature, we now have an opportunity to leverage improved capabilities such as Continuous Design and Delivery to maximize our potential to learn quickly what works and what does not. By taking an experimental approach to information discovery, we can more rapidly test our solutions against the problems we have identified in the products or services we are attempting to build. The goal is to optimize our effectiveness at solving the right problems, rather than simply becoming a feature factory that continually builds solutions.

The steps of the scientific method are to:

  • Make observations
  • Formulate a hypothesis
  • Design an experiment to test the hypothesis
  • State the indicators to evaluate if the experiment has succeeded
  • Conduct the experiment
  • Evaluate the results of the experiment
  • Accept or reject the hypothesis
  • If necessary, make and test a new hypothesis

Using an experimentation approach to software development

We need to challenge the concept of having fixed requirements for a product or service. Requirements are valuable when teams execute a well-known or well-understood phase of an initiative and can leverage well-understood practices to achieve the outcome. However, when you are in an exploratory, complex, and uncertain phase, you need hypotheses. Handing teams a set of business requirements reinforces an order-taking approach and a flawed mindset: the business does the thinking and ‘knows’ what is right, and the purpose of the development team is to implement what they are told. But when operating in an area of uncertainty and complexity, all the members of the development team should be encouraged to think and share insights on the problem and potential solutions. A team simply taking orders from a business owner is not utilizing the full potential, experience, and competency that a cross-functional, multi-disciplined team offers.

Framing Hypotheses

The traditional user story framework is focused on capturing requirements for what we want to build and for whom, to enable the user to receive a specific benefit from the system.

As a… <role>

I want… <goal/desire>

So that… <receive benefit>

Behaviour-Driven Development (BDD) and Feature Injection aim to improve on the original framework by supporting communication and collaboration between developers, testers, and non-technical participants in a software project.

In order to… <receive benefit>

As a… <role>

I want… <goal/desire>

When viewing work as an experiment, the traditional story framework is insufficient. As in our high school science experiment, we need to define the steps we will take to achieve the desired outcome. We then need to state the specific indicators (or signals) we expect to observe that provide evidence that our hypothesis is valid. These need to be stated before conducting the test to reduce the bias of interpretation of results.

If we observe signals that indicate our hypothesis is correct, we can be more confident that we are on the right path and can alter the user story framework to reflect this.

Therefore, a user story structure to support Hypothesis-Driven Development would be:

We believe <this capability>

What functionality will we develop to test our hypothesis? By defining a ‘test’ capability of the product or service we are attempting to build, we identify the functionality and the hypothesis we want to test.

Will result in <this outcome>

What is the expected outcome of our experiment? What is the specific result we expect to achieve by building the ‘test’ capability?

We will have confidence to proceed when <we see a measurable signal>

What signals will indicate that the capability we have built is effective? What key metrics (qualitative or quantitative) will we measure to provide evidence that our experiment has succeeded and to give us enough confidence to move to the next stage?

The threshold you use for statistical significance will depend on your understanding of the business and context you are operating within. Not every company has the user sample size of Amazon or Google to run statistically significant experiments in a short period of time. Limits and controls need to be defined by your organization to determine acceptable evidence thresholds that will allow the team to advance to the next step.

For example, if you are building a rocket ship you may want your experiments to have a high threshold for statistical significance. If you are deciding between two different flows intended to help increase user sign up you may be happy to tolerate a lower significance threshold.
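To make the threshold discussion concrete, here is a minimal sketch of how a team might check an A/B-style conversion experiment against a pre-agreed significance threshold, using a standard two-proportion z-test. The numbers, function name, and threshold below are illustrative assumptions, not from the article.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert differently from A?
    Returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: 400/10,000 bookings on control, 460/10,000 on the variant.
z, p = two_proportion_z(400, 10_000, 460, 10_000)
print(f"z={z:.2f}, p={p:.3f}")  # compare p against the threshold your org agreed on
```

The point is not the particular test, but that the pass/fail threshold is decided before the experiment runs, as discussed above.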

The final step is to clearly and visibly state any assumptions made about our hypothesis, to create a feedback loop for the team to provide further input, debate, and understanding of the circumstances under which we are performing the test. Are the assumptions valid, and do they make sense from a technical and business perspective?

Hypotheses, when aligned to your MVP, can provide a testing mechanism for your product or service vision. They can test the most uncertain areas of your product or service, in order to gain information and improve confidence.

Examples of Hypothesis-Driven Development user stories are:

Business story:

We Believe That increasing the size of hotel images on the booking page Will Result In improved customer engagement and conversion. We Will Have Confidence To Proceed When we see a 5% increase in customers who review hotel images and then proceed to book within 48 hours.
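As a sketch of how a team might track such cards alongside its analytics, here is one possible representation as structured data. The field names and the extra assumption are my own illustrations, not part of the original story.

```python
from dataclasses import dataclass, field

@dataclass
class HypothesisCard:
    """One experiment card: the capability we believe in, the outcome we
    expect, and the measurable signal that gives us confidence to proceed."""
    we_believe: str
    will_result_in: str
    confident_when: str
    assumptions: list[str] = field(default_factory=list)  # stated up front

card = HypothesisCard(
    we_believe="increasing the size of hotel images on the booking page",
    will_result_in="improved customer engagement and conversion",
    confident_when="a 5% increase in image viewers who book within 48 hours",
    assumptions=["larger images do not hurt page load time on mobile"],
)
```

Writing the signal down as a field forces the team to state it before the test runs, which is exactly the bias-reduction point made earlier.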

It is imperative to have effective monitoring and evaluation tools in place when using an experimental approach to software development in order to measure the impact of our efforts and provide a feedback loop to the team. Otherwise, we are essentially blind to the outcomes of our efforts.

In agile software development, we define working software as the primary measure of progress. By combining Continuous Delivery and Hypothesis-Driven Development we can now define working software and validated learning as the primary measures of progress.

Ideally, we should not say we are done until we have measured the value of what is being delivered – in other words, gathered data to validate our hypothesis.

One example of how to gather data is A/B testing, which tests a hypothesis and measures the change in customer behavior. Alternative testing options include customer surveys, paper prototypes, and user and/or guerrilla testing.

One example of a company we have worked with that uses Hypothesis-Driven Development is lastminute.com. The team formulated a hypothesis that customers are only willing to pay a maximum price for a hotel based on the time of day they book. Tom Klein, CEO and President of Sabre Holdings, shared the story of how they improved conversion by 400% within a week.

Combining practices such as Hypothesis-Driven Development and Continuous Delivery accelerates experimentation and amplifies validated learning. This gives us the opportunity to accelerate the rate at which we innovate while relentlessly reducing costs, leaving our competitors in the dust. Ideally, we can achieve the ideal of one-piece flow: atomic changes that enable us to identify causal relationships between the changes we make to our products and services, and their impact on key metrics.

As Kent Beck said, “Test-Driven Development is a great excuse to think about the problem before you think about the solution”. Hypothesis-Driven Development is a great opportunity to test what you think the problem is before you work on the solution.

We also run a workshop to help teams implement Hypothesis-Driven Development. Get in touch to run it at your company.

[1] Hypothesis-Driven Development, by Jeffrey L. Taylor

What is hypothesis-driven development?


Uncertainty is one of the biggest challenges of modern product development. Most often, there are more question marks than answers available.


This fact forces us to work in an environment of ambiguity and unpredictability.

Instead of combatting this, we should embrace the circumstances and use tools and solutions that excel in ambiguity. One of these tools is a hypothesis-driven approach to development.

Hypothesis-driven development in a nutshell

As the name suggests, hypothesis-driven development is an approach that focuses development efforts around, you guessed it, hypotheses.

To make this more tangible, let’s compare it to two other common development approaches: feature-driven and outcome-driven.

In feature-driven development, we prioritize our work and effort based on specific features we planned and decided on upfront. The underlying goal here is predictability.

In outcome-driven development, the priorities are dictated not by specific features but by broader outcomes we want to achieve. This approach helps us maximize the value generated.

When it comes to hypothesis-driven development, the development effort is focused first and foremost on validating the most pressing hypotheses the team has. The goal is to maximize learning speed over all else.

Benefits of hypothesis-driven development

There are numerous benefits of a hypothesis-driven approach to development, but the main ones include:

  • Continuous learning
  • MVP mindset
  • Data-driven decision-making

Continuous learning

Hypothesis-driven development maximizes the amount of knowledge the team acquires with each release.

After all, if all you do is test hypotheses, each test must bring you some insight:


Hypothesis-driven development centers the whole prioritization and development process around learning.

MVP mindset

Instead of designing specific features or focusing on big, multi-release outcomes, a hypothesis-driven approach forces you to focus on minimum viable solutions (MVPs).

After all, the primary thing you are aiming for is hypothesis validation. It often doesn’t require scalability, a perfect user experience, or fully fledged features.


By definition, hypothesis-driven development forces you to truly focus on MVPs and avoid overcomplicating.

Data-driven decision-making

In hypothesis-driven development, each release focuses on testing a particular assumption. That test then brings you new data points, which help you formulate and prioritize your next hypotheses.

That’s truly a data-driven development loop that leaves little room for HiPPOs (the highest-paid person’s opinion).

Guide to hypothesis-driven development

Let’s take a look at what hypothesis-driven development looks like in practice. On a high level, it consists of four steps:

  • Formulate a list of hypotheses and assumptions
  • Prioritize the list
  • Design an MVP
  • Test and repeat

1. Formulate hypotheses

The first step is to list all hypotheses you are interested in.

Everything you wish to know about your users and market, as well as things you believe you know but don’t have tangible evidence to support, is a form of a hypothesis.

At this stage, I’m not a big fan of robust hypotheses such as, “We believe that if <we do something> then <something will happen> because <some user action>.”

To have such robust hypotheses, you need a solid enough understanding of your users, and if you do have it, then odds are you don’t need hypothesis-driven development anymore.

Instead, I prefer simpler statements that are closer to assumptions than hypotheses, such as:

  • “Our users will love feature X”
  • “The option to do X is very important for the student segment”
  • “Exam preparation is an important and underserved need that our users have”

2. Prioritize

The next step in hypothesis-driven development is to prioritize all assumptions and hypotheses you have. This will create your product backlog:


There are various prioritization frameworks and approaches out there, so choose whichever you prefer. I personally prioritize assumptions based on two main criteria:

  • How much will we gain if we positively validate the hypothesis?
  • How much will we learn during the validation process?

Your priorities, however, might differ depending on your current context.
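As an illustration of those two criteria in action, here is a minimal scoring sketch. The hypotheses echo the examples above; the scores, weights, and field names are made-up assumptions for the sake of the example.

```python
# Each backlog entry scores a hypothesis on the two criteria above:
# expected gain if validated, and expected learning from running the test.
backlog = [
    {"hypothesis": "Users will love feature X", "gain": 4, "learning": 2},
    {"hypothesis": "Option X matters to the student segment", "gain": 3, "learning": 5},
    {"hypothesis": "Exam preparation is an underserved need", "gain": 5, "learning": 4},
]

# Weight learning a bit higher, since learning speed is the point of HDD.
def score(item):
    return item["gain"] + 1.5 * item["learning"]

for item in sorted(backlog, key=score, reverse=True):
    print(f"{score(item):>5.1f}  {item['hypothesis']}")
```

Whatever framework you choose, the useful part is that the ordering rule is explicit and can itself be revisited as your context changes.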

3. Design an MVP

Hypothesis-driven development is centered around the idea of MVPs — that is, the smallest possible releases that will help you gather enough information to validate whether a given hypothesis is true.

User experience, maintainability, and product excellence are secondary.

4. Test and repeat

The last step is to launch the MVP and validate whether the actual impact and consequent user behavior validate or invalidate the initial hypothesis.

The success isn’t measured by whether the hypothesis turned out to be accurate, but by how many new insights and learnings you captured during the process.

Based on the experiment, revisit your current list of assumptions, and, if needed, adjust the priority list.

Challenges of hypothesis-driven development

Although hypothesis-driven development comes with great benefits, it’s not all wine and roses.

Let’s take a look at a few core challenges that come with a hypothesis-focused approach.

Lack of robust product experience

Focusing on validating hypotheses and the underlying MVP mindset comes at a cost. A robust product experience and great UX often require polish, optimizations, and iterations, which go against speed-focused hypothesis-driven development.

You can’t optimize for both learning and quality simultaneously.

Unfocused direction

Although hypothesis-driven development is great for gathering initial learnings, eventually, you need to start developing a focused and sustainable long-term product strategy. That’s where outcome-driven development shines.

There’s an infinite amount of explorations you can do, but at some point, you must flip the switch and narrow down your focus around particular outcomes.

Over-emphasis on MVPs

Teams that embrace a hypothesis-driven approach often fall into the trap of an “MVP only” approach. However, shipping an actual prototype is not the only way to validate an assumption or hypothesis.

You can utilize tools such as user interviews, usability tests, market research, or willingness to pay (WTP) experiments to validate most of your doubts.

There’s a thin line between being MVP-focused in development and overusing MVPs as a validation tool.

When to use hypothesis-driven development

As you’ve most likely noticed, hypothesis-driven development isn’t a multi-tool solution that can be used in every context.

On the contrary, its challenges make it an unsuitable development strategy for many companies.

As a rule of thumb, hypothesis-driven development works best in early-stage products with a high dose of ambiguity. Focusing on hypotheses helps bring enough clarity for the product team to understand where to even focus.

But once you discover your product-market fit and have a solid idea for your long-term strategy, it’s often better to shift into more outcome-focused development. You should still optimize for learning, but it should no longer be the primary focus of your development effort.

While you’re at it, you might also consider feature-driven development as a next step. However, that works only under particular circumstances where predictability is more important than the impact itself — for example, B2B companies delivering custom solutions for their clients or products focused on compliance.

Hypothesis-driven development can be a powerful learning-maximization tool. Its focus on MVP, continuous learning process, and inherent data-driven approach to decision-making are great tools for reducing uncertainty and discovering a path forward in ambiguous settings.

Honestly, the whole process doesn’t differ much from other development processes. The primary difference is that the backlog and priorities focus on hypotheses rather than features or outcomes.

Start by listing your assumptions, prioritizing them as you would any other backlog, and working your way top-to-bottom by shipping MVPs and adjusting priorities as you learn more about your market and users.

However, since hypothesis-driven development often lacks long-term cohesiveness, focus, and sustainable product experience, it’s rarely a good long-term approach to product development.

I tend to stick to outcome-driven and feature-driven approaches most of the time and resort to hypothesis-driven development if the ambiguity in a particular area is so hard that it becomes challenging to plan sensibly.



Hypothesis-Driven Development (Practitioner’s Guide)

Table of Contents

  • What is hypothesis-driven development (HDD)?
  • How do you know if it’s working?
  • How do you apply HDD to ‘continuous design’?
  • How do you apply HDD to application development?
  • How do you apply HDD to continuous delivery?
  • How does HDD relate to agile, design thinking, Lean Startup, etc.?

Like agile, hypothesis-driven development (HDD) is more a point of view with various associated practices than it is a single, particular practice or process. That said, my goal here is for you to leave with a solid understanding of how to do HDD and a specific set of steps that work for you to get started.

After reading this guide and trying out the related practice you will be able to:

  • Diagnose when and where hypothesis-driven development (HDD) makes sense for your team
  • Apply techniques from HDD to your work in small, success-based batches across your product pipeline
  • Frame and enhance your existing practices (where applicable) with HDD

Does your product program feel like a Netflix show you’d binge watch? Is your team excited to see what happens when you release stuff? If so, congratulations- you’re already doing it and please hit me up on Twitter so we can talk about it! If not, don’t worry- that’s pretty normal, but HDD offers some awesome opportunities to work better.


Building on the scientific method, HDD is a take on how to integrate test-driven approaches across your product development activities- everything from creating a user persona to figuring out which integration tests to automate. Yeah- wow, right?! It is a great way to energize and focus your practice of agile and your work in general.

By product pipeline, I mean the set of processes you and your team undertake to go from a certain set of product priorities to released product. If you’re doing agile, then iteration (sprints) is a big part of making these work.

(Figure: the product pipeline, with a metric for each area.)

It wouldn’t be very hypothesis-driven if I didn’t have an answer to that! In the diagram above, you’ll find metrics for each area.

For your application of HDD to what we’ll call continuous design, the metric to improve is the ratio of all your release content to the release content that meets or exceeds your target metrics on user behavior. For example, if you developed a new, additional way for users to search for products and set the success threshold at it being used in >10% of user sessions, did that feature succeed or fail by that measure?

For application development, the metric you’re working to improve is basically velocity, meaning story points or, generally, release content per sprint. For continuous delivery, it’s how often you can release. Hypothesis testing is, of course, central to HDD and to generally doing agile with any kind of focus on valuable outcomes, and I think it shares the metric on successful release content with continuous design.


The first component is team cost, which you would sum up over whatever period you’re measuring. This includes ‘c_$’, which is total compensation plus loading (benefits, equipment, etc.), as well as ‘g’, which is the cost of the gear you use- that might be application infrastructure like AWS, GCP, etc., along with any other infrastructure you buy or share with other teams. For example, using a backend-as-a-service like Heroku or Firebase might push up your value for ‘g’ while deferring the cost of building your own app infrastructure.

The next component is release content, ‘f_e’. If you’re already estimating story points somehow, you can use those. If you’re a NoEstimates crew, and, hey, I get it, then you’d need to do some kind of rough proportional sizing of your release content for the period in question. The next term, ‘r_f’, is optional, but this is an estimate of the time you’re having to invest in rework, bug fixes, manual testing, manual deployment, and anything else that doesn’t go as planned.

The last term, ‘s_d’, is one of the most critical and is an estimate of the proportion of your release content that’s successful relative to the success metrics you set for it- in the earlier example, whether the new search feature hit its >10% usage threshold. Naturally, if you’re not doing this it will require some work and changing your habits, but it’s hard to deliver value in agile if you don’t know what that means and define it against anything other than actual user behavior.
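The post’s original figure shows the exact formula for ‘F’; it isn’t reproduced here. Reading the terms above together (total cost over successful release content), one plausible form, offered as my assumption rather than the author’s exact equation, is:

```latex
F = \frac{c_{\$} + g}{f_e \cdot (1 - r_f) \cdot s_d}
```

Under this reading, ‘F’ is the cost per successful unit of release content: it falls when the team ships more validated content per dollar, and rises with rework or with content that misses its success metrics, which matches how the retrospective use of ‘F’ is described below.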

Here’s how some of the key terms lay out in the product pipeline, and here’s how a team might tabulate this for a given month (the original post shows a diagram and a worked table for both).

Is the punchline that you should be shooting for a cost of $1,742 per story point? No. First, this is for a single month and would only serve the purpose of the team setting a baseline for itself. Like any agile practice, the interesting part of this is seeing how your value for ‘F’ changes from period to period, using your team retrospectives to talk about how to improve it. Second, this is just a single team, and the economic value (ex: revenue) related to a given story point will vary enormously from product to product. There’s a Google Sheets-based calculator that you can use here: Innovation Accounting with ‘F’.

Like any metric, ‘F’ only matters if you find it workable to get in the habit of measuring it and paying attention to it. As a team, say, evaluates its progress on OKRs (objectives and key results), ‘F’ offers a view on the health of the team’s collaboration in the context of their product and organization. For example, if the team’s accruing technical debt, that will show up as a steady increase in ‘F’. If a team’s invested in test or deploy automation or started testing their release content with users more specifically, that should show up as a steady lowering of ‘F’.

In the next few sections, we’ll step through how to apply HDD to your product pipeline by area, starting with continuous design.


It’s a mistake to ask your designer to explain every little thing they’re doing, but it’s also a mistake to decouple their work from your product’s economics. On the one hand, no one likes someone looking over their shoulder, and you may not have the professional training to reasonably understand what they’re doing hour to hour, even day to day. On the other hand, it’s a mistake to charter a designer’s work without a testable definition of success and without collaborating around it.

Managing this is hard since most of us aren’t designers and because it takes a lot of work and attention to detail to work out what you really want to achieve with a given design.

Beginning with the End in Mind

The difference between art and design is intention- in design we always have one and, in practice, it should be testable. For this, I like the practice of customer experience (CX) mapping. CX mapping is a process for focusing the work of a team on outcomes–day to day, week to week, and quarter to quarter. It’s amenable to both qualitative and quantitative evidence but it is strictly focused on observed customer behaviors, as opposed to less direct, more lagging observations.

CX mapping works to define the CX in testable terms that are amenable to both qualitative and quantitative evidence. Specifically for each phase of a potential customer getting to behaviors that accrue to your product/market fit (customer funnel), it answers the following questions:

1. What do we mean by this phase of the customer funnel? 

What do we mean by, say, ‘Acquisition’ for this product or individual feature? How would we know it if we see it?

2. How do we observe this (in quantitative terms)? What’s the DV?

This comes next after we answer the question “What does this mean?”. The goal is to come up with a focal single metric (maybe two), a ‘dependent variable’ (DV) that tells you how a customer has behaved in a given phase of the CX (ex: Acquisition, Onboarding, etc.).

3. What is the cut off for a transition?

Not super exciting, but extremely important in actual practice, the idea here is to establish the cutoff for deciding whether a user has progressed from one phase to the next or abandoned/churned.

4. What is our ‘Line in the Sand’ threshold?

Popularized by the book ‘Lean Analytics’, the idea here is that good metrics are ones that change a team’s behavior (decisions) and for that you need to establish a threshold in advance for decision making.

5. How might we test this? What new IVs are worth testing?

The ‘independent variables’ (IV’s) you might test are basically just ideas for improving the DV (#2 above).

6. What’s tricky? What do we need to watch out for?

Getting this working will take some tuning, but it’s infinitely doable and there aren’t a lot of good substitutes for focusing on what’s a win and what’s a waste of time.
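If it helps to see those six questions as data, here is a minimal sketch of how a single row of a CX map might be captured. Every name and value below is a hypothetical illustration, not taken from the HVAC in a Hurry map.

```python
# One CX-map row per funnel phase, mirroring the six questions above.
cx_map = [
    {
        "phase": "Acquisition",
        "definition": "a field technician hears about and opens the app",
        "dv": "share of invited techs who open the app in week one",
        "transition_cutoff": "opened the app at least once within 7 days",
        "line_in_the_sand": 0.30,  # decision threshold agreed in advance
        "ivs_to_test": ["QR code on invoices", "dispatcher referral prompt"],
        "watch_out_for": "techs who install but never reach the parts screen",
    },
]
```

The useful property is that each phase carries its own DV, cutoff, and threshold, so the team can argue about the numbers before the experiment instead of after.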

The image below shows a working CX map for a company (HVAC in a Hurry) that services commercial heating, ventilation, and air-conditioning systems. And this particular CX map is for the specific ‘job’/task/problem of how their field technicians get the replacement parts they need.


For more on CX mapping, you can also check out its page: Tutorial: Customer Experience (CX) Mapping.

Unpacking Continuous Design for HDD

For unpacking the work of design/Continuous Design with HDD, I like to use the ‘double diamond’ framing of ‘right problem’ vs. ‘right solution’, which I first learned about in Donald Norman’s seminal book, The Design of Everyday Things.

I’ve organized the balance of this section around three big questions:

How do you test that you’ve found the ‘Right Problem’?

How do you test that you’ve found demand and have the ‘Right Solution’?

How do you test that you’ve designed the ‘Right Solution’?


Let’s say it’s an internal project- a ‘digital transformation’ for an HVAC (heating, ventilation, and air conditioning) service company. The digital team thinks it would be cool to organize the documentation for all the different HVAC equipment the company’s technicians service. But, would it be?

The only way to find out is to go out and talk to these technicians! First, you need to test whether you’re talking to someone who is one of these technicians. For example, you might have a screening question like: ‘How many HVACs did you repair last week?’. If it’s <10, you might instead be talking to a handyman or a manager (or someone who’s not an HVAC tech at all).

Second, you need to ask non-leading questions. The evidentiary value of a specific answer to a general question is much higher than that of a specific answer to a specific question. Also, some questions are just leading. For example, if you ask such a subject ‘Would you use a documentation system if we built it?’, they’re going to say yes, just to avoid the awkwardness and sales pitch they expect if they say no.

How do you draft personas? Much more renowned designers than myself (Donald Norman among them) disagree with me about this, but personally I like to draft my personas while I’m creating my interview guide and before I do my first set of interviews. Whether you draft or interview first is also of secondary importance if you’re doing HDD- if you’re not iteratively interviewing and revising your material based on what you’ve found, it’s not going to be very functional anyway.

Really, the persona (and the jobs-to-be-done) is a means to an end- it should be answering some facet of the question ‘Who is our customer, and what’s important to them?’. It’s iterative, with a process that looks something like this: draft, interview, revise, and repeat.


How do you draft jobs-to-be-done? Personally- I like to work these in a similar fashion- draft, interview, revise, and then repeat, repeat, repeat.

You’ll use the same interview guide and subjects for these. The template is the same as the personas, but I maintain a separate (though related) tutorial for these–

  • A guide on creating Jobs-to-be-Done (JTBD)
  • A template for drafting jobs-to-be-done (JTBD)

How do you interview subjects? And, action! The #1 place I see teams struggle is at the beginning and it’s with the paradox that to get to a big market you need to nail a series of small markets. Sure, they might have heard something about segmentation in a marketing class, but here you need to apply that from the very beginning.

The fix is to create a screener for each persona. This is a factual question whose job is specifically and only to determine whether a given subject does or does not map to your target persona. In the HVAC in a Hurry technician persona (see above), you might have a screening question like: ‘How many HVACs did you repair last week?’. If it’s <10, you might instead be talking to a handyman or a manager (or someone who’s not an HVAC tech at all).

And this is the point where (if I’ve made them comfortable enough to be candid with me) teams will say ‘But we want to go big- be the next Facebook.’ And then we talk about how just about all those success stories where there’s a product with, for all intents and purposes, a universal user base started out by killing it in small, specific segments and learning and growing from there.

Sorry for all that, reader, but I run into this so frequently, and it’s so crucial to what I think is a healthy practice of HDD, that it seemed necessary.

The key with the interview guide is to start with general questions where you’re testing for a specific answer and then progressively get into more specific questions. Here are some resources–

  • An example interview guide related to the previous tutorials
  • A general take on these interviews in the context of a larger customer discovery/design research program
  • A template for drafting an interview guide

To recap, what’s a ‘Right Problem’ hypothesis? The Right Problem (persona and PS/JTBD) hypothesis is the most fundamental, but the hardest to pin down. You should know what kind of shoes your customer wears and when and why they use your product. You should be able to apply factual screeners to identify subjects that map to your persona or personas.

You should know what people who look like/behave like your customer who don’t use your product are doing instead, particularly if you’re in an industry undergoing change. You should be analyzing your quantitative data with strong, specific, emphatic hypotheses.

If you make software for HVAC (heating, ventilation and air conditioning) technicians, you should have a decent idea of what you’re likely to hear if you ask such a person a question like ‘What are the top 5 hardest things about finishing an HVAC repair?’

In summary, HDD here looks something like this:


01 IDEA: The working idea is that you know your customer and you’re solving a problem/doing a job (whatever term feels like it fits for you) that is important to them. If this isn’t the case, everything else you’re going to do isn’t going to matter.

Also, you know the top alternatives, which may or may not be what you see as your direct competitors. This is important as an input into focused testing demand to see if you have the Right Solution.

02 HYPOTHESIS: If you ask non-leading questions (like ‘What are the top 5 hardest things about finishing an HVAC repair?’), then you should generally hear relatively similar responses.

03 EXPERIMENTAL DESIGN: You’ll want an Interview Guide and, critically, a screener. This is a factual question you can use to make sure any given subject maps to your persona. With the HVAC repair example, this would be something like ‘How many HVAC repairs have you done in the last week?’ where you’re expecting an answer >5. This is important because if your screener isn’t tight enough, your interview responses may not converge.

04 EXPERIMENTATION: Get out and interview some subjects- but with a screener and an interview guide. The resources above have more on this, but one key thing to remember is that the interview guide is a guide, not a questionnaire. Your job is to make the interaction as normal as possible and it’s perfectly OK to skip questions or change them. It’s also 1000% OK to revise your interview guide during the process.

05 PIVOT OR PERSEVERE: What did you learn? Was it consistent? Good results are: a) We didn’t know what was on their A-list and what alternatives they are using, but now we do. b) We knew what was on their A-list and what alternatives they are using- we were pretty much right (doesn’t happen as much as you’d think). c) Our interviews just didn’t work/converge. Let’s try this again with some changes (happens all the time to smart teams and is very healthy).

By this, I mean: How do you test whether you have demand for your proposition? How do you know whether it’s better enough at solving a problem (doing a job, etc.) than the current alternatives your target persona has available to them now?

If an existing team was going to pick one of these areas to start with, I’d pick this one. While they’ll waste time if they haven’t found the right problem to solve and, yes, usability does matter, in practice this area of HDD is a good forcing function for really finding out what the team knows vs. doesn’t. This is why I see it as a kind of fulcrum between Right Problem and Right Solution.


This is not about usability and it does not involve showing someone a prototype, asking them if they like it, and checking the box.

Lean Startup offers a body of practice that’s an excellent fit for this. However, it’s widely misused because it’s so much more fun to build stuff than to test whether or not anyone cares about your idea. Yeah, seriously- that is the central challenge of Lean Startup.

Here’s the exciting part: You can massively improve your odds of success. While Lean Startup does not claim to be able to take any idea and make it successful, it does claim to minimize waste- and that matters a lot. Let’s just say that a new product or feature has a 1 in 5 chance of being successful. Using Lean Startup, you can iterate through 5 ideas in the space it would take you to build 1 out (and hope for the best)- this makes the improbable probable, which is pretty much the most you can ask for in the innovation game.

Build, measure, learn, right? Kind of. I’ll harp on this since it’s important and a common failure mode related to Lean Startup: an MVP is not a 1.0. As the Lean Startup folks (and Eric Ries’ book) will tell you, the right order is learn, build, measure. Specifically–

Learn: Who your customer is and what matters to them (see Solving the Right Problem, above). If you don’t do this, you’ll be throwing darts with your eyes closed. Those darts are a lot cheaper than the darts you’d throw if you were building out the solution all the way (to strain the metaphor some), but far from free.

In particular, I see lots of teams run an MVP experiment and get confusing, inconsistent results. Most of the time, this is because they don’t have a screener and they’re putting the MVP in front of an audience that’s too wide ranging. A grandmother is going to respond differently than a millennial to the same thing.

Build: An experiment, not a real product, if at all possible (and it almost always is). Then consider MVP archetypes (see below) that will deliver the best results and try them out. You’ll likely have to iterate on the experiment itself some, particularly if it’s your first go.

Measure: Have metrics and link them to a kill decision. The Lean Startup term is ‘pivot or persevere’, which is great and makes perfect sense, but in practice the pivot/kill decisions are hard, and as you design your experiment you should really think about what metrics and thresholds are really going to convince you.

How do you code an MVP? You don’t. This MVP is a means to running an experiment to test motivation- so formulate your experiment first and then figure out an MVP that will get you the best results with the least amount of time and money. Just since this is a practitioner’s guide, with regard to ‘time’, that’s both time you’ll have to invest as well as how long the experiment will take to conclude. I’ve seen them both matter.

The most important first step is just to start with a simple hypothesis about your idea, and I like the form of ‘If we [do something] for [a specific customer/persona], then they will [respond in a specific, observable way that we can measure].’ For example, if you’re building an app for parents to manage allowances for their children, it would be something like ‘If we offer parents an app to manage their kids’ allowances, they will download it, try it, make a habit of using it, and pay for a subscription.’

All that said, for getting started:

  • A guide on testing with Lean Startup
  • A template for creating motivation/demand experiments

To recap, what’s a Right Solution hypothesis for testing demand? The core hypothesis is that you have a value proposition that’s better enough than the target persona’s current alternatives that you’re going to acquire customers.

As you may notice, this creates a tight linkage with your testing from Solving the Right Problem. This is important because while testing value propositions with Lean Startup is way cheaper than building product, it still takes work and you can only run a finite set of tests. So, before you do this kind of testing I highly recommend you’ve iterated to validated learning on what you see below: a persona, one or more PS/JTBD, the alternatives they’re using, and a testable view of why your VP is going to displace those alternatives. With that, your odds of doing quality work in this area dramatically increase!


What’s the testing, then? Well, it looks something like this:


01 IDEA: Most practicing scientists will tell you that the best way to get a good experimental result is to start with a strong hypothesis. Validating that you have the Right Problem and know what alternatives you’re competing against is critical to making investments in this kind of testing yield valuable results.

With that, you have a nice clear view of what alternative you’re trying to see if you’re better than.

02 HYPOTHESIS: I like a cause and effect stated here, like: ‘If we [offer something to said persona], they will [react in some observable way].’ This really helps focus your work on the MVP.

03 EXPERIMENTAL DESIGN: The MVP is a means to enable an experiment. It’s important to have a clear, explicit declaration of the hypothesis and for the MVP to deliver a metric for which you will (in advance) decide on a fail threshold. Most teams find it easier to kill an idea decisively with a kill metric vs. a success metric, even though they’re literally different sides of the same threshold.

04 EXPERIMENTATION: It is OK to tweak the parameters some as you run the experiment. For example, if you’re running a Google AdWords test, feel free to try new and different keyword phrases.

05 PIVOT OR PERSEVERE: Did you end up above or below your fail threshold? If below, pivot and focus on something else. If above, great- what is the next step to scaling up this proposition?

How does this relate to usability? What’s usability vs. motivation? You might reasonably wonder: If my MVP has something that’s hard to understand, won’t that affect the results? Yes, sure. Testing for usability and the related tasks of building stuff are much more fun and (short-term) gratifying. I can’t emphasize enough how much harder it is for most founders to push themselves to focus on motivation.

There’s certainly a relationship and, as we transition to the next section on usability, it seems like a good time to introduce the relationship between motivation and usability. My favorite tool for this is BJ Fogg’s Fogg Curve, which appears below. On the y-axis is motivation and on the x-axis is ‘ability’, the inverse of usability. If you imagine a point in the upper left, that would be, say, a cure for cancer: something you really want no matter how hard it is to deal with. On the bottom right would be something like checking Facebook- you may not be super motivated, but it’s so easy.

The punchline is that there’s certainly a relationship but beware that for most of us our natural bias is to neglect testing our hypotheses about motivation in favor of testing usability.

(Figure: the Fogg Curve.)

First and foremost, delivering great usability is a team sport. Without a strong, co-created narrative, your performance is going to be sub-par. This means your developers, testers, analysts should be asking lots of hard, inconvenient (but relevant) questions about the user stories. For more on how these fit into an overall design program, let’s zoom out and we’ll again stand on the shoulders of Donald Norman.

Usability and User Cognition

To unpack usability in a coherent, testable fashion, I like to use Donald Norman’s 7-step model of user cognition:


The process starts with a Goal, and that goal interacts with an object in an environment, the ‘World’. With the concepts we’ve been using here, the Goal is equivalent to a job-to-be-done. The World is your application in whatever circumstances your customer will use it (in a cubicle, on a plane, etc.).

The Reflective layer is where the customer is making a decision about alternatives for their JTBD/PS. In his seminal book, The Design of Everyday Things, Donald Norman’s example is deciding whether to continue reading a book as the sun goes down. In the framings we’ve been using, we looked at understanding your customer’s Goals/JTBD in ‘How do you test that you’ve found the ‘right problem’?’, and we looked at evaluating their alternatives relative to your own (proposition) in ‘How do you test that you’ve found the ‘right solution’?’.

The Behavioral layer is where the user interacts with your application to get what they want- hopefully engaging with interface patterns they know so well they barely have to think about it. This is what we’ll focus on in this section. Critical here is leading with strong narrative (user stories), pairing those with well-understood (by your persona) interface patterns, and then iterating through qualitative and quantitative testing.

The Visceral layer is the lower-level visual cues that a user gets- in the design world this is a lot about good visual design and even more about visual consistency. We’re not going to look at that in depth here, but if you haven’t already I’d make sure you have a working style guide to ensure consistency (see Creating a Style Guide).

How do you unpack the UX stack for testability? Back to our example company, HVAC in a Hurry, which services commercial heating, ventilation, and A/C systems: let’s say we’ve arrived at a set of tested learnings for Trent the Technician.

As we look at how we’ll iterate to the right solution in terms of usability, let’s say we arrive at the following user story we want to unpack (this would be one of many, even just for the PS/JTBD above):

As Trent the Technician, I know the part number and I want to find it on the system, so that I can find out its price and availability.

Let’s step through the 7 steps above in the context of HDD, with a particular focus on achieving strong usability.

1. Goal This is the PS/JTBD: Getting replacement parts to a job site. An HDD-enabled team would have found this out by doing customer discovery interviews with subjects they’ve screened and validated to be relevant to the target persona. They would have asked non-leading questions like ‘What are the top five hardest things about finishing an HVAC repair?’ and consistently heard that one such thing is sorting out replacement parts. This validates the hypothesis that said PS/JTBD matters.

2. Plan For the PS/JTBD/Goal, which alternative are they likely to select? Is our proposition better enough than the alternatives? This is where Lean Startup and demand/motivation testing are critical. This is where we focused in ‘How do you test that you’ve found the ‘right solution’?’, and the HVAC in a Hurry team might have run a series of MVPs to both understand how their subject might interact with a solution (concierge MVP) as well as whether they’re likely to engage (Smoke Test MVP).

3. Specify Our first step here is just to think through what the user expects to do and how we can make that as natural as possible. This is where drafting testable user stories, looking at comps, and then pairing clickable prototypes with iterative usability testing are critical. Following that, make sure your analytics are answering the same questions, but at scale and with the observations available.

4. Perform If you did a good job in Specify and there are not overt visual problems (like ‘Can I click this part of the interface?’), you’ll be fine here.

5. Perceive We’re at the bottom of the stack and looping back up from World: Is the feedback from your application readily apparent to the user? For example, if you turn a switch for a lightbulb, you know if it worked or not. Is your user testing delivering similar clarity on user reactions?

6. Interpret Do they understand what they’re seeing? Does it make sense relative to what they expected to happen? For example, if the user just clicked ‘Save’, do they know that whatever they wanted to save is saved and OK? Or not?

7. Compare Have you delivered your target VP? Did they get what they wanted relative to the Goal/PS/JTBD?

How do you draft relevant, focused, testable user stories? Without these, everything else is on a shaky foundation. Sometimes, things will work out. Other times, they won’t. And it won’t be that clear why/not. Also, getting in the habit of pushing yourself on the relevance and testability of each little detail will make you a much better designer and a much better steward of where and why your team invests in building software.

For getting started:

  • A guide on creating user stories
  • A template for drafting user stories

How do you find the relevant patterns and apply them? Once you’ve got great narrative, it’s time to put the best-understood, most expected, most relevant interface patterns in front of your user. Getting there is a process.

For getting started: A guide on interface patterns and prototyping

How do you run qualitative user testing early and often? Once you’ve got something great to test, it’s time to get that design in front of a user, give them a prompt, and see what happens- then rinse and repeat with your design.

For getting started:

  • A guide on qualitative usability testing
  • A template for testing your user stories

How do you focus your outcomes and instrument actionable observation? Once you release product (features, etc.) into the wild, it’s important to make sure you’re always closing the loop with analytics that are a regular part of your agile cadences. For example, in a high-functioning practice of HDD the team should be interested in and reviewing focused analytics to see how they pair with the results of their qualitative usability testing.

For getting started: A guide on quantitative usability testing with Google Analytics.

To recap, what’s a Right Solution hypothesis for usability? Essentially, the usability hypothesis is that you’ve arrived at a high-performing UI pattern that minimizes cognitive load and maximizes the user’s ability to act on their motivation to connect with your proposition.


01 IDEA: If you’re writing good user stories, you already have your ideas implemented in the form of testable hypotheses. Stay focused and use these to anchor your testing. You’re not trying to test what color drop-down works best- you’re testing which affordances best deliver on a given user story.

02 HYPOTHESIS: Basically, the hypothesis is that ‘For [x] user story, this interface pattern will perform well, assuming we supply the relevant motivation and have the right assessments in place.’

03 EXPERIMENTAL DESIGN: Really, this means having a test set up that, beyond working, links user stories to prompts and narrative which supply motivation, and that has discernible assessments that help you make sure the subject didn’t click in the wrong place by mistake.

04 EXPERIMENTATION: It is OK to iterate on your prototypes and even your test plan in between sessions, particularly at the exploratory stages.

05 PIVOT OR PERSEVERE: Did the patterns perform well, or is it worth reviewing patterns and comparables and giving it another go?

There’s a lot of great material and successful practice on the engineering management part of application development. But should you pair program? Do estimates or go NoEstimates? None of these are the right choice for every team all of the time. In this sense, HDD is the only way to reliably drive up your velocity, or ‘f_e’. What I love about agile is that fundamental to its design is the coupling and integration of working out how to make your release content successful while you’re figuring out how to make your team more successful.

What does HDD have to offer application development, then? First, I think it’s useful to consider how well HDD integrates with agile in this sense and what existing habits you can borrow from it to improve your practice of HDD. For example, let’s say your team is used to doing weekly retrospectives about its practice of agile. That’s the obvious place to start introducing a retrospective on how your hypothesis testing went and deciding what that should mean for the next sprint’s backlog.

Second, let’s look at the linkage from continuous design. Primarily, what we’re looking to do is move fewer designs into development through more disciplined experimentation before we invest in development. This leaves the developers to do things better and keep the pipeline healthier (faster and able to produce more content or story points per sprint). We’d do this by making sure we’re dealing with a user that exists, a job/problem that exists for them, and only propositions that we’ve successfully tested with non-product MVPs.

But wait– what does that exactly mean: ‘only propositions that we’ve successfully tested with non-product MVPs’? In practice, there’s no such thing as fully validating a proposition. You’re constantly looking at user behavior and deciding where you’d be best off improving. To create balance and consistency from sprint to sprint, I like to use a ‘UX map’. You can read more about it at that link, but the basic idea is that for a given JTBD:VP pairing you map out the customer experience (CX) arc, broken into progressive stages that each have a description, a dependent variable you’ll observe to assess success, and ideas on things (independent variables or ‘IVs’) to test. For example, such a UX map could be built for HVAC in a Hurry’s work on the JTBD of ‘getting replacement parts to a job site’.

[Figure: UX map for HVAC in a Hurry’s ‘getting replacement parts to a job site’ JTBD]

From there, how can we use HDD to bring better, more testable design into the development process? One thing I like to do with user stories and HDD is to make a habit of pairing every single story with a simple, analytical question that would tell me whether the story is ‘done’ from the standpoint of creating the target user behavior or not. From there, I consider focal metrics. Here’s what that might look like at HinH.

[Figure: user stories paired with focal metrics at HVAC in a Hurry]
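As a sketch of that habit (the story texts, questions, and thresholds here are invented, not taken from the HVAC in a Hurry example), each story can carry its analytical question and focal metric:

```python
# Each user story is paired with the analytical question that defines 'done'
# in behavioral terms, plus the focal metric that answers it.
story_metrics = {
    "Technician orders a replacement part from the job site": {
        "question": "Do technicians who open the part picker complete an order?",
        "focal_metric": "orders_completed / part_picker_sessions",
        "target": 0.60,  # hypothetical threshold
    },
    "Dispatcher assigns the nearest available technician": {
        "question": "Are assignments made without manual overrides?",
        "focal_metric": "auto_assignments / total_assignments",
        "target": 0.80,
    },
}
```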

For the last couple of decades, test and deploy/ops were often treated like a kind of stepchild of development: something that had to happen at the end of development and was the sole responsibility of an outside group of specialists. It didn’t make sense then, and now an integral test capability is table stakes for getting to a continuous product pipeline, which is at the core of HDD itself.

A continuous pipeline means that you release a lot. Getting good at releasing relieves a lot of energy-draining stress on the product team, as well as creating the opportunity for the rapid learning that HDD requires. Interestingly, research by outfits like DORA (now part of Google) and CircleCI shows that teams able to do this both release faster and encounter fewer bugs in production.

Amazon famously releases code every 11.6 seconds. What this means is that a developer can push a button to commit code and everything from there to that code showing up in front of a customer is automated. How does that happen? For starters, there are two big (related) areas: Test & Deploy.

While there is some important plumbing that I’ll cover in the next couple of sections, in practice most teams struggle with test coverage. What does that mean? In principle, it means that even though you can’t test everything, you iterate toward test automation coverage that catches most bugs before they end up in front of a user. For most teams, that means a ‘pyramid’ of tests like you see here, where the x-axis is the number of tests and the y-axis is the level of abstraction of the tests.

[Figure: the test pyramid]

The reason for the pyramid shape is that the tests are progressively more work to create and maintain, and each one provides less and less isolation of where a bug actually resides. In terms of iteration and retrospectives, what this means is that you’re always asking ‘What’s the lowest-level test that could have caught this bug?’.

Unit tests isolate the operation of a single function and make sure it works as expected. Integration tests span two or more functions, and system tests, as you’d guess, more or less emulate the way a user or endpoint would interact with the system.
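For illustration, here’s a minimal pytest-style sketch of the bottom two layers; the `price_with_tax` and `checkout` functions are invented for the example:

```python
def price_with_tax(price: float, tax_rate: float) -> float:
    return round(price * (1 + tax_rate), 2)

def checkout(prices: list[float], tax_rate: float) -> float:
    return sum(price_with_tax(p, tax_rate) for p in prices)

# Unit test: isolates a single function.
def test_price_with_tax():
    assert price_with_tax(10.00, 0.10) == 11.00

# Integration test: spans two functions working together.
def test_checkout_totals_taxed_prices():
    assert checkout([10.00, 20.00], 0.10) == 33.00
```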

Feature Flags: These are a separate but somewhat complementary facility. The basic idea is that as you add new features, each one has a flag that can enable or disable it. Flags start out disabled, and you make sure the new features don’t break anything. Then, for small sets of users, you can enable them and test whether a) the metrics look normal and nothing’s broken and b) closer to the core of HDD, whether users are actually interacting with the new feature.
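Here’s a minimal sketch of the mechanism (a hand-rolled flag check with a percentage rollout; real teams typically use a flag service, and the flag and function names here are hypothetical):

```python
# New features ship disabled, then get enabled for small user cohorts.
FLAGS = {
    "realtime_chat": {"enabled": True, "rollout_percent": 5},  # 5% of users
}

def flag_on(flag_name: str, user_id: int) -> bool:
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Stable bucketing: the same user always lands in the same cohort.
    return user_id % 100 < flag["rollout_percent"]

def render_sidebar(user_id: int) -> str:
    if flag_on("realtime_chat", user_id):
        return "sidebar-with-chat"   # instrumented so we can watch usage
    return "sidebar-classic"
```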

In the olden days (which is when I last did this kind of thing for work), if you wanted to update a web application, you had to log in to a server, upload the software, and then configure it, maybe with the help of some scripts. Very often, things didn’t go according to plan, for the predictable reason that there was a lot of opportunity for variation between how the update was tested and the machine you were updating, not to mention how you were updating.

Now computers do all that, but you still have to program them. As such, deployment has increasingly become a job where you’re coding solutions on top of platforms like Kubernetes, Chef, and Terraform. The folks doing this are (hopefully) working closely with developers. For example, rather than spending time and money on writing documentation for an upgrade, the team would collaborate on code and configuration that runs on the kind of platform I mentioned earlier.

Pipeline Automation

Most teams with a continuous pipeline orchestrate something like what you see below with an application made for this, like Jenkins or CircleCI. The Manual Validation step you see is, of course, optional and not a prevalent part of truly continuous delivery. In fact, if you automate up to the point of a staging server or similar before you release, that’s what’s generally called continuous integration.

Finally, the two yellow items you see are where the team centralizes their code (version control) and the build that they’re taking from commit to deploy (artifact repository).

[Figure: a continuous delivery pipeline]
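As a toy sketch of that orchestration (real pipelines are declared in the CI tool’s own configuration, such as a Jenkinsfile or CircleCI YAML; the commands and script names here are placeholders), the essential behavior is ordered stages that fail fast:

```python
import subprocess

# Stages run in order; any failure stops the pipeline before deploy.
STAGES = [
    ("unit tests",        ["pytest", "tests/unit"]),
    ("integration tests", ["pytest", "tests/integration"]),
    ("build artifact",    ["docker", "build", "-t", "app:latest", "."]),
    ("deploy to staging", ["./deploy.sh", "staging"]),     # hypothetical script
    ("deploy to prod",    ["./deploy.sh", "production"]),
]

def run_pipeline() -> bool:
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            print(f"Pipeline stopped: {name} failed.")
            return False
    return True
```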

To recap, what’s the hypothesis?

Well, you can’t test everything, but you can make sure that you’re testing what tends to affect your users, and likewise in the deployment process. I’d summarize this area of HDD as follows:

[Figure: the continuous delivery hypothesis]

01 IDEA: You can’t test everything and you can’t foresee everything that might go wrong. This is important for the team to internalize. But you can iteratively, purposefully focus your test investments.

02 HYPOTHESIS: Relative to the test pyramid, you’re looking to get to a place where you’re finding issues with the least expensive, least complex test possible: not an integration test when a unit test could have caught the issue, and so forth.

03 EXPERIMENTAL DESIGN: As you run integrations and deployments, you see what happens! Most teams move from continuous integration (a deploy-ready system that’s not actually in front of customers) to continuous deployment.

04 EXPERIMENTATION: In retrospectives, it’s important to look at the test suite and ask what would have made the most sense and how the current processes were or weren’t facilitating that.

05 PIVOT OR PERSEVERE: It takes work, but teams get there all the time, and research shows they end up both releasing more often and encountering fewer production bugs, believe it or not!

Topline, I would say it’s a way to unify and focus your work across those disciplines. I’ve found that’s a pretty big deal. While none of those practices are hard to understand, practice on the ground is patchy. Usually, the problem is having the confidence that doing things well is going to be worthwhile, and knowing who should be participating when.

My hope is that, with this guide and the supporting material (and of course the wider body of practice), teams will get in the habit of always having a set of hypotheses, and that this will improve their work and their confidence as a team.

Naturally, these various disciplines have a lot to do with each other, and I’ve summarized some of that here:

[Figure: how the disciplines of HDD interrelate]

Mostly, I find practitioners learn about this through their work, but I’ll point out a few big points of intersection that I think are particularly notable:

  • Learn by Observing Humans: We all tend to jump on solutions and over-invest in them when we should be observing our users, seeing how they behave, and then iterating. HDD helps reinforce problem-first diagnosis through its connections to relevant practice.
  • Focus on What Users Actually Do: A lot of things might happen, more than we can deal with properly. The good news is that by just observing what actually happens, you can make things a lot easier on yourself.
  • Move Fast, but Minimize Blast Radius: Working across so many types of orgs at present (startups, corporations, a university), I can’t overstate how important this is, and yet how big a shift it is for more traditional organizations. The idea of ‘moving fast and breaking things’ is terrifying to these places, and the reality is that with practice you can move fast and rarely break things, and only break them a tiny bit when you do. Without this, you end up stuck waiting for someone else to create the perfect plan or for that next super important hire to fix everything (spoiler: it won’t and they don’t).
  • Minimize Waste: Succeeding at innovation is improbable, and yet it happens all the time. Practices like Lean Startup do not guarantee that by following them you’ll always succeed; however, they do promise that by minimizing waste you can test five ideas in the time/money/energy it would otherwise take you to test one, making the improbable probable.

What I love about Hypothesis-Driven Development is that it solves a really hard problem with practice: all these behaviors are important, and yet you can’t learn to practice them all immediately. What HDD does is give you a foundation where you can see what’s similar across these practices and how your practice in one is reinforcing the others. It’s also a good tool for deciding where you need to focus on any given project or team.


The 6 Steps that We Use for Hypothesis-Driven Development


One of the greatest fears of product managers is to create an app that flops because it's based on untested assumptions. After successfully launching more than 20 products, we're convinced that we've found the right approach for hypothesis-driven development.

In this guide, I'll show you how we validated the hypotheses to ensure that the apps met the users' expectations and needs.

What is hypothesis-driven development?

Hypothesis-driven development is a prototyping methodology that allows product designers to develop, test, and rebuild a product until it’s acceptable to users. It is an iterative approach that explores assumptions defined during the project and attempts to validate them with user feedback.

What you have assumed during the initial stage of development may not be valid for your users. Even if assumptions are backed by historical data, user behavior can be affected by the specific audience and other factors. Hypothesis-driven development removes these uncertainties as the project progresses.

hypothesis-driven development

Why we use hypothesis-driven development

For us, the hypothesis-driven approach provides a structured way to consolidate ideas and build hypotheses based on objective criteria. It’s also less costly to test the prototype before production.

Using this approach has reliably allowed us to identify what should be tested, how, and in which order. It gives us a deep understanding of how we prioritize features and how each one is connected to the business goals and desired user outcomes.

We’re also able to track and compare the desired and real outcomes of developing the features. 

The process of Prototype Development that we use

Our success in building apps that are well-accepted by users is based on the Lean UX definition of hypothesis. We believe that the business outcome will be achieved if the user’s outcome is fulfilled for the particular feature. 

Here’s the process flow:

How Might We technique → Dot voting (based on estimated/assumptive impact) → converting into a hypothesis → define testing methodology (research method + success/fail criteria) → impact effort scale for prioritizing → test, learn, repeat.

Once the hypothesis is proven right, the feature is escalated into the development track for UI design and development. 


Step 1: List Down Questions And Assumptions

Whether it’s the initial stage of the project or after the launch, there are always uncertainties or ideas to further improve the existing product. In order to move forward, you’ll need to turn the ideas into structured hypotheses where they can be tested prior to production.  

To start with, jot the ideas or assumptions down on paper or a sticky note. 

Then, you’ll want to widen the scope of the questions and assumptions into possible solutions. The How Might We (HMW) technique is handy in rephrasing the statements into questions that facilitate brainstorming.

For example, if you have a social media app with a low number of users, asking, “How might we increase the number of users for the app?” makes brainstorming easier. 

Step 2: Dot Vote to Prioritize Questions and Assumptions

Once you’ve got a list of questions, it’s time to decide which are potentially more impactful for the product. The Dot Vote method, where team members are given dots to place on the questions, helps prioritize the questions and assumptions. 

Our team uses this method when we’re faced with many ideas and need to eliminate some of them. We start by grouping similar ideas and using 3–5 dots each to vote. At the end of the process, we’ll have preliminary data on the possible impact and our team’s interest in developing certain features.

This method allows us to prioritize the statements derived from the HMW technique, and we only convert the top ones.

Step 3: Develop Hypotheses from Questions

The questions lead to a brainstorming session where the answers become hypotheses for the product. The hypothesis is meant to create a framework that allows the questions and solutions to be defined clearly for validation.

Our team follows a specific format in forming hypotheses. We structure the statement as follows:

We believe we will achieve [business outcome],

if [the persona]

solves their need in [user outcome] using [feature].

Here’s a hypothesis we’ve created:

We believe we will achieve DAU = 100 if Mike (our proto persona) solves his need of recording and sharing videos instantaneously using our camera and cloud storage.
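One way to keep that format consistent is to store hypotheses as structured data rather than free text. Here’s a minimal sketch (the field names are ours, not Uptech’s):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    business_outcome: str  # e.g. "DAU = 100"
    persona: str           # e.g. "Mike (our proto persona)"
    user_outcome: str      # the need being solved
    feature: str           # the feature that solves it

    def statement(self) -> str:
        return (f"We believe we will achieve {self.business_outcome} "
                f"if {self.persona} solves their need in {self.user_outcome} "
                f"using {self.feature}.")

h = Hypothesis("DAU = 100", "Mike (our proto persona)",
               "recording and sharing videos instantaneously",
               "our camera and cloud storage")
print(h.statement())
```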


Step 4: Test the Hypothesis with an Experiment

It’s crucial to validate each of the assumptions made on the product features. Based on the hypotheses, experiments in the form of interviews, surveys, usability testing, and so forth are created to determine if the assumptions are aligned with reality. 

Each of the methods provides some level of confidence. Therefore, you don’t want to be 100% reliant on a particular method as it’s based on a sample of users.

It’s important to choose a research method that allows validation to be done with minimal effort. Even though hypothesis validation provides a degree of confidence, not all assumptions can be tested, and there could be a margin of error in the data obtained, as the test is conducted on a sample of people.

The experiments are designed in such a way that feedback can be compared with the predicted outcome. Only validated hypotheses are brought forward for development.

Testing all the hypotheses can be tedious. To be more efficient, you can use the impact effort scale. This method allows you to focus on hypotheses that are potentially high value and easy to validate. 

You can also work on hypotheses that deliver high impact but require high effort. Ignore those that require high effort but deliver low impact, and keep hypotheses with low impact and low effort in the backlog.
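Here’s a minimal sketch of that triage; the 1–10 scores and the cutoff of 7 are placeholder assumptions:

```python
def triage(hypotheses: list[dict]) -> dict:
    """Sort hypotheses into impact/effort quadrants."""
    buckets = {"test_first": [], "plan_for": [], "backlog": [], "ignore": []}
    for h in hypotheses:
        high_impact = h["impact"] >= 7   # 1-10 scores; cutoffs are placeholders
        high_effort = h["effort"] >= 7
        if high_impact and not high_effort:
            buckets["test_first"].append(h["name"])   # high value, easy to validate
        elif high_impact and high_effort:
            buckets["plan_for"].append(h["name"])     # high value, plan for the effort
        elif not high_impact and not high_effort:
            buckets["backlog"].append(h["name"])      # cheap but low value
        else:
            buckets["ignore"].append(h["name"])       # high effort, low impact
    return buckets

print(triage([{"name": "one-tap reorder", "impact": 9, "effort": 3}]))
# {'test_first': ['one-tap reorder'], 'plan_for': [], 'backlog': [], 'ignore': []}
```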

At Uptech, we assign each hypothesis with clear testing criteria. We rank the hypothesis with a binary ‘task success’ and subjective ‘effort on task’ where the latter is scored from 1 to 10. 

While we’re conducting the test, we also collect qualitative data such as user feedback. We have a habit of segregating the feedback into pros, cons, and neutral with color-coded stickers (red for cons, green for pros, blue for neutral).

The best practice is to test each hypothesis on at least 5 users.
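Here’s a sketch of scoring sessions that way, with a binary task success and a 1–10 effort rating per user; the aggregation is our illustration, not Uptech’s exact formula:

```python
# One row per test session: did the user succeed, and how hard was it (1-10)?
sessions = [
    {"user": "u1", "task_success": True,  "effort_on_task": 3},
    {"user": "u2", "task_success": True,  "effort_on_task": 4},
    {"user": "u3", "task_success": False, "effort_on_task": 9},
    {"user": "u4", "task_success": True,  "effort_on_task": 2},
    {"user": "u5", "task_success": True,  "effort_on_task": 5},
]

success_rate = sum(s["task_success"] for s in sessions) / len(sessions)
avg_effort = sum(s["effort_on_task"] for s in sessions) / len(sessions)
print(f"success: {success_rate:.0%}, avg effort: {avg_effort:.1f}/10")
# success: 80%, avg effort: 4.6/10
```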

Step 5: Learn, Build (and Repeat)

The hypothesis-driven approach is not a single-pass process. Often, you’ll find that some of the hypotheses are proven false. Rather than be disheartened, you should use the data gathered to fine-tune the hypothesis and design a better experiment in the next phase.

Treat the entire cycle as a learning process where you’ll better understand the product and the customers. 

We found the process helpful when developing an MVP for Carbon Club, an environmental startup in the UK. The app allows users to donate to charity based on the carbon footprint they produce.

In order to calculate the carbon footprint, we’re weighing the options of

  • Connecting the app to the users’ bank account to monitor the carbon footprint based on purchases made.
  • Allowing users to take quizzes on their lifestyles.

Upon validation, we found that all of the users opted for the second option, as they were concerned about linking an unknown app to their bank account.

The result made us shelve the first assumption we’d made during pre-Sprint research. It also saved our client $50,000 and a few months of work, as connecting the app to a bank account requires a huge effort.


Step 6: Implement Product and Maintain

Once you’ve got the confidence that the remaining hypotheses are validated, it’s time to develop the product. However, testing must be continued even after the product is launched. 

You should be on your toes as customers’ demands, market trends, local economics, and other conditions may require some features to evolve. 


Our takeaways for hypothesis-driven development

If there’s anything that you could pick from our experience, it’s these 5 points.

1. Should every idea go straight into the backlog? No, unless they are validated with substantial evidence. 

2. While it’s hard to define business outcomes with specific metrics and desired values, you should do it anyway. Try to be as specific as possible, and avoid general terms. Give your best effort and adjust as you receive new data.  

3. Get all product teams involved as the best ideas are born from collaboration.

4. Start with a plan consisting of two main parameters: criteria of success and research methods. Besides qualitative insights, you need to set objective criteria to determine if a test is successful. Use the Test Card to validate the assumptions strategically.

5. The methodology that we’ve recommended in this article works not only for products. We applied it at the end of 2019 to set the company’s strategic goals and ended up with robust results and an engaged, aligned team.

You'll have a better idea of which features would lead to a successful product with hypothesis-driven development. Rather than vague assumptions, the consolidated data from users will provide a clear direction for your development team. 

As for the hypotheses that don't make the cut, improvise, re-test, and leverage for future upgrades.




Hypothesis-Driven Development

Hypothesis-driven development (HDD), also known as hypothesis-driven product development, is an approach used in software development and product management.

HDD involves creating hypotheses about user behavior, needs, or desired outcomes, and then designing and implementing experiments to validate or invalidate those hypotheses.


Why use a hypothesis-driven approach?

With hypothesis-driven development, instead of making assumptions and building products or features based on those assumptions, teams should formulate hypotheses and conduct experiments to gather data and insights.

This method assists with making informed decisions and reduces the overall risk of building products that do not meet user needs or solve their problems.

How do you implement hypothesis-driven development?

At a high level, here’s a general approach to implementing HDD:

  • Identify the problem or opportunity: Begin by identifying the problem or opportunity that you want to address with your product or feature.
  • Create a hypothesis: Clearly define a hypothesis that describes a specific user behavior, need, or outcome you believe will occur if you implement the solution.
  • Design an experiment: Determine the best way to test your hypothesis. This could involve creating a prototype, conducting user interviews, A/B testing, or other forms of user research.
  • Implement the experiment: Execute the experiment by building the necessary components or conducting the research activities.
  • Collect and analyze data: Gather data from the experiment and analyze the results to determine if the hypothesis is supported or not.
  • If the hypothesis is supported, you can move forward with further development.
  • If the hypothesis is not supported, you may need to pivot, refine the hypothesis, or explore alternative solutions.
  • Rinse and repeat: Continuously repeat the process, iterating and refining your hypotheses and experiments to guide the development of your product or feature.

Hypothesis-driven development emphasizes a data-driven and iterative approach to product development, allowing teams to make more informed decisions, validate assumptions, and ultimately deliver products that better meet user needs.
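As a compact illustration of this loop, here’s a minimal sketch; the hypothesis, data, and success check are invented, and any real experimentation framework will have its own API:

```python
# Each hypothesis carries the experiment that tests it and the success check.
def run_hdd(hypotheses):
    """Iterate through hypotheses: experiment, analyze, pivot or persevere."""
    validated, rejected = [], []
    for h in hypotheses:
        data = h["experiment"]()                 # run the experiment
        if h["is_supported"](data):              # analyze against the criteria
            validated.append(h["name"])          # persevere: build on it
        else:
            rejected.append(h["name"])           # pivot: refine or drop
    return validated, rejected

# Hypothetical usage
hypotheses = [
    {"name": "users want dark mode",
     "experiment": lambda: {"opt_in_rate": 0.62},
     "is_supported": lambda d: d["opt_in_rate"] >= 0.50},
]
print(run_hdd(hypotheses))  # (['users want dark mode'], [])
```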


Hypothesis-Driven Development

Hypothesis-Driven Development (HDD) is a software development approach rooted in the philosophy of systematically formulating and testing hypotheses to drive decision-making and improvements in a product or system. At its core, HDD seeks to align development efforts with the goal of discovering what resonates with users. This philosophy recognizes that assumptions about user behavior and preferences can often be flawed, and the best way to understand users is through experimentation and empirical evidence.

In the context of HDD, features and user stories are often framed as hypotheses. This means that instead of assuming a particular feature or enhancement will automatically improve the user experience, development teams express these elements as testable statements. For example, a hypothesis might propose that introducing a real-time chat feature will lead to increased user engagement by facilitating instant communication.

The Process

The process of Hypothesis-Driven Development involves a series of steps. Initially, development teams formulate clear and specific hypotheses based on the goals of the project and the anticipated impact on users. These hypotheses are not merely speculative ideas but are designed to be testable through concrete experiments.

Once hypotheses are established, the next step is to design and implement experiments within the software. This could involve introducing new features, modifying existing ones, or making adjustments to the user interface. Throughout this process, the emphasis is on collecting relevant data that can objectively measure the impact of the changes being tested.

Validating Hypotheses

The collected data is then rigorously analyzed to determine the validity of the hypotheses. This analytical phase is critical for extracting actionable insights and understanding how users respond to the implemented changes. If a hypothesis is validated, the development team considers how to build upon the success. Conversely, if a hypothesis is invalidated, adjustments are made based on the lessons learned from the experiment.

HDD embraces a cycle of continuous improvement. As new insights are gained and user preferences evolve, the development process remains flexible and adaptive. This iterative approach allows teams to respond to changing conditions and ensures that the software is consistently refined in ways that genuinely resonate with users. In essence, Hypothesis-Driven Development serves as a methodology that not only recognizes the complexity of user behavior but actively seeks to uncover what truly works through a structured and empirical approach.


Why hypothesis-driven development is key to DevOps


The definition of DevOps offered by Donovan Brown is "the union of people, process, and products to enable continuous delivery of value to our customers." It accentuates the importance of continuous delivery of value. Let's discuss how experimentation is at the heart of modern development practices.


Reflecting on the past

Before we get into hypothesis-driven development, let's quickly review how we deliver value using waterfall, agile, deployment rings, and feature flags.

In the days of waterfall, we had predictable and process-driven delivery. However, we only delivered value towards the end of the development lifecycle, often failing late as the solution drifted from the original requirements, or our killer features were outdated by the time we finally shipped.

[Figure: waterfall delivery of a single release]

Here, we have one release X and eight features, which are all deployed and exposed to the patiently waiting user. We are continuously delivering value, but with a typical release cadence of six months to two years, the value of the features declines as the world continues to move on. It worked well enough when there was time to plan and a lower expectation to react to more immediate needs.

The introduction of agile allowed us to create and respond to change so we could continuously deliver working software, sense, learn, and respond.

[Figure: agile delivery across releases X.1, X.2, and X.3]

Now, we have three releases: X.1, X.2, and X.3. After the X.1 release, we improved feature 3 based on feedback and re-deployed it in release X.3. This is a simple example of delivering features more often, focused on working software, and responding to user feedback. We are on the path of continuous delivery, focused on our key stakeholders: our users.

Using deployment rings and/or feature flags, we can decouple release deployment and feature exposure, down to the individual user, to control the exposure (the blast radius) of features. We can conduct experiments; progressively expose, test, enable, and hide features; fine-tune releases; and continuously pivot on learnings and feedback.

When we add feature flags to the previous workflow, we can toggle features to be ON (enabled and exposed) or OFF (hidden).

[Figure: a release with feature flags toggled on and off]

Here, feature flags for features 2, 4, and 8 are OFF, which results in the user being exposed to fewer of the features. All features have been deployed but are not exposed (yet). We can fine-tune the features (value) of each release after deploying to production.

Ring-based deployment limits the impact (blast) on users while we gradually deploy and evaluate one or more features through observation. Rings allow us to deploy features progressively and have multiple releases (v1, v1.1, and v1.2) running in parallel.

Ring-based deployment

Exposing features in the canary and early-adopter rings enables us to evaluate features without the risk of an all-or-nothing big-bang deployment.

Feature flags decouple release deployment and feature exposure. You "flip the flag" to expose a new feature, perform an emergency rollback by resetting the flag, use rules to hide features, and allow users to toggle preview features.

Toggling feature flags on/off

When you combine deployment rings and feature flags, you can progressively deploy a release through rings and use feature flags to fine-tune the deployed release.
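A minimal sketch of that combination (the ring names, flag names, and promotion rule are illustrative assumptions):

```python
# Rings order the rollout; flags decide what's exposed inside each ring.
RINGS = ["canary", "early_adopters", "everyone"]   # progressively larger blast radius

feature_flags = {
    "new_theme_picker": {"canary": True, "early_adopters": True, "everyone": False},
}

def exposed(feature: str, ring: str) -> bool:
    return feature_flags.get(feature, {}).get(ring, False)

def promote(feature: str) -> None:
    """Enable the feature in the next ring once metrics look healthy."""
    flags = feature_flags[feature]
    for ring in RINGS:
        if not flags[ring]:
            flags[ring] = True
            return

print(exposed("new_theme_picker", "everyone"))  # False
promote("new_theme_picker")
print(exposed("new_theme_picker", "everyone"))  # True
```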

See deploying new releases: feature flags or rings, what's the cost of feature flags, and breaking down walls between people, process, and products for discussions on feature flags, deployment rings, and related topics.

Adding hypothesis-driven development to the mix

Hypothesis-driven development is based on a series of experiments to validate or disprove a hypothesis in a complex problem domain where we have unknown-unknowns. We want to find viable ideas or fail fast. Instead of developing a monolithic solution and performing a big-bang release, we iterate through hypotheses, evaluating how features perform and, most importantly, how and if customers use them.

Template: We believe {customer/business segment} wants {product/feature/service} because {value proposition}.

Example: We believe that users want to be able to select different themes because it will result in improved user satisfaction. We expect 50% or more users to select a non-default theme and to see a 5% increase in user engagement.
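Sticking with the theme example, here’s a minimal sketch of turning those stated success criteria into an executable check; the metric names and numbers are invented:

```python
# Success criteria from the hypothesis: >= 50% pick a non-default theme
# and user engagement rises by >= 5%.
def evaluate_theme_experiment(results: dict) -> bool:
    non_default_share = results["non_default_theme_users"] / results["active_users"]
    engagement_lift = ((results["engagement_after"] - results["engagement_before"])
                       / results["engagement_before"])
    return non_default_share >= 0.50 and engagement_lift >= 0.05

results = {"active_users": 1200, "non_default_theme_users": 660,
           "engagement_before": 3.4, "engagement_after": 3.7}
print("accept hypothesis" if evaluate_theme_experiment(results) else "reject hypothesis")
```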

Every experiment must be based on a hypothesis, have a measurable conclusion, and contribute to feature and overall product learning. For each experiment, consider these steps:

  • Observe your user
  • Define a hypothesis and an experiment to assess the hypothesis
  • Define clear success criteria (e.g., a 5% increase in user engagement)
  • Run the experiment
  • Evaluate the results and either accept or reject the hypothesis

Let's have another look at our sample release with eight hypothetical features.

[Figure: experiments across eight features in releases X.1 through X.3]

When we deploy each feature, we can observe user behavior and feedback, and prove or disprove the hypothesis that motivated the deployment. As you can see, the experiment fails for features 2 and 6, allowing us to fail-fast and remove them from the solution. We do not want to carry waste that is not delivering value or delighting our users! The experiment for feature 3 is inconclusive, so we adapt the feature, repeat the experiment, and perform A/B testing in Release X.2. Based on observations, we identify the variant feature 3.2 as the winner and re-deploy in release X.3. We only expose the features that passed the experiment and satisfy the users.

Hypothesis-driven development lights up progressive exposure

When we combine hypothesis-driven development with progressive exposure strategies, we can vertically slice our solution, incrementally delivering on our long-term vision. With each slice, we progressively expose experiments, enable features that delight our users and hide those that did not make the cut.

But there is more. When we embrace hypothesis-driven development, we can learn how technology works together, or not, and what our customers need and want. We also complement the test-driven development (TDD) principle. TDD encourages us to write the test first (hypothesis), then confirm our features are correct (experiment), and succeed or fail the test (evaluate). It is all about quality and delighting our users, as outlined in principles 1, 3, and 7 of the Agile Manifesto:

  • Our highest priority is to satisfy the customers through early and continuous delivery of value.
  • Deliver software often, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Working software is the primary measure of progress.

More importantly, we introduce a new mindset that breaks down the walls between development, business, and operations to view, design, develop, deliver, and observe our solution in an iterative series of experiments, adopting features based on scientific analysis, user behavior, and feedback in production. We can evolve our solutions in thin slices through observation and learning in production, a luxury that other engineering disciplines, such as aerospace or civil engineering, can only dream of.

The good news is that hypothesis-driven development supports the empirical process theory and its three pillars: Transparency, Inspection, and Adaptation.


But there is more. Based on lean principles, we must pivot or persevere after we measure and inspect the feedback. Using feature toggles in conjunction with hypothesis-driven development, we get the best of both worlds, as well as the ability to use A/B testing to make decisions on feedback, such as likes/dislikes and value/waste.

Hypothesis-driven development:

  • Is about a series of experiments to confirm or disprove a hypothesis. Identify value!
  • Delivers a measurable conclusion and enables continued learning.
  • Enables continuous feedback from the key stakeholder—the user—to understand the unknown-unknowns!
  • Enables us to understand the evolving landscape into which we progressively expose value.

Progressive exposure:

  • Is not an excuse to hide non-production-ready code. Always ship quality!
  • Is about deploying a release of features through rings in production. Limit blast radius!
  • Is about enabling or disabling features in production. Fine-tune release values!
  • Relies on circuit breakers to protect the infrastructure from implications of progressive exposure. Observe, sense, act!

What have you learned about progressive exposure strategies and hypothesis-driven development? We look forward to your candid feedback.


Scrum and Hypothesis Driven Development


Scrum was built to better manage risk and deliver value by focusing on inspection and encouraging adaptation. It uses an empirical approach combined with self-organizing, empowered teams to effectively work on complex problems. And after reading Jeff Gothelf’s and Josh Seiden’s book “Sense and Respond: How Successful Organizations Listen to Customers and Create New Products Continuously”, I realized that the world is full of complex problems. This got me thinking about the relationship between Scrum and modern organizations as they pivot toward becoming able to ‘sense and respond’. So, I decided to ask Jeff Gothelf… Here is a condensed version of our conversation.

hypothesis driven development examples

Sense & Respond was exactly this attempt to change the hearts and minds of managers, executives and aspiring managers. It makes the case that first and foremost, any business of scale or that seeks to scale is in the software business. We share a series of compelling case studies to illustrate how this is true across nearly every industry. We then move on to the second half of the book where we discuss how managing a software-based business is different. We cover culture, process, staffing, planning, budgeting and incentives. Change has to be holistic.

What you are describing is the challenge of ownership. Product Owner (PO) is the role in the Scrum Framework empowered to make decisions about what and when things are in the product. But disempowerment is a real problem in most organizations, with their POs not having the power to make decisions. Is this something you see when introducing the ideas of Sense and Respond?

There will always be situations where things simply have to get built. Legal and compliance are two great examples of this. In these low-risk, low-uncertainty situations, a more straightforward execution is usually warranted. That said, just because a feature has to be included for compliance reasons doesn’t mean there is only one way to implement it. What teams will often find is that there is actual flexibility in how these (actual) requirements can be implemented, with some implementations being more successful and less distracting to the overall user experience than others. The level of discovery that you would expend on these features is admittedly smaller, but it shouldn’t be thrown out altogether, as these features still need to figure into a holistic workflow.


Data-driven hypothesis development

Have you ever tried solving a difficult problem with no clear path forward? Perhaps it’s a problem that may or may not be well understood or it’s a problem with many ideas of things that might work, and you are facing this without an approach to guide you. We've been there and lived this very scenario and will take you through an approach we've found to be very effective.

As Donald Rumsfeld once said, the problems we solve every day can be classified into four categories:

“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”

[Figure: risk matrix classifying problems into four categories]

When working on problems with little data and high levels of risk (i.e. the “known unknowns” and “unknown unknowns”), it’s important to focus on finding the shortest path to the ‘correct’ solutions and removing the ‘incorrect’ solutions as soon as possible. In our experience, the best approach for solving these problems is to use hypotheses to focus your thinking and inform your decisions with data, which is known as data-driven hypothesis development.

What is data-driven hypothesis development?

Data-driven hypothesis development (DDHD) is an effective approach when facing complex “known unknown” and “unknown unknown” problems. There are four steps to the approach:

1. Define the goal using data

Problem solving starts with well-defined problems; however, we know that “known unknowns” and “unknown unknowns” are rarely well defined. Most of the time, the only thing we do know is that the system is broken and has many problems; what we don’t know is which problem is critical to helping us achieve the strategic business goal.

The key is to define the problem using data, thus bringing the needed clarity. State the problem, define the metrics upfront and align these with your goals.

Set up a dashboard to visualize and track all key metrics.

2. Hypothesize

Hypotheses are introduced to create a path to the next state. This requires a change in mindset; proposed solutions are viewed as a series of experiments done in rapid iterations until a desirable outcome is achieved or the experiment is proved not viable.

[Figure: hypothesis card]

One hypothesis is made up of one or many experiments. Each experiment is independent with a clear outcome, criteria and metrics. It should be short to build and short to test and learn. Results should be a clear indicator of success or not. 

If the result of the experiment has a positive impact on the outcome, the next step would be to implement the change in production. 

If an experiment is proved not viable, mark it as a failure, track and share lessons learned. 

The capability to fail fast is pivotal. As we don’t know the exact path to the destination, we need to be able to quickly test different paths to effectively identify the next experiment.

Each hypothesis needs to be able to answer the question: when should we stop? At what point will you have enough information to make an informed decision? 
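One way to make the stop rule concrete is to attach it to the experiment itself. A minimal sketch, with invented field names and thresholds:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    metric: str
    success_threshold: float   # a clear indicator of success
    max_iterations: int        # the stop rule: enough information to decide

    def decide(self, observed: float, iteration: int) -> str:
        if observed >= self.success_threshold:
            return "implement the change in production"
        if iteration >= self.max_iterations:
            return "stop: mark as failed, share lessons learned"
        return "iterate: refine and rerun"

exp = Experiment("cache hot queries", "p95_latency_improvement",
                 success_threshold=0.20, max_iterations=3)
print(exp.decide(observed=0.08, iteration=1))  # iterate: refine and rerun
```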

3. Fast feedback

Experiments need to be small and specific so that we can receive feedback in days rather than weeks. There are at least two feedback loops to build in when a code change is involved:

An isolated testing environment: to run the same set of testing suites to baseline the metrics and compare them with our experiment’s results

The production environment: once the experiment is proven in the testing environment it needs to be further tested in a production environment. 

Fast feedback delivered through feedback loops is critical in determining the next step.

Fast feedback requires solid engineering practices like continuous delivery to accelerate experimentation and maximize what we can learn. We call out a few practices as an example, different systems might require different techniques:

Regression testing automation: for an orphaned legacy system, it’s important to build a regression testing suite as the learning progresses (have a baseline first then evolve as you go), providing a safety net and early feedback if any change is wrong. 

Monitoring and observability: monitoring is quite often a big gap in legacy systems, not to mention observability. Start with monitoring; you will learn how the system is functioning and utilizing resources, when it will break, and how it will behave in failure modes.

Performance testing automation: when there’s a problem with performance, automate the performance testing so you can baseline the problem and continuously run the tests with every change.

A/B testing in production: set up the system with the basic ability to run the current system and the change in parallel, and to roll back automatically if there is a need.
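As a sketch of that last guardrail (the health metrics and thresholds are placeholder assumptions):

```python
def compare_and_decide(baseline: dict, variant: dict,
                       max_error_increase: float = 0.01) -> str:
    """Run the current system and the change in parallel; roll back on regression."""
    if variant["error_rate"] > baseline["error_rate"] + max_error_increase:
        return "rollback"           # automatic rollback on degraded health
    if variant["latency_p95"] > baseline["latency_p95"] * 1.10:
        return "rollback"
    return "keep variant"

print(compare_and_decide(
    baseline={"error_rate": 0.002, "latency_p95": 180},
    variant={"error_rate": 0.003, "latency_p95": 175},
))  # keep variant
```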

4. Incremental delivery of value

The value created by experiments, successful and failed, can be divided into three categories:

Tangible improvements on the system

Increased understanding of the problem and more data-informed decisions

Improved understanding of the system via documentation, monitoring, tests, etc.

It’s easy to take successful experiments as the only value delivered. Yet in the complex world of “known unknowns” and “unknown unknowns”, the value of  “failed experiments” is equally important, providing clarity in decision making.

Another often-ignored value delivered is the understanding of the problem and system using data. This is extremely useful when there’s heavy loss of domain knowledge, providing a low-cost, less risky way to rebuild knowledge.

Data-driven hypothesis development enables you to find the shortest path to your desired outcome, in itself delivering value. 

[Figure: the data-driven hypothesis development approach]

When facing a complex problem with many known unknowns and unknown unknowns, being data-driven serves as a compass, helping the team stay focused and prioritizing the right work. A data-driven approach helps you deliver incremental value and lets you know when to stop.

Why we decided to use data-driven hypothesis development

Our client presented us with a proposed solution, a rebuild, asking us to implement their shiny new design, believing this approach would solve the problem. However, it wasn’t clear what problem we’d be solving by implementing a new build, so we tried to better understand the problem to know if the proposed solution was likely to succeed.

Once we looked at the underlying architecture and discussed how we might do it differently, we discovered there wasn’t a lot we would change. The architecture at its core was solid; however, there were just too many layers of band-aid fixes. There was low visibility, the system was poorly understood, and it had been neglected for many years.

DDHD would allow us to run short experiments, to learn as we delivered incremental value to the customer, and to continuously apply our lessons learned to have greater impact and rebuild domain knowledge.

Indicators data-driven hypothesis development might work for you

  • No or low visibility of the system and the problem
  • Little knowledge of the system exists within your organization
  • The system has been neglected for some time, with band-aids applied loosely
  • You don’t know what to fix or where to start
  • You want to de-risk a large piece of work
  • You want to deliver value incrementally
  • You are looking at a complete rebuild as the solution

Our approach

1. Understand the problem and explore all options

To understand all sides of the problem, consider the underlying architecture, the customer, the business, and the issues being surfaced. In one activity we ran, we recorded every known problem and discussed what we knew or didn’t know about it. This process involved people outside the immediate team: we gathered anyone who might have some knowledge of the system to join the problem discussion.

Once you have an understanding of the problem, share it far and wide. This is the beginning of your story; you will keep telling this story with data throughout the process, building interest, buy-in, support and knowledge. 

The framework we used to guide us in our problem discussion.

2. Define the goals using data

As a team, define your goals or the desired outcomes. What is it you want to achieve? Discuss how you will measure success. What are the key metrics you will use? What does success look like? Once you’ve reached agreement on this, you’ll need to set about baselining your metrics.

[Figure: goals and metrics template]

We used a template similar to the one above to share our goals and record the metrics. The goals were front and center in our daily activities: we talked about them in stand-up, included them on story cards, and shared them in our showcases, helping to anchor our thoughts and hold our focus. In an ideal world, you’ll see a direct line from your goal through to your organization’s overarching objectives.

3. Hypothesize 

One of the reasons we were successful in solving the problem and delivering outstanding results for our client was that we involved the whole team. We didn’t have just one or two team members writing hypotheses and defining and driving experiments; every single member of the team was involved. To set your team up for success, align on the approach and how you’ll work together. Empower your team to write hypotheses from day one, no matter their role.

[Figure: a table setting out the goals, the approach, and what to deliver]

We created templates to work from and encouraged pairing on writing hypotheses and defining experiments.

[Figure: hypothesis canvas]

4. Experiment

Run small, data-driven experiments. One hypothesis can have one or many experiments. Experiments should be short to build and short to test. They should be independent and must be measurable.

[Figure: experiment template]

5. Conclude the experiment

Acceptance criteria play a critical role in determining whether an experiment is successful. For successful experiments, you will need to build a plan to apply the changes. For all experiments, successful or not, upon completion you should revisit the remaining experiments with the new data you have collected and adjust accordingly. This could mean updating, stopping, or creating new experiments.

Every experiment’s conclusion is the starting point for planning the next step.

6. Track the experiment and share results

Use data to tell stories and share your lessons learned. Don’t just share this with your immediate team; share your lessons learned and data with the business and your stakeholders. The more they know, the more empowered they will feel too. Take people on the journey with you. Build an experiment dashboard and use it as an information radiator to visualize the learning.

[Figure: experiment tracking]

Key takeaways

Our key takeaways from running a DDHD approach:

Use data to tell stories. Data was key in all of this. We used it in every conversation, every showcase, every brainstorming session. Data helped us to align the business, get buy-in from stakeholders, empower the team, and to celebrate wins. 

De-risk a large piece of work. We were asking our clients to trust us to fix the “unknown unknowns” over implementing a shiny new solution. DDHD enabled us to deliver incremental value, gaining trust each week and de-risking a piece of work with a potential 12 - 24 month timeframe and equally big price tag. 

Be comfortable with failure. We encouraged the team to celebrate the failed experiments as much as the successful ones. Lessons come from failure, failure enables decision making and through this we find the quickest path to the desired outcome. 

Empower the team to own the problem and the goals. Our success was a direct result of the whole team taking ownership of the problem and the goals. The team were empowered early on to form hypotheses and write experiments. Every time they learned something, it was shared back and new hypotheses and/or experiments were formed.

Deliver incremental parcels of value. Keep focused on delivering small, incremental changes. When faced with a large piece of work and/or a system that has been neglected for some time, it can feel impossible to have an impact. We focused on delivering value weekly. Delivering value wasn’t just about getting something into the customers’ hands; it was also learning from failed experiments. Celebrate every step; it means you are inching closer to success.

We’ve found this to be a really valuable approach to dealing with problems that can be classified as ‘known unknowns’ and ‘unknown unknowns’ and we hope you find this technique useful too.



“A fact is a simple statement that everyone believes. It is innocent, unless found guilty. A hypothesis is a novel suggestion that no one wants to believe. It is guilty until found effective.”

– Edward Teller, Nuclear Physicist

During my first brainstorming meeting on my first project at McKinsey, a very serious partner, who had a PhD in Physics, looked at me and said, “So, Joe, what are your main hypotheses?” I looked back at him, perplexed, and said, “Ummm, my what?” I was used to people simply asking, “What are your best ideas, opinions, thoughts, etc.?” Over time, I began to understand the importance of hypotheses and the important role they play in McKinsey’s problem solving: separating ideas and opinions from facts.

What is a Hypothesis?

“Hypothesis” is probably one of the top 5 words used by McKinsey consultants. And being hypothesis-driven was required to have any success at McKinsey. A hypothesis is an idea or theory, often based on limited data, which is typically the beginning of a thread of further investigation to prove, disprove or improve the hypothesis through facts and empirical data.

The first step in being hypothesis-driven is to focus on the highest potential ideas and theories of how to solve a problem or realize an opportunity.

Let’s go over an example of being hypothesis-driven.

Let’s say you own a website, and you brainstorm ten ideas to improve web traffic, but you don’t have the budget to execute all ten ideas. The first step in being hypothesis-driven is to prioritize the ten ideas based on how much impact you hypothesize they will create.

[Figure: ten ideas prioritized by hypothesized impact]

The second step in being hypothesis-driven is to apply the scientific method to your hypotheses by creating the fact base to prove or disprove your hypothesis, which then allows you to turn your hypothesis into fact and knowledge. Running with our example, you could prove or disprove your hypothesis on the ideas you think will drive the most impact by executing:

1. An analysis of previous research and the performance of the different ideas
2. A survey where customers rank-order the ideas
3. An actual test of the ten ideas to create a fact base on click-through rates and cost
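For the third option, here’s a minimal sketch of building that fact base; the numbers are illustrative, and a real analysis would add a significance test before declaring a winner:

```python
# Click-through results from testing each idea with equal traffic.
ideas = [
    {"idea": "SEO landing pages", "clicks": 540, "impressions": 12000, "cost": 2000},
    {"idea": "Referral program",  "clicks": 360, "impressions": 12000, "cost": 1500},
    {"idea": "Paid social ads",   "clicks": 480, "impressions": 12000, "cost": 3000},
]

for row in ideas:
    row["ctr"] = row["clicks"] / row["impressions"]
    row["cost_per_click"] = row["cost"] / row["clicks"]

# Rank by click-through rate to prove or disprove the prioritization hypothesis.
for row in sorted(ideas, key=lambda r: r["ctr"], reverse=True):
    print(f"{row['idea']}: CTR {row['ctr']:.1%}, ${row['cost_per_click']:.2f}/click")
```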

While there are many other ways to validate the hypothesis behind your prioritization, I find most people do not take this critical step of validating a hypothesis. Instead, they apply bad logic to many important decisions. An idea pops into their head, and then somehow it just becomes a fact.

One of my favorite lousy logic moments was a CEO who stated,

“I’ve never heard our customers talk about price, so price doesn’t matter with our products, and I’ve decided we’re going to raise prices.”

Luckily, his management team was able to do a survey to dig deeper into the hypothesis that customers weren’t price-sensitive. Well, of course, they were, and through the survey the team built a fantastic fact base that proved and disproved many other important hypotheses.

Why is being hypothesis-driven so important?

Imagine if medicine never actually used the scientific method. We would probably still be living in a world of lobotomies and bleeding people. Many organizations are still stuck in the dark ages, having built a house of cards on opinions disguised as facts, because they don’t prove or disprove their hypotheses. Decisions made on top of decisions, made on top of opinions, steer organizations clear of reality and the facts necessary to objectively evolve their strategic understanding and knowledge. I’ve seen too many leadership teams led solely by gut and opinion. The problem with intuition and gut is if you don’t ever prove or disprove if your gut is right or wrong, you’re never going to improve your intuition. There is a reason why being hypothesis-driven is the cornerstone of problem solving at McKinsey and every other top strategy consulting firm.

How do you become hypothesis-driven?

Most people are idea-driven and constantly have hypotheses on how the world works and what they or their organization should do to improve. There is often a fatal flaw, though: many people turn their hypotheses into false facts without actually finding or creating the facts to prove or disprove them. These people aren’t hypothesis-driven; they are gut-driven.

The conversation typically goes something like “doing this discount promotion will increase our profits,” or “our customers need to have this feature,” or “morale is in the toilet because we don’t pay well, so we need to increase pay.” These should all be hypotheses that need an appropriate fact base; instead, they become false facts, often leading to unintended results and consequences. In each of these cases, becoming hypothesis-driven necessitates a different framing.

• Instead of “doing this discount promotion will increase our profits,” a hypothesis-driven approach is to ask “what are the best marketing ideas to increase our profits?” and then conduct a marketing experiment to see which ideas increase profits the most.

• Instead of “our customers need to have this feature,” ask the question, “what features would our customers value most?” Then conduct a simple survey asking customers to rank order the features based on their value to them.

• Instead of “morale is in the toilet because we don’t pay well, so we need to increase pay,” conduct a survey asking, “what is the level of morale?” “what are the potential issues affecting morale?” and “what are the best ideas to improve morale?”

Beyond watching out for simply following your gut, here are some of the other best practices in being hypothesis-driven:

Listen to Your Intuition

Your mind has taken the collision of your experiences and everything you’ve learned over the years to create your intuition: those ideas that pop into your head and those hunches that come from your gut. Your intuition is your wellspring of hypotheses. So listen to your intuition, build hypotheses from it, and then prove or disprove those hypotheses, which will, in turn, improve your intuition. Intuition without feedback typically degrades over time into poor intuition, which leads to poor judgment, thinking, and decisions.

Constantly Be Curious

I’m always curious about cause and effect. At Sports Authority, I had a hypothesis that customers who received service and assistance as they shopped were worth more than customers who didn’t receive assistance from an associate. We figured out how to prove or disprove this hypothesis by tying surveys to customers’ transactional data, and we found the hypothesis was true, which led us to a broad initiative around improving service. The key is to always be curious about what you think does or will drive value, create hypotheses, and then prove or disprove those hypotheses.

Validate Hypotheses

You need to validate and prove or disprove hypotheses. Don’t just chalk up an idea as fact. In most cases, you’re going to have to create a fact base utilizing logic, observation, testing (see the section on Experimentation), surveys, and analysis.

Be a Learning Organization

The foundation of learning organizations is the testing of and learning from hypotheses. I remember my first strategy internship at Mercer Management Consulting, when I spent a good part of the summer combing through the results, findings, and insights of thousands of experiments that a banking client had conducted. It was fascinating to see the vastness and depth of their collective knowledge base. And in today’s world of knowledge portals, it is easy to disseminate, learn from, and build upon the knowledge created by companies.


Hypothesis-driven approach: the definitive guide

Imagine you are walking in one of McKinsey’s offices.

Around you are a dozen busy consultants.

The word “hypothesis” would be one of the words you would hear the most.

Along with “MECE” or “what’s the so-what?”.

This would also be true in any BCG or Bain & Company office, or at any other major consulting firm.

Because strategy consultants are trained to use a hypothesis-driven approach to solve problems.

And as a candidate, you must demonstrate your capacity to be hypothesis-driven in your case interviews.

There is no way around it:

If you want a consulting offer, you MUST know how to use a hypothesis-driven approach.

But should you be hypothesis-driven the way a consultant is on a real project for a real client?

Hell, no! Big mistake!

Because, like any (somewhat) complex topic in life, the context matters.

What is correct in one context becomes incorrect if the context changes.

And this is exactly what’s happening with using a hypothesis-driven approach in case interviews.

The hypothesis-driven approach for case interviews should be different from the hypothesis-driven approach a consultant uses when solving a problem for a real client.

And that’s why many candidates get it wrong (and fail their interviews).

They use a hypothesis-driven approach as if they were already consultants.

Thus, in this article, you’ll learn the correct definition of being hypothesis-driven in the context of case interviews.

Plus, you’ll learn how to use a hypothesis in your case interviews to “crack the case” and, more importantly, get the well-deserved offer!

Ready? Let’s go. It will be super interesting!


The wrong hypothesis-driven approach in case interviews

Let’s start with a definition:

Hypothesis-driven thinking is a problem-solving method whereby you start with the answer and work back to prove or disprove that answer through fact-finding.

Concretely, here is how consultants use a hypothesis-driven approach to solve their clients’ problems:

  • Form an initial hypothesis, which is what they think the answer to the problem is.
  • Craft a logic issue tree by asking themselves “what needs to be true for the hypothesis to be true?”
  • Walk their way down the issue tree and gather the necessary data to validate (or refute) the hypothesis.
  • Reiterate the process from step 1 – if their first hypothesis was disproved by their analysis – until they get it right.


With this answer-first approach, consultants do not gather data to fish for an answer. They seek to test their hypotheses, which is a very efficient problem-solving process.

Answer-first thinking works well if the initial hypothesis has been carefully formed.

This is why – in top consulting firms like McKinsey, BCG, or Bain & Company – the hypothesis is formed by a Partner with 20+ years of work experience.

And this is why this is NOT the right approach for case interviews.

Imagine a candidate doing a case interview at McKinsey and using answer-first thinking.

At the beginning of a case, this candidate forms a hypothesis (a potential answer to the problem), builds a logic tree, and gathers data to prove the hypothesis.

Here, there are two options:

The initial hypothesis is right

The initial hypothesis is wrong

If the hypothesis is right, what does it mean for the candidate?

That the candidate was lucky.

Nothing else.

And it certainly does not prove the problem-solving skills of this candidate (which is what is tested in case interviews).

Now, if the hypothesis is wrong, what happens next?

The candidate reiterates the process.

Imagine how disorganized the discussion with the interviewer can be.

Most of the time, such candidates cannot form another hypothesis, the case stops, and the candidate feels miserable.

This leads us to the right hypothesis-driven approach for case interviews.

The right hypothesis-driven approach in case interviews

To make the difference between the wrong and the right approach clear, I’ll use a non-business example.

Let’s imagine you want to move from point A to point B.

And for that, you have the choice among a multitude of roads.

[Figure: a map with many possible roads from A to B]

Using the answer-first approach presented in the last section, you’d assert up front which road to take from A to B (for instance, the red line in the drawing below).

[Figure: the chosen road from A to B, shown in red]

Again, this would not demonstrate your capacity to find the “best” road to go from A to B.

(regardless of what “best” means; it can be the fastest or the safest, for instance).

Now, a correct hypothesis-driven approach consists of drawing a map with all the potential routes between A and B, and explaining at each intersection why you want to turn left or right (“my hypothesis is that we should turn right”).

[Figure: a map of all potential routes from A to B, with a hypothesis stated at each intersection]

And in the context of case interviews?

In the above analogy:

  • A is the problem
  • B is the solution
  • All the potential routes are the issues in your issue tree

And the explanation of why you want to take a certain road instead of another would be your hypothesis.

Is the difference between the wrong and right hypothesis-driven approach clearer?

If not, don’t worry. You’ll find many more examples below in this article.

But, next, let’s address another important question.

Why you must (always) use a hypothesis in your case interviews

You must use a hypothesis in your case interviews for two reasons.

A hypothesis helps you focus on what’s important to solve the case

Using a hypothesis-driven approach is critical to solving a problem efficiently.

In other words:

A hypothesis limits the number of analyses you need to perform to solve a problem.

Thus, it is a way to apply the 80/20 principle and prioritize the issues (from your MECE issue tree) you want to investigate.

And this is very important because your time with your interviewer is limited (as is your time with your client on a real project).

Let’s take a simple example of a hypothesis:

The profits of your client have dropped.

And your initial analysis shows increasing costs and stagnating revenues.

So your hypothesis can be:

“I think something happened in our cost structure, causing the profit drop. Next, I’d like to better understand our client’s cost structure and which cost items have changed recently.”

Here the candidate is rigorously “cutting” half of his/her issue tree (the revenue side) and will focus the case discussion on the cost side.

And this is a good example of a hypothesis in case interviews.


A hypothesis tells your interviewers why you want to do an analysis

There is a road that you NEVER want to take.

On this road, the purpose of the questions asked by a candidate is not clear.

Here are a few examples:

“What’s the market size? growth?”

“Who are the main competitors? what are their market shares?”

“Have customer preferences changed in this market?”

This list of questions might be relevant to solving the problem at stake.

But how these questions help solve the problem is not addressed.

Or, in other words, the logical connection between these questions and the problem is missing.

So, a better example would be:

“We discovered that our client’s sales have declined for the past three years. I would like to know if this is specific to our client or if the whole market has the same trend. Can you tell me how the market size has changed over the past three years?”

In the above question, the reason why the candidate wants to investigate the market is clear: to narrow down the analysis to an internal root cause or an external root cause.

Yet, I see only a few (great) candidates asking clear and purposeful questions.

You want to be one of these candidates.

How to use a hypothesis-driven approach in your case interviews?

At this stage, you understand the importance of a hypothesis-driven approach in case interviews:

You want to identify the most promising areas to analyze (remember that time is money).

And there are two (and only two) ways to create a good hypothesis in your case interviews:

  • a quantitative way
  • a qualitative way

Let’s start with the quantitative way to develop a good hypothesis in your case interviews.

The quantitative approach: use the available data

Let’s use an example to understand this data-driven approach:

Interviewer: Your client manufactures computers. They have been experiencing increasing costs and want to know how to address this issue.

Candidate: To begin with, I want to know the breakdown of their cost structure. Do you have information about the percentage breakdown of their costs?

Interviewer: Their materials costs account for 30% and their manufacturing costs for 60%. The remaining 10% are SG&A costs.

Candidate: Given the importance of manufacturing costs, I’d like to analyze this part first. Do we know if manufacturing costs have gone up?

Interviewer: Yes, manufacturing costs have increased by 20% over the past two years.

Candidate: Interesting. Now it would be useful to understand why such an increase happened.

You can notice in this example how the candidate uses data to drive the case discussion and prioritize which analysis to perform.

The candidate made a (correct) hypothesis that the increasing costs were driven by the manufacturing costs (the biggest chunk of the cost structure).

Even if the hypothesis were incorrect, the candidate would have moved closer to the solution by eliminating an issue (manufacturing costs are not causing the overall cost increase).
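
The arithmetic behind that prioritization is worth spelling out: a bucket’s contribution to overall cost growth is roughly its share of costs times its own growth rate. The sketch below works through the example’s numbers; the materials and SG&A growth rates are assumptions added purely for illustration.

```python
# Each bucket's contribution to total cost growth ~ share * growth rate.
# Manufacturing's 60% share and +20% growth come from the dialogue above;
# the materials and SG&A growth rates are hypothetical.

buckets = {
    "manufacturing": (0.60, 0.20),
    "materials":     (0.30, 0.05),  # assumed
    "SG&A":          (0.10, 0.02),  # assumed
}

for name, (share, growth) in buckets.items():
    print(f"{name}: {share * growth:+.1%} contribution to total cost growth")

# Manufacturing alone contributes 0.60 * 0.20 = +12.0%, dwarfing the other
# buckets, which is why the candidate's hypothesis targets it first.
```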

That said, there is another way to develop a good hypothesis in your case interviews.

The qualitative approach: use your business acumen

Sometimes you don’t have data (yet) to make a good hypothesis.

Thus, you must use your business judgment to develop a hypothesis.

Again, let’s take an example to illustrate this approach.

Interviewer: Your client manufactures computers and has been losing market share to their direct competitors. They hired us to find the root cause of this problem.

Candidate: I can think of several reasons explaining the drop in market share. First, our client might manufacture and sell uncompetitive products. Second, we might price our products too high. Third, we might use the wrong distribution channels; for instance, we might sell in brick-and-mortar stores while consumers buy their computers in e-stores like Amazon. Finally, our marketing expenses may be too low or not used strategically.

Candidate: I see these products as commodities, where consumers use price as the main buying-decision criterion. That’s why I’d like to explore how our client prices their products. Do you have information about how our prices compare to competitors’?

Interviewer: This is a valid point. Here is the data you want to analyze.

Note how this candidate explains what she/he wants to analyze first (prices) and why (computers are commodities).

In this case interview, the hypothesis-driven approach looks like this:

This is a commodity industry → consumers’ buying behavior is driven by pricing → our client’s prices are too high.

Again, note how the candidate first listed the potential root causes for this situation and did not use an answer-first approach.


Form a hypothesis in these two critical moments of your case interviews

After you’ve presented your initial structure.

The first moment to form a hypothesis in your case interview?

In the beginning, after you’ve presented your structure.

When you’ve presented your issue tree, mention which issue you want to analyze first.

Also, explain why you want to investigate this first issue.

Make clear how the outcome of the analysis of this issue will help you solve the problem.

After an analysis

The second moment to form a hypothesis in your case interview?

After you’ve derived an insight from data analysis.

This insight has proved (or disproved) your hypothesis.

Either way, after you have developed an insight, you must form a new hypothesis.

This can be the issue you want to analyze next.

Or a proposed solution to the problem.

Hypothesis-driven approach in case interviews: a conclusion

Having spent about 10 years coaching candidates through the consulting recruitment process, I’ve found that one commonality of successful candidates is that they truly understand how to be hypothesis-driven and demonstrate efficient problem-solving.

Plus, in my experience coaching candidates, not being able to use a hypothesis is the second most common cause of rejection in case interviews (the first being a lack of MECEness).

This means you can’t afford NOT to master this concept in a case study.



Hypothesis-driven Development 


An IT project typically involves a few definite stages. A team has an idea of what would be valuable to the end user. The same team designs proposed solutions for the product, implements the ideas, tests them, and deploys them, and eventually the final product reaches the customer. Ideally, the customer likes the product, uses it, and benefits from it. However, the product team does not immediately have a clear view of the customer experience.

This is where Hypothesis-driven Development (HDD) comes into the picture. 

Hypothesis-driven development is about solving the right problem at the right time. It ensures that the team’s understanding of the problem is verified before actual work on the solution begins.

What is Hypothesis-driven Development? 

Hypothesis-driven Development (HDD) involves the development of new products and services through iterative experiments on hypotheses. The results of the experiments help to decide whether an expected outcome will be achieved. The steps are repeated until a desirable outcome is reached or the idea is deemed no longer viable.

HDD advocates experimentation over detailed planning, client feedback over instinct, and iterative design over conventional “big design up front” deliveries.  

With HDD, solutions are viewed as hypotheses: theories about an area relevant to the problem statement. Hypotheses can be developed about areas like the market being targeted, the performance of a business model, the performance of code, or the customer’s preferred way of interacting with the system.

As the software development industry has matured, teams now have the opportunity to leverage capabilities like Continuous Design and Delivery to maximize their potential to learn quickly what works and what does not. By taking an experimental approach to information discovery, teams can more rapidly test their solutions against the problems they have identified in the products or services they are attempting to build. 

What does Hypothesis-driven Development Hope to Achieve?

The goal of HDD is to improve the efficacy of teams by ensuring they solve correctly identified problems, rather than continually building low-impact solutions. In HDD, teams focus on user outcomes rather than on their outputs.

What led to the Popularity of Hypothesis-driven Development?

As software companies began competing to deliver better software, faster, development teams faced three major challenges:

1. Delivering functionality fast, before the features became outdated

2. Developing features for a more selective user base that demanded a better experience in every possible way

3. Competing against multiple alternative options that users could switch their loyalties to

In today’s fast-paced world, teams need to be ready to adapt continuously and quickly, even if it may require deviating far from the originally chosen path.  

The conventional approach of researching and documenting user requirements, developing a solution “to spec” and deploying it months or years later can no longer be considered suitable. 

Requirements add value when teams are executing a well-known or well-understood phase of an initiative and can leverage previously used practices to achieve the outcome. However, when teams are in an exploratory phase of a product’s development, they should use hypotheses to validate ideas first.

The adoption of this ‘deliver early and fail fast’ approach has become so commonplace, that the word ‘pivot’ is commonly recognized to mean the act of rapidly changing from one plan to another. 

Why Is HDD Important? 

Handing teams a set of business requirements does not allow the developers the freedom to innovate. It implies that the business team is in charge of the problem as well as the solution. The development team merely implements what they are told.

However, when developing a new product or feature, the entire team should be encouraged to share their insights on the problem and potential solutions. A development team that merely takes orders from a business owner is not utilizing the full potential, experience, and competency that a cross-functional multi-disciplined team offers. 

Key Steps in Product Development 

It is crucial to lay down the foundational steps in HDD product development. The following four steps are integral to the process: 


1. Finding the Right Problem  

Teams can make use of the ‘persona hypothesis’ and ‘JTBD/Job To Be Done hypothesis’ to ensure they have identified the right problem. The ‘persona hypothesis’ focuses on the persona, which is a humanized and detailed view of who the user is, and what motivates their actions. To aid with creating the persona, the team usually follows an ‘interview guide’ to ensure they can gather sufficient information about the users they are solving problems for. 

The second type of hypothesis that can aid in identifying the right problem is the ‘JTBD hypothesis’. This hypothesis tries to understand the tasks (or jobs) that customers are trying to achieve in their lives. This framework is foundational for understanding what motivates customers and why customers behave the way they do. 

2. Identifying the Demand for the Solution

This area is pivotal and easy to assess: have you validated that you have something your audience prefers over the alternatives? Teams use the ‘demand/value hypothesis’ to identify demand: a hypothesis that states the exact value that would be given to potential clients.

3. Finding the Right Solution to the Problem

By making use of the ‘usability hypothesis’, teams can assess whether they have found the right solution to the problem. The ‘usability hypothesis’ helps to determine how easy-to-use the designed solutions are. Simpler solutions are more likely to be adopted by more users.

4. Achieving Continuous Delivery while Deploying the Solution  

By delivering enhanced products to users quickly, teams have the opportunity to learn faster. Teams make use of the ‘functional hypothesis’ in continuous delivery pipelines, to make sure the delivered products provide the expected results. 

How Hypothesis-driven Development Works

HDD is a scientific approach to product development. In HDD, teams make observations about customers to come up with hypotheses, or explanations, which they believe align with their customers’ views. Each hypothesis is then tested by predicting an outcome based on it; if the outcome matches the prediction, the hypothesis can be assumed to be correct.

The key outcome of this experiment-based approach is a better understanding of the system and desired outcomes. If the results of the hypothesis are not as expected, deductions can be made about how to refine the hypothesis further. 

The experiment at the heart of every iteration of HDD must have a quantifiable conclusion and must contribute to learning more about how end users use the product. For each experiment, the following steps must take place (a sketch of one iteration in code follows the list):


  • Make observations about the user 
  • Define a hypothesis 
  • Define an experiment to assess the hypothesis 
  • Decide upon success criteria (e.g., a 30% increase in successful transactions) 
  • Conduct the experiment 
  • Evaluate the results of the experiment 
  • Accept or reject the hypothesis 
  • If required, design and assess a new hypothesis 
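
The sketch below shows what one such iteration might look like in code, using the 30% transaction-uplift criterion mentioned above as the success signal. The baseline and observed numbers are hypothetical, and `run_experiment` is a stand-in for the real deploy-and-measure step.

```python
# One HDD iteration as code: run the experiment, compare the observed
# metric against the pre-agreed success criterion, then accept or reject.
# All numbers are hypothetical.

BASELINE_SUCCESSFUL_TX = 1_000   # successful transactions/week before the change
TARGET_UPLIFT = 0.30             # success criterion from the list above: +30%

def run_experiment() -> int:
    """Stand-in for the real experiment: deploy the change, wait, measure."""
    return 1_340  # hypothetical observed successful transactions/week

observed = run_experiment()
uplift = (observed - BASELINE_SUCCESSFUL_TX) / BASELINE_SUCCESSFUL_TX

if uplift >= TARGET_UPLIFT:
    print(f"Accept: uplift of {uplift:.0%} meets the {TARGET_UPLIFT:.0%} criterion")
else:
    print(f"Reject: uplift of {uplift:.0%} misses the criterion; design a new hypothesis")
```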

Product development teams can leverage Continuous Design and Delivery to deliver changes related to their hypothesis for real-time testing of their theories.  

How is a Good Hypothesis Framed? 

There is a framework that should be followed to define a clear hypothesis. 

The framework supports communication and collaboration between developers, testers, and non-technical stakeholders in the product.

It is of the following format: 

We believe  <this capability>  

‘This capability’ defines the functionality that is being developed. The success of the feature is used to test the hypothesis.

Will result in  <this outcome>  

‘This outcome’ defines the clear objective for the development of the capability/feature. Teams identify a tangible goal to measure the usefulness of the feature being developed.  

We will have the confidence to proceed when  <we see a measurable signal>  

The measurable signal refers to the key metrics (which can be qualitative or quantitative) that can be measured to provide evidence that the experiment has succeeded.  

An effective hypothesis is fundamental to data-driven optimization. Hypotheses are used to convert data collected about customers into actionable insights. Every hypothesis is a theory that must be assessed; each idea that is proven to either hold true or false, confirms notions about customer expectations and behaviors, and drives the iterative optimization process.  

For example, an e-retail website could have a high rate of abandonment in the purchase flow. The hypothesis could be that links in the flow are distracting potential customers. The experiment would be to remove them. Should there be an improvement in the number of completed purchases, it would confirm the hypothesis. This would give a validated, improved understanding of the retail website’s customers and their behavioral trends. This improved insight would help decide what could be optimized next, why, and how results could be measured.

The following is how the same hypothesis could be defined in an ideal user story:  

  We believe that removing irrelevant links from the purchase page  

Will result in improved customer conversion

We will have the confidence to proceed when we see a 20% increase in customers who check out their shopping cart with successful payment.

Following the framework is an easy way to ensure you have thought of every aspect of the problem as well as the proposed solution before starting actual work on the project. The framework also ensures that only meaningful features are developed, by quantifying the benefits of these features. 
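
One way to make the framework operational is to capture each hypothesis as a small record that carries its capability, outcome, and measurable signal. The sketch below is one assumed representation, not a standard API; the field names are illustrative choices, and it reuses the checkout example above.

```python
# Minimal sketch: the "We believe / will result in / confident when"
# template as a data structure.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    capability: str        # "We believe <this capability>"
    outcome: str           # "will result in <this outcome>"
    signal: str            # "confidence to proceed when <measurable signal>"
    target_uplift: float   # e.g. 0.20 for a 20% increase

    def is_confirmed(self, baseline: float, observed: float) -> bool:
        """True when the observed metric change meets the measurable signal."""
        return (observed - baseline) / baseline >= self.target_uplift

checkout = Hypothesis(
    capability="removing irrelevant links from the purchase page",
    outcome="improved customer conversion",
    signal="a 20% increase in customers completing checkout with payment",
    target_uplift=0.20,
)
# 620 vs 500 completed checkouts is a +24% uplift, so the signal is met.
print(checkout.is_confirmed(baseline=500, observed=620))  # True
```

Capturing hypotheses this way also supports the documentation benefit discussed later: each record preserves what was believed, what was measured, and what was learned.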

Best Practices in Hypothesis-driven Development 

The following best practices will help teams ensure they implement HDD well:

1. Gather Sufficient Data  

Data is what marks the difference between a well-formed hypothesis and a guess. To create a meaningful hypothesis, a business intelligence report is greatly helpful. By monitoring customer behavioral patterns using techniques like web analytics, and indirect sources like competitor overviews, a valuable profile of customers can be created.

2. Create a Testable Hypothesis  

To adequately test the hypothesis, there need to be well-defined metrics with clear criteria for success and failure. For example, the hypothesis that removing unnecessary navigation links from the checkout page will help customers complete transactions can easily be assessed for correctness. The change in the company’s revenue will indicate whether or not the hypothesis was correct.

3. Measure the Experiment Results as per the Need  

The threshold used for determining success depends on the business and context. Not every company has the user sample size of Amazon or Google to run statistically significant experiments in a short period. Limits need to be defined by the organization to determine acceptable evidence thresholds that will allow the team to advance to the next step. 

For example, while building a vehicle, the experiments will have a high threshold for statistical significance. However, if the experiment is to decide between two different flows intended to help increase user sign-up, a lower significance threshold can be acceptable. 
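
As a sketch of how such a threshold might be applied in practice, the following compares conversion in two hypothetical sign-up flows with a two-proportion z-test. The `ALPHA` value is the knob an organization can tighten or loosen depending on the stakes, as described above; the conversion counts are invented for illustration.

```python
# Minimal sketch: check an experiment against a configurable significance
# threshold using a two-proportion z-test. Counts are hypothetical.

from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert z to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

ALPHA = 0.05  # strict; a low-stakes sign-up flow test might tolerate 0.10

p = two_proportion_p_value(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"p = {p:.4f}: {'significant' if p < ALPHA else 'not significant'} at alpha={ALPHA}")
```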

4. State the Assumptions  

Clearly and visibly state any assumptions made about the hypothesis, to create a feedback loop for the team. This allows the team to provide further input, debate, and understand the circumstances of the test. The assumptions must make sense from a technical and business perspective.

5. Use Insights to Drive Learning  

Hypothesis-driven experimentation gives comprehensive insights into customer behavior. These insights lead to additional questions about customers and the product experience and lead to an iterative learning process. 

The learning process follows this iterative pattern: 

  • Collect data about the product domain and customers, and use the knowledge gained to formulate questions
  • Design a hypothesis based on the insights gained 
  • Create and implement a campaign derived from the hypothesis 
  • Inspect the results to determine whether the hypothesis is valid or not 
  • Document the conclusions  
  • Use the conclusions to formulate additional questions 
  • Connect the results to the problem being solved 

To ensure optimum learning, begin the entire process with a problem statement, not a preferred solution. Should the solution fail to deliver the expected result, use the problem statement to analyze new potential solutions and repeat the process. This practice ensures focus stays on the problems being solved.

What HDD has in Common With Strategies Like Design Thinking and Lean Startup 

Within large companies, innovation teams commonly use strategies like Design Thinking, Lean Startup, or Agile. Each of these strategies defines only one aspect of the product development process. However, each of these strategies does have certain key principles in common with HDD, which are highlighted below: 

  • Observe Humans to Learn
    The cost of development can be kept low if solutions are designed after observing client behaviour, and then iterating. HDD reinforces the “problem-first” mentality by first observing the target audience for the problem being solved.
  • Focus on Client Actions
    This helps to prioritise which areas of the problem need to be targeted first. HDD focuses on the prioritisation of the problems being solved.
  • Work Fast, but Keep the Blast Radius Minimal
    Even though continuous delivery is a crucial tactic, it cannot come at the expense of correctness. HDD does not promote reduced standards of work, even during the experimentation phase.
  • Minimise Waste
    By focusing on the core of the problem being solved, product development teams ensure they do not waste time, money, or resources on features that clients would not use.

Teams can refine the framework they follow as per their needs, but HDD provides them with a foundation for the best practices used across each of these popular strategies.  

Identify How Your Team Has Successfully Implemented Hypothesis-driven Development 

It is necessary to have a capable monitoring and evaluation framework set up when using an experimental approach to software development, so the impact of efforts can be quantified. These results are then used to generate feedback for the team. The learning gained during HDD can be the primary measure of progress for work done.

Ideally, an iteration of HDD is not considered complete until there is a measurable value for what was delivered, that is, data to validate the hypothesis.

Teams can ensure they have succeeded at implementing HDD by measuring what makes a difference.

  • Vanity metrics  are statistics that look good on the surface but do not necessarily provide meaningful business insights. These measurements are usually superficial, easily quantifiable, and sometimes, easily manipulated. Examples could include metrics on the number of social media followers or the number of visits to a promotional advertisement for a product. These metrics do not provide insights about what led to these numbers. 

They also do not reflect how these numbers can be achieved again or how they can be improved. 

  • Actionable metrics  are the metrics that have real significance. They can provide insights that help make decisions about what can be done next to achieve specific business goals. 

Distinguishing Between Actionable and Vanity Metrics: 

Metrics that run deeper than vanity metrics are not necessarily actionable. For example, revenue measures a business’s performance better than the vanity metric of how many visitors the business website has. However, merely knowing that a statistic changed does not indicate what caused the change. If, for example, revenue increased but the reason for the increase was not identified, the business cannot repeat the actions that drove it.

Nor does it have the means to further amplify the improvement. If, however, revenue was measured before and after a noted change that affected a target set of users, then the business has an actionable metric at hand. It can then employ the approach of hypothesis-driven development to run experiments and understand the best way to add value.
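
A minimal sketch of that before-and-after measurement, with hypothetical weekly revenue-per-user figures for the cohort that saw the change:

```python
# Minimal sketch: turn revenue (a raw statistic) into an actionable metric
# by measuring it before and after a change, for the target cohort only.
# All numbers are hypothetical.

before = [12.0, 11.5, 12.3, 11.8]   # weekly revenue per user, pre-change
after  = [13.1, 13.4, 12.9, 13.6]   # weekly revenue per user, post-change

avg = lambda xs: sum(xs) / len(xs)
uplift = (avg(after) - avg(before)) / avg(before)

# Because the measurement is tied to a specific change and cohort, the
# team can act on it: keep, amplify, or roll back the change.
print(f"Revenue per user moved {uplift:+.1%} after the change")
```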

Vanity metrics and actionable metrics can be measured for several actions.  

The first, and in today’s world the most significant, is website behaviour. Apart from this, teams can measure developer or team productivity, alongside bugs reported and customer usage metrics. Knowing the count of lines of code written does not add value if the code is problematic and requires rework in the next development cycle. It also does not matter if the team is working fast but on a solution that no one will use.

The relevant point is to work with the team to identify statistics that will provide value to product development. The metrics should be shared with the team before, during, and after any significant changes to the product.

Benefits of Hypothesis-driven Development

HDD is being adopted rapidly because of the several benefits it offers. 


1. HDD helps product teams prioritise the development of features.

Teams can understand how the features are connected to the business goals. By tracking metrics before and after product deliveries, clear insights can be gained and acted upon for further growth. As long as teams keep the end users’ pain points in mind, which can only be assessed by experimentation, they will deliver incrementally improved products.

2. HDD enables tracking the desired and real outcomes of the development process.   

Each experiment is formulated to define the expected outcomes. These can be used to understand how the team’s development strategy can be revised for maximum gain. 

3. HDD is Cost-Effective  

It is more cost-effective to test the prototype before delivering the completed product to production.  

By constantly inquiring about the customer’s needs during the development process, the team will benefit from a feedback loop about the product’s performance, during the development phase itself. This will lead to minimal rework on the released product. 

4. Discover and Prioritize the Potential Benefits of Hypothesis-Driven Experiments  

Teams can easily understand the business benefits of changes they make. They can use these numbers to refine the company’s roadmap.

5. Establish a Common Language for Ideation and Research  

By following a framework for designing hypotheses, teams benefit from a standard way to define potential ideas. This enables development and research teams to collaborate and communicate more transparently. 

6. Choose Problems that are Aligned with the Company’s Challenges  

By reviewing the impact of experiments, teams ensure that while working on smaller goals, they stay aligned with the company’s end-state vision. 

7. Gain from Planning and Budgeting insights  

Other than measuring the outcome of experiments, teams can also measure the number of hypotheses being tested, the cost of these experiments and the time taken for each experiment. Advanced analysis can also help teams understand their development velocity. The measurable nature of the hypothesis-driven approach makes it simple for an organization to understand how to plan, budget for and undertake a hypothesis-driven approach. 

8. Quantify and Manage Risk  

Teams can understand how many hypotheses they need to validate or invalidate to determine whether they should make further investments in their product. Stakeholders can monitor the previously opaque process of risk evaluation, by evaluating the quality and quantity of hypotheses tested. 

9. Explicitly Documented Learning  

The hypothesis-driven approach for product development has an added benefit in that it explicitly captures lessons learned about the organization’s target market, customers, competitors, and products. A hypothesis-driven approach requires the thorough documentation and capture of each hypothesis, the details of the experiment as well as the results. This data becomes an invaluable store of information for the organization. 

10. HDD leads to better Technical Debt Management  

By reworking the product during the development phase, there are likely to be no surprises in customer reviews of the final product. This keeps technical debt minimal and reduces overall development costs.

Success Story of Hypothesis-driven Development 

A great example of a company that successfully used HDD while developing its main offering is Dropbox, the popular remote file-sharing service. Today, Dropbox is used by over 45 million users. However, when they initially launched, they had several competitors in the same domain. The key point was that none of these alternatives was well made.

Dropbox’s initial hypothesis was that if they offered a well-executed file system, then people would be willing to pay to use it. 

When they began their experimentation, they used the persona hypothesis to define their ideal target user base. The persona they devised was a technically aware person working in the software world.

The solution they designed was centered on being ideal for the persona they had devised.

The Job To Be Done was sharing files between groups of people. The existing solutions at the time were manual, like file systems that needed to be backed up by an engineer. 

Dropbox’s value hypothesis was that a transparent remote file system would be adopted by many users, provided it was well made. Dropbox needed to identify the demand for their solution. However, they did not have the resources at the time to create the product and have it validated by multiple users for the first round of experimentation on their hypothesis. They circumvented this blocker by releasing a video that detailed their idea. The video was published online and advertised on development forums.

The interest in their proposal was significant, enough to help them validate their proposed solution design.

Conclusion  

Hypothesis-driven development draws its strength from the fact that the real world is complex, in a state of constant flux, and sometimes confusing. Consistent hypothesis-driven experimentation helps programs make a significant and beneficial impact on a company’s objectives. Using data that is strongly coupled to the company’s vision ensures that focus is given to areas of significance for customers, rather than points that seem significant to a specific group of product managers.

Remember, in the scientific world of development, data and facts will always trump intuition. 


2.4 Developing a Hypothesis

Learning Objectives

  • Distinguish between a theory and a hypothesis.
  • Discover how theories are used to generate hypotheses and how the results of studies can be used to further inform theories.
  • Understand the characteristics of a good hypothesis.

Theories and Hypotheses

Before describing how to develop a hypothesis, it is important to distinguish between a theory and a hypothesis. A theory is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes, functions, or organizing principles that have not been observed directly. Consider, for example, Zajonc’s theory of social facilitation and social inhibition. He proposed that being watched by others while performing a task creates a general state of physiological arousal, which increases the likelihood of the dominant (most likely) response. So for highly practiced tasks, being watched increases the tendency to make correct responses, but for relatively unpracticed tasks, being watched increases the tendency to make incorrect responses. Notice that this theory—which has come to be called drive theory—provides an explanation of both social facilitation and social inhibition that goes beyond the phenomena themselves by including concepts such as “arousal” and “dominant response,” along with processes such as the effect of arousal on the dominant response.

Outside of science, referring to an idea as a theory often implies that it is untested—perhaps no more than a wild guess. In science, however, the term theory has no such implication. A theory is simply an explanation or interpretation of a set of phenomena. It can be untested, but it can also be extensively tested, well supported, and accepted as an accurate description of the world by the scientific community. The theory of evolution by natural selection, for example, is a theory because it is an explanation of the diversity of life on earth—not because it is untested or unsupported by scientific research. On the contrary, the evidence for this theory is overwhelmingly positive and nearly all scientists accept its basic assumptions as accurate. Similarly, the “germ theory” of disease is a theory because it is an explanation of the origin of various diseases, not because there is any doubt that many diseases are caused by microorganisms that infect the body.

A hypothesis, on the other hand, is a specific prediction about a new phenomenon that should be observed if a particular theory is accurate. It is an explanation that relies on just a few key concepts. Hypotheses are often specific predictions about what will happen in a particular study. They are developed by considering existing evidence and using reasoning to infer what will happen in the specific context of interest. Hypotheses are often but not always derived from theories. So a hypothesis is often a prediction based on a theory, but some hypotheses are atheoretical; only after a set of observations has been made is a theory developed. This is because theories are broad in nature and explain larger bodies of data. So if our research question is really original, we may need to collect some data and make some observations before we can develop a broader theory.

Theories and hypotheses always have this if-then relationship. “If drive theory is correct, then cockroaches should run through a straight runway faster, and a branching runway more slowly, when other cockroaches are present.” Although hypotheses are usually expressed as statements, they can always be rephrased as questions. “Do cockroaches run through a straight runway faster when other cockroaches are present?” Thus deriving hypotheses from theories is an excellent way of generating interesting research questions.

But how do researchers derive hypotheses from theories? One way is to generate a research question using the techniques discussed in this chapter and then ask whether any theory implies an answer to that question. For example, you might wonder whether expressive writing about positive experiences improves health as much as expressive writing about traumatic experiences. Although this question is an interesting one on its own, you might then ask whether the habituation theory—the idea that expressive writing causes people to habituate to negative thoughts and feelings—implies an answer. In this case, it seems clear that if the habituation theory is correct, then expressive writing about positive experiences should not be effective because it would not cause people to habituate to negative thoughts and feelings. A second way to derive hypotheses from theories is to focus on some component of the theory that has not yet been directly observed. For example, a researcher could focus on the process of habituation—perhaps hypothesizing that people should show fewer signs of emotional distress with each new writing session.

Among the very best hypotheses are those that distinguish between competing theories. For example, Norbert Schwarz and his colleagues considered two theories of how people make judgments about themselves, such as how assertive they are (Schwarz et al., 1991) [1] . Both theories held that such judgments are based on relevant examples that people bring to mind. However, one theory was that people base their judgments on the  number  of examples they bring to mind and the other was that people base their judgments on how  easily  they bring those examples to mind. To test these theories, the researchers asked people to recall either six times when they were assertive (which is easy for most people) or 12 times (which is difficult for most people). Then they asked them to judge their own assertiveness. Note that the number-of-examples theory implies that people who recalled 12 examples should judge themselves to be more assertive because they recalled more examples, but the ease-of-examples theory implies that participants who recalled six examples should judge themselves as more assertive because recalling the examples was easier. Thus the two theories made opposite predictions so that only one of the predictions could be confirmed. The surprising result was that participants who recalled fewer examples judged themselves to be more assertive—providing particularly convincing evidence in favor of the ease-of-retrieval theory over the number-of-examples theory.

Theory Testing

The primary way that scientific researchers use theories is sometimes called the hypothetico-deductive method (although this term is much more likely to be used by philosophers of science than by scientists themselves). A researcher begins with a set of phenomena and either constructs a theory to explain or interpret them or chooses an existing theory to work with. He or she then makes a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researcher then conducts an empirical study to test the hypothesis. Finally, he or she reevaluates the theory in light of the new results and revises it if necessary. This process is usually conceptualized as a cycle because the researcher can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As Figure 2.2 shows, this approach meshes nicely with the model of scientific research in psychology presented earlier in the textbook—creating a more detailed model of “theoretically motivated” or “theory-driven” research.

Figure 2.2 Hypothetico-Deductive Method Combined With the General Model of Scientific Research in Psychology. Together they form a model of theoretically motivated research.

As an example, let us consider Zajonc’s research on social facilitation and inhibition. He started with a somewhat contradictory pattern of results from the research literature. He then constructed his drive theory, according to which being watched by others while performing a task causes physiological arousal, which increases an organism’s tendency to make the dominant response. This theory predicts social facilitation for well-learned tasks and social inhibition for poorly learned tasks. He now had a theory that organized previous results in a meaningful way—but he still needed to test it. He hypothesized that if his theory was correct, he should observe that the presence of others improves performance in a simple laboratory task but inhibits performance in a difficult version of the very same laboratory task. To test this hypothesis, one of the studies he conducted used cockroaches as subjects (Zajonc, Heingartner, & Herman, 1969) [2]. The cockroaches ran either down a straight runway (an easy task for a cockroach) or through a cross-shaped maze (a difficult task for a cockroach) to escape into a dark chamber when a light was shined on them. They did this either while alone or in the presence of other cockroaches in clear plastic “audience boxes.” Zajonc found that cockroaches in the straight runway reached their goal more quickly in the presence of other cockroaches, but cockroaches in the cross-shaped maze reached their goal more slowly when they were in the presence of other cockroaches. Thus he confirmed his hypothesis and provided support for his drive theory. (Zajonc also demonstrated drive theory in humans in many later studies (Zajonc & Sales, 1966) [3].)

Incorporating Theory into Your Research

When you write your research report or plan your presentation, be aware that there are two basic ways that researchers usually include theory. The first is to raise a research question, answer that question by conducting a new study, and then offer one or more theories (usually more) to explain or interpret the results. This format works well for applied research questions and for research questions that existing theories do not address. The second way is to describe one or more existing theories, derive a hypothesis from one of those theories, test the hypothesis in a new study, and finally reevaluate the theory. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

Using theories in your research will not only give you guidance in coming up with experiment ideas and possible projects, but it will also lend legitimacy to your work. Psychologists have been interested in a variety of human behaviors and have developed many theories along the way. Using established theories will help you break new ground as a researcher, not limit you from developing your own ideas.

Characteristics of a Good Hypothesis

There are three general characteristics of a good hypothesis. First, a good hypothesis must be testable and falsifiable. We must be able to test the hypothesis using the methods of science, and, if you’ll recall Popper’s falsifiability criterion, it must be possible to gather evidence that will disconfirm the hypothesis if it is indeed false. Second, a good hypothesis must be logical. As described above, hypotheses are more than just a random guess. Hypotheses should be informed by previous theories or observations and logical reasoning. Typically, we begin with a broad and general theory and use deductive reasoning to generate a more specific hypothesis to test based on that theory. Occasionally, however, when there is no theory to inform our hypothesis, we use inductive reasoning, which involves using specific observations or research findings to form a more general hypothesis. Finally, the hypothesis should be positive. That is, the hypothesis should make a positive statement about the existence of a relationship or effect, rather than a statement that a relationship or effect does not exist. As scientists, we don’t set out to show that relationships do not exist or that effects do not occur, so our hypotheses should not be worded in a way that suggests an effect or relationship does not exist. The nature of science is to assume that something does not exist and then seek to find evidence to prove this wrong, to show that it really does exist. That may seem backward to you, but that is the nature of the scientific method. The underlying reason for this is beyond the scope of this chapter, but it has to do with statistical theory.

Key Takeaways

  • A theory is broad in nature and explains larger bodies of data. A hypothesis is more specific and makes a prediction about the outcome of a particular study.
  • Working with theories is not “icing on the cake.” It is a basic ingredient of psychological research.
  • Like other scientists, psychologists use the hypothetico-deductive method. They construct theories to explain or interpret phenomena (or work with existing theories), derive hypotheses from their theories, test the hypotheses, and then reevaluate the theories in light of the new results.
  • Practice: Find a recent empirical research report in a professional journal. Read the introduction and highlight in different colors descriptions of theories and hypotheses.
References

  1. Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61, 195–202.
  2. Zajonc, R. B., Heingartner, A., & Herman, E. M. (1969). Social enhancement and impairment of performance in the cockroach. Journal of Personality and Social Psychology, 13, 83–92.
  3. Zajonc, R. B., & Sales, S. M. (1966). Social facilitation of dominant and subordinate responses. Journal of Experimental Social Psychology, 2, 160–168.

University of Virginia

Hypothesis-Driven Development

Instructor: Alex Cowan

What you'll learn

  • How to drive valuable outcomes for your user and reduce waste for your team by diagnosing and prioritizing what you need to know about them
  • How to focus your practice of agile by pairing qualitative and quantitative analytics
  • How to do just enough research when you need it by running design sprints
  • How to accelerate value delivery by investing in your product pipeline

Skills you'll gain

  • Design and Product
  • Communication
  • Leadership and Management
  • Project Management
  • User Experience
  • Software Engineering


There are 4 modules in this course

To deliver agile outcomes, you have to do more than implement agile processes – you have to create focus around what matters to your user and constantly test your ideas. This is easier said than done, but most of today’s high-functioning innovators have a strong culture of experimentation.

In this course, you’ll learn how to identify the right questions at the right time, and pair them with the right methods to do just enough testing to make sure you minimize waste and maximize the outcomes you create with your user. This course is supported by the Batten Institute at UVA’s Darden School of Business. The Batten Institute’s mission is to improve the world through entrepreneurship and innovation: www.batteninstitute.org.

How Do We Know if We're Building for a User that Doesn't Exist?

How do you go from backlog grooming to blockbuster results with agile? Hypothesis-driven decisions. Specifically, you need to shift your teammates’ focus from their natural tendency to concentrate on their own output toward user outcomes. That is easier said than done, but getting everyone excited about the results of an experiment is one of the most reliable ways to get there. This week, we’ll focus on how to get started in a practical way.

What's included

22 videos 1 reading 1 quiz

22 videos • Total 88 minutes

  • Course Introduction • 4 minutes • Preview module
  • Hypotheses-Driven Development & Your Product Pipeline • 7 minutes
  • Introducing Example Company: HVAC in a Hurry • 1 minute
  • Driving Outcomes With Your Product Pipeline • 7 minutes
  • The Persona Hypothesis • 3 minutes
  • The JTBD Hypothesis • 3 minutes
  • The Demand Hypothesis • 2 minutes
  • The Usability Hypothesis • 2 minutes
  • The Collaboration Hypothesis • 2 minutes
  • The Functional Hypothesis • 2 minutes
  • Driving to Value with Your Persona & JTBD Hypothesis • 2 minutes
  • Example Personas and Jobs-to-be-Done • 4 minutes
  • Setting Up Interviews • 3 minutes
  • Prepping for Subject Interviews • 3 minutes
  • Conducting the Interview • 6 minutes
  • How Not to Interview • 6 minutes
  • Day in the Life • 4 minutes
  • You and Your Next Design Sprint • 4 minutes
  • The Practice of Time Boxing • 4 minutes
  • Overview of the Persona and JTBD Sprint • 2 minutes
  • How Do I Sell the Idea of a Design Sprint • 4 minutes
  • Your Persona & JTBD Hypotheses: What's Next For You? • 3 minutes

1 reading • Total 15 minutes

  • Course Overview & Requirements • 15 minutes

1 quiz • Total 20 minutes

  • Week 1 Quiz • 20 minutes

How Do We Reduce Waste & Increase Wins by Testing Our Propositions Before We Build Them?

Nothing will help a team deliver better outcomes like making sure they’re building something the user values. This might sound simple or obvious, but I think after this week it’s likely you’ll find opportunities to help improve your team’s focus by testing ideas more definitively before you invest in developing software. In this module, you’ll learn how to make concept testing an integral part of your product pipeline. We’ll continue to apply methods from Lean Startup, looking at how they pair with agile. We’ll look at how high-functioning teams design and run situation-appropriate experiments to test ideas, and how that works before the fact (when you’re testing an idea) and after the fact (when you’re testing the value of software you’ve released).
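
To make “testing propositions before we build them” concrete, here is a hedged sketch (not course material) of how the results of a simple demand test – such as a fake-door landing page – might be judged with a standard two-proportion z-test. The traffic numbers and the helper function are invented for the example; scipy is assumed to be available.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided
    return p_b - p_a, p_value

# Hypothetical fake-door results: control vs. "new feature" sign-up button.
lift, p = two_proportion_z(successes_a=48, n_a=1000, successes_b=72, n_b=1000)
print(f"observed lift: {lift:.1%}, p-value: {p:.3f}")  # ~2.4% lift, p ≈ 0.024
```

A result like this would give the team evidence of demand before any production software is written, which is exactly the waste-reduction move this module describes.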

20 videos 1 quiz 1 discussion prompt

20 videos • Total 120 minutes

  • Creating More Wins • 5 minutes • Preview module
  • Describing the Customer Experience (CX) for Testability • 8 minutes
  • CX Mapping for Prioritization and Testing • 6 minutes
  • Testing Demand Hypotheses with MVPs • 4 minutes
  • Learning What's Valuable • 7 minutes
  • Introducing Enable Quiz • 1 minute
  • Business to Consumer Case Studies • 9 minutes
  • Business to Business Case Studies • 6 minutes
  • Using a Design Sprint to Test Your Demand Hypothesis • 3 minutes
  • Lean Startup and Learning from Practice • 0 minutes
  • Interview: Tristan Kromer on the Practice of Lean Startup • 6 minutes
  • Interview: David Bland on the Practice of Lean Startup • 5 minutes
  • Interview: Tristan Kromer on Creating a Culture of Experimentation Part 1 • 7 minutes
  • Interview: Tristan Kromer on Creating a Culture of Experimentation Part 2 • 6 minutes
  • Interview: David Bland on Creating a Culture of Experimentation: Part 1 • 4 minutes
  • Interview: David Bland on Creating a Culture of Experimentation: Part 2 • 9 minutes
  • Interview: David Bland on Marrying Agile to Lean Startup • 7 minutes
  • Interview: David Bland on Using Hypothesis with Agile • 5 minutes
  • Interview: Laura Klein on the Right Kind of Research • 10 minutes
  • Your Demand Hypotheses: What's next for you? • 3 minutes

1 quiz • Total 20 minutes

  • Week 2 Quiz • 20 minutes

1 discussion prompt • Total 15 minutes

  • Learnings from David, Tristan, and Laura • 15 minutes

How Do We Consistently Deliver Great Usability?

The best products are tested for usability early and often, avoiding the destructive stress and uncertainty of a "big unveil." In this module, you’ll learn how to diagnose, design and execute phase-appropriate user testing. The tools you’ll learn to use here (a test plan template, prototyping tool, and test session infrastructure) are accessible/teachable to anyone on your team. And that’s a very good thing -- often products are released with poor usability because there "wasn’t enough time" to test it. With these techniques, you’ll be able to test early and often, reinforcing your culture of experimentation.
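
As one illustration of what a lightweight, team-accessible test plan can look like (my sketch under assumed conventions, not the course’s template), the structure below pairs each task with an observable pass/fail criterion and computes a completion rate across participants. The example tasks borrow the course’s fictional “HVAC in a Hurry” setting.

```python
# A minimal usability test plan: tasks paired with observable pass/fail criteria.
# Structure and field names are assumptions for illustration.
test_plan = [
    {
        "task": "Request a repair quote for a broken AC unit",
        "success": "reaches the confirmation screen unaided within 2 minutes",
    },
    {
        "task": "Find the status of an existing service request",
        "success": "locates the status without using search or asking the moderator",
    },
]

def completion_rate(results):
    """results: list of booleans, one per (participant, task) observation."""
    return sum(results) / len(results)

# Five participants ran both tasks; True means the criterion was met.
observations = [True, True, False, True, True, True, False, True, True, True]
print(f"task completion rate: {completion_rate(observations):.0%}")  # 80%
```

Because the criteria are observable behaviors rather than opinions, anyone on the team can moderate a session and record comparable results, which is what makes “test early and often” sustainable.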

19 videos 1 quiz 1 discussion prompt

19 videos • Total 90 minutes

  • The Always Test • 4 minutes • Preview module
  • A Test-Driven Approach to Usability • 5 minutes
  • The Inexact Science of Interface Design • 6 minutes
  • Diagnosing Usability with Donald Norman's 7 Steps Model • 8 minutes
  • Fixing Usability with Donald Norman's 7 Steps Model • 3 minutes
  • Applying the 7 Steps Model to Hypothesis-Driven Development • 3 minutes
  • Fixing the Visceral Layer • 4 minutes
  • Fixing the Behavioral Layer: The Importance of Comparables & Prototyping • 9 minutes
  • Prototyping With Balsamiq • 4 minutes
  • Usability Testing: Fun & Affordable • 2 minutes
  • The Right Testing at the Right Time • 2 minutes
  • A Test Plan Anyone Can Use • 6 minutes
  • Creating Good Test Items • 3 minutes
  • Running a Usability Design Sprint • 3 minutes
  • Running a Usability Design Sprint Skit • 5 minutes
  • Interview: Laura Klein on Qualitative vs. Quantitative Research • 4 minutes
  • Interview: Laura Klein on Lean UX in Enterprise IT • 5 minutes
  • Prioritizing User Outcomes with Story Mapping • 4 minutes
  • Your Usability Hypotheses: What's Next For You? • 3 minutes

1 quiz • Total 20 minutes

  • Week 3 Quiz • 20 minutes

1 discussion prompt • Total 15 minutes

  • How will these techniques help you? • 15 minutes

How Do We Invest to Move Fast?

You’ve learned how to test ideas and usability to reduce the amount of software your team needs to build and to focus its execution. Now you’re going to learn how high-functioning teams approach testing of the software itself. The practice of continuous delivery and the closely related DevOps movement are changing the way we build and release software. It wasn’t that long ago that 2–3 releases a year was considered standard; now Amazon, for example, releases code every 11.6 seconds. This week, we’ll look at the delivery pipeline, step through what successful practitioners do at each stage, and see how you can diagnose and apply the practices that will improve your implementation of agile.
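
To ground the test pyramid this module describes, here is a hedged sketch of how a team might tag tests by size so the commit stage runs only fast tests while slower suites run later in the pipeline. It uses pytest’s custom markers (a real pytest feature); the marker names, the staging, and the toy function are assumptions for illustration.

```python
# test_checkout.py – tagging tests by pyramid layer with pytest markers.
import pytest

def apply_discount(total: float, code: str) -> float:
    """Toy function under test: 10% off with the SAVE10 code."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

@pytest.mark.small  # unit test: fast, runs on every commit
def test_discount_applies():
    assert apply_discount(100.0, "SAVE10") == 90.0

@pytest.mark.medium  # integration-style test: runs after the commit stage
def test_discount_persists(tmp_path):
    db = tmp_path / "orders.txt"
    db.write_text(str(apply_discount(100.0, "SAVE10")))
    assert db.read_text() == "90.0"

# Run only the fast layer in the commit stage:  pytest -m small
# Custom markers should be registered in pytest.ini:
#   [pytest]
#   markers =
#       small: fast unit tests
#       medium: integration tests
```

The design choice mirrors the pyramid: many cheap small tests give quick feedback on every commit, while fewer, slower tests gate later stages of the release pipeline.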

24 videos 1 quiz 1 peer review

24 videos • Total 128 minutes

  • Functional Hypotheses and Continuous Delivery • 6 minutes • Preview module
  • The Team that Releases Together • 4 minutes
  • Getting Started with Continuous Delivery • 3 minutes
  • Anders Wallgren on Getting Started • 4 minutes
  • The Test Pyramid • 6 minutes
  • The Commit & Small Tests Stage • 2 minutes
  • The Job of Version Control • 3 minutes
  • Medium Tests • 1 minute
  • Large Tests • 6 minutes
  • Creating Large/Behavioral Tests • 9 minutes
  • Anders Wallgren on Functional Testing • 9 minutes
  • Release Stage • 4 minutes
  • The Job of Deploying • 6 minutes
  • Anders Wallgren on Deployment • 2 minutes
  • Chris Kent on Developing with Continuous Delivery • 10 minutes
  • Chris Kent on Continuous Deployment • 11 minutes
  • Test-Driven General Management • 5 minutes
  • Narrative and the 'Happy Path' • 3 minutes
  • The Emergence of DevOps and the Ascent of Continuous Delivery • 4 minutes
  • Design for Deployability • 2 minutes
  • Anders Wallgren on Continuous Deployment • 3 minutes
  • Anders Wallgren on Creating a Friendly Environment for Continuous Deployment • 6 minutes
  • Your Functional Hypotheses: What's Next For You? • 2 minutes
  • Course Conclusion • 8 minutes

1 quiz • Total 20 minutes

  • Week 4 Quiz • 20 minutes

1 peer review • Total 90 minutes

  • Creating and Testing a Demand/Value Hypothesis • 90 minutes


A premier institution of higher education, The University of Virginia offers outstanding academics, world-class faculty, and an inspiring, supportive environment. Founded by Thomas Jefferson in 1819, the University is guided by his vision of discovery, innovation, and development of the full potential of students from all walks of life. Through these courses, global learners have an opportunity to study with renowned scholars and thought leaders.



Recommended if you're interested in Computer Science

hypothesis driven development examples

University of Virginia

Managing an Agile Team

hypothesis driven development examples

Product Analytics and AI

hypothesis driven development examples

Agile Meets Design Thinking

hypothesis driven development examples

Board Infinity

Fluent Assertion


J Oral Maxillofac Pathol, 23(2), May–Aug 2019

Hypothesis-driven Research

Umadevi Krishnamohan Rao

Department of Oral and Maxillofacial Pathology, Ragas Dental College and Hospital, Chennai, Tamil Nadu, India. E-mail: umauvk@gmail.com


As oral pathologists, we have a responsibility to upgrade the quality of our service, with an open-minded attitude and with gratitude for the contributions made by our professional colleagues. Teaching students is the faculty’s first priority, but oral pathologists have an equal responsibility to contribute to the literature as researchers.

Research is a scientific method of answering a question. It succeeds when work done on a representative sample of a population yields an outcome that can be applied to the rest of the population from which the sample is drawn. The most frequently conducted research of this kind is hypothesis-driven research, which is based on scientific theories: specific aims are listed, objectives are stated, and a well-designed methodology equips the researcher to state the outcome of the study.

A provisional statement that describes the relationship between two variables is known as a hypothesis. It is very specific and offers the freedom to evaluate a prediction between the variables stated. It allows the researcher to envision and gauge what changes will occur in the specified outcome (dependent) variables when changes are made to a specific predictor (independent) variable. Any given hypothesis should therefore include both variables, and the primary aim of the study should be to demonstrate the association between them while maintaining the highest ethical standards.

A hypothesis-based study has two further requisites: we should state the level of statistical significance, and we should specify the power, defined as the probability that a statistical test will indicate a significant difference when one truly exists.[ 1 ] In hypothesis-driven research, these methodological specifications help grant reviewers differentiate good science from bad science, which is why hypothesis-driven research is the most funded research.[ 2 ]
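
As a hedged illustration of what specifying significance and power means in practice (my example, not the editorial’s), the normal-approximation formula below gives the per-group sample size needed to detect a difference between two proportions at a chosen alpha and power. The response rates are invented, and scipy is assumed to be available.

```python
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-proportion z-test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return int(n) + 1

# Hypothetical: detect an improvement from a 10% to a 15% response rate.
print(sample_size_two_proportions(0.10, 0.15))  # ≈ 683 subjects per group
```

Stating these numbers up front is what lets a reviewer judge whether a proposed study can actually answer its question.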

“Hypotheses aren’t simply useful tools in some potentially outmoded vision of science; they are the whole point.” This was stated by Sean Carroll of the California Institute of Technology in response to Chris Anderson, Editor-in-Chief of Wired, who argued that “biology is too complex for hypotheses and models” and favored correlative analysis of enormous datasets.[ 3 ]

Research does not stop at stating a hypothesis. The hypothesis must be clear, testable and falsifiable, and it should serve as the fundamental basis for constructing a methodology that allows it to be either retained (the study favoring the null hypothesis) or rejected (the study rejecting the null hypothesis in favor of the alternative hypothesis).

It is worrying to observe that many research projects that require a hypothesis are conducted without stating one. The hypothesis is the backbone of the question to be asked and tested, and of any later analytical study that extrapolates the findings to address the research question.

A good dissertation, thesis, or manuscript comprises a thoughtful, scientifically designed study addressing an interesting concept. Evolving academicians now compete to prove their point and stay academically visible, which matters greatly to their career graph; even so, unscientific research or short-cut methodology should never be conducted or encouraged in order to produce a research finding or publish a manuscript.

The other type of research is exploratory research: a journey of discovery that is not backed by previously established theory and is driven by the hope of, and chance at, a breakthrough. Its appeal is that statistics can be applied to large datasets to generate predictions without the design constraints that a conventional hypothesis imposes. For exactly that reason, studies conducted without a hypothesis need a much higher cutoff of statistical evidence before their findings are accepted.
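
One concrete reason for that higher cutoff (my illustration, not the editorial’s): when many relationships are tested without a prior hypothesis, some will appear significant by chance alone, which is why corrections such as Bonferroni tighten the per-test threshold. A minimal sketch:

```python
# Why exploratory (many-comparisons) analysis needs a stricter threshold:
# with 100 independent tests at alpha = 0.05, false positives are expected
# even when no real effect exists anywhere.
alpha, m = 0.05, 100

expected_false_positives = alpha * m          # 5.0
prob_at_least_one = 1 - (1 - alpha) ** m      # ~0.994
bonferroni_alpha = alpha / m                  # 0.0005 per test

print(f"expected false positives: {expected_false_positives}")
print(f"P(at least one false positive): {prob_at_least_one:.3f}")
print(f"Bonferroni-corrected per-test alpha: {bonferroni_alpha}")
```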

In the past few years, nonhypothesis-driven research has emerged and does receive encouragement from funding programs such as the Innovative Molecular Analysis Technologies program. The point to note is that funding of nonhypothesis-driven research does not imply a decrease in support for hypothesis-driven research; the objective is to encourage multidisciplinary research that depends on the coordinated, cooperative execution of many branches of science across institutions. Translational research is therefore challenging and carries the risk associated with lacking the preliminary data needed to establish a hypothesis.[ 4 ]

The merit of hypothesis testing is that it takes the next stride in scientific theory, building on work that has already stood the rigors of examination. Hypothesis testing has been in practice for more than five decades and is considered a standard requirement when proposals are submitted for evaluation. Stating a hypothesis is mandatory if study results are to be made applicable. Young professionals must be apprised of the merits of hypothesis-based research and trained to understand the scope of exploratory research.


6 Common Leadership Styles — and How to Decide Which to Use When

Rebecca Knight


Being a great leader means recognizing that different circumstances call for different approaches.

Research suggests that the most effective leaders adapt their style to different circumstances — be it a change in setting, a shift in organizational dynamics, or a turn in the business cycle. But what if you feel like you’re not equipped to take on a new and different leadership style — let alone more than one? In this article, the author outlines the six leadership styles Daniel Goleman first introduced in his 2000 HBR article, “Leadership That Gets Results,” and explains when to use each one. The good news is that personality is not destiny. Even if you’re naturally introverted or you tend to be driven by data and analysis rather than emotion, you can still learn how to adapt different leadership styles to organize, motivate, and direct your team.

Much has been written about common leadership styles and how to identify the right style for you, whether it’s transactional or transformational, bureaucratic or laissez-faire. But according to Daniel Goleman, a psychologist best known for his work on emotional intelligence, “Being a great leader means recognizing that different circumstances may call for different approaches.”


Rebecca Knight is a journalist who writes about all things related to the changing nature of careers and the workplace. Her essays and reported stories have been featured in The Boston Globe, Business Insider, The New York Times, BBC, and The Christian Science Monitor. She was shortlisted as a Reuters Institute Fellow at Oxford University in 2023. Earlier in her career, she spent a decade as an editor and reporter at the Financial Times in New York, London, and Boston.


Further Reading

  1. How to Implement Hypothesis-Driven Development

    Examples of Hypothesis-Driven Development user stories are; Business Story. We Believe That increasing the size of hotel images on the booking page Will Result In improved customer engagement and conversion We Will Have Confidence To Proceed When we see a 5% increase in customers who review hotel images who then proceed to book in 48 hours.

  2. How to Implement Hypothesis-Driven Development

    Examples of Hypothesis-Driven Development user stories are; Business story. We Believe That increasing the size of hotel images on the booking page. Will Result In improved customer engagement and conversion. We Will Know We Have Succeeded When we see a 5% increase in customers who review hotel images who then proceed to book in 48 hours.

  3. What is hypothesis-driven development?

    Hypothesis-driven development in a nutshell. As the name suggests, hypothesis-driven development is an approach that focuses development efforts around, you guessed it, hypotheses. To make this example more tangible, let's compare it to two other common development approaches: feature-driven and outcome-driven.

  4. Hypothesis-Driven Development (Practitioner's Guide)

    Like agile, hypothesis-driven development (HDD) is more a point of view with various associated practices than it is a single, particular practice or process. That said, my goal here is for you to leave with a solid understanding of how to do HDD and a specific set of steps that work for you to get started. After reading this guide and trying ...

  5. Guide for Hypothesis-Driven Development: How to Form a List of

    The hypothesis-driven development management cycle begins with formulating a hypothesis according to the "if" and "then" principles. In the second stage, it is necessary to carry out several works to launch the experiment (Action), then collect data for a given period (Data), and at the end, make an unambiguous conclusion about whether ...

  6. The 6 Steps that We Use for Hypothesis-Driven Development

    Hypothesis-driven development is a prototype methodology that allows product designers to develop, test, and rebuild a product until it's acceptable to the users. It is an iterative measure that explores assumptions defined during the project and attempts to validate them with users' feedback. ... For example, if you have a social media app ...

  7. Hypothesis-driven development: Definition, why and implementation

    Hypothesis-driven development emphasizes a data-driven and iterative approach to product development, allowing teams to make more informed decisions, validate assumptions, and ultimately deliver products that better meet user needs. Hypothesis-driven development (HDD) is an approach used in software development and product management.

  8. Lessons from Hypothesis-Driven Development

    The principle of hypothesis-driven development is to apply scientific methods to product development. Defining success criteria and then forming testable hypotheses around how to meet them. Over ...

  9. Hypothesis-Driven Development

    Hypothesis-Driven Development. Connor Mullen, Lincoln Laboratory, June 2022. Overview: In developing large, complex Department of Defense ... For example, an intent could be to improve message throughput in a mission system. A hypothesis is formed to fulfill the intent and key metrics are identified. Then

  10. Hypothesis-Driven Development

    For example, a hypothesis might propose that introducing a real-time chat feature will lead to increased user engagement by facilitating instant communication. The Process. The process of Hypothesis-Driven Development involves a series of steps. Initially, development teams formulate clear and specific hypotheses based on the goals of the ...

  11. Hypothesis-Driven Development

    Hypotheses-Driven Development & Your Product Pipeline • 7 minutes. Introducing Example Company: HVAC in a Hurry • 1 minute. Driving Outcomes With Your Product Pipeline • 7 minutes. The Persona Hypothesis • 3 minutes. The JTBD Hypothesis • 3 minutes. The Demand Hypothesis • 2 minutes. The Usability Hypothesis • 2 minutes.

  12. Why hypothesis-driven development is key to DevOps

    Hypothesis-driven development is based on a series of experiments to validate or disprove a hypothesis in a complex problem domain where we have unknown-unknowns. We want to find viable ideas or fail fast. ... Example: We believe that users want to be able to select different themes because it will result in improved user satisfaction. We ...

  13. What I learned at McKinsey: How to be hypothesis-driven

    McKinsey consultants follow three steps in this cycle: Form a hypothesis about the problem and determine the data needed to test the hypothesis. Gather and analyze the necessary data, comparing ...

  14. Scrum and Hypothesis Driven Development

    Scrum and Hypothesis Driven Development. The opportunities and consequences of being responsive to change have never been higher. Organizations that once had many years to respond to competitive, environmental or socio/political pressures now have to respond within months or weeks. Organizations have to transition from thoughtful, careful ...

  15. Data-driven hypothesis development

    Data-driven hypothesis development (DDHD) is an effective approach when facing complex "known unknown" and "unknown unknown" problems. There are four steps to the approach: 1. Define the goal using data. Problem solving starts with well defined problems, however, we know that "known unknowns" and "unknown unknowns" are rarely ...

  16. How McKinsey uses Hypotheses in Business & Strategy by McKinsey Alum

    The first step in being hypothesis-driven is to focus on the highest potential ideas and theories of how to solve a problem or realize an opportunity. Let's go over an example of being hypothesis-driven. Let's say you own a website, and you brainstorm ten ideas to improve web traffic, but you don't have the budget to execute all ten ideas.

  17. Hypothesis-driven approach: the definitive guide

    Using a hypothesis-driven approach is critical to solving a problem efficiently. In other words: A hypothesis will limit the number of analysis you need to perform to solve a problem. Thus, this is a way to apply the 80/20 principle and prioritize the issues (from your MECE issue tree) you want to investigate.

  18. Hypothesis-driven Development

    Success Story of Hypothesis-driven Development . A great example of a company that successfully used HDD while developing its main offering is Dropbox, the popular remote file-sharing service. Today, Dropbox is used by over 45 million users. However, when they initially launched, they had several competitors in the same domain.

  19. Hypothesis-driven product management

    Yes, hypothesis-driven practices are ideal for building new features. Since the goal is to test the validity of each hypothesis, the uncertainty around the product development process is significantly reduced. In a way, hypothesis testing helps you make better decisions about your product lifecycle management.

  20. 2.4 Developing a Hypothesis

    A theory is broad in nature and explains larger bodies of data. A hypothesis is more specific and makes a prediction about the outcome of a particular study. Working with theories is not "icing on the cake." It is a basic ingredient of psychological research. Like other scientists, psychologists use the hypothetico-deductive method.

  21. How to Write a Strong Hypothesis

    5. Phrase your hypothesis in three ways. To identify the variables, you can write a simple prediction in if…then form. The first part of the sentence states the independent variable and the second part states the dependent variable. If a first-year student starts attending more lectures, then their exam scores will improve.

  22. Hypothesis-Driven Development

    Offered by University of Virginia. To deliver agile outcomes, you have to do more than implement agile processes- you have to create focus ... Enroll for free.

  23. Hypothesis-driven Research

    The scope of a well-designed methodology in a hypothesis-driven research equips the researcher to establish an opportunity to state the outcome of the study. A provisional statement in which the relationship between two variables is described is known as hypothesis. It is very specific and offers the freedom of evaluating a prediction between ...

  24. 6 Common Leadership Styles

    Much has been written about common leadership styles and how to identify the right style for you, whether it's transactional or transformational, bureaucratic or laissez-faire. But according to ...