Rapid Experimentation: The Road To Innovation (Complete Guide)



Gust de Backer

June 26, 2024.

Rapid Experimentation


You read about it left and right: companies that owe much of their success to experimentation.

Of course, experimentation can be understood very broadly, so in this article I’m going to get you started with:

  • Understanding why experimentation is important
  • Starting experiments
  • Supervising and organizing experiments

This blog article can also be seen as a summary of the book Testing Business Ideas by David J. Bland and Alex Osterwalder.

Let’s get started quickly.


Why start experimenting?

Innovation today is happening at an unprecedented rate, which makes it more important than ever to keep innovating yourself.

To innovate with your business you will need to start implementing ideas. But how do you avoid spending attention, time and money on ideas that only look good on paper?

Experimentation allows ideas to be tested without spending a lot of time, energy and resources on them:

Hypotheses + Experiments + Insights = Less Uncertainty & Risk

Phase 1 is the setup: structuring the team and organizing the ideation process….

1.1 Form the team

Teams almost always consist of the following disciplines:

Growth Hacking

It is important to have diversity on your team because everyone has different skills, past experiences and perspectives.

There are 6 factors that a successful team meets:

  • Data influenced : insights from data fill the backlog and inform the strategy. You don’t necessarily have to be data-driven, but you must let yourself be influenced by the data.
  • Customer centric : it is important to know the ‘why’ behind the work.
  • Iterative approach : teams should always work towards a desired outcome in the form of a repeating cycle of processes.
  • Experiment driven : teams must dare to be wrong and not just be focused on making features.
  • Entrepreneurial : move quickly and validate assumptions. Think creative problem-solving at high speed.
  • Question assumptions : don’t be afraid to test out a disruptive business model, don’t always play it safe.

As experiments become more complex, your team will also grow:

Rapid Experimentation Team

Environment

Teams need an environment in which they can develop themselves. To create that environment, teams must be:

  • Dedicated : teams must be dedicated to the work; multitasking between different projects is bad for progress.
  • Funded : experimenting costs money; based on the learnings during stakeholder reviews, budget can be allocated (venture-capital style, more on that later).
  • Autonomous : teams must be given room to work. Don’t micromanage the team; that only delays progress.

For a company it is important to offer the following:

  • Leadership : a facilitative leadership style is desired because no one knows the exact solution. Lead with questions, not answers.
  • Coaching : coaches with extensive experience of experimentation can inspire the team.
  • Customers : teams need access to customers and should definitely not be isolated from them.
  • Resources : teams need enough resources to make progress and generate evidence.
  • Strategy : teams need to be clear about where they are going and what strategy they are going to use. By not having a clear strategy you will confuse being busy with making progress. You could use the OGSM canvas for this.
  • KPIs : teams need to be able to show how much progress they have made. Think for example of the North Star Metric .

Strategyzer recommends the Team Alignment Map .

1.2 Describe the ideas

The design loop has 3 steps:

Design loop

  • Ideate : come up with as many alternative ideas as possible based on your intuition and insights. Don’t fall in love with your first ideas.
  • Business prototype : start with a low-effort prototype, such as an outline or a filled-in Value Proposition Canvas and Business Model Canvas / Lean Canvas . As time passes, your prototypes will become more professional.
  • Assess : judge each prototype by asking questions such as:
      • Is this the best way to address our customers’ jobs, pains and gains?
      • Is this the best way to monetize our idea?
      • Does this take into account everything we learned in testing?

In phase two, we actually get started….

2.1 Make up hypotheses

When formulating hypotheses you start with ‘we believe that …’ , but avoid only trying to prove things you already believe. You can also formulate the opposite: ‘we believe that … is NOT the case’ .

There are three properties that a good hypothesis must meet:

  • Testable : hypotheses must be testable and have an outcome of true/false.
  • Precise : it must be exactly clear what success looks like. Ideally, it also describes the what, who and when.
  • Discrete : an experiment should only test one thing otherwise it should be split into multiple experiments.
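
As a sketch, these three properties can be captured in a small data structure. The field names and the example hypothesis below are illustrative assumptions, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Testable, precise and discrete: one belief, one metric, one threshold."""
    belief: str    # "We believe that ..."
    metric: str    # what success is measured on (precise: the 'what')
    target: float  # the threshold that makes the outcome true/false (testable)
    segment: str   # who the hypothesis is about (precise: the 'who')

    def is_validated(self, observed: float) -> bool:
        # Discrete: a single comparison, so the hypothesis tests one thing.
        return observed >= self.target

h = Hypothesis(
    belief="We believe that SMB owners will sign up for a free trial",
    metric="trial sign-up rate",
    target=0.05,
    segment="SMB owners",
)
print(h.is_validated(0.08))  # True: an observed 8% meets the 5% target
```

If a hypothesis needs two comparisons to evaluate, that is a sign it should be split into two hypotheses.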

Hypotheses can relate to each of the building blocks of your business model:

  • Customer Segments: we target the right segment and that segment is large enough.
  • Channels: we make (good) use of the right channels.
  • Value Propositions: we have the right (unique) value proposition(s).
  • Customer Relationships: we have a good relationship with customers and we know how to keep them.
  • Key Activities: activities can be scaled up without loss of quality.
  • Key Resources: important resources can be scaled up.
  • Key Partners: we have the right partners.
  • Revenue Streams: customers actually want to pay for our product or service.
  • Cost Structure: costs can be managed and are under control.
  • Profit: we can make a profit.

Write the different types of hypotheses on sticky notes and stick them on the corresponding sections of the Business Model Canvas.

Then you can start prioritizing the hypotheses:

Risk Assumption Matrix
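
A sketch of that prioritization in code: rank hypotheses so that important assumptions with little supporting evidence are tested first. The 1-5 scores and example hypotheses are assumptions for illustration:

```python
# Illustrative hypotheses with made-up 1-5 scores: how important is this
# assumption for the business, and how much evidence do we already have?
hypotheses = [
    {"name": "Customers will pay $29/month", "importance": 5, "evidence": 1},
    {"name": "Email is the right channel",   "importance": 3, "evidence": 4},
    {"name": "Churn stays under 5%",         "importance": 4, "evidence": 2},
]

# The riskiest assumptions (high importance, little evidence) come first.
ranked = sorted(hypotheses, key=lambda h: (-h["importance"], h["evidence"]))
for h in ranked:
    print(h["name"])
```

Here the pricing hypothesis ranks first: if it is wrong, the business model fails, and there is almost no evidence for it yet.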

2.2 Experiment & Learn

To experiment, it is helpful to get started with the following tools:

  • Experiment Card : describe the reason for and the setup of the experiment.
  • Learnings Card : describe the insights you gained from the experiment.
  • Experiment Canvas / Airtable : document your experiments.

There is a difference between weak and strong evidence:

Types of evidence

To share your experiments with stakeholders you can use the Experiment Design & Analysis .

Insights alone don’t do you much good, it’s about what you do with those insights.

Deciding is about taking action, in the sense of:

  • The next steps you will take in removing uncertainty and risk.
  • Making decisions based on the insights .
  • Deciding to continue, change, or abandon an experiment.

To share knowledge and provide structure, it is necessary to schedule the following:

Meetings

This seems like a lot, but it’s not too bad in the end:

Occupation of Meetings

Weekly Planning

Duration : 30 – 60 minutes.

When : once a week, after the weekly learning.

Participants : core team.

  • Devise hypotheses.
  • Prioritize experiments

Scrum Board with Waiting

Daily Standup

Duration : 15 minutes.

When : every weekday

  • What is the goal today?
  • How are we going to achieve that goal?
  • What is still in the way?

Weekly Learning

When : once a week, before the weekly planning.

Participants : core team and extended team.

  • Collect evidence from experiments.
  • Generate insights, look for patterns in outcomes of experiments. Be open-minded in this, make sure you don’t overlook any (unexpected) insights.
  • Go back to your Business Model Canvas, Value Proposition Canvas and Assumptions Map with the new insights and make updates. Then you can incorporate your insights into your strategy.

Biweekly Retrospective

When : once every two weeks, after the weekly learning and before the weekly planning.

  • What is going well?
  • What needs to be improved?
  • What are we going to try next?

In addition, you can add other options, such as:

  • What should we start with?
  • What should we stop doing?
  • What should we keep?
  • What should we do more of?

Monthly Stakeholder Reviews

Duration : 60 – 90 minutes.

When : once a month.

Participants : stakeholders, extended team & core team.

  • What have we learned?
  • What is holding back progress? (document blockers during the month)
  • Pivot / Persevere / Kill decision

To communicate effectively with different departments, there are guidelines you can follow:

  • Our customer segment is _____
  • The total number of customers involved in our experiment is about _____
  • Our experiment will run from _____ to _____
  • Information we acquire in the process is _____
  • Branding we use is _____
  • Financial resources we need are _____
  • We can launch the experiment by _____

3. Experiment

In phase 3 we dive deeper into the different experiments….

3.1 Select an experiment

To select an experiment you can use the ICE Framework:

  • Impact : how big could the impact of this experiment be?
  • Confidence : how confident are we that this will work?
  • Effort : how difficult is it to conduct this experiment?
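
A minimal sketch of ICE scoring, assuming a 1-10 scale and a simple average (conventions vary; some teams multiply the scores instead). Note that it is convenient to rate Ease, the inverse of Effort, so that harder experiments score lower:

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """Average of three 1-10 ratings (illustrative convention)."""
    return (impact + confidence + ease) / 3

# Made-up candidate experiments and ratings.
experiments = {
    "landing-page headline test": ice_score(7, 8, 9),
    "new pricing model":          ice_score(9, 4, 2),
    "referral program MVP":       ice_score(8, 6, 5),
}

# Run the highest-scoring experiment first.
best = max(experiments, key=experiments.get)
print(best)  # landing-page headline test, scoring 8.0
```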

Some rules of thumb:

  • Be cheap and fast at the beginning : you hardly know anything about the result yet, don’t make yourself too dependent on this experiment.
  • Strengthen your evidence with multiple experiments on the same hypothesis: don’t make decisions based on weak evidence or one experiment.
  • Choose experiments that yield strong evidence : design experiments so that they always produce strong evidence as an outcome.
  • Reduce uncertainty as much as possible : to test a hypothesis you don’t necessarily have to build something from A-Z.

3.2 Discover

Discovery Experiments

3.3 Validate

Validation Experiments

3.4 Sequences

Experiment Sequences

When experimenting, it’s important to provide the right leadership from the top down.

4.1 Avoid experiment pitfalls

There are several pitfalls you should not fall into:

  • Time trap : not dedicating enough time. You get what you invest in, spend enough time each week testing, learning and deciding.
  • Analysis Paralysis : thinking too long about things you simply need to test. Get out of the building instead of endlessly mulling over ideas, and timebox your analysis.
  • Incomparable data / evidence : collecting unclear data that cannot be compared.
  • Weak data / evidence : only measuring what people say, not what they actually do.
  • Confirmation bias : only believing evidence that matches your expectations.
  • Too few experiments : running only one experiment per key hypothesis.
  • Failure to learn and adapt : spending too little time analyzing evidence and turning it into insights and actions.
  • Outsourced testing : outsourcing the testing you should be doing, and learning from, yourself.

4.2 Lead by experimentation

There are some things you need to think about as a leader:

  • Language : be careful with your choice of words; even with a lot of experience and knowledge, don’t steer your team too much in the direction you want, or they will eventually start waiting for you to assign them experiments.
  • Accountability : focus on business results, not just features and dates.
  • Facilitation : talk less in terms of I, me or mine and deadlines, and more in terms of we, us or our and how certain results will be achieved.

There are several steps leaders can take to facilitate this:

  • Enabling environment : make enough resources and time available to iteratively test ideas.
  • Evidence trumps opinion : experience and track record don’t mean that much, evidence from testing is more important than the opinion that comes from experience.
  • Remove obstacles and open doors : fix missing access and specialized resources; often teams don’t even have sufficient access to customers.
  • Ask questions rather than provide answers : important to push teams to test better value propositions and business models.
  • Meet your teams one-half step ahead : bring your team along in the process; don’t leave them behind. Decide where you want the team to end up, then figure out how to get them there. One-on-ones, retrospectives and walk-throughs can help with this.
  • Understand context before giving advice : before you give advice, understand the context. Let people talk, then ask clarifying questions such as “How are we going to do this?” and “What do you think?”

4.3 Organize for experiments

Often it is not exactly clear at the beginning what a solution will look like. Things change along the way, which is why it is useful to have cross-functional teams:

Functional vs Cross functional teams

This will, in fact, allow for faster response to change.

In addition, it is important to adopt more of a venture-capital funding style rather than large budgets on an annual basis, because that incentivizes bad behavior:

Venture Capital Style Funding

4.4 Testing principles

There are a number of principles to keep in mind while experimenting:

  • Evidence is better than opinion.
  • Learn quickly and reduce risk by embracing failure.
  • Test early; perfect later.
  • Experiments do not equal reality.
  • Find the balance between learnings and the vision.
  • Start with the most important tests, which can undermine your entire hypothesis/idea.
  • Make sure you understand your customer first.
  • Make it measurable.
  • Accept that not all facts are equal, an interviewee may say one thing, but do another.
  • Double test important irreversible decisions.

Are you ready?

So, now you are armed to start Rapid Experimentation….

Now I want to know from you, what has been your most successful experiment so far?

Let me know in a comment.

P.S. if you want any additional help, let me know at [email protected] .




What is an experimentation loop?

Article originally published in June 2023 by Stuart Brameld . Most recent update in April 2024.


Stuart is the Founder of Growth Method, Growth Advisor to B2B companies (currently Colt, Visio and MobiLoud) and Mentor at Growth Mentor.


Definition of an experimentation loop

An experimentation loop is a continuous process of testing, learning, and iterating marketing strategies to optimise performance and achieve desired outcomes. For marketers, this involves designing and implementing experiments, collecting and analysing data, drawing insights, and making data-driven decisions to refine marketing tactics. By consistently going through this loop, marketers can identify the most effective approaches, uncover new opportunities, and adapt to changing market conditions, ultimately driving growth and maximising return on investment.

An example of an experimentation loop

Here is an example of how it works:

1. Define Objective: increase the number of new subscribers for Growth Method by 20% in the next quarter.
2. Formulate Hypothesis: offering a 14-day free trial will attract more potential customers and lead to an increase in new subscribers.
3. Identify Key Metrics: number of new subscribers, conversion rate from free trial to paid subscription, and churn rate.
4. Design Experiment: implement a 14-day free trial option on the Growth Method website and track the number of sign-ups, conversions, and churns.
5. Execute Experiment: launch the 14-day free trial offer and monitor the key metrics for a period of three months.
6. Analyze Results: compare the number of new subscribers, conversion rate, and churn rate before and after the introduction of the free trial.
7. Draw Conclusions: if the results show a significant increase in new subscribers and a positive conversion rate, the hypothesis is validated and the free trial offer can be continued. If not, a new hypothesis should be formulated and tested.
8. Iterate: based on the conclusions, either continue with the current strategy or develop a new hypothesis and repeat the experimentation loop.
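
The loop above can be sketched as simple control flow. Here `run_experiment` is a stand-in for actually launching and measuring the free-trial test, and the numbers are illustrative:

```python
def run_experiment(offer: str) -> dict:
    # Stand-in for steps 4-5: in reality you would launch the offer and
    # collect metrics over the experiment period. Numbers are made up.
    return {"offer": offer, "new_subscribers": 260, "baseline": 200}

def experimentation_loop(hypothesis: str, target_uplift: float = 0.20) -> str:
    results = run_experiment(hypothesis)
    # Step 6: analyze the uplift against the baseline.
    uplift = results["new_subscribers"] / results["baseline"] - 1
    # Step 7: draw conclusions; step 8 would repeat with a new hypothesis.
    if uplift >= target_uplift:
        return "validated: continue the free trial offer"
    return "invalidated: formulate and test a new hypothesis"

print(experimentation_loop("14-day free trial"))
```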

How does an experimentation loop work?

An experimentation loop works by continuously testing, analyzing, and optimizing marketing strategies to achieve the best possible results. Marketers begin by identifying a hypothesis or a specific aspect of their campaign they want to improve. They then design an experiment to test this hypothesis, such as an A/B test comparing two different ad creatives. Once the experiment is conducted, marketers analyze the data to determine which variation performed better and why. Based on these insights, they can make data-driven decisions to optimize their marketing efforts. This process is repeated in a cyclical manner, allowing marketers to constantly refine their strategies and maximize their return on investment.
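
For the analysis step, one common approach to an A/B comparison is a two-proportion z-test on the conversion counts. This sketch uses the pooled normal approximation, which is reasonable for large samples; the traffic numbers are made up:

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates,
    using the pooled normal approximation (suitable for large samples)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Variant B converts 120 of 2,000 visitors; the original A converts 90 of 2,000.
p = two_proportion_p_value(90, 2000, 120, 2000)
print(p < 0.05)  # True here: the difference is significant at the 5% level
```

In practice a statistics library (or the testing platform itself) would do this calculation, but the logic is the same.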


Expert opinions and perspectives

Here is how some of the world’s best marketing and growth professionals think about the experimentation loop.

  • “The only way to win at content marketing is for the reader to say, ‘This was written specifically for me.’ The way to get there is to continuously iterate your content, and the only way to do that is to adopt an experimentation loop.” – Jamie Turner, Founder of 60 Second Marketer
  • “Test fast, fail fast, adjust fast.” – Tom Peters, American writer on business management practices
  • “Innovation needs to be part of your culture. Consumers are transforming faster than we are, and if we don’t catch up, we’re in trouble. The best way to catch up is through an experimentation loop.” – Ian Schafer, Founder and CEO of Deep Focus

Questions to ask yourself

As a modern growth marketing or agile marketing professional, ask yourself the following questions with regard to an experimentation loop:

  • What is the primary goal or objective of this experiment, and how does it align with our overall marketing and growth strategy?
  • What are the key performance indicators (KPIs) that will help us measure the success of this experiment, and how will we track and analyze them?
  • What resources (time, budget, personnel) are required to execute this experiment, and how can we ensure that it is implemented efficiently and effectively?
  • How will we validate the results of this experiment, and what criteria will we use to determine whether it should be scaled, optimized, or discontinued?
  • What learnings can we gather from this experiment, and how can we apply these insights to future marketing and growth initiatives?

Additional reading

Here are some related articles and further reading around an experimentation loop that you may find helpful.

  • The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses
  • Experimentation Works: The Surprising Power of Business Experiments

See how this topic is trending on Google Trends here: https://trends.google.com/trends/explore?date=all&q=experimentation

More questions? Connect with me on LinkedIn or Twitter , or book a Growth Call .



Introducing Experimentation Loop


Take a look at the history of technological progress.

You can see that advanced technology did not come out of the blue. It evolved with one advancement becoming the foundation for another.

For instance, the smartphone industry stands on the foundation of numerous technological breakthroughs. From the initial landline telephones, the concept of cordless phones emerged, followed by the integration of mobile communication with computing power. 

Over time, we witnessed an evolution from personal digital assistants, such as BlackBerry devices, to the advent of the iPhone, which paved the way for the smartphone industry.

It’s like a loop, where each advancement created new opportunities that, in turn, led to further progress. This loop has revolutionized our technology because we never left a loose end after an advancement.

What if we followed the same approach toward experimentation on digital properties? 

Experimentation can sometimes lift your conversion rate beyond expectations, and at other times it can drop even for a promising hypothesis. That’s part and parcel of the process.

But if you stick to a linear approach of closing a test after getting results and moving straight on to something new, you will rarely get breakthroughs. You’ll miss chances to improve conversion rates and overlook valuable insights for future success. In the best-case scenario, your growth rate will plateau.

That is why it’s time to move on from the linear approach and take a strategic approach with the Experimentation Loop to realize the true conversion potential of your websites and mobile apps.

But what is an Experimentation loop? Let’s delve into this fascinating concept.


What is an Experimentation Loop?

An Experimentation Loop starts with identifying a problem through behavior analysis and creating a solution in the form of a hypothesis. Then you run experiments to test the hypothesis. You either win or lose, and with a linear approach the experimentation cycle stops here. With the Experimentation Loop, however, you investigate the test results to uncover valuable insights. Those insights can generate new hypotheses, which lead to further experiments, creating a continuous cycle of learning and optimization.

Here’s a visual illustration of how the Experimentation Loop works:

Experimentation Loop

With Experimentation Loops, you are not just stopping at the results but diving deeper to understand the reasons behind the results, identifying anomalies, and discovering if particular audiences (or participants of the experiment) react differently from others. This becomes the foundation for your new hypothesis and experiments. 

It is especially critical in today’s ever-changing digital landscape, where user behavior is constantly evolving. By embracing the continuous learning and optimization provided by Experimentation Loops, you can stay ahead of the curve and keep improving your conversion rate.

Understanding the Experimentation Loop with an example

Here is a hypothetical example that explains how the Experimentation Loop functions:

Consider a landing page created with the intent to generate leads. The original version of the page has a description of the offering in the first fold, followed by the call-to-action (CTA) button that will lead to the contact form. 

Let’s say that the behavioral analysis of the landing page reveals many visitors dropping off on the first fold. This leads to the hypothesis of adding a CTA above the fold to improve engagement. This way, you create an A/B test to compare the original version and the variation with additional CTA above the fold. 

Here is the visual representation of the original and the variation of the landing page:

original and the variation of the landing page

Let’s assume that the test ends with the variation outperforming the original in terms of the conversion rate (i.e. number of clicks on the CTA). Here, the traditional approach concludes the test. But with the experimentation loop, we will try to analyze the results to come up with more hypotheses and open up multiple opportunities for improvement.

Suppose we settle on a hypothesis that calls for testing the CTA button itself. The second round then involves creating multiple variations of the CTA text and CTA color to optimize the button. To find the best variation, we can run a multivariate test comparing the original version against multiple variations with different combinations.

multivariate test to compare the original version and multiple variations with different combinations
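
Enumerating the combinations for such a multivariate test is straightforward. With three texts and three colors (illustrative values), you get nine variations to compare against the original:

```python
from itertools import product

# Illustrative CTA texts and colors for the second test round.
texts = ["Get started", "Try it free", "Contact us"]
colors = ["green", "orange", "blue"]

# Every text/color combination becomes one variation to test.
variations = list(product(texts, colors))
print(len(variations))  # 9
for text, color in variations:
    print(f"CTA '{text}' in {color}")
```

This combinatorial growth is also why multivariate tests need more traffic than a simple A/B test: the visitors are split across all nine cells.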

At the end of the test there can be an uplift in conversion that would not have been possible with the traditional approach. And if the test fails to lift the conversion rate, it still yields insights that help you learn more about your users.

Likewise, we can check the results to know if a particular audience segment engaged with the button more than others (and if they have common attributes) – in which case, it could lead to a hypothesis for a personalization campaign that includes personalizing the headings or subheading before the CTA as per behavior, demographic, or geographic attributes of the segment. 

Thus, an Experimentation Loop opens up the opportunity to improve, which is not possible with a siloed and linear approach. 

But how can you carry out the successful execution of the Experimentation Loop? 

The experimentation loop consists of three steps, and we will delve into each of these steps in the upcoming section.

Three steps in the Experimentation Loop

Following are the three key steps in the Experimentation Loop for improving conversions. 

Three steps in the Experimentation Loop

Step 1: Identify problems

The Experimentation Loop starts with identifying the existing problem in user experience. First, you do a quantitative analysis that involves going through key metrics like conversion rate, bounce rate, and page views to identify the low-performing pages on the user journey.

Once you zero in on the weak links, you can do a qualitative analysis to understand the pain points. You can check session recordings and heatmaps to see how each element that affects the conversion rate performs.

Once you identify the problem associated with the elements, it can help draft a hypothesis .

Step 2: Build hypothesis from insights

After identifying elements that are affecting the conversion negatively, you can start digging into the insight data to make sense of it. 

For example, suppose that after all the quantitative and qualitative analysis you identify the banner image position as the reason for a blog page’s high bounce rate. You can then build a hypothesis about the position of this image that offers a solution for the high bounce rate.

While framing the hypothesis, you should specify the key performance indicator (KPI) to be measured, the expected uplift, and the element to test.

Next, you move forward to run the experiment. 

Step 3: Run experiments

Based on the hypothesis, you choose a test type: an A/B test, multivariate test, split URL test, or multipage test. You run it until the test reaches statistical significance .
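
A rough way to estimate how long a test must run is to compute the required sample size per variant up front. This sketch hard-codes critical z-values for a two-sided 5% significance level and 80% power, a common but not universal choice; the baseline rate and lift are illustrative:

```python
import math

def sample_size_per_variant(baseline_rate: float, mde: float) -> int:
    """Per-variant sample size to detect an absolute lift of `mde` on
    `baseline_rate`, assuming alpha = 0.05 (two-sided) and 80% power."""
    z_alpha, z_beta = 1.96, 0.84  # fixed critical values for those choices
    variance = baseline_rate * (1 - baseline_rate)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Detect an absolute +2 percentage-point lift on a 10% baseline rate.
print(sample_size_per_variant(0.10, 0.02))  # around 3,500 users per variant
```

Dividing that number by your daily traffic per variant gives a minimum test duration; stopping earlier risks declaring a winner on noise.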

The test may result in a change in the conversion rate, and the insights about the user behavior toward the new experience can open doors to identify areas for the second cycle of the experimentation. 

Thus, the Experimentation Loop will constantly carve a path to improve conversion.   


Experimentation Loop and sales funnel

Running Experimentation Loops at every stage of the funnel can substantially improve the conversion rate and provide a strategic framework for testing hypotheses rather than a haphazard approach.

To enhance the conversion rate of the same element, you can run an Experimentation Loop, as in the earlier example that moved from an A/B test to a multivariate test.

Alternatively, you can analyze the insights from a test that improved a metric to see how it affected other metrics, which could lead to the second cycle of the test.

For instance, let’s take the awareness stage. The goal in this stage is to attract users and introduce them to products or services on a digital platform. 

Suppose you ran an A/B test on search ads to get more users to the website and monitored metrics like the number of visitors. 

Let’s say the test led to an improvement in traffic. Now, you can move on to analyze other metrics, such as % scroll depth and bounce rate for the landing page, and identify areas for improvement. To pinpoint the specific areas where users are leaving, you can use tools such as scroll maps, heat maps, and session recordings. The analysis can lead you to create hypotheses for the second leg of the experiment. It could involve improving user engagement by testing a visual element or a catchy headline.

Likewise, running the Experimentation Loop at other stages of the funnel can optimize the micro journey that the customer takes at each funnel stage. Moreover, the Experimentation Loop can lead to hypotheses creation from one funnel stage to another, resulting in a seamless experience that is hard to achieve with a siloed approach.

Storyboard on experimentation loop

How Frictionless Commerce uses Experimentation Loops for conversion copywriting

Frictionless Commerce, a digital agency, has relied on VWO for over ten years to conduct A/B testing on new buyer journeys. They have established a system where they build new experiments based on their previous learnings. Through iterative experimentation, they have identified nine psychological drivers that impact first-time buyer decisions.

Recently, they worked with a client in the shampoo-bar industry, creating landing page copy that incorporated all nine drivers. After running the test for five weeks, they saw a 5.97% increase in conversion rate, resulting in 2,778 new orders.


It just shows how Experimentation Loops can bring valuable insights and take your user experience to the next level. 

You can learn more about Frictionless Commerce’s experimentation process in their case study.

Embracing the continuous learning and optimization provided by Experimentation Loops is crucial for businesses looking to stay ahead of the curve and improve their conversion rates.

To truly drive success from your digital property, it’s time to break the linear mold and embrace the Experimentation Loop. By using a strategic framework for testing hypotheses, rather than a haphazard approach, businesses can continuously optimize and improve their digital offerings. 

You can create Experimentation Loops using VWO, the world’s leading experimentation platform. VWO offers free testing for up to 5000 monthly tracked users. Visit our plans and pricing page now for more information.


Tech Agilist

An Experiment Loop is a structured and iterative process that scientists and researchers use to design, conduct, and analyze experiments. It is a way of systematically testing hypotheses and gathering data to understand a particular problem or phenomenon. The experiment loop allows researchers to gather evidence and improve their understanding of the problem through repeated experimentation and data collection.

Product Owner Stances – The Experimenter

The Experimenter states a hypothesis and explains what we know and what we don't know, treating much of the work we do as experiments rather than 'set-in-stone' work packages. This stance understands the need to try out new things, to explore and innovate, and therefore to experiment.

Benefits of Experimenter PO

  • The innovation rate improves and costs get reduced significantly. 
  • Technical Debt is reduced.
  • Time to Market is also reduced.
  • High-quality products and services are more likely to meet customer and user needs. 
  • Happiness and morale of the Scrum Team increase.

Steps for Experiment Loop

The loop typically consists of several steps:

  • Define the problem and hypothesis: The first step in the experiment loop is to define the problem that you want to study and the hypothesis that you want to test. The hypothesis should be a specific statement about how you think the variables in your experiment are related.
  • Design the experiment: Once you have a clear problem and hypothesis, you can design the experiment to test it. This includes determining the independent and dependent variables, selecting a sample or population, and determining the methods and procedures that you will use to collect data.
  • Collect data: After the experiment is designed, it is time to collect data. This step involves carrying out the procedures you have planned and measuring the variables you have identified.
  • Analyze data: Once you have collected data, the next step is to analyze it. This includes summarizing the data, looking for patterns or relationships, and testing the hypothesis using statistical methods.
  • Draw conclusions: The final step in the experiment loop is to draw conclusions based on the data and analysis. This includes interpreting the results, determining whether the hypothesis was supported or not, and identifying any implications for future research.
  • Iterate: After drawing conclusions, you can decide to iterate the process again and again with different methods and techniques to improve your findings and get a better understanding of the problem you are studying.
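
The steps above can be sketched as a loop in code. This is a hypothetical skeleton, not a prescribed implementation; the `design`, `collect`, and `analyze` callables stand in for whatever your experiment actually does:

```python
import random

def experiment_loop(hypothesis, design, collect, analyze, max_iterations=3):
    """Generic experiment loop: design -> collect -> analyze -> conclude,
    iterating until the evidence is conclusive or the budget runs out."""
    history = []
    for _ in range(max_iterations):
        plan = design(hypothesis, history)   # step 2: design the experiment
        data = collect(plan)                 # step 3: collect data
        result = analyze(data)               # step 4: analyze data
        history.append(result)               # step 5: draw conclusions
        if result["conclusive"]:             # step 6: otherwise, iterate
            break
    return history

# Toy usage: each iteration doubles the sample size until the result is 'conclusive'.
loop = experiment_loop(
    hypothesis="new layout increases signups",
    design=lambda h, hist: {"n": 100 * 2 ** len(hist)},
    collect=lambda plan: [random.random() for _ in range(plan["n"])],
    analyze=lambda data: {"n": len(data), "conclusive": len(data) >= 400},
)
print(len(loop), loop[-1]["n"])  # three iterations, final sample of 400
```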

Benefits of Experiment Loop

The experiment loop provides several benefits for scientists and researchers:

  • Helps to test hypotheses: The experiment loop provides a structured process for designing and conducting experiments to test hypotheses. This allows researchers to gather data and evidence to support or refute their hypotheses.
  • Improves understanding: By iterating through the process of designing, conducting, analyzing, and interpreting experiments, researchers can improve their understanding of a particular problem or phenomenon. This can lead to new insights and discoveries that may not have been possible with a single study.
  • Increases confidence in results: By repeating experiments and collecting multiple sets of data, researchers can increase their confidence in the results. This is particularly important in fields such as medicine and drug development where the results of a single study may not be considered conclusive.
  • Helps to identify sources of error: Repeating experiments and collecting multiple sets of data can help researchers identify sources of error in their methods. This allows them to refine their methods and improve the accuracy and precision of their results.
  • Encourages replication: The experiment loop encourages replication, which is essential for scientific progress. By repeating experiments and collecting multiple sets of data, researchers can establish the reliability and generalizability of their findings. This allows other scientists to build on their work and further advance the field.
  • Increases efficiency: By iterating through the experiment loop, researchers can identify the most effective methods and techniques to study the problem at hand. This can save time and resources by avoiding unnecessary or ineffective methods.

Example of Experiment Loop

An example of an experiment loop is a study on the effects of a new medication on blood pressure. The process would go as follows:

  • Define the problem and hypothesis: The problem is to determine the effectiveness of the new medication on blood pressure. The hypothesis is that the new medication will lower blood pressure in patients with hypertension.
  • Design the experiment: The independent variable is the new medication and the dependent variable is blood pressure. A sample of patients with hypertension is selected and randomly assigned to either a treatment group (receiving the new medication) or a control group (receiving a placebo). Blood pressure is measured at the start and end of the study.
  • Collect data: The patients in the treatment group are given the new medication and their blood pressure is measured before and after the treatment period. The patients in the control group are given a placebo and their blood pressure is also measured before and after the treatment period.
  • Analyze data: The data is analyzed using statistical methods to compare the change in blood pressure between the treatment and control groups. This includes calculating the mean and standard deviation of blood pressure for each group, and testing for significant differences between the groups using a t-test.
  • Draw conclusions: The results show that the new medication significantly lowers blood pressure in patients with hypertension. The study concludes that the new medication is an effective treatment for hypertension.
  • Iterate: After drawing conclusions, the researchers can iterate by conducting more studies with larger sample sizes, studying different populations, and testing different dosages of the new medication.
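
The group comparison in the analysis step can be sketched with a Welch's t statistic (a t-test that does not assume equal variances). The blood pressure changes below are fabricated purely for illustration:

```python
import math
from statistics import mean, stdev

def welch_t(sample1, sample2):
    """Welch's t statistic for the difference in means of two independent samples."""
    n1, n2 = len(sample1), len(sample2)
    v1, v2 = stdev(sample1) ** 2, stdev(sample2) ** 2  # sample variances
    se = math.sqrt(v1 / n1 + v2 / n2)
    return (mean(sample1) - mean(sample2)) / se

# Hypothetical changes in systolic blood pressure (mmHg) over the treatment period
treatment = [-12, -9, -15, -8, -11, -14, -10, -13]  # new medication
control   = [ -2,  1,  -3,  0,  -1,  -2,   2,  -1]  # placebo
t = welch_t(treatment, control)
print(f"t = {t:.2f}")
```

A large-magnitude t (compared against the t-distribution for the appropriate degrees of freedom) indicates that the difference between groups is unlikely to be due to chance.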

It is important to note that the experiment loop is not linear, and researchers may revisit previous steps multiple times before reaching a conclusion. The loop is iterative, allowing researchers to refine their methods and improve their understanding of the problem they are studying over time. In summary, the experiment loop provides a structured and iterative process for scientists and researchers to design, conduct, and analyze experiments, leading to more accurate and reliable results, a better understanding of the problem, and ultimately scientific progress.


Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans . Revised on June 21, 2023.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any  extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead. This minimizes several types of research bias, particularly sampling bias , survivorship bias , and attrition bias as time passes.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

For each research question, the independent and dependent variables:

  • Phone use and sleep: the independent variable is minutes of phone use before sleep; the dependent variable is hours of sleep per night.
  • Temperature and soil respiration: the independent variable is air temperature just above the soil surface; the dependent variable is CO2 respired from soil.

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

For each research question, an extraneous variable and how to control it:

  • Phone use and sleep: natural variation in sleep patterns among individuals. To control for it, measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group.
  • Temperature and soil respiration: soil moisture also affects respiration, and moisture can decrease with increasing temperature. To control for it, monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

For each research question, the null hypothesis (H₀) and alternate hypothesis (Hₐ):

  • Phone use and sleep: H₀ is that phone use before sleep does not correlate with the amount of sleep a person gets; Hₐ is that increasing phone use before sleep leads to a decrease in sleep.
  • Temperature and soil respiration: H₀ is that air temperature does not correlate with soil respiration; Hₐ is that increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

In the soil-warming example, you could increase the temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

In the phone use example, you could treat phone use as:

  • a categorical variable : either binary (yes/no) or levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design .
  • A between-subjects design vs a within-subjects design .

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design , every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
For each research question:

  • Phone use and sleep: in a completely randomized design, subjects are all randomly assigned a level of phone use using a random number generator; in a randomized block design, subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
  • Temperature and soil respiration: in a completely randomized design, warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area; in a randomized block design, soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.

Sometimes randomization isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .
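
The two randomization schemes can be sketched in a few lines, using the phone-use example. The subject data below are invented for illustration:

```python
import random
from collections import defaultdict

random.seed(42)  # fixed seed so the assignment is reproducible
subjects = [{"id": i, "age_group": random.choice(["18-30", "31-50", "51+"])}
            for i in range(12)]
treatments = ["no phone", "low phone", "high phone"]

def completely_randomized(subjects, treatments):
    """Every subject gets a treatment at random (shuffle, then round-robin)."""
    pool = subjects[:]
    random.shuffle(pool)
    return {s["id"]: treatments[i % len(treatments)] for i, s in enumerate(pool)}

def randomized_block(subjects, treatments, key="age_group"):
    """Subjects are first grouped by a shared characteristic (the block),
    then treatments are randomly assigned within each block."""
    blocks = defaultdict(list)
    for s in subjects:
        blocks[s[key]].append(s)
    assignment = {}
    for block in blocks.values():
        random.shuffle(block)
        for i, s in enumerate(block):
            assignment[s["id"]] = treatments[i % len(treatments)]
    return assignment

crd = completely_randomized(subjects, treatments)
rbd = randomized_block(subjects, treatments)
```

The block design guarantees that each age group contains a roughly even mix of treatments, which the completely randomized design does not.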

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.
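
Counterbalancing can be sketched by cycling subjects through the possible treatment orders. This is a toy example, not a full Latin-square design:

```python
import random
from itertools import permutations

random.seed(0)
treatments = ["none", "low", "high"]
subjects = [f"S{i}" for i in range(6)]

# All possible orders of the three treatments (3! = 6), shuffled so that
# which subject gets which order is itself random.
orders = list(permutations(treatments))
random.shuffle(orders)

# Assign one order per subject; order effects average out across subjects.
schedule = {s: orders[i % len(orders)] for i, s in enumerate(subjects)}
for subject, order in schedule.items():
    print(subject, "->", " then ".join(order))
```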

For each research question:

  • Phone use and sleep: in a between-subjects design, subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment; in a within-subjects design, subjects are assigned consecutively to zero, low, and high levels of phone use, and the order in which they follow these treatments is randomized.
  • Temperature and soil respiration: in a between-subjects design, warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment; in a within-subjects design, every plot receives each warming treatment (1, 3, 5, 8, and 10 °C above ambient temperature) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized.


Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

To measure hours of sleep, for example, you could:

  • ask participants to record what time they go to sleep and get up each day.
  • ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Cite this Scribbr article


Bevans, R. (2023, June 21). Guide to Experimental Design | Overview, 5 steps & Examples. Scribbr. Retrieved September 16, 2024, from https://www.scribbr.com/methodology/experimental-design/


  • Review Article
  • Published: 28 July 2021

Gaussian processes for autonomous data acquisition at large-scale synchrotron and neutron facilities

  • Marcus M. Noack   ORCID: orcid.org/0000-0002-7288-4787 1 ,
  • Petrus H. Zwart 1 , 2 , 3 ,
  • Daniela M. Ushizima 1 , 4 ,
  • Masafumi Fukuto 5 ,
  • Kevin G. Yager 6 ,
  • Katherine C. Elbert 7 ,
  • Christopher B. Murray 7 ,
  • Aaron Stein 6 ,
  • Gregory S. Doerk   ORCID: orcid.org/0000-0002-2933-2047 6 ,
  • Esther H. R. Tsai 6 ,
  • Ruipeng Li 5 ,
  • Guillaume Freychet   ORCID: orcid.org/0000-0001-8406-798X 5 ,
  • Mikhail Zhernenkov   ORCID: orcid.org/0000-0003-3604-0672 5 ,
  • Hoi-Ying N. Holman 2 , 3 ,
  • Steven Lee 2 , 3 , 8 ,
  • Liang Chen 2 , 3 ,
  • Eli Rotenberg   ORCID: orcid.org/0000-0002-3979-8844 9 ,
  • Tobias Weber   ORCID: orcid.org/0000-0002-7230-1932 10 ,
  • Yannick Le Goc 10 ,
  • Martin Boehm 10 ,
  • Paul Steffens   ORCID: orcid.org/0000-0002-7034-4031 10 ,
  • Paolo Mutti 10 &
  • James A. Sethian 1 , 11  

Nature Reviews Physics volume  3 ,  pages 685–697 ( 2021 ) Cite this article


  • Applied mathematics
  • Computational methods
  • Design, synthesis and processing

The execution and analysis of complex experiments are challenged by the vast dimensionality of the underlying parameter spaces. Although an increase in data-acquisition rates should allow broader querying of the parameter space, the complexity of experiments and the subtle dependence of the model function on input parameters remains daunting owing to the sheer number of variables. New strategies for autonomous data acquisition are being developed, with one promising direction being the use of Gaussian process regression (GPR). GPR is a quick, non-parametric and robust approximation and uncertainty quantification method that can be applied directly to autonomous data acquisition. We review GPR-driven autonomous experimentation and illustrate its functionality using real-world examples from large experimental facilities in the USA and France. We introduce the basics of a GPR-driven autonomous loop with a focus on Gaussian processes, and then shift the focus to the infrastructure that needs to be built around GPR to create a closed loop. Finally, the case studies we discuss show that Gaussian-process-based autonomous data acquisition is a widely applicable method that can facilitate the optimal use of instruments and facilities by enabling the efficient acquisition of high-value datasets.

Gaussian process regression (GPR) is a robust statistical, non-parametric technique for uncertainty quantification and function approximation.

GPR can directly be applied to autonomous and optimal data acquisition.

GPR provides straightforward ways to inject domain knowledge and can easily be customized for feature finding.

The gpCAM software tool provides a simple way for practitioners to use GPR for autonomous experimentation.
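
To make the idea concrete, here is a minimal, from-scratch sketch of GP regression in one dimension: the posterior mean and variance under an RBF kernel. It illustrates the method only; it is not gpCAM's implementation, and the data points are invented:

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(X, y, x_star, noise=1e-6):
    """GP posterior mean and variance at x_star, given training data (X, y)."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    alpha = solve(K, y)                         # K^{-1} y
    k_star = [rbf(x, x_star) for x in X]
    mean = sum(k * a for k, a in zip(k_star, alpha))
    w = solve(K, k_star)                        # K^{-1} k_star
    var = rbf(x_star, x_star) - sum(k * wi for k, wi in zip(k_star, w))
    return mean, var

X, y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
m, v = gp_predict(X, y, 1.0)
print(f"mean = {m:.3f}, variance = {v:.6f}")
```

At a training point the posterior mean nearly interpolates the data and the variance collapses toward the noise level; between points the variance grows, and an autonomous loop would steer the next measurement toward such high-uncertainty regions.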


Code availability.

The gpCAM code for autonomous steering associated with this Review is available at https://doi.org/10.11578/dc.20210217.5 and https://bitbucket.org/MarcusMichaelNoack/gpcam and via pip install gpCAM. Any updates will be published in the repository and on the Python package index (PyPi). The Takin software is available at https://doi.org/10.1016/j.softx.2021.100667 .

Peirce, C. S. The fixation of belief. Pop. Sci. Mon. 12 , 1−15 (1877).

Google Scholar  

Peirce, C. S. & Menand, L. How to make our ideas clear. Pop. Sci. Mon. 12 , 286–302 (1878).

McKay, M. D., Beckman, R. J. & Conover, W. J. Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21 , 239–245 (1979).

MathSciNet   MATH   Google Scholar  

Fisher, R. A. The arrangement of field experiments. In Breakthroughs in Statistics 82−91 (Springer, 1992).

Settles, B. Active learning literature survey. Technical Reports (University of Wisconsin-Madison, Department of Computer Sciences, 2009).

Krishnakumar, A. Active learning literature survey. Technical Reports 42 (University of California Santa Cruz, 2007).

van de Schoot, R. et al. Bayesian statistics and modelling. Nat. Rev. Methods Primers 1 , 1–26 (2021).

Article   Google Scholar  

Noack, M. M. et al. A Kriging-based approach to autonomous experimentation with applications to X-ray scattering. Sci. Rep. 9 , 11809 (2019).

Article   ADS   Google Scholar  

Noack, M. M., Doerk, G. S., Li, R., Fukuto, M. & Yager, K. G. Advances in Kriging-based autonomous X-ray scattering experiments. Sci. Rep. 10 , 1325 (2020).

Noack, M. & Zwart, P. Computational strategies to increase efficiency of Gaussian-process-driven autonomous experiments. In 2019 IEEE/ACM 1st Annual Workshop on Large-scale Experiment-in-the-Loop Computing (XLOOP) 1−7 (IEEE, 2019).

Noack, M. M. et al. Autonomous materials discovery driven by Gaussian process regression with inhomogeneous measurement noise and anisotropic kernels. Sci. Rep. 10 , 17663 (2020).

Wiegart, L. et al. Instrumentation for in situ/operando X-ray scattering studies of polymer additive manufacturing processes. Synchrotron Radiat. News 32 , 20–27 (2019).

Frazier, P. I. Bayesian optimization. Recent Adv. Optim. Model. Contemp. Probl. https://doi.org/10.1287/educ.2018.0188 (2018).

Noack, M. gpcam version 6. bitbucket https://bitbucket.org/MarcusMichaelNoack/gpcam (2021).

Noack, M. M. & Funke, S. W. Hybrid genetic deflated Newton method for global optimisation. J. Comput. Appl. Math. 325 , 97–112 (2017).

Hobson, A. & Cheng, B.-K. A comparison of the Shannon and Kullback information measures. J. Stat. Phys. 7 , 301–310 (1973).

Noack, M. M. & Sethian, J. A. Advanced stationary and non-stationary Kernel designs for domain-aware Gaussian processes. Preprint at https://arxiv.org/abs/2102.03432 (2021).

Fratzl, P. Small-angle scattering in materials science — a short review of applications in alloys, ceramics and composite materials. J. Appl. Crystallogr. 36 , 397–404 (2003).

Dubcek, P. Nanostructures as seen by the SAXS. Vacuum 80 , 92–97 (2005).

Yager, K. G., Zhang, Y., Lu, F. & Gang, O. Periodic lattices of arbitrary nano-objects: modeling and applications for self-assembled systems. J. Appl. Crystallogr. 47 , 118–129 (2014).

Liu, J. et al. The impact of alterations in lignin deposition on cellulose organization of the plant cell wall. Biotechnol. Biofuels 9 , 126 (2016).

Paris, O. From diffraction to imaging: new avenues in studying hierarchical biological tissues with X-ray microbeams (review). Biointerphases 3 , FB16 (2008).

Aghamohammadzadeh, H., Newton, R. H. & Meek, K. M. X-ray scattering used to map the preferred collagen orientation in the human cornea and limbus. Structure 12 , 249–256 (2004).

Liu, J. et al. Amyloid structure exhibits polymorphism on multiple length scales in human brain tissue. Sci. Rep. 6 , 33079 (2016).

Weaver, J. C. et al. The stomatopod dactyl club: a formidable damage-tolerant biological hammer. Science 336 , 1275–1280 (2012).

Wang, Q. et al. Phase transformations and structural developments in the radular teeth of Cryptochiton stelleri . Adv. Funct. Mater. 23 , 2908–2917 (2013).

Meredith, J. C., Smith, A. P., Karim, A. & Amis, E. J. Combinatorial materials science for polymer thin-film dewetting. Macromolecules 33 , 9747–9756 (2000).

Stafford, C. M., Roskov, K. E., Epps III, T. H. & Fasolka, M. J. Generating thickness gradients of thin polymer films via flow coating. Rev. Sci. Instrum. 77 , 023908 (2006).

Smith, A. P., Douglas, J. F., Meredith, J. C., Amis, E. J. & Karim, A. High-throughput characterization of pattern formation in symmetric diblock copolymer films. J. Polym. Sci. B 39 , 2141–2158 (2001).

Davis, R. L., Jayaraman, S., Chaikin, P. M. & Register, R. A. Creating controlled thickness gradients in polymer thin films via flowcoating. Langmuir 30 , 5637–5644 (2014).

Meredith, J. C., Karim, A. & Amis, E. J. High-throughput measurement of polymer blend phase behavior. Macromolecules 33 , 5760–5762 (2000).

Roberson, S. V., Fahey, A. J., Sehgal, A. & Karim, A. Multifunctional ToF-SIMS: combinatorial mapping of gradient energy substrates. Appl. Surf. Sci. 200 , 150–164 (2002).

Berry, B. C. et al. Versatile platform for creating gradient combinatorial libraries via modulated light exposure. Rev. Sci. Instrum. 78 , 072202 (2007).

Smith, A. P., Sehgal, A., Douglas, J. F., Karim, A. & Amis, E. J. Combinatorial mapping of surface energy effects on diblock copolymer thin film ordering. Macromol. Rapid Commun. 24 , 131–135 (2003).

Toth, K., Osuji, C. O., Yager, K. G. & Doerk, G. S. Electrospray deposition tool: creating compositionally gradient libraries of nanomaterials. Rev. Sci. Instrum. 91 , 013701 (2020).

Holman, H.-Y. N., Bechtel, H. A., Hao, Z. & Martin, M. C. Synchrotron IR spectromicroscopy: chemistry of living cells. Anal. Chem. 82 , 8757–8765 (2010).

Holman, H.-Y. N. et al. Real-time characterization of biogeochemical reduction of Cr (VI) on basalt surfaces by SR-FTIR imaging. Geomicrobiol. J. 16 , 307–324 (1999).

Holman, H.-Y. N. et al. Catalysis of PAH biodegradation by humic acid shown in synchrotron infrared studies. Environ. Sci. Technol. 36 , 1276–1280 (2002).

Mason, O. U. et al. Metagenome, metatranscriptome and single-cell sequencing reveal microbial response to Deepwater Horizon oil spill. ISME J. 6 , 1715–1727 (2012).

Holman, H.-Y. N. et al. Real-time molecular monitoring of chemical environment in obligate anaerobes during oxygen adaptive response. Proc. Natl Acad. Sci. USA 106 , 12599–12604 (2009).

Hazen, T. C. et al. Deep-sea oil plume enriches indigenous oil-degrading bacteria. Science 330 , 204–208 (2010).

Bælum, J. et al. Deep-sea bacteria enriched by oil and dispersant from the Deepwater Horizon spill. Environ. Microbiol. 14 , 2405–2416 (2012).

Benning, L. G., Phoenix, V., Yee, N. & Konhauser, K. The dynamics of cyanobacterial silicification: an infrared micro-spectroscopic investigation. Geochim. Cosmochim. Acta 68 , 743–757 (2004).

Benning, L. G., Phoenix, V., Yee, N. & Tobin, M. Molecular characterization of cyanobacterial silicification using synchrotron infrared micro-spectroscopy. Geochim. Cosmochim. Acta 68 , 729–741 (2004).

Yee, N., Benning, L. G., Phoenix, V. R. & Ferris, F. G. Characterization of metal-cyanobacteria sorption reactions: a combined macroscopic and infrared spectroscopic investigation. Environ. Sci. Technol. 38 , 775–782 (2004).

Probst, A. J. et al. Tackling the minority: sulfate-reducing bacteria in an archaea-dominated subsurface biofilm. ISME J. 7 , 635–651 (2013).

Valdespino-Castillo, P. M. et al. Exploring biogeochemistry and microbial diversity of extant microbialites in Mexico and Cuba. Front. Microbiol. 9 , 510 (2018).

Valdespino-Castillo, P. M. et al. Interplay of microbial communities with mineral environments in coralline algae. Sci. Total Environ. 757 , 143877 (2021).

Holman, E. et al. Autonomous adaptive data acquisition for scanning hyperspectral imaging. Commun. Biol. 3 , 684 (2020).

Davies, T. & Fearn, T. Back to basics: the principles of principal component analysis. Spectrosc. Eur. 16 , 20 (2004).

Melton, C. N. et al. K -means-driven Gaussian process data collection for angle-resolved photoemission spectroscopy. Mach. Learn. Sci. Technol. 1 , 045015 (2020).

Cao, Y. et al. Unconventional superconductivity in magic-angle graphene superlattices. Nature 556 , 43–50 (2018).

Squires, G. L. Introduction to the Theory of Thermal Neutron Scattering (Cambridge Univ. Press, 2012).

Weber, T. Takin 2 (software). GitLab https://code.ill.fr/scientific-software/takin (2021).

Weber, T. Update 2.0 to “Takin: an open-source software for experiment planning, visualisation, and data analysis”, (PII: S2352711016300152). SoftwareX 14 , 100667 (2021).

Bostwick, A. et al. Band structure and many body effects in graphene. Eur. Phys. J. Spec. Top. 148 , 5–13 (2007).

Boehm, M. et al. ThALES – Three Axis Low Energy Spectroscopy for highly correlated electron systems. Neutron News 26 , 18–21 (2015).

Acknowledgements

The work was partly funded through the Center for Advanced Mathematics for Energy Research Applications (CAMERA), which is jointly funded by the Advanced Scientific Computing Research (ASCR) and Basic Energy Sciences (BES) within the Department of Energy’s Office of Science, as well as by the Laboratory Directed Research and Development Program of Lawrence Berkeley National Laboratory, under US Department of Energy contract no. DE-AC02-05CH11231. This research used resources of the Center for Functional Nanomaterials and the National Synchrotron Light Source II, which are US DOE Office of Science facilities, at Brookhaven National Laboratory under contract no. DE-SC0012704. This research also used resources of the Berkeley Synchrotron Infrared Structural Biology (BSISB) Imaging Program, funded by the US Department of Energy, Office of Biological and Environmental Research, under contract no. DE-AC02-05CH11231. The Advanced Light Source is supported by the Director, Office of Science, and the Office of Basic Energy Sciences. Both the ALS and BSISB were supported through contract no. DE-AC02-05CH11231. K.C.E. and C.B.M. acknowledge support from the Office of Naval Research Multidisciplinary University Research Initiative Award ONR N00014-18-1-2497. K.C.E. acknowledges support from the NSF Graduate Research Fellowship Program under grant no. DGE-1321851. This work is based on experiments performed at the Institut Laue-Langevin (ILL) in Grenoble, France. The collected datasets have the DOIs 10.5291/ILL-DATA.TEST-3123 and in part 10.5291/ILL-DATA.4-01-1643. The authors thank E. Villard, P. Chevalier and J. Locatelli for technical support at the ThALES spectrometer. C. N. Melton (author of ref. 51 ) performed the K -means cluster-based GP collection simulations.

Author information

Authors and affiliations

The Center for Advanced Mathematics for Energy Research Applications (CAMERA), Lawrence Berkeley National Laboratory, Berkeley, CA, USA

Marcus M. Noack, Petrus H. Zwart, Daniela M. Ushizima & James A. Sethian

Molecular Biophysics and Integrated Bioimaging Division (MBIB), Lawrence Berkeley National Laboratory, Berkeley, CA, USA

Petrus H. Zwart, Hoi-Ying N. Holman, Steven Lee & Liang Chen

Berkeley Synchrotron Infrared Structural Biology Imaging Resource (BSISB), Lawrence Berkeley National Laboratory, Berkeley, CA, USA

Bakar Institute, University of California, San Francisco, San Francisco, CA, USA

Daniela M. Ushizima

National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory, Upton, NY, USA

Masafumi Fukuto, Ruipeng Li, Guillaume Freychet & Mikhail Zhernenkov

Center for Functional Nanomaterials (CFN), Brookhaven National Laboratory, Upton, NY, USA

Kevin G. Yager, Aaron Stein, Gregory S. Doerk & Esther H. R. Tsai

Department of Chemistry, University of Pennsylvania, Philadelphia, PA, USA

Katherine C. Elbert & Christopher B. Murray

Department of Physics, University of California, Berkeley, CA, USA

Advanced Light Source (ALS), Lawrence Berkeley National Laboratory, Berkeley, CA, USA

Eli Rotenberg

Institut Laue-Langevin (ILL), Grenoble, France

Tobias Weber, Yannick Le Goc, Martin Boehm, Paul Steffens & Paolo Mutti

Department of Mathematics, University of California, Berkeley, CA, USA

James A. Sethian

Contributions

M.M.N. wrote the initial drafts of the introduction and the technical sections, devised the algorithm used, formulated the required mathematics, and implemented the computer codes (gpCAM). P.H.Z. designed, coordinated and collaborated on the development of basic computational strategies in gpCAM and on its use in SR-FTIR microscopy and ARPES experiments and took part in writing and editing this manuscript. D.M.U. designed, configured and implemented codes associated with convnets for reverse image search and wrote the related section. M.F. and K.G.Y. planned, supervised and coordinated experiments at Brookhaven National Laboratory’s National Synchrotron Light Source II, and wrote the related section. M.F., K.G.Y., E.H.R.T., R.L., G.F. and M.Z. performed X-ray scattering experiments at National Synchrotron Light Source II, including beamline operation and data analytics. K.C.E. and C.B.M. prepared nanoplatelet materials. A.S. and G.S.D. prepared chemical templates and self-assembled films. E.R. planned and led the ARPES measurements at the Advanced Light Source, and wrote the related section. H.-Y.N.H. led the SR-FTIR measurements, coordinated the simulations and wrote the initial draft of the related section, S.L. designed and performed the PCA-based GP collection simulations and wrote the related section. L.C. designed the simulations and wrote the related section. Y.L.G. and T.W. customized gpCAM for use at the ThALES spectrometer. T.W. developed and performed preparatory simulations with gpCAM using theoretical dynamical structure factor models for neutron scattering. T.W. planned and T.W., M.B., P.S. and P.M. performed the first autonomous commissioning experiment at ThALES measuring the magnons in the chiral magnet MnSi. M.B. proposed and M.B., T.W., P.S. and P.M. performed the second autonomous commissioning experiment at ThALES, the results of which are shown in Fig. 4. The sample for the first experiment (MnSi) was provided by A. 
Bauer, the sample for the second autonomous commissioning experiment was provided by M.B. T.W. analysed the data of the first experiment (MnSi, not shown), M.B. analysed the data of the second experiment (Fig. 4). M.B. and T.W. wrote the text of the corresponding section to equal parts. J.A.S. supervised the development of the mathematics and the implementation of the code, and revised and improved the manuscript. All authors commented on the manuscript and revised it repeatedly.

Corresponding author

Correspondence to Marcus M. Noack .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information

Nature Reviews Physics thanks the anonymous reviewers for their contribution to the peer review of this work.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Glossary

  • Uncertainty quantification: the quantitative characterization of uncertainties in computational and real-world applications.
  • A common test function in the optimization community.
  • Principal component analysis: a dimensionality reduction technique that finds an orthonormal basis; typically retaining only the first few basis vectors preserves the majority of the variance of the dataset while substantially reducing data dimensionality.
  • Non-negative matrix factorization: a computational linear algebra technique to factorize a matrix into two matrices without negative elements.
  • Bump function: a function that is both smooth and compactly supported.
  • Surrogate model: an approximate model used when the actual model is difficult or costly to evaluate.
  • Gaussian process: a technique for function approximation and automated data acquisition.
  • Three-axis spectrometer: a spectrometer that selects the wavelengths of neutrons before and after they hit the sample, directly probing the energy and momentum response of various materials.
  • Delaunay triangulation: a triangulation of a point set such that no point is inside the circumcircle of any of the triangles connecting the points.


About this article

Cite this article

Noack, M.M., Zwart, P.H., Ushizima, D.M. et al. Gaussian processes for autonomous data acquisition at large-scale synchrotron and neutron facilities. Nat. Rev. Phys. 3, 685–697 (2021). https://doi.org/10.1038/s42254-021-00345-y

Accepted: 10 June 2021

Published: 28 July 2021

Issue Date: October 2021

DOI: https://doi.org/10.1038/s42254-021-00345-y

A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans. Revised on 5 December 2022.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experimental design

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

| Research question | Independent variable | Dependent variable |
| --- | --- | --- |
| Phone use and sleep | Minutes of phone use before sleep | Hours of sleep per night |
| Temperature and soil respiration | Air temperature just above the soil surface | CO₂ respired from soil |

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

| Research question | Extraneous variable | How to control it |
| --- | --- | --- |
| Phone use and sleep | Natural variation in sleep patterns among individuals | Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group |
| Temperature and soil respiration | Soil moisture also affects respiration, and moisture can decrease with increasing temperature | Monitor soil moisture and add water to make sure that it is consistent across all treatment plots |
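The first control strategy in the table above (analysing each subject's own difference rather than comparing raw group averages) can be sketched with hypothetical numbers:

```python
from statistics import mean

# Hypothetical sleep data (hours per night) for four subjects, each measured
# once without and once with pre-sleep phone use.
sleep_without_phone = [8.1, 7.5, 6.9, 8.4]
sleep_with_phone = [7.2, 7.0, 6.1, 7.9]

# Each subject's own difference removes baseline variation between
# individuals, which would otherwise add noise to a group comparison.
differences = [no_phone - phone
               for no_phone, phone in zip(sleep_without_phone, sleep_with_phone)]
average_sleep_lost = mean(differences)
```

Here the per-subject differences isolate the effect of phone use from the fact that some people simply sleep more than others.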

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

[Diagram: the relationship between variables in the sleep experiment]

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

| Research question | Null hypothesis (H₀) | Alternate hypothesis (Hₐ) |
| --- | --- | --- |
| Phone use and sleep | Phone use before sleep does not correlate with the amount of sleep a person gets | Increasing phone use before sleep leads to a decrease in sleep |
| Temperature and soil respiration | Air temperature does not correlate with soil respiration | Increased air temperature leads to increased soil respiration |

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. For example, in the soil experiment you could warm the plots:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. For example, you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
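As a rough illustration of how study size drives statistical power, a small simulation (hypothetical numbers; it assumes normally distributed data with known spread and a two-sided z-test at the 5% level) can estimate how often a given design would detect a real effect:

```python
import math
import random

def estimated_power(n, effect, sd=1.0, sims=2000, seed=42):
    """Fraction of simulated two-group experiments (n subjects per group) in
    which a true mean difference of `effect` is detected at the 5% level."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% critical value of the normal distribution
    detections = 0
    for _ in range(sims):
        control = [rng.gauss(0.0, sd) for _ in range(n)]
        treated = [rng.gauss(effect, sd) for _ in range(n)]
        diff = sum(treated) / n - sum(control) / n
        standard_error = sd * math.sqrt(2.0 / n)
        if abs(diff) / standard_error > z_crit:
            detections += 1
    return detections / sims

# More subjects -> more power to detect the same effect size.
small = estimated_power(n=10, effect=0.5)
large = estimated_power(n=50, effect=0.5)
```

With these made-up settings, the larger study detects the same effect far more often, which is exactly the extra confidence the text refers to.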

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells you what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design.
  • A between-subjects design vs a within-subjects design.

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design, every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

| Research question | Completely randomised design | Randomised block design |
| --- | --- | --- |
| Phone use and sleep | Subjects are all randomly assigned a level of phone use using a random number generator. | Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups. |
| Temperature and soil respiration | Warming treatments are assigned to soil plots at random, using a number generator to generate map coordinates within the study area. | Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups. |
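The two randomisation schemes can be sketched as follows (subject and block names are made up for illustration):

```python
import random

TREATMENTS = ["no phone use", "low phone use", "high phone use"]

def completely_randomised(subjects, treatments, rng):
    """Shuffle all subjects, then deal them out to treatments round-robin
    so that group sizes stay balanced."""
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    return {s: treatments[i % len(treatments)] for i, s in enumerate(shuffled)}

def randomised_block(blocks, treatments, rng):
    """Randomise separately within each block (e.g. each age group)."""
    assignment = {}
    for block_subjects in blocks.values():
        assignment.update(completely_randomised(block_subjects, treatments, rng))
    return assignment

rng = random.Random(0)
crd = completely_randomised([f"s{i}" for i in range(6)], TREATMENTS, rng)
rbd = randomised_block({"18-30": ["a1", "a2", "a3"],
                        "31-50": ["b1", "b2", "b3"]}, TREATMENTS, rng)
```

In the blocked version each age group contains all three treatment levels, so age can no longer be confounded with treatment.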

Sometimes randomisation isn’t practical or ethical, so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

| Research question | Between-subjects (independent measures) design | Within-subjects (repeated measures) design |
| --- | --- | --- |
| Phone use and sleep | Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. | Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised. |
| Temperature and soil respiration | Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. | Every plot receives each warming treatment (1, 3, 5, 8, and 10 °C above ambient temperature) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised. |
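Counterbalancing, as described above, can be sketched by dealing every possible treatment order out across subjects (a minimal sketch; treatment names are illustrative):

```python
from itertools import permutations

def counterbalanced_orders(treatments, n_subjects):
    """Cycle through all orderings of the treatments so that order effects
    are spread evenly across subjects."""
    orders = list(permutations(treatments))
    return [orders[i % len(orders)] for i in range(n_subjects)]

schedule = counterbalanced_orders(["none", "low", "high"], n_subjects=6)
```

With three treatments there are six possible orders, so six subjects cover each order exactly once and each treatment comes first for exactly two subjects; any order effect then averages out across the group.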

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations. For example, you could operationalise hours of sleep by asking participants to:

  • record what time they go to sleep and get up each day.
  • wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

An experimental design is a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

Cite this Scribbr article

Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 16 September 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/


Frequently Asked Questions about Evidence-Based Management

Q: What is the relationship between EBM and OKRs?

A: OKRs are a powerful technique for setting goals and measuring progress against them. Some organizations use OKRs as a practice that helps them with EBM. What EBM adds to the OKR technique is an agile and iterative way for the organization to improve its performance in pursuit of OKRs. Specifically, using EBM’s “experiment loop” provides a way for teams to empirically test improvement ideas, and even inspect and adapt objectives while they are working on improvements. In addition, EBM’s Key Value Areas provide focus for measurements and improvements by encouraging teams to look at specific kinds of measures as they form their ideas for improvement.

For an additional perspective on EBM and OKRs, see OKRs: The Good, the Bad, and the Ugly.

Q: How can an organization get started applying EBM?

A: The obvious place to start is to ask yourself “What is my organization trying to achieve?”, i.e. what is its Strategic Goal.  This is often a hard question to answer because many organizations have rather vague and qualitative goals and have a hard time articulating how they would know if they have achieved the goal.

Sometimes organizations find it easier to talk about the goals for their agile initiative or agile transformation. A frequently expressed goal is that the organization wants to “deliver to customers faster”, or to “improve efficiency.” If so, they should ask themselves “why do we want to do these things?” and keep asking why until they can express the fundamental reason why they want to be more agile. We find that this reason is often related to seizing some customer-related market opportunity. In EBM terms, they seek to realize some currently Unrealized Value in the market. The concept of Unrealized Value is explored more here.

As part of this discussion, they should ask themselves, “how will we know when we have achieved this goal? What measure will tell us that we have achieved the goal?”  This will help them achieve focus in pursuit of their goal.

Since the achievement of Strategic Goals often takes years, organizations need nearer-term targets to help them move toward their long-term goals. Discussions about immediate next steps and intermediate steps give rise to Immediate Tactical Goals and Intermediate Goals, each with their associated measures that will tell the organization when they have achieved their goal.

Q: How do EBM’s Key Value Areas help organizations achieve their goals?

A: EBM’s Key Value Areas provide organizations with conceptual lenses that can help them focus on specific areas in which they may need to improve. First, an organization should understand the Current Value that it delivers to customers, and whether it has any opportunities to improve the value that its customers experience (Unrealized Value).

If the organization can deliver and measure small increments of value, they may focus on improving Current Value and reducing Unrealized Value. Many organizations are looking to improve their agility because they cannot deliver and measure small increments of value. As a result, they may need to focus first on improving their Time to Market. If they can deliver quickly but each increment of value is very small because the organization is stretched too thin, the organization may need to also improve its Ability to Innovate.

Q: Can EBM be used to manage portfolios of products and services?

A: Yes. There is a white paper discussing this topic; see Investing for Business Agility.

Q: Why are leading and lagging indicator concepts not part of EBM?

A: Leading indicators imply at least correlation, if not causation, with a lagging indicator.  They are useful concepts in simple domains where cause and effect are relatively obvious.  In complicated domains, they may still be useful even when causality is not certain but correlation is relatively reliable.

In complex domains, those in which using empiricism to seek toward a goal is the only practical way to proceed, cause and effect, and even correlation, may not be evident until after the event, if they are discernible at all. Consequently, we don’t think focusing on identifying causal relationships or leading indicators is fruitful.

Every assertion about a potential leading-lagging relationship is, in complex domains, simply a hypothesis. EBM uses an experimentation loop to form, test, inspect, and adapt hypotheses, so if a team wishes to explore leading-lagging relationships, they can use this mechanism to do so. In a sense, every experiment is testing the hypothesis that “if we do this thing, that measure will improve”, and teams can use this to gradually progress toward their goals.

What we find, however, is that each experiment loop is testing a different hypothesis. As teams try things, inspect the results, and adapt based on those results, they are constantly testing different ideas. They are not seeking to establish a simple leading-lagging relationship, but rather are gathering clues about what their next improvements might be. Their world is not simple enough for leading and lagging relationships to be very useful for very long.
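The loop of forming a hypothesis, running an experiment, inspecting the result, and adapting can be sketched in a few lines of code. This is an illustrative sketch only, not part of EBM itself; the function names, the numeric "measure", and the idea of modeling an idea as a simple change are all hypothetical examples.

```python
# Illustrative sketch of an experiment loop: each cycle tests the
# hypothesis "if we do this thing, that measure will improve."
# All names and measures here are hypothetical examples.

def experiment_loop(measure, run_experiment, ideas, max_cycles=6):
    """Try ideas in sequence, recording whether each one improved
    the measure over the current baseline. Returns the trial history
    as (idea, baseline_before, result, improved) tuples."""
    history = []
    baseline = measure()
    for cycle, idea in enumerate(ideas):
        if cycle >= max_cycles:
            break
        run_experiment(idea)          # run the experiment for this hypothesis
        result = measure()            # inspect the outcome
        improved = result > baseline
        history.append((idea, baseline, result, improved))
        # Adapt: a successful change becomes the new baseline; a failed
        # one still yields clues for the next hypothesis.
        if improved:
            baseline = result
    return history
```

Note that the history keeps failed experiments too: as the FAQ observes, each cycle typically tests a different hypothesis, and the "losses" are where many of the clues for the next improvement come from.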

Q: How does the concept of Unrealized Value help organizations to measure their success?

A: There is a blog post that discusses this topic; see Measure Business Opportunities with Unrealized Value.

Q: Why these four Key Value Areas and not others?

A: Current Value is important for understanding how a product or service delivers valuable outcomes to the customers or users of a product. But because it focuses only on what the customer/user experiences today, we added an additional Key Value Area, Unrealized Value, to reflect that there may be outcomes that the customer/user would like to experience but does not today.

The other two Key Value Areas look at the organization’s capability to deliver value from two perspectives: Time-to-Market, or speed, and Ability to Innovate, which focuses on the effectiveness of the organization’s efforts at delivering value.

We have found that these four perspectives give organizations a holistic view of the areas in which they might need to improve. If we find that other perspectives are also useful, we may add them in the future.
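As a concrete illustration, measurements can be grouped under the four Key Value Areas. The area names below come from EBM; the individual measures are common examples chosen for illustration, not an official or exhaustive list:

```python
# Example grouping of measures by EBM Key Value Area.
# The individual measures listed are illustrative examples only.
KEY_VALUE_AREAS = {
    "Current Value": ["customer satisfaction", "employee satisfaction", "revenue per employee"],
    "Unrealized Value": ["market share gap", "desired vs. actual customer outcomes"],
    "Time to Market": ["lead time", "release frequency", "time to learn"],
    "Ability to Innovate": ["innovation rate", "defect trends", "technical debt"],
}

def measures_for(area):
    """Look up the example measures tracked under one Key Value Area."""
    return KEY_VALUE_AREAS[area]
```

Grouping measures this way makes it easy to notice when an organization is watching only one lens (say, Current Value) while ignoring the capability areas that determine how quickly it can improve.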

Q: On what intervals should organizations inspect and adapt their goals?

A: We use the experiment loop as the means by which organizations form hypotheses, run experiments, inspect the results, and adapt their thinking about what to do next. We have found that this loop is most effective when it is no longer than a month, and ideally shorter. When a team or organization inspects the results of an experiment, it should also inspect its goals to determine whether they are still valid. Market conditions, business opportunities, and solution alternatives change all the time, and goals can shift as a result.

Unlike some goal-setting approaches, EBM does not assume that goals are always right; they need to be examined frequently to determine whether they are still the right goals. The best time to do this is when the results of experiments are inspected, on at least a monthly basis, if not more frequently.

Q: Do organizations have to use Scrum to use EBM?

A: No. Although our experience with Scrum has certainly informed and shaped our perspectives on empirical management, EBM does not require any particular approach other than forming hypotheses, running experiments, inspecting the results, and adapting based on feedback on at least monthly cycles.  If an organization already uses Scrum, this cycle will feel very familiar, but EBM can be used with any approach provided that the organization can run experiments quickly enough.

Q: Why doesn’t EBM talk about measures like profit, revenue, or EPS as Strategic Goals?

A: Business results like profit, revenue, EPS, and the like are indirect measures that have many components.  Most of these measures can also be gamed for short-term advantage, in ways that may destroy the long-term viability of the organization.

As an example, an organization might cut costs in a way that shows short-term gains, only to have valuable employees with critical knowledge leave the organization, which would affect the organization’s long-term viability.

EBM is founded on the principle that all business value is created by closing customer satisfaction gaps. If an organization focuses on doing this while also reducing waste by improving Time-to-Market and Ability to Innovate, profits will follow. In addition, focusing on the areas defined by EBM’s Key Value Areas gives people in the organization concrete and motivating targets for improvement, while purely abstract financial goals do not.

Q: How can I learn more about EBM?

A: Scrum.org has developed a workshop to help students understand and apply EBM concepts. For more information, look here.
