Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead. This minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias as time passes.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

Step 1: Define your variables

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

  • Phone use and sleep:
    Independent variable: minutes of phone use before sleep.
    Dependent variable: hours of sleep per night.
  • Temperature and soil respiration:
    Independent variable: air temperature just above the soil surface.
    Dependent variable: CO2 respired from soil.

Then you need to think about possible extraneous and confounding variables, and consider how you might control them in your experiment.

  • Phone use and sleep:
    Extraneous variable: natural variation in sleep patterns among individuals.
    How to control: measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group.
  • Temperature and soil respiration:
    Extraneous variable: soil moisture also affects respiration, and moisture can decrease with increasing temperature.
    How to control: monitor soil moisture and add water to ensure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Step 2: Write your hypothesis

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

  • Phone use and sleep:
    Null hypothesis (H0): phone use before sleep does not correlate with the amount of sleep a person gets.
    Alternate hypothesis (Ha): increasing phone use before sleep leads to a decrease in sleep.
  • Temperature and soil respiration:
    Null hypothesis (H0): air temperature does not correlate with soil respiration.
    Alternate hypothesis (Ha): increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

Step 3: Design your experimental treatments

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. For example, in the soil respiration experiment, you could vary air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

For example, in the phone use experiment, you could treat phone use as:

  • a categorical variable: either binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

Step 4: Assign your subjects to treatment groups

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
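As a rough sketch of how study size and statistical power relate, the sample size needed per group for a two-group comparison can be approximated from the desired power and the smallest effect you want to detect. This uses the normal approximation rather than the exact t-test power calculation, so treat the numbers as illustrative:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects needed per group for a two-group comparison,
    given the smallest effect (Cohen's d) you want to detect."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at 80% power needs about 63 subjects per group;
# halving the detectable effect roughly quadruples the required sample size.
print(n_per_group(0.5))   # 63
print(n_per_group(0.25))  # 252
```

Detecting smaller effects, or demanding higher power, always requires more subjects, which is the sense in which study size determines how much confidence you can have in your results.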

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs a randomized block design .
  • A between-subjects design vs a within-subjects design .

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design , every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
  • Phone use and sleep:
    Completely randomized design: subjects are all randomly assigned a level of phone use using a random number generator.
    Randomized block design: subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
  • Temperature and soil respiration:
    Completely randomized design: warming treatments are assigned to soil plots at random, using a number generator to generate map coordinates within the study area.
    Randomized block design: soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
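The two designs above can be sketched in code. This is a minimal illustration (the subject identifiers, block criterion, and treatment names are made up for the example), not a prescription:

```python
import random

def completely_randomized(subjects, treatments, seed=42):
    """Assign every subject to a treatment group purely at random."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    # deal the shuffled subjects round-robin into the treatment groups
    return {t: pool[i::len(treatments)] for i, t in enumerate(treatments)}

def randomized_block(subjects, block_of, treatments, seed=42):
    """Group subjects by a shared characteristic first (e.g. an age band),
    then randomize treatments separately within each block."""
    blocks = {}
    for s in subjects:
        blocks.setdefault(block_of(s), []).append(s)
    return {b: completely_randomized(members, treatments, seed)
            for b, members in blocks.items()}

subjects = list(range(12))  # stand-ins for 12 participants
print(completely_randomized(subjects, ["none", "low", "high"]))
print(randomized_block(subjects, lambda s: "under 30" if s < 6 else "30+",
                       ["none", "low", "high"]))
```

Fixing the seed makes the assignment reproducible, which is useful for documenting exactly how groups were formed.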

Sometimes randomization isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

  • Phone use and sleep:
    Between-subjects design: subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment.
    Within-subjects design: subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomized.
  • Temperature and soil respiration:
    Between-subjects design: warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment.
    Within-subjects design: every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperature) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized.
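Counterbalancing the order of treatments in a within-subjects design can be sketched like this; the treatment names are taken from the phone use example, and the cycling-through-all-orders scheme is one common approach among several:

```python
import itertools
import random

def counterbalanced_orders(treatments, n_subjects, seed=42):
    """Give each subject an order of treatments, cycling through every
    possible order so that no single order dominates the experiment."""
    all_orders = list(itertools.permutations(treatments))
    orders = [all_orders[i % len(all_orders)] for i in range(n_subjects)]
    random.Random(seed).shuffle(orders)  # randomize which subject gets which order
    return orders

for subject, order in enumerate(counterbalanced_orders(["none", "low", "high"], 6)):
    print(f"subject {subject}: {' -> '.join(order)}")
```

With 3 treatments there are 6 possible orders, so any multiple of 6 subjects uses each order equally often.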


Step 5: Measure your dependent variable

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations.

For example, you could operationalize hours of sleep in different ways:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Frequently asked questions about experiments

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.


True Experimental Design - Types & How to Conduct


True experimental research is often considered the most accurate form of research. The researcher has complete control over the process, which helps reduce error in the results and increases confidence in the research outcome.

In this blog, we will explore in detail what it is, its various types, and how to conduct it in 7 steps.

What is a true experimental design?

True experimental design is a statistical approach to establishing a cause-and-effect relationship between variables. It is among the most accurate research methods and provides substantial backing to support the existence of relationships.

There are three elements in this study that you need to fulfill in order to perform this type of research:

1. The existence of a control group: The sample of participants is subdivided into two groups – one that is subjected to the experiment, and so undergoes changes, and one that is not.

2. The presence of an independent variable: There must be an independent variable that influences the other variables and that the researcher can control while observing changes.

3. Random assignment: Participants must be randomly distributed among the groups.


An example of true experimental design

A study to observe the effects of physical exercise on productivity levels can be conducted using a true experimental design.

Suppose a group of 300 people volunteer for a study involving office workers in their 20s. These 300 participants are randomly distributed into 3 groups. 

  • 1st Group: A control group that does not participate in exercising and carries on with their everyday schedule.
  • 2nd Group: Asked to do home workouts for 30-45 minutes every day for one month.
  • 3rd Group: Has to work out 2 hours every day for a month.

Both exercise groups have to take one rest day per week.

In this research, the level of physical exercise acts as the independent variable, while performance at the workplace is the dependent variable that varies with the change in exercise levels.

Before initiating the true experimental research, each participant’s current performance at the workplace is evaluated and documented. As the study goes on, a progress report is generated for each of the 300 participants to monitor how their physical activity has impacted their workplace functioning.

At the end of two weeks, participants from the 2nd and 3rd groups who are able to endure their current level of workout are asked to increase their daily exercise time by half an hour, while those who cannot are advised either to continue with the same duration or to reduce it by half an hour.

So, in this true experimental design, a participant who at the end of two weeks cannot keep up with 2 hours of workout will now work out for 1 hour and 30 minutes for the remaining two weeks, while someone who can endure the 2 hours will push on to 2 hours and 30 minutes.

In this manner, the researcher notes the timings of each member from the two active groups for the first two weeks and the remaining two weeks after the change in timings and also monitors their corresponding performance levels at work.

The above example can be categorized as true experimental research since it has:

  • Control group:  Group 1 carries on with their schedule without being conditioned to exercise.
  • Independent variable : The duration of exercise each day.
  • Random assignment:  300 participants are randomly distributed into 3 groups and as such, there are no criteria for the assignment.

What is the purpose of conducting true experimental research?

Both the primary usage and purpose of a true experimental design lie in establishing meaningful relationships based on quantitative observation.

True experiments focus on connecting the dots between two or more variables by displaying how a change in one variable brings about a change in another. The effect can be as small-scale as enough sleep improving retention, or as large-scale as geographical differences affecting consumer behavior.

The main idea is to ensure the presence of different sets of variables to study with some shared commonality.

Beyond this, the design is used whenever its three criteria are met: random distribution, a control group, and an independent variable that the researcher can manipulate.


What are the advantages of true experimental design?

Let’s take a look at some advantages that make this research design conclusive and accurate.

Concrete method of research:

The statistical nature of the experimental design makes it highly credible and accurate, since the data collected from the research is subjected to statistical tools.

This makes the results easy to understand, objective, and actionable, and a better alternative to observation-based studies, which are subjective and difficult to draw inferences from.

Easy to understand and replicate:

Since the research provides hard figures and a precise representation of the entire process, the results become easily comprehensible for any stakeholder.

Further, it becomes easier for future researchers studying the same subject to understand prior work and replicate its results to supplement their own research.

Establishes comparison:

The presence of a control group in true experimental research allows researchers to compare and contrast. The effect of the treatment on the experimental group can be judged against the control group’s outcome as a frame of reference.

Conclusive:

The research combines observational and statistical analysis to generate informed conclusions. This directs the flow of follow-up actions in a definite direction, thus, making the research process fruitful.

What are the disadvantages of true experimental design?

We should also learn about the disadvantages this design can pose, to help you determine when and how you should use this type of research.

Costly:

This research design is costly. It takes a lot of investment to recruit and manage the large number of participants necessary for the sample to be representative.

The high resource investment makes it highly important for the researcher to plan each aspect of the process down to its minutest details.

Too idealistic:

The research takes place in a completely controlled environment. Such a scenario is not representative of real-world situations, so the results may not be authentic.

This is one of the main limitations of lab research, where the researcher can influence the study, and a reason why open-field research is sometimes preferred.

Time-consuming:

Setting up and conducting a true experiment is highly time-consuming. This is because of the processes like recruiting a large enough sample, gathering respondent data, random distribution into groups, monitoring the process over a span of time, tracking changes, and making adjustments. 

These processes, although essential to the model, make it an infeasible option when results are required in the near future.

Now that we’ve learned about the advantages and disadvantages let’s look at its types.


What are the 3 types of true experimental design?

The research design is categorized into three types based on the way you should conduct the research. Each type has its own procedure and guidelines, which you should be aware of to achieve reliable data.  

The three types are: 

1) Post-test-only control group design. 

2) Pre-test post-test control group design.

3) Solomon four group control design.

Let’s see how these three types differ. 

1) Post-test-only control group design:

In this type of true experimental research, the control and experimental groups, formed using random allocation, are not tested before the experimental methodology is applied. This is done to avoid affecting the quality of the study.

Participants tend to look for the purpose of the study and the criteria on which they are assessed. A pre-test can convey the basis on which they are being judged, which can lead them to modify their later responses and compromise the quality of the entire research process.

However, skipping the pre-test hinders your ability to establish a comparison between pre-experiment and post-experiment conditions, which is what reveals the changes that took place over the course of the research.

2) Pre-test post-test control group design:

It is a modification of the post-test-only control group design, with an additional test carried out before the implementation of the experimental methodology.

This two-way testing can help in noticing significant changes brought about in the research groups by the experimental intervention. However, there is no guarantee that the results present the true picture, since post-test responses can be affected by the respondents’ exposure to the pre-test.

3) Solomon four group control design:

This type of true experimental design involves the random distribution of sample members into 4 groups. These groups consist of 2 control groups that are not subjected to the experiments and changes and 2 experimental groups that the experimental methodology applies to.

Out of these 4 groups, one control and one experimental group is used for pre-testing while all four groups are subjected to post-tests.

This way, the researcher can establish a pre-test/post-test contrast, while another set of respondents remains unexposed to pre-tests and therefore provides genuine post-test responses, thus accounting for testing effects.
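As an illustrative sketch, the random distribution of subjects into the four Solomon groups might look like this (the group names are made up for the example):

```python
import random

def solomon_four_groups(subjects, seed=42):
    """Randomly distribute subjects into the four Solomon groups."""
    pool = list(subjects)
    random.Random(seed).shuffle(pool)
    q = len(pool) // 4
    return {
        "experimental_with_pretest": pool[0:q],      # pre-test, treatment, post-test
        "control_with_pretest":      pool[q:2*q],    # pre-test, post-test
        "experimental_no_pretest":   pool[2*q:3*q],  # treatment, post-test only
        "control_no_pretest":        pool[3*q:4*q],  # post-test only
    }

groups = solomon_four_groups(range(100))
print({name: len(members) for name, members in groups.items()})
```

Only the first two groups are pre-tested, but all four are post-tested, which is what lets the design separate the treatment effect from the testing effect.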


What is the difference between pre-experimental & true experimental research design?

Pre-experimental research helps determine how a researcher’s intervention might affect a group of people. It is a step in which you design the proper experiment to address a research question.

A true experiment means you are actually conducting the research. It helps establish a cause-and-effect relationship between the variables.

We’ll discuss the differences between the two based on four categories, which are: 

  • Observational vs. statistical.
  • Absence vs. presence of control groups.
  • Non-randomization vs. randomization.
  • Feasibility test vs. conclusive test.

Let’s find the differences to better understand the two experiments. 

Observational vs Statistical:

Pre-experimental research is an observation-based model, i.e., it is highly subjective and qualitative in nature.

True experimental design offers an accurate analysis of the data collected, using statistical data analysis tools.

Absence vs Presence of control groups:

Pre-experimental research designs do not usually employ a control group, which makes it difficult to establish a contrast.

All three types of true experiments, by contrast, employ control groups.

Non-randomization vs Randomization:

Pre-experimental research doesn’t always use randomization, whereas true experimental research always adheres to a randomized approach to group distribution.

Feasibility test vs Conclusive test:

Pre-tests  are used as a feasibility mechanism to see if the methodology being applied is actually suitable for the research purpose and whether it will have an impact or not.

True experiments, on the other hand, are conclusive in nature.


7 Steps to conduct a true experimental research

It’s important to understand the steps/guidelines of research in order to maintain research integrity and gather valid and reliable data.  

We have explained 7 steps to conducting this research in detail. The TL;DR version of it is: 

1) Identify the research objective.

2) Identify independent and dependent variables.

3) Define and group the population.

4) Conduct Pre-tests.

5) Conduct the research.

6) Conduct post-tests.

7) Analyse the collected data. 

Now let’s explore these seven steps in true experimental design. 

1) Identify the research objective:

Identify the variables that you need to analyze for a cause-and-effect relationship. Consider which relationship, once studied, will help you make effective decisions, and frame your research objective in one of the following ways:

  • Determination of the impact of X on Y
  • Studying how the usage/application of X causes Y

2) Identify independent and dependent variables:

Establish clarity as to what your controlling/independent variable is and which variable will change and be observed by the researcher. In the objectives framed above, X is the independent variable and Y is the dependent variable.

3) Define and group the population:

Define the target audience for the true experimental design. It is from this target population that a sample is selected for the research, so the population should be defined in as much detail as possible.

To narrow the scope, a random selection of individuals from the population is carried out. These selected respondents help the researcher answer the research questions. After selection, this sample of individuals is randomly subdivided into control and experimental groups.
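The two distinct randomization steps in this stage, random *selection* from the population and random *assignment* to groups, can be sketched in Python. The population names and sizes are illustrative, loosely following the office-workers example above:

```python
import random

def select_and_assign(population, sample_size, n_groups=2, seed=42):
    """Randomly *select* a sample from the target population, then
    randomly *assign* its members across the groups."""
    rng = random.Random(seed)
    sample = rng.sample(list(population), sample_size)  # random selection
    rng.shuffle(sample)                                 # random assignment
    return [sample[i::n_groups] for i in range(n_groups)]

population = [f"office_worker_{i}" for i in range(5000)]
control, experimental = select_and_assign(population, 300)
print(len(control), len(experimental))  # 150 150
```

Random selection supports the sample being representative of the population; random assignment supports the groups being comparable to each other. The two steps address different threats to validity.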

4) Conduct Pre-tests:

Before commencing with the actual study, pre-tests are to be carried out wherever necessary. These pre-tests take an assessment of the condition of the respondent so that an effective comparison between the pre and post-tests reveals the change brought about by the research.

5) Conduct the research:

Implement your experimental procedure with the experimental group created in the previous step. Provide the necessary instructions and answer any questions the participants may have. Monitor their practices and track their progress, and ensure that the intervention is being properly complied with; otherwise, the results can be tainted.

6) Conduct post-tests:

Gauge the impact that the intervention has had on the experimental group and compare it with the pre-tests. This is particularly important because the pre-test serves as a baseline: the changes measured in the post-test relative to it are attributed to the experimental intervention.

For example: if the pre-test shows that a particular customer service employee was able to solve 10 customer problems in two hours, and a post-test conducted after a month of daily two-hour workouts shows 5 additional problems being solved within the same two hours, those 5 extra resolutions reflect the productivity the employee gained from the intervention.

7) Analyse the collected data:

Use appropriate statistical tools to derive inferences from the collected data. Correlational analysis tools and tests of significance are highly effective for relationship-based studies, and so are well suited to true experimental research.

This step also includes differentiating between the pre- and post-tests to isolate the impact that the independent variable has had on the dependent variable. Contrasting the control group with the experimental group shows how much change occurred over the span of the experiment and how much of it was brought about by the intervention rather than by chance.
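One simple way to differentiate the pre- and post-tests across the two groups is a difference-in-differences calculation: the change in the experimental group beyond the change seen in the control group. A minimal sketch in Python, with hypothetical scores:

```python
from statistics import mean

def diff_in_diff(control_pre, control_post, exp_pre, exp_post):
    """Change in the experimental group beyond the change in the control group."""
    control_change = mean(control_post) - mean(control_pre)
    experimental_change = mean(exp_post) - mean(exp_pre)
    return experimental_change - control_change

# Hypothetical problem-solving scores before and after the intervention
effect = diff_in_diff(
    control_pre=[10, 9, 11], control_post=[10, 10, 11],
    exp_pre=[10, 11, 9],     exp_post=[15, 16, 14],
)
```

Subtracting the control group's change filters out drift that would have happened anyway, so `effect` reflects only the intervention.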


Wrapping up

This sums up everything about true experimental design. While it is often considered complex and expensive, it is also one of the most accurate research designs: the statistical analysis it relies on helps ensure that your data is reliable and that your conclusions carry a quantifiable level of confidence.

  • What is true experimental research design?

True experimental research design helps investigate the cause-and-effect relationships between the variables under study. The research method requires manipulating an independent variable, random assignment of participants to different groups, and measuring the dependent variable. 

  • How does true experiment research differ from other research designs?

The true experiment uses random selection and assignment of participants to groups to minimize preexisting differences between the groups. This allows researchers to make causal inferences about the influence of the independent variable, which distinguishes it from other research designs such as correlational research.

  • What are the key components of true experimental research designs?

The following are the important factors of a true experimental design: 

  • Manipulation of the independent variable. 
  • Control groups. 
  • Experiment groups. 
  • Dependent variable. 
  • Random assignment. 

  • What are some advantages of true experimental design?

It enables you to establish causal relationships between variables and offers control over the confounding variables. Moreover, you can generalize the research findings to the target population. 

  • What ethical considerations are important in a true experimental research design?

When conducting this research method, you must obtain informed consent from the participants. It’s important to ensure the confidentiality and privacy of the participants to minimize any risk or harm. 



Research Method


Experimental Design – Types, Methods, Guide


Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, the researcher manipulates one or more variables at different levels and uses a randomized block design to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Methods

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Methods

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
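Python's standard library `statistics` module covers these summaries directly; a minimal sketch with hypothetical scores:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical scores

summary = {
    "mean": statistics.mean(data),
    "median": statistics.median(data),
    "mode": statistics.mode(data),
    "range": max(data) - min(data),
    "stdev": statistics.pstdev(data),  # population standard deviation
}
```

Use `statistics.stdev` instead of `pstdev` when the data are a sample rather than the whole population.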

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
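The one-way ANOVA F statistic is the ratio of between-group to within-group mean squares, and can be computed from sums of squares alone. A minimal sketch in Python (the group data are hypothetical):

```python
from statistics import mean

def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k, n = len(groups), len(all_values)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

A large F relative to the F distribution with (k - 1, n - k) degrees of freedom indicates the group means differ more than chance would predict.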

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
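For simple linear regression, the least-squares slope and intercept have closed forms: the slope is the covariance of x and y divided by the variance of x. A minimal sketch in Python with hypothetical data:

```python
from statistics import mean

def linear_fit(xs, ys):
    """Least-squares slope and intercept for y ≈ slope * x + intercept."""
    x_bar, y_bar = mean(xs), mean(ys)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return slope, y_bar - slope * x_bar

slope, intercept = linear_fit([1, 2, 3, 4], [3, 5, 7, 9])  # data lie on y = 2x + 1
```

The sign of the slope gives the direction of the relationship, and its magnitude gives the strength in the units of the variables.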

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture: Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology: Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering: Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education: Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing: Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research : A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question: Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment: Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment: Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, it is retained; if they do not, it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.
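The analyze-and-conclude steps above can be sketched with a permutation test, which estimates how often a difference in group means as large as the observed one would arise by random relabeling alone; all data here are hypothetical:

```python
import random
from statistics import mean

def permutation_p_value(treatment, control, n_perm=10_000, seed=0):
    """Two-sided p-value for the difference in group means under random relabeling."""
    rng = random.Random(seed)
    observed = abs(mean(treatment) - mean(control))
    pooled = treatment + control
    n_t = len(treatment)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_t]) - mean(pooled[n_t:]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical outcome scores for the two groups
p = permutation_p_value([14, 15, 16, 15], [10, 11, 10, 9])
```

A small p-value supports retaining the hypothesis that the treatment made a difference; this approach needs no distributional assumptions, which makes it a reasonable default for small experiments.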

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision: Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability: If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



How to Conduct a True Experiment

Last Updated: February 2, 2024

This article was co-authored by Bess Ruff, MA. Bess Ruff is a Geography PhD student at Florida State University. She received her MA in Environmental Science and Management from the University of California, Santa Barbara in 2016. She has conducted survey work for marine spatial planning projects in the Caribbean and provided research support as a graduate fellow for the Sustainable Fisheries Group. There are 14 references cited in this article, which can be found at the bottom of the page. This article has been viewed 149,404 times.

Experiments are vital to the advancement of science. One important type of experiment is known as the true experiment. A true experiment is one in which the experimenter has worked to control all of the variables except the one that is being studied. In order to accomplish this, true experiments make use of random test groups. [1] True experiments are useful for exploring cause-and-effect relationships, such as: is a particular treatment effective for a medical condition? Or, does exposure to a particular substance cause a certain disease? However, because they take place in controlled circumstances, they don’t always fully reflect what will happen in the real world.

Designing the Experiment

Step 1 Formulate the question you would like to answer.

Step 2 Identify the dependent variable.

  • For example, if you want to know if listening to punk music makes you sleep less, the dependent variable will be the number of hours slept.
  • A dependent variable must be measurable.

Step 3 Identify the independent variable.

  • In your cause-and-effect question, it is the term that comes before "cause": does better nutrition cause higher test scores? Better nutrition is the independent variable, and higher test scores is the dependent variable.
  • In the example about punk music, listening to punk music is the independent variable.

Step 4 Identify the relevant population.

  • Random selection ensures that your subjects have a diverse set of characteristics that reflects the population in general. This helps you to avoid introducing unintended variables. If education level is significant to your study, for example, and your population includes people with very little education as well as people with Ph.D.s, you don’t want a subject group composed only of college freshmen.
  • There are several methods of randomly selecting subjects. For a relatively small population, you could assign each member a number and then use a random number generator to select members. For a larger population, you could take a systematic sample (for example, the second name on each page of a directory) and then use the random number method just described with that smaller subset. [5]
  • Additionally, large populations can be randomly sampled through stratified sampling methods, which divide the population into homogeneous "strata" and then select individuals from each group to generate a random sample population. [6]
  • Select a group large enough to produce statistically useful data. The ideal size will vary greatly depending on factors such as the size of the underlying population and the expected size of the effect. [7] You may use a sample size calculator to aid in determining a target size.
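The three selection methods above can be sketched with Python's standard-library random module. The function names and the fixed seed are my own (the seed only makes the draw reproducible):

```python
import random

def simple_random_sample(population, k, seed=0):
    """Small population: assign each member a number and draw k at random."""
    return random.Random(seed).sample(population, k)

def systematic_then_random(population, step, k, seed=0):
    """Larger population: take every `step`-th member, then sample randomly from that subset."""
    subset = population[::step]
    return random.Random(seed).sample(subset, k)

def stratified_sample(strata, k_per_stratum, seed=0):
    """Stratified sampling: draw k members from each homogeneous stratum (dict of lists)."""
    rng = random.Random(seed)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, k_per_stratum))
    return sample

# Example: 10 subjects from a population of 100, and 3 from each of two strata.
population = list(range(100))
print(simple_random_sample(population, 10))
print(stratified_sample({"young": list(range(50)), "old": list(range(50, 100))}, 3))
```

In practice the "population" list would be your sampling frame (e.g. a directory of names), not integers.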

Running the Experiment

Step 1 Randomly assign subjects into two groups.

  • Use a random number generator to assign a number to each subject. Then place them in the two groups by number. For instance, assign the lower half of the random numbers to the control group.
  • The control group will not be given the treatment or intervention. This will allow you to measure the effect of the intervention.
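A minimal sketch of this assignment step, assuming a list of subject IDs (the helper name and seed are hypothetical; the seed only makes the split reproducible):

```python
import random

def assign_to_groups(subjects, seed=42):
    """Shuffle a copy of the subject list, then split it in half:
    first half -> control group, second half -> treatment group."""
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

control, treatment = assign_to_groups([f"S{i}" for i in range(20)])
print("control:", control)
print("treatment:", treatment)
```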

Step 2 Ensure that subjects do not know which group they are in.

Step 3 Keep the researchers blind as well, where possible (a double-blind setup).

  • Have different people in charge of assigning subjects to a group, administering treatment, and evaluating subjects after treatment.

Step 4 Conduct a “pretest.”

  • A pretest is not a required feature of the true experiment. However, it increases the ability of your experiment to demonstrate cause and effect. [10] In order to say that A causes B, you want to show that A happened before B, which can only be done through the use of a pretest.
  • For example, if you are conducting an experiment on how listening to punk music affects sleep, you’d want to gather data on how long each participant typically sleeps at night when they haven’t listened to punk music.

Step 5 Administer the treatment to the experimental group.

  • In a clinical trial, this often means that a placebo is administered to the control group. A placebo resembles the real treatment as closely as possible, but is in fact designed to have no effect. For example, in a study on the effect of a medicine, both groups would come to the same room and receive an identical-looking pill. The only difference would be that one pill would contain the medicine, while the other would be an inert “sugar pill.”
  • In other kinds of experiments, keeping the two experiences equivalent will take other forms. Take the example of the effect of playing the trumpet on academic performance. You might want to offer the control group another kind of lesson or opportunity for socialization, to be sure that it’s really the trumpet-playing in specific and not getting a music lesson in general that is causing the effect. [11]

Step 6 Administer a post-test.

Analyzing Your Results

Step 1 Calculate descriptive statistics.

  • What is the central tendency of the data? Central tendency is measured using mean (average), median, or mode. For example, in a study on the effects of caffeine on sleep, you will want to calculate the mean number of hours slept by members of the control and experimental groups.
  • What is the distribution of the data? Again, there are many different ways to measure how the data are distributed, including range, variance, and standard deviation.
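For example, with made-up hours-slept data for the punk music study, Python's statistics module computes these descriptive statistics directly:

```python
import statistics

# Hypothetical hours-slept data (invented numbers, for illustration only).
control_sleep      = [7.1, 6.8, 7.4, 6.9, 7.2, 7.0]   # no punk music
experimental_sleep = [6.2, 5.9, 6.5, 6.0, 6.4, 6.1]   # punk music before bed

for name, data in [("control", control_sleep), ("experimental", experimental_sleep)]:
    print(name,
          "mean:",   round(statistics.mean(data), 2),
          "median:", statistics.median(data),
          "range:",  round(max(data) - min(data), 2),
          "stdev:",  round(statistics.stdev(data), 2))
```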

Step 2 Compare the post-test results produced by the experimental and control groups.

Step 3 Perform a test of statistical significance.

  • A t-test is a common test of significance. A t-test compares the difference between the means of two sets of data in relation to the variation within the data. [15] You can calculate a t-test by hand or by using statistical software such as Microsoft Excel.
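The t-test comparison described above can be sketched from scratch. This is Welch's version of the statistic, which does not assume the two groups have equal variances (the function name is my own; statistical software reports the same number plus a p-value):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: difference between the means,
    scaled by the variation within each sample."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a, mean_b = sum(sample_a) / n_a, sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)  # sample variances
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Identical groups give t = 0; well-separated groups give a large |t|.
print(welch_t([7.1, 6.8, 7.4], [6.2, 5.9, 6.5]))
```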

Step 4 Evaluate your experiment.


Tips

  • Combine true experiments with other types of experiments in order to gain a fuller picture. Observational studies will provide information about how a given treatment, for example, works in real life.
  • True experiments are often conducted in a laboratory. But they don’t have to be, as long as control is imposed over possible extraneous factors.
  • Be sure to take ethics into consideration when conducting this type of study. Never administer anything that may be harmful to a subject. Always stop the study if adverse effects occur. Never withhold treatments knowing that they will improve a subject's health. Follow the guidelines of your school, university, lab, or company in handling human or animal subjects.
  • Be aware of how research design affects results. Bias in how you select subjects or how you control the environment of the experiment can introduce hidden effects on your results.


  • ↑ http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3505292/
  • ↑ https://nces.ed.gov/nceskids/help/user_guide/graph/variables.asp
  • ↑ http://linguistics.byu.edu/faculty/henrichsenl/ResearchMethods/RM_2_08.html
  • ↑ http://allpsych.com/researchmethods/selectingsubjects/
  • ↑ http://www.stat.yale.edu/Courses/1997-98/101/sample.htm
  • ↑ http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876926/
  • ↑ http://www.bmj.com/rapid-response/2011/10/31/what-single-blind-trial
  • ↑ https://www.verywellmind.com/what-is-a-double-blind-study-2795103
  • ↑ http://web.csulb.edu/~msaintg/ppa696/696exper.htm
  • ↑ http://allpsych.com/researchmethods/trueexperimentaldesign/
  • ↑ https://statistics.laerd.com/statistical-guides/descriptive-inferential-statistics.php
  • ↑ https://www.stat.cmu.edu/~hseltman/309/Book/Book.pdf
  • ↑ http://www.stat.yale.edu/Courses/1997-98/101/sigtest.htm
  • ↑ http://archive.bio.ed.ac.uk/jdeacon/statistics/tress4a.html


True experimental design

Published November 23, 2021. Updated December 14, 2021.

True experimental design is a statistical technique for identifying a cause-and-effect relationship between variables. It is one of the most accurate research designs since it gives substantial evidence to support or refute a hypothesis, and is best applied to quantitative data.

Requirements that must be satisfied to conduct true experimental research are as follows:

  • There must be a viable control group.
  • Only one independent variable should be tested at a time to maintain statistical robustness.
  • The participants must be randomly assigned to either a control or experimental group.

Steps to conduct a true experimental study

Step 1: Identify the research objective and state the hypothesis.

Step 2: Determine the dependent and independent variables.

Step 3: Define and randomly assign participants to the control and experimental groups.

Step 4: Conduct pre-tests before beginning the experiment.

Step 5: Conduct the experiment.

Step 6: Conduct post-tests to examine the impact of the study on the experimental group and compare it with the pre-test data.

Step 7: Analyze the collected data using statistical methods.

Example of true experimental design

A group of 400 office workers participate in a research study to determine how physical activity affects their work productivity. The participants are divided into three groups: 1) a control group with no exercise routine, 2) an experimental group required to exercise for 30-45 minutes per day, and 3) an experimental group required to exercise for two hours per day. Each group is required to have one rest day each week, and the experiment lasts one month.

The duration of physical activity is the independent variable and workplace performance is the dependent variable. Before the study begins, each participant’s work performance is assessed with a pre-test. The researcher tracks the exercise and work performance of all participants across all three groups.
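A sketch of how the 400 participants might be randomly split into the example's three groups (the group labels and function name are my own shorthand; the seed only makes the split reproducible):

```python
import random

def assign_three_groups(participants, seed=7):
    """Randomly split participants into the example's three groups:
    control (no exercise), 30-45 min/day, and 2 h/day."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    third = len(shuffled) // 3
    return {
        "control":        shuffled[:third],
        "moderate_30_45": shuffled[third:2 * third],
        "intense_2h":     shuffled[2 * third:],
    }

groups = assign_three_groups(list(range(400)))
print({name: len(members) for name, members in groups.items()})
# -> {'control': 133, 'moderate_30_45': 133, 'intense_2h': 134}
```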

The above example qualifies as a true experimental research design because:

  • A control group is present.
  • Experimental groups are present.
  • Participants are randomly assigned to the study groups.
  • The duration of physical activity is an independent variable manipulated by the researcher.

Advantages of true experimental design

  • True experimental design is a reliable and accurate method for analyzing quantitative data as it uses statistical methods.
  • Study results are repeatable by future researchers.
  • Limiting a study to include only one independent variable leaves less room for error in attributing causality.

Disadvantages of true experimental design

  • True experimental designs are expensive as many resources are required to manage a large number of participants for a representative sample.
  • The procedure of setting up and conducting a true experimental study is time-consuming.
  • Disciplines within the social and biological sciences often focus on inquiries wherein a single independent variable is difficult to identify and isolate for testing. Variations make using this method of study difficult within certain fields.
  • Real world conditions are not taken into account.

Key takeaways

  • True experimental design is a statistical technique for identifying a cause-and-effect relationship between variables. It is one of the most precise forms of study design since it uses statistical analysis to test a hypothesis.
  • True experimental research consists of a control group and an experimental group.
  • True experimental design is a reliable method for analyzing quantitative data, is repeatable by other researchers, and limits error in attributing causality of variables.
  • True experimental design is an expensive and time-consuming method, is limited in usefulness to particular disciplines of research, and does not take into account real world conditions.



A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying. 

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experimental design

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

  • Phone use and sleep. Independent variable: minutes of phone use before sleep. Dependent variable: hours of sleep per night.
  • Temperature and soil respiration. Independent variable: air temperature just above the soil surface. Dependent variable: CO2 respired from soil.

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

  • Phone use and sleep. Extraneous variable: natural variation in sleep patterns among individuals. How to control: measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group.
  • Temperature and soil respiration. Extraneous variable: soil moisture also affects respiration, and moisture can decrease with increasing temperature. How to control: monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

  • Phone use and sleep. Null hypothesis (H0): phone use before sleep does not correlate with the amount of sleep a person gets. Alternate hypothesis (Ha): increasing phone use before sleep leads to a decrease in sleep.
  • Temperature and soil respiration. Null hypothesis (H0): air temperature does not correlate with soil respiration. Alternate hypothesis (Ha): increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable.

For example, in the soil respiration experiment, you could raise the air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results.

For example, in the phone use experiment, you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design.
  • A between-subjects design vs a within-subjects design.

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
  • Phone use and sleep. Completely randomised design: subjects are all randomly assigned a level of phone use using a random number generator. Randomised block design: subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
  • Temperature and soil respiration. Completely randomised design: warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. Randomised block design: soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
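The randomised block idea can be sketched as: shuffle subjects within each block, then deal treatment levels out in turn, so every block receives a balanced mix. This is a simplified illustration with hypothetical names, not a prescribed procedure:

```python
import random

def randomise_within_blocks(subjects_by_block, treatments, seed=1):
    """Shuffle subjects within each block (e.g. age group), then assign
    treatments in rotation so each block gets a balanced mix of levels."""
    rng = random.Random(seed)
    assignment = {}
    for block, subjects in subjects_by_block.items():
        shuffled = subjects[:]
        rng.shuffle(shuffled)
        for i, subject in enumerate(shuffled):
            assignment[subject] = treatments[i % len(treatments)]
    return assignment

blocks = {"age 18-30": ["A", "B", "C", "D", "E", "F"],
          "age 31-50": ["G", "H", "I", "J", "K", "L"]}
plan = randomise_within_blocks(blocks, ["none", "low", "high"])
print(plan)  # each age block gets exactly two subjects per phone-use level
```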

Sometimes randomisation isn’t practical or ethical, so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

  • Phone use and sleep. Between-subjects design: subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. Within-subjects design: subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised.
  • Temperature and soil respiration. Between-subjects design: warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. Within-subjects design: every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised.
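Counterbalancing can be sketched by drawing each subject's treatment order at random from all possible orderings. This is a simplified illustration (full counterbalancing would balance the orders exactly, e.g. with a Latin square):

```python
import itertools
import random

def counterbalanced_orders(treatments, subjects, seed=3):
    """Give every subject all treatments, with the order drawn at random
    from all possible orderings of the treatment list."""
    rng = random.Random(seed)
    orders = list(itertools.permutations(treatments))
    return {subject: list(rng.choice(orders)) for subject in subjects}

plan = counterbalanced_orders(["none", "low", "high"], [f"S{i}" for i in range(6)])
for subject, order in plan.items():
    print(subject, order)
```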

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations.

For example, to measure hours of sleep in the phone use experiment, you could:

  • ask participants to record what time they go to sleep and get up each day.
  • ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental design is a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

Cite this Scribbr article


Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 9 June 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/


Logo for University of Southern Queensland


10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality), due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effect of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments, conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli, called a treatment (the treatment group), while other subjects are not given such a stimulus (the control group). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case, there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a certain medical condition like dementia, if a sample of dementia patients is randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (the control group), then the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures, while those conducted after the treatment are posttest measures.

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat, which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.
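Regression to the mean is easy to demonstrate with a small simulation. In the sketch below (all numbers invented for illustration), pretest and posttest scores share a true ability component but carry independent measurement noise, so the two measures are imperfectly correlated; the highest pretest scorers then fall back toward the population mean on the posttest even though no treatment occurred:

```python
import random

random.seed(42)

# Each subject has a stable true ability; each test adds independent noise,
# so pretest and posttest are imperfectly correlated.
N = 10_000
true_ability = [random.gauss(50, 10) for _ in range(N)]
pretest = [a + random.gauss(0, 10) for a in true_ability]
posttest = [a + random.gauss(0, 10) for a in true_ability]

# Select the subjects who scored in roughly the top 10% on the pretest.
cutoff = sorted(pretest)[int(0.9 * N)]
high = [i for i in range(N) if pretest[i] >= cutoff]

pre_mean = sum(pretest[i] for i in high) / len(high)
post_mean = sum(posttest[i] for i in high) / len(high)

# With no treatment at all, the high scorers' posttest mean drops back
# toward the population mean of 50.
print(f"high scorers' pretest mean:  {pre_mean:.1f}")
print(f"high scorers' posttest mean: {post_mean:.1f}")
```

The drop appears purely because extreme pretest scores are partly noise, which is exactly why regression threat is more severe when the two measures are imperfectly correlated.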

Two-group experimental designs

In the design notation used in the figures below, R denotes random assignment of subjects to groups, X denotes the treatment administered to the treatment group, and O denotes pretest or posttest observations of the dependent variable.

Pretest-posttest control group design . In this design, subjects are randomly assigned to treatment and control groups and subjected to an initial (pretest) measurement of the dependent variables of interest; the treatment group is then administered a treatment (representing the independent variable of interest), and the dependent variables are measured again (posttest). The notation of this design is shown in Figure 10.1.

Pretest-posttest control group design

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.
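As a concrete illustration of this analysis, the sketch below compares pretest-to-posttest gain scores between the two groups, one common way of analysing a pretest-posttest control group design. For two groups, a one-way ANOVA is equivalent to a pooled two-sample t-test (F = t²). All scores are invented for illustration:

```python
import math

# Hypothetical pretest/posttest math scores for a remedial program.
treatment_pre, treatment_post = [55, 60, 48, 62, 57], [70, 74, 66, 75, 71]
control_pre, control_post = [54, 61, 50, 63, 58], [58, 64, 53, 66, 60]

def gains(pre, post):
    """Per-subject pretest-to-posttest gain scores."""
    return [b - a for a, b in zip(pre, post)]

def pooled_t(x, y):
    """Two-sample pooled-variance t statistic (for two groups, F = t**2)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

gt, gc = gains(treatment_pre, treatment_post), gains(control_pre, control_post)
t = pooled_t(gt, gc)
print(f"mean gain (treatment): {sum(gt) / len(gt):.1f}")  # 14.8
print(f"mean gain (control):   {sum(gc) / len(gc):.1f}")  # 3.0
print(f"t = {t:.2f}, F = {t * t:.2f}")
```

Analysing gains rather than raw posttest scores is what lets the pretest measurement absorb baseline differences between subjects.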

Posttest-only control group design . This design is a simpler version of the pretest-posttest design in which pretest measurements are omitted. The design notation is shown in Figure 10.2.

Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.

Covariance design . This is a variation of the posttest-only design in which a pretest measure is collected, but of a covariate (a variable expected to influence the dependent variable) rather than of the dependent variable itself.

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups, after statistically adjusting for the covariate.

Due to the presence of covariates, the right statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the controlling of covariates. Covariance designs can also be extended to pretest-posttest control group designs.
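The adjustment at the heart of ANCOVA can be sketched in a few lines. The example below (hypothetical data; a simplified hand computation, not a full ANCOVA with significance tests) estimates the pooled within-group slope of the posttest on the covariate and uses it to correct the raw posttest difference for the groups' difference on the covariate:

```python
# Hypothetical covariance design: 'cov' is a covariate measured at pretest
# (e.g. prior GPA), 'post' is the posttest score on the dependent variable.
t_cov, t_post = [3.0, 3.4, 2.8, 3.6, 3.2], [78, 84, 70, 88, 80]
c_cov, c_post = [3.1, 3.5, 2.9, 3.7, 3.3], [70, 76, 64, 80, 72]

def mean(xs):
    return sum(xs) / len(xs)

def within_sums(cov, post):
    """Within-group cross-product and covariate sum of squares."""
    mc, mp = mean(cov), mean(post)
    sxy = sum((c - mc) * (p - mp) for c, p in zip(cov, post))
    sxx = sum((c - mc) ** 2 for c in cov)
    return sxy, sxx

sxy_t, sxx_t = within_sums(t_cov, t_post)
sxy_c, sxx_c = within_sums(c_cov, c_post)
b = (sxy_t + sxy_c) / (sxx_t + sxx_c)  # pooled within-group slope

# ANCOVA-adjusted treatment effect: the raw posttest difference corrected
# for the groups' difference on the covariate.
raw_effect = mean(t_post) - mean(c_post)
adjusted_effect = raw_effect - b * (mean(t_cov) - mean(c_cov))
print(f"slope b = {b:.1f}, raw = {raw_effect:.1f}, adjusted = {adjusted_effect:.1f}")
```

Here the treatment group happened to start slightly lower on the covariate, so the adjusted effect is larger than the raw posttest difference.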

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need four or higher-group designs. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each subdivision of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

The simplest factorial design is a 2 × 2 factorial design, which has two factors, each at two levels. For instance, consider a study examining the effect of two types of instruction and two levels of instructional time (1.5 hours/week and 3 hours/week) on students’ learning outcomes.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline) against which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that when interaction effects are significant, they dominate the analysis, and it is not meaningful to interpret the main effects.
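With a 2 × 2 design, main and interaction effects can be read directly off the four cell means. The sketch below uses invented cell means for the instructional type and instructional time example (the factor level names and all numbers are hypothetical):

```python
# Hypothetical cell means for a 2 x 2 factorial design:
# factor 1 = instructional type (levels invented: "in-class", "online"),
# factor 2 = instructional time in hours/week (1.5 or 3.0).
means = {
    ("in-class", 1.5): 70.0, ("in-class", 3.0): 75.0,
    ("online", 1.5): 72.0, ("online", 3.0): 85.0,
}

# Main effect of type: difference between type levels, averaged over time.
main_type = ((means[("online", 1.5)] + means[("online", 3.0)]) / 2
             - (means[("in-class", 1.5)] + means[("in-class", 3.0)]) / 2)

# Main effect of time: difference between time levels, averaged over type.
main_time = ((means[("in-class", 3.0)] + means[("online", 3.0)]) / 2
             - (means[("in-class", 1.5)] + means[("online", 1.5)]) / 2)

# Interaction: does the effect of type change across levels of time?
interaction = ((means[("online", 3.0)] - means[("in-class", 3.0)])
               - (means[("online", 1.5)] - means[("in-class", 1.5)]))

print(main_type, main_time, interaction)  # 6.0 9.0 8.0
```

The non-zero interaction term says the advantage of one instructional type is larger at three hours/week than at one and a half, which is precisely the situation in which interpreting the main effects alone would mislead.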

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised blocks design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Randomised blocks design
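The assignment logic of a randomised blocks design is straightforward to sketch: randomise to treatment and control separately within each homogeneous block. The block names below are hypothetical stand-ins for the example above:

```python
import random

random.seed(7)

# Hypothetical blocks from the example: university students and
# full-time working professionals.
blocks = {
    "students": [f"s{i}" for i in range(8)],
    "professionals": [f"p{i}" for i in range(8)],
}

assignment = {}
for block, members in blocks.items():
    shuffled = members[:]
    random.shuffle(shuffled)  # randomise *within* the block
    half = len(shuffled) // 2
    for m in shuffled[:half]:
        assignment[m] = (block, "treatment")
    for m in shuffled[half:]:
        assignment[m] = (block, "control")

# Every block contributes equally to both arms, so between-block
# differences no longer masquerade as treatment effects.
```

Because each block is split evenly between arms, block-to-block variance is removed from the treatment comparison rather than inflating its error term.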

Solomon four-group design . In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

Solomon four-group design

Switched replication design . This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Switched replication design

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group (say, by virtue of having had a better teacher in a previous semester), which introduces the possibility of selection bias . Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats, such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Non-equivalent group designs . The most common quasi-experimental designs mirror the true experimental designs described earlier but use intact, non-randomly assigned groups; the standard non-equivalent group design (NEGD), for instance, is a pretest-posttest design without random assignment.

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design . This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol, while those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
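The defining feature of the RD design, assignment strictly by a cut-off score, can be sketched as follows (the cut-off, names, and scores are invented for illustration):

```python
# Hypothetical standardised-test scores; students below the cut-off are
# assigned to the remedial (treatment) group, the rest to the control group.
CUTOFF = 60
students = [("A", 45), ("B", 72), ("C", 58), ("D", 90), ("E", 60)]

treatment = [name for name, score in students if score < CUTOFF]
control = [name for name, score in students if score >= CUTOFF]

print(treatment)  # ['A', 'C']
print(control)    # ['B', 'D', 'E']
```

Because assignment is a deterministic function of the preprogram score, the analysis looks for a discontinuity in the score-outcome relationship at the cut-off rather than comparing the two groups directly.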

Proxy pretest design . This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Proxy pretest design

Separate pretest-posttest samples design . This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and then measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. This design is not particularly strong, because you cannot examine changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.

Separate pretest-posttest samples design

An interesting variation of the non-equivalent dependent variable (NEDV) design is the pattern-matching NEDV design , which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine whether the theoretical prediction is matched in actual observations. This pattern-matching technique, based on the degree of correspondence between theoretical and observed patterns, is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’etre of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


14.2 True experiments

Learning objectives.

Learners will be able to…

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

A true experiment , often considered the “gold standard” in research designs, is one of the most rigorous of all research designs. In this design, one or more independent variables (as treatments) are manipulated by the researcher, subjects are randomly assigned (i.e., random assignment) to different treatment levels, and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its ability to increase internal validity and help establish causality through treatment manipulation, while controlling for the effects of extraneous variables. As such, true experiments are best suited for explanatory research questions.

In true experimental design, research subjects are assigned to either an experimental group, which receives the treatment or intervention being investigated, or a control group, which does not.  Control groups may receive no treatment at all, the standard treatment (which is called “treatment as usual” or TAU), or a treatment that entails some type of contact or interaction without the characteristics of the intervention being investigated.  For example, the control group may participate in a support group while the experimental group is receiving a new group-based therapeutic intervention consisting of education and cognitive behavioral group therapy.

After determining the nature of the experimental and control groups, the next decision a researcher must make is when they need to collect data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle data collection another way? Below, we’ll discuss three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often difficult and can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, can be hard (and sometimes unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The participants in the experimental group will receive CBT, while the participants in the control group will receive a series of videos about social anxiety.

Classical experiments (pretest posttest control group design)

The elements of a classical experiment are (1) random assignment of participants into an experimental and control group, (2) a pretest to assess the outcome(s) of interest for each group, (3) delivery of an intervention/treatment to the experimental group, and (4) a posttest to both groups to assess potential change in the outcome(s).

When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the components of the experiment. Table 14.2 starts us off by laying out what the abbreviations mean.

Table 14.2 Experimental research design notations
R Random assignment
O Observation (assessment of the dependent/outcome variable)
X Intervention or treatment
X e Experimental condition (i.e., the treatment or intervention)
X i Treatment as usual (sometimes denoted TAU)
A, B, C, etc. Denotes different groups (control/comparison and experimental)

Figure 14.1 depicts a classical experiment using our example of assessing the intervention of CBT for social anxiety.  In the figure, RA denotes random assignment to the experimental group A and RB is random assignment to the control group B. O 1 (observation 1) denotes the pretest, X e denotes the experimental intervention, and O 2 (observation 2) denotes the posttest.


The more general, or universal, notation for classical experimental design is shown in Figure 14.2.


In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way (Figure 14.3), with X i denoting treatment as usual:


Hopefully, these diagrams provide you a visualization of how this type of experiment establishes temporality , a key component of a causal relationship. By administering the pretest, researchers can assess whether the change in the outcome occurred after the intervention. Assuming there is a change in the scores between the pretest and posttest, we would be able to say that yes, the change did occur after the intervention.
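The four elements of the classical experiment can be sketched end to end as a small simulation (all scores are simulated stand-ins, and the 8-point treatment effect is baked in purely for illustration):

```python
import random

random.seed(1)

# (1) Random assignment into experimental group A and control group B.
participants = [f"p{i}" for i in range(20)]
random.shuffle(participants)
group_a, group_b = participants[:10], participants[10:]

# (2) Pretest both groups (O1); (3) deliver the intervention (Xe) to
# group A only; (4) posttest both groups (O2). Scores are simulated,
# with a treatment effect of 8 points added for group A.
pretest = {p: random.gauss(40, 5) for p in participants}
posttest = {p: pretest[p] + (8 if p in group_a else 0) + random.gauss(0, 2)
            for p in participants}

mean_gain_a = sum(posttest[p] - pretest[p] for p in group_a) / len(group_a)
mean_gain_b = sum(posttest[p] - pretest[p] for p in group_b) / len(group_b)
print(f"estimated treatment effect: {mean_gain_a - mean_gain_b:.1f}")
```

Comparing the two groups' mean pretest-to-posttest gains recovers something close to the built-in effect, which is exactly the logic the design notation encodes.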

Posttest only control group design

Posttest only control group design involves only giving participants a posttest, just like it sounds. But why would you use this design instead of using a pretest posttest design? One reason could be to avoid potential testing effects that can happen when research participants take a pretest.

In research, the testing effect threatens internal validity when the pretest changes the way the participants respond on the posttest or subsequent assessments (Flannelly, Flannelly, & Jankowski, 2018). [1] A common example occurs when testing interventions for cognitive impairment in older adults. By taking a cognitive assessment during the pretest, participants get exposed to the items on the assessment and get to “practice” taking it (see for example, Cooley et al., 2015). [2] They may perform better the second time they take it because they have learned how to take the test, not because there have been changes in cognition. This specific type of testing effect is called the practice effect . [3]

The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome. Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the posttest, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is. To mitigate the influence of testing effects, posttest only control group designs do not administer a pretest to participants. Figure 14.4 depicts this.


A drawback to the posttest only control group design is that without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. Because there is no pretest with which to check that the groups were equivalent before the intervention, the posttest only control group design relies on random assignment to create groups that are equivalent at baseline. Researchers must balance this consideration with the benefits of this type of design.

Solomon four group design

One way we can possibly measure how much the testing effect threatens internal validity is with the Solomon four group design. Basically, as part of this experiment, there are two experimental groups and two control groups. The first pair of experimental/control groups receives both a pretest and a posttest. The other pair receives only a posttest (Figure 14.5). In addition to addressing testing effects, this design also addresses the problems of establishing time order and equivalent groups in posttest only control group designs.


For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our posttest measures, and groups C and D would take only our posttest measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.
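The group comparisons just described can be sketched with hypothetical posttest means. Comparing A with B and C with D gives two estimates of the treatment effect, and comparing the pretested groups with their non-pretested counterparts gives a rough estimate of the testing effect (all numbers invented):

```python
# Hypothetical posttest means for the four groups:
# A = pretested + treatment, B = pretested control,
# C = treatment only (no pretest), D = control only (no pretest).
post = {"A": 72.0, "B": 61.0, "C": 70.0, "D": 58.0}

# Two estimates of the treatment effect, with and without pretesting.
effect_pretested = post["A"] - post["B"]      # 11.0
effect_not_pretested = post["C"] - post["D"]  # 12.0

# Rough testing-effect estimate: the average shift in posttest scores
# attributable to having taken the pretest.
testing_effect = ((post["A"] - post["C"]) + (post["B"] - post["D"])) / 2
print(testing_effect)  # 2.5
```

If the two treatment-effect estimates diverge sharply, or the testing-effect term is large, we have evidence that the pretest itself changed how participants responded.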

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.

Key Takeaways

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest posttest research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Posttest only research design involves only one point of measurement—after the intervention or treatment. It is a useful design to minimize the effect of testing effects on our results.
  • Solomon four group research design combines both of the above designs, using two pairs of control and experimental groups. One pair receives both a pretest and a posttest, while the other pair receives only a posttest. This can help uncover the influence of testing effects.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a researcher?
  • What hypothesis(es) would you test using this true experiment?

TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS):

Imagine you are interested in studying child welfare practice. You are interested in learning more about community-based programs aimed to prevent child maltreatment and to prevent out-of-home placement for children.

  • Think about a true experiment you might conduct for this research project. Which design would be best for this research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated) for you to carry out your true experimental design in the real world as a researcher?
  • Flannelly, K. J., Flannelly, L. T., & Jankowski, K. R. B. (2018). Threats to the internal validity of experimental and quasi-experimental research in healthcare. Journal of Health Care Chaplaincy, 24 (3), 107-130. https://doi.org/10.1080/08854726.2017.1421019 ↵
  • Cooley, S. A., Heaps, J. M., Bolzenius, J. D., Salminen, L. E., Baker, L. M., Scott, S. E., & Paul, R. H. (2015). Longitudinal change in performance on the Montreal Cognitive Assessment in older adults. The Clinical Neuropsychologist, 29(6), 824-835. https://doi.org/10.1080/13854046.2015.1087596 ↵
  • Duff, K., Beglinger, L. J., Schultz, S. K., Moser, D. J., McCaffrey, R. J., Haase, R. F., Westervelt, H. J., Langbehn, D. R., Paulsen, J. S., & Huntington's Study Group (2007). Practice effects in the prediction of long-term cognitive outcome in three patient samples: A novel prognostic index. Archives of Clinical Neuropsychology, 22 (1), 15–24. https://doi.org/10.1016/j.acn.2006.08.013 ↵

An experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed

Ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation such as experimental or quasi-experimental designs.

The idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief

A demonstration that a change occurred after an intervention. An important criterion for establishing causality.

An experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment

The measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself

Improvements in cognitive assessments due to exposure to the instrument

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.


Enago Academy

Experimental Research Design — 6 mistakes you should never make!


From their school days, students perform scientific experiments whose results illustrate and verify the laws and theorems of science. These experiments are laid on a strong foundation of experimental research designs.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach, using two sets of variables. The first set of variables is held constant and serves as the baseline against which changes in the second set are measured. The best example of experimental research methods is quantitative research .

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When the behavior linking the cause and the effect is invariable, i.e., never-changing.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which the research study is built. Moreover, an effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. Therefore, it is essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, researchers also give themselves time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, they can also avoid inconclusive results. If any part of the research design is flawed, it will be reflected in the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

Researchers use a pre-experimental design when one group, or several groups, are observed after the factors of cause and effect have been applied. The pre-experimental design helps researchers determine whether further investigation of the groups under observation is warranted.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to support or reject a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. In a true experiment, a researcher must satisfy these three conditions —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random assignment of participants to the groups

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The prefix “quasi” means “resembling.” A quasi-experimental design is similar to a true experimental design; the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants are not randomly assigned to groups. This type of research design is used in field settings where random assignment is irrelevant or not feasible.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The effectiveness of experimental research is not tied to a particular subject area; it can be applied across disciplines.
  • The results are specific.
  • After the results are analyzed, findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis is logically testable. If your research design lacks basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could compromise the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear, and to achieve that you must set the framework for developing research questions that address the core problems.

5. Research Limitations

Every study has some type of limitations . You should anticipate and incorporate those limitations into your conclusion, as well as the basic research design. Include a statement in your manuscript about any perceived limitations, and how you considered them while designing your experiment and drawing the conclusion.

6. Ethical Implications

Ethics is one of the most important yet least discussed aspects of research design. Your research design must include ways to minimize any risk for your participants while still addressing the research problem or question at hand. If you cannot meet ethical norms while conducting your study, its objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
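The randomization step in this example can be sketched in a few lines of Python. Everything below (sample IDs, outcome values) is invented purely for illustration:

```python
import random
import statistics

# Illustrative sketch: randomly assign 20 plant samples to two conditions,
# then compare mean outcomes. The outcome values are fabricated for the demo.
random.seed(42)                    # fixed seed so the split is reproducible

samples = list(range(20))          # 20 plant sample IDs
random.shuffle(samples)            # randomize the order before splitting
sunlight_group = samples[:10]      # half photosynthesize in sunlight
dark_group = samples[10:]          # half are kept in a dark box

# After the experiment, record an outcome (say, a biochemical measurement)
# for each sample; here sunlit plants are given a higher fabricated value.
outcomes = {s: 5.0 + (1.5 if s in sunlight_group else 0.0) for s in samples}
mean_sun = statistics.mean(outcomes[s] for s in sunlight_group)
mean_dark = statistics.mean(outcomes[s] for s in dark_group)
effect = mean_sun - mean_dark      # difference attributable to sunlight
```

Because assignment is random, a systematic difference between the group means can be attributed to the sunlight manipulation rather than to pre-existing differences between the samples.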

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. But it is not suitable for every study: it demands substantial resources, time, and money, and is difficult to conduct without a foundation of prior research. Nevertheless, it is widely used in research institutes and commercial industries because it yields some of the most conclusive results available to the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also supports measuring the cause-effect relationship in the particular group of interest.

Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.

There are three types of experimental research designs: pre-experimental research design, true experimental research design, and quasi-experimental research design.

The differences between a true experimental and a quasi-experimental design are: 1. In quasi-experimental research, assignment to the control group is non-random, whereas in true experimental design it is random. 2. True experimental research always has a control group, which may not always be present in quasi-experimental research.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.



Department of Health & Human Services

Module 2: Research Design - Section 2


Section 2: Experimental Studies

Unlike a descriptive study, an experiment is a study in which a treatment, procedure, or program is intentionally introduced and a result or outcome is observed. The American Heritage Dictionary of the English Language defines an experiment as "A test under controlled conditions that is made to demonstrate a known truth, to examine the validity of a hypothesis, or to determine the efficacy of something previously untried."

Manipulation, Control, Random Assignment, Random Selection

Random assignment means that no matter who the participant is, he/she has an equal chance of being placed in any of the groups or treatments in an experiment. This process helps to ensure that the groups or treatments are similar at the beginning of the study, so that there is more confidence that the manipulation (group or treatment) "caused" the outcome. More information may be found in the section on random assignment.

Definition : An experiment is a study in which a treatment, procedure, or program is intentionally introduced and a result or outcome is observed.

Case Example for Experimental Study

Experimental Studies — Example 2

A fitness instructor wants to test the effectiveness of a performance-enhancing herbal supplement on students in her exercise class. To create experimental groups that are similar at the beginning of the study, the students are assigned into two groups at random (they can not choose which group they are in). Students in both groups are given a pill to take every day, but they do not know whether the pill is a placebo (sugar pill) or the herbal supplement. The instructor gives Group A the herbal supplement and Group B receives the placebo (sugar pill). The students' fitness level is compared before and after six weeks of consuming the supplement or the sugar pill. No differences in performance ability were found between the two groups suggesting that the herbal supplement was not effective.
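A minimal sketch of how the instructor's comparison might look in Python, using invented fitness scores (a real study would apply a proper statistical test):

```python
import statistics

# Hypothetical sketch with fabricated scores: fitness measured before and after
# six weeks in the supplement group (A) and the placebo group (B).
group_a_pre  = [52, 48, 55, 50, 47]   # Group A: herbal supplement
group_a_post = [54, 50, 56, 52, 49]
group_b_pre  = [51, 49, 53, 50, 48]   # Group B: placebo
group_b_post = [53, 51, 54, 52, 50]

# Gain score for each student: post-test minus pretest.
gain_a = [post - pre for pre, post in zip(group_a_pre, group_a_post)]
gain_b = [post - pre for pre, post in zip(group_b_pre, group_b_post)]

mean_gain_a = statistics.mean(gain_a)
mean_gain_b = statistics.mean(gain_b)
difference = mean_gain_a - mean_gain_b   # near zero suggests no supplement effect
```

With these fabricated numbers the mean gains are identical, mirroring the study's conclusion that the supplement added nothing beyond the placebo.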



Social Sci LibreTexts

13.2: True experimental design


  • Matthew DeCarlo, Cory Cummings, & Kate Agnelli
  • Open Social Work Education


Learning Objectives

Learners will be able to…

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

True experimental design , often considered to be the “gold standard” in research designs, is thought of as one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity and its ability to establish causality through treatment manipulation, while controlling for the effects of extraneous variables. Sometimes the treatment level is no treatment, while other times it is simply a different treatment than the one we are trying to evaluate. For example, we might have a control group made up of people who will not receive any treatment for a particular condition. Or, a control group could consist of people who consent to treatment with DBT when we are testing the effectiveness of CBT.

As we discussed in the previous section, a true experiment has a  control group with participants randomly assigned , and an experimental group . This is the most basic element of a true experiment. The next decision a researcher must make is when they need to gather data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle measurement another way? Below, we’ll discuss the three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often pretty difficult since, as mentioned earlier, true experiments can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, can be hard (and sometimes unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The folks in the experimental group will receive CBT, while the folks in the control group will receive more unstructured, basic talk therapy. These designs, as we talked about above, are best suited for explanatory research questions.

Before we get started, take a look at the table below. When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the experiment. Table 13.1 starts us off by laying out what each of the abbreviations mean.

R: Randomly assigned group (control/comparison or experimental)
O: Observation/measurement taken of the dependent variable
X: Intervention or treatment
Xe: Experimental or new intervention
Xi: Typical intervention/treatment as usual
A, B, C, etc.: Denotes different groups (control/comparison and experimental)

Table 13.1 Experimental research design notations

Pretest and post-test control group design

In  pretest and post-test control group design , participants are given a  pretest of some kind to measure their baseline state before their participation in an intervention. In our social anxiety experiment, we would have participants in both the experimental and control groups complete some measure of social anxiety—most likely an established scale and/or a structured interview—before they start their treatment. As part of the experiment, we would have a defined time period during which the treatment would take place (let’s say 12 weeks, just for illustration). At the end of 12 weeks, we would give both groups the same measure as a  post-test . 


Figure 13.1 Pretest and post-test control group design

In the diagram, RA (random assignment group A) is the experimental group and RB is the control group. O1 denotes the pretest, Xe denotes the experimental intervention, and O2 denotes the post-test. Let’s look at this diagram another way, using the example of CBT for social anxiety that we’ve been talking about.


Figure 13.2 Pretest and post-test control group design testing CBT as an intervention

In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way, with Xi denoting treatment as usual (Figure 13.3).


Figure 13.3 Pretest and post-test control group design with treatment as usual instead of no treatment

Hopefully, these diagrams provide you a visualization of how this type of experiment establishes  time order , a key component of a causal relationship. Did the change occur after the intervention? Assuming there is a change in the scores between the pretest and post-test, we would be able to say that yes, the change did occur after the intervention. Causality can’t exist if the change happened before the intervention—this would mean that something else led to the change, not our intervention.
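The comparison this design supports can be sketched as follows, with invented anxiety scores standing in for the real measures (O1 = pretest, O2 = post-test, matching the diagram notation):

```python
import statistics

# Hedged illustration with fabricated scores: pretest and post-test social
# anxiety measures for randomly assigned groups A (CBT) and B (control).
experimental = {"O1": [30, 32, 28, 31], "O2": [22, 24, 21, 23]}  # received Xe
control      = {"O1": [29, 31, 30, 32], "O2": [28, 30, 29, 31]}  # no treatment

def mean_change(group):
    """Average post-test minus pretest score for one group."""
    return statistics.mean(o2 - o1 for o1, o2 in zip(group["O1"], group["O2"]))

change_exp = mean_change(experimental)    # large drop expected if CBT works
change_ctrl = mean_change(control)        # small drift expected without it
treatment_effect = change_exp - change_ctrl
```

Because both groups are measured before and after, time order is established: the change is observed only after the intervention, and the control group's change estimates what would have happened without it.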

Post-test only control group design

Post-test only control group design involves only giving participants a post-test, just like it sounds (Figure 13.4).


Figure 13.4 Post-test only control group design

But why would you use this design instead of using a pretest/post-test design? One reason could be the testing effect that can happen when research participants take a pretest. In research, the  testing effect refers to “measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself” (Engel & Schutt, 2017, p. 444). (When we say “measurement error,” all we mean is the accuracy of the way we measure the dependent variable.) Figure 13.4 is a visualization of this type of experiment. The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome.

Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the post-test, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is.

However, without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing  time order is thus a little more difficult. You must balance this consideration with the benefits of this type of design.
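In a post-test only design, random assignment is what licenses a direct comparison of post-test means, since the groups are assumed equivalent at baseline. A sketch with invented scores:

```python
import statistics

# Illustrative sketch (fabricated scores): post-test only control group design.
# With random assignment, we compare post-test means directly; there are no
# pretest scores to compute gains from.
experimental_post = [22, 24, 21, 23]   # received CBT
control_post      = [28, 30, 29, 31]   # no treatment

difference = statistics.mean(experimental_post) - statistics.mean(control_post)
# A lower mean anxiety score in the experimental group is consistent with an
# effective intervention, but with no baseline we rely entirely on random
# assignment to rule out pre-existing group differences.
```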

Solomon four group design

One way we can possibly measure how much the testing effect might change the results of the experiment is with the Solomon four group design. Basically, as part of this experiment, you have two control groups and two experimental groups. The first pair of groups receives both a pretest and a post-test. The other pair of groups receives only a post-test (Figure 13.5). This design helps address the problem of establishing time order in post-test only control group designs.


Figure 13.5 Solomon four-group design

For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our post-test measures, and groups C and D would take only our post-test measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.
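The four-way comparison can be sketched like this, with invented post-test scores; groups A and B were pretested, C and D were not:

```python
import statistics

# Hedged sketch (fabricated post-test scores) of a Solomon four-group analysis.
posttests = {
    "A": [20, 22, 21],  # pretest + CBT
    "B": [28, 29, 30],  # pretest + no treatment
    "C": [24, 25, 23],  # no pretest + CBT
    "D": [31, 32, 30],  # no pretest + no treatment
}
mean = {g: statistics.mean(scores) for g, scores in posttests.items()}

# If merely taking the pretest shifts post-test scores, pretested groups will
# differ from their un-pretested counterparts under the same treatment.
testing_effect_treated = mean["A"] - mean["C"]
testing_effect_control = mean["B"] - mean["D"]
```

Nonzero differences on both lines would suggest a testing effect, which we could then factor into our causal conclusions.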

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.

Key Takeaways

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest/post-test research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Post-test only research design involves only one point of measurement—post-intervention. It is a useful design for minimizing the influence of testing effects on our results.
  • Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a post-test, while the other receives only a post-test. This can help uncover the influence of testing effects.
Exercises

  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a student researcher?
  • What hypothesis(es) would you test using this true experiment?


Neag School of Education

Educational Research Basics by Del Siegle

Experimental Research

The major feature that distinguishes experimental research from other types of research is that the researcher manipulates the independent variable.  There are a number of experimental group designs in experimental research. Some of these qualify as experimental research, others do not.

  • In true experimental research , the researcher not only manipulates the independent variable, but also randomly assigns individuals to the various treatment categories (i.e., control and treatment).
  • In quasi experimental research , the researcher does not randomly assign subjects to treatment and control groups. In other words, the treatment is not distributed among participants randomly. In some cases, a researcher may randomly assign one whole group to treatment and one whole group to control. In this case, quasi-experimental research involves using intact groups in an experiment, rather than assigning individuals at random to research conditions. (Some researchers define this latter situation differently; for our course, we will allow this definition.)
  • In causal comparative ( ex post facto ) research, the groups are already formed. It does not meet the standards of an experiment because the independent variable is not manipulated.

The statistics by themselves have no meaning; they only take on meaning within the design of your study. If we just examine stats, bread can be deadly. The term validity is used three ways in research:

  • In the sampling unit, we learn about external validity (generalizability).
  • In the survey unit, we learn about instrument validity .
  • In this unit, we learn about internal validity and external validity . Internal validity means that the differences we found between groups on the dependent variable were directly related to what the researcher did to the independent variable, and not due to some unintended (confounding) variable. Simply stated, the question addressed by internal validity is “Was the study done well?” Once the researcher is satisfied that the study was done well and that the independent variable caused the dependent variable (internal validity), he or she examines external validity: under what (ecological) conditions and with whom (population) can these results be replicated? If a study is not internally valid, considering external validity is a moot point; if the independent variable did not cause the dependent variable, there is no point in generalizing the results to other situations. Interestingly, as one tightens a study to control for threats to internal validity, one decreases its generalizability (to whom and under what conditions one can generalize the results).

There are several common threats to internal validity in experimental research. They are described in our text. I have reviewed each below (this material is also included in the PowerPoint Presentation on Experimental Research for this unit):

  • Subject Characteristics (Selection Bias/Differential Selection) — The groups may have been different from the start. If you were testing instructional strategies to improve reading and one group enjoyed reading more than the other group, they may improve more in their reading because they enjoy it, rather than the instructional strategy you used.
  • Loss of Subjects ( Mortality ) — All of the high- or low-scoring subjects may have dropped out or were missing from one of the groups. If we collected posttest data on a day when the honor society was on a field trip at the treatment school, the mean for the treatment group would probably be much lower than it really should have been.
  • Location — Perhaps one group was at a disadvantage because of its location. The city may have been demolishing a building next to one of the schools in our study, and the constant distractions interfere with our treatment.
  • Instrumentation ( Instrument Decay ) — The testing instruments may not be scored similarly. Perhaps the person grading the posttest is fatigued and pays less attention to the last set of papers reviewed. It may be that those papers are from one of our groups and will receive different scores than the earlier group’s papers.
  • Data Collector Characteristics — The subjects of one group may react differently to the data collector than the other group. A male interviewing males and females about their attitudes toward a type of math instruction may not receive the same responses from females as a female interviewing females would.
  • Data Collector Bias — The person collecting data may favor one group, or some characteristic some subjects possess, over another. A principal who favors strict classroom management may rate students’ attention under different teaching conditions with a bias toward one of the teaching conditions.
  • Testing — The act of taking a pretest or posttest may influence the results of the experiment. Suppose we were conducting a unit to increase student sensitivity to prejudice. As a pretest we have the control and treatment groups watch Schindler’s List and write a reaction essay. The pretest may have actually increased both groups’ sensitivity, and we find that our treatment group didn’t score any higher on a posttest given later than the control group did. If we hadn’t given the pretest, we might have seen differences in the groups at the end of the study.
  • History — Something may happen at one site during our study that influences the results. Perhaps a classmate dies in a car accident at the control site for a study teaching children bike safety. The control group may actually demonstrate more concern about bike safety than the treatment group.
  • Maturation — There may be natural changes in the subjects that can account for the changes found in a study. A critical thinking unit may appear more effective if it is taught during a time when children are developing abstract reasoning.
  • Hawthorne Effect — The subjects may respond differently just because they are being studied. The name comes from a classic study in which researchers were studying the effect of lighting on worker productivity. As the intensity of the factory lights increased, so did worker productivity. One researcher suggested that they reverse the treatment and lower the lights. The productivity of the workers continued to increase. It appears that being observed by the researchers was increasing productivity, not the intensity of the lights.
  • John Henry Effect — One group may view itself as in competition with the other group and may work harder than it would under normal circumstances. This generally is applied to the control group “taking on” the treatment group. The term refers to the classic story of John Henry laying railroad track.
  • Resentful Demoralization of the Control Group — The control group may become discouraged because it is not receiving the special attention that is given to the treatment group. They may perform lower than usual because of this.
  • Regression ( Statistical Regression) — A class that scores particularly low can be expected to score slightly higher just by chance. Likewise, a class that scores particularly high, will have a tendency to score slightly lower by chance. The change in these scores may have nothing to do with the treatment.
  • Implementation — The treatment may not be implemented as intended. A study in which teachers are asked to use student modeling techniques may not show positive results, not because modeling techniques don't work, but because the teachers didn't implement them, or didn't implement them as they were designed.
  • Compensatory Equalization of Treatment — Someone may feel sorry for the control group because it is not receiving much attention and give it special treatment. For example, a researcher could be studying the effect of laptop computers on students' attitudes toward math. The teacher feels sorry for the class that doesn't have computers and sponsors a popcorn party during math class. The control group begins to develop a more positive attitude about mathematics.
  • Experimental Treatment Diffusion — Sometimes the control group actually implements the treatment. If two different techniques are being tested in two different third-grade classes in the same building, the teachers may share what they are doing. Unconsciously, the control teacher may use some of the techniques he or she learned from the treatment teacher.
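The regression effect described in the list above is easy to demonstrate with a small simulation. The sketch below (a Python illustration with invented numbers, not part of the original handout) gives every student identical true ability, so all score differences are pure luck; the lowest-scoring students on a first test still "improve" on a retest with no treatment at all.

```python
import random
import statistics

random.seed(42)

def observed_score(true_ability):
    # A test score is true ability plus random luck (measurement noise).
    return true_ability + random.gauss(0, 10)

# Every student has the same true ability, so all score differences are chance.
true_ability = 75.0
first_test = [observed_score(true_ability) for _ in range(1000)]
second_test = [observed_score(true_ability) for _ in range(1000)]

# Select the students who scored in the bottom 10% on the first test...
cutoff = sorted(first_test)[99]
low_scorers = [i for i, score in enumerate(first_test) if score <= cutoff]

first_mean = statistics.mean(first_test[i] for i in low_scorers)
second_mean = statistics.mean(second_test[i] for i in low_scorers)

# ...and watch their average rise on the retest with no treatment at all.
print(f"low scorers, test 1 average: {first_mean:.1f}")
print(f"low scorers, test 2 average: {second_mean:.1f}")
```

A group selected because it scored unusually low will, on average, score closer to the mean the next time, whether or not any intervention worked.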

When planning a study, it is important to consider these threats to internal validity as we finalize the study design. After we complete our study, we should reconsider each of the threats to internal validity as we review our data and draw conclusions.

Del Siegle, Ph.D. Neag School of Education – University of Connecticut [email protected] www.delsiegle.com

Experimental Research Design | Definition, Components & Examples

Heather is a science educator with a bachelor's degree in biology and a master's degree in environmental science and policy. She teaches college science and is currently a doctoral student in education. She is also a musician and writer.

Lisa has taught at all levels from kindergarten to college and has a master's degree in human relations.

What are the types of experimental research design?

There are three general types of experimental research design. Pre-experimental research usually occurs to determine whether a true experiment is warranted. Quasi-experimental research is very similar to true experimental research but lacks the elements of random sampling and random assignment. True experimental research is the most robust type of experimental study due to its careful control and manipulation of variables, random sampling, and random assignment.

What is in an experimental design?

An experimental research design is typically focused on the relationship between two variables: the independent variable and the dependent variable. The researcher uses random sampling and random assignment to create a control group and an experimental group. The results of the experiment are compared to determine whether there is a significant difference between the group that receives the treatment and the control group.

Table of Contents

  • What is experimental research design?
  • Types of experimental research design
  • Key components of experimental study design
  • Steps for designing an experimental research study
  • Examples of experimental research design
  • Advantages and disadvantages of experimental study design
  • Lesson summary

Although experimental research may bring to mind images of laboratory scientists with test tubes and beakers, experimental studies can be used in many fields, including the physical sciences, life sciences, and social sciences. Experimental research design is defined as a research method used to investigate the interaction between independent and dependent variables, which can be used to determine a cause-and-effect relationship.

Experimental research is commonly used within the framework of the scientific method.

Experimental Research Design vs. Other Types of Studies

There are many types of research designs, and not all of them are experimental. Choosing the most appropriate research approach depends on many factors, including the nature of the investigation, the goals of the study, and access to subjects or materials. Non-experimental research studies can be useful for describing or exploring phenomena. Commonly, non-experimental studies start with a question. Examples of possible non-experimental research studies follow.

  • Exploratory: Exploratory studies aim to research a new topic and usually answer questions beginning with "What." To know whether a research question is exploratory, a researcher needs to find out what has already been learned about that topic. If little research has been done, then an exploratory study might be appropriate to generate new information about the topic.
  • Descriptive: Descriptive studies typically answer questions about "How." A descriptive study could be done to describe how people feel after a particular experience. Data might be gathered through interviews or focus groups to learn about people's experiences.
  • Explanatory: Explanatory studies are useful for answering questions starting with "Why." After a relationship is known, an explanatory study could be done to provide an explanation for why that relationship exists.
  • Correlational: A correlational study could be done to learn whether there is a relationship between two variables. However, researchers need to be careful to describe relationships correctly. The presence of a relationship does not mean there is a direct cause-and-effect relationship. Causal relationships should not be inferred from correlation.

What is experimental research design? There are three main types of experimental research design, which can be carried out using methods such as observational studies, simulations, and surveys.

  • Pre-experimental: A pre-experimental study is not truly experimental, but it is included in this category because it may precede an experimental study. Researchers may conduct pre-experimental investigations to determine whether a full experimental study is necessary. For example, researchers may conduct a survey to gather data that shows an interesting correlation between variables. They may then conduct an experimental study to focus on that specific relationship.
  • Quasi-experimental: A quasi-experimental study is similar to an experimental study, but lacks random selection and random assignment of participants/subjects. An example of a quasi-experimental study would be comparing the reading skills of two classes. Perhaps one group uses a printed book and the other uses an electronic version of the same book. A researcher could compare the skills of the groups, but this is not a true experiment.
  • True experimental: A true experimental study is considered to provide the most robust results, and it has the most rigorous requirements. The requirements for a true experiment will be presented next.

What is an experimental research study? It includes an experiment or test, data collection, and analysis of the results. Experimental study design requires careful planning and control to ensure the results are robust and meaningful. There are several key components of experimental studies, including:

  • Hypothesis: A hypothesis is similar to a research question, but it is phrased as a statement, and it is more like an educated prediction than a whimsical what-if scenario. A hypothesis must be testable, so it cannot be a statement of opinion.
  • Independent variable: The independent variable is what the researcher will change or manipulate. The independent variable may be thought of as the cause in a cause-and-effect relationship.
  • Dependent variable: The dependent variable is the variable that might change as a result of manipulation of the independent variable. The dependent variable is where the effect may be observed. Its outcome depends on the manipulation of the independent variable.

Random Sampling

Besides the hypothesis and variables, sampling and groups are essential elements of experimental research design. Since an experiment involves testing something, it is necessary to have two types of groups to study. One is the control group, which represents the status quo. The control group provides a baseline so the researcher can tell whether the treatment or intervention causes anything different to happen. Without a control group, researchers would not be able to make a comparison between the test results and the normal condition. The other group is the experimental or treatment group. This is the group that receives the treatment or intervention being investigated. For example, in a study to determine whether a new drug is effective, the experimental group would receive the drug, and the control group would receive a placebo.

Sampling is another critical component of an experimental research study. The sample is the group of subjects or people involved in a research study. Random sampling means that every individual in the study population has an equal chance of being selected for the study. Random assignment means that every participant has an equal chance of being assigned to either the control or experimental group. For example, if a researcher is interested in learning about how a new drink affects running speed, they would need to test the drink to generate data. The researcher needs a random sample of participants to make the test as fair and unbiased as possible. Participants should be randomly selected from the population and then randomly assigned to either the control group or experimental group.
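The two steps can be sketched in a few lines of Python (the population size and group sizes here are invented for illustration):

```python
import random

random.seed(0)

# Hypothetical pool of 200 volunteers, identified by ID number.
population = list(range(200))

# Random sampling: every volunteer has an equal chance of being selected.
sample = random.sample(population, 40)

# Random assignment: shuffle the sample, then split it in half, so every
# participant has an equal chance of landing in either group.
random.shuffle(sample)
control_group = sample[:20]       # would receive the placebo drink
experimental_group = sample[20:]  # would receive the new sports drink

print(f"control: {len(control_group)}, experimental: {len(experimental_group)}")
```

Note that sampling and assignment are separate randomizations: the first makes the sample representative of the population, the second makes the two groups comparable to each other.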

What would happen without random selection and random assignment? If the researcher chose members of the running club to be in the experimental group, and non-runners to be in the control group, that would not be random sampling or random assignment. That could result in biased results since the running club members might be faster than the non-runners even without the new drink. The results of the test would not necessarily reflect the effects of the drink on the running speed of the participants.

There are several standard steps to use when designing an experimental research study.

  • First, consider the variables and identify which is the independent variable (cause) and which is the dependent variable (effect).
  • Construct the research hypothesis. A hypothesis is an educated prediction, not a what-if question.
  • Design an experiment to test the relationship described in the hypothesis.
  • Generate a random sample and assign participants randomly to the control and experimental groups.
  • Decide how to measure the dependent variable so you can analyze the outcome.

Some examples of true experimental studies follow.

  • Test whether the presence of light affects the amount of time it takes for seeds to sprout. In this study, the independent variable is the presence of light, and the dependent variable is the time to germination. A possible hypothesis might be, "Seeds exposed to light will germinate more quickly than seeds kept in the dark." Track the time it takes for the seeds to sprout and compare the data to generate results. Determine whether the results support or reject the hypothesis.
  • Test whether salt affects the time it takes for an ice cube to melt. Keeping everything else identical (ice cube, starting temperature of water, amount of water, type of glass), use salt water in the experimental case and plain water in the control case. The independent variable would be the type of water (plain or salted) and the dependent variable would be the time it takes to melt the ice cube. A hypothesis might be, "Ice placed in saltwater will melt more quickly than ice placed in plain water." Time the melting process and record the time it takes the ice cubes to melt. Compare the data to generate the results. Did the results support or reject the hypothesis?

Experimental research design takes careful planning. The hypothesis needs to be testable and narrow enough to focus on the key independent and dependent variables. The type of data generated should be measurable. After the experiment, the data from the experimental group should be compared to the data from the control group to determine whether there is a significant difference. To come to a reliable conclusion, researchers often use statistical tests to determine whether the results are significant.
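As an illustration of that last step, one common significance test is Welch's t statistic, computed here by hand in Python on invented germination times for the seed-sprouting example (the data are hypothetical, and a full analysis would also compute degrees of freedom and a p-value):

```python
import statistics

# Hypothetical germination times in days (smaller means faster sprouting).
light = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]   # experimental group
dark  = [5.0, 5.4, 4.8, 5.6, 5.1, 4.9, 5.3, 5.5]   # control group

def welch_t(a, b):
    # Welch's t statistic for two independent samples with unequal variances.
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / (var_a / len(a) + var_b / len(b)) ** 0.5

t = welch_t(light, dark)
# A |t| far above roughly 2 suggests the difference is unlikely to be
# chance alone; here the light group germinates markedly faster.
print(f"t = {t:.2f}")
```

In practice researchers use a statistics package for this, but the point stands: the raw difference between group means only becomes a conclusion once a significance test weighs it against the variability within the groups.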

Experimental study design is considered a robust and reliable method of conducting research. However, there are times when a true experiment is not appropriate or possible to conduct. Some of the advantages and disadvantages of experimental study design follow.

An experimental research study may demonstrate a cause and effect relationship between variables.

Advantages of Experimental Study Design

  • Experimental research can produce reliable results.
  • Experimental studies tend to be well controlled.
  • Conclusions based on experimental studies are backed by evidence.
  • Experimental research can lead to the identification of cause and effect relationships.
  • An experimental study can provide useful information even if the experimental hypothesis is rejected.

Disadvantages of Experimental Study Design

  • Experimental study design is not appropriate in cases where conducting the experiment would be unethical.
  • Experimental research is not possible when researchers cannot select a random sample.
  • Experimental research studies may be inconvenient or expensive to conduct.

Experimental research design is a rigorous approach to studying various subjects, including life science, physical sciences, and social sciences. The key factors required for a true experimental study are clearly defined independent and dependent variables, a focused and testable hypothesis, random sampling, and random assignment to the control or experimental group. Random sampling means that every individual in the population being studied has an equal chance of being included in the study. The independent variable is what the researcher manipulates or changes, and the dependent variable is the element that is measured to determine the outcome. In an experimental research study, the control group represents the status quo, so the researcher does not manipulate the treatment of the control group. The experimental group receives the treatment or intervention, and the results are compared to those of the control group. The experiment's hypothesis is tested by comparing the results of the experimental and control groups. Performing an experimental study does not always result in a supported hypothesis, but can produce robust results and explain cause-and-effect relationships between variables.

True experiments fit well with the scientific experimental process, provide valid information, and can demonstrate cause/effect relationships. True experiments should not be used when a random sample cannot be obtained, or when the testing would be unethical. In those cases, non-experimental studies are more appropriate.

Video Transcript

Example and Definition

Let's imagine that you recently went out dancing with your friends at two different clubs. The first dance club had a disco ball hanging from the ceiling, and the second club did not. You started wondering if dancing under a disco ball made people dance better. You want to prove that a true cause and effect relationship exists between the disco ball and better dance moves. How would you go about testing this idea?

Since you want to demonstrate a cause and effect relationship between the disco ball and better dance moves, you would need to perform a true experiment. In a true experiment, effort is made to control all influences other than the ones that are being studied.

Experimental Research

True experiments are used in human growth and development research whenever they are feasible. This is because they are the only way to prove the existence of a cause and effect relationship between two variables. A true experiment will include all parts of the experimental process. To understand what this means, let's walk through the process of developing your experiment to test whether a disco ball makes a person dance better.

First, you will develop a hypothesis. A hypothesis is a testable statement that is logically derived from theory or observation. Based on your observations at the dance clubs last weekend, your hypothesis is that a person will dance better when a disco ball is present.

Now that we have your hypothesis, we need to talk about variables. A variable is an aspect of the research environment that can change. Controlling variables allows the researcher to determine a cause and effect relationship between what is being studied. In order to test for one variable, the researcher needs to have full control over all aspects of the environment. Then one aspect, or variable, is manipulated.

The variable that is manipulated in an experiment is called the independent variable. In your experiment, the independent variable is the disco ball. Another aspect, or variable, is measured. The researcher does not control this variable. The variable that is measured in an experiment is called the dependent variable. In your experiment, the dependent variable is dance ability. So, you have your hypothesis and variables, but how do you test the independent variable? Let's keep setting up the experiment and find out.

If you want to test your hypothesis, it is obvious that you're going to need some dancers. You would want to have a large number of test subjects with varied dancing skills to test. Once you have your test subjects, you need to split them into two separate groups. There must be at least two groups in any valid experiment: the experimental group and the control group.

An experimental group is the group that receives the variable being tested in an experiment. In your experiment, it is the group that will dance under the disco ball. The control group is the group in an experiment that does not receive the variable you are testing. In your experiment, this would be the group that does not have a disco ball to dance under.

Each group would be selected as a random sample. A random sample occurs when every individual in the group being studied has an equal chance of being selected. You now need to put judges in place to score the dancers in each of your two groups. The judges will rate each dancer's ability on a scale from 1-10. These scores will be used as results from each of the groups.

How are you going to measure these results to test your hypothesis? In order to measure the results, you must have some way of making a comparison. Comparing the results from the experimental group with the results from the control group is one way of doing this.

On a scale of 1-10, dancers from the experimental group (with the disco ball) received an average score of nine. Dancers from the control group (without the disco ball) received an average score of four. This data supports your hypothesis that a disco ball makes people dance better! You have completed a true experiment and found a possible cause and effect relationship: The disco ball causes the dancers to perform better.
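The comparison in this example is simply a difference of group means. With hypothetical judges' scores chosen to match the averages above:

```python
import statistics

# Hypothetical judges' scores (scale of 1-10) matching the lesson's averages.
disco_scores = [9, 10, 8, 9, 9, 9, 10, 8]    # experimental group (disco ball)
control_scores = [4, 5, 3, 4, 4, 5, 3, 4]    # control group (no disco ball)

disco_mean = statistics.mean(disco_scores)
control_mean = statistics.mean(control_scores)

print(f"experimental mean: {disco_mean}")    # averages to 9
print(f"control mean: {control_mean}")       # averages to 4
print(f"difference: {disco_mean - control_mean}")
```

A real analysis would go on to ask whether a difference this large could plausibly arise by chance, but the core of the experimental comparison is exactly this: treatment-group measurements against control-group measurements.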

When a True Experiment Is Not Used

Now you know that a true experiment allows a researcher to have full control, yields the most valid information and demonstrates cause and effect relationships. So why are other types of research designs sometimes used? In human growth and development research, other methods of research are a common and often necessary replacement for a true experiment. This is usually because it is not possible to utilize a random sample. This may be due to ethical concerns or lack of practicality.

Imagine trying to create a random sample when you are researching the effects of child abuse. If a parent were randomly selected from a group of study participants for the experimental group, they would have to abuse their children throughout the study. This would be unethical and unacceptable!

When a true experiment is not feasible, there are many different ways we can go about finding answers. We can examine existing information to look for data that can be manipulated to show possible relationships, we can observe what we see in the world around us or examine specific groups in other ways. The results from other research designs may not prove cause and effect, but the results can be used to infer this relationship.

In a true experiment, every effort is made to control all influences other than the ones that are being studied. This type of research will provide the most reliable information but is sometimes unethical or impractical for human growth and development research. A true experiment will include all parts of the experimental process.

First, you will develop a hypothesis. A hypothesis is a testable statement that is logically derived from a theory or observation. Then you will identify the variables. A variable is an aspect of the research environment that can change. Controlling a variable is what allows a researcher to demonstrate a cause and effect relationship. There are independent variables and dependent variables in an experiment.

Next, test subjects are chosen and split into two separate groups through random sampling. These two groups are the experimental group and the control group. A random sample occurs when every individual in the group being studied has an equal chance of being selected. After measuring the results from each of your groups, you will compare the results from the experimental group with the results of the control group. This comparison is what will either support or reject your hypothesis.

Even though other research designs are often used and can provide useful results, the results they provide must be inferred to some degree. A true experiment is the only research method that can prove the existence of a cause and effect relationship between two variables.

Learning Outcomes

When this lesson is done, you should be able to:

  • Describe what a true experiment is and how it is set up
  • Define hypothesis and variables
  • Differentiate between independent and dependent variables, as well as between experimental groups and control groups
  • Understand what is meant by a random sample
  • List the benefits of true experiments
  • Explain when it would not be possible to set up a true experiment


Experimental Research

  • First Online: 25 February 2021


  • C. George Thomas

Experiments are part of the scientific method and help to decide the fate of two or more competing hypotheses or explanations of a phenomenon. The term 'experiment' derives from the Latin experiri, which means 'to try'. The knowledge that accrues from experiments differs from other types of knowledge in that it is always shaped by observation or experience. In other words, experiments generate empirical knowledge. In fact, the emphasis on experimentation in the sixteenth and seventeenth centuries for establishing causal relationships for various phenomena in nature heralded the resurgence of modern science from its roots in ancient philosophy, spearheaded by great Greek philosophers such as Aristotle.

The strongest arguments prove nothing so long as the conclusions are not verified by experience. Experimental science is the queen of sciences and the goal of all speculation. (Roger Bacon, 1214–1294)



Author information

C. George Thomas, Kerala Agricultural University, Thrissur, Kerala, India

About this chapter

Thomas, C.G. (2021). Experimental Research. In: Research Methodology and Scientific Writing. Springer, Cham. https://doi.org/10.1007/978-3-030-64865-7_5


A Complete Guide to Experimental Research

Published by Carmen Troy on August 14th, 2021. Revised on August 25, 2023

A Quick Guide to Experimental Research

Experimental research refers to experiments conducted in a laboratory, or to observation under controlled conditions. Researchers try to find the cause-and-effect relationship between two or more variables.

The subjects/participants in the experiment are selected and observed. They receive treatments such as changes in room temperature, diet, or atmosphere, or are given a new drug, so that the resulting changes can be observed. Experiments range from personal, informal natural comparisons to tightly controlled studies. An experiment includes three types of variables:

  • Independent variable
  • Dependent variable
  • Controlled variable

Before conducting experimental research, you need to have a clear understanding of experimental design. A true experimental design includes identifying a problem, formulating a hypothesis, determining the number of variables, selecting and assigning the participants, choosing among types of research designs, meeting ethical standards, etc.

There are many  types of research  methods that can be classified based on:

  • The nature of the problem to be studied
  • Number of participants (individual or groups)
  • Number of groups involved (Single group or multiple groups)
  • Types of data collection methods (Qualitative/Quantitative/Mixed methods)
  • Number of variables (single independent variable/ factorial two independent variables)
  • The experimental design

Types of Experimental Research

Types of Experimental Research

Laboratory Experiment  

Also called a controlled experiment, this type of research is conducted in a laboratory, where the researcher can manipulate and control the variables of the experiment.

Example: Milgram’s experiment on obedience.

Pros: The researcher has control over the variables; it is easy to establish a cause-and-effect relationship; it is inexpensive, convenient, and easy to replicate.
Cons: The artificial environment may affect participants' behaviour; results can be inaccurate; the short duration of a lab experiment may not be enough to obtain the desired results.

Field Experiment

Field experiments are conducted in the participants' own, natural environment, with a few artificial changes introduced. Researchers do not have full control over the variables under measurement, and participants know that they are taking part in the experiment.

Pros: Participants are observed in their natural environment and are more likely to behave naturally; useful for studying complex social issues.
Cons: It doesn't allow control over the variables; it may raise ethical issues; it lacks internal validity.

Natural Experiments

The experiment is conducted in the natural environment of the participants. The participants are generally not informed about the experiment being conducted on them.

Examples: estimating the health condition of a population; did an increase in tobacco prices decrease tobacco sales? Did helmet use decrease the number of head injuries among bikers?

Pros: The source of variation is clear; the experiment is carried out in a natural setting; there is no restriction on the number of participants.
Cons: The results obtained may be questionable; external validity is hard to establish; the researcher does not have control over the variables.

Quasi-Experiments

A quasi-experiment is an experiment that takes advantage of natural occurrences. Researchers cannot assign random participants to groups.

Example: comparing the academic performance of two schools.

Pros: Quasi-experiments are widely conducted because they are convenient and practical for large sample sizes; they suit real-world, natural settings better than true experimental designs; the researcher can analyse the effect of independent variables occurring in natural conditions.
Cons: It cannot isolate the influence of the independent variables on the dependent variables; without a control group, it is difficult to establish the relationship between dependent and independent variables.


How to Conduct Experimental Research?

Step 1. Identify and Define the Problem

You need to identify a problem as per your field of study and describe your  research question .

Example: You want to know the effects of social media on the behaviour of youngsters. To do so, you would find out how much time students spend on the internet daily.

Example: You want to find out the adverse effects of junk food on human health. To do so, you would find out how frequent junk food consumption affects an individual's health.

Step 2. Determine the Number of Levels of Variables

You need to determine the number of  variables . The independent variable is the predictor and is manipulated by the researcher, while the dependent variable is the outcome measured in response to it.

Example 1 (social media):
  • Independent variable: the number of hours youngsters spend on social media daily.
  • Dependent variable: the negative impact of social media overuse on youngsters' behaviour.
  • Handling confounds: measure the difference between youngsters' behaviour at minimum and maximum social media usage; you can control and minimise the participants' hours of social media use.

Example 2 (junk food):
  • Independent variable: the overconsumption of junk food.
  • Dependent variable: adverse effects of junk food on health, such as obesity, indigestion, constipation, and high cholesterol.
  • Handling confounds: identify the difference in health between people on a healthy diet and people eating junk food regularly; you can divide the participants into two groups, one with a healthy diet and one with junk food.

In the first example, we predict that increased social media usage is associated with more negative behaviour among youngsters.

In the second example, we predict a positive relationship between a balanced diet and good health, and a negative relationship between junk food consumption and health, with overconsumption leading to multiple health issues.

Step 3. Formulate the Hypothesis

One of the essential aspects of experimental research is formulating a hypothesis . A researcher studies the cause-and-effect relationship between the independent and dependent variables while eliminating confounding variables. The null hypothesis , denoted H0, states that there is no significant relationship between the dependent and independent variables; the researcher aims to disprove it. The alternative hypothesis , denoted H1 or HA, is the claim the researcher seeks to support.

Null hypothesis: The usage of social media does not correlate with negative behaviour among youngsters.
Alternative hypothesis: Over-usage of social media adversely affects the behaviour of youngsters.

Null hypothesis: There is no relationship between the consumption of junk food and people's health issues.
Alternative hypothesis: Over-consumption of junk food leads to multiple health issues.
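The choice between the null and alternative hypothesis is made with a significance test. As a minimal sketch (hypothetical health scores, Python standard library only, not part of the original guide), Welch's t statistic compares the mean of a balanced-diet group with that of a junk-food group:

```python
from statistics import mean, stdev

# Hypothetical health scores for two groups (illustration only).
balanced = [72, 75, 70, 78, 74, 77, 73, 76]
junk = [65, 68, 63, 70, 66, 64, 67, 69]

def welch_t(a, b):
    """Welch's t statistic: difference in means divided by its standard error."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(balanced, junk)  # roughly 6.1 here: a large, positive difference
```

With samples this size, a |t| well above roughly 2 is strong evidence against the null hypothesis of equal means; in a real analysis you would compute the exact p-value, e.g. with scipy.stats.ttest_ind(equal_var=False).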


Step 4. Selection and Assignment of the Subjects

This step is an essential feature that differentiates experimental design from other research designs . You need to select the number of participants based on the requirements of your experiment and then assign them to the treatment group. There should also be a control group that receives no treatment, so that outcomes can be compared with the experimental group.

Randomisation:  Participants are selected randomly and assigned to the experimental group; this is known as probability sampling. If the selection is not random, it is considered non-probability sampling.

Stratified sampling : It’s a type of random selection of the participants by dividing them into strata and randomly selecting them from each level. 

Randomisation (example 1): Participants are randomly selected and assigned a specific number of hours to spend on social media.
Stratified sampling (example 1): Participants are divided into groups by age and then assigned a specific number of hours to spend on social media.

Randomisation (example 2): Participants are randomly selected and assigned a balanced diet.
Stratified sampling (example 2): Participants are divided into groups based on age, gender, and health conditions, and assigned to each group's treatment condition.
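The two assignment strategies above (simple randomisation and stratified assignment) can be sketched in a few lines of Python. The participant IDs and age bands are hypothetical, and the seed is fixed only so the example is reproducible:

```python
import random

random.seed(42)  # reproducible assignment for the sketch

participants = [f"P{i}" for i in range(1, 21)]  # hypothetical participant IDs

# Randomisation: shuffle, then split into treatment and control groups.
pool = participants[:]
random.shuffle(pool)
treatment, control = pool[:10], pool[10:]

# Stratified assignment: split each age stratum separately, so both
# groups end up with the same age mix (ages are invented here).
ages = {p: random.choice(["13-15", "16-18"]) for p in participants}
strata = {}
for p in participants:
    strata.setdefault(ages[p], []).append(p)

strat_treatment, strat_control = [], []
for members in strata.values():
    random.shuffle(members)
    half = len(members) // 2
    strat_treatment += members[:half]
    strat_control += members[half:]
```

Either way, every participant lands in exactly one group; the stratified version additionally balances the groups on age.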

Matching:  Even though participants are selected randomly, they can be assigned to the comparison groups by another procedure, 'matching': participants in the control group are chosen to match the experimental group's participants on all characteristics relevant to the dependent variables.

What is Replicability?

When a researcher repeats an experiment using the same methodology and subject groups, this is called 'replication'; the results should be similar each time. Researchers usually replicate their own work to strengthen external validity.

Step 5. Select a Research Design

You need to select a  research design  according to the requirements of your experiment. There are many types of experimental designs as follows.

Two-group post-test only: includes a control group and an experimental group selected randomly or through matching. This design is used when the sample of subjects is large and is often carried out outside the laboratory. The groups' dependent variables are compared after the experiment.

Two-group pre-test post-test: includes two randomly selected groups, with pre-test and post-test measurements in both. It is conducted in a controlled environment.

Solomon four-group design: combines the post-test-only and pre-test-post-test control group designs, giving good internal and external validity.

Factorial design: studies the effects of two or more factors, each with several possible values or levels. Example: factorial designs applied in optimisation techniques.

Randomised block design: one of the most widely used experimental designs in forestry research. It aims to decrease experimental error by using blocks to exclude known sources of variation among experimental units.

Crossover design: subjects receive different treatments during different periods.

Repeated measures design: the same group of participants is measured on one dependent variable at various times, or on several dependent variables. Each individual receives every experimental treatment, so a minimal number of participants is needed. It uses counterbalancing (randomising and reversing the order of subjects and treatments) and increases the time interval between treatments/measurements.
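The counterbalancing used in repeated measures designs can be sketched as follows. The treatment labels A/B/C and participant IDs are hypothetical; full counterbalancing cycles participants through every possible treatment order so that order effects average out:

```python
from itertools import permutations

treatments = ["A", "B", "C"]             # hypothetical treatment labels
orders = list(permutations(treatments))  # all 6 possible presentation orders

# Rotate through the orders so each one is used equally often.
participants = [f"P{i}" for i in range(1, 13)]
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
```

With 12 participants and 6 orders, each order is assigned to exactly two people; a sample size that is a multiple of the number of orders keeps the counterbalancing exact.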

Step 6. Meet Ethical and Legal Requirements

  • Participants of the research should not be harmed.
  • The dignity and confidentiality of the participants should be maintained.
  • The consent of the participants should be taken before experimenting.
  • The privacy of the participants should be ensured.
  • Research data should remain confidential.
  • The anonymity of the participants should be ensured.
  • The rules and objectives of the experiments should be followed strictly.
  • Any wrong information or data should be avoided.

Tips for Meeting the Ethical Considerations

To meet the ethical considerations, you need to ensure that:

  • Participants have the right to withdraw from the experiment.
  • They should be aware of the required information about the experiment.
  • You avoid offensive or unacceptable language while framing the questions of interviews, questionnaires, or focus groups.
  • You should ensure the privacy and anonymity of the participants.
  • You should acknowledge the sources and authors in your dissertation using any referencing styles such as APA/MLA/Harvard referencing style.

Step 7. Collect and Analyse Data.

Collect the data  using methods suited to your experiment's requirements, such as observations,  case studies ,  surveys ,  interviews , or questionnaires, and then analyse the information obtained.

Step 8. Present and Conclude the Findings of the Study.

Write the report of your research. Present, conclude, and explain the outcomes of your study .

Frequently Asked Questions

What is the first step in conducting experimental research?

The first step in conducting experimental research is to define your research question or hypothesis. Clearly outline the purpose and expectations of your experiment to guide the entire research process.



How to Conduct a Psychology Experiment

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.


Conducting your first psychology experiment can be a long, complicated, and sometimes intimidating process. It can be especially confusing if you are not quite sure where to begin or which steps to take.

Like other sciences, psychology utilizes the  scientific method  and bases conclusions upon empirical evidence. When conducting an experiment, it is important to follow the basic steps of the scientific method:

  • Ask a testable question
  • Define your variables
  • Conduct background research
  • Design your experiment
  • Perform the experiment
  • Collect and analyze the data
  • Draw conclusions
  • Share the results with the scientific community

At a Glance

It's important to know the steps of the scientific method if you are conducting an experiment in psychology or other fields. The process encompasses finding a problem you want to explore, learning what has already been discovered about the topic, determining your variables, and finally designing and performing your experiment. But the process doesn't end there! Once you've collected your data, it's time to analyze the numbers, determine what they mean, and share what you've found.

Find a Research Problem or Question

Picking a research problem can be one of the most challenging steps when you are conducting an experiment. After all, there are so many different topics you might choose to investigate.

Are you stuck for an idea? Consider some of the following:

Investigate a Commonly Held Belief

Folk knowledge is a good source of questions that can serve as the basis for psychological research. For example, many people believe that staying up all night to cram for a big exam can actually hurt test performance.

You could conduct a study to compare the test scores of students who stayed up all night with the scores of students who got a full night's sleep before the exam.

Review Psychology Literature

Published studies are a great source of unanswered research questions. In many cases, the authors will even note the need for further research. Find a published study that you find intriguing, and then come up with some questions that require further exploration.

Think About Everyday Problems

There are many practical applications for psychology research. Explore various problems that you or others face each day, and then consider how you could research potential solutions. For example, you might investigate different memorization strategies to determine which methods are most effective.

Define Your Variables

Variables are anything that might impact the outcome of your study. An operational definition describes exactly what the variables are and how they are measured within the context of your study.

For example, if you were doing a study on the impact of sleep deprivation on driving performance, you would need to operationally define sleep deprivation and driving performance .

An operational definition refers to a precise way that an abstract concept will be measured. For example, you cannot directly observe and measure something like test anxiety . You can, however, use an anxiety scale and assign values based on how many anxiety symptoms a person is experiencing. 

In this example, you might define sleep deprivation as getting less than seven hours of sleep at night. You might define driving performance as how well a participant does on a driving test.

What is the purpose of operationally defining variables? The main purpose is control. By understanding what you are measuring, you can control for it by holding the variable constant between all groups or manipulating it as an independent variable .
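An operational definition translates directly into a measurable rule. A minimal sketch of the sleep-deprivation example follows; the 7-hour cutoff comes from the text, while the scoring rule of 5 points per driving error is an invented illustration:

```python
# Hypothetical operationalisations of the article's sleep-deprivation example.

def is_sleep_deprived(hours_slept: float) -> bool:
    """Operational definition from the text: under 7 hours counts as deprived."""
    return hours_slept < 7.0

def driving_performance(errors: int, max_score: int = 100) -> int:
    """Invented scoring rule: start at max_score, lose 5 points per error."""
    return max(0, max_score - 5 * errors)
```

Once variables are pinned down like this, every participant is classified and scored by the same rule, which is exactly the control the paragraph above describes.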

Develop a Hypothesis

The next step is to develop a testable hypothesis that predicts how the operationally defined variables are related. In the recent example, the hypothesis might be: "Students who are sleep-deprived will perform worse than students who are not sleep-deprived on a test of driving performance."

Null Hypothesis

In order to determine if the results of the study are significant, it is essential to also have a null hypothesis. The null hypothesis is the prediction that one variable will have no association with the other variable.

In other words, the null hypothesis assumes that there will be no difference in the effects of the two treatments in our experimental and control groups .

The null hypothesis is assumed to be valid unless contradicted by the results. The experimenters can either reject the null hypothesis in favor of the alternative hypothesis or not reject the null hypothesis.

It is important to remember that not rejecting the null hypothesis does not mean that you are accepting the null hypothesis. To say that you are accepting the null hypothesis is to suggest that something is true simply because you did not find any evidence against it. This represents a logical fallacy that should be avoided in scientific research.  

Conduct Background Research

Once you have developed a testable hypothesis, it is important to spend some time doing some background research. What do researchers already know about your topic? What questions remain unanswered?

You can learn about previous research on your topic by exploring books, journal articles, online databases, newspapers, and websites devoted to your subject.

Reading previous research helps you gain a better understanding of what you will encounter when conducting an experiment. Understanding the background of your topic provides a better basis for your own hypothesis.

After conducting a thorough review of the literature, you might choose to alter your own hypothesis. Background research also allows you to explain why you chose to investigate your particular hypothesis and articulate why the topic merits further exploration.

As you research the history of your topic, take careful notes and create a working bibliography of your sources. This information will be valuable when you begin to write up your experiment results.

Select an Experimental Design

After conducting background research and finalizing your hypothesis, your next step is to develop an experimental design. There are three basic types of designs that you might utilize. Each has its own strengths and weaknesses:

Pre-Experimental Design

A single group of participants is studied, and there is no comparison between a treatment group and a control group. Examples of pre-experimental designs include case studies (one group is given a treatment and the results are measured) and pre-test/post-test studies (one group is tested, given a treatment, and then retested).

Quasi-Experimental Design

This type of experimental design does include a control group but does not include randomization. This type of design is often used if it is not feasible or ethical to perform a randomized controlled trial.

True Experimental Design

A true experimental design, also known as a randomized controlled trial, includes both of the elements that pre-experimental designs and quasi-experimental designs lack—control groups and random assignment to groups.

Standardize Your Procedures

In order to arrive at legitimate conclusions, it is essential to compare apples to apples.

Each participant in each group must receive the same treatment under the same conditions.

For example, in our hypothetical study on the effects of sleep deprivation on driving performance, the driving test must be administered to each participant in the same way. The driving course must be the same, the obstacles faced must be the same, and the time given must be the same.

Choose Your Participants

In addition to making sure that the testing conditions are standardized, it is also essential to ensure that your pool of participants is the same.

If the individuals in your control group (those who are not sleep deprived) all happen to be amateur race car drivers while your experimental group (those that are sleep deprived) are all people who just recently earned their driver's licenses, your experiment will lack standardization.

When choosing subjects, there are some different techniques you can use.

Simple Random Sample

In a simple random sample, the participants are randomly selected from a group. A simple random sample can represent the entire population from which it is drawn.

Drawing a simple random sample can be helpful when you don't know a lot about the characteristics of the population.

Stratified Random Sample

In a stratified random sample, participants are randomly selected from different subsets (strata) of the population. These subsets might include characteristics such as geographic location, age, sex, race, or socioeconomic status.

Stratified random samples are more complex to carry out. However, you might opt for this method if there are key characteristics about the population that you want to explore in your research.
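A proportional stratified sample can be sketched as follows (hypothetical population and stratum labels; each stratum contributes units in proportion to its share of the population, and the seed is fixed only for reproducibility):

```python
import random

random.seed(7)

# Hypothetical population of 100 units tagged by an age-band stratum.
population = [("young", f"Y{i}") for i in range(60)] + \
             [("older", f"O{i}") for i in range(40)]

def stratified_sample(pop, n):
    """Draw n units, allocating to each stratum in proportion to its size."""
    strata = {}
    for stratum, unit in pop:
        strata.setdefault(stratum, []).append(unit)
    sample = []
    for units in strata.values():
        k = round(n * len(units) / len(pop))
        sample += random.sample(units, k)
    return sample

sample = stratified_sample(population, 10)  # 6 young units, 4 older units
```

A real design would also handle rounding so the stratum allocations always sum exactly to n; here 60%/40% of 10 divides cleanly.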

Conduct Tests and Collect Data

After you have selected participants, the next steps are to conduct your tests and collect the data. Before doing any testing, however, there are a few important concerns that need to be addressed.

Address Ethical Concerns

First, you need to be sure that your testing procedures are ethical . Generally, you will need to gain permission to conduct any type of testing with human participants by submitting the details of your experiment to your school's Institutional Review Board (IRB), sometimes referred to as the Human Subjects Committee.

Obtain Informed Consent

After you have gained approval from your institution's IRB, you will need to present informed consent forms to each participant. This form offers information on the study, the data that will be gathered, and how the results will be used. The form also gives participants the option to withdraw from the study at any point in time.

Once this step has been completed, you can begin administering your testing procedures and collecting the data.

Analyze the Results

After collecting your data, it is time to analyze the results of your experiment. Researchers use statistics to determine if the results of the study support the original hypothesis and if the results are statistically significant.

Statistical significance means that the study's results are unlikely to have occurred simply by chance.

The types of statistical methods you use to analyze your data depend largely on the type of data that you collected. If you are using a random sample of a larger population, you will need to utilize inferential statistics.

These statistical methods make inferences about how the results relate to the population at large.

Because you are making inferences based on a sample, it has to be assumed that there will be a certain margin of error. This refers to the amount of error in your results. A large margin of error means that there will be less confidence in your results, while a small margin of error means that you are more confident that your results are an accurate reflection of what exists in that population.
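For a sample proportion, the usual 95% margin of error is z · sqrt(p(1 − p)/n) with z ≈ 1.96. A minimal sketch with hypothetical survey figures shows why larger samples shrink the margin:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * sqrt(p * (1 - p) / n)

# Hypothetical survey where 50% answered "yes":
small = margin_of_error(0.5, 400)   # about +/- 4.9 percentage points
large = margin_of_error(0.5, 1600)  # about +/- 2.5: quadrupling n halves it
```

The inverse-square-root relationship is why precision gains get expensive: halving the margin of error requires four times as many participants.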

Share Your Results After Conducting an Experiment

Your final task in conducting an experiment is to communicate your results. By sharing your experiment with the scientific community, you are contributing to the knowledge base on that particular topic.

One of the most common ways to share research results is to publish the study in a peer-reviewed professional journal. Other methods include sharing results at conferences, in book chapters, or academic presentations.

In your case, it is likely that your class instructor will expect a formal write-up of your experiment in the same format required in a professional journal article or lab report :

  • Introduction
  • Tables and figures

What This Means For You

Designing and conducting a psychology experiment can be quite intimidating, but breaking the process down step-by-step can help. No matter what type of experiment you decide to perform, always check with your instructor and your school's institutional review board for permission before you begin.




Experimental Research

Experimental research is commonly used in sciences such as sociology, psychology, physics, chemistry, biology, and medicine.


It is a collection of research designs which use manipulation and controlled testing to understand causal processes. Generally, one or more variables are manipulated to determine their effect on a dependent variable.

The experimental method is a systematic and scientific approach to research in which the researcher manipulates one or more variables, and controls and measures any change in other variables.

Experimental Research is often used where:

  • There is time priority in a causal relationship (cause precedes effect)
  • There is consistency in a causal relationship (a cause will always lead to the same effect)
  • The magnitude of the correlation is large.

(Reference: en.wikipedia.org)

The term 'experimental research' has a range of definitions. In the strict sense, experimental research is what we call a true experiment .

This is an experiment where the researcher manipulates one variable and controls/randomizes the rest of the variables. It has a control group , the subjects have been randomly assigned between the groups, and the researcher tests only one effect at a time. It is also important to know which variable(s) you want to test and measure.

A very wide definition of experimental research, or a quasi experiment , is research where the scientist actively influences something to observe the consequences. Most experiments tend to fall in between the strict and the wide definition.

A rule of thumb is that physical sciences, such as physics, chemistry and geology tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.


Aims of Experimental Research

Experiments are conducted to be able to predict phenomena. Typically, an experiment is constructed to explain some kind of causation . Experimental research is important to society: it helps us improve our everyday lives.


Identifying the Research Problem

After deciding the topic of interest, the researcher tries to define the research problem . This helps the researcher to focus on a more narrow research area to be able to study it appropriately.  Defining the research problem helps you to formulate a  research hypothesis , which is tested against the  null hypothesis .

The research problem is often operationalized , to define how the problem will be measured. The results will depend on the exact measurements that the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions of the study.

An ad hoc analysis is a hypothesis invented after testing is done, to try to explain contrary evidence. A poor ad hoc analysis may be seen as the researcher's inability to accept that his/her hypothesis is wrong, while a great ad hoc analysis may lead to more testing and possibly a significant discovery.

Constructing the Experiment

There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real world, in the best possible way.

Sampling Groups to Study

Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample group often serves as a control group , whilst others are tested under the experimental conditions.

Deciding the sample groups can be done using many different sampling techniques. Population samples may be chosen by a number of methods, such as randomization , "quasi-randomization" and pairing.

Reducing sampling errors is vital for getting valid results from experiments. Researchers often adjust the sample size to minimize chances of random errors .

Here are some common sampling techniques :

  • probability sampling
  • non-probability sampling
  • simple random sampling
  • convenience sampling
  • stratified sampling
  • systematic sampling
  • cluster sampling
  • sequential sampling
  • disproportional sampling
  • judgmental sampling
  • snowball sampling
  • quota sampling
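
Two of these techniques can be sketched in a few lines of Python. This is a minimal illustration, not a complete sampling toolkit; the population and the age-group strata are hypothetical:

```python
import random

def simple_random_sample(population, n, seed=0):
    """Simple random sampling: every unit has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_sample(population, stratum_of, n_per_stratum, seed=0):
    """Stratified sampling: draw an equal-sized random sample from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, n_per_stratum))
    return sample

# Hypothetical population: 100 people tagged with an age group.
population = [{"id": i, "age": "young" if i < 60 else "old"} for i in range(100)]

srs = simple_random_sample(population, 10)
strat = stratified_sample(population, lambda p: p["age"], 5)
```

Note that the stratified sample guarantees equal representation of each age group, while the simple random sample may not.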

Creating the Design

The research design is chosen based on a range of factors, including feasibility, time, cost, ethics, measurement problems and what you would like to test. The design of the experiment is critical for the validity of the results.

Typical Designs and Features in Experimental Design

  • Pretest-Posttest Design Checks whether the groups differ before the manipulation starts, as well as measuring the effect of the manipulation. Pretests sometimes influence the effect.
  • Control Group Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect . A control group is a group that does not receive the manipulation given to the experimental group. Experiments frequently have two conditions, but rarely more than three at the same time.
  • Randomized Controlled Trials Randomized Sampling, comparison between an Experimental Group and a Control Group and strict control/randomization of all other variables
  • Solomon Four-Group Design Uses two control groups and two experimental groups. Half the groups have a pretest and half do not. This tests both the effect itself and the effect of the pretest.
  • Between Subjects Design Grouping Participants to Different Conditions
  • Within Subject Design Participants Take Part in the Different Conditions - See also: Repeated Measures Design
  • Counterbalanced Measures Design Testing the effect of the order of treatments when no control group is available/ethical
  • Matched Subjects Design Matching Participants to Create Similar Experimental- and Control-Groups
  • Double-Blind Experiment Neither the researcher, nor the participants, know which is the control group. The results can be affected if the researcher or participants know this.
  • Bayesian Probability Using Bayesian probability to "interact" with participants is a more "advanced" experimental design. It can be used in settings where there are many variables that are hard to isolate. The researcher starts with a set of initial beliefs and adjusts them according to how participants have responded
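
Several of the designs above (between-subjects, control group, randomized controlled trials) rest on the same mechanical step: each participant is randomly assigned to exactly one condition. A minimal sketch in Python, with hypothetical participant labels:

```python
import random

def assign_between_subjects(participants, conditions, seed=42):
    """Shuffle the participants, then deal them out so that each person
    serves in exactly one condition and group sizes stay balanced."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

participants = [f"P{i:02d}" for i in range(20)]
groups = assign_between_subjects(participants, ["treatment", "control"])
```

Because assignment depends only on the shuffle, any pre-existing participant differences are spread across conditions by chance rather than by the researcher's choices.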

Pilot Study

It may be wise to first conduct a pilot study or two before you do the real experiment. This ensures that the experiment measures what it should, and that everything is set up correctly.

Minor errors, which could potentially destroy the experiment, are often found during this process. With a pilot study, you can get information about errors and problems, and improve the design, before putting a lot of effort into the real experiment.

If the experiments involve humans, a common strategy is to first have a pilot study with someone involved in the research, but not too closely, and then arrange a pilot with a person who resembles the subject(s) . Those two different pilots are likely to give the researcher good information about any problems in the experiment.

Conducting the Experiment

An experiment is typically carried out by manipulating a variable, called the independent variable , affecting the experimental group. The effect that the researcher is interested in, the dependent variable(s) , is measured.

Identifying and controlling non-experimental factors that the researcher does not want to influence the effects is crucial to drawing a valid conclusion. This is often done by controlling variables , if possible, or randomizing variables to minimize effects that can be traced back to third variables . When conducting an experiment , researchers want to measure only the effect of the independent variable(s), allowing them to conclude that this was the reason for the effect.

Analysis and Conclusions

In quantitative research , the amount of data measured can be enormous. Data not prepared to be analyzed is called "raw data". The raw data is often summarized as something called "output data", which typically consists of one line per subject (or item). A cell of the output data is, for example, an average of an effect in many trials for a subject. The output data is used for statistical analysis, e.g. significance tests, to see if there really is an effect.
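
The raw-to-output step can be illustrated with a small hypothetical reaction-time dataset, collapsing each subject's trials into one averaged line per subject:

```python
from statistics import mean

# Hypothetical raw data: one reaction-time trial per row.
raw_data = [
    {"subject": "S1", "trial": 1, "rt_ms": 512},
    {"subject": "S1", "trial": 2, "rt_ms": 498},
    {"subject": "S2", "trial": 1, "rt_ms": 640},
    {"subject": "S2", "trial": 2, "rt_ms": 602},
]

# Output data: one line per subject, where each cell is an average
# of that subject's trials.
trials_by_subject = {}
for row in raw_data:
    trials_by_subject.setdefault(row["subject"], []).append(row["rt_ms"])

output_data = {subject: mean(rts) for subject, rts in trials_by_subject.items()}
# output_data is {"S1": 505, "S2": 621}
```

It is this per-subject table, not the trial-level raw data, that is typically fed into significance tests.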

The aim of an analysis is to draw a conclusion , together with other observations. The researcher might generalize the results to a wider phenomenon, if there is no indication of confounding variables "polluting" the results.

If the researcher suspects that the effect stems from a different variable than the independent variable, further investigation is needed to gauge the validity of the results. An experiment is often conducted because the scientist wants to know if the independent variable is having any effect upon the dependent variable. Correlation between variables is not proof of causation .

Experiments are more often quantitative than qualitative in nature, although qualitative experiments do occur.

Examples of Experiments

This website contains many examples of experiments. Some are not true experiments , but involve some kind of manipulation to investigate a phenomenon. Others fulfill most or all criteria of true experiments.

Here are some examples of scientific experiments:

Social Psychology

  • Stanley Milgram Experiment - Will people obey orders, even if clearly dangerous?
  • Asch Experiment - Will people conform to group behavior?
  • Stanford Prison Experiment - How do people react to roles? Will you behave differently?
  • Good Samaritan Experiment - Would You Help a Stranger? - Explaining Helping Behavior
  • Law Of Segregation - The Mendel Pea Plant Experiment
  • Transforming Principle - Griffith's Experiment about Genetics
  • Ben Franklin Kite Experiment - Struck by Lightning
  • J J Thomson Cathode Ray Experiment

Oskar Blakstad (Jul 10, 2008). Experimental Research. Retrieved Jun 10, 2024 from Explorable.com: https://explorable.com/experimental-research

You Are Allowed To Copy The Text

The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0) .


What Is Internal Validity In Research?

By Charlotte Nickerson; reviewed by Saul Mcleod, PhD, and Olivia Guy-Evans, MSc.

Internal validity refers to whether the design and conduct of a study are able to support that a causal relationship exists between the independent and dependent variables .

It ensures that no other variables except the independent variable caused the observed effect on the dependent variable.

Conducting research that has strong internal and external validity requires thoughtful planning and design from the outset.

Rather than hastening through the design process, it’s wise to invest sufficient time in structuring a study that is methodologically robust and widely applicable. 

By carefully considering factors that can compromise internal and external validity during the design phase, one can avoid having to remedy issues later. 

Research that exhibits both high internal and external validity permits drawing forceful conclusions about the findings. Though it may require more initial effort, ensuring studies have sound internal and external validity is necessary for producing meaningful and influential research.


For example, if you implement a smoking cessation program and see improvement among participants, high internal validity means you can be confident this is due to the program itself rather than other influences. 

Internal validity is not black-and-white – it’s about the level of confidence we can have in results based on how well the study controls for variables that could undermine the findings. 

The more a study avoids potential “confounding factors,” the higher its internal validity and the more faith we can place in the cause-effect relationship it uncovers. 

For the general public, internal validity is important because it means a given study’s results and takeaways can be trusted and applied.

Threats to Internal Validity

Confounding variables

Confounding variables are extraneous factors that influence the dependent variables in an experiment, causing a misleading association and making it difficult to isolate the true effect of the independent variable. 

They threaten internal validity because they provide alternative explanations for study results, making it unclear if changes in the dependent variable are really due to manipulation of the independent variable or due to the confounding variable.

A failure to control extraneous variables undermines the ability of researchers to create causal inferences logically. Unfortunately, however, confounding variables are difficult to control outside of laboratory settings.

Nonetheless, Campbell (1957) identified several confounding variables that can threaten internal validity. 

Participant Factors

Participant reaction biases threaten internal validity because participants may act differently when they know they are being observed. These biases include participant expectancies, participant reactance, and evaluation apprehension.

Participant expectancies occur when a participant, consciously or unconsciously, attempts to behave in a way that the experimenter expects them to. The overly cooperative participant may often base their behavior on factors such as study setting and directions. 

Participant expectancies may also occur during a participant screening process. For example, a participant hoping to participate in a study about depression may exaggerate their symptoms on a screening questionnaire to appear more eligible for the study.

Participant reactance occurs when participants intentionally try to act in a way counter to the experimenter’s hypothesis.

For example, if studying the effects of daylight exposure on sleep habits, a participant may intentionally sleep at exactly the same time, regardless of whether or not they are exposed to daylight. Intentional uncooperativeness could result from a desire for autonomy or independence (Brehm, 1966).

Evaluation apprehension happens when a desire to appear consistent with social or group beliefs affects participant responses.

This response style can polarize responses and lead to inappropriate conclusions. For instance, participants asked about their opinions on a political issue in a group may feel pressure to conform to the responses of other group members. 

Broadly, researchers can reduce these biases by guaranteeing participant anonymity, using cover stories, unobtrusive observations, and indirect measures.

Sampling bias

Sampling bias occurs when the process of selecting participants for a research study results in key differences between groups that could skew the results. This threatens internal validity because it introduces systematic error in the comparisons between an experimental group and a control group.

For example, let’s say a study is testing a new math tutoring program and students are randomly assigned to either participate in the program (experiment group) or continue with normal instruction (control group).

However, the researcher unknowingly samples students for the experiment group from advanced math classes, while the control group is sampled from regular math classes.

In this case, a sampling bias is introduced because the students in the experiment group may have higher math abilities or motivation levels to begin with compared to the control group.

Any positive effects observed from the tutoring program could simply be due to these pre-existing differences rather than being an actual result of the program itself.

Attrition

According to Campbell (1957), attrition, otherwise known as experimental mortality, refers to a differential loss of study participants in experimental and control groups.

This can threaten internal validity if the rate of attrition differs significantly between the experimental and control groups.

For example, imagine a clinical trial testing the effectiveness of a new therapy for depression. Participants are randomly assigned to either receive the therapy (experimental group) or no therapy (control group) for 8 weeks.

Over the course of the study, a number of participants from both groups drop out and are lost to follow-up. However, twice as many participants dropped out from the control group compared to the experimental group.

This differential attrition introduces bias because the participants remaining in each condition are no longer equivalent – the experimental group now contains more of its original participants compared to the smaller subset remaining in the control group.

Any observed differences in depression levels by the end of the study could be due to this systematic imbalance rather than being an actual effect of the therapy.
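
The bias can be shown with hypothetical baseline scores: groups that were equivalent at randomization stop being equivalent once dropout is one-sided, even with no treatment effect at all.

```python
from statistics import mean

# Hypothetical baseline depression scores (higher = worse) for two
# randomized groups; at the start the groups are roughly equivalent.
experimental = [20, 22, 25, 28, 30, 33]
control      = [21, 23, 24, 27, 31, 34]

# Suppose the most severely depressed control participants drop out,
# while the experimental group stays intact.
control_completers = [score for score in control if score < 28]

baseline_gap  = mean(experimental) - mean(control)             # near zero
completer_gap = mean(experimental) - mean(control_completers)  # inflated by attrition
```

An end-of-study comparison of completers would now overstate how depressed the experimental group looks relative to the control group, purely because of who remained.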

Experimenter bias

Experimenter bias refers to when a researcher’s expectations, perceptions, or motivations influence the outcome of an experiment in unconscious ways. This threatens internal validity because it provides an alternative explanation for results besides the independent variable being tested.

For example, a psychologist is conducting an experiment on the effects of praise on child task performance. The psychologist hypothesizes that praising children will improve their task performance.

During the experiment, she unconsciously provided more encouragement and positive body language when interacting with the praise group versus the neutral group.

Consequently, the praise group shows better task performance. However, it is unclear whether this is truly due to the predictive praise or inadvertent experimenter bias, where children picked up on the researcher’s subtle supportive cues.

This demonstrates how a researcher’s cognitive bias can unknowingly impact participant responses and behavior in a way that distorts the causal relationship between variables.

History

History encompasses specific events that a study participant experiences during the course of an experiment that are not part of the experiment itself.

Specifically, it threatens the internal validity of experiments that take place over longer periods of time. For example, imagine a 12-month clinical trial testing a new psychotherapy for reducing anxiety. Participants are randomly assigned to receive either the new therapy or an existing therapy.

However, 8 months into the trial, the COVID-19 pandemic begins. This external event increases anxiety levels for people everywhere.

By the end of the trial, anxiety levels are reassessed. The new therapy group shows greater reductions in anxiety compared to the existing therapy group.

However, it is unclear whether this difference is truly due to the new therapy’s effectiveness or the confounding variable of COVID-19 raising anxiety in the control group.

Perhaps anxiety would have decreased similarly in both groups if not for the pandemic. This demonstrates how history can introduce confounds and alternative explanations that undermine internal validity.

Instrumentation 

Instrumentation refers to the ability of experimental instruments to provide consistent results throughout the course of a study. 

Instrumentation threats occur when there are changes in the calibration or administration of the tools, surveys, or measures used to collect data over the course of a study.

This can introduce systematic measurement error and provide an alternative explanation for any observed differences aside from the independent variable.

For example, a researcher using a battery-powered device to measure blood pressure in an experiment investigating the effectiveness of a drug in reducing hypertension may find that progressive battery decay makes readings appear lower on a post-test than on the pre-tests.

Instrumentation is not limited to electronic or mechanical instruments. For example, a newly-hired researcher asked to rate the mental health status of participants over the course of a month may, with experience, be able to rate participants more accurately in the post-test than during the pre-test (Flannelly et al., 2018).

Diffusion of information between participants

The diffusion of information and treatments between participants can call internal validity into question. Treatment diffusion describes a situation in which research participants adopt a different intervention than the one they were assigned because they believe the other intervention to be more effective. 

For example, a control participant in a weight-loss study who learns that those in the treatment group are losing more weight than them may adopt the treatment group’s intervention. 

Differential diffusion of information can also occur when participants are given different instructions or instructions that can be misinterpreted by those conducting the study.

For instance, participants asked to take a medication biweekly may take it twice a week or once every two weeks (Flannelly et al., 2018; Campbell, 1957).

Maturation 

Maturation encompasses any biological changes, age-related or otherwise, that occur with the passage of time. This can include becoming hungry, tired, or fatigued; wound healing; recovering from surgery; and disease progression. 

Maturation threatens internal validity because natural changes over time can provide an alternative explanation for study results rather than the independent variable itself. 

For example, in a year-long study of a new reading program for children, students may show reading gains over the course of the year. However, some of that improvement could simply be due to neural development and growing reading skills expected with age. 

The effects of maturation can also appear in studies of short duration: for example, children given a repetitive computer task may lose focus within an hour, resulting in worsened performance (Flannelly et al., 2018).

Testing

Testing refers to when participants taking a test or assessment can perform better simply from having experienced it before. Familiarity with the test can influence results rather than any intervention or independent variable being studied.

For example, let’s say a researcher is testing a new method for improving memory in older adults. Participants take a memory assessment before and after completing the new memory training program.

However, participants may show memory improvements in the post-test partly just because it was their second time taking the exact same test. Their prior experience with the questions and format benefits their scores.

This demonstrates how repeated testing on the same measures can threaten internal validity. It provides an alternative explanation that improvements were due to practice effects rather than being an actual result of the intervention.

How can we prevent threats to internal validity?

Some methods for increasing the internal validity of an experiment include:

Random allocation

Random allocation is a technique that chooses individuals for treatment groups without regard to researchers’ will or patient condition and preference. This increases internal validity by reducing experimenter and selection bias (Kim & Shin, 2014).


Random Selection

Randomly selecting participants helps prevent systematic differences between groups that could provide alternative explanations.

It ensures any pre-existing factors are evenly distributed by chance, strengthening the ability to attribute results to the independent variable rather than confounds.

Blinding

Blinding (also called masking) refers to keeping trial participants, healthcare providers, and data collectors unaware of the assigned intervention, so that their behavior is not influenced by that knowledge.

This minimizes bias in instrumentation, drop-out rates (attrition), and participant bias.

Control Groups

Control groups are groups to which the experimental condition is not applied. They show whether or not there is a clear difference in outcome related to the application of the independent variable.

The use of a control group in combination with random allocation constitutes a randomized controlled trial, which scholars consider to be a “gold standard” for psychological research (Kim & Shin, 2014).

Study protocol

Study protocols are pre-defined plans that detail all aspects of a study: experimental design, methodology, data collection and analysis procedures, and so on.

This helps to ensure consistency throughout the study, reducing the effects of instrumentation and differential diffusion of information on internal validity (Kim & Shin, 2014).

Allocation concealment

In a research study comparing two treatments, participants must be randomly assigned so that neither the researchers nor participants know which treatment they will get ahead of time. 

This process of hiding the upcoming assignment is called allocation concealment. It’s crucial because if researchers or participants know or influence which treatment someone will receive, it ruins the randomness.

For example, if a researcher believes one treatment is better, they may steer sicker participants toward it rather than assigning them fairly by chance. 

Proper allocation concealment prevents this by keeping upcoming assignments hidden, ensuring unbiased random group assignments.
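
One common way to implement this is to generate the entire random sequence before recruitment begins and keep it out of recruiters' hands. A sketch using blocked randomization (the arm names, block size, and lookup scheme here are illustrative, not a clinical-trial system):

```python
import random

def make_allocation_sequence(n, arms=("treatment", "control"), seed=2024):
    """Pre-generate a blocked random allocation sequence.  Blocks of
    2 * len(arms) keep group sizes balanced; in a real trial the sequence
    is held by a third party (sealed envelopes or a central service) so
    recruiters cannot foresee the next assignment."""
    rng = random.Random(seed)
    block = list(arms) * 2
    sequence = []
    while len(sequence) < n:
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n]

sequence = make_allocation_sequence(12)

def reveal_assignment(participant_index):
    """The assignment is looked up only after the participant has enrolled."""
    return sequence[participant_index]
```

Because the whole sequence exists before the first participant is screened, a recruiter cannot steer sicker participants toward a preferred arm.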


What is the difference between internal and external validity?

Validity refers to how accurately a test measures what it claims to. Internal validity is a statement of causality and non-interference by extraneous factors, while external validity is a statement of an experiment’s generalizability to different situations or groups.

Why is internal validity more critical than external validity in a true experiment?

Internal validity concerns the robustness of an experiment in itself. An experiment with external but not internal validity cannot be used to conclude causality. Thus, it is generally unreliable for making any scientific inferences. On the contrary, an experiment that has only internal validity can be used, at least, to draw causal relationships in a narrow context.

American Psychological Association. Internal Validity. American Psychological Association Dictionary.

Blasco-Fontecilla, H., Delgado-Gomez, D., Legido-Gil, T., De Leon, J., Perez-Rodriguez, M. M., & Baca-Garcia, E. (2012). Can the Holmes-Rahe Social Readjustment Rating Scale (SRRS) be used as a suicide risk scale? An exploratory study. Archives of Suicide Research , 16 (1), 13-28.

Brehm, J. W. (1966). A theory of psychological reactance.

Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological bulletin , 54 (4), 297.

Gerst, M. S., Grant, I., Yager, J., & Sweetwood, H. (1978). The reliability of the Social Readjustment Rating Scale: Moderate and long-term stability. Journal of psychosomatic research , 22 (6), 519-523.

Holmes, T. H., & Rahe, R. H. (1967). The social readjustment rating scale. Journal of psychosomatic research , 11 (2), 213-218.

Flannelly, K. J., Flannelly, L. T., & Jankowski, K. R. B. (2018). Threats to the internal validity of experimental and quasi-experimental research in healthcare. Journal of Health Care Chaplaincy . https://doi.org/10.1080/08854726.2017.1421019

Kim, J., & Shin, W. (2014). How to do random allocation (randomization). Clinics in orthopedic surgery , 6 (1), 103-109.

Morse, G., & Graves, D. F. (2009). Internal Validity. The American Counseling Association Encyclopedia , 292-294.



Chapter 10 Experimental Research

Experimental research, often considered the “gold standard” of research designs, is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality), due to its ability to link cause and effect through treatment manipulation while controlling for the spurious effects of extraneous variables.

Experimental research is best suited for explanatory research (rather than for descriptive or exploratory research), where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments , conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalizability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments , conducted in field settings such as a real organization, are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic Concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli, called a treatment (the treatment group ), while other subjects are not given such a stimulus (the control group ). The treatment may be considered successful if subjects in the treatment group rate more favorably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a medical condition like dementia, a sample of dementia patients may be randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (control group). The first two groups are experimental groups and the third is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than that of the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine whether the high dose is more effective than the low dose.
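
The three-group comparison described above can be sketched with hypothetical improvement scores; the quantities of interest are each experimental group's mean relative to the placebo control:

```python
from statistics import mean

# Hypothetical symptom-improvement scores after the trial period
# (higher = greater improvement); group names follow the dementia example.
improvement = {
    "high_dose": [8, 9, 7, 10, 8],
    "low_dose":  [5, 6, 6, 7, 5],
    "placebo":   [2, 3, 1, 2, 3],
}

group_means = {group: mean(scores) for group, scores in improvement.items()}

# Effect of each dosage relative to the control group.  A real analysis
# would also run a significance test (e.g. ANOVA) before claiming the
# drug is effective.
effect_vs_control = {group: group_means[group] - group_means["placebo"]
                     for group in ("high_dose", "low_dose")}
```

Comparing the two dosage groups to each other, rather than to the placebo, answers the separate question of whether the high dose outperforms the low dose.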

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the “cause” in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures , while those conducted after the treatment are posttest measures .

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and assures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is the process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalizability) of findings. Random assignment, in contrast, is related to design, and is therefore most closely related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

  • History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.
  • Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.
  • Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.
  • Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.
  • Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.
  • Regression threat , also called regression to the mean, refers to the statistical tendency of a group’s overall performance on a measure during a posttest to regress toward the mean of that measure rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.
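The regression threat is easy to demonstrate with a small simulation (a hedged illustration, not from the chapter): select subjects who scored high on a simulated pretest, and their posttest mean drifts back toward the overall mean even though no treatment was applied at all.

```python
import random

random.seed(0)

# Two imperfectly correlated measures (correlation r < 1): pretest and posttest.
r = 0.5
pre = [random.gauss(0, 1) for _ in range(10_000)]
post = [r * p + random.gauss(0, 1) * (1 - r**2) ** 0.5 for p in pre]

# Select the subgroup that scored high (top decile) on the pretest...
cutoff = sorted(pre)[int(0.9 * len(pre))]
high = [(p, q) for p, q in zip(pre, post) if p >= cutoff]

mean_pre = sum(p for p, _ in high) / len(high)
mean_post = sum(q for _, q in high) / len(high)

# ...their posttest mean falls back toward the overall mean of 0,
# even though no treatment was administered between the two measures.
```

With r = 0.5, the selected subgroup's posttest mean is roughly half its pretest mean, which is exactly the pattern a naive pre-post comparison would misread as a treatment effect.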

Two-Group Experimental Designs

The simplest true experimental designs are two-group designs involving one treatment group and one control group, and are ideally suited for testing the effects of a single independent variable that can be manipulated as a treatment. The two basic two-group designs are the pretest-posttest control group design and the posttest-only control group design, while variations may include covariance designs. These designs are often depicted using a standardized design notation, where R represents random assignment of subjects to groups, X represents the treatment administered to the treatment group, and O represents pretest or posttest observations of the dependent variable (with different subscripts to distinguish between pretest and posttest observations of treatment and control groups).

Pretest-posttest control group design . In this design, subjects are randomly assigned to treatment and control groups, an initial (pretest) measurement of the dependent variables of interest is taken, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables are measured again (posttest). The notation of this design is shown in Figure 10.1.


Figure 10.1. Pretest-posttest control group design

The effect E of the experimental treatment in the pretest-posttest design is measured as the difference between the pretest-to-posttest gains of the treatment and control groups:

E = (O 2 – O 1 ) – (O 4 – O 3 )

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement (especially if the pretest introduces unusual topics or content).
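The effect computation above can be illustrated with made-up group means (the numbers are hypothetical, chosen only to show the arithmetic):

```python
# Group means of the four observations in Figure 10.1 (illustrative numbers).
O1, O2 = 50.0, 65.0   # treatment group: pretest, posttest
O3, O4 = 51.0, 55.0   # control group:   pretest, posttest

# E = (O2 - O1) - (O4 - O3): the treatment group's gain net of the
# control group's gain, which absorbs maturation, testing, and
# regression effects common to both groups.
E = (O2 - O1) - (O4 - O3)
print(E)  # 11.0
```

The control group gained 4 points with no treatment, so only 11 of the treatment group's 15-point gain is attributed to the treatment itself.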

Posttest-only control group design . This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.


Figure 10.2. Posttest-only control group design.

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

E = (O 1 – O 2 )

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.
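As an illustration (with invented scores), the treatment effect and the two-group ANOVA F statistic can be computed from first principles:

```python
# Posttest scores for the treatment (O1) and control (O2) groups (made-up data).
treatment = [78, 82, 88, 75, 90, 85]
control = [70, 74, 68, 72, 77, 71]

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA, computed from first principles."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares and its degrees of freedom
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares and its degrees of freedom
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# E = (O1 - O2): the simple difference in posttest group means
E = sum(treatment) / len(treatment) - sum(control) / len(control)
F = one_way_anova_f(treatment, control)
```

With two groups, this F statistic is equivalent to the square of the independent-samples t statistic; in practice a library routine such as `scipy.stats.f_oneway` would be used instead of the hand-rolled helper.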

Covariance designs . Sometimes, measures of dependent variables may be influenced by extraneous variables called covariates . Covariates are those variables that are not of central interest to an experimental study, but should nevertheless be controlled in an experimental design in order to eliminate their potential effect on the dependent variable and therefore allow for a more accurate detection of the effects of the independent variables of interest. The experimental designs discussed earlier did not control for such covariates. A covariance design (also called a concomitant variable design) is a special type of pretest-posttest control group design where the pretest measure is essentially a measurement of the covariates of interest rather than that of the dependent variables. The design notation is shown in Figure 10.3, where C represents the covariates:


Figure 10.3. Covariance design

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups as:

E = (O 1 – O 2 )

Because of the presence of covariates, the appropriate statistical analysis of this design is a two-group analysis of covariance (ANCOVA).

Factorial Designs

Two-group designs are inadequate if the research requires the manipulation of two or more independent variables (treatments); such multi-group designs are called factorial designs . Each independent variable in such a design is called a factor , and each subdivision of a factor is called a level . For instance, to compare the learning outcomes of two types of instruction (instructional type) delivered for either 1.5 or 3 hours per week (instructional time), you would use a 2 x 2 factorial design, with two factors of two levels each.

Figure 10.4. 2 x 2 factorial design

Factorial designs can also be depicted using a design notation, such as that shown on the right panel of Figure 10.4. R represents random assignment of subjects to treatment groups, X represents the treatment groups themselves (the subscripts of X represent the levels of each factor), and O represents observations of the dependent variable. Notice that the 2 x 2 factorial design will have four treatment groups, corresponding to the four combinations of the two levels of each factor. Correspondingly, the 2 x 3 design will have six treatment groups, and the 2 x 2 x 2 design will have eight treatment groups. As a rule of thumb, each cell in a factorial design should have a minimum sample size of 20 (this estimate is derived from Cohen’s power calculations based on medium effect sizes). So a 2 x 2 x 2 factorial design requires a minimum total sample size of 160 subjects, with at least 20 subjects in each cell. As you can see, the cost of data collection can increase substantially with more levels or factors in your factorial design. Sometimes, due to resource constraints, some cells in such factorial designs may not receive any treatment at all; these are called incomplete factorial designs . Such incomplete designs hurt our ability to draw inferences about the incomplete factors.
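The cell-count arithmetic can be sketched as a small helper (illustrative only; the 20-per-cell figure is the chapter's rule of thumb):

```python
from math import prod

def min_factorial_sample(levels_per_factor, per_cell=20):
    """Number of cells and minimum total n for a full factorial design."""
    cells = prod(levels_per_factor)   # one cell per combination of levels
    return cells, cells * per_cell

print(min_factorial_sample([2, 2]))       # (4, 80)
print(min_factorial_sample([2, 3]))       # (6, 120)
print(min_factorial_sample([2, 2, 2]))    # (8, 160)
```

The multiplicative growth in cells is why adding even one more factor or level can sharply increase data-collection cost.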

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for 3 hours/week of instructional time than for 1.5 hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that interaction effects dominate main effects: it is not meaningful to interpret main effects if interaction effects are significant.
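Main and interaction effects can be illustrated with hypothetical cell means for the 2 x 2 example (the numbers are invented):

```python
# Mean learning outcomes in a 2 x 2 design: cell_means[instructional type][time]
cell_means = {
    "in-class": {"1.5h": 60.0, "3h": 64.0},
    "online":   {"1.5h": 62.0, "3h": 74.0},
}

inclass = cell_means["in-class"]
online = cell_means["online"]

# Main effect of instructional type: its effect averaged over time levels
main_type = (online["1.5h"] + online["3h"]) / 2 - (inclass["1.5h"] + inclass["3h"]) / 2

# Interaction: does the effect of type depend on the level of time?
effect_at_3h = online["3h"] - inclass["3h"]       # type effect at 3 h/week
effect_at_15h = online["1.5h"] - inclass["1.5h"]  # type effect at 1.5 h/week
interaction = effect_at_3h - effect_at_15h        # non-zero => interaction
```

Here the type effect is 10 points at 3 hours/week but only 2 points at 1.5 hours/week, so the non-zero interaction (8 points) would make the 6-point main effect of type misleading on its own.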

Hybrid Experimental Designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomized block design, the Solomon four-group design, and the switched replication design.

Randomized block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between a treatment group (receiving the same treatment) and a control group (see Figure 10.5). The purpose of this design is to reduce the “noise” or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.


Figure 10.5. Randomized blocks design.

Solomon four-group design . In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of the posttest-only and pretest-posttest control group designs, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs but not in posttest-only designs. The design notation is shown in Figure 10.6.


Figure 10.6. Solomon four-group design

Switched replication design . This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organizational contexts where organizational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.


Figure 10.7. Switched replication design.

Quasi-Experimental Designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organization is used as the treatment group, while another section of the same class or a different organization in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having had a better teacher in a previous semester, which introduces the possibility of selection bias . Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Many true experimental designs can be converted to quasi-experimental designs by omitting random assignment. For instance, the quasi-experimental version of the pretest-posttest control group design is called the nonequivalent groups design (NEGD), as shown in Figure 10.8, with random assignment R replaced by non-equivalent (non-random) assignment N . Likewise, the quasi-experimental version of the switched replication design is called the non-equivalent switched replication design (see Figure 10.9).


Figure 10.8. NEGD design.


Figure 10.9. Non-equivalent switched replication design.

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression-discontinuity (RD) design . This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cutoff score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol, while those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardized test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program. The design notation can be represented as follows, where C represents the cutoff score:


Figure 10.10. RD design.

Because of the use of a cutoff score, it is possible that the observed results may be a function of the cutoff score rather than the treatment, which introduces a new threat to internal validity. However, using the cutoff score also ensures that limited or costly resources are distributed to the people who need them the most rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
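Cutoff-based assignment, the defining feature of the RD design, can be sketched as follows (the names and scores are hypothetical):

```python
def assign_by_cutoff(pretest_scores, cutoff):
    """Regression-discontinuity assignment: the cutoff score C, not chance,
    decides who receives the treatment (here: lower scorers need the program)."""
    treatment, control = [], []
    for subject, score in pretest_scores.items():
        (treatment if score < cutoff else control).append(subject)
    return treatment, control

scores = {"ana": 45, "ben": 72, "chloe": 58, "dev": 90, "ela": 60}
treatment, control = assign_by_cutoff(scores, cutoff=60)
print(treatment, control)  # ['ana', 'chloe'] ['ben', 'dev', 'ela']
```

Because assignment is fully determined by the pretest score, the two groups are non-equivalent by construction, which is why the analysis looks for a discontinuity at the cutoff rather than comparing raw group means.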

Proxy pretest design . This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.


Figure 10.11. Proxy pretest design.

Separate pretest-posttest samples design . This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the change in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.


Figure 10.12. Separate pretest-posttest samples design.

Nonequivalent dependent variable (NEDV) design . This is a single-group pre-post quasi-experimental design with two outcome measures, where one measure is theoretically expected to be influenced by the treatment and the other measure is not. For instance, if you are designing a new calculus curriculum for high school students, this curriculum is likely to influence students’ posttest calculus scores but not algebra scores. However, the posttest algebra scores may still vary due to extraneous factors such as history or maturation. Hence, the pre-post algebra scores can be used as a control measure, while that of pre-post calculus can be treated as the treatment measure. The design notation, shown in Figure 10.13, indicates the single group by a single N , followed by pretest O 1 and posttest O 2 for calculus and algebra for the same group of students. This design is weak in internal validity, but its advantage lies in not having to use a separate control group.

An interesting variation of the NEDV design is the pattern matching NEDV design , which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine whether the theoretical prediction is matched in actual observations. This pattern-matching technique, based on the degree of correspondence between theoretical and observed patterns, is a powerful way of alleviating internal validity concerns in the original NEDV design.


Figure 10.13. NEDV design.

Perils of Experimental Research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, many experimental studies use inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artifact of the content or difficulty of the task setting), generates findings that are uninterpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’etre of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to verify the adequacy of such tasks (by debriefing subjects after they perform the assigned task), conduct pilot tests (repeatedly, if necessary), and, if in doubt, use tasks that are simpler and more familiar to the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

  • Social Science Research: Principles, Methods, and Practices. Authored by : Anol Bhattacherjee. Provided by : University of South Florida. Located at : http://scholarcommons.usf.edu/oa_textbooks/3/ . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike



Cart

  • SUGGESTED TOPICS
  • The Magazine
  • Newsletters
  • Managing Yourself
  • Managing Teams
  • Work-life Balance
  • The Big Idea
  • Data & Visuals
  • Reading Lists
  • Case Selections
  • HBR Learning
  • Topic Feeds
  • Account Settings
  • Email Preferences

Research: Smaller, More Precise Discounts Could Increase Your Sales

  • Dinesh Gauri,
  • Abhijit Guha,
  • Abhijit Biswas,
  • Subhash Jha

how to conduct true experimental research

Why bigger discounts don’t necessarily attract more customers.

Retailers might think that bigger discounts attract more customers. But new research suggests that’s not always true. Sometimes, a smaller discount that looks more precise — say 6.8% as compared to 7% — can make people think the deal won’t last long, and they’ll buy more. In a series of nine experimental studies involving around 2,000 individuals considering online or retail purchases of a variety of products, the authors found precise discount depths — the difference between the original and sale price — can increase purchase intentions by up to 21%.

Discounts are an important promotional tactic retailers use to drive sales. So much so that discounts were a major factor for three out of four U.S. online shoppers in 2023 , luring consumers away from shopping at other retailers, getting them to increase their basket size, and convincing them to make purchases they otherwise wouldn’t. Discounts have a particularly strong impact on food purchases, where 90% of consumers reported stocking up on groceries when they were on sale .

  • Dinesh Gauri is a professor and Walmart chair in the department of marketing at the Sam M. Walton College of Business at the University of Arkansas. He is also the executive director of retail information at the Walton College. His research and teaching interests include retailing, pricing, marketing analytics, retail media, e-commerce, and social media marketing. He advises various companies in these areas and is a recognized leader in marketing.
  • Abhijit Guha is an associate professor in the department of marketing at the Darla Moore School of Business at the University of South Carolina. His research and teaching interests include retailing, pricing, and artificial intelligence.
  • Abhijit Biswas is the Kmart endowed chair and professor of marketing, chair of the department of marketing, and distinguished faculty fellow at the Mike Ilitch School of Business, Wayne State University. His research and teaching interests include retailing, pricing, and advertising. He has published over a hundred articles, the majority of which appear in academic journals including the Journal of Marketing and the Journal of Marketing Research.
  • Subhash Jha is an associate professor of marketing at the Fogelman College of Business & Economics at the University of Memphis. His research and teaching interests include retailing, pricing, online reviews, and the role of haptic cues.

COMMENTS

  1. Guide to Experimental Design

    Step 1: Define your variables. You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology: Example question 1: Phone use and sleep. You want to know how phone use before bedtime affects sleep patterns.

  2. What is a True Experimental Design?

    The true experimental design offers an accurate analysis of the data collected using statistical data analysis tools. Absence vs. presence of control groups: pre-experimental research designs do not usually employ a control group, which makes it difficult to establish contrast, while all three types of true experiments employ control groups.

  3. True Experiment

    One method would be to conduct a true experiment. A true experiment is a type of experimental design and is thought to be the most accurate type of experimental research. This is because a true ...

  4. True Experimental Design

    True experimental design is regarded as the most accurate form of experimental research, in that it tries to prove or disprove a hypothesis mathematically, with statistical analysis. For some of the physical sciences, such as physics, chemistry and geology, they are standard and commonly used. For social sciences, psychology and biology, they ...

  5. Experimental Design

    Experimental Design. Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results. Experimental design typically includes ...

  6. How to Conduct a True Experiment: 15 Steps (with Pictures)

    1. Randomly assign subjects into two groups. One group is the experimental group, while the other is the control group. You must guarantee that any given subject has an equal chance of being in either group. Use a random number generator to assign a number to each subject. Then place them in the two groups by number.

  7. True experimental design

    Steps to conduct a true experimental study. Step 1: Identify the research objective and state the hypothesis. Step 2: Determine the dependent and independent variables. Step 3: Define and randomly assign participants to the control and experimental groups. Step 4: Conduct pre-tests before beginning the experiment. Step 5: Conduct the experiment.

  9. Experimental research

    Experimental research—often considered to be the 'gold standard' in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different ...

  10. 14.2 True experiments

    A true experiment, often considered to be the "gold standard" in research designs, is thought of as one of the most rigorous of all research designs. In this design, one or more independent variables (as treatments) are manipulated by the researcher, subjects are randomly assigned (i.e., random assignment) to different treatment levels, and ...

  11. Experimental Research Designs: Types, Examples & Advantages

    A researcher can conduct experimental research in the following situations — ... A true experimental research design relies on statistical analysis to prove or disprove a researcher's hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of ...

  12. Module 2: Research Design

    The American Heritage Dictionary of the English Language defines an experiment as "A test under controlled conditions that is made to demonstrate a known truth, to examine the validity of a hypothesis, or to determine the efficacy of something previously untried." True experiments have four elements: manipulation, control , random assignment ...

  13. 13.2: True experimental design

    True experimental design is best suited for explanatory research questions. True experiments require random assignment of participants to control and experimental groups. Pretest/post-test research design involves two points of measurement—one pre-intervention and one post-intervention. Post-test only research design involves only one point ...

  14. Experimental Research

    In true experimental research, ... (i.e., control and treatment). In quasi experimental research, the researcher does not randomly assign subjects to treatment and control groups. In other words, the treatment is not distributed among participants randomly. ... Suppose we were conducting a unit to increase student sensitivity to prejudice.

  15. Experimental Research Design

    True experimental research is the most robust type of experimental study due to its careful control and manipulation of variables, random sampling, and random assignment. ... They may then conduct ...

  16. Experimental Research

    For establishing true cause and effect relationships, conducting experiments is the easiest and most definitive method. There are two major variables of interest in an experiment—the 'cause' and the 'effect'—and you directly manipulate causal variables, keeping other variables constant as far as possible. For establishing cause and effect relationships, you have to isolate and eliminate all ...

  17. A Complete Guide to Experimental Research

    Before conducting experimental research, you need to have a clear understanding of the experimental design. A true experimental design includes identifying a problem, formulating a hypothesis, determining the number of variables, selecting and assigning the participants, types of research designs, meeting ethical values, etc.

  18. Exploring Experimental Research: Methodologies, Designs, and

    Experimental research serves as a fundamental scientific method aimed at unraveling cause-and-effect relationships between variables across various disciplines. This paper delineates the key ...

  19. Four steps to complete an experimental research design

    True experimental research design. A true experimental research design involves testing a hypothesis in order to determine whether there is a cause-effect relationship between two or more sets of variables. Although there are a few established ways to conduct experimental research designs, all share four characteristics: ...

  20. Conducting an Experiment in Psychology

    When conducting an experiment, it is important to follow the seven basic steps of the scientific method: Ask a testable question. Define your variables. Conduct background research. Design your experiment. Perform the experiment. Collect and analyze the data. Draw conclusions.

  21. Experimental Research

    In the strict sense, experimental research is what we call a true experiment. This is an experiment where the researcher manipulates one variable, and control/randomizes the rest of the variables. ... It may be wise to first conduct a pilot-study or two before you do the real experiment. This ensures that the experiment measures what it should ...

  22. What Is a Controlled Experiment?

    Hypotheses are crucial to controlled experiments because they provide a clear focus and direction for the research. A hypothesis is a testable prediction about the relationship between variables. It guides the design of the experiment, including what variables to manipulate (independent variables) and what outcomes to measure (dependent variables).

  23. Experimental Design

    A pre-experimental design is a simple research process that happens before the actual experimental design takes place. The goal is to obtain preliminary results to gauge whether the financial and time investment of a true experiment will be worth it. Pre-experimental design example: A researcher wants to investigate the effect of a new type of meditation on stress levels in college students.

  24. What Is Internal Validity In Research?

    Conducting research that has strong internal and external validity ... causing a misleading association and making it difficult to isolate the true effect of the ... Laura T. Flannelly & Katherine R. B. Jankowski (2018): Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare, Journal of Health Care ...

  25. Chapter 10 Experimental Research

    This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling and is therefore more closely related to the external validity (generalizability) of findings. ... Not conducting ...

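Several of the excerpts above describe the same mechanical steps: randomly assign subjects to control and experimental groups, then compare pre- and post-treatment measurements. A minimal sketch of those two steps, with hypothetical subject IDs and invented scores purely for illustration, might look like this:

```python
import random
import statistics

rng = random.Random(7)  # seeded so the assignment is reproducible

# Hypothetical subject pool.
subjects = [f"subject_{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle the pool, then split it in half.
shuffled = subjects[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2
control, treatment = shuffled[:half], shuffled[half:]

# Pretest/post-test design: scores are invented; the treatment group gets
# a simulated gain while the control group only gets noise.
pretest = {s: rng.gauss(50, 5) for s in subjects}
posttest = {
    s: pretest[s] + (rng.gauss(8, 2) if s in treatment else rng.gauss(0, 2))
    for s in subjects
}

def mean_gain(group):
    """Average post-minus-pre change for a group of subjects."""
    return statistics.mean(posttest[s] - pretest[s] for s in group)

control_gain = mean_gain(control)
treatment_gain = mean_gain(treatment)
print(f"Control gain:   {control_gain:.1f}")
print(f"Treatment gain: {treatment_gain:.1f}")
```

Shuffling before splitting is what guarantees each subject an equal chance of landing in either group; in a real study the gain comparison would then be submitted to a statistical test rather than read off descriptively.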