
An introduction to different types of study design

Posted on 6th April 2021 by Hadi Abbas

""

Study designs are the set of methods and procedures used to collect and analyze data in a study.

Broadly speaking, there are 2 types of study designs: descriptive studies and analytical studies.

Descriptive studies

  • Describes specific characteristics in a population of interest
  • The most common forms are case reports and case series
  • In a case report, we discuss our experience with the patient’s symptoms, signs, diagnosis, and treatment
  • In a case series, several patients with similar experiences are grouped.

Analytical Studies

Analytical studies are of 2 types: observational and experimental.

Observational studies are studies that we conduct without any intervention or experiment. In those studies, we purely observe the outcomes.  On the other hand, in experimental studies, we conduct experiments and interventions.

Observational studies

Observational studies include many subtypes. Below, I will discuss the most common designs.

Cross-sectional study:

  • This is a transverse design: we take a specific sample at a specific point in time, without any follow-up
  • It allows us to calculate the frequency of a disease (prevalence) or the frequency of a risk factor
  • This design is easy to conduct
  • For example – if we want to know the prevalence of migraine in a population, we can conduct a cross-sectional study whereby we take a sample from the population and count the number of patients with migraine headaches (a worked example follows this list).
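As a minimal sketch of the arithmetic (all numbers below are made up for illustration), prevalence is simply the number of existing cases divided by the number of people sampled:

```python
# Hypothetical cross-sectional sample; the counts are illustrative only.
sample_size = 1_000        # people surveyed at a single point in time
migraine_cases = 120       # people who currently have migraine

prevalence = migraine_cases / sample_size
print(f"Prevalence of migraine: {prevalence:.1%}")  # 12.0%
```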

Cohort study:

  • We conduct this study by comparing two samples from the population: one with the risk factor and one without it
  • It shows us the risk of developing the disease in individuals with the risk factor compared to those without it (RR = relative risk)
  • Prospective: we follow the individuals into the future to find out who develops the disease
  • Retrospective: we look to the past to find out who developed the disease (e.g. using medical records)
  • This design is the strongest among the observational studies
  • For example – to find out the relative risk of developing chronic obstructive pulmonary disease (COPD) among smokers, we take a sample including smokers and non-smokers. Then, we calculate the number of individuals with COPD in both groups (a worked example follows this list).
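Below is a minimal sketch of how the relative risk falls out of a 2x2 table; the cohort counts are hypothetical and chosen only to keep the arithmetic easy to follow:

```python
# Hypothetical cohort counts (illustrative only).
#                 COPD   no COPD
# smokers           80      320
# non-smokers       20      580

risk_exposed = 80 / (80 + 320)      # risk of COPD among smokers
risk_unexposed = 20 / (20 + 580)    # risk of COPD among non-smokers

relative_risk = risk_exposed / risk_unexposed
print(f"RR = {relative_risk:.1f}")  # 6.0: smokers develop COPD six times as often
```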

Case-Control Study:

  • We conduct this study by comparing 2 groups: one group with the disease (cases) and another group without the disease (controls)
  • This design is always retrospective
  • We aim to find out the odds of having a risk factor or an exposure if an individual has a specific disease (odds ratio)
  • Relatively easy to conduct
  • For example – we want to study the odds of being a smoker among hypertensive patients compared to normotensive ones. To do so, we choose a group of patients diagnosed with hypertension and another group that serves as the control (normal blood pressure). Then we study their smoking history to find out if there is an association (a worked example follows this list).
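Here is a similar sketch for the odds ratio, again with hypothetical counts; the odds of exposure are compared between cases and controls:

```python
# Hypothetical case-control counts (illustrative only).
#                  smokers   non-smokers
# hypertensive        90         110      (cases)
# normotensive        60         140      (controls)

odds_cases = 90 / 110       # odds of smoking among hypertensive patients
odds_controls = 60 / 140    # odds of smoking among normotensive controls

odds_ratio = odds_cases / odds_controls
print(f"OR = {odds_ratio:.2f}")  # ~1.91
```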

Experimental Studies

  • Also known as interventional studies
  • Can involve animals and humans
  • Pre-clinical trials involve animals
  • Clinical trials are experimental studies involving humans
  • In clinical trials, we study the effect of an intervention compared to another intervention or placebo. As an example, I have listed the four phases of a drug trial:

I: We aim to assess the safety of the drug (is it safe?)

II: We aim to assess the efficacy of the drug (does it work?)

III: We want to know if this drug is better than the old treatment (is it better?)

IV: We follow up to detect long-term side effects (can it stay on the market?)

  • In randomized controlled trials, one group of participants receives the control, while the other receives the tested drug/intervention. These studies are the best way to evaluate the efficacy of a treatment (a simple allocation sketch follows below).
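As a rough sketch of what simple 1:1 randomization can look like in practice (the participant IDs and allocation scheme below are hypothetical, and real trials typically use more sophisticated, concealed allocation):

```python
import random

# Minimal sketch of simple (unstratified) 1:1 randomization for a two-arm trial.
participants = [f"P{i:03d}" for i in range(1, 21)]  # hypothetical participant IDs

random.seed(42)              # fixed seed so the example allocation is reproducible
random.shuffle(participants)

half = len(participants) // 2
allocation = {
    "intervention": participants[:half],
    "control": participants[half:],
}
print(allocation)
```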

Finally, the figure below will help you with your understanding of different types of study designs.

Figure: the two types of epidemiological studies are descriptive and analytical. Descriptive studies include case reports, case series, and descriptive surveys. Analytical studies are either observational or experimental. Observational studies can be cross-sectional, case-control, or cohort studies; experimental studies can be lab trials or field trials.


You may also be interested in the following blogs for further reading:

An introduction to randomized controlled trials

Case-control and cohort studies: a brief overview

Cohort studies: prospective and retrospective designs

Prevalence vs Incidence: what is the difference?



Observational vs. Experimental Study: A Comprehensive Guide

Explore the fundamental disparities between experimental and observational studies in this comprehensive guide by Santos Research Center, Corp. Uncover concepts such as control group, random sample, cohort studies, response variable, and explanatory variable that shape the foundation of these methodologies. Discover the significance of randomized controlled trials and case control studies, examining causal relationships and the role of dependent variables and independent variables in research designs.

This enlightening exploration also delves into the meticulous scientific study process, involving survey members, systematic reviews, and statistical analyses. Investigate the careful balance of control group and treatment group dynamics, highlighting how researchers meticulously assign variables and analyze statistical patterns to discern meaningful insights. From dissecting issues like lung cancer to understanding sleep patterns, this guide emphasizes the precision of controlled experiments and controlled trials, where variables are isolated and scrutinized, paving the way for a deeper comprehension of the world through empirical research.

Introduction to Observational and Experimental Studies

These two studies are the cornerstones of scientific inquiry, each offering a distinct approach to unraveling the mysteries of the natural world.

Observational studies allow us to observe, document, and gather data without direct intervention. They provide a means to explore real-world scenarios and trends, making them valuable when manipulating variables is not feasible or ethical. From surveys to meticulous observations, these studies shed light on existing conditions and relationships.

Experimental studies , in contrast, put researchers in the driver's seat. They involve the deliberate manipulation of variables to understand their impact on specific outcomes. By controlling the conditions, experimental studies establish causal relationships, answering questions of causality with precision. This approach is pivotal for hypothesis testing and informed decision-making.

At Santos Research Center, Corp., we recognize the importance of both observational and experimental studies. We employ these methodologies in our diverse research projects to ensure the highest quality of scientific investigation and to answer a wide range of research questions.

Observational Studies: A Closer Look

In our exploration of research methodologies, let's zoom in on observational research studies—an essential facet of scientific inquiry that we at Santos Research Center, Corp., expertly employ in our diverse research projects.

What is an Observational Study?

Observational research studies involve the passive observation of subjects without any intervention or manipulation by researchers. These studies are designed to scrutinize the relationships between variables and test subjects, uncover patterns, and draw conclusions grounded in real-world data.

Researchers refrain from interfering with the natural course of events. Instead, they meticulously gather data by keenly observing and documenting information about the test subjects and their surroundings. This approach permits the examination of variables that cannot be ethically or feasibly manipulated, making it particularly valuable in certain research scenarios.

Types of Observational Studies

Now, let's delve into the various forms that observational studies can take, each with its distinct characteristics and applications.

Cohort Studies:  A cohort study is a type of observational study that entails tracking a group of individuals over an extended period. Its primary goal is to identify potential causes or risk factors for specific outcomes. Cohort studies provide valuable insights into the development of conditions or diseases and the factors that influence them.

Case-Control Studies:  Case-control studies, on the other hand, involve the comparison of individuals with a particular condition or outcome to those without it (the control group). These studies aim to discern potential causal factors or associations that may have contributed to the development of the condition under investigation.

Cross-Sectional Studies:  Cross-sectional studies take a snapshot of a diverse group of individuals at a single point in time. By collecting data from this snapshot, researchers gain insights into the prevalence of a specific condition or the relationships between variables at that precise moment. Cross-sectional studies are often used to assess the health status of the different groups within a population or explore the interplay between various factors.

Advantages and Limitations of Observational Studies

Observational studies, as we've explored, are a vital pillar of scientific research, offering unique insights into real-world phenomena. In this section, we will dissect the advantages and limitations that characterize these studies, shedding light on the intricacies that researchers grapple with when employing this methodology.

Advantages: One of the paramount advantages of observational studies lies in their utilization of real-world data. Unlike controlled experiments that operate in artificial settings, observational studies embrace the complexities of the natural world. This approach enables researchers to capture genuine behaviors, patterns, and occurrences as they unfold. As a result, the data collected reflects the intricacies of real-life scenarios, making it highly relevant and applicable to diverse settings and populations.

Observational studies also excel in their capacity to examine long-term trends. By observing a group of subjects over extended periods, researchers gain the ability to track developments, trends, and shifts in behavior or outcomes. This longitudinal perspective is invaluable when studying phenomena that evolve gradually, such as chronic diseases, societal changes, or environmental shifts. It allows for the detection of subtle nuances that may be missed in shorter-term investigations.

Limitations: However, like any research methodology, observational studies are not without their limitations. One significant challenge lies in the potential for biases. Since researchers do not intervene in the subjects' experiences, various biases can creep into the data collection process. These biases may arise from participant self-reporting, observer bias, or selection bias, among others. Careful design and rigorous data analysis are crucial for mitigating these biases.

Another limitation is the presence of confounding variables. In observational studies, it can be challenging to isolate the effect of a specific variable from the myriad of other factors at play. These confounding variables can obscure the true relationship between the variables of interest, making it difficult to establish causation definitively. Research scientists must employ statistical techniques to control for or adjust these confounding variables.

Additionally, observational studies face constraints in their ability to establish causation. While they can identify associations and correlations between variables, they cannot prove a causal relationship. Establishing causation typically requires controlled experiments where researchers can manipulate independent variables systematically. In observational studies, researchers can only infer potential causation based on the observed associations.

Experimental Studies: Delving Deeper

In the intricate landscape of scientific research, we now turn our gaze toward experimental studies—a dynamic and powerful method that Santos Research Center, Corp. skillfully employs in our pursuit of knowledge.

What is an Experimental Study?

While some studies observe and gather data passively, experimental studies take a more proactive approach. Here, researchers actively introduce an intervention or treatment to an experimental group and study its effects on one or more variables. This methodology empowers researchers to manipulate independent variables deliberately and examine their direct impact on dependent variables.

Experimental studies are distinguished by their exceptional ability to establish cause-and-effect relationships. This invaluable characteristic allows researchers to unlock the mysteries of how one variable influences another, offering profound insights into the scientific questions at hand. Within the controlled environment of an experimental study, researchers can systematically test hypotheses, shedding light on complex phenomena.

Key Features of Experimental Studies

Central to the rigor and reliability of experimental studies are several key features that ensure the validity of their findings.

Randomized Controlled Trials:  Randomization is a critical element in experimental studies, as it ensures that subjects are assigned to groups at random. This random allocation minimizes the risk of unintentional biases and confounding variables, strengthening the credibility of the study's outcomes.

Control Groups:  Control groups play a pivotal role in experimental studies by serving as a baseline for comparison. They enable researchers to assess the true impact of the intervention being studied. By comparing the outcomes of the intervention group to those of the control group, researchers can discern whether the intervention caused the observed changes.

Blinding:  Both single-blind and double-blind techniques are employed in experimental studies to prevent biases from influencing the study's outcomes. Single-blind studies keep either the subjects or the researchers unaware of certain aspects of the study, while double-blind studies extend this blindness to both parties, enhancing the objectivity of the study.

These key features work in concert to uphold the integrity and trustworthiness of the results generated through experimental studies.

Advantages and Limitations of Experimental Studies

As with any research methodology, this one comes with its unique set of advantages and limitations.

Advantages:  These studies offer the distinct advantage of establishing causal relationships between two or more variables. The controlled environment allows researchers to exert control over variables, ensuring that changes in the dependent variable can be attributed to the independent variable. This meticulous control results in high-quality, reliable data that can significantly contribute to scientific knowledge.

Limitations:  However, experimental studies are not without their challenges. They may raise ethical concerns, particularly when the interventions involve potential risks to subjects. Additionally, their controlled nature can limit their real-world applicability, as the conditions in experiments may not accurately mirror those in the natural world. Moreover, executing an experimental study, particularly a randomized controlled trial, often demands substantial resources, including time, funding, and personnel.

Observational vs Experimental: A Side-by-Side Comparison

Having previously examined observational and experimental studies individually, we now embark on a side-by-side comparison to illuminate the key distinctions and commonalities between these foundational research approaches.

Key Differences and Notable Similarities

Methodologies

  • Observational Studies: Characterized by passive observation, where researchers collect data without direct intervention, allowing the natural course of events to unfold.
  • Experimental Studies: Involve active intervention, where researchers deliberately manipulate variables to discern their impact on specific outcomes, ensuring control over the experimental conditions.

Objectives

  • Observational Studies: Designed to identify patterns, correlations, and associations within existing data, shedding light on relationships within real-world settings.
  • Experimental Studies: Geared toward establishing causality by determining the cause-and-effect relationships between variables, often in controlled laboratory environments.

Data

  • Observational Studies: Yield real-world data, reflecting the complexities and nuances of natural phenomena.
  • Experimental Studies: Generate controlled data, allowing for precise analysis and the establishment of clear causal connections.

Observational studies excel at exploring associations and uncovering patterns within the intricacies of real-world settings, while experimental studies shine as the gold standard for discerning cause-and-effect relationships through meticulous control and manipulation in controlled environments. Understanding these differences and similarities empowers researchers to choose the most appropriate method for their specific research objectives.

When to Use Which: Practical Applications

The decision to employ either observational or experimental studies hinges on the research objectives at hand and the available resources. Observational studies prove invaluable when variable manipulation is impractical or ethically challenging, making them ideal for delving into long-term trends and uncovering intricate associations between certain variables (response variable or explanatory variable). On the other hand, experimental studies emerge as indispensable tools when the aim is to definitively establish causation and methodically control variables.

At Santos Research Center, Corp., our approach to both scientific study and methodology is characterized by meticulous consideration of the specific research goals. We recognize that the quality of outcomes hinges on selecting the most appropriate method of research study. Our unwavering commitment to employing both observational and experimental research studies further underscores our dedication to advancing scientific knowledge across diverse domains.

Conclusion: The Synergy of Experimental and Observational Studies in Research

In conclusion, both observational and experimental studies are integral to scientific research, offering complementary approaches with unique strengths and limitations. At Santos Research Center, Corp., we leverage these methodologies to contribute meaningfully to the scientific community.

Explore our projects and initiatives at Santos Research Center, Corp. by visiting our website or contacting us at (813) 249-9100, where our unwavering commitment to rigorous research practices and advancing scientific knowledge awaits.


Case Study vs. Single-Case Experimental Designs

What's the difference?

Case study and single-case experimental designs are both research methods used in psychology and other social sciences to investigate individual cases or subjects. However, they differ in their approach and purpose. Case studies involve in-depth examination of a single case, such as an individual, group, or organization, to gain a comprehensive understanding of the phenomenon being studied. On the other hand, single-case experimental designs focus on studying the effects of an intervention or treatment on a single subject over time. These designs use repeated measures and control conditions to establish cause-and-effect relationships. While case studies provide rich qualitative data, single-case experimental designs offer more rigorous experimental control and allow for the evaluation of treatment effectiveness.

Attribute         | Case Study                           | Single-Case Experimental Designs
Research Design   | Qualitative                          | Quantitative
Focus             | Exploratory                          | Hypothesis testing
Sample Size       | Usually small                        | Usually small
Data Collection   | Observations, interviews, documents  | Observations, measurements
Data Analysis     | Qualitative analysis                 | Statistical analysis
Generalizability  | Low                                  | Low
Internal Validity | Low                                  | High
External Validity | Low                                  | Low

Further Detail

Introduction

When conducting research in various fields, it is essential to choose the appropriate study design to answer research questions effectively. Two commonly used designs are case study and single-case experimental designs. While both approaches aim to provide valuable insights into specific phenomena, they differ in several key attributes. This article will compare and contrast the attributes of case study and single-case experimental designs, highlighting their strengths and limitations.

Definition and Purpose

A case study is an in-depth investigation of a particular individual, group, or event. It involves collecting and analyzing qualitative or quantitative data to gain a comprehensive understanding of the subject under study. Case studies are often used to explore complex phenomena, generate hypotheses, or provide detailed descriptions of unique cases.

On the other hand, single-case experimental designs are a type of research design that focuses on studying a single individual or a small group over time. These designs involve manipulating an independent variable and measuring its effects on a dependent variable. Single-case experimental designs are particularly useful for examining cause-and-effect relationships and evaluating the effectiveness of interventions or treatments.

Data Collection and Analysis

In terms of data collection, case studies rely on various sources such as interviews, observations, documents, and artifacts. Researchers often employ multiple methods to gather rich and diverse data, allowing for a comprehensive analysis of the case. The data collected in case studies are typically qualitative in nature, although quantitative data may also be included.

In contrast, single-case experimental designs primarily rely on quantitative data collection methods. Researchers use standardized measures and instruments to collect data on the dependent variable before, during, and after the manipulation of the independent variable. This allows for a systematic analysis of the effects of the intervention or treatment on the individual or group being studied.
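To make the "before, during, and after" measurement idea concrete, here is a minimal sketch of how data from a simple AB (baseline/intervention) single-case design might be summarized; the weekly scores are hypothetical:

```python
# Hypothetical single-case (AB) data: the same individual measured repeatedly.
baseline_phase = [12, 14, 13, 15, 14]    # weekly scores before the intervention (A)
intervention_phase = [9, 8, 7, 8, 6]     # weekly scores during the intervention (B)

baseline_mean = sum(baseline_phase) / len(baseline_phase)
intervention_mean = sum(intervention_phase) / len(intervention_phase)
print(f"baseline mean = {baseline_mean:.1f}, intervention mean = {intervention_mean:.1f}")
# A clear, stable shift between phases is taken as evidence of an intervention effect.
```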

Generalizability

One of the key differences between case studies and single-case experimental designs is their generalizability. Case studies are often conducted on unique or rare cases, making it challenging to generalize the findings to a larger population. The focus of case studies is on providing detailed insights into specific cases rather than making broad generalizations.

On the other hand, single-case experimental designs aim to establish causal relationships and can provide evidence for generalizability. By systematically manipulating the independent variable and measuring its effects on the dependent variable, researchers can draw conclusions about the effectiveness of interventions or treatments that may be applicable to similar cases or populations.

Internal Validity

Internal validity refers to the extent to which a study accurately measures the cause-and-effect relationship between variables. In case studies, establishing internal validity can be challenging due to the lack of control over extraneous variables. The presence of multiple data sources and the potential for subjective interpretation may also introduce bias.

In contrast, single-case experimental designs prioritize internal validity by employing rigorous control over extraneous variables. Researchers carefully design the intervention or treatment, implement it consistently, and measure the dependent variable under controlled conditions. This allows for a more confident determination of the causal relationship between the independent and dependent variables.

Time and Resources

Case studies often require significant time and resources due to their in-depth nature. Researchers need to spend considerable time collecting and analyzing data from various sources, conducting interviews, and immersing themselves in the case. Additionally, case studies may involve multiple researchers or a research team, further increasing the required resources.

On the other hand, single-case experimental designs can be more time and resource-efficient. Since they focus on a single individual or a small group, data collection and analysis can be more streamlined. Researchers can also implement interventions or treatments in a controlled manner, reducing the time and resources needed for data collection.

Ethical Considerations

Both case studies and single-case experimental designs require researchers to consider ethical implications. In case studies, researchers must ensure the privacy and confidentiality of the individuals or groups being studied. Informed consent and ethical guidelines for data collection and analysis should be followed to protect the rights and well-being of the participants.

Similarly, in single-case experimental designs, researchers must consider ethical considerations when implementing interventions or treatments. The well-being and safety of the individual or group being studied should be prioritized, and informed consent should be obtained. Additionally, researchers should carefully monitor and evaluate the potential risks and benefits associated with the intervention or treatment.

Case studies and single-case experimental designs are valuable research approaches that offer unique insights into specific phenomena. While case studies provide in-depth descriptions and exploratory analyses of individual cases, single-case experimental designs focus on establishing causal relationships and evaluating interventions or treatments. Researchers should carefully consider the attributes and goals of their study when choosing between these two designs, ensuring that the selected approach aligns with their research questions and objectives.



Writing a Case Study


What is a case study?


A Case study is: 

  • An in-depth research design that primarily uses a qualitative methodology but sometimes​​ includes quantitative methodology.
  • Used to examine an identifiable problem confirmed through research.
  • Used to investigate an individual, group of people, organization, or event.
  • Used to mostly answer "how" and "why" questions.

What are the different types of case studies?


Descriptive

Example research question: How has the implementation and use of the instructional coaching intervention for elementary teachers impacted students’ attitudes toward reading?

Explanatory

Example research question: Why do differences exist when implementing the same online reading curriculum in three elementary classrooms?

Exploratory

Example research question: What are potential barriers to students’ reading success when middle school teachers implement the Ready Reader curriculum online?

Multiple Case Studies (or Collective Case Study)

Example research question: How are individual school districts addressing student engagement in an online classroom?

Intrinsic

Example research question: How does a student’s familial background influence a teacher’s ability to provide meaningful instruction?

Instrumental

Example research question: How does a rural school district’s integration of a reward system maximize student engagement?

Note: These are the primary case studies. As you continue to research and learn about case studies, you will begin to find a robust list of different types.

Who are your case study participants?


 

  • Individual: this type of study is implemented to understand an individual by developing a detailed explanation of the individual’s lived experiences or perceptions.
  • Group: this type of study is implemented to explore a particular group of people’s perceptions.
  • Organization: this type of study is implemented to explore the perspectives of people who work for or had interaction with a specific organization or company.
  • Event: this type of study is implemented to explore participants’ perceptions of an event.

What is triangulation?

Validity and credibility are essential parts of a case study. Therefore, the researcher should include triangulation to ensure trustworthiness while accurately reflecting what the researcher seeks to investigate.


How to write a Case Study?

When developing a case study, there are different ways you could present the information, but remember to include the five parts for your case study.


 




Frequently asked questions

What’s the difference between correlational and experimental research?

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .
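As a minimal sketch of what a correlational analysis looks like (the data below are simulated purely for illustration), you only measure the two variables and quantify their association; nothing is manipulated:

```python
import numpy as np

# Simulated (hypothetical) observations of two variables, with no manipulation.
rng = np.random.default_rng(0)
hours_slept = rng.normal(7, 1, size=100)
reaction_time_ms = 300 - 10 * hours_slept + rng.normal(0, 15, size=100)

r = np.corrcoef(hours_slept, reaction_time_ms)[0, 1]
print(f"Pearson r = {r:.2f}")  # an association, not evidence of causation by itself
```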

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.


Content validity shows you how accurately a test or other measurement method taps  into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
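The sketch below illustrates the stratified (probability) case: a random draw is taken within every subgroup. The population, strata, and sampling fraction are all hypothetical:

```python
import random

# Minimal sketch of stratified random sampling with hypothetical strata.
random.seed(1)
population = (
    [("under_30", f"id{i}") for i in range(600)]
    + [("30_plus", f"id{i}") for i in range(600, 1000)]
)

def stratified_sample(units, frac):
    strata = {}
    for stratum, unit in units:
        strata.setdefault(stratum, []).append(unit)
    # Draw a simple random sample within each stratum, proportional to its size.
    return {s: random.sample(members, int(len(members) * frac))
            for s, members in strata.items()}

sample = stratified_sample(population, frac=0.05)
print({s: len(m) for s, m in sample.items()})  # {'under_30': 30, '30_plus': 20}
```

In quota sampling, by contrast, the same per-stratum targets would be filled with whichever units are convenient to reach, rather than by random draws.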

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
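A minimal sketch of that idea, using simulated scores (the scales and data below are hypothetical): the new measure should correlate strongly with an established measure of the same construct and weakly with a measure of an unrelated construct:

```python
import numpy as np

# Hypothetical scores for 200 respondents, simulated for illustration.
rng = np.random.default_rng(7)
new_anxiety_scale = rng.normal(50, 10, size=200)
established_anxiety = new_anxiety_scale + rng.normal(0, 5, size=200)  # same construct
shoe_size = rng.normal(42, 3, size=200)                               # unrelated construct

convergent_r = np.corrcoef(new_anxiety_scale, established_anxiety)[0, 1]
discriminant_r = np.corrcoef(new_anxiety_scale, shoe_size)[0, 1]
print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```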

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , which also include face validity , content validity, and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).
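To tie the terminology together, here is a minimal sketch of a simple regression with simulated data (the variable names and values are hypothetical): the dependent/outcome variable is modeled from the independent/predictor variable:

```python
import numpy as np

# Hypothetical data: response (dependent variable) modeled from dose (independent variable).
dose = np.array([0, 1, 2, 3, 4, 5], dtype=float)        # right-hand side (predictor)
response = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])    # left-hand side (outcome)

slope, intercept = np.polyfit(dose, response, deg=1)    # response ≈ slope * dose + intercept
print(f"response ≈ {slope:.2f} * dose + {intercept:.2f}")
```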

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, but you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered highly credible sources due to the stringent process they go through before publication.

In general, the peer review process includes the following steps: 

  • First, the author submits the manuscript to the editor.
  • Next, the editor decides whether to:
    • Reject the manuscript and send it back to the author, or 
    • Send it onward to the selected peer reviewer(s) 
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made. 
  • Lastly, the edited manuscript is sent back to the author. They input the edits, and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
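To make these steps concrete, here is a minimal, illustrative sketch of some common cleaning operations in Python with pandas. The dataset, column names, and the outlier threshold are all invented for the example; real projects will need their own rules.

```python
# Rough sketch of common data-cleaning steps; data and thresholds are invented.
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "participant": [1, 2, 2, 3, 4],
    "weight_kg": [70.5, np.nan, np.nan, 68.0, 650.0],   # a missing value and an implausible outlier
    "group": ["control", "treatment", "treatment", "Control", "treatment"],
})

df = df.drop_duplicates()                                     # remove duplicate rows
df["group"] = df["group"].str.lower()                         # standardize inconsistent formatting
df = df[df["weight_kg"].isna() | (df["weight_kg"] < 300)]     # screen out an implausible outlier
df["weight_kg"] = df["weight_kg"].fillna(df["weight_kg"].median())  # one way to handle missing values

print(df)
```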

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These erroneous conclusions can have serious practical consequences, such as misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .
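The point that the correlation coefficient and the slope are separate quantities can be illustrated with a short simulation. In this sketch (data are simulated; any similar example would do), two datasets share the same Pearson's r but have regression slopes that differ by a factor of ten.

```python
# Illustrative sketch: identical correlation, very different slopes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)
noise = rng.normal(size=200)

y_shallow = x + noise          # slope around 1
y_steep = 10 * (x + noise)     # same scatter pattern, slope around 10

for label, y in [("shallow", y_shallow), ("steep", y_steep)]:
    r, _ = stats.pearsonr(x, y)
    slope, intercept, *_ = stats.linregress(x, y)
    print(f"{label}: r = {r:.3f}, regression slope = {slope:.2f}")
# Both datasets print the same r, but only the regression analysis reveals the slope.
```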

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data is from a random or representative sample
  • You expect a linear relationship between the two variables
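One informal way to screen these assumptions before computing Pearson's r is sketched below. The variables (height and weight), the normality test, and the z-score outlier rule are illustrative choices, not the only acceptable ones.

```python
# Rough sketch of checking Pearson's r assumptions on simulated interval-level data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
height_cm = rng.normal(170, 10, size=100)                      # interval/ratio variable
weight_kg = 0.5 * height_cm + rng.normal(0, 5, size=100)       # roughly linear relationship

# Normality check (Shapiro-Wilk) for each variable.
for name, values in [("height", height_cm), ("weight", weight_kg)]:
    _, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p:.3f} (p > 0.05 is consistent with normality)")

# Simple outlier screen using z-scores.
z = np.abs(stats.zscore(np.column_stack([height_cm, weight_kg])))
print("possible outliers:", int((z > 3).any(axis=1).sum()))

r, p = stats.pearsonr(height_cm, weight_kg)
print(f"Pearson's r = {r:.2f}, p = {p:.4f}")
```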

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize bias from order effects.
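As a minimal illustration of randomizing question order between respondents, the sketch below shuffles a hypothetical question list; the questions and the per-respondent seeding are invented for the example.

```python
# Minimal sketch: give each respondent a randomized question order to reduce order effects.
import random

questions = [
    "How satisfied are you with your commute?",
    "How many hours do you work per week?",
    "How would you rate your overall health?",
]

def questionnaire_for(respondent_id: int) -> list[str]:
    """Return the questions in a random order, reproducible per respondent."""
    order = questions.copy()
    random.Random(respondent_id).shuffle(order)  # seeded so each respondent's order can be logged
    return order

print(questionnaire_for(1))
print(questionnaire_for(2))
```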

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to false cause fallacy .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.
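The contrast can be made concrete with a small simulation (the true value, error sizes, and bias below are invented): averaging many measurements cancels the random error but leaves the systematic bias untouched.

```python
# Illustrative simulation: random error averages out; systematic error does not.
import numpy as np

rng = np.random.default_rng(42)
true_weight = 70.0  # kg, the value we are trying to measure

random_only = true_weight + rng.normal(0, 0.5, size=1000)        # noisy but unbiased scale
systematic = true_weight + 1.2 + rng.normal(0, 0.5, size=1000)   # scale reads 1.2 kg too high

print("mean with random error only:", round(random_only.mean(), 2))   # close to 70.0
print("mean with systematic error:", round(systematic.mean(), 2))     # stays near 71.2
```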

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If both your explanatory and response variables are quantitative, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
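For illustration, the sketch below crosses the levels of two made-up independent variables (caffeine dose and sleep condition) to enumerate the conditions of a 3 × 2 factorial design.

```python
# Sketch: building the conditions of a factorial design by crossing variable levels.
from itertools import product

caffeine_dose = ["none", "low", "high"]
sleep_condition = ["normal sleep", "sleep deprived"]

conditions = list(product(caffeine_dose, sleep_condition))
for dose, sleep in conditions:
    print(f"condition: caffeine = {dose}, sleep = {sleep}")

print(f"{len(caffeine_dose)} x {len(sleep_condition)} = {len(conditions)} conditions")
```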

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
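A minimal sketch of this procedure is shown below; the participant IDs, group sizes, and seed are invented for the example, and a lottery method or coin flips would work just as well.

```python
# Minimal sketch of random assignment: number each participant, then randomize group membership.
import random

participants = [f"P{n:03d}" for n in range(1, 21)]  # unique IDs for 20 sample members

random.seed(2024)            # seeded only so the example is reproducible
shuffled = participants.copy()
random.shuffle(shuffled)

half = len(shuffled) // 2
control_group = shuffled[:half]
experimental_group = shuffled[half:]

print("control:", control_group)
print("experimental:", experimental_group)
```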

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
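One common way to include a control variable in a regression model is sketched below using statsmodels' formula interface. The dataset and variable names (exercise, age, blood pressure) are invented, and the simulated numbers are for illustration only.

```python
# Hedged sketch of "controlling for a variable" by adding it to a regression model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200
age = rng.uniform(20, 70, n)                          # control variable
exercise = rng.uniform(0, 10, n)                      # independent variable (hours/week)
blood_pressure = 110 + 0.6 * age - 1.5 * exercise + rng.normal(0, 5, n)  # dependent variable

df = pd.DataFrame({"age": age, "exercise": exercise, "blood_pressure": blood_pressure})

# Including 'age' in the model isolates the exercise effect from the age effect.
model = smf.ols("blood_pressure ~ exercise + age", data=df).fit()
print(model.params)   # estimated effects of exercise and age on blood pressure
```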

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable
  • When it’s statistically controlled for, the correlation between the independent and dependent variables becomes weaker, because part (or all) of the effect is transmitted through the mediator.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing the population size by your target sample size.
  • Choose every k th member of the population as your sample.
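The three steps above can be sketched in a few lines of Python; the population list, target sample size, and the fixed starting point are invented for the example (in practice, a random start between 1 and k is often used).

```python
# Sketch of systematic sampling with an invented population list.
population = [f"member_{i}" for i in range(1, 1001)]   # 1. a listed population of 1,000
target_sample_size = 50

k = len(population) // target_sample_size              # 2. interval k = 1000 / 50 = 20
sample = population[::k][:target_sample_size]          # 3. take every kth member

print(f"k = {k}, sample size = {len(sample)}")
print(sample[:3])  # ['member_1', 'member_21', 'member_41']
```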

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
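A rough sketch of this two-step procedure with pandas is shown below: divide into strata, then sample randomly within each stratum. The DataFrame, the education strata, and the 10% sampling fraction are all invented for illustration.

```python
# Rough sketch of proportionate stratified sampling with pandas; data are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
people = pd.DataFrame({
    "id": range(1, 1001),
    "education": rng.choice(["high school", "bachelor", "graduate"], size=1000),
})

# Sample 10% of each education stratum.
stratified_sample = people.groupby("education").sample(frac=0.10, random_state=42)
print(stratified_sample["education"].value_counts())
```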

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
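As a rough illustration of the single- and double-stage variants described above, the sketch below treats made-up schools as clusters; the numbers of schools, students, and selections are invented.

```python
# Minimal sketch of single- vs double-stage cluster sampling with invented "school" clusters.
import random

random.seed(11)
schools = {f"school_{s}": [f"school_{s}_student_{i}" for i in range(1, 51)]
           for s in range(1, 21)}

selected_schools = random.sample(list(schools), k=5)          # step 1: randomly select clusters

# Single-stage: collect data from every unit in the selected clusters.
single_stage = [student for school in selected_schools for student in schools[school]]

# Double-stage: randomly sample units *within* each selected cluster.
double_stage = [student
                for school in selected_schools
                for student in random.sample(schools[school], k=10)]

print(len(single_stage), len(double_stage))   # 250 vs 50 students
```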

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
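Given a complete sampling frame, the selection step itself is simple to sketch; the household list, sample size, and seed below are invented for illustration.

```python
# Sketch of simple random sampling: every listed member has an equal chance of selection.
import random

population = [f"household_{i}" for i in range(1, 10_001)]  # a complete sampling frame
random.seed(5)
sample = random.sample(population, k=500)   # each household has an equal 500/10,000 chance

print(len(sample), sample[:3])
```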

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have a clear rank order but the intervals between response options can’t be assumed to be even.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
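A minimal sketch of combining item scores into an overall scale score is shown below. The item wording, the 1–5 response scale, and the reverse-coded item are invented for the example.

```python
# Illustrative sketch of scoring a Likert scale by combining item responses.
responses = {             # one participant's answers on a 1-5 agreement scale
    "I enjoy working in teams": 4,
    "I feel comfortable sharing ideas in groups": 5,
    "I prefer to work alone": 2,          # reverse-coded item
    "Group work improves my results": 4,
}

def scale_score(answers: dict[str, int], reverse_items: set[str], scale_max: int = 5) -> int:
    """Sum item scores, flipping reverse-coded items (e.g., 2 -> 4 on a 1-5 scale)."""
    total = 0
    for item, score in answers.items():
        total += (scale_max + 1 - score) if item in reverse_items else score
    return total

print(scale_score(responses, reverse_items={"I prefer to work alone"}))  # 17
```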

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
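As one small illustration of this logic, the sketch below runs an independent-samples t-test on simulated control and treatment scores; the group means, spread, and sample sizes are invented.

```python
# Minimal sketch of a hypothesis test: how likely is this difference under chance alone?
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
control = rng.normal(loc=100, scale=15, size=40)      # e.g., scores without the treatment
treatment = rng.normal(loc=108, scale=15, size=40)    # e.g., scores with the treatment

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value means a difference this large would rarely arise by chance alone.
```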

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study:

  • Repeated observations
  • Observes the same group multiple times
  • Follows changes in participants over time

Cross-sectional study:

  • Observations at a single point in time
  • Observes different groups (a “cross-section”) in the population
  • Provides a snapshot of society at a given point

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Distinguishing Between Case Studies & Experiments

Maria Nguyen

Case Study vs Experiment

Case studies and experiments are two distinct research methods used across various disciplines, providing researchers with the ability to study and analyze a subject through different approaches. This variety in research methods allows the researcher to gather both qualitative and quantitative data, cross-check the data, and assign greater validity to the conclusions and overall findings of the research. A case study is a research method in which the researcher explores the subject in depth, while an experiment is a research method where two specific groups or variables are used to test a hypothesis. This article will examine the differences between case study and experiment further.

What is a Case Study?

A case study is a research method where an individual, event, or significant place is studied in depth. In the case of an individual, the researcher studies the person’s life history, which can include important days or special experiences. The case study method is used in various social sciences such as sociology, anthropology, and psychology. Through a case study, the researcher can identify and understand the subjective experiences of an individual regarding a specific topic. For example, a researcher studying the impact of second rape on the lives of rape victims can conduct several case studies to understand the subjective experiences of individuals and social mechanisms that contribute to this phenomenon. The case study is a qualitative research method that can be subjective.

What is an Experiment?

An experiment, unlike a case study, can be classified as a quantitative research method, as it provides statistically significant data and an objective, empirical approach. Experiments are primarily used in natural sciences, as they allow the scientist to control variables. In social sciences, controlling variables can be challenging and may lead to faulty conclusions. In an experiment, there are mainly two variables: the independent variable and the dependent variable. The researcher tries to test their hypothesis by manipulating these variables. There are different types of experiments, such as laboratory experiments (conducted in laboratories where conditions can be strictly controlled) and natural experiments (which take place in real-life settings). As seen, case study methods and experiments are very different from one another. However, most researchers prefer to use triangulation when conducting research to minimize biases.

Key Takeaways

  • Case studies are in-depth explorations of a subject, providing qualitative data, while experiments test hypotheses by manipulating variables, providing quantitative data.
  • Experiments are primarily used in natural sciences, whereas case studies are primarily used in social sciences.
  • Experiments involve testing the correlation between two variables (independent and dependent), while case studies focus on exploring a subject in depth without testing correlations between variables.


Case Study Research Method in Psychology

Saul McLeod, PhD

Olivia Guy-Evans, MSc

Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

The case study is not a research method, but researchers select methods of data collection and analysis that will generate material suitable for case studies.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

This makes it clear that the case study is a method that should only be used by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence. Only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.

 Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies analyzes personality changes in railroad worker Phineas Gage after an 1848 brain injury involving a tamping iron piercing his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies : Used to explore causation in order to find underlying principles. Helpful for doing qualitative analysis to explain presumed causal links.
  • Exploratory case studies : Used to explore situations where an intervention being evaluated has no clear set of outcomes. It helps define questions and hypotheses for future research.
  • Descriptive case studies : Describe an intervention or phenomenon and the real-life context in which it occurred. It is helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies : Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic : Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective : Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is an extremely effective procedure for obtaining information about an individual, and it may be used to collect comments from the person’s friends, parents, employer, workmates, and others who have a good knowledge of the person, as well as to obtain facts from the person him or herself.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Searching historical archives, museum collections and databases to find relevant documents, visual/audio records related to the case history and context.

Public archives like newspapers, organizational records, photographic collections could all include potentially relevant pieces of information to shed light on attitudes, cultural perspectives, common practices and historical contexts related to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research : Organizational records like employee surveys, turnover/retention data, policies, incident reports etc. may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, employee behaviors etc.
  • Clinical psychology : Therapists/hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, treatment plans etc. This could shed light on clinical practices.
  • School psychology : Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration, and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention, including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention, considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.

Strengths

  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permits investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The volume of data generated, together with time constraints, can limit the depth of analysis that is possible within the available resources.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data , a lot depends on the psychologist’s interpretation of the information she has acquired.

This means that there is a lot of scope for observer bias, and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data means.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans ).

This is also true of Money’s interpretation of the Bruce/Brenda case study (Diamond, 1997) when he ignored evidence that went against his theory.

Breuer, J., & Freud, S. (1895).  Studies on hysteria . Standard Edition 2: London.

Curtiss, S. (1981). Genie: The case of a modern wild child .

Diamond, M., & Sigmundson, K. (1997). Sex Reassignment at Birth: Long-term Review and Clinical Implications. Archives of Pediatrics & Adolescent Medicine , 151(3), 298-304

Freud, S. (1909a). Analysis of a phobia of a five year old boy. In The Pelican Freud Library (1977), Vol 8, Case Histories 1, pages 169-306

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der “Rattenmann”). Jb. psychoanal. psychopathol. Forsch ., I, p. 357-421; GW, VII, p. 379-463; Notes upon a case of obsessional neurosis, SE , 10: 151-318.

Harlow J. M. (1848). Passage of an iron rod through the head.  Boston Medical and Surgical Journal, 39 , 389–393.

Harlow, J. M. (1868).  Recovery from the Passage of an Iron Bar through the Head .  Publications of the Massachusetts Medical Society. 2  (3), 327-347.

Money, J., & Ehrhardt, A. A. (1972).  Man & Woman, Boy & Girl : The Differentiation and Dimorphism of Gender Identity from Conception to Maturity. Baltimore, Maryland: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study

Case Study vs Experiment: Know the Difference

Case studies and experiments are two prominent approaches at the forefront of scholarly inquiry. While case studies examine the complexities of real-life situations, aiming for depth and contextual understanding, experiments seek to uncover causal relationships through controlled manipulation and observation. Both research methods are indispensable tools for understanding phenomena, yet they diverge significantly in their approaches, aims, and applications.

In this article, we’ll unpack the key differences between case studies and experiments, exploring their strengths, limitations, and the unique insights each offers.

What Is a Case Study?

A case study is a research method that involves an in-depth examination of a particular individual, group, event, or phenomenon within its real-life context. It aims to provide a detailed and comprehensive analysis of the subject under investigation, often using multiple data sources such as interviews, observations, documents, and archival records.

The case study method is used in psychology, sociology, anthropology, education, and business to explore complex issues, understand unique situations, and generate rich, contextualized insights. Case studies allow scholars to explore the intricacies of real-world phenomena, uncovering patterns, relationships, and underlying factors that may not be readily apparent through other research methods.

Overall, case studies offer a holistic and nuanced understanding of the subject of interest, facilitating deeper exploration and interpretation of complex social and human phenomena.

What Is an Experiment?

Compared to the case study method, an experiment investigates cause-and-effect relationships by systematically manipulating one or more variables and observing the effects on other variables. In an experiment, researchers aim to establish causal relationships between an independent variable (the factor being manipulated) and a dependent variable (the outcome being measured).

Experiments are characterized by their controlled and systematic approach, often involving the random assignment of participants to different experimental conditions to minimize bias and ensure the validity of the findings. They are commonly used in fields such as psychology, biology, physics, and medicine to test hypotheses, identify causal mechanisms, and provide empirical evidence for theories.

The experimental method allows scholars to establish causal relationships with high confidence, providing valuable insights into the underlying mechanisms of behavior, natural phenomena, and social processes. Other research methods include:

  • Survey method: Collects research data from individuals through questionnaires or interviews to gather information about attitudes, opinions, behaviors, and characteristics of a population.
  • Observation method: Systematically observes and records behavior, events, or phenomena as they naturally occur in real-life settings to study social interactions, environmental factors, or naturalistic behavior.
  • Qualitative and quantitative research method: Qualitative research explores meanings, perceptions, and experiences using interviews or content analysis, while quantitative research analyzes numerical data to test hypotheses or identify patterns and relationships.
  • Archival research method: Analyzes existing documents, records, or data sources such as historical documents or organizational archives to investigate trends, patterns, or historical events.
  • Action research method: Involves collaboration between scholars and practitioners to identify and address practical problems or challenges within specific organizational or community contexts, aiming to generate actionable knowledge and facilitate positive change.

Difference Between Case Study and Experiment

Case Study and Experiment Definitions

The case study method involves a deep investigation into a specific individual, group, event, or phenomenon within its real-life context, aiming to provide rich and detailed insights into complex issues. Researchers gather data from multiple sources, such as interviews, observations, documents, and archival records, to comprehensively understand the subject under study.

Case studies are particularly useful for exploring unique or rare phenomena, offering a holistic view that captures the intricacies and nuances of the situation. However, findings from case studies may be challenging to generalize to broader populations due to the specificity of the case and the lack of experimental control.

An experiment is a research method that systematically manipulates one or more variables to observe their effects on other variables, aiming to establish cause-and-effect relationships under controlled conditions. Researchers design experiments with high control over variables, often using standardized procedures and quantitative measures for research data collection.

Experiments are well-suited for testing hypotheses and identifying causal relationships in controlled environments, allowing researchers to draw conclusions about the effects of specific interventions or manipulations. However, experiments may lack the depth and contextual richness of case studies, and findings are typically limited to the specific conditions of the experiment.

Case Study and Experiment Characteristic Features

  • In case studies , variables are observed rather than manipulated. Researchers do not typically control variables; instead, they examine how naturally occurring variables interact within the case context.
  • Experiments involve manipulating one or more variables to observe their effects on other variables. Researchers actively control and manipulate variables to test hypotheses and establish cause-and-effect relationships.
  • Case studies may not always begin with a specific hypothesis. Instead, researchers often seek to generate hypotheses based on the data collected during the study.
  • Experiments are typically conducted to test specific hypotheses. Researchers formulate a hypothesis based on existing theory or observations, and the experiment is designed to confirm or refute this hypothesis.

Case Study vs Experiment

Manipulating Variables

  • Variables are not manipulated in case studies . Instead, researchers observe and analyze how naturally occurring variables influence the phenomenon of interest.
  • In experiments, researchers actively manipulate one or more independent variables to observe their effects on the dependent variables. This manipulation allows researchers to establish causal relationships between variables.
  • Case studies often involve collecting qualitative data from multiple sources, such as interviews, observations, documents, and archival records. Researchers analyze this research data to provide a detailed and contextualized understanding of the case.
  • Experiments typically involve the collection of quantitative data using standardized procedures and measures. Researchers use statistical analysis to interpret the research data and draw conclusions about the effects of the manipulated variables.

Areas of Implementation

  • Case studies are widely used in social sciences, such as psychology, sociology, anthropology, education, and business, to explore complex issues, understand unique situations, and generate rich, contextualized insights.
  • Experiments are common in fields such as psychology, biology, physics, and medicine, where researchers seek to test hypotheses, identify causal mechanisms, and provide empirical evidence for theories through controlled manipulation and observation.

15 Famous Experiments and Case Studies in Psychology

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.

Psychology has seen thousands upon thousands of research studies over the years. Most of these studies have helped shape our current understanding of human thoughts, behavior, and feelings.

The studies in this list are considered classic examples of psychological case studies and experiments, and they are still taught in introductory psychology courses to this day.

Some studies, however, were so shocking and controversial that you’d probably wonder why they were conducted at all. Imagine participating in an experiment for a small reward or extra class credit, only to be left scarred for life. These kinds of studies, however, paved the way for a more ethical approach to studying psychology and the implementation of research standards such as the use of debriefing in psychology research.

Case Study vs. Experiment

Before we dive into the list of the most famous studies in psychology, let us first review the difference between case studies and experiments.

  • A case study is an in-depth study and analysis of an individual, group, community, or phenomenon. The results of a case study cannot be applied to the whole population, but they can provide insights for further studies.
  • It often uses qualitative research methods such as observations, surveys, and interviews.
  • It is often conducted in real-life settings rather than in controlled environments.
  • An experiment is a type of study done on a sample of randomly selected participants, the results of which can be generalized to the whole population.
  • It often uses quantitative research methods that rely on numbers and statistics.
  • It is conducted in controlled environments, wherein some things or situations are manipulated.

See Also: Experimental vs Observational Studies

Famous Experiments in Psychology

1. The Marshmallow Experiment

Psychologist Walter Mischel conducted the marshmallow experiment at Stanford University in the 1960s to early 1970s. It was a simple test that aimed to define the connection between delayed gratification and success in life.

The instructions were fairly straightforward: children ages 4-6 were presented with a marshmallow on a table and told that they would receive a second one if they could wait 15 minutes without eating the first.

About one-third of the 600 participants succeeded in delaying gratification to receive the second marshmallow. Mischel and his team followed up on these participants in the 1990s, learning that those who had the willpower to wait for a larger reward experienced more success in life in terms of SAT scores and other metrics.

This study also lent support to self-control theory, a theory in criminology that holds that people with greater self-control are less likely to end up in trouble with the law!

The classic marshmallow experiment, however, was debunked in a 2018 replication study done by Tyler Watts and colleagues.

This more recent experiment had a larger group of participants (900) and a better representation of the general population when it comes to race and ethnicity. In this study, the researchers found out that the ability to wait for a second marshmallow does not depend on willpower alone but more so on the economic background and social status of the participants.

2. The Bystander Effect

In 1964, Kitty Genovese was murdered in the Kew Gardens neighborhood of Queens, New York. It was reported that there were up to 38 witnesses and onlookers in the vicinity of the crime scene, but nobody did anything to stop the murder or call for help.

Such tragedy was the catalyst that inspired social psychologists Bibb Latane and John Darley to formulate the phenomenon called bystander effect or bystander apathy .

Subsequent investigations showed that this story was exaggerated and inaccurate, as there were actually only about a dozen witnesses, at least two of whom called the police. But the case of Kitty Genovese led to various studies that aim to shed light on the bystander phenomenon.

Latane and Darley tested bystander intervention in an experimental study . Participants were asked to answer a questionnaire inside a room, and they would either be alone or with two other participants (who were actually actors or confederates in the study). Smoke would then come out from under the door. The reaction time of participants was tested — how long would it take them to report the smoke to the authorities or the experimenters?

The results showed that participants who were alone in the room reported the smoke faster than participants who were with two passive others. The study suggests that the more onlookers are present in an emergency situation, the less likely someone would step up to help, a social phenomenon now popularly called the bystander effect.

3. Asch Conformity Study

Have you ever made a decision against your better judgment just to fit in with your friends or family? The Asch Conformity Studies will help you understand this kind of situation better.

In this experiment, each group of participants was shown a reference line alongside three comparison lines of different lengths and asked to say which comparison line matched the reference. However, only one true participant was present in each group; the rest were actors who, on key trials, deliberately gave the same wrong answer.

Results showed that participants often went along with the group’s wrong answer, even though they could plainly see which line was correct. When asked why they gave the wrong answer, many said they didn’t want to be branded as strange or peculiar.

This study goes to show that there are situations in life when people prefer fitting in than being right. It also tells that there is power in numbers — a group’s decision can overwhelm a person and make them doubt their judgment.

4. The Bobo Doll Experiment

The Bobo Doll Experiment was conducted by Dr. Albert Bandura, the proponent of social learning theory .

Back in the 1960s, the Nature vs. Nurture debate was a popular topic among psychologists. Bandura contributed to this discussion by proposing that human behavior is mostly influenced by environmental rather than genetic factors.

In the Bobo Doll Experiment, children were divided into three groups: one group was shown a video in which an adult acted aggressively toward the Bobo Doll, the second group was shown a video in which an adult played non-aggressively with the Bobo Doll, and the third group served as the control group and was shown no video.

The children were then led to a room with different kinds of toys, including the Bobo Doll they had seen in the video. Results showed that the children tended to imitate the adults in the video: those who had seen the aggressive model acted aggressively toward the Bobo Doll, while those who had seen the passive model showed less aggression.

While the Bobo Doll Experiment can no longer be replicated because of ethical concerns, it has laid out the foundations of social learning theory and helped us understand the degree of influence adult behavior has on children.

5. Blue Eye / Brown Eye Experiment

Following the assassination of Martin Luther King Jr. in 1968, third-grade teacher Jane Elliott conducted an experiment in her class. Although not a formal experiment in controlled settings, A Class Divided is a good example of a social experiment to help children understand the concept of racism and discrimination.

The class was divided into two groups: blue-eyed children and brown-eyed children. For one day, Elliott gave preferential treatment to her blue-eyed students, giving them more attention and pampering them with rewards. The next day, it was the brown-eyed students’ turn to receive extra favors and privileges.

As a result, whichever group of students was given preferential treatment performed exceptionally well in class, had higher quiz scores, and recited more frequently; students who were discriminated against felt humiliated, answered poorly in tests, and became uncertain with their answers in class.

This study is now widely taught in sociocultural psychology classes.

6. Stanford Prison Experiment

One of the most controversial and widely-cited studies in psychology is the Stanford Prison Experiment, conducted by Philip Zimbardo in the basement of the Stanford psychology building in 1971. The hypothesis was that abusive behavior in prisons is influenced by the personality traits of the prisoners and prison guards.

The participants in the experiment were college students who were randomly assigned as either a prisoner or a prison guard. The prison guards were then told to run the simulated prison for two weeks. However, the experiment had to be stopped in just 6 days.

The prison guards abused their authority and harassed the prisoners through verbal and physical means. The prisoners, on the other hand, showed submissive behavior. Zimbardo decided to stop the experiment because the prisoners were showing signs of emotional and physical breakdown.

Although the experiment wasn’t completed, the results strongly showed that people can easily get into a social role when others expect them to, especially when it’s highly stereotyped .

7. The Halo Effect

Have you ever wondered why toothpastes and other dental products are endorsed in advertisements by celebrities more often than dentists? The Halo Effect is one of the reasons!

The Halo Effect shows how one favorable attribute of a person can gain them positive perceptions in other attributes. In the case of product advertisements, attractive celebrities are also perceived as intelligent and knowledgeable of a certain subject matter even though they’re not technically experts.

The Halo Effect originated in a classic study done by Edward Thorndike in the early 1900s. He asked military commanding officers to rate their subordinates based on different qualities, such as physical appearance, leadership, dependability, and intelligence.

The results showed that a high rating in one quality influenced the ratings of other qualities, producing a halo effect of overall high ratings. The opposite also applied: a negative rating in one quality correlated with negative ratings in other qualities.

Experiments on the Halo Effect came in various formats as well, supporting Thorndike’s original theory. This phenomenon suggests that our perception of other people’s overall personality is hugely influenced by a quality that we focus on.

8. Cognitive Dissonance

There are experiences in our lives when our beliefs and behaviors do not align with each other and we try to justify them in our minds. This is cognitive dissonance , which was studied in an experiment by Leon Festinger and James Carlsmith back in 1959.

In this experiment, participants had to go through a series of boring and repetitive tasks, such as spending an hour turning pegs on a wooden board. After completing the tasks, they were then paid either $1 or $20 to tell the next participants that the tasks were extremely fun and enjoyable. Afterwards, participants were asked to rate the experiment. Those who were given $1 rated the experiment as more interesting and fun than those who received $20.

The results showed that those who received a smaller incentive to lie experienced cognitive dissonance — $1 wasn’t enough incentive for that one hour of painstakingly boring activity, so the participants had to justify that they had fun anyway.

Famous Case Studies in Psychology

9. Little Albert

In 1920, behaviourist theorists John Watson and Rosalie Rayner experimented on a 9-month-old baby to test the effects of classical conditioning in instilling fear in humans.

This was such a controversial study that it gained popularity in psychology textbooks and syllabi because it is a classic example of unethical research studies done in the name of science.

In one of the experiments, Little Albert was presented with a harmless stimulus, a white rat, which he wasn’t scared of at first. But every time Little Albert saw the white rat, the researchers made a loud, frightening noise by striking a steel bar with a hammer. After about six pairings, Little Albert learned to fear the rat even without the noise.

Little Albert developed signs of fear to different objects presented to him through classical conditioning . He even generalized his fear to other stimuli not present in the course of the experiment.

10. Phineas Gage

Phineas Gage is such a celebrity in Psych 101 classes, even though the way he rose to popularity began with a tragic accident. He was a resident of Central Vermont and worked in the construction of a new railway line in the mid-1800s. One day, an explosive went off prematurely, sending a tamping iron straight into his face and through his brain.

Gage survived the accident, which is considered remarkable even to this day. He later found work as a stagecoach driver. However, his family and friends reported that his personality changed so much that “he was no longer Gage” (Harlow, 1868).

New evidence on the case of Phineas Gage has since come to light, thanks to modern scientific studies and medical tests. However, there are still plenty of mysteries revolving around his brain damage and subsequent recovery.

11. Anna O.

Anna O., a social worker and feminist of German Jewish descent, was one of the first patients to receive psychoanalytic treatment.

Her real name was Bertha Pappenheim and she inspired much of Sigmund Freud’s works and books on psychoanalytic theory, although they hadn’t met in person. Their connection was through Joseph Breuer, Freud’s mentor when he was still starting his clinical practice.

Anna O. suffered from paralysis, personality changes, hallucinations, and rambling speech, but her doctors could not find the cause. Joseph Breuer was then called to her house, and he treated her with an early form of the “talking cure”, a forerunner of psychoanalysis.

Breuer would tell Anna O. to say anything that came to her mind, such as her thoughts, feelings, and childhood experiences. It was noted that her symptoms subsided by talking things out.

However, Breuer later referred Anna O. to the Bellevue Sanatorium, where she recovered and went on to become a renowned writer and advocate for women and children.

12. Patient HM

H.M., or Henry Gustav Molaison, was a severe amnesiac who had been the subject of countless psychological and neurological studies.

Henry was 27 when he underwent brain surgery to treat the epilepsy he had experienced since childhood. In an unfortunate turn of events, the surgery left him unable to form new long-term memories and also cost him part of his existing memory.

He was then regarded as someone living solely in the present, forgetting an experience as soon as it happened and only remembering bits and pieces of his past. Over the years, his amnesia and the structure of his brain had helped neuropsychologists learn more about cognitive functions .

Suzanne Corkin, a researcher, writer, and good friend of H.M., later published a book about his life. Entitled Permanent Present Tense, the book is both a memoir and a case study following the struggles and joys of Henry Gustav Molaison.

13. Chris Sizemore

Chris Sizemore gained celebrity status in the psychology community when she was diagnosed with multiple personality disorder, now known as dissociative identity disorder.

Sizemore had several alter egos, including Eve Black, Eve White, and Jane. Various papers about her stated that these alter egos formed as a coping mechanism against the traumatic experiences she underwent in her childhood.

Sizemore said that although she eventually succeeded in unifying her alter egos into one dominant personality, there were periods in her past that were experienced by only one of them. For example, her husband married her Eve White alter ego and not her.

Her story inspired her psychiatrists to write a book about her, entitled The Three Faces of Eve , which was then turned into a 1957 movie of the same title.

14. David Reimer

When David was just 8 months old, he lost his penis because of a botched circumcision operation.

Psychologist John Money then advised Reimer’s parents to raise him as a girl instead, naming him Brenda. His gender reassignment was supported by subsequent surgery and hormonal therapy.

Money described Reimer’s gender reassignment as a success, but problems started to arise as Reimer was growing up. His boyishness was not completely subdued by the hormonal therapy. When he was 14 years old, he learned about the secrets of his past and he underwent gender reassignment to become male again.

Reimer became an advocate for children going through the same difficult situation he had been in. He took his own life at the age of 38.

15. Kim Peek

Kim Peek was the inspiration behind Rain Man , an Oscar-winning movie about an autistic savant character played by Dustin Hoffman.

The movie was released in 1988, a time when autism wasn’t widely known and acknowledged yet. So it was an eye-opener for many people who watched the film.

In reality, Kim Peek was a non-autistic savant. He was exceptionally intelligent despite the brain abnormalities he was born with. He was like a walking encyclopedia, knowledgeable about travel routes, US zip codes, historical facts, and classical music. He also read and memorized approximately 12,000 books in his lifetime.

This list of experiments and case studies in psychology is just the tip of the iceberg! There are still countless interesting psychology studies that you can explore if you want to learn more about human behavior and dynamics.

You can also conduct your own mini-experiment or participate in a study conducted in your school or neighborhood. Just remember that there are ethical standards to follow so as not to repeat the lasting physical and emotional harm done to Little Albert or the Stanford Prison Experiment participants.

Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70 (9), 1–70. https://doi.org/10.1037/h0093718

Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. The Journal of Abnormal and Social Psychology, 63 (3), 575–582. https://doi.org/10.1037/h0045925

Elliott, J., Yale University., WGBH (Television station : Boston, Mass.), & PBS DVD (Firm). (2003). A class divided. New Haven, Conn.: Yale University Films.

Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. The Journal of Abnormal and Social Psychology, 58 (2), 203–210. https://doi.org/10.1037/h0041593

Haney, C., Banks, W. C., & Zimbardo, P. G. (1973). A study of prisoners and guards in a simulated prison. Naval Research Review , 30 , 4-17.

Latane, B., & Darley, J. M. (1968). Group inhibition of bystander intervention in emergencies. Journal of Personality and Social Psychology, 10 (3), 215–221. https://doi.org/10.1037/h0026570

Mischel, W. (2014). The Marshmallow Test: Mastering self-control. Little, Brown and Co.

Thorndike, E. (1920) A Constant Error in Psychological Ratings. Journal of Applied Psychology , 4 , 25-29. http://dx.doi.org/10.1037/h0071663

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of experimental psychology , 3 (1), 1.

Experimental and Quasi-Experimental Research

You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up. A soft tone sounds and the two halves of the wall slide apart to reveal a small room. You step into the room. Looking to the left, then to the right, you see a panel of more buttons. You know that you seek a room marked with the numbers 1-0-1-2, so you press the button marked "10." The halves slide shut and enclose you within the cubicle, which jolts upward. Soon, the soft tone sounds again. The door opens again. On the far wall, a sign silently proclaims, "10th floor."

You have engaged in a series of experiments. A ride in an elevator may not seem like an experiment, but it, and each step taken towards its ultimate outcome, are common examples of a search for a causal relationship, which is what experimentation is all about.

You started with the hypothesis that this is in fact an elevator. You proved that you were correct. You then hypothesized that the button to summon the elevator was on the left, which was incorrect, so then you hypothesized it was on the right, and you were correct. You hypothesized that pressing the button marked with the up arrow would not only bring an elevator to you, but that it would be an elevator heading in the up direction. You were right.

As this guide explains, the deliberate process of testing hypotheses and reaching conclusions is an extension of commonplace testing of cause and effect relationships.

Basic Concepts of Experimental and Quasi-Experimental Research

Discovering causal relationships is the key to experimental research. In abstract terms, this means the relationship between a certain action, X, which alone creates the effect Y. For example, turning the volume knob on your stereo clockwise causes the sound to get louder. In addition, you could observe that turning the knob clockwise alone, and nothing else, caused the sound level to increase. You could further conclude that a causal relationship exists between turning the knob clockwise and an increase in volume; not simply because one caused the other, but because you are certain that nothing else caused the effect.

Independent and Dependent Variables

Beyond discovering causal relationships, experimental research further seeks out how much cause will produce how much effect; in technical terms, how the independent variable will affect the dependent variable. You know that turning the knob clockwise will produce a louder noise, but by varying how much you turn it, you see how much sound is produced. On the other hand, you might find that although you turn the knob a great deal, sound doesn't increase dramatically. Or, you might find that turning the knob just a little adds more sound than expected. The amount that you turned the knob is the independent variable, the variable that the researcher controls, and the amount of sound that resulted from turning it is the dependent variable, the change that is caused by the independent variable.
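To make the distinction concrete, here is a minimal Python sketch; the knob-to-volume mapping is invented purely for illustration. The knob angle is the independent variable we set, and the measured volume is the dependent variable we observe.

```python
# Minimal sketch of an independent vs. dependent variable.
# The response curve below is hypothetical, chosen only to illustrate the idea.

def volume_db(knob_angle_deg: float) -> float:
    """Hypothetical mapping from knob rotation (degrees) to loudness (dB)."""
    return 40 + 0.1 * knob_angle_deg  # assume a roughly linear response

for angle in (0, 90, 180, 270):  # vary the independent variable
    # observe the dependent variable
    print(f"knob at {angle:3d} degrees -> {volume_db(angle):.1f} dB")
```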

Experimental research also looks into the effects of removing something. For example, if you remove a loud noise from the room, will the person next to you be able to hear you? Or how much noise needs to be removed before that person can hear you?

Treatment and Hypothesis

The term treatment refers to either removing or adding a stimulus in order to measure an effect (such as turning the knob a little or a lot, or reducing the noise level a little or a lot). Experimental researchers want to know how varying levels of treatment will affect what they are studying. As such, researchers often have an idea, or hypothesis, about what effect will occur when they cause something. Few experiments are performed where there is no idea of what will happen. From past experiences in life or from the knowledge we possess in our specific field of study, we know how some actions cause other reactions. Experiments confirm or reconfirm this fact.

Experimentation becomes more complex when the causal relationships researchers seek aren't as clear as in the stereo knob-turning example. Questions like "Will olestra cause cancer?" or "Will this new fertilizer help this plant grow better?" present more to consider. For example, any number of things could affect the growth rate of a plant: the temperature, how much water or sun it receives, or how much carbon dioxide is in the air. These variables can affect an experiment's results. An experimenter who wants to show that adding a certain fertilizer will help a plant grow better must ensure that it is the fertilizer, and nothing else, affecting the growth patterns of the plant. To do this, as many of these variables as possible must be controlled.

Matching and Randomization

In the example used in this guide (you'll find the example below), we discuss an experiment that focuses on three groups of plants -- one that is treated with a fertilizer named MegaGro, another group treated with a fertilizer named Plant!, and yet another that is not treated with fertilizer (this latter group serves as a "control" group). In this example, even though the designers of the experiment have tried to remove all extraneous variables, results may appear merely coincidental. Since the goal of the experiment is to prove a causal relationship in which a single variable is responsible for the effect produced, the experiment would produce stronger proof if the results were replicated in larger treatment and control groups.

Selecting groups entails assigning subjects in the groups of an experiment in such a way that treatment and control groups are comparable in all respects except the application of the treatment. Groups can be created in two ways: matching and randomization. In the MegaGro experiment discussed below, the plants might be matched according to characteristics such as age, weight and whether they are blooming. This involves distributing these plants so that each plant in one group exactly matches characteristics of plants in the other groups. Matching may be problematic, though, because it "can promote a false sense of security by leading [the experimenter] to believe that [the] experimental and control groups were really equated at the outset, when in fact they were not equated on a host of variables" (Jones, 291). In other words, you may have flowers for your MegaGro experiment that you matched and distributed among groups, but other variables are unaccounted for. It would be difficult to have equal groupings.

Randomization, then, is preferred to matching. This method is based on the statistical principle of normal distribution. Theoretically, any arbitrarily selected group of adequate size will reflect normal distribution. Differences between groups will average out and become more comparable. The principle of normal distribution states that in a population most individuals will fall within the middle range of values for a given characteristic, with increasingly fewer toward either extreme (graphically represented as the ubiquitous "bell curve").
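As a rough sketch of how randomization evens groups out, the following Python snippet (plant heights and group sizes are invented for illustration) randomly assigns 90 simulated plants to the MegaGro, Plant!, and control groups and then compares the groups' average starting heights. With adequately sized groups, the means come out close to one another.

```python
import random
import statistics

random.seed(42)

# Hypothetical starting heights (cm) for 90 plants, drawn from a normal distribution.
plants = [random.gauss(20, 4) for _ in range(90)]

# Random assignment: shuffle, then split into three equal groups.
random.shuffle(plants)
groups = {
    "MegaGro": plants[0:30],
    "Plant!":  plants[30:60],
    "control": plants[60:90],
}

# Chance differences tend to average out, so the groups start out comparable.
for name, heights in groups.items():
    print(f"{name:8s} mean starting height = {statistics.mean(heights):.1f} cm")
```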

Differences between Quasi-Experimental and Experimental Research

Thus far, we have explained that for experimental research we need:

  • a hypothesis for a causal relationship;
  • a control group and a treatment group;
  • to eliminate confounding variables that might mess up the experiment and prevent displaying the causal relationship; and
  • to have larger groups with a carefully sorted constituency; preferably randomized, in order to keep accidental differences from fouling things up.

But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the researcher cannot control all the factors that might affect the outcome.

A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or there may be no control group. The researcher is limited in what he or she can say conclusively.

The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which allows for comparison. Some data are quite straightforward, but other measures, such as level of self-confidence in writing ability or an increase in creativity or reading comprehension, are inescapably subjective. In such cases, quasi-experimentation often involves a number of strategies for comparing subjective data, such as rating data, testing, surveying, and content analysis.

Rating essentially is developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use ANOVA (Analysis of Variance) and ANCOVA (Analysis of Co-Variance) tests to measure differences between control and experimental groups, as well as different correlations between groups.

Since we're mentioning the subject of statistics, note that experimental or quasi-experimental research cannot state beyond a shadow of a doubt that a single cause will always produce any one effect. Such studies can do no more than show a probability that one thing causes another. The probability that a result is due to random chance is an important measure in statistical analysis and in experimental research.
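As a hedged illustration of such a test, the sketch below runs a one-way ANOVA (the test mentioned above) on three invented sets of growth measurements, one per fertilizer group; the resulting p-value estimates how likely differences this large would be if only random chance were at work. The numbers are made up, and the example assumes SciPy is available.

```python
from scipy import stats

# Hypothetical final growth (cm) for each group; values invented for illustration.
megagro = [12.1, 13.4, 11.8, 14.0, 12.9, 13.5]
plant_b = [10.2, 11.0, 10.7, 11.4, 10.9, 11.1]   # the competing "Plant!" fertilizer
control = [8.9, 9.4, 9.1, 9.8, 9.0, 9.5]

# One-way ANOVA: do the group means differ more than chance alone would predict?
f_stat, p_value = stats.f_oneway(megagro, plant_b, control)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The differences are unlikely to be due to random chance alone.")
else:
    print("The observed differences could plausibly be chance.")
```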

Example: Causality

Let's say you want to determine that your new fertilizer, MegaGro, will increase the growth rate of plants. You begin by getting a plant to go with your fertilizer. Since the experiment is concerned with proving that MegaGro works, you need another plant, using no fertilizer at all on it, to compare how much change your fertilized plant displays. This is what is known as a control group.

Set up with a control group, which will receive no treatment, and an experimental group, which will get MegaGro, you must then address those variables that could invalidate your experiment. This can be an extensive and exhaustive process. You must ensure that you use the same plant; that both groups are put in the same kind of soil; that they receive equal amounts of water and sun; that they receive the same amount of exposure to carbon-dioxide-exhaling researchers, and so on. In short, any other variable that might affect the growth of those plants, other than the fertilizer, must be the same for both plants. Otherwise, you can't prove absolutely that MegaGro is the only explanation for the increased growth of one of those plants.

Such an experiment can be done on more than two groups. You may not only want to show that MegaGro is an effective fertilizer, but that it is better than its competitor brand of fertilizer, Plant! All you need to do, then, is have one experimental group receiving MegaGro, one receiving Plant! and the other (the control group) receiving no fertilizer. Those are the only variables that can be different between the three groups; all other variables must be the same for the experiment to be valid.

Controlling variables allows the researcher to identify conditions that may affect the experiment's outcome. This may lead to alternative explanations that the researcher is willing to entertain in order to isolate only variables judged significant. In the MegaGro experiment, you may be concerned with how fertile the soil is, but not with the plants' relative position in the window, as you don't think that the amount of shade they get will affect their growth rate. But what if it did? You would have to go about eliminating variables in order to determine which is the key factor. What if one receives more shade than the other and the MegaGro plant, which received more shade, died? This might prompt you to formulate a plausible alternative explanation, which is a way of accounting for a result that differs from what you expected. You would then want to redo the study with equal amounts of sunlight.

Methods: Five Steps

Experimental research can be roughly divided into five phases:

Identifying a research problem

The process starts by clearly identifying the problem you want to study and considering what possible methods will affect a solution. Then you choose the method you want to test, and formulate a hypothesis to predict the outcome of the test.

For example, you may want to improve student essays, but you don't believe that teacher feedback is enough. You hypothesize that some possible methods for writing improvement include peer workshopping, or reading more example essays. Favoring the former, your experiment would try to determine if peer workshopping improves writing in high school seniors. You state your hypothesis: peer workshopping prior to turning in a final draft will improve the quality of the student's essay.

Planning an experimental research study

The next step is to devise an experiment to test your hypothesis. In doing so, you must consider several factors. For example, how generalizable do you want your end results to be? Do you want to generalize about the entire population of high school seniors everywhere, or just the particular population of seniors at your specific school? This will determine how simple or complex the experiment will be. The amount of time and funding you have will also determine the size of your experiment.

Continuing the example from step one, you may want a small study at one school involving three teachers, each teaching two sections of the same course. The treatment in this experiment is peer workshopping. Each of the three teachers will assign the same essay assignment to both classes; the treatment group will participate in peer workshopping, while the control group will receive only teacher comments on their drafts.

Conducting the experiment

At the start of an experiment, the control and treatment groups must be selected. Whereas the "hard" sciences have the luxury of attempting to create truly equal groups, educators often find themselves forced to conduct their experiments based on self-selected groups, rather than on randomization. As was highlighted in the Basic Concepts section, this makes the study a quasi-experiment, since the researchers cannot control all of the variables.

For the peer workshopping experiment, let's say that it involves six classes and three teachers with a sample of students randomly selected from all the classes. Each teacher will have a class for a control group and a class for a treatment group. The essay assignment is given and the teachers are briefed not to change any of their teaching methods other than the use of peer workshopping. You may see here that this is an effort to control a possible variable: teaching style variance.
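The random assignment described here can be carried out mechanically. The following is a minimal sketch, not part of the study described above: the teacher and section names are hypothetical placeholders, and the script simply shuffles each teacher's two sections to decide which becomes the treatment group and which the control group.

```python
import random

# Hypothetical section labels; in practice these would come from the school's
# actual course roster.
sections_by_teacher = {
    "Teacher A": ["Section 1", "Section 2"],
    "Teacher B": ["Section 3", "Section 4"],
    "Teacher C": ["Section 5", "Section 6"],
}

random.seed(42)  # fixed seed so the assignment can be reproduced

assignments = {}
for teacher, sections in sections_by_teacher.items():
    shuffled = sections[:]      # copy so the roster itself is left untouched
    random.shuffle(shuffled)    # random order decides the assignment
    assignments[teacher] = {
        "treatment (peer workshopping)": shuffled[0],
        "control (teacher comments only)": shuffled[1],
    }

for teacher, groups in assignments.items():
    print(teacher, groups)
```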

Analyzing the data

The fourth step is to collect and analyze the data. This is not simply a matter of collecting the papers, reading them, and declaring your methods a success; you must show how successful they were. You must devise a scale by which to evaluate the data you receive, which means deciding in advance which indicators will, and will not, be important.

Continuing our example, the teachers' grades are recorded first, then the essays are evaluated for changes in sentence complexity, syntactical and grammatical errors, and overall length. Any statistical analysis you choose to perform is done at this stage. Notice here that the researcher has made judgments about what signals improved writing. It is not simply a matter of improved teacher grades, but of what the researcher believes constitutes improved use of the language.
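If you do choose to run a statistical test, a common choice for comparing two groups on a numeric measure is an independent-samples t-test. The sketch below is a minimal illustration, not the analysis used in the study described here: the essay scores are invented, and the test only asks whether the difference in group means is larger than chance would plausibly produce.

```python
from scipy import stats

# Hypothetical essay scores (0-100) for the two groups.
treatment_scores = [78, 85, 74, 90, 82, 88, 79, 84]
control_scores = [72, 80, 70, 83, 75, 77, 74, 79]

# Independent-samples t-test: compares the two group means.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value (conventionally < 0.05) suggests the difference is unlikely
# to be due to chance alone, though it says nothing about *why* the groups differ.
```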

Writing the paper/presentation describing the findings

Once you have completed the experiment, you will want to share your findings by publishing an academic paper or giving a presentation. Such papers usually follow the format below, but it is not necessary to follow it strictly. Sections can be combined or omitted, depending on the structure of the experiment and the journal to which you submit.

  • Abstract: Summarize the project: its aims, participants, basic methodology, results, and a brief interpretation.
  • Introduction: Set the context of the experiment.
  • Review of Literature: Provide a review of the literature in the specific area of study to show what work has been done. This should lead directly to the author's purpose for the study.
  • Statement of Purpose: Present the problem to be studied.
  • Participants: Describe in detail the participants involved in the study (e.g., how many there were). Provide as much information as possible.
  • Materials and Procedures: Clearly describe materials and procedures. Provide enough information so that the experiment can be replicated, but not so much that it becomes unreadable. Include how participants were chosen, the tasks assigned to them, how the tasks were conducted, how data were evaluated, etc.
  • Results: Present the data in an organized fashion. If the data are quantifiable, analyze them through statistical means. Avoid interpretation at this stage.
  • Discussion: After presenting the results, interpret what has happened in the experiment. Base the discussion only on the data collected, keeping the interpretation as objective as possible. Hypothesizing is possible here.
  • Limitations: Discuss factors that affect the results. Here, you can speculate how much generalization, or more likely transferability, is possible based on the results. This section is especially important for quasi-experimentation, since a quasi-experiment cannot control all of the variables that might affect the outcome of a study. Discuss the variables you could not control.
  • Conclusion: Synthesize all of the above sections.
  • References: Document works cited in the correct format for the field.

Experimental and Quasi-Experimental Research: Issues and Commentary

Several issues are addressed in this section, including the use of experimental and quasi-experimental research in educational settings, the relevance of the methods to English studies, and ethical concerns regarding the methods.

Using Experimental and Quasi-Experimental Research in Educational Settings

Charting causal relationships in human settings.

Any time a human population is involved, prediction of causal relationships becomes cloudy and, some say, impossible. Many reasons exist for this; for example,

  • researchers in classrooms add a disturbing presence, causing students to act abnormally, consciously or unconsciously;
  • subjects try to please the researcher, just because of an apparent interest in them (known as the Hawthorne Effect); or, perhaps
  • the teacher as researcher is restricted by bias and time pressures.

But such confounding variables don't stop researchers from trying to identify causal relationships in education. Educators naturally experiment anyway, comparing groups, assessing the attributes of each, and making predictions based on an evaluation of alternatives. They look to research to support their intuitive practices, experimenting whenever they try to decide which instruction method will best encourage student improvement.

Combining Theory, Research, and Practice

The goal of educational research lies in combining theory, research, and practice. Educational researchers attempt to establish models of teaching practice, learning styles, curriculum development, and countless other educational issues. The aim is to "try to improve our understanding of education and to strive to find ways to have understanding contribute to the improvement of practice," one writer asserts (Floden 1996, p. 197).

In quasi-experimentation, researchers try to develop models by involving teachers as researchers, employing observational research techniques. Although results of this kind of research are context-dependent and difficult to generalize, they can act as a starting point for further study. The "educational researcher . . . provides guidelines and interpretive material intended to liberate the teacher's intelligence so that whatever artistry in teaching the teacher can achieve will be employed" (Eisner 1992, p. 8).

Bias and Rigor

Critics contend that the educational researcher is inherently biased, that sample selection is arbitrary, and that replication is impossible. The key to combating such criticism is rigor. Rigor is established through close attention to randomizing groups, the time spent on a study, and questioning techniques, which allows the standards of quantitative research to be applied more effectively to qualitative research.

Often, teachers cannot wait for piles of experimental data to be analyzed before using the teaching methods (Lauer and Asher 1988). They ultimately must assess whether the results of a study in a distant classroom are applicable in their own classrooms. And they must continuously test the effectiveness of their methods by using experimental and qualitative research simultaneously. In addition to statistics (quantitative), researchers may perform case studies or observational research (qualitative) in conjunction with, or prior to, experimentation.

Relevance to English Studies

Situations in English studies that might encourage the use of experimental methods.

Whenever a researcher would like to see if a causal relationship exists between groups, experimental and quasi-experimental research can be a viable research tool. Researchers in English Studies might use experimentation when they believe a relationship exists between two variables, and they want to show that these two variables have a significant correlation (or causal relationship).

A benefit of experimentation is the ability to control variables, such as the amount of treatment, when it is given, to whom and so forth. Controlling variables allows researchers to gain insight into the relationships they believe exist. For example, a researcher has an idea that writing under pseudonyms encourages student participation in newsgroups. Researchers can control which students write under pseudonyms and which do not, then measure the outcomes. Researchers can then analyze results and determine if this particular variable alone causes increased participation.

Transferability: Applying Results

Experimentation and quasi-experimentation allow researchers to generate transferable results, with acceptance of those results depending on experimental rigor. Transferability is an effective alternative to generalizability, which is difficult to rely upon in educational research. English scholars, reading the results of experiments with a critical eye, ultimately decide whether and how results will be implemented. They may even extend existing research by replicating experiments in the interest of generating new results and benefiting from multiple perspectives. These results will either strengthen the original study or discredit its findings.

Concerns English Scholars Express about Experiments

Researchers should carefully consider whether a particular method is feasible in humanities studies and whether it will yield the desired information. Some researchers recommend addressing pertinent issues by combining several research methods, such as survey, interview, ethnography, case study, content analysis, and experimentation (Lauer and Asher, 1988).

Advantages and Disadvantages of Experimental Research: Discussion

In educational research, experimentation is a way to gain insight into methods of instruction. Although teaching is context specific, results can provide a starting point for further study. Often, a teacher/researcher will have a "gut" feeling about an issue which can then be explored through experimentation and an examination of causal relationships. Through research, intuition can shape practice.

A preconception exists that information obtained through the scientific method is free of human inconsistencies. But since the scientific method is a matter of human construction, it is subject to human error. The researcher's personal bias may intrude upon the experiment as well. For example, certain preconceptions may dictate the course of the research and affect the behavior of the subjects. The issue is compounded when researchers, although aware of the effect their personal bias exerts on their own research, are pressured to produce research that is accepted in their field of study as "legitimate" experimental research.

The researcher does bring bias to experimentation, but bias does not limit the ability to be reflective. An ethical researcher thinks critically about results and reports them only after careful reflection. Concerns over bias can be leveled against any research method.

Often, the sample may not be representative of a population, because the researcher has no opportunity to ensure a representative sample. For example, subjects may be limited to one location, limited in number, or studied under constrained conditions and for too short a time.

Despite such inconsistencies in educational research, the researcher does have control over the variables, increasing the possibility of determining the individual effect of each variable more precisely. Determining interactions between variables also becomes more feasible.

Even so, experiments may produce artificial results. It can be argued that variables are manipulated so that the experiment measures what researchers want to examine; the results are therefore merely contrived products with no bearing on material reality. Artificial results are difficult to apply in practical situations, which makes generalizing from the results of a controlled study questionable. Experimental research essentially decontextualizes a single question from a "real world" scenario, studies it under controlled conditions, and then tries to recontextualize the results back onto the "real world" scenario. Results may also be difficult to replicate.

Groups in an experiment may also fail to be comparable. Quasi-experimentation in educational research is widespread because not only are many researchers also teachers, but many subjects are also students. With the classroom as laboratory, it is difficult to implement randomizing or matching strategies. Often, students self-select into certain sections of a course on the basis of their own agendas and scheduling needs. Thus when, as often happens, one class receives the treatment and the other serves as the control, the groups may not actually be comparable. As one might imagine, people who register for a class that meets three times a week at eleven o'clock in the morning (young, no full-time job, night people) differ significantly from those who register for one on Monday evenings from seven to ten p.m. (older, full-time job, possibly more highly motivated). Each situation presents different variables, and your group might be completely different from the one in the study. Long-term studies are expensive and hard to reproduce. And although the same hypotheses are often tested by different researchers, various factors complicate attempts to compare or synthesize the results. It is nearly impossible to be as rigorous as the natural-sciences model dictates.

Even when randomization of students is possible, problems arise. First, depending on the class size and the number of classes, the sample may be too small for the extraneous variables to cancel out. Second, the study population is not strictly a sample, because the population of students registered for a given class at a particular university is obviously not representative of the population of all students at large. For example, students at a suburban private liberal-arts college are typically young, white, and upper-middle class. In contrast, students at an urban community college tend to be older, poorer, and members of a racial minority. The differences can be construed as confounding variables: the first group may have fewer demands on its time, have less self-discipline, and benefit from superior secondary education. The second may have more demands, including a job and/or children, have more self-discipline, but an inferior secondary education. Selecting a population of subjects which is representative of the average of all post-secondary students is also a flawed solution, because the outcome of a treatment involving this group is not necessarily transferable to either the students at a community college or the students at the private college, nor are they universally generalizable.
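The point about small samples can be illustrated with a quick simulation. The sketch below is not drawn from any study mentioned here; it assumes an invented extraneous variable (whether a student holds a part-time job) and shows that with small classes, random assignment routinely leaves the two groups noticeably unbalanced on that variable, whereas with large samples the imbalance shrinks.

```python
import random

random.seed(1)

def imbalance(n_students: int) -> float:
    """Randomly split a class in which half the students hold a part-time job
    and return the difference in the proportion of job-holders between the
    two resulting groups."""
    students = [True] * (n_students // 2) + [False] * (n_students // 2)
    random.shuffle(students)
    group_a = students[: n_students // 2]
    group_b = students[n_students // 2:]
    return abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))

for n in (10, 40, 400):
    # Average the imbalance over many simulated random assignments.
    runs = [imbalance(n) for _ in range(1000)]
    print(f"n = {n:3d}: mean imbalance = {sum(runs) / len(runs):.2f}")
```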

When a human population is involved, experimental research raises the question of whether behavior can be predicted or studied with validity. Human response can be difficult to measure. Human behavior depends on individual responses, and rationalizing behavior through experimentation does not account for the process of thought, making outcomes of that process fallible (Eisenberg, 1996).

Nevertheless, we perform experiments daily anyway. When we brush our teeth every morning, we are experimenting to see whether this behavior will result in fewer cavities. We are relying on previous experimentation and transferring that experimentation to our daily lives.

Moreover, experimentation can be combined with other research methods to ensure rigor. Other qualitative methods, such as case study, ethnography, observational research, and interviews, can function as preconditions for experimentation or be conducted simultaneously to add validity to a study.

We have few alternatives to experimentation. Mere anecdotal research, for example, is unscientific, unreplicable, and easily manipulated. Should we rely on Ed walking into a faculty meeting and telling the story of Sally? Sally screamed, "I love writing!" ten times before she wrote her essay and produced a quality paper. Should all the other faculty members hear this anecdote and conclude that all other students should employ the same technique?

One final disadvantage: frequently, political pressure drives experimentation and forces unreliable results. Specific funding and support may drive the outcomes of experimentation and cause the results to be skewed. The reader of these results may not be aware of these biases and should approach experimentation with a critical eye.

Advantages and Disadvantages of Experimental Research: Quick Reference List

Experimental and quasi-experimental research can be summarized in terms of their advantages and disadvantages. This section combines and elaborates upon many points mentioned previously in this guide.

Advantages:

  • gain insight into methods of instruction
  • intuitive practice shaped by research
  • teachers have bias but can be reflective
  • researcher can have control over variables
  • humans perform experiments anyway
  • can be combined with other research methods for rigor
  • can be used to determine what is best for a population
  • provides for greater transferability than anecdotal research

Disadvantages:

  • subject to human error
  • personal bias of researcher may intrude
  • sample may not be representative
  • can produce artificial results
  • results may only apply to one situation and may be difficult to replicate
  • groups may not be comparable
  • human response can be difficult to measure
  • political pressure may skew results

Ethical Concerns

Experimental research may be manipulated at both ends of the spectrum: by the researcher and by the reader. Researchers who report on experimental research face ethical concerns, particularly when their readers are naive about experimental methods. While creating an experiment, researchers may let certain objectives and intended uses of the results drive and skew it. Looking for specific results, they may ask only the questions and examine only the data that support the desired conclusions, dismissing conflicting findings.

Editors and journals do not publish only trouble-free material. As readers of experiments, members of the press may report selected and isolated parts of a study to the public, essentially applying the data to the general population in ways the researcher never intended. Take, for example, oat bran. A few years ago, the press reported that oat bran reduces high cholesterol. But that bit of information was taken out of context: the actual study found that when people ate more oat bran, they reduced their intake of saturated fats high in cholesterol. People started eating oat bran muffins by the ton, assuming a causal relationship when in actuality a number of confounding variables might influence the causal link.

Ultimately, ethical use and reportage of experimentation should be addressed by researchers, reporters and readers alike.

Reporters of experimental research often seek to recognize their audience's level of knowledge and try not to mislead readers. And readers must rely on the author's skill and integrity to point out errors and limitations. The relationship between researcher and reader may not sound like a problem, but after spending months or years on a project that produces no significant results, it may be tempting to manipulate the data to show significant results in order to jockey for grants and tenure.

Meanwhile, the reader may uncritically accept results that gain validity by being published in a journal. However, research that lacks credibility often is not published; consequently, researchers who fail to publish run the risk of being denied grants, promotions, jobs, and tenure. While few researchers are anything but earnest in their attempts to conduct well-designed experiments and present the results in good faith, rhetorical considerations often dictate a certain minimization of methodological flaws.

Concerns arise if researchers do not report all of their results, or otherwise alter them. This phenomenon is counterbalanced, however, by the fact that professionals are also rewarded for publishing critiques of others' work. Because the author of an experimental study is in essence making an argument for the existence of a causal relationship, he or she must be concerned not only with its integrity but also with its presentation. Achieving persuasiveness in any kind of writing involves several elements: choosing a topic of interest, providing convincing evidence for one's argument, using tone and voice to project credibility, and organizing the material in a way that meets expectations for a logical sequence. Of course, what is regarded as pertinent, accepted as evidence, required for credibility, and understood as logical varies according to context. If experimental researchers hope to make an impact on the community of professionals in their field, they must attend to the standards and orthodoxies of that audience.

Related Links

Contrasts: Traditional and computer-supported writing classrooms. This Web page presents a discussion of the Transitions Study, a year-long exploration of teachers and students in computer-supported and traditional writing classrooms. It includes a description of the study, the rationale for conducting it, and its results and implications.

http://kairos.technorhetoric.net/2.2/features/reflections/page1.htm

Annotated Bibliography

A cozy world of trivial pursuits? (1996, June 28) The Times Educational Supplement . 4174, pp. 14-15.

A critique discounting the current methods Great Britain employs to fund and disseminate educational research. The belief is that research is performed for fellow researchers, not the teaching public, and that implications for day-to-day practice are never addressed.

Anderson, J. A. (1979, Nov. 10-13). Research as argument: the experimental form. Paper presented at the annual meeting of the Speech Communication Association, San Antonio, TX.

In this paper, the scientist who uses the experimental form does so in order to explain that which is verified through prediction.

Anderson, Linda M. (1979). Classroom-based experimental studies of teaching effectiveness in elementary schools . (Technical Report UTR&D-R- 4102). Austin: Research and Development Center for Teacher Education, University of Texas.

Three recent large-scale experimental studies have built on a database established through several correlational studies of teaching effectiveness in elementary school.

Asher, J. W. (1976). Educational research and evaluation methods . Boston: Little, Brown.

Abstract unavailable by press time.

Babbie, Earl R. (1979). The Practice of Social Research . Belmont, CA: Wadsworth.

A textbook containing discussions of several research methodologies used in social science research.

Bangert-Drowns, R.L. (1993). The word processor as instructional tool: a meta-analysis of word processing in writing instruction. Review of Educational Research, 63 (1), 69-93.

Beach, R. (1993). The effects of between-draft teacher evaluation versus student self-evaluation on high school students' revising of rough drafts. Research in the Teaching of English, 13 , 111-119.

The question of whether teacher evaluation or guided self-evaluation of rough drafts results in increased revision was addressed in Beach's study. Differences in the effects of teacher evaluations, guided self-evaluation (using prepared guidelines), and no evaluation of rough drafts were examined. The final drafts of students (10th, 11th, and 12th graders) were compared with their rough drafts and rated by judges according to degree of change.

Beishuizen, J. & Moonen, J. (1992). Research in technology enriched schools: a case for cooperation between teachers and researchers . (ERIC Technical Report ED351006).

This paper describes the research strategies employed in the Dutch Technology Enriched Schools project to encourage extensive and intensive use of computers in a small number of secondary schools, and to study the effects of computer use on the classroom, the curriculum, and school administration and management.

Borg, W. P. (1989). Educational Research: an Introduction . (5th ed.). New York: Longman.

An overview of educational research methodology, including literature review and discussion of approaches to research, experimental design, statistical analysis, ethics, and rhetorical presentation of research findings.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research . Boston: Houghton Mifflin.

A classic overview of research designs.

Campbell, D.T. (1988). Methodology and epistemology for social science: selected papers . ed. E. S. Overman. Chicago: University of Chicago Press.

This is an overview of Campbell's 40-year career and his work. It covers in seven parts measurement, experimental design, applied social experimentation, interpretive social science, epistemology and sociology of science. Includes an extensive bibliography.

Caporaso, J. A., & Roos, Jr., L. L. (Eds.). Quasi-experimental approaches: Testing theory and evaluating policy. Evanston, IL: Northwestern University Press.

A collection of articles concerned with explicating the underlying assumptions of quasi-experimentation and relating these to true experimentation. With an emphasis on design. Includes a glossary of terms.

Collier, R. Writing and the word processor: How wary of the gift-giver should we be? Unpublished manuscript.

Unpublished typescript. Charts the developments to date in computers and composition and speculates about the future within the framework of Willie Sypher's model of the evolution of creative discovery.

Cook, T.D. & Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings . Boston: Houghton Mifflin Co.

The authors write that this book "presents some quasi-experimental designs and design features that can be used in many social research settings. The designs serve to probe causal hypotheses about a wide variety of substantive issues in both basic and applied research."

Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communication, 2 , N. pag.

This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained.

Daniels, L. B. (1996, Summer). Eisenberg's Heisenberg: The indeterminancies of rationality. Curriculum Inquiry, 26 , 181-92.

Places Eisenberg's theories in relation to the death of foundationalism by showing that he distorts rational studies into a form of relativism. Daniels examines Eisenberg's ideas on indeterminacy, methods and evidence, what he is against, and what we should make of what he says.

Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge: Cambridge University Press.

Danzinger stresses the importance of being aware of the framework in which research operates and of the essentially social nature of scientific activity.

Diener, E., et al. (1972, December). Leakage of experimental information to potential future subjects by debriefed subjects. Journal of Experimental Research in Personality , 264-67.

Research regarding research: an investigation of the effects on the outcome of an experiment in which information about the experiment had been leaked to subjects. The study concludes that such leakage is not a significant problem.

Dudley-Marling, C., & Rhodes, L. K. (1989). Reflecting on a close encounter with experimental research. Canadian Journal of English Language Arts. 12 , 24-28.

Researchers, Dudley-Marling and Rhodes, address some problems they met in their experimental approach to a study of reading comprehension. This article discusses the limitations of experimental research, and presents an alternative to experimental or quantitative research.

Edgington, E. S. (1985). Random assignment and experimental research. Educational Administration Quarterly, 21 , N. pag.

Edgington explores ways on which random assignment can be a part of field studies. The author discusses both non-experimental and experimental research and the need for using random assignment.

Eisenberg, J. (1996, Summer). Response to critiques by R. Floden, J. Zeuli, and L. Daniels. Curriculum Inquiry, 26 , 199-201.

A response to critiques of his argument that rational educational research methods are at best suspect and at worst futile. He believes indeterminacy controls this method and worries that chaotic research is failing students.

Eisner, E. (1992, July). Are all causal claims positivistic? A reply to Francis Schrag. Educational Researcher, 21 (5), 8-9.

Eisner responds to Schrag who claimed that critics like Eisner cannot escape a positivistic paradigm whatever attempts they make to do so. Eisner argues that Schrag essentially misses the point for trying to argue for the paradigm solely on the basis of cause and effect without including the rest of positivistic philosophy. This weakens his argument against multiple modal methods, which Eisner argues provides opportunities to apply the appropriate research design where it is most applicable.

Floden, R.E. (1996, Summer). Educational research: limited, but worthwhile and maybe a bargain. (response to J.A. Eisenberg). Curriculum Inquiry, 26 , 193-7.

Responds to John Eisenberg's critique of educational research by asserting the connection between improvement of practice and research results. He places high value on teacher discrepancy and the knowledge that research informs practice.

Fortune, J. C., & Hutson, B. A. (1994, March/April). Selecting models for measuring change when true experimental conditions do not exist. Journal of Educational Research, 197-206.

This article reviews methods for minimizing the effects of nonideal experimental conditions by optimally organizing models for the measurement of change.

Fox, R. F. (1980). Treatment of writing apprehension and its effects on composition. Research in the Teaching of English, 14 , 39-49.

The main purpose of Fox's study was to investigate the effects of two methods of teaching writing on writing apprehension among entry-level composition students. A conventional teaching procedure was used with a control group, while a workshop method was employed with the treatment group.

Gadamer, H-G. (1976). Philosophical hermeneutics . (D. E. Linge, Trans.). Berkeley, CA: University of California Press.

A collection of essays with the common themes of the mediation of experience through language, the impossibility of objectivity, and the importance of context in interpretation.

Gaise, S. J. (1981). Experimental vs. non-experimental research on classroom second language learning. Bilingual Education Paper Series, 5 , N. pag.

Aims on classroom-centered research on second language learning and teaching are considered and contrasted with the experimental approach.

Giordano, G. (1983). Commentary: Is experimental research snowing us? Journal of Reading, 27 , 5-7.

Do educational research findings actually benefit teachers and students? Giordano states his opinion that research may be helpful to teaching, but is not essential and often is unnecessary.

Goldenson, D. R. (1978, March). An alternative view about the role of the secondary school in political socialization: A field-experimental study of theory and research in social education. Theory and Research in Social Education , 44-72.

This study concludes that when political discussion among experimental groups of secondary school students is led by a teacher, the degree to which the students' views were impacted is proportional to the credibility of the teacher.

Grossman, J., and J. P. Tierney. (1993, October). The fallibility of comparison groups. Evaluation Review , 556-71.

Grossman and Tierney present evidence to suggest that comparison groups are not the same as nontreatment groups.

Harnisch, D. L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D. L. Harnisch et al. (Eds.), Selected readings in transition.

This chapter describes several common types of research studies in special education transition literature and the threats to their validity.

Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. Hawisher and C. Selfe. (Eds.), Critical Perspectives on Computers and Composition Instruction . (pp. 44-69). New York: Teacher's College Press.

An overview of research in computers and composition to date. Includes a synthesis grid of experimental research.

Hillocks, G. Jr. (1982). The interaction of instruction, teacher comment, and revision in teaching the composing process. Research in the Teaching of English, 16 , 261-278.

Hillock conducted a study using three treatments: observational or data collecting activities prior to writing, use of revisions or absence of same, and either brief or lengthy teacher comments to identify effective methods of teaching composition to seventh and eighth graders.

Jenkinson, J. C. (1989). Research design in the experimental study of intellectual disability. International Journal of Disability, Development, and Education, 69-84.

This article catalogues the difficulties of conducting experimental research where the subjects are intellectually disabled and suggests alternative research strategies.

Jones, R. A. (1985). Research Methods in the Social and Behavioral Sciences. Sunderland, MA: Sinauer Associates, Inc.

A textbook designed to provide an overview of research strategies in the social sciences, including survey, content analysis, ethnographic approaches, and experimentation. The author emphasizes the importance of applying strategies appropriately and in variety.

Kamil, M. L., Langer, J. A., & Shanahan, T. (1985). Understanding research in reading and writing . Newton, Massachusetts: Allyn and Bacon.

Examines a wide variety of problems in reading and writing, with a broad range of techniques, from different perspectives.

Kennedy, J. L. (1985). An Introduction to the Design and Analysis of Experiments in Behavioral Research . Lanham, MD: University Press of America.

An introductory textbook of psychological and educational research.

Keppel, G. (1991). Design and analysis: a researcher's handbook . Englewood Cliffs, NJ: Prentice Hall.

This updates Keppel's earlier book subtitled "a student's handbook." Focuses on extensive information about analytical research and gives a basic picture of research in psychology. Covers a range of statistical topics. Includes a subject and name index, as well as a glossary.

Knowles, G., Elija, R., & Broadwater, K. (1996, Spring/Summer). Teacher research: enhancing the preparation of teachers? Teaching Education, 8 , 123-31.

Researchers looked at one teacher candidate who participated in a class which designed their own research project correlating to a question they would like answered in the teaching world. The goal of the study was to see if preservice teachers developed reflective practice by researching appropriate classroom contexts.

Lace, J., & De Corte, E. (1986, April 16-20). Research on media in western Europe: A myth of sisyphus? Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

Identifies main trends in media research in western Europe, with emphasis on three successive stages since 1960: tools technology, systems technology, and reflective technology.

Latta, A. (1996, Spring/Summer). Teacher as researcher: selected resources. Teaching Education, 8 , 155-60.

An annotated bibliography on educational research including milestones of thought, practical applications, successful outcomes, seminal works, and immediate practical applications.

Lauer, J. M., & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford University Press.

Approaching experimentation from a humanist's perspective, the authors focus on eight major research designs: case studies, ethnographies, sampling and surveys, quantitative descriptive studies, measurement, true experiments, quasi-experiments, meta-analyses, and program evaluations. The book takes on the challenge of bridging the language of social science with that of the humanist. Includes name and subject indexes, as well as a glossary and a glossary of symbols.

Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49 , 1-19.

Contextual importance has been largely ignored by traditional research approaches in social/behavioral sciences and in their application to the education field. Developmental and social psychologists have increasingly noted the inadequacies of this approach. Drawing examples from phenomenology, sociolinguistics, and ethnomethodology, the author proposes alternative approaches for studying meaning in context.

Mitroff, I., & Bonoma, T. V. (1978, May). Psychological assumptions, experimentations, and real world problems: A critique and an alternate approach to evaluation. Evaluation Quarterly , 235-60.

The authors advance the notion of dialectic as a means to clarify and examine the underlying assumptions of experimental research methodology, both in highly controlled situations and in social evaluation.

Muller, E. W. (1985). Application of experimental and quasi-experimental research designs to educational software evaluation. Educational Technology, 25 , 27-31.

Muller proposes a set of guidelines for the use of experimental and quasi-experimental methods of research in evaluating educational software. By obtaining empirical evidence of student performance, it is possible to evaluate if programs are making the desired learning effect.

Murray, S., et al. (1979, April 8-12). Technical issues as threats to internal validity of experimental and quasi-experimental designs . San Francisco: University of California.

The article reviews three evaluation models and analyzes the flaws common to them. Remedies are suggested.

Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and books: The paperless office revisited? Behavior and Information Technology, 10 (4), 257-66.

The researchers test for reading and skimming effectiveness, defined as accuracy combined with speed, for written text compared to text on a computer monitor. They conclude that, given optimal on-line conditions, both are equally effective.

O'Donnell, A., Et al. (1992). The impact of cooperative writing. In J. R. Hayes, et al. (Eds.). Reading empirical research studies: The rhetoric of research . (pp. 371-84). Hillsdale, NJ: Lawrence Erlbaum Associates.

A model of experimental design. The authors investigate the efficacy of cooperative writing strategies, as well as the transferability of skills learned to other, individual writing situations.

Palmer, D. (1988). Looking at philosophy . Mountain View, CA: Mayfield Publishing.

An introductory text with incisive but understandable discussions of the major movements and thinkers in philosophy from the Pre-Socratics through Sartre. With illustrations by the author. Includes a glossary.

Phelps-Gunn, T., & Phelps-Terasaki, D. (1982). Written language instruction: Theory and remediation . London: Aspen Systems Corporation.

The lack of research in written expression is addressed and an application on the Total Writing Process Model is presented.

Poetter, T. (1996, Spring/Summer). From resistance to excitement: becoming qualitative researchers and reflective practitioners. Teaching Education, 8 , 109-19.

An education professor reveals his own problematic research when he attempted to institute an educational research component in a teacher preparation program. He encountered dissent from students and cooperating professionals but was ultimately rewarded with excitement toward research and a recognized correlation to practice.

Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26 .

Three issues concerning research and assessment in writing are discussed: 1) school writing is a matter of products not process, 2) school writing is an ill-defined domain, 3) the quality of school writing is what observers report they see. Purves discusses these issues while looking at data collected in a ten-year study of achievement in written composition in fourteen countries.

Rathus, S. A. (1987). Psychology . (3rd ed.). Poughkeepsie, NY: Holt, Rinehart, and Winston.

An introductory psychology textbook. Includes overviews of the major movements in psychology, discussions of prominent examples of experimental research, and a basic explanation of relevant physiological factors. With chapter summaries.

Reiser, R. A. (1982). Improving the research skills of instructional designers. Educational Technology, 22 , 19-21.

In his paper, Reiser starts by stating the importance of research in advancing the field of education, and points out that graduate students in instructional design lack the proper skills to conduct research. The paper then goes on to outline the practicum in the Instructional Systems Program at Florida State University which includes: 1) Planning and conducting an experimental research study; 2) writing the manuscript describing the study; 3) giving an oral presentation in which they describe their research findings.

Report on education research . (Journal). Washington, DC: Capitol Publication, Education News Services Division.

This is an independent bi-weekly newsletter on research in education and learning. It has been published since Sept. 1969.

Rossell, C. H. (1986). Why is bilingual education research so bad?: Critique of the Walsh and Carballo study of Massachusetts bilingual education programs . Boston: Center for Applied Social Science, Boston University. (ERIC Working Paper 86-5).

The Walsh and Carballo evaluation of the effectiveness of transitional bilingual education programs in five Massachusetts communities has five flaws, which are discussed in detail.

Rubin, D. L., & Greene, K. (1992). Gender-typical style in written language. Research in the Teaching of English, 26.

This study was designed to find out whether the writing styles of men and women differ. Rubin and Greene discuss the presupposition that women are better writers than men.

Sawin, E. (1992). Reaction: Experimental research in the context of other methods. School of Education Review, 4 , 18-21.

Sawin responds to Gage's article on methodologies and issues in educational research. He agrees with most of the article but suggests the concept of scientific should not be regarded in absolute terms and recommends more emphasis on scientific method. He also questions the value of experiments over other types of research.

Schoonmaker, W. E. (1984). Improving classroom instruction: A model for experimental research. The Technology Teacher, 44, 24-25.

The model outlined in this article tries to bridge the gap between classroom practice and laboratory research, using what Schoonmaker calls active research. Research is conducted in the classroom with the students and is used to determine which of two methods of classroom instruction chosen by the teacher is more effective.

Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21, (5), 5-8.

The controversial defense of the use of positivistic research methods to evaluate educational strategies; the author takes on Eisner, Erickson, and Popkewitz.

Smith, J. (1997). The stories educational researchers tell about themselves. Educational Researcher, 33 (3), 4-11.

Recapitulates main features of an on-going debate between advocates for using vocabularies of traditional language arts and whole language in educational research. An "impasse" exists where advocates "do not share a theoretical disposition concerning both language instruction and the nature of research," Smith writes (p. 6). He includes a very comprehensive history of the debate over traditional research methodology and qualitative methods and vocabularies. Definitely worth a read by graduates.

Smith, N. L. (1980). The feasibility and desirability of experimental methods in evaluation. Evaluation and Program Planning: An International Journal , 251-55.

Smith identifies the conditions under which experimental research is most desirable. Includes a review of current thinking and controversies.

Stewart, N. R., & Johnson, R. G. (1986, March 16-20). An evaluation of experimental methodology in counseling and counselor education research. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

The purpose of this study was to evaluate the quality of experimental research in counseling and counselor education published from 1976 through 1984.

Spector, P. E. (1990). Research Designs. Newbury Park, California: Sage Publications.

In this book, Spector introduces the basic principles of experimental and nonexperimental design in the social sciences.

Tait, P. E. (1984). Do-it-yourself evaluation of experimental research. Journal of Visual Impairment and Blindness, 78 , 356-363 .

Tait's goal is to provide the reader who is unfamiliar with experimental research or statistics with the basic skills necessary for the evaluation of research studies.

Walsh, S. M. (1990). The current conflict between case study and experimental research: A breakthrough study derives benefits from both . (ERIC Document Number ED339721).

This paper describes a study that was not experimentally designed, but its major findings were generalizable to the overall population of writers in college freshman composition classes. The study was not a case study, but it provided insights into the attitudes and feelings of small clusters of student writers.

Waters, G. R. (1976). Experimental designs in communication research. Journal of Business Communication, 14 .

The paper presents a series of discussions on the general elements of experimental design and the scientific process and relates these elements to the field of communication.

Welch, W. W. (March 1969). The selection of a national random sample of teachers for experimental curriculum evaluation. Scholastic Science and Math , 210-216.

Members of the evaluation section of Harvard Project Physics describe what is said to be the first attempt to select a national random sample of teachers and list six steps for doing so. Cost and comparison with a volunteer group are also discussed.

Winer, B.J. (1971). Statistical principles in experimental design , (2nd ed.). New York: McGraw-Hill.

Combines theory and application discussions to give readers a better understanding of the logic behind statistical aspects of experimental design. Introduces the broad topic of design, then goes into considerable detail. Not for light reading. Bring your aspirin if you like statistics; bring morphine if you're a humanist.

Winn, B. (1986, January 16-21). Emerging trends in educational technology research. Paper presented at the Annual Convention of the Association for Educational Communication Technology.

This examination of the topic of research in educational technology addresses four major areas: (1) why research is conducted in this area and the characteristics of that research; (2) the types of research questions that should or should not be addressed; (3) the most appropriate methodologies for finding answers to research questions; and (4) the characteristics of a research report that make it good and ultimately suitable for publication.

Citation Information

Luann Barnes, Jennifer Hauser, Luana Heikes, Anthony J. Hernandez, Paul Tim Richard, Katherine Ross, Guo Hua Yang, and Mike Palmquist. (1994-2024). Experimental and Quasi-Experimental Research. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides/.

Copyright Information

Copyright © 1994-2024 Colorado State University and/or this site's authors, developers, and contributors. Some material displayed on this site is used with permission.


Case Study | Definition, Examples & Methods

Published on 5 May 2022 by Shona McCombes. Revised on 30 January 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organisation, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods , but quantitative methods are sometimes also used. Case studies are good for describing , comparing, evaluating, and understanding different aspects of a research problem .

Table of contents

  • When to do a case study
  • Step 1: Select a case
  • Step 2: Build a theoretical framework
  • Step 3: Collect your data
  • Step 4: Describe and analyse the case

When to do a case study

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation . They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.

Case study examples

  • Research question: What are the ecological effects of wolf reintroduction? Case study: Wolf reintroduction in Yellowstone National Park in the US
  • Research question: How do populist politicians use narratives about history to gain support? Case studies: Hungarian prime minister Viktor Orbán and US president Donald Trump
  • Research question: How can teachers implement active learning strategies in mixed-level classrooms? Case study: A local school that promotes active learning
  • Research question: What are the main advantages and disadvantages of wind farms for rural communities? Case studies: Three rural wind farm development projects in different parts of the country
  • Research question: How are viral marketing strategies changing the relationship between companies and consumers? Case study: The iPhone X marketing campaign
  • Research question: How do experiences of work in the gig economy differ by gender, race, and age? Case studies: Deliveroo and Uber drivers in London


Step 1: Select a case

Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.

However, you can also choose a more common or representative case to exemplify a particular category, experience, or phenomenon.

If you find yourself aiming to simultaneously investigate and solve an issue, consider conducting action research. As its name suggests, action research conducts research and takes action at the same time, and it is highly iterative and flexible.

Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework . This means identifying key concepts and theories to guide your analysis and interpretation.

Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data .

The aim is to gain as thorough an understanding as possible of the case and its context.

Step 4: Describe and analyse the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods , results , and discussion .

Others are written in a more narrative style, aiming to explore the case from various angles and analyse its meanings and implications (for example, by using textual analysis or discourse analysis ).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation below.

McCombes, S. (2023, January 30). Case Study | Definition, Examples & Methods. Scribbr. Retrieved 2 September 2024, from https://www.scribbr.co.uk/research-methods/case-studies/



Chapter 3. Psychological Science

3.2 Psychologists Use Descriptive, Correlational, and Experimental Research Designs to Understand Behaviour

Learning Objectives

  • Differentiate the goals of descriptive, correlational, and experimental research designs and explain the advantages and disadvantages of each.
  • Explain the goals of descriptive research and the statistical techniques used to interpret it.
  • Summarize the uses of correlational research and describe why correlational research cannot be used to infer causality.
  • Review the procedures of experimental research and explain how it can be used to draw causal inferences.

Psychologists agree that if their ideas and theories about human behaviour are to be taken seriously, they must be backed up by data. However, the research of different psychologists is designed with different goals in mind, and the different goals require different approaches. These varying approaches, summarized in Table 3.2, are known as research designs. A research design is the specific method a researcher uses to collect, analyze, and interpret data. Psychologists use three major types of research designs in their research, and each provides an essential avenue for scientific investigation. Descriptive research is research designed to provide a snapshot of the current state of affairs. Correlational research is research designed to discover relationships among variables and to allow the prediction of future events from present knowledge. Experimental research is research in which initial equivalence among research participants in more than one group is created, followed by a manipulation of a given experience for these groups and a measurement of the influence of the manipulation. Each of the three research designs varies according to its strengths and limitations, and it is important to understand how each differs.

Table 3.2 Characteristics of the Three Research Designs

  • Descriptive. Goal: to create a snapshot of the current state of affairs. Advantages: provides a relatively complete picture of what is occurring at a given time; allows the development of questions for further study. Disadvantages: does not assess relationships among variables; may be unethical if participants do not know they are being observed.
  • Correlational. Goal: to assess the relationships between and among two or more variables. Advantages: allows testing of expected relationships between and among variables and the making of predictions; can assess these relationships in everyday life events. Disadvantages: cannot be used to draw inferences about the causal relationships between and among the variables.
  • Experimental. Goal: to assess the causal impact of one or more experimental manipulations on a dependent variable. Advantages: allows drawing of conclusions about the causal relationships among variables. Disadvantages: cannot experimentally manipulate many important variables; may be expensive and time consuming.

Source: Stangor, 2011.

Descriptive Research: Assessing the Current State of Affairs

Descriptive research is designed to create a snapshot of the current thoughts, feelings, or behaviour of individuals. This section reviews three types of descriptive research: case studies, surveys, and naturalistic observation (Figure 3.4).

Sometimes the data in a descriptive research project are based on only a small set of individuals, often only one person or a single small group. These research designs are known as case studies — descriptive records of one or more individuals’ experiences and behaviour. Sometimes case studies involve ordinary individuals, as when developmental psychologist Jean Piaget used his observation of his own children to develop his stage theory of cognitive development. More frequently, case studies are conducted on individuals who have unusual or abnormal experiences or characteristics or who find themselves in particularly difficult or stressful situations. The assumption is that by carefully studying individuals who are socially marginal, who are experiencing unusual situations, or who are going through a difficult phase in their lives, we can learn something about human nature.

Sigmund Freud was a master of using the psychological difficulties of individuals to draw conclusions about basic psychological processes. Freud wrote case studies of some of his most interesting patients and used these careful examinations to develop his important theories of personality. One classic example is Freud’s description of “Little Hans,” a child whose fear of horses the psychoanalyst interpreted in terms of repressed sexual impulses and the Oedipus complex (Freud, 1909/1964).

Another well-known case study is Phineas Gage, a man whose thoughts and emotions were extensively studied by cognitive psychologists after a railroad spike was blasted through his skull in an accident. Although there are questions about the interpretation of this case study (Kotowicz, 2007), it did provide early evidence that the brain’s frontal lobe is involved in emotion and morality (Damasio et al., 2005). An interesting example of a case study in clinical psychology is described by Rokeach (1964), who investigated in detail the beliefs of and interactions among three patients with schizophrenia, all of whom were convinced they were Jesus Christ.

In other cases the data from descriptive research projects come in the form of a survey — a measure administered through either an interview or a written questionnaire to get a picture of the beliefs or behaviours of a sample of people of interest . The people chosen to participate in the research (known as the sample) are selected to be representative of all the people that the researcher wishes to know about (the population). In election polls, for instance, a sample is taken from the population of all “likely voters” in the upcoming elections.
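
As a minimal sketch of the idea (in Python, with made-up voter IDs and a made-up sample size), drawing a simple random sample from a list that stands in for the population looks like this; real election polls use far more careful sampling frames and weighting.

```python
import random

# Hypothetical sampling frame: one entry per "likely voter" in the population.
population = [f"voter_{i}" for i in range(250_000)]

random.seed(42)                               # fixed seed only so the sketch is reproducible
sample = random.sample(population, k=1_000)   # simple random sample of 1,000 voters

print(len(sample))     # 1000
print(sample[:3])      # three randomly chosen voter IDs from the sample
```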

The results of surveys may sometimes be rather mundane, such as “Nine out of 10 doctors prefer Tymenocin” or “The median income in the city of Hamilton is $46,712.” Yet other times (particularly in discussions of social behaviour), the results can be shocking: “More than 40,000 people are killed by gunfire in the United States every year” or “More than 60% of women between the ages of 50 and 60 suffer from depression.” Descriptive research is frequently used by psychologists to get an estimate of the prevalence (or incidence ) of psychological disorders.

A final type of descriptive research — known as naturalistic observation — is research based on the observation of everyday events . For instance, a developmental psychologist who watches children on a playground and describes what they say to each other while they play is conducting descriptive research, as is a biopsychologist who observes animals in their natural habitats. One example of observational research involves a systematic procedure known as the strange situation , used to get a picture of how adults and young children interact. The data that are collected in the strange situation are systematically coded in a coding sheet such as that shown in Table 3.3.

Table 3.3 Sample Coding Form Used to Assess Child’s and Mother’s Behaviour in the Strange Situation
Coder name:
This table represents a sample coding sheet from an episode of the “strange situation,” in which an infant (usually about one year old) is observed playing in a room with two adults — the child’s mother and a stranger. Each of the four coding categories is scored by the coder from 1 (the baby makes no effort to engage in the behaviour) to 7 (the baby makes a significant effort to engage in the behaviour). More information about the meaning of the coding can be found in Ainsworth, Blehar, Waters, and Wall (1978).

Coding categories explained:
Proximity: The baby moves toward, grasps, or climbs on the adult.
Maintaining contact: The baby resists being put down by the adult by crying or trying to climb back up.
Resistance: The baby pushes, hits, or squirms to be put down from the adult’s arms.
Avoidance: The baby turns away or moves away from the adult.

Episode | Proximity | Contact | Resistance | Avoidance
Mother and baby play alone | 1 | 1 | 1 | 1
Mother puts baby down | 4 | 1 | 1 | 1
Stranger enters room | 1 | 2 | 3 | 1
Mother leaves room; stranger plays with baby | 1 | 3 | 1 | 1
Mother re-enters, greets and may comfort baby, then leaves again | 4 | 2 | 1 | 2
Stranger tries to play with baby | 1 | 3 | 1 | 1
Mother re-enters and picks up baby | 6 | 6 | 1 | 2
Source: Stangor, 2011.
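
Coded observations like these are straightforward to tabulate. The sketch below (an illustration only, not part of the original study) stores the seven episodes from Table 3.3 as simple records and computes the mean rating for each coding category.

```python
# (episode, proximity, contact, resistance, avoidance) ratings from Table 3.3.
episodes = [
    ("Mother and baby play alone",                   1, 1, 1, 1),
    ("Mother puts baby down",                        4, 1, 1, 1),
    ("Stranger enters room",                         1, 2, 3, 1),
    ("Mother leaves room; stranger plays with baby", 1, 3, 1, 1),
    ("Mother re-enters, greets and may comfort baby, then leaves again", 4, 2, 1, 2),
    ("Stranger tries to play with baby",             1, 3, 1, 1),
    ("Mother re-enters and picks up baby",           6, 6, 1, 2),
]

labels = ["proximity", "contact", "resistance", "avoidance"]
for column, label in enumerate(labels, start=1):
    mean_score = sum(episode[column] for episode in episodes) / len(episodes)
    print(f"{label}: {mean_score:.2f}")   # e.g. proximity averages about 2.57
```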

The results of descriptive research projects are analyzed using descriptive statistics — numbers that summarize the distribution of scores on a measured variable. Most variables have distributions similar to that shown in Figure 3.5, where most of the scores are located near the centre of the distribution, and the distribution is symmetrical and bell-shaped. A data distribution that is shaped like a bell is known as a normal distribution.

A distribution can be described in terms of its central tendency — that is, the point in the distribution around which the data are centred — and its dispersion, or spread. The arithmetic average, or arithmetic mean, symbolized by the letter M, is the most commonly used measure of central tendency. It is computed by calculating the sum of all the scores of the variable and dividing this sum by the number of participants in the distribution (denoted by the letter N). In the data presented in Figure 3.5, the mean height of the students is 67.12 inches (170.5 cm).

In some cases, however, the data distribution is not symmetrical. This occurs when there are one or more extreme scores (known as outliers ) at one end of the distribution. Consider, for instance, the variable of family income (see Figure 3.6), which includes an outlier (a value of $3,800,000). In this case the mean is not a good measure of central tendency. Although it appears from Figure 3.6 that the central tendency of the family income variable should be around $70,000, the mean family income is actually $223,960. The single very extreme income has a disproportionate impact on the mean, resulting in a value that does not well represent the central tendency.

The median is used as an alternative measure of central tendency when distributions are not symmetrical. The median  is the score in the center of the distribution, meaning that 50% of the scores are greater than the median and 50% of the scores are less than the median . In our case, the median household income ($73,000) is a much better indication of central tendency than is the mean household income ($223,960).

A final measure of central tendency, known as the mode , represents the value that occurs most frequently in the distribution . You can see from Figure 3.6 that the mode for the family income variable is $93,000 (it occurs four times).
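
All three measures of central tendency are easy to verify with Python’s standard statistics module. The sketch below uses the 25 family incomes behind Figure 3.6 (including the $3,800,000 outlier) and reproduces the values quoted above: a mean of $223,960, a median of $73,000, and a mode of $93,000.

```python
import statistics

# The 25 family incomes plotted in Figure 3.6, including one extreme outlier.
incomes = [
    48_000, 57_000, 93_000, 107_000, 110_000, 93_000, 46_000, 84_000,
    68_000, 49_000, 73_000, 3_800_000, 107_000, 64_000, 67_000, 51_000,
    48_000, 93_000, 93_000, 111_000, 56_000, 94_000, 73_000, 70_000, 44_000,
]

print(statistics.mean(incomes))     # 223960: pulled far upward by the outlier
print(statistics.median(incomes))   # 73000: a better summary of a "typical" family
print(statistics.mode(incomes))     # 93000: the most frequent value (occurs four times)
```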

In addition to summarizing the central tendency of a distribution, descriptive statistics convey information about how the scores of the variable are spread around the central tendency. Dispersion refers to the extent to which the scores are all tightly clustered around the central tendency , as seen in Figure 3.7.

Or they may be more spread out away from it, as seen in Figure 3.8.

One simple measure of dispersion is to find the largest (the maximum ) and the smallest (the minimum ) observed values of the variable and to compute the range of the variable as the maximum observed score minus the minimum observed score. You can check that the range of the height variable in Figure 3.5 is 72 – 62 = 10. The standard deviation , symbolized as s , is the most commonly used measure of dispersion . Distributions with a larger standard deviation have more spread. The standard deviation of the height variable is s = 2.74, and the standard deviation of the family income variable is s = $745,337.
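
The dispersion figures quoted for the height data in Figure 3.5 can be reproduced the same way. The sketch below computes the range and the sample standard deviation for the 25 student heights from the chapter’s example data.

```python
import statistics

# The 25 student heights (in inches) that make up the Figure 3.5 distribution.
heights = [
    62, 62, 63, 64, 64, 65, 66, 66, 67, 67, 67, 67, 67,
    67, 68, 68, 68, 68, 69, 69, 69, 70, 71, 72, 72,
]

height_range = max(heights) - min(heights)   # 72 - 62 = 10
mean_height = statistics.mean(heights)       # 67.12
std_dev = statistics.stdev(heights)          # statistics.stdev uses the sample (n - 1) formula

print(height_range)            # 10
print(round(mean_height, 2))   # 67.12
print(round(std_dev, 2))       # 2.74
```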

An advantage of descriptive research is that it attempts to capture the complexity of everyday behaviour. Case studies provide detailed information about a single person or a small group of people, surveys capture the thoughts or reported behaviours of a large population of people, and naturalistic observation objectively records the behaviour of people or animals as it occurs naturally. Thus descriptive research is used to provide a relatively complete understanding of what is currently happening.

Despite these advantages, descriptive research has a distinct disadvantage in that, although it allows us to get an idea of what is currently happening, it is usually limited to static pictures. Although descriptions of particular experiences may be interesting, they are not always transferable to other individuals in other situations, nor do they tell us exactly why specific behaviours or events occurred. For instance, descriptions of individuals who have suffered a stressful event, such as a war or an earthquake, can be used to understand the individuals’ reactions to the event but cannot tell us anything about the long-term effects of the stress. And because there is no comparison group that did not experience the stressful situation, we cannot know what these individuals would be like if they hadn’t had the stressful experience.

Correlational Research: Seeking Relationships among Variables

In contrast to descriptive research, which is designed primarily to provide static pictures, correlational research involves the measurement of two or more relevant variables and an assessment of the relationship between or among those variables. For instance, the variables of height and weight are systematically related (correlated) because taller people generally weigh more than shorter people. In the same way, study time and memory errors are also related, because the more time a person is given to study a list of words, the fewer errors he or she will make. When there are two variables in the research design, one of them is called the predictor variable and the other the outcome variable . The research design can be visualized as shown in Figure 3.9, where the curved arrow represents the expected correlation between these two variables.

One way of organizing the data from a correlational study with two variables is to graph the values of each of the measured variables using a scatter plot. As you can see in Figure 3.10, a scatter plot is a visual image of the relationship between two variables. A point is plotted for each individual at the intersection of his or her scores for the two variables. When the association between the variables on the scatter plot can be easily approximated with a straight line, as in parts (a) and (b) of Figure 3.10, the variables are said to have a linear relationship.

When the straight line indicates that individuals who have above-average values for one variable also tend to have above-average values for the other variable , as in part (a), the relationship is said to be positive linear . Examples of positive linear relationships include those between height and weight, between education and income, and between age and mathematical abilities in children. In each case, people who score higher on one of the variables also tend to score higher on the other variable. Negative linear relationships , in contrast, as shown in part (b), occur when above-average values for one variable tend to be associated with below-average values for the other variable. Examples of negative linear relationships include those between the age of a child and the number of diapers the child uses, and between practice on and errors made on a learning task. In these cases, people who score higher on one of the variables tend to score lower on the other variable.

Relationships between variables that cannot be described with a straight line are known as nonlinear relationships . Part (c) of Figure 3.10 shows a common pattern in which the distribution of the points is essentially random. In this case there is no relationship at all between the two variables, and they are said to be independent . Parts (d) and (e) of Figure 3.10 show patterns of association in which, although there is an association, the points are not well described by a single straight line. For instance, part (d) shows the type of relationship that frequently occurs between anxiety and performance. Increases in anxiety from low to moderate levels are associated with performance increases, whereas increases in anxiety from moderate to high levels are associated with decreases in performance. Relationships that change in direction and thus are not described by a single straight line are called curvilinear relationships .

The most common statistical measure of the strength of linear relationships among variables is the Pearson correlation coefficient , which is symbolized by the letter r . The value of the correlation coefficient ranges from r = –1.00 to r = +1.00. The direction of the linear relationship is indicated by the sign of the correlation coefficient. Positive values of r (such as r = .54 or r = .67) indicate that the relationship is positive linear (i.e., the pattern of the dots on the scatter plot runs from the lower left to the upper right), whereas negative values of r (such as r = –.30 or r = –.72) indicate negative linear relationships (i.e., the dots run from the upper left to the lower right). The strength of the linear relationship is indexed by the distance of the correlation coefficient from zero (its absolute value). For instance, r = –.54 is a stronger relationship than r = .30, and r = .72 is a stronger relationship than r = –.57. Because the Pearson correlation coefficient only measures linear relationships, variables that have curvilinear relationships are not well described by r , and the observed correlation will be close to zero.
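
A short sketch with made-up scores shows how r behaves in practice: a roughly linear, positive relationship yields an r close to +1, while a curvilinear (inverted-U) relationship like the anxiety and performance example yields an r near zero even though the two variables are clearly related.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    covariance = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    spread_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    spread_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return covariance / (spread_x * spread_y)

# Hypothetical positive linear relationship (e.g., hours studied vs. test score).
hours = [1, 2, 3, 4, 5, 6, 7, 8]
score = [52, 55, 61, 60, 68, 70, 75, 78]
print(round(pearson_r(hours, score), 2))            # about 0.99: strong positive linear

# Hypothetical curvilinear (inverted-U) relationship, like anxiety vs. performance.
anxiety     = [1, 2, 3, 4, 5, 6, 7, 8, 9]
performance = [2, 4, 6, 8, 9, 8, 6, 4, 2]
print(round(pearson_r(anxiety, performance), 2))    # 0.0, despite an obvious pattern
```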

It is also possible to study relationships among more than two measures at the same time. A research design in which more than one predictor variable is used to predict a single outcome variable is analyzed through multiple regression (Aiken & West, 1991). Multiple regression is a statistical technique, based on correlation coefficients among variables, that allows predicting a single outcome variable from more than one predictor variable. For instance, Figure 3.11 shows a multiple regression analysis in which three predictor variables (salary, job satisfaction, and years employed) are used to predict a single outcome (job performance). The use of multiple regression analysis shows an important advantage of correlational research designs — they can be used to make predictions about a person’s likely score on an outcome variable (e.g., job performance) based on knowledge of other variables.
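
As an illustrative sketch only (the employee data below are invented, not taken from Figure 3.11), multiple regression finds one weight per predictor so that a weighted combination of salary, job satisfaction, and years employed best predicts job performance; the fitted weights can then be used to predict the likely performance of a new employee.

```python
import numpy as np

# Hypothetical records for six employees: salary (in $1,000s), job satisfaction (1-10),
# years employed, and the outcome to be predicted, job performance (1-10).
salary       = np.array([45, 52, 61, 48, 70, 66])
satisfaction = np.array([6,  7,  8,  5,  9,  7])
years        = np.array([2,  4,  7,  3, 10,  6])
performance  = np.array([5,  6,  8,  5,  9,  7])

# Design matrix: one column per predictor plus a column of 1s for the intercept.
X = np.column_stack([salary, satisfaction, years, np.ones_like(salary)])

# Ordinary least-squares fit: the weights that minimize squared prediction error.
weights, *_ = np.linalg.lstsq(X, performance, rcond=None)

# Predict job performance for a new (hypothetical) employee.
new_employee = np.array([55, 8, 5, 1])   # salary, satisfaction, years, intercept term
print(float(new_employee @ weights))
```

Because the weights are derived from correlations among measured variables, the prediction is still correlational: it says nothing about whether, for example, raising salaries would cause better performance.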

An important limitation of correlational research designs is that they cannot be used to draw conclusions about the causal relationships among the measured variables. Consider, for instance, a researcher who has hypothesized that viewing violent behaviour will cause increased aggressive play in children. He has collected, from a sample of Grade 4 children, a measure of how many violent television shows each child views during the week, as well as a measure of how aggressively each child plays on the school playground. From his collected data, the researcher discovers a positive correlation between the two measured variables.

Although this positive correlation appears to support the researcher’s hypothesis, it cannot be taken to indicate that viewing violent television causes aggressive behaviour. Although the researcher is tempted to assume that viewing violent television causes aggressive play, there are other possibilities. One alternative possibility is that the causal direction is exactly opposite from what has been hypothesized. Perhaps children who have behaved aggressively at school develop residual excitement that leads them to want to watch violent television shows at home (Figure 3.13):

Although this possibility may seem less likely, there is no way to rule out the possibility of such reverse causation on the basis of this observed correlation. It is also possible that both causal directions are operating and that the two variables cause each other (Figure 3.14).

Still another possible explanation for the observed correlation is that it has been produced by the presence of a common-causal variable (also known as a third variable). A common-causal variable is a variable that is not part of the research hypothesis but that causes both the predictor and the outcome variable and thus produces the observed correlation between them. In our example, a potential common-causal variable is the discipline style of the children’s parents. Parents who use a harsh and punitive discipline style may produce children who like to watch violent television and who also behave aggressively in comparison to children whose parents use less harsh discipline (Figure 3.15).

In this case, television viewing and aggressive play would be positively correlated (as indicated by the curved arrow between them), even though neither one caused the other but they were both caused by the discipline style of the parents (the straight arrows). When the predictor and outcome variables are both caused by a common-causal variable, the observed relationship between them is said to be spurious . A spurious relationship  is a relationship between two variables in which a common-causal variable produces and “explains away” the relationship . If effects of the common-causal variable were taken away, or controlled for, the relationship between the predictor and outcome variables would disappear. In the example, the relationship between aggression and television viewing might be spurious because by controlling for the effect of the parents’ disciplining style, the relationship between television viewing and aggressive behaviour might go away.

Common-causal variables in correlational research designs can be thought of as mystery variables because, as they have not been measured, their presence and identity are usually unknown to the researcher. Since it is not possible to measure every variable that could cause both the predictor and outcome variables, the existence of an unknown common-causal variable is always a possibility. For this reason, we are left with the basic limitation of correlational research: correlation does not demonstrate causation. It is important that when you read about correlational research projects, you keep in mind the possibility of spurious relationships, and be sure to interpret the findings appropriately. Although correlational research is sometimes reported as demonstrating causality without any mention being made of the possibility of reverse causation or common-causal variables, informed consumers of research, like you, are aware of these interpretational problems.

In sum, correlational research designs have both strengths and limitations. One strength is that they can be used when experimental research is not possible because the predictor variables cannot be manipulated. Correlational designs also have the advantage of allowing the researcher to study behaviour as it occurs in everyday life. And we can also use correlational designs to make predictions — for instance, to predict from the scores on their battery of tests the success of job trainees during a training session. But we cannot use such correlational information to determine whether the training caused better job performance. For that, researchers rely on experiments.

Experimental Research: Understanding the Causes of Behaviour

The goal of experimental research design is to provide more definitive conclusions about the causal relationships among the variables in the research hypothesis than is available from correlational designs. In an experimental research design, the variables of interest are called the independent variable (or variables ) and the dependent variable . The independent variable  in an experiment is the causing variable that is created (manipulated) by the experimenter . The dependent variable  in an experiment is a measured variable that is expected to be influenced by the experimental manipulation . The research hypothesis suggests that the manipulated independent variable or variables will cause changes in the measured dependent variables. We can diagram the research hypothesis by using an arrow that points in one direction. This demonstrates the expected direction of causality (Figure 3.16):

Research Focus: Video Games and Aggression

Consider an experiment conducted by Anderson and Dill (2000). The study was designed to test the hypothesis that viewing violent video games would increase aggressive behaviour. In this research, male and female undergraduates from Iowa State University were given a chance to play with either a violent video game (Wolfenstein 3D) or a nonviolent video game (Myst). During the experimental session, the participants played their assigned video games for 15 minutes. Then, after the play, each participant played a competitive game with an opponent in which the participant could deliver blasts of white noise through the earphones of the opponent. The operational definition of the dependent variable (aggressive behaviour) was the level and duration of noise delivered to the opponent. The design of the experiment is shown in Figure 3.17.

Two advantages of the experimental research design are (a) the assurance that the independent variable (also known as the experimental manipulation ) occurs prior to the measured dependent variable, and (b) the creation of initial equivalence between the conditions of the experiment (in this case by using random assignment to conditions).

Experimental designs have two very nice features. For one, they guarantee that the independent variable occurs prior to the measurement of the dependent variable. This eliminates the possibility of reverse causation. Second, the influence of common-causal variables is controlled, and thus eliminated, by creating initial equivalence among the participants in each of the experimental conditions before the manipulation occurs.

The most common method of creating equivalence among the experimental conditions is through random assignment to conditions, a procedure in which the condition that each participant is assigned to is determined through a random process, such as drawing numbers out of an envelope or using a random number table . Anderson and Dill first randomly assigned about 100 participants to each of their two groups (Group A and Group B). Because they used random assignment to conditions, they could be confident that, before the experimental manipulation occurred, the students in Group A were, on average, equivalent to the students in Group B on every possible variable, including variables that are likely to be related to aggression, such as parental discipline style, peer relationships, hormone levels, diet — and in fact everything else.
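
In software, random assignment amounts to shuffling the participant list and splitting it, the modern equivalent of drawing numbers out of an envelope. The sketch below uses hypothetical participant IDs and a two-condition design like Anderson and Dill’s.

```python
import random

# Hypothetical participant IDs for a two-condition experiment.
participants = [f"P{i:03d}" for i in range(1, 201)]   # 200 participants

random.seed(7)                  # fixed seed only so the sketch is reproducible
random.shuffle(participants)    # random order, unrelated to any participant characteristic

group_a = participants[:100]    # e.g., assigned to play the violent game
group_b = participants[100:]    # e.g., assigned to play the nonviolent game

print(len(group_a), len(group_b))   # 100 100
```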

Then, after they had created initial equivalence, Anderson and Dill created the experimental manipulation — they had the participants in Group A play the violent game and the participants in Group B play the nonviolent game. Then they compared the dependent variable (the white noise blasts) between the two groups, finding that the students who had viewed the violent video game gave significantly longer noise blasts than did the students who had played the nonviolent game.

Anderson and Dill had from the outset created initial equivalence between the groups. This initial equivalence allowed them to observe differences in the white noise levels between the two groups after the experimental manipulation, leading to the conclusion that it was the independent variable (and not some other variable) that caused these differences. The idea is that the only thing that was different between the students in the two groups was the video game they had played.

Despite the advantage of determining causation, experiments do have limitations. One is that they are often conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. Second, and more important, is that some of the most interesting and key social variables cannot be experimentally manipulated. If we want to study the influence of the size of a mob on the destructiveness of its behaviour, or to compare the personality characteristics of people who join suicide cults with those of people who do not join such cults, these relationships must be assessed using correlational designs, because it is simply not possible to experimentally manipulate these variables.

Key Takeaways

  • Descriptive, correlational, and experimental research designs are used to collect and analyze data.
  • Descriptive designs include case studies, surveys, and naturalistic observation. The goal of these designs is to get a picture of the current thoughts, feelings, or behaviours in a given group of people. Descriptive research is summarized using descriptive statistics.
  • Correlational research designs measure two or more relevant variables and assess a relationship between or among them. The variables may be presented on a scatter plot to visually show the relationships. The Pearson Correlation Coefficient ( r ) is a measure of the strength of linear relationship between two variables.
  • Common-causal variables may cause both the predictor and outcome variable in a correlational design, producing a spurious relationship. The possibility of common-causal variables makes it impossible to draw causal conclusions from correlational research designs.
  • Experimental research involves the manipulation of an independent variable and the measurement of a dependent variable. Random assignment to conditions is normally used to create initial equivalence between the groups, allowing researchers to draw causal conclusions.

Exercises and Critical Thinking

  • There is a negative correlation between the row that a student sits in in a large class (when the rows are numbered from front to back) and his or her final grade in the class. Do you think this represents a causal relationship or a spurious relationship, and why?
  • Think of two variables (other than those mentioned in this book) that are likely to be correlated, but in which the correlation is probably spurious. What is the likely common-causal variable that is producing the relationship?
  • Imagine a researcher wants to test the hypothesis that participating in psychotherapy will cause a decrease in reported anxiety. Describe the type of research design the investigator might use to draw this conclusion. What would be the independent and dependent variables in the research?

Image Attributions

Figure 3.4: “Reading newspaper” by Alaskan Dude (http://commons.wikimedia.org/wiki/File:Reading_newspaper.jpg) is licensed under CC BY 2.0.

References

Aiken, L., & West, S. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage.

Ainsworth, M. S., Blehar, M. C., Waters, E., & Wall, S. (1978).  Patterns of attachment: A psychological study of the strange situation . Hillsdale, NJ: Lawrence Erlbaum Associates.

Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life.  Journal of Personality and Social Psychology, 78 (4), 772–790.

Damasio, H., Grabowski, T., Frank, R., Galaburda, A. M., Damasio, A. R., Cacioppo, J. T., & Berntson, G. G. (2005). The return of Phineas Gage: Clues about the brain from the skull of a famous patient. In  Social neuroscience: Key readings.  (pp. 21–28). New York, NY: Psychology Press.

Freud, S. (1909/1964). Analysis of phobia in a five-year-old boy. In E. A. Southwell & M. Merbaum (Eds.),  Personality: Readings in theory and research  (pp. 3–32). Belmont, CA: Wadsworth. (Original work published 1909).

Kotowicz, Z. (2007). The strange case of Phineas Gage.  History of the Human Sciences, 20 (1), 115–131.

Rokeach, M. (1964).  The three Christs of Ypsilanti: A psychological study . New York, NY: Knopf.

Stangor, C. (2011). Research methods for the behavioural sciences (4th ed.). Mountain View, CA: Cengage.

Long Descriptions

Figure 3.6 long description: There are 25 families. 24 families have an income between $44,000 and $111,000 and one family has an income of $3,800,000. The mean income is $223,960 while the median income is $73,000.

Figure 3.10 long description: Types of scatter plots.

  • Positive linear, r = +.82. The plots on the graph form a rough line that runs from lower left to upper right.
  • Negative linear, r = –.70. The plots on the graph form a rough line that runs from upper left to lower right.
  • Independent, r = 0.00. The plots on the graph are spread out around the centre.
  • Curvilinear, r = 0.00. The plots on the graph form a rough line that goes up and then down like a hill.
  • Curvilinear, r = 0.00. The plots on the graph form a rough line that goes down and then up like a ditch.


Introduction to Psychology - 1st Canadian Edition Copyright © 2014 by Jennifer Walinga and Charles Stangor is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

case study vs experimental research

Logo for M Libraries Publishing

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

2.2 Psychologists Use Descriptive, Correlational, and Experimental Research Designs to Understand Behavior

Learning objectives.

  • Differentiate the goals of descriptive, correlational, and experimental research designs and explain the advantages and disadvantages of each.
  • Explain the goals of descriptive research and the statistical techniques used to interpret it.
  • Summarize the uses of correlational research and describe why correlational research cannot be used to infer causality.
  • Review the procedures of experimental research and explain how it can be used to draw causal inferences.

Psychologists agree that if their ideas and theories about human behavior are to be taken seriously, they must be backed up by data. However, the research of different psychologists is designed with different goals in mind, and the different goals require different approaches. These varying approaches, summarized in Table 2.2 “Characteristics of the Three Research Designs” , are known as research designs . A research design is the specific method a researcher uses to collect, analyze, and interpret data . Psychologists use three major types of research designs in their research, and each provides an essential avenue for scientific investigation. Descriptive research is research designed to provide a snapshot of the current state of affairs . Correlational research is research designed to discover relationships among variables and to allow the prediction of future events from present knowledge . Experimental research is research in which initial equivalence among research participants in more than one group is created, followed by a manipulation of a given experience for these groups and a measurement of the influence of the manipulation . Each of the three research designs varies according to its strengths and limitations, and it is important to understand how each differs.

Table 2.2 Characteristics of the Three Research Designs

Research design Goal Advantages Disadvantages
Descriptive To create a snapshot of the current state of affairs Provides a relatively complete picture of what is occurring at a given time. Allows the development of questions for further study. Does not assess relationships among variables. May be unethical if participants do not know they are being observed.
Correlational To assess the relationships between and among two or more variables Allows testing of expected relationships between and among variables and the making of predictions. Can assess these relationships in everyday life events. Cannot be used to draw inferences about the causal relationships between and among the variables.
Experimental To assess the causal impact of one or more experimental manipulations on a dependent variable Allows drawing of conclusions about the causal relationships among variables. Cannot experimentally manipulate many important variables. May be expensive and time consuming.
There are three major research designs used by psychologists, and each has its own advantages and disadvantages.

Stangor, C. (2011). Research methods for the behavioral sciences (4th ed.). Mountain View, CA: Cengage.

Descriptive Research: Assessing the Current State of Affairs

Descriptive research is designed to create a snapshot of the current thoughts, feelings, or behavior of individuals. This section reviews three types of descriptive research: case studies , surveys , and naturalistic observation .

Sometimes the data in a descriptive research project are based on only a small set of individuals, often only one person or a single small group. These research designs are known as case studies — descriptive records of one or more individual’s experiences and behavior . Sometimes case studies involve ordinary individuals, as when developmental psychologist Jean Piaget used his observation of his own children to develop his stage theory of cognitive development. More frequently, case studies are conducted on individuals who have unusual or abnormal experiences or characteristics or who find themselves in particularly difficult or stressful situations. The assumption is that by carefully studying individuals who are socially marginal, who are experiencing unusual situations, or who are going through a difficult phase in their lives, we can learn something about human nature.

Sigmund Freud was a master of using the psychological difficulties of individuals to draw conclusions about basic psychological processes. Freud wrote case studies of some of his most interesting patients and used these careful examinations to develop his important theories of personality. One classic example is Freud’s description of “Little Hans,” a child whose fear of horses the psychoanalyst interpreted in terms of repressed sexual impulses and the Oedipus complex (Freud (1909/1964).

Three news papers on a table (The Daily Telegraph, The Guardian, and The Times), all predicting Obama has the edge in the early polls.

Political polls reported in newspapers and on the Internet are descriptive research designs that provide snapshots of the likely voting behavior of a population.

Another well-known case study is Phineas Gage, a man whose thoughts and emotions were extensively studied by cognitive psychologists after a railroad spike was blasted through his skull in an accident. Although there is question about the interpretation of this case study (Kotowicz, 2007), it did provide early evidence that the brain’s frontal lobe is involved in emotion and morality (Damasio et al., 2005). An interesting example of a case study in clinical psychology is described by Rokeach (1964), who investigated in detail the beliefs and interactions among three patients with schizophrenia, all of whom were convinced they were Jesus Christ.

In other cases the data from descriptive research projects come in the form of a survey — a measure administered through either an interview or a written questionnaire to get a picture of the beliefs or behaviors of a sample of people of interest . The people chosen to participate in the research (known as the sample ) are selected to be representative of all the people that the researcher wishes to know about (the population ). In election polls, for instance, a sample is taken from the population of all “likely voters” in the upcoming elections.

The results of surveys may sometimes be rather mundane, such as “Nine out of ten doctors prefer Tymenocin,” or “The median income in Montgomery County is $36,712.” Yet other times (particularly in discussions of social behavior), the results can be shocking: “More than 40,000 people are killed by gunfire in the United States every year,” or “More than 60% of women between the ages of 50 and 60 suffer from depression.” Descriptive research is frequently used by psychologists to get an estimate of the prevalence (or incidence ) of psychological disorders.

A final type of descriptive research—known as naturalistic observation —is research based on the observation of everyday events . For instance, a developmental psychologist who watches children on a playground and describes what they say to each other while they play is conducting descriptive research, as is a biopsychologist who observes animals in their natural habitats. One example of observational research involves a systematic procedure known as the strange situation , used to get a picture of how adults and young children interact. The data that are collected in the strange situation are systematically coded in a coding sheet such as that shown in Table 2.3 “Sample Coding Form Used to Assess Child’s and Mother’s Behavior in the Strange Situation” .

Table 2.3 Sample Coding Form Used to Assess Child’s and Mother’s Behavior in the Strange Situation

Coder name:
Mother and baby play alone
Mother puts baby down
Stranger enters room
Mother leaves room; stranger plays with baby
Mother reenters, greets and may comfort baby, then leaves again
Stranger tries to play with baby
Mother reenters and picks up baby
The baby moves toward, grasps, or climbs on the adult.
The baby resists being put down by the adult by crying or trying to climb back up.
The baby pushes, hits, or squirms to be put down from the adult’s arms.
The baby turns away or moves away from the adult.
This table represents a sample coding sheet from an episode of the “strange situation,” in which an infant (usually about 1 year old) is observed playing in a room with two adults—the child’s mother and a stranger. Each of the four coding categories is scored by the coder from 1 (the baby makes no effort to engage in the behavior) to 7 (the baby makes a significant effort to engage in the behavior). More information about the meaning of the coding can be found in Ainsworth, Blehar, Waters, and Wall (1978).

The results of descriptive research projects are analyzed using descriptive statistics — numbers that summarize the distribution of scores on a measured variable . Most variables have distributions similar to that shown in Figure 2.5 “Height Distribution” , where most of the scores are located near the center of the distribution, and the distribution is symmetrical and bell-shaped. A data distribution that is shaped like a bell is known as a normal distribution .

Table 2.4 Height and Family Income for 25 Students

Student name Height in inches Family income in dollars
Lauren 62 48,000
Courtnie 62 57,000
Leslie 63 93,000
Renee 64 107,000
Katherine 64 110,000
Jordan 65 93,000
Rabiah 66 46,000
Alina 66 84,000
Young Su 67 68,000
Martin 67 49,000
Hanzhu 67 73,000
Caitlin 67 3,800,000
Steven 67 107,000
Emily 67 64,000
Amy 68 67,000
Jonathan 68 51,000
Julian 68 48,000
Alissa 68 93,000
Christine 69 93,000
Candace 69 111,000
Xiaohua 69 56,000
Charlie 70 94,000
Timothy 71 73,000
Ariane 72 70,000
Logan 72 44,000

Figure 2.5 Height Distribution

The distribution of the heights of the students in a class will form a normal distribution. In this sample the mean (M) = 67.12 and the standard deviation (s) = 2.74.

The distribution of the heights of the students in a class will form a normal distribution. In this sample the mean ( M ) = 67.12 and the standard deviation ( s ) = 2.74.

A distribution can be described in terms of its central tendency —that is, the point in the distribution around which the data are centered—and its dispersion , or spread. The arithmetic average, or arithmetic mean , is the most commonly used measure of central tendency . It is computed by calculating the sum of all the scores of the variable and dividing this sum by the number of participants in the distribution (denoted by the letter N ). In the data presented in Figure 2.5 “Height Distribution” , the mean height of the students is 67.12 inches. The sample mean is usually indicated by the letter M .

In some cases, however, the data distribution is not symmetrical. This occurs when there are one or more extreme scores (known as outliers ) at one end of the distribution. Consider, for instance, the variable of family income (see Figure 2.6 “Family Income Distribution” ), which includes an outlier (a value of $3,800,000). In this case the mean is not a good measure of central tendency. Although it appears from Figure 2.6 “Family Income Distribution” that the central tendency of the family income variable should be around $70,000, the mean family income is actually $223,960. The single very extreme income has a disproportionate impact on the mean, resulting in a value that does not well represent the central tendency.

The median is used as an alternative measure of central tendency when distributions are not symmetrical. The median is the score in the center of the distribution, meaning that 50% of the scores are greater than the median and 50% of the scores are less than the median . In our case, the median household income ($73,000) is a much better indication of central tendency than is the mean household income ($223,960).

Figure 2.6 Family Income Distribution

The distribution of family incomes is likely to be nonsymmetrical because some incomes can be very large in comparison to most incomes. In this case the median or the mode is a better indicator of central tendency than is the mean.

The distribution of family incomes is likely to be nonsymmetrical because some incomes can be very large in comparison to most incomes. In this case the median or the mode is a better indicator of central tendency than is the mean.

A final measure of central tendency, known as the mode , represents the value that occurs most frequently in the distribution . You can see from Figure 2.6 “Family Income Distribution” that the mode for the family income variable is $93,000 (it occurs four times).

In addition to summarizing the central tendency of a distribution, descriptive statistics convey information about how the scores of the variable are spread around the central tendency. Dispersion refers to the extent to which the scores are all tightly clustered around the central tendency, like this:

Graph of a tightly clustered central tendency.

Or they may be more spread out away from it, like this:

Graph of a more spread out central tendency.

One simple measure of dispersion is to find the largest (the maximum ) and the smallest (the minimum ) observed values of the variable and to compute the range of the variable as the maximum observed score minus the minimum observed score. You can check that the range of the height variable in Figure 2.5 “Height Distribution” is 72 – 62 = 10. The standard deviation , symbolized as s , is the most commonly used measure of dispersion . Distributions with a larger standard deviation have more spread. The standard deviation of the height variable is s = 2.74, and the standard deviation of the family income variable is s = $745,337.

An advantage of descriptive research is that it attempts to capture the complexity of everyday behavior. Case studies provide detailed information about a single person or a small group of people, surveys capture the thoughts or reported behaviors of a large population of people, and naturalistic observation objectively records the behavior of people or animals as it occurs naturally. Thus descriptive research is used to provide a relatively complete understanding of what is currently happening.

Despite these advantages, descriptive research has a distinct disadvantage in that, although it allows us to get an idea of what is currently happening, it is usually limited to static pictures. Although descriptions of particular experiences may be interesting, they are not always transferable to other individuals in other situations, nor do they tell us exactly why specific behaviors or events occurred. For instance, descriptions of individuals who have suffered a stressful event, such as a war or an earthquake, can be used to understand the individuals’ reactions to the event but cannot tell us anything about the long-term effects of the stress. And because there is no comparison group that did not experience the stressful situation, we cannot know what these individuals would be like if they hadn’t had the stressful experience.

Correlational Research: Seeking Relationships Among Variables

In contrast to descriptive research, which is designed primarily to provide static pictures, correlational research involves the measurement of two or more relevant variables and an assessment of the relationship between or among those variables. For instance, the variables of height and weight are systematically related (correlated) because taller people generally weigh more than shorter people. In the same way, study time and memory errors are also related, because the more time a person is given to study a list of words, the fewer errors he or she will make. When there are two variables in the research design, one of them is called the predictor variable and the other the outcome variable . The research design can be visualized like this, where the curved arrow represents the expected correlation between the two variables:

Figure 2.2.2

Left: Predictor variable, Right: Outcome variable.

One way of organizing the data from a correlational study with two variables is to graph the values of each of the measured variables using a scatter plot . As you can see in Figure 2.10 “Examples of Scatter Plots” , a scatter plot is a visual image of the relationship between two variables . A point is plotted for each individual at the intersection of his or her scores for the two variables. When the association between the variables on the scatter plot can be easily approximated with a straight line, as in parts (a) and (b) of Figure 2.10 “Examples of Scatter Plots” , the variables are said to have a linear relationship .

When the straight line indicates that individuals who have above-average values for one variable also tend to have above-average values for the other variable, as in part (a), the relationship is said to be positive linear . Examples of positive linear relationships include those between height and weight, between education and income, and between age and mathematical abilities in children. In each case people who score higher on one of the variables also tend to score higher on the other variable. Negative linear relationships , in contrast, as shown in part (b), occur when above-average values for one variable tend to be associated with below-average values for the other variable. Examples of negative linear relationships include those between the age of a child and the number of diapers the child uses, and between practice on and errors made on a learning task. In these cases people who score higher on one of the variables tend to score lower on the other variable.

Relationships between variables that cannot be described with a straight line are known as nonlinear relationships . Part (c) of Figure 2.10 “Examples of Scatter Plots” shows a common pattern in which the distribution of the points is essentially random. In this case there is no relationship at all between the two variables, and they are said to be independent . Parts (d) and (e) of Figure 2.10 “Examples of Scatter Plots” show patterns of association in which, although there is an association, the points are not well described by a single straight line. For instance, part (d) shows the type of relationship that frequently occurs between anxiety and performance. Increases in anxiety from low to moderate levels are associated with performance increases, whereas increases in anxiety from moderate to high levels are associated with decreases in performance. Relationships that change in direction and thus are not described by a single straight line are called curvilinear relationships .

Figure 2.10 Examples of Scatter Plots

Some examples of relationships between two variables as shown in scatter plots. Note that the Pearson correlation coefficient (r) between variables that have curvilinear relationships will likely be close to zero.

Some examples of relationships between two variables as shown in scatter plots. Note that the Pearson correlation coefficient ( r ) between variables that have curvilinear relationships will likely be close to zero.

Adapted from Stangor, C. (2011). Research methods for the behavioral sciences (4th ed.). Mountain View, CA: Cengage.

The most common statistical measure of the strength of linear relationships among variables is the Pearson correlation coefficient , which is symbolized by the letter r . The value of the correlation coefficient ranges from r = –1.00 to r = +1.00. The direction of the linear relationship is indicated by the sign of the correlation coefficient. Positive values of r (such as r = .54 or r = .67) indicate that the relationship is positive linear (i.e., the pattern of the dots on the scatter plot runs from the lower left to the upper right), whereas negative values of r (such as r = –.30 or r = –.72) indicate negative linear relationships (i.e., the dots run from the upper left to the lower right). The strength of the linear relationship is indexed by the distance of the correlation coefficient from zero (its absolute value). For instance, r = –.54 is a stronger relationship than r = .30, and r = .72 is a stronger relationship than r = –.57. Because the Pearson correlation coefficient only measures linear relationships, variables that have curvilinear relationships are not well described by r , and the observed correlation will be close to zero.

It is also possible to study relationships among more than two measures at the same time. A research design in which more than one predictor variable is used to predict a single outcome variable is analyzed through multiple regression (Aiken & West, 1991). Multiple regression is a statistical technique, based on correlation coefficients among variables, that allows predicting a single outcome variable from more than one predictor variable . For instance, Figure 2.11 “Prediction of Job Performance From Three Predictor Variables” shows a multiple regression analysis in which three predictor variables are used to predict a single outcome. The use of multiple regression analysis shows an important advantage of correlational research designs—they can be used to make predictions about a person’s likely score on an outcome variable (e.g., job performance) based on knowledge of other variables.

Figure 2.11 Prediction of Job Performance From Three Predictor Variables

Multiple regression allows scientists to predict the scores on a single outcome variable using more than one predictor variable.

Multiple regression allows scientists to predict the scores on a single outcome variable using more than one predictor variable.

An important limitation of correlational research designs is that they cannot be used to draw conclusions about the causal relationships among the measured variables. Consider, for instance, a researcher who has hypothesized that viewing violent behavior will cause increased aggressive play in children. He has collected, from a sample of fourth-grade children, a measure of how many violent television shows each child views during the week, as well as a measure of how aggressively each child plays on the school playground. From his collected data, the researcher discovers a positive correlation between the two measured variables.

Although this positive correlation appears to support the researcher’s hypothesis, it cannot be taken to indicate that viewing violent television causes aggressive behavior. Although the researcher is tempted to assume that viewing violent television causes aggressive play,

Viewing violent TV may lead to aggressive play.

there are other possibilities. One alternate possibility is that the causal direction is exactly opposite from what has been hypothesized. Perhaps children who have behaved aggressively at school develop residual excitement that leads them to want to watch violent television shows at home:

Or perhaps aggressive play leads to viewing violent TV.

Although this possibility may seem less likely, there is no way to rule out the possibility of such reverse causation on the basis of this observed correlation. It is also possible that both causal directions are operating and that the two variables cause each other:

One may cause the other, but there could be a common-causal variable.

Still another possible explanation for the observed correlation is that it has been produced by the presence of a common-causal variable (also known as a third variable ). A common-causal variable is a variable that is not part of the research hypothesis but that causes both the predictor and the outcome variable and thus produces the observed correlation between them . In our example a potential common-causal variable is the discipline style of the children’s parents. Parents who use a harsh and punitive discipline style may produce children who both like to watch violent television and who behave aggressively in comparison to children whose parents use less harsh discipline:

An example: Parents' discipline style may cause viewing violent TV, and it may also cause aggressive play.

In this case, television viewing and aggressive play would be positively correlated (as indicated by the curved arrow between them), even though neither one caused the other but they were both caused by the discipline style of the parents (the straight arrows). When the predictor and outcome variables are both caused by a common-causal variable, the observed relationship between them is said to be spurious . A spurious relationship is a relationship between two variables in which a common-causal variable produces and “explains away” the relationship . If effects of the common-causal variable were taken away, or controlled for, the relationship between the predictor and outcome variables would disappear. In the example the relationship between aggression and television viewing might be spurious because by controlling for the effect of the parents’ disciplining style, the relationship between television viewing and aggressive behavior might go away.

Common-causal variables in correlational research designs can be thought of as “mystery” variables because, as they have not been measured, their presence and identity are usually unknown to the researcher. Since it is not possible to measure every variable that could cause both the predictor and outcome variables, the existence of an unknown common-causal variable is always a possibility. For this reason, we are left with the basic limitation of correlational research: Correlation does not demonstrate causation. It is important that when you read about correlational research projects, you keep in mind the possibility of spurious relationships, and be sure to interpret the findings appropriately. Although correlational research is sometimes reported as demonstrating causality without any mention being made of the possibility of reverse causation or common-causal variables, informed consumers of research, like you, are aware of these interpretational problems.

In sum, correlational research designs have both strengths and limitations. One strength is that they can be used when experimental research is not possible because the predictor variables cannot be manipulated. Correlational designs also have the advantage of allowing the researcher to study behavior as it occurs in everyday life. And we can also use correlational designs to make predictions—for instance, to predict from the scores on their battery of tests the success of job trainees during a training session. But we cannot use such correlational information to determine whether the training caused better job performance. For that, researchers rely on experiments.

Experimental Research: Understanding the Causes of Behavior

The goal of experimental research design is to provide more definitive conclusions about the causal relationships among the variables in the research hypothesis than is available from correlational designs. In an experimental research design, the variables of interest are called the independent variable (or variables ) and the dependent variable . The independent variable in an experiment is the causing variable that is created (manipulated) by the experimenter . The dependent variable in an experiment is a measured variable that is expected to be influenced by the experimental manipulation . The research hypothesis suggests that the manipulated independent variable or variables will cause changes in the measured dependent variables. We can diagram the research hypothesis by using an arrow that points in one direction. This demonstrates the expected direction of causality:

Figure 2.2.3

Viewing violence (independent variable) and aggressive behavior (dependent variable).

Research Focus: Video Games and Aggression

Consider an experiment conducted by Anderson and Dill (2000). The study was designed to test the hypothesis that viewing violent video games would increase aggressive behavior. In this research, male and female undergraduates from Iowa State University were given a chance to play with either a violent video game (Wolfenstein 3D) or a nonviolent video game (Myst). During the experimental session, the participants played their assigned video games for 15 minutes. Then, after the play, each participant played a competitive game with an opponent in which the participant could deliver blasts of white noise through the earphones of the opponent. The operational definition of the dependent variable (aggressive behavior) was the level and duration of noise delivered to the opponent. The design of the experiment is shown in Figure 2.17 “An Experimental Research Design” .

Figure 2.17 An Experimental Research Design

Two advantages of the experimental research design are (1) the assurance that the independent variable (also known as the experimental manipulation) occurs prior to the measured dependent variable, and (2) the creation of initial equivalence between the conditions of the experiment (in this case by using random assignment to conditions).

Experimental designs have two very nice features. For one, they guarantee that the independent variable occurs prior to the measurement of the dependent variable. This eliminates the possibility of reverse causation. Second, the influence of common-causal variables is controlled, and thus eliminated, by creating initial equivalence among the participants in each of the experimental conditions before the manipulation occurs.

The most common method of creating equivalence among the experimental conditions is through random assignment to conditions , a procedure in which the condition that each participant is assigned to is determined through a random process, such as drawing numbers out of an envelope or using a random number table . Anderson and Dill first randomly assigned about 100 participants to each of their two groups (Group A and Group B). Because they used random assignment to conditions, they could be confident that, before the experimental manipulation occurred, the students in Group A were, on average, equivalent to the students in Group B on every possible variable, including variables that are likely to be related to aggression, such as parental discipline style, peer relationships, hormone levels, diet—and in fact everything else.
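As a rough illustration of the logic (not Anderson and Dill's actual procedure; the participant labels and group sizes are invented), random assignment can be sketched as follows: shuffling the participant list and splitting it in half means that any individual difference is equally likely to end up in either condition, so the groups are equivalent on average before the manipulation.

```python
# Hypothetical sketch of random assignment to two conditions.
import random

participants = [f"P{i:03d}" for i in range(1, 201)]   # 200 invented participant IDs
random.seed(42)                                        # for a reproducible example
random.shuffle(participants)                           # the random process

half = len(participants) // 2
group_a = participants[:half]   # e.g., assigned to play the violent game
group_b = participants[half:]   # e.g., assigned to play the nonviolent game

print(len(group_a), len(group_b))   # 100 100
```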

Then, after they had created initial equivalence, Anderson and Dill created the experimental manipulation—they had the participants in Group A play the violent game and the participants in Group B play the nonviolent game. Then they compared the dependent variable (the white noise blasts) between the two groups, finding that the students who had viewed the violent video game gave significantly longer noise blasts than did the students who had played the nonviolent game.

Anderson and Dill had from the outset created initial equivalence between the groups. This initial equivalence allowed them to observe differences in the white noise levels between the two groups after the experimental manipulation, leading to the conclusion that it was the independent variable (and not some other variable) that caused these differences. The idea is that the only thing that was different between the students in the two groups was the video game they had played.

Despite the advantage of determining causation, experiments do have limitations. One is that they are often conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. Second, and more important, is that some of the most interesting and key social variables cannot be experimentally manipulated. If we want to study the influence of the size of a mob on the destructiveness of its behavior, or to compare the personality characteristics of people who join suicide cults with those of people who do not join such cults, these relationships must be assessed using correlational designs, because it is simply not possible to experimentally manipulate these variables.

Key Takeaways

  • Descriptive, correlational, and experimental research designs are used to collect and analyze data.
  • Descriptive designs include case studies, surveys, and naturalistic observation. The goal of these designs is to get a picture of the current thoughts, feelings, or behaviors in a given group of people. Descriptive research is summarized using descriptive statistics.
  • Correlational research designs measure two or more relevant variables and assess a relationship between or among them. The variables may be presented on a scatter plot to visually show the relationships. The Pearson Correlation Coefficient ( r ) is a measure of the strength of linear relationship between two variables.
  • Common-causal variables may cause both the predictor and outcome variable in a correlational design, producing a spurious relationship. The possibility of common-causal variables makes it impossible to draw causal conclusions from correlational research designs.
  • Experimental research involves the manipulation of an independent variable and the measurement of a dependent variable. Random assignment to conditions is normally used to create initial equivalence between the groups, allowing researchers to draw causal conclusions.

Exercises and Critical Thinking

  • There is a negative correlation between the row that a student sits in in a large class (when the rows are numbered from front to back) and his or her final grade in the class. Do you think this represents a causal relationship or a spurious relationship, and why?
  • Think of two variables (other than those mentioned in this book) that are likely to be correlated, but in which the correlation is probably spurious. What is the likely common-causal variable that is producing the relationship?
  • Imagine a researcher wants to test the hypothesis that participating in psychotherapy will cause a decrease in reported anxiety. Describe the type of research design the investigator might use to draw this conclusion. What would be the independent and dependent variables in the research?

Aiken, L., & West, S. (1991). Multiple regression: Testing and interpreting interactions . Newbury Park, CA: Sage.

Ainsworth, M. S., Blehar, M. C., Waters, E., & Wall, S. (1978). Patterns of attachment: A psychological study of the strange situation . Hillsdale, NJ: Lawrence Erlbaum Associates.

Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life. Journal of Personality and Social Psychology, 78 (4), 772–790.

Damasio, H., Grabowski, T., Frank, R., Galaburda, A. M., & Damasio, A. R. (2005). The return of Phineas Gage: Clues about the brain from the skull of a famous patient. In J. T. Cacioppo & G. G. Berntson (Eds.), Social neuroscience: Key readings (pp. 21–28). New York, NY: Psychology Press.

Freud, S. (1964). Analysis of phobia in a five-year-old boy. In E. A. Southwell & M. Merbaum (Eds.), Personality: Readings in theory and research (pp. 3–32). Belmont, CA: Wadsworth. (Original work published 1909)

Kotowicz, Z. (2007). The strange case of Phineas Gage. History of the Human Sciences, 20 (1), 115–131.

Rokeach, M. (1964). The three Christs of Ypsilanti: A psychological study . New York, NY: Knopf.

Introduction to Psychology Copyright © 2015 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Research Methods


Qualitative vs. Quantitative

Correlational vs. experimental, empirical vs. non-empirical.


Qualitative Research gathers data about lived experiences, emotions or behaviors, and the meanings individuals attach to them. It assists in enabling researchers to gain a better understanding of complex concepts, social interactions or cultural phenomena. This type of research is useful in the exploration of how or why things have occurred, interpreting events and describing actions.

Quantitative Research gathers numerical data which can be ranked, measured or categorized through statistical analysis. It assists with uncovering patterns or relationships, and for making generalizations. This type of research is useful for finding out how many, how much, how often, or to what extent.

Qualitative data collection methods:

  • Interviews: can be structured, semi-structured or unstructured.
  • Focus groups: several participants discussing a topic or set of questions.
  • Observations: can be on-site, in-context, or role play.
  • Document analysis: analysis of correspondence or reports.
  • Oral histories: memories told to a researcher.

Quantitative data collection methods:

  • Surveys: the same questions asked to large numbers of participants (e.g., Likert scale response).
  • Experiments: test hypotheses in controlled conditions.
  • Observations: counting the number of times a phenomenon occurs or coding observed data in order to translate it into numbers.
  • Document analysis: using numerical data from financial reports or counting word occurrences.

Correlational Research cannot determine causal relationships; instead, it examines relationships between variables.

Experimental Research can establish causal relationships because variables can be manipulated.

Empirical Studies are based on evidence. The data is collected through experimentation or observation.

Non-empirical Studies do not require researchers to collect first-hand data.


Going beyond the comparison: toward experimental instructional design research with impact

  • Methodology
  • Published: 28 August 2024


  • Adam G. Gavarkovs 1 ,
  • Rashmi A. Kusurkar 2 , 3 , 4 ,
  • Kulamakan Kulasegaram 5 , 6 &
  • Ryan Brydges 6 , 7  


To design effective instruction, educators need to know what design strategies are generally effective and why these strategies work, based on the mechanisms through which they operate. Experimental comparison studies, which compare one instructional design against another, can generate much needed evidence in support of effective design strategies. However, experimental comparison studies are often not equipped to generate evidence regarding the mechanisms through which strategies operate. Therefore, simply conducting experimental comparison studies may not provide educators with all the information they need to design more effective instruction. To generate evidence for the what and the why of design strategies, we advocate for researchers to conduct experimental comparison studies that include mediation or moderation analyses, which can illuminate the mechanisms through which design strategies operate. The purpose of this article is to provide a conceptual overview of mediation and moderation analyses for researchers who conduct experimental comparison studies in instructional design. While these statistical techniques add complexity to study design and analysis, they hold great promise for providing educators with more powerful information upon which to base their instructional design decisions. Using two real-world examples from our own work, we describe the structure of mediation and moderation analyses, emphasizing the need to control for confounding even in the context of experimental studies. We also discuss the importance of using learning theories to help identify mediating or moderating variables to test.
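As a concrete (and entirely hypothetical) illustration of what a moderation analysis adds to a comparison study, the sketch below simulates a two-condition experiment in which the benefit of a design strategy depends on learners' prior knowledge, and tests the interaction term in an ordinary least squares regression. The variable names, effect sizes, and data are invented and are not drawn from the article.

```python
# Hypothetical moderation analysis: does the effect of the instructional
# condition on a learning outcome depend on a learner characteristic?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "condition": rng.integers(0, 2, size=n),   # 0 = comparison design, 1 = new design
    "prior_knowledge": rng.normal(size=n),     # hypothesized moderator
})
# Simulate an outcome in which the new design helps mainly low-prior-knowledge learners.
df["score"] = (0.5 * df["condition"]
               + 0.3 * df["prior_knowledge"]
               - 0.4 * df["condition"] * df["prior_knowledge"]
               + rng.normal(size=n))

# The condition:prior_knowledge coefficient estimates the moderation effect.
model = smf.ols("score ~ condition * prior_knowledge", data=df).fit()
print(model.summary().tables[1])
```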


Data availability

No datasets were generated or analysed during the current study.

As an alternative to the regression approach, structural equation modelling (SEM) has gained popularity in the health professions education literature (Stoffels et al., 2023 ). SEM requires that a researcher make additional assumptions regarding the functional relationships between the covariates, the mediator(s), and the outcome(s) (VanderWeele, 2012 ). Though specifying these relationships can increase power, it comes with an increased risk of model misspecification (VanderWeele, 2012 ). Accordingly, we recommend that researchers beginning with experimental comparison studies involving a single mediator opt for using the regression-based approach with controls for mediator-outcome confounding (VanderWeele, 2012 ).
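A minimal sketch of the regression-based single-mediator approach described above is given below, assuming a randomized exposure, one mediator, one outcome, and one measured covariate included in both models to address mediator-outcome confounding. All variable names and data are invented; in practice the indirect effect would be reported with a bootstrap or Monte Carlo confidence interval rather than a bare point estimate.

```python
# Hypothetical regression-based single-mediator analysis with a covariate
# included to address mediator-outcome confounding. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
covariate = rng.normal(size=n)                                      # possible mediator-outcome confounder
exposure = rng.integers(0, 2, size=n)                               # randomized instructional condition
mediator = 0.5 * exposure + 0.4 * covariate + rng.normal(size=n)    # e.g., situational motivation
outcome = 0.6 * mediator + 0.3 * covariate + rng.normal(size=n)     # e.g., post-test score
df = pd.DataFrame(dict(exposure=exposure, mediator=mediator,
                       outcome=outcome, covariate=covariate))

m_model = smf.ols("mediator ~ exposure + covariate", data=df).fit()
y_model = smf.ols("outcome ~ exposure + mediator + covariate", data=df).fit()

a = m_model.params["exposure"]        # exposure -> mediator path
b = y_model.params["mediator"]        # mediator -> outcome path, adjusted
indirect = a * b                      # product-of-coefficients indirect effect
direct = y_model.params["exposure"]   # direct effect of exposure on outcome
print(f"indirect effect = {indirect:.2f}, direct effect = {direct:.2f}")
```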

We did not actually analyze our data in the manner described below, for reasons described in our published manuscript. Here, we describe an alternative data analysis strategy for clarity.

Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51 (6), 1173–1182. https://doi.org/10.1037/0022-3514.51.6.1173


Bürkner, P. C. (2017). brms: An R package for Bayesian multilevel models using Stan . Journal of Statistical Software . https://doi.org/10.18637/jss.v080.i01

Carver, C. S., & Scheier, M. F. (1998). On the Self-Regulation of Behavior (1st ed.). Cambridge University Press. https://doi.org/10.1017/CBO9781139174794


Cheung, J. J. H., & Kulasegaram, K. M. (2022). Beyond the tensions within transfer theories: Implications for adaptive expertise in the health professions. Advances in Health Sciences Education, 27 (5), 1293–1315. https://doi.org/10.1007/s10459-022-10174-y

Cheung, J. J. H., Kulasegaram, K. M., Woods, N. N., & Brydges, R. (2019). Why Content and cognition matter: Integrating conceptual knowledge to support simulation-based procedural skills transfer. Journal of General Internal Medicine, 34 (6), 969–977. https://doi.org/10.1007/s11606-019-04959-y

Cheung, J. J. H., Kulasegaram, K. M., Woods, N. N., & Brydges, R. (2021). Making concepts material: A randomized trial exploring simulation as a medium to enhance cognitive integration and transfer of learning. Simulation in Healthcare: THe Journal of the Society for Simulation in Healthcare, 16 (6), 392–400. https://doi.org/10.1097/SIH.0000000000000543

Cheung, J. J. H., Kulasegaram, K. M., Woods, N. N., Moulton, C., Ringsted, C. V., & Brydges, R. (2018). Knowing How and Knowing Why: Testing the effect of instruction designed for cognitive integration on procedural skills transfer. Advances in Health Sciences Education, 23 (1), 61–74. https://doi.org/10.1007/s10459-017-9774-1

Cook, D. A. (2005). The research we still are not doing: An agenda for the study of computer-based learning. Academic Medicine, 80 (6), 541–548. https://doi.org/10.1097/00001888-200506000-00005

Cook, D. A. (2009). The failure of e-learning research to inform educational practice, and what we can do about it. Medical Teacher, 31 (2), 158–162. https://doi.org/10.1080/01421590802691393

Durik, A. M., Shechter, O. G., Noh, M., Rozek, C. S., & Harackiewicz, J. M. (2015). What if I can’t? Success expectancies moderate the effects of utility value information on situational interest and performance. Motivation and Emotion, 39 (1), 104–118. https://doi.org/10.1007/s11031-014-9419-0

Ertmer, P. A., & Stepich, D. A. (2005). Instructional design expertise: How will we know it when we see it? Educational Technology, 45 (6), 38–43.


Fiorella, L., & Mayer, R. E. (2016). Eight ways to promote generative learning. Educational Psychology Review, 28 (4), 717–741. https://doi.org/10.1007/s10648-015-9348-9

Friedman, C. P. (1994). The research we should be doing. Academic Medicine, 69 (6), 455–457. https://doi.org/10.1097/00001888-199406000-00005

Gavarkovs, A. G., Crukley, J., Miller, E., Kusurkar, R. A., Kulasegaram, K., & Brydges, R. (2023a). Effectiveness of life goal framing to motivate medical students during online learning: A randomized controlled trial. Perspectives on Medical Education, 12 (1), 444–454. https://doi.org/10.5334/pme.1017

Gavarkovs, A. G., Finan, E., Jensen, R. D., & Brydges, R. (2024). When I say … active learning. Medical Education . https://doi.org/10.1111/medu.15383

Gavarkovs, A. G., Kusurkar, R. A., & Brydges, R. (2023b). The purpose, adaptability, confidence, and engrossment model: A novel approach for supporting professional trainees’ motivation, engagement, and academic achievement. Frontiers in Education, 8 , 1036539. https://doi.org/10.3389/feduc.2023.1036539

Hardré, P. L., Ge, X., & Thomas, M. K. (2005). Toward a model of development for instructional design expertise. Educational Technology, 45 (1), 53–57.

Hatano, G. & Inagaki, I. (1986). Two courses of expertise. In Child Development and Education in Japan (pp. 262–272). W. H. Freeman.

Hayes, A. F. (2022). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd ed.). The Guilford Press.

Kalyuga, S. (2007). Expertise reversal effect and its implications for learner-tailored instruction. Educational Psychology Review, 19 (4), 509–539. https://doi.org/10.1007/s10648-007-9054-3

Kusurkar, R. A. (2023). Self-determination theory in health professions education research and practice. In R. M. Ryan (Ed.), The oxford handbook of self-determination theory (pp. 665–683). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197600047.013.33


Kusurkar, R. A., Croiset, G., & Ten Cate, OTh. J. (2011). Twelve tips to stimulate intrinsic motivation in students through autonomy-supportive classroom teaching derived from Self-Determination Theory. Medical Teacher, 33 (12), 978–982. https://doi.org/10.3109/0142159X.2011.599896

Laidley, T. L., & Braddock, C. H. (2000). Role of adult learning theory in evaluating and designing strategies for teaching residents in ambulatory settings. Advances in Health Sciences Education, 5 (1), 43–54. https://doi.org/10.1023/A:1009863211233

Lawson, A. P., & Mayer, R. E. (2021). Benefits of writing an explanation during pauses in multimedia lessons. Educational Psychology Review, 33 (4), 1859–1885. https://doi.org/10.1007/s10648-021-09594-w

Maheu-Cadotte, M.-A., Cossette, S., Dubé, V., Fontaine, G., Lavallée, A., Lavoie, P., Mailhot, T., & Deschênes, M.-F. (2021). Efficacy of serious games in healthcare professions education: A systematic review and meta-analysis. Simulation in Healthcare: THe Journal of the Society for Simulation in Healthcare, 16 (3), 199–212. https://doi.org/10.1097/SIH.0000000000000512

Mann, K. V. (2004). The role of educational theory in continuing medical education: Has it helped us? Journal of Continuing Education in the Health Professions, 24 (Supplement 1), S22–S30. https://doi.org/10.1002/chp.1340240505

Mayer, R. E. (2023). How to assess whether an instructional intervention has an effect on learning. Educational Psychology Review, 35 (2), 64. https://doi.org/10.1007/s10648-023-09783-9

Schoemann, A. M., Boulton, A. J., & Short, S. D. (2017). Determining power and sample size for simple and complex mediation models. Social Psychological and Personality Science, 8 (4), 379–386. https://doi.org/10.1177/1948550617715068

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference . Houghton Mifflin.

Spencer, S. J., Zanna, M. P., & Fong, G. T. (2005). Establishing a causal chain: Why experiments are often more effective than mediational analyses in examining psychological processes. Journal of Personality and Social Psychology, 89 (6), 845–851. https://doi.org/10.1037/0022-3514.89.6.845

Stoffels, M., Torre, D. M., Sturgis, P., Koster, A. S., Westein, M. P. D., & Kusurkar, R. A. (2023). Steps and decisions involved when conducting structural equation modeling (SEM) analysis. Medical Teacher . https://doi.org/10.1080/0142159X.2023.2263233

Tai, A.-S., Lin, S.-H., Chu, Y.-C., Yu, T., Puhan, M. A., & VanderWeele, T. (2023). Causal mediation analysis with multiple time-varying mediators. Epidemiology, 34 (1), 8–19. https://doi.org/10.1097/EDE.0000000000001555

VanderWeele, T. J. (2012). Invited commentary: Structural equation models and epidemiologic analysis. American Journal of Epidemiology, 176 (7), 608–612. https://doi.org/10.1093/aje/kws213

VanderWeele, T. J. (2015). Explanation in causal inference: Methods for mediation and interaction . Oxford University Press.

VanderWeele, T. J. (2016). Mediation analysis: A practitioner’s guide. Annual Review of Public Health, 37 (1), 17–32. https://doi.org/10.1146/annurev-publhealth-032315-021402

VanderWeele, T. J., & Knol, M. J. (2014). A tutorial on interaction. Epidemiologic Methods . https://doi.org/10.1515/em-2013-0005

Woods, N. N., Brooks, L. R., & Norman, G. R. (2007). It all make sense: Biomedical knowledge, causal connections and memory in the novice diagnostician. Advances in Health Sciences Education, 12 (4), 405–415. https://doi.org/10.1007/s10459-006-9055-x


Author information

Authors and affiliations

Faculty of Medicine, University of British Columbia, City Square East Tower, 555 W 12th Ave, Suite 200, Vancouver, BC, V5Z 3X7, Canada

Adam G. Gavarkovs

Research in Education, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1118, Amsterdam, The Netherlands

Rashmi A. Kusurkar

LEARN! Research Institute for Learning and Education, Faculty of Psychology and Education, VU University Amsterdam, Amsterdam, The Netherlands

Amsterdam Public Health, Quality of Care, Amsterdam, The Netherlands

Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada

Kulamakan Kulasegaram

The Wilson Centre, University of Toronto/University Health Network, Toronto, ON, Canada

Kulamakan Kulasegaram & Ryan Brydges

Department of Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada

Ryan Brydges


Contributions

A.G. conceptualized the topic of the manuscript and wrote the first draft. R.K., K.K., and R.B. provided contributions to subsequent drafts of the manuscript. All authors reviewed the final version of the manuscript.

Corresponding author

Correspondence to Adam G. Gavarkovs .

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Gavarkovs, A.G., Kusurkar, R.A., Kulasegaram, K. et al. Going beyond the comparison: toward experimental instructional design research with impact. Adv in Health Sci Educ (2024). https://doi.org/10.1007/s10459-024-10365-9


Received : 06 March 2024

Accepted : 05 August 2024

Published : 28 August 2024

DOI : https://doi.org/10.1007/s10459-024-10365-9


  • Randomized controlled trial
  • Quantitative data analysis
  • Learning theory

Density‑dependent population regulation in freshwater fishes and small mammals: A literature review and insights for Ecological Risk Assessment

  • Accolla, Chiara
  • Schmolke, Amelie
  • Vaugeois, Maxime
  • Galic, Nika

The regulation of populations through density dependence (DD) has long been a central tenet of studies of ecological systems. As an important factor in regulating populations, DD is also crucial for understanding risks to populations from stressors, including its incorporation into population models applied for this purpose. However, study of density‑dependent regulation is challenging because it can occur through various mechanisms, and their identification in the field, as well as the quantification of the consequences on individuals and populations, can be difficult. We conducted a targeted literature review specifically focusing on empirical laboratory or field studies addressing negative DD in freshwater fish and small rodent populations, two vertebrate groups considered in pesticide Ecological Risk Assessment (ERA). We found that the most commonly recognized causes of negative DD were food (63% of 19 reviewed fish studies, 40% of 25 mammal studies) or space limitations (32% of mammal studies). In addition, trophic interactions were reported as causes of population regulation, with predation shaping mostly small mammal populations (36% of the mammal studies) and cannibalism impacting freshwater fish (26%). In the case of freshwater fish, 63% of the studies were experimental; they were generally short in duration (weeks or months) and focused on the individual‑level causes and effects of DD. Moreover, DD affected mostly juvenile growth and survival of fish (68%). On the other hand, studies on small mammals were mainly based on time series analyzing field population properties over longer timespans (68%). Density dependence primarily affected survival in subadult and adult mammal stages and, to a lesser extent, reproduction (60% vs. 36%). Furthermore, delayed DD was often observed (56%). We conclude by making suggestions on future research paths, providing recommendations for including DD in population models developed for ERA, and making the best use of the available data. Integr Environ Assess Manag 2024;20:1225–1236. © 2023 Syngenta Crop Protection. Integrated Environmental Assessment and Management published by Wiley Periodicals LLC on behalf of Society of Environmental Toxicology & Chemistry (SETAC).

Key Points

  • The study of density‑dependent regulation is challenging because it can occur through various mechanisms and their identification is difficult.
  • We conducted a targeted literature review focusing on studies addressing negative density dependence in freshwater fish and small rodent populations, two vertebrate groups considered in pesticide Ecological Risk Assessment (ERA).
  • The most commonly recognized causes of negative density dependence were food or space limitations, and trophic interactions, but important differences were found between the two species groups.
  • We make suggestions on future research paths, providing recommendations for including density dependence in population models developed for ERA.
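For readers unfamiliar with how negative density dependence is typically encoded in the population models mentioned above, the generic sketch below uses a Ricker map, one common choice, with invented parameter values; it is an illustration of the mechanism, not a model taken from the review.

```python
# Generic illustration of negative density dependence (not from the review):
# in a Ricker map, per-capita growth declines as abundance approaches a
# carrying capacity, so the population self-regulates.
import numpy as np

r = 0.8          # intrinsic growth rate (invented)
K = 1000.0       # carrying capacity (invented)
n_years = 50

N = np.empty(n_years)
N[0] = 50.0                                           # small founding population
for t in range(n_years - 1):
    N[t + 1] = N[t] * np.exp(r * (1.0 - N[t] / K))    # growth slows as N approaches K

print(round(N[-1]))   # abundance settles near the carrying capacity K
```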

ORIGINAL RESEARCH article

Experimental study of dynamic shear stiffness decay characteristics of interbedded soil: a case study in Yangtze River floodplain.

Haizhi Liu

  • 1 Zhejiang Institute of Communications Co., Ltd., Hangzhou, China
  • 2 China Nuclear Power Engineering Co., Ltd., Beijing, China
  • 3 Institute of Geotechnical Engineering, Nanjing Tech University, Nanjing, China
  • 4 School of Civil Engineering, Sanjiang University, Nanjing, China

To explore the characteristics of the dynamic shear modulus of river-phase (as opposed to estuarine) floodplain interbedded soil, undisturbed interbedded soil from the floodplain of the Yangtze River in Nanjing was subjected to strain-controlled cyclic triaxial tests to investigate how the initial effective confining pressure ( σʹ m ), consolidation ratio ( k c ), and degree of consolidation ( U ) influence the maximum dynamic shear modulus G max and the dynamic shear modulus ratio G / G max . The results show that for this soil, G decreases with increasing strain amplitude, and for a given strain amplitude, G increases with increasing σʹ m , k c , and U . Compared with soil from the Yangtze estuary, k c has a greater effect on G max of the floodplain interbedded soil. Finally, a modified Martin-Davidenkov model is proposed for predicting G / G max of river-phase floodplain interbedded soil under different σʹ m , k c , and U.

Introduction

River-phase (as opposed to estuarine) floodplain soil is a typical example of the interbedded soil that is found in deltas, coastal regions, river floodplains, and lakes. In the floodplain of a river ( Li et al., 2014 ; Tankiewicz, 2016 ; Boulanger and DeJong, 2018 ; Bucci, Villamor, and Almond, 2018 ; Beyzaei et al., 2020 ), the annual alternation between dry periods and ones when water is abundant causes the sediment and organic matter content of the water to change periodically, which also leads to similar cyclic variations in the hydrodynamic conditions. Consequently, the sediment exhibits differences in composition, thickness, particle size, and color, creating a unique and intuitive stratified structure. This process repeats many times, producing regular, alternating deposits of sandy and clayey soil.

Many underground structures near rivers and in coastal cities are in interbedded soil. Also, the foundations of coastal harbors and bridges often penetrate interbedded soil, and numerous marine engineering activities involve such soil. Studies have shown that stratified sites have obvious special characteristics in terms of the deformation of foundation supporting structures ( Wan et al., 2022a ), the stability of tunnel excavation and slopes, and the site liquefaction resistance ( Beyzaei et al., 2018 ; Boulanger et al., 2019 ; Tasiopoulou et al., 2019 ; Beyzaei et al., 2020 ; Ecemis, 2021 ; Zhou et al., 2021 ).

Various scholars have conducted tests to determine the static and dynamic characteristics of interlayered soil. Tankiewicz (2015) conducted static triaxial tests on interbedded soil; the observed failure modes revealed pronounced strength anisotropy, with considerable variability in both the failure modes and the shear strength values, and the permeability and shear strength anisotropy of the interbedded soil surpassed those of many other soil types. Ma et al. (2019) conducted a series of ring shear tests on remolded overconsolidated soft interlayers, investigating the influence of remolded water content and consolidation stress on shear behavior under drained conditions. They found that water content can weaken the shear strength of soft interlayers, with cohesion being more sensitive to changes in water content than the friction angle; consolidation stress is also an important factor influencing the strain-softening and strain-hardening behaviors of soft interlayers. Via extensive cyclic triaxial testing, Duong et al. (2016) explored how water content and fines content affected the resilient modulus of interlayer soil sampled from a railway substructure in France. The conclusions indicate that under unsaturated conditions, soil with high fines content exhibits a higher resilient modulus because of the influence of capillary suction. However, as the soil approaches saturation, fine particles negatively affect the resilient modulus. This suggests that protective drainage measures must be implemented for interlayer soil when its mechanical performance is satisfactory under unsaturated conditions but unsatisfactory under saturated conditions.

Studying the mechanical properties of soil via techniques such as X-ray diffraction (XRD), energy-dispersive X-ray analysis (EDXA), and scanning electron microscopy (SEM) offers a multifaceted advantage: these methods provide precise information on mineral composition, elemental distribution, and microstructure, offering crucial support for a comprehensive understanding of the chemical and mechanical properties of soil. Sun et al. (2022) used XRD, EDXA, concentration monitoring, triaxial compression tests, unconfined compressive tests, and SEM to investigate the mineral composition, mechanical properties, and microstructure of weak interlayers in various acidic environments. The results showed that the pH value of the solution and the immersion time were significant factors influencing the undrained strength of the samples. Moreover, as the immersion time increased, microscopic structural parameters showed a decrease in the area of mineral particles and a simultaneous rise in the pore ratio. These microstructural changes observed in images and parameters were consistent with the macroscopic physical and mechanical properties of the samples.

The interlayers can be as thin as a few millimeters, and conventional in situ investigation techniques such as CPT and sonic borings fail to characterize the laminar structure ( Beyzaei et al., 2020 ). To address this challenge, Tankiewicz (2016) conducted a thorough investigation using high-quality samples. Through the application of SEM and computed microtomography, intricate 3D models were reconstructed, unveiling the detailed nature of the varved clay structure. Furthermore, the mechanical properties of individual layers were scrutinized at the layer-thickness scale using nanoindentation ( Tankiewicz, 2018 ).

The dynamic shear modulus (DSM) is important for evaluating the response behavior of soil under dynamic loads. In-depth research on the DSM helps to accurately predict the behavior of soil under different dynamic conditions, providing effective guidance and an evaluation basis for earthquake engineering, infrastructure construction, and soil–structure interaction. Geotechnical assessments based on established knowledge of non-interbedded soil can lead to confusion in practice. For instance, conventional liquefaction assessment procedures predicted site liquefaction that did not occur during earthquake events ( Beyzaei et al., 2020 ; Ecemis, 2021 ). Challenges have also emerged in predicting the stability and consolidation behavior of embankments ( Ladd and Foott, 1977 ) and estimating the side resistance of drilled shafts ( Mackiewicz and Lehman-Svoboda, 2012 ). Consequently, establishing a distinct dynamic evaluation procedure for interbedded soil becomes imperative, especially for that in the floodplain of the Yangtze River, whose dynamic properties remain poorly understood.

Based on the aforementioned studies, this paper investigates the characteristics of the DSM ( G ) of interbedded soil in the floodplain of the Yangtze River under different values of the initial effective confining pressure ( σʹ m ), consolidation ratio ( k c ), and degree of consolidation ( U ). The degree of consolidation U is an important parameter for evaluating the consolidation level of the soil, which directly affects its bearing capacity and deformation characteristics. For example, in bridge construction, the degree of consolidation U of the riverbed soil needs to be evaluated first to ensure that the soil has sufficient support to carry the weight of the bridge. Similarly, in high-fill projects, settlement can be effectively controlled by adjusting the degree of consolidation of the fill, ensuring a smooth road. Considering U as an influencing factor reflects the influence of stress history on the dynamic properties of this interbedded soil while also effectively shortening the time used for consolidation; this improves the testing efficiency, which is very important given the increasing demand for testing the dynamic parameters of soil.

Test program and procedures

Test material.

Undisturbed river-phase floodplain interbedded soil (FIS) is the main stratum for urban subsurface space development ( Wan et al., 2022b ). The soil samples tested in the study reported herein were obtained from the Central Business District of Jiangbei New District in Nanjing, China, as shown in Figure 1 . The original river-phase FIS is gray-brown in appearance, with obvious horizontal stratification and thin sand interlayers, which is typical of river-phase FIS. The natural water content ( w 0 ), specific gravity ( G s ), and natural wet density ( ρ 0 ) were determined according to D2216 ( ASTM, 2019 ), D854 ( ASTM, 2014 ), and D1556/D1556M ( ASTM, 2015 ), respectively. The natural density of the samples was 1.76 g/cm³, the water content was 42.06%, the specific gravity was 2.71, the initial void ratio was 1.19, and the plasticity index was 17.18.


Figure 1 . Geographical sampling locations.

Test apparatus

In this study, multi-stage strain-controlled cyclic triaxial tests were conducted using the HCA-300 multifunctional cyclic triaxial instrument developed by GCTS, which can perform conventional static/cyclic triaxial tests and synchronous coupled bidirectional vibration loading torsion shear tests. The HCA-300 cyclic triaxial instrument and its main technical indicators are shown in Figure 2 . The system hardware comprises a test platform, a pressure control cabinet, a computer, an acquisition system, a hydraulic source, and a vacuum pump; the system software comprises a digital servo controller and the GCTS CATS software. The HCA-300 cyclic triaxial instrument uses electro-hydraulic servo closed-loop control, which allows direct testing of the axial stress σ d and axial strain ε of specimens. During cyclic loading, the σ d values of a specimen are picked up by the built-in small-range axial force transducer, and the ε values are picked up by the small-range LVDT displacement transducer. See ( Chen et al., 2019 ) for more details about the HCA-300 system.


Figure 2 . Test apparatus and main technical indicators.

For a cylindrical specimen, the shear stress τ and shear strain γ in the 45° plane of the specimen during loading are calculated as Equation 1 :

where ν is Poisson’s ratio; for the in situ interbedded soil of the Yangtze River floodplain, we take ν = 0.42 ( Zhuang et al., 2020 ). In the equivalent linear dynamic viscoelastic constitutive model, the shear modulus is calculated as Equation 2 (D3999/D3999M-11; ASTM, 2013 )

and the typical strain, stress, and strain–stress time-dependent curves are shown in Figure 3 .
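The bodies of Equations 1 and 2 are not reproduced in this extract. Under the usual equivalent-linear reduction of cyclic triaxial data, the conversions typically take the form below; this is an assumption based on standard practice, not necessarily the article's exact expressions.

```latex
% Assumed standard cyclic-triaxial conversions (not necessarily the article's
% exact Equations 1 and 2):
\tau = \frac{\sigma_d}{2}, \qquad
\gamma = (1 + \nu)\,\varepsilon
\qquad \text{(Eq.~1, assumed form)}

G = \frac{\tau}{\gamma} = \frac{\sigma_d}{2\,(1 + \nu)\,\varepsilon}
\qquad \text{(Eq.~2, assumed form)}
```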


Figure 3 . Measured time histories of test results for a typical specimen.

Test program and method

To investigate how G of FIS varies with consolidation degree U , five sets of tests with different U were performed according to the test program given in Table 1 . The test steps were as follows. 1) Make the in situ FIS sample into a solid cylindrical specimen that has a standard size of 50 mm × 100 mm and is saturated. 2) Install the specimen into the base of the instrument, connect the top with the displacement transducer and the driving device, and close the pressure chamber. 3) After installation and according to the test conditions of the specimen, apply confining pressure to achieve different degrees of consolidation, with consolidation being completed after reaching the pre-set U . 4) Subject the specimen to strain-controlled cyclic loading, with the amplitude of axial strain increasing in steps from 1 × 10 −5 to 1 × 10 −2 . Each level of cyclic loading comprised five cycles at a frequency of 0.5 Hz.


Table 1 . Dynamic triaxial test scheme for controlling degree of consolidation.

Control methods for consolidation degree

Figure 4 shows the time histories of the axial displacement of completely consolidated FIS under different consolidation conditions. These axial displacement curves are then normalized to obtain the consolidation degree as a function of time ( t ), as shown in Figure 5 . As can be seen, the development trend of U is insensitive to the initial stress conditions, and U can be expressed as a function of the consolidation time t . Referring to the universal expression for the average consolidation degree of a soft clay foundation ( Martin and Seed, 1983 ), the relationship between consolidation degree and time is fitted to obtain an exponential function of the form given in Equation 3

where k and b are the shape coefficients of the curves. This is how U was controlled in the present study. The reference intervals for the consolidation time for different values of U are given in Figure 5 , and as can be seen, the consolidation time used for the specimens is greatly reduced as U decreases.
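Since the body of Equation 3 is not reproduced in this extract, the sketch below simply assumes a generic exponential form U(t) = 1 − b·exp(−k·t), with k and b as shape coefficients, and shows how such a curve could be fitted and then inverted to obtain a reference consolidation time for a target U. The functional form and the synthetic data are assumptions for illustration only.

```python
# Illustrative fit of a consolidation-degree curve U(t). A generic exponential
# form U(t) = 1 - b * exp(-k * t) is assumed here (the article's Equation 3 is
# not reproduced), and the "measurements" are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def consolidation_degree(t, k, b):
    return 1.0 - b * np.exp(-k * t)

rng = np.random.default_rng(3)
t_minutes = np.array([5, 15, 30, 60, 120, 240, 480, 1440], dtype=float)
u_measured = (consolidation_degree(t_minutes, 0.004, 0.95)
              + rng.normal(scale=0.02, size=t_minutes.size))

(k_fit, b_fit), _ = curve_fit(consolidation_degree, t_minutes, u_measured,
                              p0=(0.01, 1.0))
print(f"k = {k_fit:.4f}, b = {b_fit:.3f}")

# Invert the fitted curve to estimate the time needed to reach a target U,
# e.g. the reference time for U = 0.6:
target_u = 0.6
t_target = -np.log((1.0 - target_u) / b_fit) / k_fit
print(f"time to reach U = {target_u}: {t_target:.0f} min")
```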


Figure 4 . Consolidation displacement curves of Yangtze River floodplain interbedded soil (FIS) under different consolidation conditions.


Figure 5 . Consolidation curves and fitting for Yangtze River FIS under different initial stress conditions.

Test results and analysis

Effect of initial consolidation conditions on dynamic shear modulus.

Figure 6 shows the distribution of the DSM G of the undisturbed FIS over a wide range of shear strain γ under different values of U . As can be seen, γ plays an important role in the development of G : for given U , each specimen exhibits decaying G with increasing γ. Also, for given γ , G increases with increasing U , which indicates that the larger the value of U , the greater the cementation degree between soil particles and the more stable the particle fabric, which contribute to greater stiffness for resisting shear deformation. In addition, comparing among the subplots in Figure 6 reveals the interesting phenomenon that increasing σʹ m or k c results in G increasing more for a given increase in U .


Figure 6 . Test results for Yangtze River FIS with different degrees of consolidation. (A) σʹ m = 50 kPa and k c = 1.0; (B) σʹ m = 100 kPa and k c = 1.0; (C) σʹ m = 150 kPa and k c = 1.0; (D) σʹ m = 100 kPa and k c = 1.2; (E) σʹ m = 150 kPa and k c = 1.4.

Effect of initial consolidation conditions on maximum dynamic shear modulus

The maximum DSM G max is the DSM when the strain amplitude is less than 10 −5 , in which case the soil is considered to be in a purely elastic state; in this study, G max was therefore obtained by extrapolation at 0.0001% strain ( Hardin and Drnevich, 1972 ). Figure 7 shows the variation of G max with U in the FIS with different values of σʹ m and k c . It is obvious that G max is closely related to U : it increases with increasing U , and the data suggest an exponential correlation between G max and U . Also, the growth rate of G max with increasing U is insensitive to σʹ m or k c . The proposed relationship between U and G max for FIS under different initial stress conditions is described as Equation 4

where A 1 and k are fitting parameters. The coefficient A 1 equals G max when U = 1, and the stress exponent k describes how U affects the growth rate of G max . Regression analysis suggests fixing k at 0.5 for different σʹ m or k c . To estimate G max empirically under different U , those values are denoted as G max,U , and G max for U = 1 (100%) is denoted as G max,100% . Then G max,U is determined as Equation 5

where μ is the G max reduction coefficient. Figure 7 gives the recommended values of μ for FIS with different values of U for engineering applications.
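The bodies of Equations 4 and 5 are likewise not reproduced here. Given the description of A 1 as the value of G max at U = 1 and of k as a fixed exponent of about 0.5, one plausible reading is a power-type relation; Equation 5 is the reduction-factor form stated in the text. Both expressions below should be treated as assumptions.

```latex
% Plausible reading of Equation 4 (assumed form) and Equation 5 (as described):
G_{\max} = A_1\, U^{\,k}, \qquad k \approx 0.5
\qquad \text{(Eq.~4, assumed form)}

G_{\max,U} = \mu\, G_{\max,100\%}
\qquad \text{(Eq.~5)}
```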


Figure 7 . Relationship between G max and degree of consolidation U of Yangtze River FIS.

Expression for and parameters of dynamic shear modulus ratio

To characterize quantitatively the variation of the nonlinear and hysteretic characteristics of river-phase floodplain soil at different U , the three-parameter Martin–Davidenkov model ( Hardin and Drnevich, 1972 ) is selected to fit the relationship between G / G max and γ as shown in Equation 6

where α , β , and γ 0 are best-fit parameters that control the shape of the G / G max – γ curve according to the soil properties; γ 0 is commonly known as the reference shear strain and is generally taken to be the value of γ at which G / G max = 0.5.

As shown in Figure 8 , U has a significant effect on G / G max . For a given strain amplitude, G / G max of the FIS increases with increasing U ; i.e., the Yangtze River FIS presents weaker nonlinear characteristics as U increases. Also, for given U , the G / G max – γ curve rises with increasing σʹ m or k c .


Figure 8 . Relationship between G / G max and γ for Yangtze River FIS with different degrees of consolidation. (A) σʹ m = 50 kPa and k c = 1.0; (B) σʹ m = 100 kPa and k c = 1.0; (C) σʹ m = 150 kPa and k c = 1.0; (D) σʹ m = 100 kPa and k c = 1.2; (E) σʹ m = 150 kPa and k c = 1.4.

Table 2 lists the values of the fitting parameters α and β for all the tested soil samples. The values of α range from 0.998 to 1.010 and those of β range from 0.445 to 0.482, which indicates that U has little effect on α and β . Therefore, for the Yangtze River FIS, α and β can be regarded as 1.0 and 0.47, respectively. Figure 9 shows the variation of γ 0 with U under different σʹ m and k c . As can be seen, for given σʹ m and k c , γ 0 increases with increasing U , and for given U , γ 0 increases with increasing σʹ m and k c . To quantify the relationship between U and γ 0 , the latter is normalized by considering the effects of σʹ m and k c and is expressed as Equation 7

where n and m are related to the soil properties and are taken herein as n = 0.5 and m = 0.84, and P a = 100 kPa is the standard atmospheric pressure. Figure 10 shows the relationship between γ ʹ 0 and U , which follows a power law of the form shown in Equation 8

where p and q are the fitting parameters, with p = 0.0436 and q = 0.7468.
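Equations 6–8 are not reproduced in this extract. The sketch below assumes the standard three-parameter Martin–Davidenkov degradation form and plugs in the fitted constants reported in the text (α ≈ 1.0, β ≈ 0.47, p = 0.0436, q = 0.7468); the back-conversion from the normalized reference strain γʹ 0 to γ 0 via σʹ m , k c , n, m, and P a (Equation 7) is omitted, so the output illustrates the curve shape only and is not a prediction for a specific stress state.

```python
# Illustrative sketch only: the standard Martin-Davidenkov form is assumed for
# G/Gmax (the article's exact Equations 6-8 are not reproduced), with the
# fitted constants reported in the text. The normalization of the reference
# strain by confining pressure and consolidation ratio (Equation 7) is omitted.
import numpy as np

ALPHA, BETA = 1.0, 0.47      # shape parameters reported for the tested FIS
P, Q = 0.0436, 0.7468        # power-law link between U and the reference strain

def reference_strain(u):
    """Normalized reference strain as a power law of consolidation degree U."""
    return P * u**Q

def g_ratio(gamma, gamma0):
    """Assumed Martin-Davidenkov modulus-degradation curve G/Gmax(gamma)."""
    x = (gamma / gamma0) ** (2.0 * BETA)
    return 1.0 - (x / (1.0 + x)) ** ALPHA

gamma = np.logspace(-6, -2, 5)       # shear strain amplitudes
for u in (0.4, 0.7, 1.0):            # degrees of consolidation
    print(u, np.round(g_ratio(gamma, reference_strain(u)), 3))
# A larger U gives a larger reference strain, so G/Gmax stays closer to 1 at a
# given strain amplitude - consistent with the trend described in the text.
```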


Table 2 . Parameters of Davidenkov model for predicting G / G max – γ curve.


Figure 9 . Relationships between consolidation degree U and reference shear strain γ 0 under different values of σʹ m and k c .


Figure 10 . Relationship between U and normalized reference shear strain γ ʹ 0 .

Figure 11 compares G / G max of the present FIS with that of silty clay from the Yangtze estuary ( σʹ m = 50∼200 kPa), as well as recommended values given by Yuan et al. ( Shun et al., 2004 ) and normalized values given by the China Earthquake Administration ( GB, 1999 ). As can be seen, the distribution range of the G / G max – γ curves for the FIS is within the statistical range of the recommended and standardized values for clayey soil. Compared with the Yangtze estuary silty clay, the FIS decays less rapidly and has larger G / G max at large strain, which indicates that the nonlinear properties of the FIS in the Yangtze River are weaker than those of the silty clay at the Yangtze estuary. Overall, the empirical equations given herein can be used for predictions regarding the FIS in the Yangtze River.


Figure 11 . G / G max – γ curves for different clay soils.

Conclusion

In this study, the characteristics of the DSM G of river-phase FIS with differing consolidation degree U were investigated, and the variations of the maximum DSM G max and the normalized DSM ratio G / G max with different U were analyzed. The main conclusions are as follows.

Under different values of the consolidation ratio, the development of U of Yangtze River FIS with consolidation time is almost the same. A quantitative method for relating U and the consolidation time was established and gives the corresponding reference time.

The Yangtze River FIS with different U shows nonlinear characteristics of “low shear modulus” in the strain range of 10 −5 to 10 −4 . As the shear strain γ increases, the DSM G decreases, and at different strain levels, G increases with increasing U .

For Yangtze River FIS, G max is affected by U and increases gradually with increasing U . Taking the G max value at a consolidation degree of 100% as a reference, the reference range for the reduction coefficient μ of G max of soft soil in the Yangtze River floodplain corresponding to different consolidation degrees is provided.

Finally, the G / G max – γ curves of Yangtze River FIS show a “low to high” change with increasing U , and the nonlinear characteristics of the soil weaken gradually. Compared with conventional clayey soils, the FIS has more-obvious nonlinear characteristics.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

HL: Resources, Visualization, Writing–original draft. ZH: Writing–review and editing. DC: Investigation, Resources, Validation, Visualization, Writing–review and editing. RZ: Data curation, Investigation, Writing–review and editing. QW: Project administration, Supervision, Writing–original draft.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by the National Natural Science Foundation of China (52008206).

Conflict of interest

Authors LH and CD were employed by Zhejiang Institute of Communications Co., Ltd. Author HZ was employed by China Nuclear Power Engineering Co., Ltd.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

ASTM (2013). Standard test methods for the determination of the modulus and damping properties of soils using the cyclic triaxial apparatus . West Conshohocken, PA: ASTM . ASTM D3999/D3999M-11.


ASTM (2014). Standard test methods for specific gravity of soil solids by water pycnometer . West Conshohocken, PA: ASTM . ASTM D854-14.

ASTM (2015). “Standard test method for density and unit weight of soil in place by sand-cone method,” in Astm D1556/D1556M-15E0 (West Conshohocken, PA: ASTM ).

ASTM (2019). Standard test Method for laboratory Determination of water (moisture) Content of Soil and Rock by mass. ASTM d2216-19 . West Conshohocken, PA: ASTM .

Beyzaei, C. Z., Bray, J. D., Ballegooy, S. V., Misko, C., and Sarah, B. (2018). Depositional environment effects on observed liquefaction performance in silt swamps during the canterbury earthquake sequence. Soil Dyn. Earthq. Eng. 107, 303–321. doi:10.1016/j.soildyn.2018.01.035



Facility for Rare Isotope Beams

At Michigan State University, the user community focuses on the future of the field and on fostering a diverse and equitable workforce.

The 2024 Low Energy Community Meeting (LECM) took place 7-9 August on the campus of the University of Tennessee, Knoxville. LECM brings together members of the worldwide low-energy nuclear physics community to discuss future plans, initiatives, and instruments. Over the three days, 250 participants from 65 institutions and eight countries attended the meeting.

The LECM organizing committee includes representatives from FRIB, Argonne National Laboratory (ANL), the Association for Research at University Nuclear Accelerators (ARUNA), the Argonne Tandem Linac Accelerator System (ATLAS), the Center for Nuclear Astrophysics across Messengers (CeNAM), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), Oak Ridge National Laboratory (ORNL), the FRIB Theory Alliance (FRIB-TA), and the FRIB Users Organization Executive Committee. FRIB hosted the meeting last year, and ORNL hosted this year. Texas A&M University will host next year.

LECM included plenary sessions, four working group sessions, and four workshops: the Modular Neutron Array (MoNA) collaboration, fission studies with rare isotope beams, early careers, and public engagement.

The LECM plenary sessions featured presentations from the FRIB Achievement Awards for Early Career Researchers; a presentation on diversity and inclusion; an update on Kairos Power's Hermes demonstration reactor; and comments from representatives of the Department of Energy and the National Science Foundation. The meeting also highlighted the status of the major user facilities: FRIB, ATLAS, and ARUNA.

The 2024 LECM affirmation and resolutions stated:

Affirmation: Our community affirms in the strongest possible terms its commitment to foster a diverse and equitable workforce and to support and respect diversity in all its forms. Individually and collectively we commit to ensuring an inclusive and accessible environment for all and taking action if these values are not being upheld.

Resolution 1: The highest priority for low-energy nuclear physics and nuclear astrophysics research is to maintain U.S. world leadership in nuclear science by capitalizing on recent investments. To this end, we strongly support: 

  • Robust theoretical and experimental research programs and the development and retention of a diverse and equitable workforce; 
  • The optimal operation of the FRIB and ATLAS national user facilities;
  • Investments in the ARUNA facilities, and key national laboratory facilities; 
  • The FRIB Theory Alliance and all its initiatives.

All are critical to fully realize the scientific potential of the field and foster future breakthroughs.

Resolution 2: The science case for an energy upgrade of FRIB to 400 MeV/u is compelling. FRIB400 greatly expands the opportunities in the field. We strongly endorse starting the upgrade during the upcoming Long Range Plan period to harness its significant discovery potential. We support instrument developments, including the FDS and ISLA, now that GRETA and HRS are underway. These community devices are important to realize the full scope of scientific opportunities.

Resolution 3: Computing is essential to advance all fields of nuclear science. We strongly support enhancing opportunities in computational nuclear science to accelerate discoveries and maintain U.S. leadership by: 

  • Strengthening programs and partnerships to ensure the efficient utilization of new high-performance computing (HPC) hardware and new capabilities and approaches offered by artificial intelligence/machine learning (AI/ML) and quantum computing (QC); 
  • Establishing programs that support the education, training of, and professional pathways for a diverse and multidisciplinary workforce with cross-disciplinary collaborations in HPC, AI/ML, and QC; 
  • Expanding access to dedicated hardware and resources for HPC and new emerging computational technologies, as well as capacity computing essential for many research efforts.

Resolution 4: Research centers are important for low-energy nuclear science. They facilitate strong national and international communications and collaborations across disciplines and across theory and experiment. Interdisciplinary centers are particularly essential for nuclear astrophysics to seize new scientific opportunities in this area. We strongly endorse a nuclear astrophysics center that builds on the success of JINA, fulfills this vital role, and propels innovation in the multi-messenger era.

Resolution 5: Nuclear data play an essential role in all facets of nuclear science. Access to reliable, complete and up-to-date nuclear structure and reaction data is crucial for the fundamental nuclear physics research enterprise, as well as for the successes of applied missions in the areas of defense and security, nuclear energy, space exploration, isotope production, and medical applications. It is thus imperative to maintain an effective US role in the stewardship of nuclear data. 

  • We endorse support for the compilation, evaluation, dissemination and preservation of nuclear data and efforts to build a diverse, equitable and inclusive workforce that maintains reliable and up-to-date nuclear databases through national and international partnerships. 
  • We recommend prioritizing opportunities that enhance the prompt availability and quality of nuclear data and its utility for propelling scientific progress in nuclear structure, reactions and astrophysics and other fundamental physics research programs.
  • We endorse identifying interagency-supported crosscutting opportunities for nuclear data with other programs, that enrich the utility of nuclear data in both science and society.

The community also presented a statement on isotopes and applications:

Applied Nuclear Science offers many tangible benefits to the United States and to the world. The Low Energy Nuclear Physics Community recognizes the societal importance of applied research, and strongly encourages support for this exciting and growing field with funding and beam time allocations that enable critical discovery science that will improve our lives and make us all safer.

Rare isotopes are necessary for research and innovation and must be available.  


Using a flipped teaching strategy in undergraduate nursing education: students’ perceptions and performance

  • Shaherah Yousef Andargeery 1 ,
  • Hibah Abdulrahim Bahri 2 ,
  • Rania Ali Alhalwani 1 ,
  • Shorok Hamed Alahmedi 1 &
  • Waad Hasan Ali 1  

BMC Medical Education, volume 24, Article number: 926 (2024)


Flipped teaching is an interactive learning strategy that actively engages students in the learning process. Students take an active role, independently preparing before class so that class time can be dedicated to discussion and learning activities. Flipped teaching is therefore believed to promote students' critical thinking, communication, application of knowledge in real-life situations, and lifelong learning. The aim of this study was to describe students' perceptions of flipped teaching as an innovative learning strategy and to assess whether academic performance differed between students taught with a traditional teaching strategy and those who received the flipped teaching intervention.

A quasi-experimental design with intervention and control groups. A purposive sampling technique of undergraduate nursing students was used.

A total of 355 students participated across both groups, and 70 of the 182 students in the intervention group completed the survey. Students perceived the flipped classroom as a moderately effective teaching strategy. The results revealed a statistically significant difference in mean scores between the intervention group (M = 83.34, SD = 9.81) and the control group (M = 75.57, SD = 9.82).

Flipped teaching proved effective in improving students' learning experience and academic performance. Students also perceived flipped teaching positively, as it allowed them to develop essential nursing competencies. Future studies should measure the influence of flipped teaching on students' ability to acquire nursing competencies such as critical thinking and clinical reasoning.


The successful outcome of individualized nursing care depends on effective communication between nurses and patients. Therapeutic communication consists of an exchange of verbal and non-verbal cues. It is a process in which the professional nurse uses specific techniques to help patients better understand their conditions and to promote patients' open communication of their thoughts and feelings in an environment of mutual respect and acceptance [ 1 ]. Effective educational preparation, continuing practice, and self-reflection about one's communication skills are all necessary for becoming proficient in therapeutic communication. Teaching therapeutic communication to nursing students covers the principles of verbal and non-verbal communication, which can be emphasized through classroom presentation, discussion, case studies, and role-play. It also helps them develop the ability to communicate effectively with patients, families, and other health care professionals. Nursing students should be able to think critically: to conceptualize, apply, analyze, synthesize, and evaluate information generated by observation, experience, reflection, reasoning, and communication. A traditional teaching strategy can struggle to meet these requirements [ 2 ]. Therefore, nurse educators should adopt teaching methods that help students learn and participate in their own education.

The "flipped classroom" is a pedagogical approach that has gained popularity worldwide to foster active learning. Active learning is defined as instructional strategies that actively engage students in their learning, requiring them to do meaningful learning activities and reflect on their actions [ 3 ]. Flipped teaching promotes critical thinking and the application of information learned outside the classroom to real-world situations and problem-solving within the classroom. Educators deliver lecture content through technologies such as video, audio files, PowerPoint, or other media, so students can study those materials on their own at home before attending class. As a result, discussion and debate about the materials take place during lecture time. The main principles of flipped teaching include increasing interaction and communication between students and educators, allocating more time for content mastery and understanding, granting opportunities for closing gaps and development, creating opportunities for active engagement, and providing immediate feedback [ 4 , 5 ]. This teaching/learning methodology is supported by constructivist learning theory. Constructivism is frequently described as a "problem-solving approach to learning"; it requires a shift in the nurse educator's epistemic assumptions about the teaching-learning process. Constructivism requires nursing educators to take on the role of a learning facilitator who encourages collaboration and teamwork and guides students in building their knowledge. Its underlying assumptions include the idea that learning occurs as a result of social interaction in which the student actively creates their own knowledge, while prior experiences serve as the foundation for the learning process. The flipped classroom reflects this approach, which integrates student-centered learning [ 6 ].

The flipped teaching approach has students learn the material before lectures so that classroom time can be used for cooperative learning. The literature discussed here comprises studies and case studies from primary school through graduate education, and it indicates that students see value in this pedagogical approach. Most of the studies found that flipped teaching was associated with better understanding of the material, higher academic achievement and performance, and potentially improved psychosocial factors associated with learning (self-esteem, self-efficacy). Interestingly, one article pointed out that non-didactic material used in flipped teaching led to an increase in performance, whereas didactic material did not.

According to Jordan et al. [ 7 ], flipped teaching is a methodology that developed in response to advancements and changes in society, pedagogical approaches, and the rapid growth of technology; it evolved from the peer-instruction and just-in-time teaching approaches. Jordan and colleagues [ 7 ] state that independent learning happens outside the classroom prior to the lesson through instructional materials, while classroom time is maximized to foster an environment of collaborative learning. Qutob [ 8 ] states that flipped teaching enhances student learning and engagement and promotes greater independence for students.

Jordan et al. [ 7 ] studied the use of flipped teaching with first- and fourth-year students in discrete mathematics and in graphs, models, and applications. Across all the classes studied (pilot; graphs, models, and applications; practices; computer and business administration), students preferred flipped teaching to traditional teaching. According to Jordan et al. [ 7 ], the quality of the materials and exercises and the perceived difficulty of the course and material are important to student satisfaction with this method. Additionally, interactions with teachers and collaborative learning were perceived positively. Likewise, Nguyen et al. [ 9 ] found that students perceive flipped teaching favorably, especially students who understand that the method involves preparation and interaction and how these affect outcomes. Vazquez and Chiang [ 10 ] discuss the lessons learned from observing two large Principles of Economics classes at the University of Illinois, each with 900 students. They found, first, that students preferred watching videos over reading the textbook. Second, students were better prepared after watching pre-lecture videos than after reading the textbook beforehand. Third, pre-lecture work should take approximately 15 to 20 minutes ahead of each in-class session. Fourth, flipped teaching is a costly endeavor. Finally, having students watch videos before lectures reduced the class time spent covering the material, so students spent more time engaging in active learning than reviewing the material.

Qutob [ 8 ] studied the effects of flipped teaching using two hematology courses, one delivered with traditional teaching and the other with flipped teaching. Students in the flipped course not only performed better on academic tasks but also had more knowledge and understanding of the material than those in the traditional-format class. Additionally, students in the flipped classroom found this style of learning more beneficial than traditional teaching [ 8 ]. Moreover, Florence and Kolski [ 11 ] found an improvement in high school students' writing post-intervention; students were also more engaged with the material and had a positive perception of the flipped model. Bahadur and Akhtar [ 12 ] conducted a meta-analysis of twelve research articles on flipped teaching; the studies demonstrated that students taught in flipped classrooms performed better academically and were more interactive and engaged with the material than students taught through traditional methods. Galindo-Dominguez [ 13 ] conducted a systematic review of 61 studies and found evidence for the effectiveness of this approach compared with other pedagogical approaches with regard to academic achievement, self-efficacy, motivation, engagement, and cooperativeness. Webb et al. [ 14 ] studied 127 students taking microeconomics and found that the type of flipped material (didactic vs. non-didactic) influenced students' improvement: performance improved for students who attended flipped classes using non-didactic pre-class material, whereas flipped classes that used didactic pre-class materials, which are akin to traditional lectures, showed no improvement.

In the context of nursing education, the flipped teaching strategy has demonstrated promising results in enhancing student motivation, performance, critical thinking skills, and learning quality. Flipped teaching classrooms have been associated with high ratings in teaching evaluations, increased course satisfaction, improved critical thinking skills [ 15 ], improved exam results and learning quality [ 16 ], and high levels of personal, teaching, and pedagogical readiness [ 17 ]. Another study showed that student performance motivation scores, especially extrinsic goal orientation, control beliefs, and self-efficacy for learning and performance, were significantly higher in the flipped teaching classroom than in the traditional classroom [ 16 ].

Despite these important findings, few studies of the flipped teaching strategy have been published in Saudi Arabia, particularly among nursing students. Implementing the flipped teaching strategy in a therapeutic communication course was therefore expected to improve academic performance and retention of knowledge. The flipped teaching method fits well with the goals of a therapeutic communication course, as both focus on active learning and student engagement. The approach is well matched to such a course because it allows students to apply and practice the communication techniques and strategies they have learned outside of class from the flipped teaching materials, freeing up class time for interactive and experiential activities. The flipped teaching method provides opportunities for students to apply effective interpersonal communication skills in class and gives instructors more time to observe students practicing therapeutic communication techniques through role-play, group discussions, and case studies. It also allows instructors to provide individualized feedback and real-time guidance to help students improve their interpersonal communication skills.

The current study aims to examine students' perceptions of a teaching innovation based on the flipped teaching strategy in the therapeutic communication course, and to compare the academic performance of students who participated in a traditional teaching strategy with that of students who participated in the flipped teaching intervention.

Students who participated in the intervention group perceived a high level of effectiveness of the flipped teaching classroom as a teaching/learning strategy.

There is a significant difference in mean academic performance scores between students who participate in a traditional teaching strategy (control group) and those who participate in the flipped teaching classroom (intervention group).

Design of the study

A quantitative, quasi-experimental design was used in this study. The study implemented a flipped teaching strategy (intervention) to examine the effectiveness of flipped teaching among participants in the intervention group and to test for a significant difference in mean performance scores between the intervention and control groups.

The study was conducted in the College of Nursing at a university in Saudi Arabia.

A purposive sampling technique was used in this study. This technique allows the researcher to target specific participants who have characteristics that are most relevant and informative for addressing the research questions. The advantages of purposive sampling lie in gathering in-depth, detailed, and contextual data from the most appropriate sources and ensuring that the study captures a more comprehensive understanding of the concept of interest by considering different viewpoints [ 18 ]. Participants were eligible if they were (1) enrolled in an undergraduate nursing program (Nursing or Midwifery) in the College of Nursing; (2) enrolled in the Therapeutic Communication course; and (3) at least 18 years old. A participant's data were excluded if 50% of the responses were incomplete. The sample size was calculated using G*Power; 152 participants were required to reach a 95% confidence level with a 5% margin of error.
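For readers who want to see the arithmetic behind such a target, the sketch below illustrates a margin-of-error sample-size calculation with an optional finite-population correction. It is a minimal illustration under stated assumptions: the article reports using G*Power and does not give its exact inputs, so the `required_sample_size` helper, its default values, and the assumed accessible population are hypothetical and will not necessarily reproduce the reported 152.

```python
# Illustrative margin-of-error sample-size calculation (Cochran's formula) with an
# optional finite-population correction. Hypothetical helper: the article used
# G*Power and reports 152 participants for 95% confidence and a 5% margin of error,
# but does not state its exact inputs, so the defaults below are assumptions.
import math

def required_sample_size(z=1.96, margin_of_error=0.05, p=0.5, population=None):
    """Return the minimum sample size; `population` applies a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # ~384 with these defaults
    if population is None:
        return math.ceil(n0)
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite-population correction

print(required_sample_size())                 # infinite-population estimate
print(required_sample_size(population=355))   # corrected for an assumed accessible population of 355
```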

Measurement

Demographic data, including age and GPA, were collected from all participants. Educational characteristics related to flipped teaching were collected from participants in the intervention group, including level of English proficiency, program enrollment, previous course(s) that used a flipped teaching strategy, time spent each week preparing for lectures, time spent preparing for course exams, and whether they recommended applying flipped teaching in other classes.

Students' perception of the effectiveness of the flipped teaching strategy was measured with a survey focused on the effectiveness of flipped teaching, collected only from participants in the intervention group. The survey comprises 14 items on a 5-point Likert-type scale (5 = strongly agree, 4 = agree, 3 = neutral, 2 = disagree, 1 = strongly disagree). Item scores were summed; a high score indicates high perceived effectiveness of flipped teaching. The survey was developed by Neeli et al. [ 19 ], and the author was contacted to obtain permission to use it. The reliability of the scale, tested using Cronbach's alpha, was 0.91, indicating excellent reliability.
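As an illustration of the reliability check described above, here is a minimal sketch of Cronbach's alpha for a 14-item Likert survey. The survey responses are not public, so the demo response matrix below is randomly generated and is purely an assumption for illustration; only the formula mirrors the standard alpha computation, and the reported value of 0.91 comes from the study's own data.

```python
# Minimal sketch of Cronbach's alpha for a 14-item, 5-point Likert survey.
# The response matrix is randomly generated for illustration only (uncorrelated
# random answers give an alpha near zero); the reported 0.91 comes from the
# study's own, non-public data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(70, 14))   # 70 respondents x 14 items, scores 1-5
print(round(cronbach_alpha(demo), 2))
```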

Student academic performance was measured for both the intervention and control groups through the average cumulative scores (out of 100) on the assessment methods of the Therapeutic Communication course. The students' course grades were calculated based on the grading structure of the Ministry of Education in Saudi Arabia (The Rules and Regulations of Undergraduate Study and Examination).

Ethical approval

Institutional Review Board (IRB) approval (No. 22-0860) was received before conducting the study. Participants were provided with information about the study and informed about the consent process. Informed consent to participate was obtained from all the participants in the study.

Intervention

The therapeutic communication course was taught face-to-face to students enrolled in the second year of the Bachelor of Science in Midwifery and Bachelor of Science in Nursing programs. There were eight sections of the course: two under the midwifery program and the remaining six under the nursing program. Each section met once a week for two hours over 10 weeks during the second semester of 2022. Students in all sections received the same materials, contents, and assessment methods, which constitutes the traditional teaching strategy. The course covered the following topics: introduction to communication, verbal and written communication, listening skills, non-verbal communication, the nurse-patient relationship, professional boundaries, communication styles, effective communication skills for small groups, communication through the nursing process, communication with special-needs patients, health education and principles for empowering individuals, communication through technology, and trends and issues in therapeutic communication. The course materials, objectives and learning outcomes, learning resources, and other supporting materials were uploaded to the electronic platform Blackboard (a learning management system) for all sections to facilitate students' preparation for classes. The assessment methods included a written mid-term examination, case studies, a group presentation, and a final written examination. The grading scores for each assessment method were the same for all sections.

The eight course sections were randomly assigned to the traditional teaching strategy (control group) or the flipped teaching strategy (intervention group); Figure 1 shows the random distribution of the course sections. The intervention group (n = 182) included one section of the Bachelor of Science in Midwifery program (n = 55 students) and three sections of the Bachelor of Science in Nursing program (n = 127 students). The control group (n = 173) included one section of the Bachelor of Science in Midwifery program (n = 50 students) and three sections of the Bachelor of Science in Nursing program (n = 123 students). Although randomization of individual participants was not possible, we were able to create comparison groups of participants who received the flipped teaching and traditional teaching strategies. To ensure the consistency of the information given to students and to reduce variability, the instructors met periodically and reviewed the materials together. More importantly, all students received the same topics and assessment methods, as stated in the course syllabus and described above. The instructors in all sections were required to answer students' questions, clarify points raised throughout the semester, and give constructive feedback after the evaluation of each assessment method. Students were encouraged to express their opinions freely on the issues discussed and to share their thoughts even when opinions differed.

Figure 1. Random distribution of the course sections.
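The sketch below shows one simple way to reproduce the kind of allocation depicted in Figure 1, randomly assigning the eight sections so that each arm receives one midwifery and three nursing sections. The section labels and the shuffle-based procedure are illustrative assumptions; the article does not describe the exact randomization mechanism used.

```python
# Illustrative allocation of the eight course sections (two midwifery, six nursing)
# so that each arm receives one midwifery and three nursing sections, as in Figure 1.
# Section labels and the shuffle-based procedure are assumptions; the article does
# not describe the exact randomization mechanism.
import random

random.seed(42)  # reproducible illustration

midwifery = ["MW-1", "MW-2"]
nursing = ["N-1", "N-2", "N-3", "N-4", "N-5", "N-6"]
random.shuffle(midwifery)
random.shuffle(nursing)

intervention = [midwifery[0]] + nursing[:3]   # flipped teaching arm
control = [midwifery[1]] + nursing[3:]        # traditional teaching arm
print("Intervention sections:", intervention)
print("Control sections:", control)
```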

The intervention group was taught the course contents using the flipped teaching strategy. Participants were asked to read the lectures and watch short videos from online sources before coming to class; the same materials and links were uploaded by the course instructors to the Blackboard system. During class, participants were divided into groups and given time to appraise research articles and case scenarios related to the course topics. During discussion time, each group presented its answers, and the course instructors encouraged students to share their thoughts and provided constructive feedback. Questions corresponding to the intended objectives and learning outcomes were posted during class time on the Kahoot and Nearpod platforms as a competition to enhance engagement. At the end of the semester, the flipped teaching survey was distributed electronically to students in the intervention group to collect the educational characteristics and assess their perceptions of flipped teaching.

Data collection procedure

After obtaining IRB approval, the PI sent invitation letters to potential participants through their official university email accounts. The invitation letter included a Microsoft Forms link with a description of the study, its aim, the research question, and the sample size required to conduct the study. All students gave their permission to participate, and informed consent was obtained from them (N = 355). The link also included questions on age, GPA, and approval to use their assessment scores for research purposes. The first part of data collection took place immediately after the therapeutic communication course ended: the average cumulative scores of all assessment methods (out of 100) were calculated to measure academic performance for both the intervention and control groups.

The second part of data collection was conducted after the final exam of the therapeutic communication course. A Microsoft Forms link was sent to the participants in the intervention group only (n = 182). It included questions on educational characteristics and students' perception of the effectiveness of flipped teaching. Students needed at most 10 minutes to complete the survey.

Data analysis

Data were analyzed using SPSS version 27. Descriptive analysis was used for the demographic and educational characteristics and for perceptions of the flipped teaching strategy. An independent t-test was used to compare the mean scores of the intervention and control groups and examine whether there was a statistically significant difference between them. A significance level of p < 0.05 was set.
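A minimal sketch of the comparison described above, using SciPy rather than SPSS: Levene's test for equality of variances followed by an independent-samples t-test at alpha = 0.05. The score arrays below are simulated from the reported group means and standard deviations purely for illustration; the actual analysis used the students' real course scores.

```python
# Minimal sketch of the analysis above using SciPy instead of SPSS: Levene's test
# for equality of variances, then an independent-samples t-test at alpha = 0.05.
# The score arrays are simulated from the reported group means/SDs purely for
# illustration; the study analyzed the students' actual course scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
intervention = rng.normal(loc=83.3, scale=9.8, size=182)  # simulated intervention scores
control = rng.normal(loc=75.6, scale=9.8, size=173)       # simulated control scores

lev_stat, lev_p = stats.levene(intervention, control)     # variance check
t_stat, t_p = stats.ttest_ind(intervention, control,
                              equal_var=lev_p > 0.05)     # Student's or Welch's t-test

print(f"Levene p = {lev_p:.3f}, t = {t_stat:.2f}, p = {t_p:.3g}")
```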

The total number of students enrolled in the therapeutic communication course was 355: 182 in the intervention group and 173 in the control group. The mean age of participants was 19 years (M = 19.56, SD = 1.19), and the mean GPA was 3.53 (SD = 1.43). Of those in the intervention group, only 70 of 182 students completed the survey. Table 1 describes the educational characteristics of the intervention-group participants (n = 70). Around 65% of participants reported an intermediate level of English proficiency and enrollment in the nursing program. Half of the students had taken previous courses that used a flipped teaching strategy. About one-third indicated that they spent less than 15 minutes each week preparing for lectures, while around 65% spent more than 120 minutes preparing for the course exam. Half of the students recommended applying the flipped teaching strategy in other courses. The mean performance score in the Therapeutic Communication course was 83.34 (SD = 9.81) for the intervention group and 75.57 (SD = 9.82) for the control group.

The students perceived a moderate level of effectiveness of the flipped teaching classroom as a teaching strategy (M = 3.49, SD = 0.69) (Table 2). The three highest-rated items were: the flipped classroom session develops logical thinking (M = 3.77, SD = 0.99), provides extra information (M = 3.68, SD = 1.02), and improves the application of knowledge (M = 3.64, SD = 1.04). The three lowest-rated items were: the flipped classroom session should allot more time for each topic (M = 3.11, SD = 1.07), requires a long time for preparation and conduct (M = 3.23, SD = 1.04), and reduces the amount of study time needed compared to lectures (M = 3.26, SD = 1.07).

An independent-samples t-test was used to compare the mean academic performance scores of the intervention group (n = 182) and the control group (n = 173) (Table 3). Levene's test for equality of variances (p = 0.801) indicated that equal variances could be assumed. The two-tailed significance value was p ≤ 0.001, indicating a statistically significant difference in mean academic performance between the intervention group (M = 83.34, SD = 9.81) and the control group (M = 75.57, SD = 9.82). The magnitude of the difference in means (mean difference = -7.77, 95% CI: -10.02 to -5.52) is very small (eta squared = 0.00035).

Flipped teaching is a learning strategy that engages students in the learning process, allowing them to improve their academic performance and develop cognitive skills [ 20 ]. This study investigated the effect of implementing flipped teaching as an interactive learning strategy on nursing students' performance and examined students' perceptions of integrating flipped teaching into their learning process. Flipped teaching is an interactive teaching strategy that provides an engaging learning environment with immediate feedback, allowing students to master the learning content [ 4 , 5 ]. Improvement in students' academic performance and development of learning competencies were expected outcomes. The flipped classroom approach aligns with the constructivist theory of education, which posits that students actively construct their own knowledge and understanding by engaging with content and applying it in meaningful contexts. By providing pre-class materials (e.g., videos, readings) for students to engage with independently, the flipped classroom allows them to build a foundational understanding of the concepts before class, enabling them to participate actively in discussions, problem-solving, and collaborative activities during class. By shifting the passive acquisition of knowledge to the pre-class phase and dedicating in-class time to active, collaborative, and problem-based learning, the flipped classroom creates an environment that fosters deeper understanding, the development of critical thinking and clinical reasoning skills, and the ability to apply knowledge in clinical practice [ 21 ].

Effectiveness of the flipped teaching on students’ academic performance

The influence of flipped teaching on students' academic performance was identified by evaluating students' examination scores. The results indicated that flipped teaching had a significant influence on students' academic performance (p < 0.001), with higher scores in the flipped teaching group (M = 83.34, SD = 9.81) than in the traditional classroom group (M = 75.57, SD = 9.82). These results are in line with other research on improving students' academic performance [ 7 , 8 , 9 , 10 ]. Qutob's [ 8 ] study shows that flipped teaching positively influences students' performance and that preparation for class positively influenced academic performance. The flipped classroom approach is underpinned by the principles of constructivism, which emphasize the active role of students in constructing their own understanding of concepts and ideas rather than passively receiving information [ 21 ].

In a traditional classroom, the teacher typically delivers content through lectures, and students are tasked with applying that knowledge through homework or in-class activities. However, this model often fails to engage students actively in the learning process.

In contrast, the flipped classroom requires students to prepare for class, which exposes them to the learning material before the class. During class time, students are given opportunities to interact with their classmates and instructors to discuss the learning topic, which can positively influence their academic performance later [ 7 , 9 ]. Furthermore, the flipped classroom approach aligns closely with the core tenets of constructivism, and its adherence to the constructivist 5E Instructional Model further demonstrates its grounding in this learning theory. The 5E model, which includes the phases of engagement, exploration, explanation, elaboration, and evaluation, provides a framework for facilitating the active construction of knowledge [ 22 ].

It first sparks student interest and curiosity about the concepts (engagement), then enables students to investigate and experiment with the ideas through hands-on activities and investigations (exploration). This is followed by opportunities for students to make sense of their explorations and construct their own explanations (explanation). The flipped classroom then allows students to apply their knowledge in new contexts, deepening their understanding (elaboration). Finally, the evaluation phase assesses student learning and provides feedback, completing the cycle of constructivist learning [ 22 ]. This alignment with the 5E model, along with the flipped classroom's emphasis on active learning, creates an environment that nurtures deeper understanding, the development of higher-order thinking skills, and the ability to transfer learning to real-world contexts.

In this study, one-third of the students indicated that their preparation time was less than fifteen minutes a week. According to Vazquez and Chiang [ 10 ], preparation time should be about 15 to 20 minutes for each topic. Preparation for class did not take much time but positively influenced students' academic performance. Furthermore, preparing for class allows students to develop the skills of independent learners [ 8 ]. Independence in learning develops continuous learning skills, such as lifelong learning, which is a required competency in nursing. Garcia et al. [ 22 ] found that shifting teachers' practices towards active learning approaches, such as the 5E Instructional Model, can have lasting positive impacts on students' conceptual understanding and learning.

Students’ perception of flipped teaching as a teaching strategy

Students' perception of flipped teaching as a learning strategy was examined using a survey developed by Neeli et al. [ 19 ]. Students recognized flipped teaching as an effective teaching strategy (M = 3.49, SD = 0.69) that had a positive influence on their learning processes and outcomes. Several studies have identified a positive influence of flipped teaching on students' learning process and learning outcomes [ 8 , 19 ]. Flipped teaching provides a problem-based learning environment that allows students to develop clinical reasoning, critical thinking, and a deeper understanding of the subject [ 5 , 8 , 19 , 23 ]. The flipped teaching approach introduces students to the learning materials before class; class time is then used for discussion, hands-on work, and problem-solving activities to foster a deeper understanding of the subject [ 5 ]. Consequently, flipped teaching provides a problem-based learning environment because it encourages students to engage actively in the learning process, work collaboratively with their classmates, and apply previously learned knowledge and skills to solve a problem. This result is consistent with a systematic review by Youhasan et al. [ 5 ], which found that implementing flipped teaching in undergraduate nursing education produces positive outcomes for students' learning experiences and prepares them to deal with future challenges in their academic and professional activities [ 5 ].

Implications

The results of this study indicate that flipped teaching has a significant influence on students' academic performance and that students have a positive perception of flipped teaching as an interactive learning strategy. Flipped teaching pedagogy could be integrated into nursing curricula to improve the quality of the education process and its outcomes, thereby improving student performance. Flipped teaching provides an interactive learning environment that enhances the development of essential nursing competencies, such as communication, teamwork, collaboration, lifelong learning, clinical reasoning, and critical thinking. For example, flipped teaching allows students to develop communication skills through classroom discussion and collaboration skills by working with their classmates and instructors. In this study, flipped teaching was implemented in a theoretical course (the therapeutic communication course); this interactive learning strategy could also be applied in clinical and practice settings for an effective and meaningful learning process and outcomes.

Strengths and limitations

This study reveals the effectiveness of flipped teaching on students' academic performance, using a quasi-experimental design with control and intervention groups to investigate the influence of flipped teaching in nursing education. Nevertheless, the study has limitations. One limitation is the lack of randomization of individual participants, so causal associations between the variables cannot be established. In addition, the study used a self-administered survey, which may introduce respondent bias and affect the results. Also, the study examined students' perceptions of flipped teaching as a learning strategy; the results indicated that students had a positive perception of flipped teaching as it allowed them to develop essential nursing competencies, but the study did not identify or measure those competencies. Therefore, future studies should measure the influence of flipped teaching on students' ability to acquire nursing competencies, such as critical thinking and clinical reasoning.

Flipped teaching is an interactive learning strategy that depends on students preparing the topic so that they can be interactive learners in the learning environment. An interactive learning environment improves the learning process and its outcomes. This study indicated that flipped teaching has a significant influence on students' academic performance. Students perceived flipped teaching as a learning strategy that allowed them to acquire learning skills, such as logical thinking and application of knowledge. These skills give students a meaningful learning experience and can be applied to other learning content and environments, for example, in clinical settings. Thus, we believe flipped teaching is an effective learning approach to integrate into the nursing curriculum to enhance students' learning experience.

Data availability

The datasets generated and/or analyzed during the current study are not publicly available due to data privacy but are available from the corresponding author on reasonable request.

Abbreviations

IRB: Institutional Review Board

SD: Standard deviation

p: The level of marginal significance within a statistical test

CI: Confidence Interval of the Difference

Figueiredo AR, Potra TS. Effective communication transitions in nursing care: a scoping review. Ann Med. 2019;51(sup1):201–201. https://doi.org/10.1080/07853890.2018.1560159 .


O’Rae A, Ferreira C, Hnatyshyn T, Krut B. Family nursing telesimulation: teaching therapeutic communication in an authentic way. Teach Learn Nurs. 2021;16(4):404–9. https://doi.org/10.1016/j.teln.2021.06.013 .

Thai NTT, De Wever B, Valcke M. The impact of a flipped classroom design on learning performance in higher education: looking for the best blend of lectures and guiding questions with feedback. Computers Educ. 2017;107:113–26. https://doi.org/10.1016/j.compedu.2017.01.003 .

Özbay Ö, Çınar S. Effectiveness of flipped classroom teaching models in nursing education: a systematic review. Nurse Educ Today. 2021;102:104922. https://doi.org/10.1016/j.nedt.2021.104922 .

Youhasan P, Chen Y, Lyndon M, Henning MA. Exploring the pedagogical design features of the flipped classroom in undergraduate nursing education: a systematic review. BMC Nurs. 2021;20(1):50–50. https://doi.org/10.1186/s12912-021-00555-w .

Barbour C, Schuessler JB. A preliminary framework to guide implementation of the flipped classroom method in nursing education. Nurse Educ Pract. 2019;34:36–42. https://doi.org/10.1016/j.nepr.2018.11.001 .

Jordan C, Magrenan A, Orcos L. Considerations about flip education in the teaching of advanced mathematics. Educational Sci. 2019;9(3):227.

Qutob H. Effect of flipped classroom approach in the teaching of a hematology course. PLoS ONE. 2022;17(4):1–8.

Nguyen B, Yu X, Japutra A, Chen C. Reverse teaching: exploring student perceptions of flip teaching. Act Learn High Educ. 2016;17(1):51–61.

Vazquez J, Chiang E. Flipping out! A case study on how to flip the principles of economics classroom. Int Adv Econ Res. 2015;21(4):379–90.

Florence E, Kolski T. Investigating the flipped classroom model in a high school writing course: action research to impact student writing achievement and engagement. TechTrends: Link Res Pract Improve Learn. 2021;65(6):1042–52.

Bahadur G, Akhtar Z. Effect of teaching with flipped classroom model: a meta-analysis. Adv Social Sci Educ Humanit Res. 2021;15(3):191–7.


Galindo-Dominguez H. Flipped classroom in the educational system: Trend or effective pedagogical model compared to other methodologies? J Educational Technol Soc. 2021;24(3):44–60.

Webb R, Watson D, Shepherd C, Cook S. Flipping the classroom: is it the type of flipping that adds value? Stud High Educ. 2021;46(8):1649–63.

Barranquero-Herbosa M, Abajas-Bustillo R, Ortego-Maté C. Effectiveness of flipped classroom in nursing education: a systematic review of systematic and integrative reviews. Int J Nurs Stud. 2022;135:104327. https://doi.org/10.1016/j.ijnurstu.2022.104327 .

Lelean H, Edwards F. The impact of flipped classrooms in nurse education. Waikato J Educ. 2020;25:145–57.

Youhasan P, Chen Y, Lyndon M, Henning MA. Assess the feasibility of flipped classroom pedagogy in undergraduate nursing education in Sri Lanka: a mixed-methods study. PLoS ONE. 2021;16(11):e0259003. https://doi.org/10.1371/journal.pone.0259003 .

Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE, Finkelstein J. The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inf Association: JAMIA. 2006;13(1):16–23. https://doi.org/10.1197/jamia.M1749 .

Neeli D, Prasad U, Atla B, Kukkala SSS, Konuku VBS, Mohammad A. (2019). Integrated teaching in medical education: undergraduate student’s perception.

Baloch MH, Shahid S, Saeed S, Nasir A, Mansoor S. Does the implementation of flipped classroom model improve the learning outcomes of medical college students? A single centre analysis. J Coll Physicians Surgeons-Pakistan: JCPSP. 2022;32(12):1544–7.

Robertson WH. The Constructivist flipped Classroom. J Coll Sci Teach. 2022;52(2):17–22.

Garcia I, Grau F, Valls C, Piqué N, Ruiz-Martín H. The long-term effects of introducing the 5E model of instruction on students’ conceptual learning. Int J Sci Educ. 2021;43(9):1441–58.

Chu TL, Wang J, Monrouxe L, Sung YC, Kuo CL, Ho LH, Lin YE. The effects of the flipped classroom in teaching evidence based nursing: a quasi-experimental study. PLoS ONE. 2019;14(1):e0210606.


Acknowledgements

The authors are grateful for the facilities and other support given by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R447), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R447), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author information

Authors and Affiliations

Nursing Management and Education Department, College of Nursing, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia

Shaherah Yousef Andargeery, Rania Ali Alhalwani, Shorok Hamed Alahmedi & Waad Hasan Ali

Medical-Surgical Nursing Department, College of Nursing, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia

Hibah Abdulrahim Bahri


Contributions

Conceptualization, H.B, S.Y.A, W.A.; methodology, S.Y.A., S.H.A.; validation, S.Y.A.; formal analysis, S.Y.A.; resources, H.B, S.Y.A, W.A, R. A.; data curation, S.Y.A, S.H.A.; writing—original draft preparation, R.A, H.B, S.Y.A., S.H.A, W.A; writing—review and editing, R.A, H.B, S.Y.A, S.H.A, W.A; supervision, R.A, H.B, S.Y.A, S.H.A.; project administration, R.A, S.Y.A, S.H.A.; funding acquisition, S.Y.A. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Hibah Abdulrahim Bahri .

Ethics declarations

Institutional Review Board

Institutional Review Board (IRB) of Princess Nourah bint Abdulrahman University, approval No. 22-0860.

Informed consent

Informed consent was obtained from all study participants.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article

Andargeery, S.Y., Bahri, H.A., Alhalwani, R.A. et al. Using a flipped teaching strategy in undergraduate nursing education: students’ perceptions and performance. BMC Med Educ 24 , 926 (2024). https://doi.org/10.1186/s12909-024-05749-9


Received : 26 February 2024

Accepted : 05 July 2024

Published : 26 August 2024

DOI : https://doi.org/10.1186/s12909-024-05749-9


  • Flipped teaching
  • Active learning
  • Teaching strategy
  • Nursing education
  • Undergraduate nursing education

BMC Medical Education

ISSN: 1472-6920


