
6.1 Experiment Basics

Learning Objectives

  • Explain what an experiment is and recognize examples of studies that are experiments and studies that are not experiments.
  • Explain what internal validity is and why experiments are considered to be high in internal validity.
  • Explain what external validity is and evaluate studies in terms of their external validity.
  • Distinguish between the manipulation of the independent variable and control of extraneous variables and explain the importance of each.
  • Recognize examples of confounding variables and explain how they affect the internal validity of a study.

What Is an Experiment?

As we saw earlier in the book, an experiment is a type of study designed specifically to answer the question of whether there is a causal relationship between two variables. Do changes in an independent variable cause changes in a dependent variable? Experiments have two fundamental features. The first is that the researchers manipulate, or systematically vary, the level of the independent variable. The different levels of the independent variable are called conditions. For example, in Darley and Latané’s experiment, the independent variable was the number of witnesses that participants believed to be present. The researchers manipulated this independent variable by telling participants that there were either one, two, or five other students involved in the discussion, thereby creating three conditions. The second fundamental feature of an experiment is that the researcher controls, or minimizes the variability in, variables other than the independent and dependent variable. These other variables are called extraneous variables. Darley and Latané tested all their participants in the same room, exposed them to the same emergency situation, and so on. They also randomly assigned their participants to conditions so that the three groups would be similar to each other to begin with. Notice that although the words manipulation and control have similar meanings in everyday language, researchers make a clear distinction between them. They manipulate the independent variable by systematically changing its levels and control other variables by holding them constant.
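The random-assignment step described above can be sketched in a few lines of code. This is an illustrative sketch, not Darley and Latané's actual procedure; the function name and condition labels are hypothetical. The key idea is that shuffling before dealing participants into groups makes the groups equivalent, on average, before the manipulation begins.

```python
import random

def randomly_assign(participants, conditions):
    """Shuffle the participants, then deal them into the conditions
    round-robin, so each group is similar to the others on average."""
    shuffled = list(participants)
    random.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, participant in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(participant)
    return groups

# Hypothetical example: 30 participants dealt into the three
# witness-number conditions of a Darley-and-Latané-style study
groups = randomly_assign(range(30), ["one", "two", "five"])
```

Because every participant has an equal chance of landing in any condition, extraneous participant variables (personality, motivation, and so on) are spread roughly evenly across the groups.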

Internal and External Validity

Internal Validity

Recall that the fact that two variables are statistically related does not necessarily mean that one causes the other. “Correlation does not imply causation.” For example, if it were the case that people who exercise regularly are happier than people who do not exercise regularly, this would not necessarily mean that exercising increases people’s happiness. It could mean instead that greater happiness causes people to exercise (the directionality problem) or that something like better physical health causes people to exercise and be happier (the third-variable problem).

The purpose of an experiment, however, is to show that two variables are statistically related and to do so in a way that supports the conclusion that the independent variable caused any observed differences in the dependent variable. The basic logic is this: If the researcher creates two or more highly similar conditions and then manipulates the independent variable to produce just one difference between them, then any later difference between the conditions must have been caused by the independent variable. For example, because the only difference between Darley and Latané’s conditions was the number of students that participants believed to be involved in the discussion, this must have been responsible for differences in helping between the conditions.

An empirical study is said to be high in internal validity if the way it was conducted supports the conclusion that the independent variable caused any observed differences in the dependent variable. Thus experiments are high in internal validity because the way they are conducted—with the manipulation of the independent variable and the control of extraneous variables—provides strong support for causal conclusions.

External Validity

At the same time, the way that experiments are conducted sometimes leads to a different kind of criticism. Specifically, the need to manipulate the independent variable and control extraneous variables means that experiments are often conducted under conditions that seem artificial or unlike “real life” (Stanovich, 2010). In many psychology experiments, the participants are all college undergraduates and come to a classroom or laboratory to fill out a series of paper-and-pencil questionnaires or to perform a carefully designed computerized task. Consider, for example, an experiment in which researcher Barbara Fredrickson and her colleagues had college students come to a laboratory on campus and complete a math test while wearing a swimsuit (Fredrickson, Roberts, Noll, Quinn, & Twenge, 1998). At first, this might seem silly. When will college students ever have to complete math tests in their swimsuits outside of this experiment?

The issue we are confronting is that of external validity. An empirical study is high in external validity if the way it was conducted supports generalizing the results to people and situations beyond those actually studied. As a general rule, studies are higher in external validity when the participants and the situation studied are similar to those that the researchers want to generalize to. Imagine, for example, that a group of researchers is interested in how shoppers in large grocery stores are affected by whether breakfast cereal is packaged in yellow or purple boxes. Their study would be high in external validity if they studied the decisions of ordinary people doing their weekly shopping in a real grocery store. If the shoppers bought much more cereal in purple boxes, the researchers would be fairly confident that this would be true for other shoppers in other stores. Their study would be relatively low in external validity, however, if they studied a sample of college students in a laboratory at a selective college who merely judged the appeal of various colors presented on a computer screen. If the students judged purple to be more appealing than yellow, the researchers would not be very confident that this is relevant to grocery shoppers’ cereal-buying decisions.

We should be careful, however, not to draw the blanket conclusion that experiments are low in external validity. One reason is that experiments need not seem artificial. Consider that Darley and Latané’s experiment provided a reasonably good simulation of a real emergency situation. Or consider field experiments that are conducted entirely outside the laboratory. In one such experiment, Robert Cialdini and his colleagues studied whether hotel guests choose to reuse their towels for a second day as opposed to having them washed as a way of conserving water and energy (Cialdini, 2005). These researchers manipulated the message on a card left in a large sample of hotel rooms. One version of the message emphasized showing respect for the environment, another emphasized that the hotel would donate a portion of their savings to an environmental cause, and a third emphasized that most hotel guests choose to reuse their towels. The result was that guests who received the message that most hotel guests choose to reuse their towels reused their own towels substantially more often than guests receiving either of the other two messages. Given the way they conducted their study, it seems very likely that their result would hold true for other guests in other hotels.

A second reason not to draw the blanket conclusion that experiments are low in external validity is that they are often conducted to learn about psychological processes that are likely to operate in a variety of people and situations. Let us return to the experiment by Fredrickson and colleagues. They found that the women in their study, but not the men, performed worse on the math test when they were wearing swimsuits. They argued that this was due to women’s greater tendency to objectify themselves—to think about themselves from the perspective of an outside observer—which diverts their attention away from other tasks. They argued, furthermore, that this process of self-objectification and its effect on attention is likely to operate in a variety of women and situations—even if none of them ever finds herself taking a math test in her swimsuit.

Manipulation of the Independent Variable

Again, to manipulate an independent variable means to change its level systematically so that different groups of participants are exposed to different levels of that variable, or the same group of participants is exposed to different levels at different times. For example, to see whether expressive writing affects people’s health, a researcher might instruct some participants to write about traumatic experiences and others to write about neutral experiences. The different levels of the independent variable are referred to as conditions, and researchers often give the conditions short descriptive names to make it easy to talk and write about them. In this case, the conditions might be called the “traumatic condition” and the “neutral condition.”

Notice that the manipulation of an independent variable must involve the active intervention of the researcher. Comparing groups of people who differ on the independent variable before the study begins is not the same as manipulating that variable. For example, a researcher who compares the health of people who already keep a journal with the health of people who do not keep a journal has not manipulated this variable and therefore has not conducted an experiment. This distinction is important because groups that already differ in one way at the beginning of a study are likely to differ in other ways too. For example, people who choose to keep journals might also be more conscientious, more introverted, or less stressed than people who do not. Therefore, any observed difference between the two groups in terms of their health might have been caused by whether or not they keep a journal, or it might have been caused by any of the other differences between people who do and do not keep journals. Thus the active manipulation of the independent variable is crucial for eliminating potential alternative explanations for the results.

Of course, there are many situations in which the independent variable cannot be manipulated for practical or ethical reasons and therefore an experiment is not possible. For example, whether or not people have a significant early illness experience cannot be manipulated, making it impossible to do an experiment on the effect of early illness experiences on the development of hypochondriasis. This does not mean it is impossible to study the relationship between early illness experiences and hypochondriasis—only that it must be done using nonexperimental approaches. We will discuss this in detail later in the book.

In many experiments, the independent variable is a construct that can only be manipulated indirectly. For example, a researcher might try to manipulate participants’ stress levels indirectly by telling some of them that they have five minutes to prepare a short speech that they will then have to give to an audience of other participants. In such situations, researchers often include a manipulation check in their procedure. A manipulation check is a separate measure of the construct the researcher is trying to manipulate. For example, researchers trying to manipulate participants’ stress levels might give them a paper-and-pencil stress questionnaire or take their blood pressure—perhaps right after the manipulation or at the end of the procedure—to verify that they successfully manipulated this variable.

Control of Extraneous Variables

An extraneous variable is anything that varies in the context of a study other than the independent and dependent variables. In an experiment on the effect of expressive writing on health, for example, extraneous variables would include participant variables (individual differences) such as their writing ability, their diet, and their shoe size. They would also include situation or task variables such as the time of day when participants write, whether they write by hand or on a computer, and the weather. Extraneous variables pose a problem because many of them are likely to have some effect on the dependent variable. For example, participants’ health will be affected by many things other than whether or not they engage in expressive writing. This can make it difficult to separate the effect of the independent variable from the effects of the extraneous variables, which is why it is important to control extraneous variables by holding them constant.

Extraneous Variables as “Noise”

Extraneous variables make it difficult to detect the effect of the independent variable in two ways. One is by adding variability or “noise” to the data. Imagine a simple experiment on the effect of mood (happy vs. sad) on the number of happy childhood events people are able to recall. Participants are put into a negative or positive mood (by showing them a happy or sad video clip) and then asked to recall as many happy childhood events as they can. The two leftmost columns of Table 6.1 “Hypothetical Noiseless Data and Realistic Noisy Data” show what the data might look like if there were no extraneous variables and the number of happy childhood events participants recalled was affected only by their moods. Every participant in the happy mood condition recalled exactly four happy childhood events, and every participant in the sad mood condition recalled exactly three. The effect of mood here is quite obvious. In reality, however, the data would probably look more like those in the two rightmost columns of Table 6.1. Even in the happy mood condition, some participants would recall fewer happy memories because they have fewer to draw on, use less effective recall strategies, or are less motivated. And even in the sad mood condition, some participants would recall more happy childhood memories because they have more happy memories to draw on, they use more effective recall strategies, or they are more motivated. Although the mean difference between the two groups is the same as in the idealized data, this difference is much less obvious in the context of the greater variability in the data. Thus one reason researchers try to control extraneous variables is so their data look more like the idealized data in Table 6.1, which makes the effect of the independent variable easier to detect (although real data never look quite that good).

Table 6.1 Hypothetical Noiseless Data and Realistic Noisy Data

Idealized “noiseless” data Realistic “noisy” data
Happy mood Sad mood Happy mood Sad mood
4 3 3 1
4 3 6 3
4 3 2 4
4 3 4 0
4 3 5 5
4 3 2 7
4 3 3 2
4 3 1 5
4 3 6 1
4 3 8 2
M = 4 M = 3 M = 4 M = 3
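The contrast in Table 6.1 can be reproduced with a small simulation. The numbers below are purely illustrative, assuming a true mean difference of one recalled event between conditions and normally distributed person-to-person noise; the function names are hypothetical.

```python
import random
import statistics

random.seed(0)  # reproducible illustration

def simulate_group(true_mean, noise_sd, n=10):
    """Each participant's recall score is the true condition mean plus
    individual 'noise' (memory store, recall strategy, motivation),
    rounded to a whole number of events and floored at zero."""
    return [max(0, round(random.gauss(true_mean, noise_sd))) for _ in range(n)]

noiseless_happy = simulate_group(4, 0)  # no extraneous variability
noiseless_sad   = simulate_group(3, 0)
noisy_happy     = simulate_group(4, 2)  # realistic individual differences
noisy_sad       = simulate_group(3, 2)

# The true mean difference is the same in both versions, but in the
# noisy data it is buried in within-group variability.
diff_noiseless = statistics.mean(noiseless_happy) - statistics.mean(noiseless_sad)
```

With the noise standard deviation set to zero, every simulated participant recalls exactly the condition mean, reproducing the left half of Table 6.1; raising it produces the scatter seen in the right half.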

One way to control extraneous variables is to hold them constant. This can mean holding situation or task variables constant by testing all participants in the same location, giving them identical instructions, treating them in the same way, and so on. It can also mean holding participant variables constant. For example, many studies of language limit participants to right-handed people, who generally have their language areas isolated in their left cerebral hemispheres. Left-handed people are more likely to have their language areas isolated in their right cerebral hemispheres or distributed across both hemispheres, which can change the way they process language and thereby add noise to the data.

In principle, researchers can control extraneous variables by limiting participants to one very specific category of person, such as 20-year-old, straight, female, right-handed, sophomore psychology majors. The obvious downside to this approach is that it would lower the external validity of the study—in particular, the extent to which the results can be generalized beyond the people actually studied. For example, it might be unclear whether results obtained with a sample of younger straight women would apply to older gay men. In many situations, the advantages of a diverse sample outweigh the reduction in noise achieved by a homogeneous one.

Extraneous Variables as Confounding Variables

The second way that extraneous variables can make it difficult to detect the effect of the independent variable is by becoming confounding variables. A confounding variable is an extraneous variable that differs on average across levels of the independent variable. For example, in almost all experiments, participants’ intelligence quotients (IQs) will be an extraneous variable. But as long as there are participants with lower and higher IQs at each level of the independent variable so that the average IQ is roughly equal, then this variation is probably acceptable (and may even be desirable). What would be bad, however, would be for participants at one level of the independent variable to have substantially lower IQs on average and participants at another level to have substantially higher IQs on average. In this case, IQ would be a confounding variable.

To confound means to confuse, and this is exactly what confounding variables do. Because they differ across conditions—just like the independent variable—they provide an alternative explanation for any observed difference in the dependent variable. Figure 6.1 “Hypothetical Results From a Study on the Effect of Mood on Memory” shows the results of a hypothetical study in which participants in a positive mood condition scored higher on a memory task than participants in a negative mood condition. But if IQ is a confounding variable—with participants in the positive mood condition having higher IQs on average than participants in the negative mood condition—then it is unclear whether it was the positive moods or the higher IQs that caused participants in the first condition to score higher. One way to avoid confounding variables is by holding extraneous variables constant. For example, one could prevent IQ from becoming a confounding variable by limiting participants only to those with IQs of exactly 100. But this approach is not always desirable for reasons we have already discussed. A second and much more general approach—random assignment to conditions—will be discussed in detail shortly.

Figure 6.1 Hypothetical Results From a Study on the Effect of Mood on Memory


Because IQ also differs across conditions, it is a confounding variable.
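The logic of a confound can be made concrete with hypothetical numbers. In the sketch below (all values invented for illustration), the memory score is deliberately made to depend only on IQ, yet the two mood conditions still differ on the dependent variable, because IQ differs across conditions.

```python
import statistics

# Hypothetical data: the positive-mood group happens to have higher IQs,
# so IQ is confounded with the mood manipulation.
positive_mood_iqs = [110, 115, 120, 112, 118]
negative_mood_iqs = [90, 95, 88, 92, 94]

def memory_score(iq):
    """Memory depends on IQ alone here; mood has no effect at all."""
    return iq / 10

positive_scores = [memory_score(iq) for iq in positive_mood_iqs]
negative_scores = [memory_score(iq) for iq in negative_mood_iqs]

# The groups differ on the memory task even though mood did nothing:
# the IQ confound alone produces the apparent "mood effect."
gap = statistics.mean(positive_scores) - statistics.mean(negative_scores)
```

A researcher looking only at the group means could not tell whether mood or IQ produced the gap, which is exactly why a confounded study is low in internal validity.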

Key Takeaways

  • An experiment is a type of empirical study that features the manipulation of an independent variable, the measurement of a dependent variable, and control of extraneous variables.
  • Studies are high in internal validity to the extent that the way they are conducted supports the conclusion that the independent variable caused any observed differences in the dependent variable. Experiments are generally high in internal validity because of the manipulation of the independent variable and control of extraneous variables.
  • Studies are high in external validity to the extent that the result can be generalized to people and situations beyond those actually studied. Although experiments can seem “artificial”—and low in external validity—it is important to consider whether the psychological processes under study are likely to operate in other people and situations.
Practice: List five variables that can be manipulated by the researcher in an experiment. List five variables that cannot be manipulated by the researcher in an experiment.

Practice: For each of the following topics, decide whether that topic could be studied using an experimental research design and explain why or why not.

  • Effect of parietal lobe damage on people’s ability to do basic arithmetic.
  • Effect of being clinically depressed on the number of close friendships people have.
  • Effect of group training on the social skills of teenagers with Asperger’s syndrome.
  • Effect of paying people to take an IQ test on their performance on that test.

Cialdini, R. (2005, April). Don’t throw in the towel: Use social influence research. APS Observer. Retrieved from http://www.psychologicalscience.org/observer/getArticle.cfm?id=1762

Fredrickson, B. L., Roberts, T.-A., Noll, S. M., Quinn, D. M., & Twenge, J. M. (1998). The swimsuit becomes you: Sex differences in self-objectification, restrained eating, and math performance. Journal of Personality and Social Psychology, 75, 269–284.

Stanovich, K. E. (2010). How to think straight about psychology (9th ed.). Boston, MA: Allyn & Bacon.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.



To confound means to confuse , and this effect is exactly why confounding variables are undesirable. Because they differ systematically across conditions—just like the independent variable—they provide an alternative explanation for any observed difference in the dependent variable.  Figure 5.1  shows the results of a hypothetical study, in which participants in a positive mood condition scored higher on a memory task than participants in a negative mood condition. But if IQ is a confounding variable—with participants in the positive mood condition having higher IQs on average than participants in the negative mood condition—then it is unclear whether it was the positive moods or the higher IQs that caused participants in the first condition to score higher. One way to avoid confounding variables is by holding extraneous variables constant. For example, one could prevent IQ from becoming a confounding variable by limiting participants only to those with IQs of exactly 100. But this approach is not always desirable for reasons we have already discussed. A second and much more general approach—random assignment to conditions—will be discussed in detail shortly.

Figure 6.1 Hypothetical Results From a Study on the Effect of Mood on Memory. Because IQ also differs across conditions, it is a confounding variable.

Figure 5.1 Hypothetical Results From a Study on the Effect of Mood on Memory. Because IQ also differs across conditions, it is a confounding variable.

Key Takeaways

  • An experiment is a type of empirical study that features the manipulation of an independent variable, the measurement of a dependent variable, and control of extraneous variables.
  • An extraneous variable is any variable other than the independent and dependent variables. A confound is an extraneous variable that varies systematically with the independent variable.

Exercises

  • Practice: List five variables that can be manipulated by the researcher in an experiment. List five variables that cannot be manipulated by the researcher in an experiment.
  • Discussion: For each of the following topics, decide whether it could be studied experimentally and, if so, sketch a suitable design:
      • Effect of parietal lobe damage on people’s ability to do basic arithmetic.
      • Effect of being clinically depressed on the number of close friendships people have.
      • Effect of group training on the social skills of teenagers with Asperger’s syndrome.
      • Effect of paying people to take an IQ test on their performance on that test.


Experimental Psychology

Definition:

Experimental psychology is a subfield of psychology that focuses on scientific investigation and research methods to study human behavior and mental processes. It involves conducting controlled experiments to examine hypotheses and gather empirical data.

Subfields of Experimental Psychology:

Sensory Processes:

Sensory processes in experimental psychology involve understanding how humans perceive and process information through their senses, such as vision, hearing, taste, smell, and touch.

Learning and Memory:

This subfield explores how individuals acquire and retain knowledge and skills, including the study of different types of memory, learning strategies, and factors that influence memory processes.

Cognitive Psychology:

Cognitive psychology examines mental processes, including attention, perception, problem-solving, decision-making, language, and thinking. It investigates how individuals process information, solve problems, and make decisions.

Developmental Psychology:

Developmental psychology focuses on the study of human development across the lifespan, from infancy to old age. It investigates how individuals change physically, cognitively, and emotionally as they grow and mature.

Social Psychology:

Social psychology studies how individuals’ thoughts, feelings, and behaviors are influenced by social interactions and social environments. It examines topics such as conformity, persuasion, group dynamics, and intergroup relations.

Personality Psychology:

Personality psychology aims to understand individual differences in behavior, thoughts, and emotions. It investigates various personality traits, their development, and how they influence behavior and well-being.

Psychopathology:

This subfield focuses on the study of mental disorders, their causes, symptoms, and treatments. Psychopathology research is often conducted using experimental methods to examine the effectiveness of therapeutic interventions.

Psychopharmacology:

Psychopharmacology involves studying the effects of drugs on behavior, cognition, and emotions. It examines how different medications impact mental processes and aims to develop effective pharmacological treatments for psychological disorders.

Neuropsychology:

Neuropsychology investigates the relationship between brain function and behavior. It examines how brain damage, genetics, and neurological disorders affect cognitive abilities, emotions, and behavior.


Experimental Research

23 Experiment Basics

Learning objectives.

  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.

What Is an Experiment?

As we saw earlier in the book, an experiment is a type of study designed specifically to answer the question of whether there is a causal relationship between two variables. In other words, do changes in one variable (referred to as an independent variable) cause changes in another variable (referred to as a dependent variable)? Experiments have two fundamental features. The first is that the researchers manipulate, or systematically vary, the level of the independent variable. The different levels of the independent variable are called conditions. For example, in Darley and Latané’s experiment, the independent variable was the number of witnesses that participants believed to be present. The researchers manipulated this independent variable by telling participants that there were either one, two, or five other students involved in the discussion, thereby creating three conditions. A new researcher might be tempted to describe this situation as having three independent variables (one, two, or five students involved in the discussion), but there is actually only one independent variable (number of witnesses) with three different levels or conditions (one, two, or five students). The second fundamental feature of an experiment is that the researcher exerts control over, or minimizes the variability in, variables other than the independent and dependent variable. These other variables are called extraneous variables. Darley and Latané tested all their participants in the same room, exposed them to the same emergency situation, and so on. They also randomly assigned their participants to conditions so that the three groups would be similar to each other to begin with. Notice that although the words manipulation and control have similar meanings in everyday language, researchers make a clear distinction between them. They manipulate the independent variable by systematically changing its levels and control other variables by holding them constant.

Manipulation of the Independent Variable

Again, to  manipulate an independent variable means to change its level systematically so that different groups of participants are exposed to different levels of that variable, or the same group of participants is exposed to different levels at different times. For example, to see whether expressive writing affects people’s health, a researcher might instruct some participants to write about traumatic experiences and others to write about neutral experiences. The different levels of the independent variable are referred to as conditions , and researchers often give the conditions short descriptive names to make it easy to talk and write about them. In this case, the conditions might be called the “traumatic condition” and the “neutral condition.”

Notice that the manipulation of an independent variable must involve the active intervention of the researcher. Comparing groups of people who differ on the independent variable before the study begins is not the same as manipulating that variable. For example, a researcher who compares the health of people who already keep a journal with the health of people who do not keep a journal has not manipulated this variable and therefore has not conducted an experiment. This distinction  is important because groups that already differ in one way at the beginning of a study are likely to differ in other ways too. For example, people who choose to keep journals might also be more conscientious, more introverted, or less stressed than people who do not. Therefore, any observed difference between the two groups in terms of their health might have been caused by whether or not they keep a journal, or it might have been caused by any of the other differences between people who do and do not keep journals. Thus the active manipulation of the independent variable is crucial for eliminating potential alternative explanations for the results.

Of course, there are many situations in which the independent variable cannot be manipulated for practical or ethical reasons and therefore an experiment is not possible. For example, whether or not people have a significant early illness experience cannot be manipulated, making it impossible to conduct an experiment on the effect of early illness experiences on the development of hypochondriasis. This caveat does not mean it is impossible to study the relationship between early illness experiences and hypochondriasis—only that it must be done using nonexperimental approaches. We will discuss this type of methodology in detail later in the book.

Independent variables can be manipulated to create two conditions, and experiments involving a single independent variable with two conditions are often referred to as a single-factor two-level design. However, sometimes greater insights can be gained by adding more conditions to an experiment. When an experiment has one independent variable that is manipulated to produce more than two conditions, it is referred to as a single-factor multi-level design. So rather than comparing a condition in which there was one witness to a condition in which there were five witnesses (which would represent a single-factor two-level design), Darley and Latané’s experiment used a single-factor multi-level design, manipulating the independent variable to produce three conditions (a one-witness, a two-witnesses, and a five-witnesses condition).
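The idea of one independent variable with several conditions, plus random assignment to those conditions, can be sketched in a few lines of Python. This is a hypothetical illustration (the function name and the numeric participant IDs are invented), not code from Darley and Latané’s study:

```python
import random

def assign_to_conditions(participants, conditions):
    """Randomly assign participants to conditions in equal numbers."""
    shuffled = participants[:]   # copy so the original list is untouched
    random.shuffle(shuffled)
    # Deal participants out round-robin, one condition at a time
    return {cond: shuffled[i::len(conditions)] for i, cond in enumerate(conditions)}

# One independent variable (number of witnesses) with three levels/conditions
levels = ["one witness", "two witnesses", "five witnesses"]
groups = assign_to_conditions(list(range(60)), levels)
print({cond: len(group) for cond, group in groups.items()})
# {'one witness': 20, 'two witnesses': 20, 'five witnesses': 20}
```

Note that however the shuffle comes out, each condition receives the same number of participants; only the composition of the groups is left to chance.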

Control of Extraneous Variables

As we have seen previously in the chapter, an  extraneous variable  is anything that varies in the context of a study other than the independent and dependent variables. In an experiment on the effect of expressive writing on health, for example, extraneous variables would include participant variables (individual differences) such as their writing ability, their diet, and their gender. They would also include situational or task variables such as the time of day when participants write, whether they write by hand or on a computer, and the weather. Extraneous variables pose a problem because many of them are likely to have some effect on the dependent variable. For example, participants’ health will be affected by many things other than whether or not they engage in expressive writing. This influencing factor can make it difficult to separate the effect of the independent variable from the effects of the extraneous variables, which is why it is important to control extraneous variables by holding them constant.

Extraneous Variables as “Noise”

Extraneous variables make it difficult to detect the effect of the independent variable in two ways. One is by adding variability or “noise” to the data. Imagine a simple experiment on the effect of mood (happy vs. sad) on the number of happy childhood events people are able to recall. Participants are put into a negative or positive mood (by showing them a happy or sad video clip) and then asked to recall as many happy childhood events as they can. The two leftmost columns of  Table 5.1 show what the data might look like if there were no extraneous variables and the number of happy childhood events participants recalled was affected only by their moods. Every participant in the happy mood condition recalled exactly four happy childhood events, and every participant in the sad mood condition recalled exactly three. The effect of mood here is quite obvious. In reality, however, the data would probably look more like those in the two rightmost columns of  Table 5.1 . Even in the happy mood condition, some participants would recall fewer happy memories because they have fewer to draw on, use less effective recall strategies, or are less motivated. And even in the sad mood condition, some participants would recall more happy childhood memories because they have more happy memories to draw on, they use more effective recall strategies, or they are more motivated. Although the mean difference between the two groups is the same as in the idealized data, this difference is much less obvious in the context of the greater variability in the data. Thus one reason researchers try to control extraneous variables is so their data look more like the idealized data in  Table 5.1 , which makes the effect of the independent variable easier to detect (although real data never look quite  that  good).

Table 5.1 Hypothetical Idealized Data (Left) and Realistic Data (Right)

Idealized data              Realistic data
Happy mood    Sad mood      Happy mood    Sad mood
4             3             3             1
4             3             6             3
4             3             2             4
4             3             4             0
4             3             5             5
4             3             2             7
4             3             3             2
4             3             1             5
4             3             6             1
4             3             8             2
M = 4         M = 3         M = 4         M = 3
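For readers who like to verify such claims numerically, the means and standard deviations of the Table 5.1 data can be computed with Python’s standard statistics module. The numbers below are copied from the table; the code itself is only an illustrative sketch:

```python
from statistics import mean, stdev

# Data from Table 5.1 (number of happy childhood events recalled)
idealized_happy = [4] * 10
idealized_sad = [3] * 10
real_happy = [3, 6, 2, 4, 5, 2, 3, 1, 6, 8]
real_sad = [1, 3, 4, 0, 5, 7, 2, 5, 1, 2]

# The mean difference between conditions is the same in both data sets...
print(mean(idealized_happy) - mean(idealized_sad))  # 1
print(mean(real_happy) - mean(real_sad))            # 1

# ...but the realistic data contain far more within-group variability ("noise")
print(stdev(idealized_happy))  # 0.0
print(stdev(real_happy))       # ≈ 2.21
```

The identical mean difference paired with a much larger standard deviation is exactly what makes the effect harder to see in the realistic data.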

One way to control extraneous variables is to hold them constant. This technique can mean holding situation or task variables constant by testing all participants in the same location, giving them identical instructions, treating them in the same way, and so on. It can also mean holding participant variables constant. For example, many studies of language limit participants to right-handed people, who generally have their language areas isolated in their left cerebral hemispheres [1] . Left-handed people are more likely to have their language areas isolated in their right cerebral hemispheres or distributed across both hemispheres, which can change the way they process language and thereby add noise to the data.
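In practice, holding a participant variable constant often amounts to a simple eligibility filter at recruitment. A minimal sketch, using invented participant records:

```python
# Invented participant records; handedness is "held constant" by filtering
participants = [
    {"id": 1, "handedness": "right"},
    {"id": 2, "handedness": "left"},
    {"id": 3, "handedness": "right"},
    {"id": 4, "handedness": "left"},
]

# Only right-handed people are eligible, removing handedness as a source of noise
eligible = [p for p in participants if p["handedness"] == "right"]
print([p["id"] for p in eligible])  # [1, 3]
```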

In principle, researchers can control extraneous variables by limiting participants to one very specific category of person, such as 20-year-old, heterosexual, female, right-handed psychology majors. The obvious downside to this approach is that it would lower the external validity of the study—in particular, the extent to which the results can be generalized beyond the people actually studied. For example, it might be unclear whether results obtained with a sample of younger lesbian women would apply to older gay men. In many situations, the advantages of a diverse sample (increased external validity) outweigh the reduction in noise achieved by a homogeneous one.

Extraneous Variables as Confounding Variables

The second way that extraneous variables can make it difficult to detect the effect of the independent variable is by becoming confounding variables. A confounding variable  is an extraneous variable that differs on average across  levels of the independent variable (i.e., it is an extraneous variable that varies systematically with the independent variable). For example, in almost all experiments, participants’ intelligence quotients (IQs) will be an extraneous variable. But as long as there are participants with lower and higher IQs in each condition so that the average IQ is roughly equal across the conditions, then this variation is probably acceptable (and may even be desirable). What would be bad, however, would be for participants in one condition to have substantially lower IQs on average and participants in another condition to have substantially higher IQs on average. In this case, IQ would be a confounding variable.

To confound means to confuse , and this effect is exactly why confounding variables are undesirable. Because they differ systematically across conditions—just like the independent variable—they provide an alternative explanation for any observed difference in the dependent variable.  Figure 5.1  shows the results of a hypothetical study, in which participants in a positive mood condition scored higher on a memory task than participants in a negative mood condition. But if IQ is a confounding variable—with participants in the positive mood condition having higher IQs on average than participants in the negative mood condition—then it is unclear whether it was the positive moods or the higher IQs that caused participants in the first condition to score higher. One way to avoid confounding variables is by holding extraneous variables constant. For example, one could prevent IQ from becoming a confounding variable by limiting participants only to those with IQs of exactly 100. But this approach is not always desirable for reasons we have already discussed. A second and much more general approach—random assignment to conditions—will be discussed in detail shortly.
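The logic of a confound can also be made concrete with a small simulation. In the toy model below (all numbers are invented for illustration), memory scores depend only on IQ, yet the two mood conditions still differ on the dependent variable because IQ differs across conditions:

```python
from statistics import mean

# Invented IQ scores: the confound is that IQ differs across mood conditions
positive_mood_iqs = [110, 115, 105, 120, 112]   # higher on average
negative_mood_iqs = [95, 90, 100, 85, 93]       # lower on average

def memory_score(iq):
    # In this toy model, memory depends only on IQ; mood plays no role at all
    return iq / 10

pos_scores = [memory_score(iq) for iq in positive_mood_iqs]
neg_scores = [memory_score(iq) for iq in negative_mood_iqs]

# The conditions still differ on the dependent variable, so an observed
# difference cannot be attributed to mood -- here it is driven entirely by IQ
print(round(mean(pos_scores) - mean(neg_scores), 2))  # 1.98
```

An experimenter who saw only the two group means would have no way to tell this situation apart from a genuine mood effect, which is why confounds undermine internal validity.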

Figure 5.1 Hypothetical Results From a Study on the Effect of Mood on Memory. Because IQ also differs across conditions, it is a confounding variable.

Treatment and Control Conditions

In psychological research, a treatment is any intervention meant to change people’s behavior for the better. This intervention includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a treatment condition , in which they receive the treatment, or a control condition , in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial .

There are different types of control conditions. In a no-treatment control condition , participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bed sheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008) [2] .

Placebo effects are interesting in their own right (see Note “The Powerful Placebo” ), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 5.2 shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 5.2 ) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 5.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Fortunately, there are several solutions to this problem. One is to include a placebo control condition, in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This difference is what is shown by a comparison of the two outer bars in Figure 5.2.

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a wait-list control condition, in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This disclosure allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”
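The arithmetic behind these comparisons is simple but worth making explicit. In the sketch below (all improvement scores are invented for illustration), the placebo effect is the placebo group’s improvement over no treatment, and the treatment’s real effect is its improvement over and above the placebo group:

```python
from statistics import mean

# Invented improvement scores for three groups in a hypothetical trial
no_treatment = [1, 2, 1, 0, 2]   # little change without any intervention
placebo      = [4, 3, 5, 4, 4]   # expectation alone produces some improvement
treatment    = [7, 6, 8, 7, 7]   # expectation plus the active ingredient

# Improvement attributable to expectations alone
placebo_effect = mean(placebo) - mean(no_treatment)

# Improvement over and above expectations -- the treatment's real effect
treatment_effect = mean(treatment) - mean(placebo)

print(round(placebo_effect, 1), treatment_effect)  # 2.8 3
```

Comparing treatment against no treatment alone would overstate the treatment’s effect, because it would bundle the placebo effect into the difference.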

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999) [3] . There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002) [4] . The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. Note that the IRB would have carefully considered the use of deception in this case and judged that the benefits of using it outweighed the risks and that there was no other way to answer the research question (about the effectiveness of a placebo procedure) without it. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).

  • Knecht, S., Dräger, B., Deppe, M., Bobe, L., Lohmann, H., Flöel, A., . . . Henningsen, H. (2000). Handedness and hemispheric language dominance in healthy humans. Brain: A Journal of Neurology, 123 (12), 2512-2518. http://dx.doi.org/10.1093/brain/123.12.2512 ↵
  • Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59 , 565–590. ↵
  • Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician . Baltimore, MD: Johns Hopkins University Press. ↵
  • Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347 , 81–88. ↵

Glossary

Experiment: A type of study designed specifically to answer the question of whether there is a causal relationship between two variables.

Independent variable: The variable the experimenter manipulates.

Dependent variable: The variable the experimenter measures (it is the presumed effect).

Conditions: The different levels of the independent variable to which participants are assigned.

Control: Holding extraneous variables constant in order to separate the effect of the independent variable from the effect of the extraneous variables.

Extraneous variable: Any variable other than the dependent and independent variable.

Manipulation: Changing the level, or condition, of the independent variable systematically so that different groups of participants are exposed to different levels of that variable, or the same group of participants is exposed to different levels at different times.

Single-factor two-level design: An experiment design involving a single independent variable with two conditions.

Single-factor multi-level design: An experiment in which one independent variable is manipulated to produce more than two conditions.

Confounding variable: An extraneous variable that varies systematically with the independent variable, and thus confuses the effect of the independent variable with the effect of the extraneous one.

Treatment: Any intervention meant to change people’s behavior for the better.

Treatment condition: The condition in which participants receive the treatment.

Control condition: The condition in which participants do not receive the treatment.

Randomized clinical trial: An experiment that researches the effectiveness of psychotherapies and medical treatments.

No-treatment control condition: The condition in which participants receive no treatment whatsoever.

Placebo: A simulated treatment that lacks any active ingredient or element that is hypothesized to make the treatment effective, but is otherwise identical to the treatment.

Placebo effect: An effect that is due to the placebo rather than the treatment.

Placebo control condition: The condition in which participants receive a placebo rather than the treatment.

Wait-list control condition: The condition in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Experimental Psychology: 10 Examples & Definition


Dave Cornell (PhD)

This article was peer-reviewed and edited by Chris Drew (PhD).

Experimental psychology refers to studying psychological phenomena using scientific methods. Originally, the primary scientific method involved manipulating one variable and observing systematic changes in another variable.

Today, psychologists utilize several types of scientific methodologies.

Experimental psychology examines a wide range of psychological phenomena, including memory, sensation and perception, cognitive processes, motivation, emotion, and developmental processes, as well as the neurophysiological concomitants of each of these subjects.

Studies are conducted on both animal and human participants, and must comply with stringent requirements and controls regarding the ethical treatment of both.

Definition of Experimental Psychology

Experimental psychology is a branch of psychology that utilizes scientific methods to investigate the mind and behavior.

It involves the systematic and controlled study of human and animal behavior through observation and experimentation.

Experimental psychologists design and conduct experiments to understand cognitive processes, perception, learning, memory, emotion, and many other aspects of psychology. They often manipulate variables (independent variables) to see how this affects behavior or mental processes (dependent variables).

The findings from experimental psychology research are often used to better understand human behavior and can be applied in a range of contexts, such as education, health, business, and more.

Experimental Psychology Examples

1. The Puzzle Box Studies (Thorndike, 1898) Placing different cats in a box that can only be escaped by pulling a cord, and then taking detailed notes on how long it took for them to escape allowed Edward Thorndike to derive the Law of Effect: actions followed by positive consequences are more likely to occur again, and actions followed by negative consequences are less likely to occur again (Thorndike, 1898).

2. Reinforcement Schedules (Skinner, 1956) By placing rats in a Skinner Box and changing when and how often the rats are rewarded for pressing a lever, it is possible to identify how each schedule results in different behavior patterns (Skinner, 1956). This led to a wide range of theoretical ideas around how rewards and consequences can shape the behaviors of both animals and humans.

3. Observational Learning (Bandura, 1980) Some children watch a video of an adult punching and kicking a Bobo doll. Other children watch a video in which the adult plays nicely with the doll. By carefully observing the children’s behavior later when in a room with a Bobo doll, researchers can determine if television violence affects children’s behavior (Bandura, 1980).

4. The Fallibility of Memory (Loftus & Palmer, 1974) A group of participants watch the same video of two cars having an accident. Afterward, some are asked to estimate the rate of speed the cars were going when they “smashed” into each other, while others are asked to estimate the rate of speed the cars were going when they “bumped” into each other. Simply changing the phrasing of the question changes the memory of the eyewitness.

5. Intrinsic Motivation in the Classroom (Dweck, 1990) To investigate the role of autonomy in intrinsic motivation, half of the students are told they are “free to choose” which tasks to complete. The other half of the students are told they “must choose” some of the tasks. Researchers then carefully observe how long the students engage in the tasks and later ask them whether they enjoyed doing the tasks.

6. Systematic Desensitization (Wolpe, 1958) A clinical psychologist carefully documents his treatment of a patient’s social phobia with progressive relaxation. At first, the patient is trained to monitor, tense, and relax various muscle groups while viewing photos of parties. Weeks later, they approach a stranger to ask for directions, initiate a conversation on a crowded bus, and attend a small social gathering. The therapist’s notes are transcribed into a scientific report and published in a peer-reviewed journal.

7. Study of Remembering (Bartlett, 1932) Bartlett’s work is a seminal study in the field of memory, where he used the concept of “schema” to describe an organized pattern of thought or behavior. He conducted a series of experiments using folk tales to show that memory recall is influenced by cultural schemas and personal experiences.

8. Study of Obedience (Milgram, 1963) This famous study explored the conflict between obedience to authority and personal conscience. Milgram found that a majority of participants were willing to administer what they believed were harmful electric shocks to a stranger when instructed by an authority figure, highlighting the power of authority and situational factors in driving behavior.

9. Pavlov’s Dog Study (Pavlov, 1927) Ivan Pavlov, a Russian physiologist, conducted a series of experiments that became a cornerstone in the field of experimental psychology. Pavlov noticed that dogs would salivate when they saw food. He then began to ring a bell each time he presented the food to the dogs. After a while, the dogs began to salivate merely at the sound of the bell. This experiment demonstrated the principle of “classical conditioning.”

10. Piaget’s Stages of Development (Piaget, 1958) Jean Piaget proposed a theory of cognitive development in children that consists of four distinct stages: from the sensorimotor stage (birth to 2 years), where children learn about the world through their senses and motor activities, through to the formal operational stage (12 years and beyond), where abstract reasoning and hypothetical thinking develop. Piaget’s theory is an example of experimental psychology because it was developed through systematic observation and experimentation on children’s problem-solving behaviors.

Types of Research Methodologies in Experimental Psychology 

Researchers have utilized several different types of research methodologies since the early days of Wundt (1832-1920).

1. The Experiment

The experiment involves the researcher manipulating the level of one variable, called the Independent Variable (IV), and then observing changes in another variable, called the Dependent Variable (DV).

The researcher is interested in determining if the IV causes changes in the DV. For example, does television violence make children more aggressive?

So, some children in the study, called research participants, will watch a show with TV violence, called the treatment group. Others will watch a show with no TV violence, called the control group.

So, there are two levels of the IV: violence and no violence. Next, children will be observed to see if they act more aggressively. This is the DV.

If TV violence makes children more aggressive, then the children who watched the violent show will be more aggressive than the children who watched the non-violent show.

A key requirement of the experiment is random assignment. Each research participant is assigned to one of the two groups in a completely random process. This means that each group will have a mix of children: different personality types, diverse family backgrounds, and a range of intelligence levels.
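To make the idea concrete, here is a minimal sketch of random assignment in Python; the participant names and two-group design are hypothetical, and real studies typically use dedicated randomization software:

```python
import random

def randomly_assign(participants, groups=("treatment", "control")):
    """Shuffle the participant list, then deal participants out to
    the groups in turn, so every assignment is equally likely."""
    pool = list(participants)
    random.shuffle(pool)
    assignment = {name: [] for name in groups}
    for i, person in enumerate(pool):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

# Hypothetical participants for the TV-violence example.
children = ["Ava", "Ben", "Cam", "Dee", "Eli", "Fay"]
groups = randomly_assign(children)
```

Because the assignment depends only on chance, personality types, family backgrounds, and intelligence levels should be spread roughly evenly across the two groups.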

2. The Longitudinal Study

A longitudinal study involves selecting a sample of participants and then following them for years, or decades, periodically collecting data on the variables of interest.

For example, a researcher might be interested in determining if parenting style affects the academic performance of children. Parenting style is called the predictor variable, and academic performance is called the outcome variable.

Researchers will begin by randomly selecting a group of children to be in the study. Then, they will identify the type of parenting practices used when the children are 4 and 5 years old.

A few years later, perhaps when the children are 8 and 9, the researchers will collect data on their grades. This process can be repeated over the next 10 years, including through college.

If parenting style has an effect on academic performance, then the researchers will see a connection between the predictor variable and outcome variable.

Children raised with parenting style X will have higher grades than children raised with parenting style Y.

3. The Case Study

The case study is an in-depth study of one individual. This is a research methodology often used early in the examination of a psychological phenomenon or therapeutic treatment.

For example, in the early days of treating phobias, a clinical psychologist may try teaching one of their patients how to relax every time they see the object that creates so much fear and anxiety, such as a large spider.

The therapist would take very detailed notes on how the teaching process was implemented and the reactions of the patient. When the treatment had been completed, those notes would be written in a scientific form and submitted for publication in a scientific journal for other therapists to learn from.

There are several other types of methodologies available which vary different aspects of the three described above. The researcher will select a methodology that is most appropriate to the phenomenon they want to examine.

They also must take into account various practical considerations such as how much time and resources are needed to complete the study. Conducting research always costs money.

People and equipment are needed to carry out every study, so researchers often try to obtain funding from their university or a government agency.

Origins and Key Developments in Experimental Psychology

timeline of experimental psychology, explained below

Wilhelm Maximilian Wundt (1832-1920) is considered one of the fathers of modern psychology. He was a physiologist and philosopher and helped establish psychology as a distinct discipline (Khaleefa, 1999).  

In 1879 he established the world’s first psychology research lab at the University of Leipzig. This is considered a key milestone in establishing psychology as a scientific discipline. In addition to being the first person to use the term “psychologist” to describe himself, he also founded the discipline’s first scientific journal, Philosophische Studien, in 1883.

Another notable figure in the development of experimental psychology is Ernst Weber. Trained as a physician, Weber studied sensation and perception and created the first quantitative law in psychology.

The equation denotes how judgments of sensory differences are relative to previous levels of sensation, a quantity referred to as the just-noticeable difference (JND). This is known today as Weber’s Law (Hergenhahn, 2009).
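As a rough illustration, Weber’s Law says the just-noticeable difference is a constant proportion of the baseline stimulus. The Weber fraction of 0.02 below is purely illustrative; real fractions vary by sense modality:

```python
def just_noticeable_difference(intensity, weber_fraction=0.02):
    """Weber's Law: delta-I / I = k, so the smallest detectable
    change (the JND) grows in proportion to baseline intensity."""
    return weber_fraction * intensity

# A heavier starting weight requires a larger change before the
# difference is reliably noticed.
light = just_noticeable_difference(100)    # 2.0
heavy = just_noticeable_difference(1000)   # 20.0
```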

Gustav Fechner, one of Weber’s students, published the first book on experimental psychology in 1860, titled Elemente der Psychophysik. His work centered on the measurement of psychophysical facets of sensation and perception, and many of his methods are still in use today.

The first American textbook on experimental psychology was Elements of Physiological Psychology, published in 1887 by George Trumbull Ladd.

Ladd also established a psychology lab at Yale University, while G. Stanley Hall and Charles Sanders Peirce continued Wundt’s work at a lab at Johns Hopkins University.

In the late 1800s, Charles Sanders Peirce’s contribution to experimental psychology was especially noteworthy because he invented the concept of random assignment (Stigler, 1992; Dehue, 1997).

This procedure ensures that each participant has an equal chance of being placed in any of the experimental groups (e.g., treatment or control group). This eliminates the influence of confounding factors related to inherent characteristics of the participants.

Random assignment is a fundamental criterion for a study to be considered a valid experiment.

From there, experimental psychology flourished in the 20th century as a science and transformed into an approach utilized in cognitive psychology, developmental psychology, and social psychology .

Today, the term experimental psychology refers to the study of a wide range of phenomena and involves methodologies not limited to the manipulation of variables.

The Scientific Process and Experimental Psychology

The one thing that makes psychology a science, and distinguishes it from its roots in philosophy, is its reliance upon the scientific process to answer questions. Making psychology a science was the main goal of its earliest founders, such as Wilhelm Wundt.

There are numerous steps in the scientific process, outlined in the graphic below.

an overview of the scientific process, summarized in text in the appendix

1. Observation

First, the scientist observes an interesting phenomenon that sparks a question. For example, are the memories of eyewitnesses really reliable, or are they subject to bias or unintentional manipulation?

2. Hypothesize

Next, this question is converted into a testable hypothesis. For instance: the words used to question a witness can influence what they think they remember.

3. Devise a Study

Then the researcher(s) select a methodology that will allow them to test that hypothesis. In this case, the researchers choose the experiment, which will involve randomly assigning some participants to different conditions.

In one condition, participants are asked a question that implies a certain memory (treatment group), while other participants are asked a question which is phrased neutrally and does not imply a certain memory (control group).

The researchers then write a proposal that describes in detail the procedures they want to use, how participants will be selected, and the safeguards they will employ to ensure the rights of the participants.

That proposal is submitted to an Institutional Review Board (IRB). The IRB is comprised of a panel of researchers, community representatives, and other professionals that are responsible for reviewing all studies involving human participants.

4. Conduct the Study

If the IRB accepts the proposal, then the researchers may begin collecting data. After the data has been collected, it is analyzed using a software program such as SPSS.

Those analyses will either support or reject the hypothesis. That is, either the participants’ memories were affected by the wording of the question, or not.
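As a sketch of what that analysis might look like (the speed-estimate data below are invented, and the pooled-variance t statistic is just one of many tests a package like SPSS can run):

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Independent-samples t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical speed estimates (mph) under the two question wordings.
smashed = [41, 39, 44, 42, 40, 43]
bumped = [34, 36, 33, 35, 37, 32]
t_stat = two_sample_t(smashed, bumped)
```

A t statistic this large (about 6.5 with 10 degrees of freedom) would far exceed the usual critical value, supporting the hypothesis that the wording of the question affected the estimates.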

5. Publish the study

Finally, the researchers write a paper detailing their procedures and results of the statistical analyses. That paper is then submitted to a scientific journal.

The lead editor of that journal will then send copies of the paper to 3-5 experts in that subject. Each of those experts will read the paper and basically try to find as many things wrong with it as possible. Because they are experts, they are very good at this task.

After reading those critiques, the editor will most likely either send the paper back to the researchers and require that they respond to the criticisms and perhaps collect more data, or reject the paper outright.

In some cases, the study is so well done that the criticisms are minimal and the editor accepts the paper. It then gets published in the scientific journal several months later.

That entire process can easily take two years, usually more. But the findings of that study will have gone through a very rigorous process. This means that we can have substantial confidence that the conclusions of the study are valid.

Experimental psychology refers to utilizing a scientific process to investigate psychological phenomena.

There are a variety of methods employed today. They are used to study a wide range of subjects, including memory, cognitive processes, emotions and the neurophysiological basis of each.

The history of psychology as a science began in the 1800s primarily in Germany. As interest grew, the field expanded to the United States where several influential research labs were established.

As more methodologies were developed, the field of psychology as a science evolved into a prolific scientific discipline that has provided invaluable insights into human behavior.

Bartlett, F. C. (1995). Remembering: A study in experimental and social psychology. Cambridge University Press.

Dehue, T. (1997). Deception, efficiency, and random groups: Psychology and the gradual origination of the random group design. Isis , 88 (4), 653-673.

Ebbinghaus, H. (2013). Memory: A contribution to experimental psychology.  Annals of neurosciences ,  20 (4), 155.

Hergenhahn, B. R. (2009). An introduction to the history of psychology. Belmont, CA: Wadsworth Cengage Learning.

Khaleefa, O. (1999). Who is the founder of psychophysics and experimental psychology? American Journal of Islam and Society , 16 (2), 1-26.

Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13, 585-589.

Pavlov, I.P. (1927). Conditioned reflexes . Dover, New York.

Piaget, J. (1959).  The language and thought of the child  (Vol. 5). Psychology Press.

Piaget, J., Fraisse, P., & Reuchlin, M. (2014). Experimental psychology its scope and method: Volume I (Psychology Revivals): History and method . Psychology Press.

Skinner, B. F. (1956). A case history in scientific method. American Psychologist, 11, 221-233.

Stigler, S. M. (1992). A historical view of statistical concepts in psychology and educational research. American Journal of Education , 101 (1), 60-70.

Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement 2 .

Wolpe, J. (1958). Psychotherapy by reciprocal inhibition. Stanford, CA: Stanford University Press.

Appendix: Images reproduced as Text

Definition: Experimental psychology is a branch of psychology that focuses on conducting systematic and controlled experiments to study human behavior and cognition.

Overview: Experimental psychology aims to gather empirical evidence and explore cause-and-effect relationships between variables. Experimental psychologists utilize various research methods, including laboratory experiments, surveys, and observations, to investigate topics such as perception, memory, learning, motivation, and social behavior .

Example: Pavlov’s dog experiment used scientific methods to develop a theory about how learning and association occur in animals. The same concepts were subsequently applied to the study of humans, where psychology-based ideas about learning were developed. Pavlov’s use of empirical evidence was foundational to the study’s success.

Experimental Psychology Milestones:

1890: William James publishes “The Principles of Psychology”, a foundational text in the field of psychology.

1896: Lightner Witmer opens the first psychological clinic at the University of Pennsylvania, marking the beginning of clinical psychology.

1913: John B. Watson publishes “Psychology as the Behaviorist Views It”, marking the beginning of Behaviorism.

1920: Hermann Rorschach introduces the Rorschach inkblot test.

1938: B.F. Skinner introduces the concept of operant conditioning .

1967: Ulric Neisser publishes “Cognitive Psychology” , marking the beginning of the cognitive revolution.

1980: The third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) is published, introducing a new classification system for mental disorders.

The Scientific Process

  • Observe an interesting phenomenon
  • Formulate testable hypothesis
  • Select methodology and design study
  • Submit research proposal to IRB
  • Collect and analyze data; write paper
  • Submit paper for critical reviews


How to Conduct a Psychology Experiment

Conducting your first psychology experiment can be a long, complicated, and sometimes intimidating process. It can be especially confusing if you are not quite sure where to begin or which steps to take.

Like other sciences, psychology utilizes the  scientific method  and bases conclusions upon empirical evidence. When conducting an experiment, it is important to follow the basic steps of the scientific method:

  • Ask a testable question
  • Define your variables
  • Conduct background research
  • Design your experiment
  • Perform the experiment
  • Collect and analyze the data
  • Draw conclusions
  • Share the results with the scientific community

At a Glance

It's important to know the steps of the scientific method if you are conducting an experiment in psychology or other fields. The process encompasses finding a problem you want to explore, learning what has already been discovered about the topic, determining your variables, and finally designing and performing your experiment. But the process doesn't end there! Once you've collected your data, it's time to analyze the numbers, determine what they mean, and share what you've found.

Find a Research Problem or Question

Picking a research problem can be one of the most challenging steps when you are conducting an experiment. After all, there are so many different topics you might choose to investigate.

Are you stuck for an idea? Consider some of the following:

Investigate a Commonly Held Belief

Folk knowledge is a good source of questions that can serve as the basis for psychological research. For example, many people believe that staying up all night to cram for a big exam can actually hurt test performance.

You could conduct a study to compare the test scores of students who stayed up all night with the scores of students who got a full night's sleep before the exam.

Review Psychology Literature

Published studies are a great source of unanswered research questions. In many cases, the authors will even note the need for further research. Find a published study that you find intriguing, and then come up with some questions that require further exploration.

Think About Everyday Problems

There are many practical applications for psychology research. Explore various problems that you or others face each day, and then consider how you could research potential solutions. For example, you might investigate different memorization strategies to determine which methods are most effective.

Define Your Variables

Variables are anything that might impact the outcome of your study. An operational definition describes exactly what the variables are and how they are measured within the context of your study.

For example, if you were doing a study on the impact of sleep deprivation on driving performance, you would need to operationally define sleep deprivation and driving performance .

An operational definition refers to a precise way that an abstract concept will be measured. For example, you cannot directly observe and measure something like test anxiety . You can, however, use an anxiety scale and assign values based on how many anxiety symptoms a person is experiencing. 

In this example, you might define sleep deprivation as getting less than seven hours of sleep at night. You might define driving performance as how well a participant does on a driving test.

What is the purpose of operationally defining variables? The main purpose is control. By understanding what you are measuring, you can control for it by holding the variable constant between all groups or manipulating it as an independent variable .

Develop a Hypothesis

The next step is to develop a testable hypothesis that predicts how the operationally defined variables are related. In the sleep deprivation example, the hypothesis might be: "Students who are sleep-deprived will perform worse than students who are not sleep-deprived on a test of driving performance."

Null Hypothesis

In order to determine if the results of the study are significant, it is essential to also have a null hypothesis. The null hypothesis is the prediction that one variable will have no association with the other variable.

In other words, the null hypothesis assumes that there will be no difference in the effects of the two treatments in our experimental and control groups .

The null hypothesis is assumed to be valid unless contradicted by the results. The experimenters can either reject the null hypothesis in favor of the alternative hypothesis or not reject the null hypothesis.

It is important to remember that not rejecting the null hypothesis does not mean that you are accepting the null hypothesis. To say that you are accepting the null hypothesis is to suggest that something is true simply because you did not find any evidence against it. This represents a logical fallacy that should be avoided in scientific research.  
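That decision logic can be summarized in a few lines of Python; the .05 significance level below is a common convention, not a universal rule:

```python
def evaluate(p_value, alpha=0.05):
    """Return one of the only two conclusions a significance test
    permits. Failing to reject the null is NOT the same as
    accepting it."""
    if p_value < alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

strong_evidence = evaluate(0.01)   # p below alpha
weak_evidence = evaluate(0.30)     # p above alpha
```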

Conduct Background Research

Once you have developed a testable hypothesis, it is important to spend some time doing some background research. What do researchers already know about your topic? What questions remain unanswered?

You can learn about previous research on your topic by exploring books, journal articles, online databases, newspapers, and websites devoted to your subject.

Reading previous research helps you gain a better understanding of what you will encounter when conducting an experiment. Understanding the background of your topic provides a better basis for your own hypothesis.

After conducting a thorough review of the literature, you might choose to alter your own hypothesis. Background research also allows you to explain why you chose to investigate your particular hypothesis and articulate why the topic merits further exploration.

As you research the history of your topic, take careful notes and create a working bibliography of your sources. This information will be valuable when you begin to write up your experiment results.

Select an Experimental Design

After conducting background research and finalizing your hypothesis, your next step is to develop an experimental design. There are three basic types of designs that you might utilize. Each has its own strengths and weaknesses:

Pre-Experimental Design

A single group of participants is studied, and there is no comparison between a treatment group and a control group. Examples of pre-experimental designs include case studies (one group is given a treatment and the results are measured) and pre-test/post-test studies (one group is tested, given a treatment, and then retested).

Quasi-Experimental Design

This type of experimental design does include a control group but does not include randomization. This type of design is often used if it is not feasible or ethical to perform a randomized controlled trial.

True Experimental Design

A true experimental design, also known as a randomized controlled trial, includes both of the elements that pre-experimental designs and quasi-experimental designs lack—control groups and random assignment to groups.

Standardize Your Procedures

In order to arrive at legitimate conclusions, it is essential to compare apples to apples.

Each participant in each group must receive the same treatment under the same conditions.

For example, in our hypothetical study on the effects of sleep deprivation on driving performance, the driving test must be administered to each participant in the same way. The driving course must be the same, the obstacles faced must be the same, and the time given must be the same.

Choose Your Participants

In addition to making sure that the testing conditions are standardized, it is also essential to ensure that your pool of participants is the same.

If the individuals in your control group (those who are not sleep deprived) all happen to be amateur race car drivers while your experimental group (those that are sleep deprived) are all people who just recently earned their driver's licenses, your experiment will lack standardization.

When choosing subjects, there are some different techniques you can use.

Simple Random Sample

In a simple random sample, participants are randomly selected from a group, with every member having an equal chance of being chosen. Because of this, a simple random sample can be used to represent the entire population from which it is drawn.

Drawing a simple random sample can be helpful when you don't know a lot about the characteristics of the population.
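In Python, drawing a simple random sample is a one-liner; the 500-student roster and sample size of 50 here are hypothetical:

```python
import random

# Hypothetical sampling frame: every student in the population.
population = [f"student_{n}" for n in range(1, 501)]

# Draw 50 students without replacement, each with an equal
# chance of selection.
sample = random.sample(population, k=50)
```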

Stratified Random Sample

In a stratified random sample, participants are randomly selected from different subsets (strata) of the population. These subsets might be defined by characteristics such as geographic location, age, sex, race, or socioeconomic status.

Stratified random samples are more complex to carry out. However, you might opt for this method if there are key characteristics about the population that you want to explore in your research.
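A minimal sketch of stratified sampling, assuming a made-up population with an urban/rural split:

```python
import random

def stratified_sample(population, get_stratum, per_stratum):
    """Group the population by a characteristic, then draw a simple
    random sample of fixed size from within each stratum."""
    strata = {}
    for person in population:
        strata.setdefault(get_stratum(person), []).append(person)
    return {name: random.sample(members, per_stratum)
            for name, members in strata.items()}

# Hypothetical population: 100 urban and 40 rural residents.
people = ([{"id": i, "region": "urban"} for i in range(100)]
          + [{"id": i, "region": "rural"} for i in range(100, 140)])
sample = stratified_sample(people, lambda p: p["region"], per_stratum=10)
```

Sampling within each stratum guarantees that rural residents appear in the sample even though they are a minority of the population.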

Conduct Tests and Collect Data

After you have selected participants, the next steps are to conduct your tests and collect the data. Before doing any testing, however, there are a few important concerns that need to be addressed.

Address Ethical Concerns

First, you need to be sure that your testing procedures are ethical . Generally, you will need to gain permission to conduct any type of testing with human participants by submitting the details of your experiment to your school's Institutional Review Board (IRB), sometimes referred to as the Human Subjects Committee.

Obtain Informed Consent

After you have gained approval from your institution's IRB, you will need to present informed consent forms to each participant. This form offers information on the study, the data that will be gathered, and how the results will be used. The form also gives participants the option to withdraw from the study at any point in time.

Once this step has been completed, you can begin administering your testing procedures and collecting the data.

Analyze the Results

After collecting your data, it is time to analyze the results of your experiment. Researchers use statistics to determine if the results of the study support the original hypothesis and if the results are statistically significant.

Statistical significance means that the study's results are unlikely to have occurred simply by chance.

The types of statistical methods you use to analyze your data depend largely on the type of data that you collected. If you are using a random sample of a larger population, you will need to utilize inferential statistics.

These statistical methods make inferences about how the results relate to the population at large.

Because you are making inferences based on a sample, it has to be assumed that there will be a certain margin of error. This refers to the amount of error in your results. A large margin of error means that there will be less confidence in your results, while a small margin of error means that you are more confident that your results are an accurate reflection of what exists in that population.
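For a sample proportion, the familiar 95% margin-of-error formula makes the sample-size effect concrete; the proportion and sample sizes below are illustrative:

```python
from math import sqrt

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion:
    z * sqrt(p_hat * (1 - p_hat) / n)."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# The same observed proportion is far less certain with fewer people.
small_sample = margin_of_error(0.5, 100)    # about +/- 0.098
large_sample = margin_of_error(0.5, 1000)   # about +/- 0.031
```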

Share Your Results After Conducting an Experiment

Your final task in conducting an experiment is to communicate your results. By sharing your experiment with the scientific community, you are contributing to the knowledge base on that particular topic.

One of the most common ways to share research results is to publish the study in a peer-reviewed professional journal. Other methods include sharing results at conferences, in book chapters, or academic presentations.

In your case, it is likely that your class instructor will expect a formal write-up of your experiment in the same format required in a professional journal article or lab report:

  • Title page
  • Abstract
  • Introduction
  • Method
  • Results
  • Discussion
  • References
  • Tables and figures

What This Means For You

Designing and conducting a psychology experiment can be quite intimidating, but breaking the process down step-by-step can help. No matter what type of experiment you decide to perform, always check with your instructor and your school's institutional review board for permission before you begin.


By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


What Is Experimental Psychology?


The science of psychology spans several fields. There are dozens of disciplines in psychology, including abnormal psychology, cognitive psychology and social psychology.

One way to view these fields is to separate them into two types: applied vs. experimental psychology. These groups describe virtually any type of work in psychology.

The following sections explore what experimental psychology is and some examples of what it covers.

Experimental psychology seeks to explore and better understand behavior through empirical research methods. This work allows findings to be employed in real-world applications (applied psychology) across fields such as clinical psychology, educational psychology, forensic psychology, sports psychology, and social psychology. Experimental psychology sheds light on people's personalities and life experiences by examining the way people behave and how behavior is shaped throughout life, along with other theoretical questions. The field looks at a wide range of behavioral topics including sensation, perception, attention, memory, cognition, and emotion, according to the American Psychological Association (APA).

Research is the focus of experimental psychology. Using scientific methods to collect data and perform research, experimental psychology focuses on certain questions, and, one study at a time, reveals information that contributes to larger findings or a conclusion. Due to the breadth and depth of certain areas of study, researchers can spend their entire careers looking at a complex research question.

Experimental Psychology in Action

The APA writes about one experimental psychologist, Robert McCann, who is now retired after 19 years working at NASA. During his time at NASA, his work focused on the user experience — on land and in space — where he applied his expertise to cockpit system displays, navigation systems, and safety displays used by astronauts in NASA spacecraft. McCann's knowledge of human information processing allowed him to help NASA design shuttle displays that can increase the safety of shuttle missions. He looked at human limitations of attention and display processing to gauge what people can reliably see and correctly interpret on an instrument panel. McCann played a key role in helping determine which features cockpit displays could include without overloading the pilot or taxing their attention span.

“One of the purposes of the display was to alert the astronauts to the presence of a failure that interrupted power in a specific region,” McCann said. “The most obvious way to depict this interruption was to simply remove (or dim) the white line(s) connecting the affected components. Basic research on visual attention has shown that humans do not notice the removal of a display feature very easily when the display is highly cluttered. We are much better at noticing a feature or object that is suddenly added to a display.” McCann drew on his expertise in experimental psychology to research and refine this important display design for NASA.

Valve Corporation

Another experimental psychologist, Mike Ambinder, uses his expertise to help design video games. He is a senior experimental psychologist at Valve Corporation, a video game developer and developer of the software distribution platform Steam. Ambinder told  Orlando Weekly  that his career working on gaming hits such as Portal 2 and Left 4 Dead “epitomizes the intersection between scientific innovation and electronic entertainment.” His career started when he gave a presentation to Valve on applying psychology to game design; this occurred while he was finishing his PhD in experimental design. “I’m very lucky to have landed at a company where freedom and autonomy and analytical decision-making are prized,” he said. “I realized how fortunate I was to work for a company that would encourage someone with a background in psychology to see what they could contribute in a field where they had no prior experience.” 

Ambinder spends his time on data analysis, hardware research, play-testing methodologies, and on any aspect of games where knowledge of human behavior could be useful. Ambinder described Valve’s process for refining a product as straightforward. “We come up with a game design (our hypothesis), and we place it in front of people external to the company (our play-test or experiment). We gather their feedback, and then iterate and improve the design (refining the theory). It’s essentially the scientific method applied to game design, and the end result is the consequence of many hours of applying this process.” To gather play-test data, Ambinder is engaged in the newer field of biofeedback technology, which can quantify gamers’ enjoyment. His research looks at unobtrusive measurements of facial expressions that can achieve such goals. Ambinder is also examining eye-tracking as a next-generation input method.

Pursue Your Career Goals in Psychology

Develop a greater understanding of psychology concepts and applications with Concordia St. Paul’s  online bachelor’s in psychology . Enjoy small class sizes with a personal learning environment geared toward your success, and learn from knowledgeable faculty who have industry experience. 

What is an “experiment”?

Travis Dixon | October 7, 2017 | Research Methodology


If you’re reading this it’s probably because your teacher has assigned this as homework because you’ve called a study an “experiment” when it wasn’t an experiment at all. So this post is to help you know exactly when to use the term “experiment”, and when it’s safe just to say “study.”

But before we get to that, let’s first clarify why this is important knowledge. I think there are two reasons:

  • If you use the term experiment incorrectly in an exam it will suggest to the examiner that you have limited knowledge – this will affect your marks.
  • Research methods (and especially experiments) are the backbone of psychological research and so they’re a pretty important concept to understand.

Definition of experiment:

An experiment in psychology is a study that investigates the direct effect of an independent variable on a dependent variable.

Tip:  If you’re not sure if it’s an experiment, you’re always safe to call it a “study.”

Before you can call a study an experiment, you have to identify the independent variable. Ask yourself:

  • “Are there different groups in the study that the researchers are comparing?”

For example, in this study about serotonin’s effects on the brain we can see that there are two groups: one drinking a placebo and one drinking a mixture that reduces serotonin. So the IV is serotonin levels.

Experiments test  causal  relationships. 

If there are different groups and there is clearly an IV, you then need to ask yourself, “are the researchers studying the  effects  on a DV?”  In other words, is there a  causal  relationship between the IV and DV being examined?

If you look at the examples in the studies in this post , you’ll see that all of these studies are clearly investigating the effects of an IV on a DV:

  • Serotonin study: the effects of serotonin levels (IV) on prefrontal cortex activity (DV)
  • Rat experiment: the effects of testosterone (IV) on aggression (DV)
  • Watching TV (Bandura): the effects of observing violence (IV) on aggression (DV).

So if the study is testing a causal relationship between an IV and a DV, you my friend, have got yourself an experiment 🙂

Laboratory.

Laboratories are good places to conduct experiments because the conditions can be controlled so the IV can be isolated – this is essential when testing causal relationships.

So when is a study  not  an experiment?

A simple test would be to ask yourself:

  • Did the researchers create the groups/conditions?

If they did, then you’ve got an experiment.

  • Serotonin study: the researchers chose who drank which drink and when.
  • Rat study: the researchers chose which rats to castrate and which ones not to.
  • TV study: the researchers chose which kids watched the TV and which ones didn’t.

If the researchers didn’t create the groups you might be better to call it a “study” to be on the safe side.

In a true experiment it is the researchers who manipulate the independent variable (i.e. they create the groups/conditions in the experiment).

For example, in studies that compare cultures the researchers cannot create the groups because people are born into their existing cultures. These types of studies are most commonly correlational studies.

Another example is research on communication in relationships: the researchers compare the differences between couples with positive communication with those who have negative communication, but they didn’t create these groups – they occurred naturally. These are also correlational studies.

To conclude: if the researchers create the groups for comparison it’s an experiment. If not, you’re safer to call it a “study.”

Qualitative studies are  never  experiments, so be extra-careful when using this word in Paper 3.

But just to get tricky, there are some experiments where the independent variable is naturally occurring. These are called natural experiments (or quasi-experiments). So if you have a study with a naturally occurring variable, before you can call it an experiment you have to ask yourself:

  • Are they testing a causal relationship between the IV and the DV?

But the problem is in order to answer this question you need to know quite a bit about the methodology. This is because you have to know if they’ve controlled for confounding variables in either their design or their statistical analyses. If they’ve tried to control confounding variables and isolate the IV as the only variable affecting the DV, you can call it an experiment.

But here we’re getting pretty complicated, which is why it might be better to err on the side of caution with naturally occurring IVs and call them studies if you’re not sure whether they’re experimental or not (another way to check is to ask your teacher).

What makes an experiment “quasi?” (Read More)

I hope this post helps. If you need anything clarified or you have questions about a specific study, please feel free to post it in the comments.

Travis Dixon

Travis Dixon is an IB Psychology teacher, author, workshop leader, examiner and IA moderator.

Daniela Aidley Ph.D.

What Do Psychologists Mean When They Say "Experiment"?

Control groups and control conditions allow for vital comparisons.

Posted August 29, 2021 | Reviewed by Devon Frye

  • Control is one of the key features of an experiment.
  • This means using control groups or control conditions for comparison.
  • The quality of comparison matters—we can't just compare doing something with doing nothing.

What makes an experiment, an experiment?

In the first post of this blog, I mentioned that much of research methods is about trying to make sure we draw the right conclusions, while also trying very hard not to draw the wrong ones. The type of study particularly suited for this is the experiment. Contrary to popular belief, not every study is an experiment—in fact, in psychological research, the term "experiment" is narrowly defined as a study involving both randomisation and control. In this blog post, I want to explain what we mean by control and why it is such an important part of research.

Let’s assume a group of researchers wants to find out how to improve children’s working memory in the long term. In fact, we don’t need to assume, because that’s precisely what researchers Henry, Messer, and Nash wanted to find out in their 2013 study. In particular, they want to test whether adaptive "executive-loaded exercises" are effective in training children’s working memory. "Executive-loaded" means these are exercises that put a cognitive load on the executive function, i.e., the part of the brain that allocates cognitive resources and attention; adaptive means they adjust to the children’s skill and get easier or more difficult as the children progress.

Surely the easiest way for Henry et al. would have been to test the children’s working memory to establish a baseline for comparison, then train them with this set of exercises, then test their working memory again, right?

Simple Comparisons Don't Work

Let's assume for a moment that's what they did. In principle, there are three(ish) possible outcomes in such a situation: the second set of tests show that children perform better in the same working memory tests; they perform equally well; or they perform worse. Luckily for the researchers, the tests show an improvement. Would that allow Henry and colleagues to conclude that these exercises helped children to improve their working memory?

Sadly, no. As is the case with most of us, we continue to learn and improve our skills — and of course, this is particularly true for children. In other words, there is a distinct chance that the children in the study would have improved over time anyway, and the researchers would not be able to say with confidence that any improvement they might have seen is due to their training method.

What the researchers need, therefore, is a way of finding out whether the improvement is (only) due to the passage of time or, at least partially, to them training the children in this method. They need another group of children who don’t get trained in executive-loaded exercises. This helps establish whether any improvement they observe would have happened anyway, or whether it’s a consequence of the intervention (i.e., the training method). If both groups show roughly equivalent[1] improvement, it’s unlikely[2] that our method made any difference; however, if the comparison group does not improve, and ours does, we are slightly more justified in concluding that method X might have some merit.
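The logic of that comparison can be sketched in a few lines of Python. The numbers below are invented for illustration (they are not Henry et al.'s data); the point is that the quantity of interest is the intervention group's gain over and above the control group's gain.

```python
# Invented pre/post working-memory scores for illustration only.
intervention_pre  = [10, 11, 9, 12, 10]
intervention_post = [14, 15, 13, 15, 14]
control_pre  = [10, 10, 11, 9, 12]
control_post = [12, 12, 13, 11, 13]

def mean(xs):
    return sum(xs) / len(xs)

# Each group's improvement over time.
gain_intervention = mean(intervention_post) - mean(intervention_pre)
gain_control = mean(control_post) - mean(control_pre)

# Improvement beyond what the passage of time alone produces.
extra_gain = gain_intervention - gain_control
```

If `extra_gain` is near zero, both groups improved about equally and the training probably made no difference; a clearly positive value is (weak) evidence in the method's favour.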

We Need Control

In psychological research, such a comparison group is also called a control group as it’s essentially controlling for the passage of time and the change of skills, abilities, opinions, and experiences that go with it. We also refer to the two groups as conditions , as in “Group 1 experiences condition X, group 2 experiences condition Y.”

But simply having a control group isn’t enough. It’s also important how that control group is selected, and what the control group experiences. In the study by Henry et al., the intervention consisted of repeated in-person meetings with an experimenter. But it could also have consisted of children coming into the psychology department with their parents and spending some time with the researcher during training. Or perhaps the researcher(s) paid house visits to the children and their families.

In any of these cases, the children in our intervention group did receive some more attention and interaction from their parents and/or the researcher(s) than they usually would have—and more than the control group, if control just means doing nothing! You may have heard this referred to as the Hawthorne effect , after research at the eponymous production plant which found that workers’ productivity in a factory improved regardless of the actual intervention (e.g., more light, less light) and eventually concluded that it was the existence of the intervention and the resulting increase in attention that improved workers’ productivity.

… But Not Just Any Kind of Control

Whether the original study really showed such an effect is fiercely debated in today’s literature, but that the presence of an intervention alone can have an effect is fairly well established. That’s the reason why we tend to use what’s called “active controls,” that is, control groups or conditions that also get a comparable intervention or experience. And that's exactly what Henry, Messer, and Nash did: In their study, participants were allocated to either the intervention or an “active control”—a different memory training that was similar in time-commitment and involvement by the children.


Still, in some contexts, the Hawthorne effect may be very difficult to mitigate. In their article in the British Medical Journal, Sedgwick and Greenwood (2015) describe a study comparing patient-controlled and nurse-controlled administration of pain medication to patients with pain from traumatic injuries.

Which patients fare better: Those that are allowed to control dosage and administration of their pain medication, or those that have medication dosages set and administered by nurses? The answer may surprise you! … Or it probably won’t. Patients who have control over their own medication report better pain management and satisfaction.

But is this because pain management was objectively better and more effective, or because participants had a higher degree of control (and autonomy) over their treatment? Sedgwick and Greenwood conclude that both the patients and the nurses involved in the study may have been affected by the Hawthorne effect, and that even the “gold standard” of empirical research (double-blinding—more on that in an upcoming post) would likely not have made much of a difference.

Control Alone Isn't Enough

Even active control, however, is not enough for a study to be called an experiment. While control groups or control conditions allow us to exclude some potential influences and reasons for our observations, there are still too many potential distractions and disturbances that we need to account for. And, counterintuitively, one of those requirements relies on randomness. But that's a topic for the next post.

[1] The question of what constitutes "roughly equivalent" is in itself a complex question and is linked to concepts such as statistical significance – yet another topic for another post.

[2] Note that while unlikely, it's not impossible, and is also related to concepts such as significance.

Henry, L., Messer, D. J. and Nash, G. (2013). Testing for Near and Far Transfer Effects with a Short, Face-to-Face Adaptive Working Memory Training Intervention in Typical Children. Infant and Child Development, 23(1), pp. 84-103. doi: 10.1002/icd.1816

Sedgwick, P., & Greenwood, N. (2015). Understanding the Hawthorne effect. British Medical Journal, 351.

Daniela Aidley Ph.D.

Daniela Aidley, Ph.D., is a professor in business psychology at the West Coast Applied University, Heide, Germany, where she teaches psychology, diversity management, and research methods.


Experimental Methods In Psychology

March 7, 2021 – Paper 2: Psychology in Context | Research Methods

There are three experimental methods in the field of psychology: laboratory, field, and natural experiments. Each method holds different characteristics in relation to the manipulation of the IV, the control of EVs, and the ability to accurately replicate the study in exactly the same way.


Laboratory Experiments

  • A highly controlled, artificial setting
  • High control over the IV and EVs
  • Example: Loftus and Palmer’s study of leading questions

(+) High level of control: researchers can manipulate the IV and hold potential EVs constant, so a cause-and-effect relationship can be established and internal validity is high.

(+) Because of this high level of control, a lab experiment can be replicated in exactly the same way under exactly the same conditions. This means its reliability can be assessed (a reliable study will produce the same findings over and over again).

(-) Low ecological validity: a lab experiment takes place in an unnatural, artificial setting, so participants may behave in an unnatural manner and the experiment may not measure real-life behaviour.

(-) High chance of demand characteristics: the laboratory setting makes participants aware they are taking part in research, which may cause them to change their behaviour. For example, a participant in a memory experiment might deliberately remember less in one condition if they think that is what the experimenter expects, to avoid ruining the results. The results then reflect demand characteristics rather than just the independent variable.

Field Experiments

  • Real-life setting
  • The experimenter can control the IV but not the EVs (e.g. the weather)
  • Example: research on altruistic behaviour in which a stooge (actor) staged a collapse in a subway and researchers recorded how many passers-by stopped to help

(+) High ecological validity: because a field experiment takes place in a real-life setting, participants are unaware they are being watched and are more likely to act naturally, so their behaviour reflects real life.

(+) Less chance of demand characteristics: because the research consists of a real-life task in a natural environment, participants are unlikely to change their behaviour in response to perceived expectations; the results reflect the independent variable rather than demand characteristics.

(-) Low degree of control over variables: factors such as the weather (if the study takes place outdoors), noise levels, or temperature are more difficult to control outside the laboratory. Extraneous variables therefore have a greater chance of affecting participants’ behaviour, which reduces internal validity and makes a cause-and-effect relationship difficult to establish.

(-) Difficult to replicate: for example, the weather might change between studies and affect participants’ behaviour, reducing the chance of finding the same results time and time again and therefore the reliability of the experiment.

Natural Experiments

  • Real-life setting
  • The experimenter has no control over the EVs or the IV; the IV is naturally occurring
  • Example: examining changes in levels of aggression after the introduction of television. The introduction of TV is the naturally occurring IV, and the DV is the change in aggression (comparing aggression levels before and after TV was introduced)

The strengths of the natural experiment are the same as those of the field experiment:

(+) High ecological validity: the research takes place in a natural setting and therefore reflects real-life, natural behaviour.

(+) Low chance of demand characteristics: participants do not know they are taking part in a study, so they do not change their behaviour or act unnaturally; the experiment measures real-life, natural behaviour.

The weaknesses of the natural experiment are similar to those of the field experiment:

(-) Low control over variables: the researcher cannot control EVs and the IV is naturally occurring. A cause-and-effect relationship therefore cannot be established, and internal validity is low.

(-) Because there is no control over variables, a natural experiment cannot be replicated, so reliability is difficult to assess.

When conducting research, it is important to create an aim and a hypothesis; see the separate post on the formation of aims and hypotheses to learn more.


Research Methods In Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements predicting the results of a study, which can be verified or disproved by investigation.

There are four types of hypotheses:
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlational study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the Psychologist would accept the alternative hypothesis and reject the null.  The opposite applies if no difference is found.
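As an illustration of that decision, here is a hedged Python sketch using a simple permutation test (standard library only; the scores are invented). It estimates a p-value for the difference between two conditions and then accepts or rejects the null hypothesis at the conventional 0.05 level.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=2000, seed=0):
    """Two-tailed permutation test for a difference in means.

    Returns an approximate p-value: the proportion of random label
    shufflings that produce a mean difference at least as extreme
    as the one actually observed.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Invented scores for a control and an experimental condition.
control = [10, 12, 9, 11, 10, 13]
experimental = [15, 17, 14, 16, 18, 15]

p = permutation_test(control, experimental)
decision = "reject the null" if p < 0.05 else "retain the null"
```

Here the observed difference in means (5 points) is almost never matched by chance shufflings, so p falls well below 0.05, the null hypothesis is rejected, and the alternative hypothesis is accepted.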

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which their findings can be applied to the larger population of which their sample was a part.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants. Picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
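A few of these techniques can be sketched in Python. Everything below is illustrative: the participant IDs and the 70/30 split into employed/unemployed subgroups are made up.

```python
import random

rng = random.Random(42)
population = [f"P{i:03d}" for i in range(100)]  # hypothetical participant IDs

# Random sampling: every member of the target population has an
# equal chance of being selected (like picking names out of a hat).
random_sample = rng.sample(population, 10)

# Systematic sampling: pick every Nth person, where
# N = population size / required sample size.
n = len(population) // 10  # N = 10
systematic_sample = population[::n]

# Stratified sampling: select from each subgroup in proportion
# to its occurrence in the population (here, 70% vs 30%).
strata = {"employed": population[:70], "unemployed": population[70:]}
stratified_sample = [
    person
    for group in strata.values()
    for person in rng.sample(group, len(group) * 10 // len(population))
]
```

Note how the stratified sample keeps the 7-to-3 ratio of the subgroups, which is exactly what makes it representative of the population's structure.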

Experiments always have an independent and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.

Variables

Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

An extraneous variable can be a natural characteristic of the participant, such as intelligence, gender, or age, or a situational feature of the environment, such as lighting or noise.

Demand characteristics are a type of extraneous variable that arises when participants work out the aims of the research study and begin to behave accordingly.

For example, in Milgram’s research, critics argued that participants worked out that the shocks were not real and administered them because they thought this was what was required of them.

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. The most common way of deciding which participants go into which group is randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g., ability, sex, age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so exactly the same participants are in each group.
  • The main problem with the repeated measures design is that there may well be order effects: participants’ experiences during the experiment may change them in various ways.
  • They may perform better in the second condition because they have gained useful information about the experiment or the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment; it involves ensuring that each condition is equally likely to be used first and second by the participants.
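The counterbalancing idea can be sketched in a few lines of Python. This is a minimal illustration (not from the source) with hypothetical condition labels "A" and "B" standing in for any two-condition design:

```python
def counterbalance(participants):
    """Alternate the order of two conditions across participants so each
    order ("A then B", "B then A") is used equally often.
    Conditions "A" and "B" are hypothetical placeholders."""
    schedule = {}
    for i, person in enumerate(participants):
        schedule[person] = ("A", "B") if i % 2 == 0 else ("B", "A")
    return schedule

print(counterbalance(["p1", "p2", "p3", "p4"]))
```

With an even number of participants, each condition order is used by exactly half the sample, so any order effect is spread evenly across the two conditions rather than systematically favoring one.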

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

The researcher decides where the experiment will take place, at what time, with which participants, and in what circumstances, using a standardized procedure.

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

  • If an increase in one variable tends to be associated with an increase in the other, this is known as a positive correlation.
  • If an increase in one variable tends to be associated with a decrease in the other, this is known as a negative correlation.
  • A zero correlation occurs when there is no relationship between the variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score called a correlation coefficient. This is a value between -1 and +1, and the closer its absolute value is to 1, the stronger the relationship between the variables. The coefficient can be positive (e.g., 0.63) or negative (e.g., -0.63).
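As an illustration of how such a coefficient is computed, Spearman’s rho can be calculated with the standard rank-difference shortcut formula. This sketch (not from the source) assumes there are no tied scores:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation coefficient, assuming no tied scores.
    Uses the shortcut formula rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
    where d is the difference between each pair of ranks."""
    n = len(x)
    rank_x = {v: r for r, v in enumerate(sorted(x), start=1)}
    rank_y = {v: r for r, v in enumerate(sorted(y), start=1)}
    d_squared = sum((rank_x[a] - rank_y[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# A perfect positive relationship gives +1; a perfect negative gives -1.
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
print(spearman_rho([1, 2, 3, 4], [40, 30, 20, 10]))  # -1.0
```

Because the calculation works on ranks rather than raw scores, it only requires the relationship to be monotonic, not linear.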

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.

Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; participants can raise whatever topics they feel are relevant, and the interviewer can pose follow-up questions based on participants’ answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

The questionnaire’s other practical advantages are that it is cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher does not tell the participants they are being observed until after the study is complete. This method raises ethical problems of deception and informed consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed. Participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect: none of the participants can score well or complete the task, so all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period. In cohort studies, the participants must share a common factor or characteristic, such as age, demographic, or occupation.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, it is described as reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is agreement between two or more observers.

Meta-Analysis

Meta-analysis is a statistical procedure used to combine and synthesize findings from multiple independent studies to estimate the average effect size for a particular research question.

Meta-analysis goes beyond traditional narrative reviews by using statistical methods to integrate the results of several studies, leading to a more objective appraisal of the evidence.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

  • Strengths: increases the validity of the conclusions, as they are based on a wider range of evidence.
  • Weaknesses: research designs can vary between the included studies, so they may not be truly comparable.

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess the methods and design used, the originality and validity of the findings, and the article’s content, structure, and language.

Feedback from the reviewers determines whether the article is accepted. The article may be accepted as it is, accepted with revisions, sent back to the author to revise and resubmit, or rejected outright.

The editor makes the final decision whether to accept or reject the research report based on the reviewers’ comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer review may be an ideal; in practice, there are many problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online where everyone has a chance to offer opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity: does the test appear, ‘on the face of it’, to measure what it is supposed to measure? This is assessed by ‘eyeballing’ the measuring instrument or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimized so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we use p < 0.05, as it strikes a balance between making a Type I and a Type II error; p < 0.01 is used in research where errors could cause harm, such as trials introducing a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
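The decision rule described above can be sketched as a small Python helper (a hypothetical illustration, not from the source):

```python
ALPHA = 0.05  # conventional significance level in psychology

def decide(p_value, alpha=ALPHA):
    """Apply the significance decision rule: reject the null hypothesis
    only if the probability that chance produced the result is below alpha."""
    if p_value < alpha:
        return "reject null, accept alternative"
    return "accept null, reject alternative"

print(decide(0.03))              # significant at p < 0.05
print(decide(0.03, alpha=0.01))  # not significant at the stricter p < 0.01
```

The second call shows why the choice of alpha matters: the same result that counts as significant at p < 0.05 fails the stricter p < 0.01 threshold, which trades a lower Type I error risk for a higher Type II error risk.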

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, revealing the full aims may cause them to guess what the study is about and change their behavior.
  • To deal with this, we can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants fully understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • The right to withdraw can cause bias, as the participants who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full anonymity may not be achievable, as it is sometimes possible to work out who the participants were.

What is Experimental Psychology?

Bryn Farnsworth

The mind is a complicated place. Fortunately, the scientific method is perfectly equipped to deal with complexity. If we put these two things together we have the field of experimental psychology, broadly defined as the scientific study of the mind. The word “experimental” in this context means that tests are administered to participants, outcomes are measured, and comparisons are made.

More formally, this means that a group of participants are exposed to a stimulus (or stimuli), and their behavior in response is recorded. This behavior is compared to some kind of control condition, which could be either a neutral stimulus, the absence of a stimulus, or against a control group (who maybe do nothing at all).

Experimental psychology is concerned with testing theories of human thoughts, feelings, actions, and beyond – any aspect of being human that involves the mind. This is a broad category that features many branches within it (e.g. behavioral psychology , cognitive psychology). Below, we will go through a brief history of experimental psychology, the aspects that characterize it, and outline research that has gone on to shape this field.

A Brief History of Experimental Psychology

As with anything, and perhaps particularly with scientific ideas, it’s difficult to pinpoint the exact moment in which a thought or approach was conceived. One of the best candidates to credit with the emergence of experimental psychology is Gustav Fechner, who came to prominence in the 1830s. After completing his Ph.D. in biology at the University of Leipzig [1], and continuing his work as a professor, he made a significant breakthrough in the conception of mental states.

Scientists later wrote about Fechner’s breakthrough for understanding perception: “An increase in the intensity of a stimulus, Fechner argued, does not produce a one-to-one increase in the intensity of the sensation … For example, adding the sound of one bell to that of an already ringing bell produces a greater increase in sensation than adding one bell to 10 others already ringing. Therefore, the effects of stimulus intensities are not absolute but are relative to the amount of sensation that already exists.” [2]

portrait of Gustav Fechner

This ultimately meant that mental perception is responsive to the material world – the mind doesn’t passively respond to a stimulus (if that was the case, there would be a linear relationship between the intensity of a stimulus and the actual perception of it), but is instead dynamically responsive to it. This conception ultimately shapes much of experimental psychology, and the grounding theory: that the response of the brain to the environment can be quantified .

Fechner went on to research within this area for many subsequent years, testing new ideas regarding human perception. Meanwhile, another German scientist, working in Heidelberg to the west, began his work on the problem of multitasking and created the next paradigm shift for experimental psychology. The scientist was Wilhelm Wundt, who had followed the work of Gustav Fechner.

Wilhelm Wundt is often credited with being “the father of experimental psychology” and is the founding point for many aspects of it. He began the first experimental psychology lab, scientific journal, and ultimately formalized the approach as a science. Wundt set in stone what Fechner had put on paper.

The next scientist to advance the field of experimental psychology was influenced directly by reading Fechner’s book “ Elements of Psychophysics ”. Hermann Ebbinghaus, once again a German scientist, carried out the first properly formalized research into memory and forgetting, by using long lists of (mostly) nonsense syllables (such as: “VAW”, “TEL”, “BOC”) and recording how long it took for people to forget them.

Experiments using this list, concerning learning and memory, would take up much of Ebbinghaus’ career, and help cement experimental psychology as a science. There are many other scientists whose contributions helped pave the way for the direction, approach, and success of experimental psychology (Hermann von Helmholtz, Ernst Weber, and Mary Whiton Calkins, to name just a few) – all played a part in creating the field as we know it today. The work that they did defined the field, providing it with characteristics that we’ll now go through below.

What Defines Experimental Psychology?

Defining any scientific field is in itself no exact science – there are inevitably aspects that will be missed. However, experimental psychology features at least three central components that define it: empiricism, falsifiability, and determinism . These features are central to experimental psychology but also many other fields within science.

Empiricism refers to the collection of data that can support or refute a theory. In opposition to purely theoretical reasoning, empiricism is concerned with observations that can be tested. It is based on the idea that all knowledge stems from observations that can be perceived, and data surrounding them can be collected to form experiments.

Falsifiability is a foundational aspect of all contemporary scientific work. Karl Popper , a 20th century philosopher, formalized this concept – that for any theory to be scientific there must be a way to falsify it. Otherwise, ludicrous, but unprovable claims could be made with equal weight as the most rigorously tested theories.

The Theory of Relativity is scientific, for example, because it is possible that evidence could emerge to disprove it. This means that it can be tested. An example of an unfalsifiable argument is that the earth is younger than it appears, but that it was created to appear older than it is – any evidence against this is dismissed within the argument itself, rendering it impossible to falsify, and therefore untestable.

Determinism refers to the notion that any event has a cause before it. Applied to mental states, this means that the brain responds to stimuli, and that these responses can ultimately be predicted, given the correct data.

These aspects of experimental psychology run throughout the research carried out within this field. There are thousands of articles featuring research that have been carried out within this vein – below we will go through just a few of the most influential and well-cited studies that have shaped this field, and look to the future of experimental psychology.

Classic Studies in Experimental Psychology

Little Albert

One of the most notorious studies within experimental psychology was also one of the foundational pieces of research for behaviorism. Popularly known as the study of “Little Albert”, this experiment, carried out in 1920, focused on whether a baby could be made to fear a stimulus through conditioning (conditioning refers to the association of a response to a stimulus) [3].

The psychologist, John B. Watson, devised an experiment in which a baby was exposed to a neutral stimulus (in this case, a white rat) at the same time as an unconditioned fear-inducing stimulus (the loud, sudden sound of a hammer hitting a metal bar). The repetition of this loud noise paired with the appearance of the white rat eventually led to the rat becoming a conditioned stimulus – inducing the fear response even without the sound of the hammer.

While the study was clearly problematic, and wouldn’t (and shouldn’t!) clear any ethical boards today, it was hugely influential for its time, showing how human emotional responses can be shaped intentionally by conditioning – a feat only carried out with animals prior to this [4].

Watson, later referred to by a previous professor of his as a person “who thought too highly of himself and was more interested in his own ideas than in people” [5], was later revered and reviled in equal measure [2]. While his approach has since been rightly questioned, the study was a breakthrough for the conception of human behavior .

Asch’s Conformity Experiment

Three decades following Watson’s infamous experiment, beliefs were studied rather than behavior. Research carried out by Solomon Asch in 1951 showed how the influence of group pressure could make people say what they didn’t believe.

The goal was to examine how social pressures “induce individuals to resist or to yield to group pressures when the latter are perceived to be contrary to fact” [6]. Participants were introduced to a group of seven people in which, unbeknownst to them, all the other individuals were actors hired by Asch. The task was introduced as a perceptual test, in which the length of lines was to be compared.

Asch conformity study example lines

Sets of lines were shown to the group of participants – three on one card, one on another (as in the image above). The apparent task was to compare the three lines and say which was most like the single line in length. The answers were plainly obvious, and in one-on-one testing, participants got a correct answer over 99% of the time. Yet in this group setting, in which each actor, one after the other, named an incorrect line out loud, the answers of the participants would change.

On average, around 38% of the answers the participants gave were incorrect – a huge jump from the less than 1% reported in non-group settings. The study was hugely influential for showing how our actions can be impacted by the environment we are placed in, particularly when it comes to social factors.

The Invisible Gorilla

If you don’t know this research from the title already, then it’s best experienced by watching the original video and counting the number of ball passes.

The research of course has little to do with throwing a ball around, but more to do with the likelihood of not seeing the person in a gorilla costume who appears in the middle of the screen for eight seconds. The research, carried out in 1999, investigated how our attentional resources can impact how we perceive the world [7]. The term “ inattentional blindness ” refers to the effective blindness of our perceptions when our attention is engaged in another task.

The study tested how attentional processing is distributed, suggesting that objects that are more relevant to the task are more likely to be seen than objects which simply have close spatial proximity (very roughly – something expected is more likely to be seen even if it’s further away, whereas something unexpected is less likely to be seen even if it’s close).

The research not only showed the effect of our perceptions on our experience, but also has real-world implications. A replication of this study was done using eye tracking to record the visual search of radiologists who were instructed to look for nodules on one of several X-rays of lungs [8]. As the researchers state: “A gorilla, 48 times the size of the average nodule, was inserted in the last case that was presented. Eighty-three percent of the radiologists did not see the gorilla.”

The original study, and research that followed since, has been crucial for showing how our expectations about the environment can shape our perceptions. Modern research has built upon each of the ideas and studies that have been carried out across almost 200 years.


The Future of Experimental Psychology

The majority of this article has been concerned with what experimental psychology is, where it comes from, and what it has achieved so far. An inevitable follow-up question is – where is it going?

While predictions are difficult to make, there are at least indications. The best place to look is to experts in the field. Schultz and Schultz refer to modern psychology “as the science of behavior and mental processes instead of only behavior, a science seeking to explain overt behavior and its relationship to mental processes.” [2].

The Association for Psychological Science (APS) asked several prominent psychology researchers for forecasts, and received some of the following responses.


Lauri Nummenmaa (Assistant professor, Aalto University, Finland) predicts a similar path to Schultz and Schultz, stating that “a major aim of the future psychological science would involve re-establishing the link between the brain and behavior”. Modupe Akinola (Assistant professor, Columbia Business School) hopes “that advancements in technology will allow for more unobtrusive ways of measuring bodily responses”.

Kristen Lindquist (Assistant professor of psychology, University of North Carolina School of Medicine) centers in on emotional responses, saying that “We are just beginning to understand how a person’s expectations, knowledge, and prior experiences shape his or her emotions. Emotions play a role in every moment of waking life from decisions to memories to feelings, so understanding emotions will help us to understand the mind more generally.”

Tal Yarkoni (Director, Psychoinformatics Lab, University of Texas at Austin) provides a forthright assessment of what the future of experimental psychology has in store: “psychological scientists will have better data, better tools, and more reliable methods of aggregation and evaluation”.

Whatever the future of experimental psychology looks like, we at iMotions aim to keep providing all the tools needed to carry out rigorous experimental psychology research.

I hope you’ve enjoyed reading this introduction to experimental psychology. If you’d like an even closer look at the background and research within this field, iMotions offers a free guide to human behavior.


[1] Shiraev, E. (2015). A history of psychology . Thousand Oaks, CA: SAGE Publications.

[2] Schultz, D. P., & Schultz, S. E. (2011). A History of Modern Psychology . Cengage, Canada.

[3] Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1–14. doi:10.1037/h0069608.

[4] Pavlov, I. P. (1928). Lectures on conditioned reflexes . (Translated by W.H. Gantt) London: Allen and Unwin.

[5] Brewer, C. L. (1991). Perspectives on John B. Watson . In G. A. Kimble, M. Wertheimer, & C. White (Eds.), Portraits of pioneers in psychology (pp. 171–186). Washington, DC: American Psychological Association.

[6] Asch, S. E. (1951). Effects of group pressure on the modification and distortion of judgments. In H. Guetzkow (Ed.), Groups, leadership and men (pp. 177–190). Pittsburgh, PA: Carnegie Press.

[7] Simons, D. and Chabris, C. (1999). Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception , 28(9), pp.1059-1074.

[8] Drew, T., Võ, M. L-H., Wolfe, J. M. (2013). The invisible gorilla strikes again: sustained inattentional blindness in expert observers. Psychological Science, 24 (9):1848–1853. doi: 10.1177/0956797613479386.



