
Science News

Lies, damned lies and psychology experiments.

Researchers may deceive themselves when they mislead study participants


By Bruce Bower

October 22, 2010 at 2:09 pm

BASEL, Switzerland — As dusk settled over this charming city by the Rhine in early October, psychologist Ralph Hertwig sipped scotch in his office with a visiting journalist and bemoaned the toxic — and for some researchers, intoxicating — effects of telling lies to gather data and get published.

Hertwig’s theme: Inauthentic experimenters and the research subjects who follow their lead. His case in point: A study in the May Psychological Science reporting that people who wear discount, mock designer sunglasses feel phony as a result and become more likely to cheat and to judge others as unethical.

With apologies to Jerry Lee Lewis, there was a whole lotta fakin’ going on in this investigation. Half of female participants in one trial completed a bogus questionnaire and were told that their answers reflected a preference for counterfeit products. They were then instructed to take a pair of sunglasses from a box marked “Counterfeit Sunglasses” — which actually contained expensive designer shades — and wear them while walking outside the lab for five minutes and then while working on lab tasks that paid money for correct responses.

Volunteers recorded their responses on a work sheet, after having been promised anonymity by the experimenters. But numbers on work sheets were used to identify each responder so that her actual and self-reported performance could be compared.

And behold—relative to women who hadn’t been misled about favoring faux stuff, tricked participants claimed to have made more correct responses than they actually did. In another experiment, misled women frequently described others as unethical and devious.

Hertwig rubbed his eyes wearily. “It’s just as likely that the experimenters’ own behavior encouraged the dishonest behavior that they observed,” he said.

Participants in the counterfeit condition could have read the situation as one in which normal standards of behavior didn’t apply because the researchers approved of designer knock-offs, Hertwig explained. Each woman saw that the experimenter had somehow acquired fake designer gear and displayed it openly. What’s more, the experimenter claimed special insights into people’s likings for counterfeit products, told volunteers to wear the glasses in public and had them evaluate positive statements about the glasses.

Sociologists’ “broken-windows” theory posits that signs of disorder and petty criminal behavior cause such acts to spread in communities. If that’s the case, Hertwig noted, the counterfeit-sunglasses scientists metaphorically “broke their lab’s window and cried foul when participants sprayed graffiti on the wall.”

And assuming volunteers were debriefed after the experiment, as required by the American Psychological Association’s rules of conduct, one shouldn’t expect them to trust any future researchers’ pledges of anonymity.

Ironically, psychologists’ blindness to these issues could stem from a counterfeit-sunglasses effect. “Deceptive research practices may induce a sense of self-alienation and lack of authenticity among experimenters that interferes with analyzing the signals that the experimental situation conveys to participants,” Hertwig mused.

Some of the most famous psychology experiments of the past 60 years have hinged on trickery, despite longstanding ethical and practical concerns about fooling people in the name of science (SN: 6/20/98, p. 394).

Deceptive psychology’s heyday occurred in the 1960s and 1970s. Literature searches conducted by Hertwig and economist Andreas Ortmann of the University of New South Wales in Sydney, Australia, indicate that experimenters still mislead volunteers in between one-third and one-half of studies published in major social psychology journals.

Hertwig doesn’t want to ban deceptive research practices. He’d settle for researchers taking off their rose-colored, counterfeit sunglasses and scrutinizing how their devious methods may shape volunteers’ responses.

In other words, let the liar beware.


The 25 Most Influential Psychological Experiments in History

While each year thousands and thousands of studies are completed in the many specialty areas of psychology, there are a handful that, over the years, have had a lasting impact in the psychological community as a whole. Some of these were dutifully conducted, keeping within the confines of ethical and practical guidelines. Others pushed the boundaries of human behavior during their psychological experiments and created controversies that still linger to this day. And still others were not designed to be true psychological experiments, but ended up as beacons to the psychological community in proving or disproving theories.

This is a list of the 25 most influential psychological experiments still being taught to psychology students of today.

1. A Class Divided

Study conducted by: Jane Elliott

Study Conducted in 1968 in an Iowa classroom


Experiment Details: Jane Elliott’s famous experiment was inspired by the assassination of Dr. Martin Luther King Jr. and the inspirational life that he led. The third-grade teacher developed an exercise, or better yet a psychological experiment, to help her Caucasian students understand the effects of racism and prejudice.

Elliott divided her class into two separate groups: blue-eyed students and brown-eyed students. On the first day, she labeled the blue-eyed group as the superior group and from that point forward they had extra privileges, leaving the brown-eyed children to represent the minority group. She discouraged the groups from interacting and singled out individual students to stress the negative characteristics of the children in the minority group. What this exercise showed was that the children’s behavior changed almost instantaneously. The group of blue-eyed students performed better academically and even began bullying their brown-eyed classmates. The brown-eyed group experienced lower self-confidence and worse academic performance. The next day, she reversed the roles of the two groups and the blue-eyed students became the minority group.

At the end of the experiment, the children were so relieved that they were reported to have embraced one another and agreed that people should not be judged based on outward appearances. This exercise has since been repeated many times with similar outcomes.


2. Asch Conformity Study

Study conducted by: Dr. Solomon Asch

Study Conducted in 1951 at Swarthmore College


Experiment Details: Dr. Solomon Asch conducted a groundbreaking study that was designed to evaluate a person’s likelihood to conform to a standard when there is pressure to do so.

A group of participants were shown pictures with lines of various lengths and were then asked a simple question: Which line is longest? The tricky part of this study was that in each group only one person was a true participant; the others were actors with a script, most of whom were instructed to give the wrong answer. Strikingly, the true participant frequently agreed with the majority, even though they knew they were giving the wrong answer.

The results of this study are important when we study social interactions among individuals in groups. This study is a famous example of the temptation many of us experience to conform to a standard during group situations and it showed that people often care more about being the same as others than they do about being right. It is still recognized as one of the most influential psychological experiments for understanding human behavior.

3. Bobo Doll Experiment

Study conducted by: Dr. Albert Bandura

Study Conducted between 1961 and 1963 at Stanford University


Experiment Details: In his groundbreaking study, Bandura separated participants into three groups:

  • one was exposed to a video of an adult showing aggressive behavior towards a Bobo doll
  • another was exposed to video of a passive adult playing with the Bobo doll
  • the third formed a control group

Children watched their assigned video and then were sent to a room containing the same doll they had seen in the video (with the exception of the control group). What the researcher found was that children exposed to the aggressive model were more likely to exhibit aggressive behavior toward the doll themselves, while the other groups showed little imitative aggressive behavior. Among children exposed to the aggressive model, the average number of imitative physical aggressions was 38.2 for boys and 12.7 for girls.

The study also showed that boys exhibited more aggression when exposed to aggressive male models than boys exposed to aggressive female models. When exposed to aggressive male models, the number of aggressive instances exhibited by boys averaged 104. This is compared to 48.4 aggressive instances exhibited by boys who were exposed to aggressive female models.

The results for girls showed a similar pattern, though less pronounced. When exposed to aggressive female models, girls exhibited an average of 57.7 aggressive instances, compared to 36.3 for girls exposed to aggressive male models. These gender differences strongly supported Bandura’s secondary prediction that children are more strongly influenced by same-sex models. The Bobo Doll Experiment introduced a groundbreaking way to study human behavior and its influences.

4. Car Crash Experiment

Study conducted by: Elizabeth Loftus and John Palmer

Study Conducted in 1974 at the University of California, Irvine


Experiment Details: The participants watched slides of a car accident and were asked to describe what had happened as if they were eyewitnesses to the scene. They were split into two groups, and each group was questioned with different wording, such as “How fast was the car driving at the time of impact?” versus “How fast was the car going when it smashed into the other car?” The experimenters found that the choice of verb affected the participants’ memories of the accident, showing that memory can be easily distorted.

This research suggests that memory can be easily manipulated by questioning technique. This means that information gathered after the event can merge with original memory causing incorrect recall or reconstructive memory. The addition of false details to a memory of an event is now referred to as confabulation. This concept has very important implications for the questions used in police interviews of eyewitnesses.

5. Cognitive Dissonance Experiment

Study conducted by: Leon Festinger and James Carlsmith

Study Conducted in 1957 at Stanford University

Experiment Details: The concept of cognitive dissonance refers to a situation involving conflicting attitudes, beliefs or behaviors.

This conflict produces an inherent feeling of discomfort, leading to a change in one of the attitudes, beliefs or behaviors to minimize or eliminate the discomfort and restore balance.

Cognitive dissonance was first investigated by Leon Festinger, after an observational study of a cult that believed that the earth was going to be destroyed by a flood. Out of this study was born an intriguing experiment conducted by Festinger and Carlsmith in which participants were asked to perform a series of dull tasks (such as turning pegs in a pegboard for an hour). Participants’ initial attitudes toward this task were highly negative.

They were then paid either $1 or $20 to tell a participant waiting in the lobby that the tasks were really interesting. Almost all of the participants agreed to walk into the waiting room and persuade the next participant that the boring experiment would be fun. When the participants were later asked to evaluate the experiment, the participants who were paid only $1 rated the tedious task as more fun and enjoyable than the participants who were paid $20 to lie.

Being paid only $1 was not sufficient incentive for lying, so those who were paid $1 experienced dissonance. They could overcome that dissonance only by coming to believe that the tasks really were interesting and enjoyable. Being paid $20 provided an obvious external reason for turning pegs, and there was therefore no dissonance.

6. Fantz’s Looking Chamber

Study conducted by: Robert L. Fantz

Study Conducted in 1961 at the University of Illinois

Experiment Details: The study conducted by Robert L. Fantz is among the simplest, yet most important, in the field of infant development and vision. In 1961, when this experiment was conducted, there were very few ways to study what was going on in the mind of an infant. Fantz realized that the best way was to simply watch the actions and reactions of infants. He understood a fundamental fact: if there is something of interest near humans, they generally look at it.

To test this concept, Fantz set up a display board with two pictures attached. On one was a bull’s-eye; on the other was a sketch of a human face. The board was hung in a chamber where a baby could lie safely underneath and see both images. Then, from behind the board, invisible to the baby, he peeked through a hole to watch what the baby looked at. The study showed that a two-month-old baby looked twice as much at the human face as at the bull’s-eye. This suggests that human babies have some powers of pattern and form selection. Before this experiment it was thought that babies looked out onto a chaotic world of which they could make little sense.

7. Hawthorne Effect

Study conducted by: Henry A. Landsberger

Study Conducted in 1955 at Hawthorne Works in Chicago, Illinois


Experiment Details: Landsberger performed the study by analyzing data from experiments conducted between 1924 and 1932 by Elton Mayo at the Hawthorne Works near Chicago. The company had commissioned studies to evaluate whether the level of light in a building changed the productivity of the workers. What Mayo found was that the level of light made no difference in productivity: the workers increased their output whenever the amount of light was switched from a low level to a high level, or vice versa.

The researchers noticed a tendency that the workers’ level of efficiency increased when any variable was manipulated. The study showed that the output changed simply because the workers were aware that they were under observation. The conclusion was that the workers felt important because they were pleased to be singled out. They increased productivity as a result. Being singled out was the factor dictating increased productivity, not the changing lighting levels, or any of the other factors that they experimented upon.

The Hawthorne Effect has become one of the hardest inbuilt biases to eliminate or factor into the design of any experiment in psychology and beyond.

8. Kitty Genovese Case

Study conducted by: New York Police Force

Study Conducted in 1964 in New York City

Experiment Details: The murder case of Kitty Genovese was never intended to be a psychological experiment; however, it ended up having serious implications for the field.

According to a New York Times article, almost 40 neighbors witnessed Kitty Genovese being savagely attacked and murdered in Queens, New York in 1964. Not one neighbor called the police for help. Some reports state that the attacker briefly left the scene and later returned to “finish off” his victim. It was later uncovered that many of these facts were exaggerated. (There were more likely only a dozen witnesses and records show that some calls to police were made).

What this case later became famous for is the “Bystander Effect,” which states that the more bystanders present in a social situation, the less likely it is that any one of them will step in and help. This effect has led to changes in medicine, psychology and many other areas. One famous example is the way CPR is taught: all students in CPR courses learn that they must assign one specific bystander the job of alerting authorities, which minimizes the chance that no one calls for assistance.

9. Learned Helplessness Experiment

Study conducted by: Martin Seligman

Study Conducted in 1967 at the University of Pennsylvania


Experiment Details: Seligman’s experiment involved the ringing of a bell followed by the administration of a light shock to a dog. After a number of pairings, the dog reacted to the shock even before it was delivered: as soon as the dog heard the bell, it reacted as though it had already been shocked.

During the course of this study something unexpected happened. Each dog was placed in a large crate divided down the middle by a low fence, which the dog could see and jump over easily. The floor on one side of the fence was electrified; the other side was not. Seligman placed each dog on the electrified side and administered a light shock, expecting the dog to jump to the non-shocking side. In an unexpected turn, the dogs simply lay down.

The hypothesis was that the dogs had learned from the first part of the experiment that there was nothing they could do to avoid the shocks, so they gave up in the second part. To test this hypothesis, the experimenters brought in a new set of animals and found that dogs with no history in the experiment would jump over the fence.

This condition was described as learned helplessness. A human or animal does not attempt to get out of a negative situation because the past has taught them that they are helpless.

10. Little Albert Experiment

Study conducted by: John B. Watson and Rosalie Rayner

Study Conducted in 1920 at Johns Hopkins University


Experiment Details: The experiment began by placing a white rat in front of the infant, who initially had no fear of the animal. Watson then produced a loud sound by striking a steel bar with a hammer every time little Albert was presented with the rat. After several pairings of the noise and the rat, the boy began to cry and exhibit signs of fear every time the rat appeared in the room. Watson also created similar conditioned reflexes with other common animals and objects (rabbits, a Santa Claus beard, etc.) until Albert feared them all.

This study demonstrated that classical conditioning works on humans. One of its most important implications is that adult fears are often connected to early childhood experiences.

11. Magical Number Seven

Study conducted by: George A. Miller

Study Conducted in 1956 at Princeton University

Experiment Details: Frequently referred to as “Miller’s Law,” the Magical Number Seven experiment purports that the number of objects an average human can hold in working memory is 7 ± 2. This means that human memory capacity typically includes strings of words or concepts ranging from five to nine items. This work on the limits of information-processing capacity became one of the most highly cited papers in psychology.

The Magical Number Seven Experiment was published in 1956 in Psychological Review by cognitive psychologist George A. Miller of Princeton University’s Department of Psychology. In the article, Miller discussed a concurrence between the limits of one-dimensional absolute judgment and the limits of short-term memory.

In a one-dimensional absolute-judgment task, a person is presented with a number of stimuli that vary on one dimension (such as 10 different tones varying only in pitch). The person responds to each stimulus with a corresponding response (learned before).

Performance is almost perfect with up to five or six different stimuli but declines as the number of different stimuli increases. This means that a human’s maximum performance on one-dimensional absolute judgment can be described as an information store with a maximum capacity of approximately 2 to 3 bits of information, which corresponds to the ability to distinguish between four and eight alternatives.
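
The bits-to-alternatives arithmetic above follows from basic information theory: identifying one of n equally likely alternatives requires log2(n) bits. A minimal sketch (an illustration, not code from Miller's paper) makes the correspondence explicit:

```python
import math

# Bits needed to single out one of n equally likely alternatives.
# A capacity of 2 to 3 bits thus corresponds to distinguishing
# roughly four to eight different stimuli, as described above.
def bits_for_alternatives(n: int) -> float:
    """Information, in bits, required to identify one of n alternatives."""
    return math.log2(n)

print(bits_for_alternatives(4))  # 2.0 bits -> four alternatives
print(bits_for_alternatives(8))  # 3.0 bits -> eight alternatives
```

Running the function at n = 4 and n = 8 recovers the 2- and 3-bit endpoints of the capacity range Miller reported.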

12. Pavlov’s Dog Experiment

Study conducted by: Ivan Pavlov

Study Conducted in the 1890s at the Military Medical Academy in St. Petersburg, Russia


Experiment Details: Pavlov began with the simple idea that there are some things a dog does not need to learn. He observed that dogs do not learn to salivate when they see food; this reflex is “hard-wired” into the dog. It is an unconditioned response (a stimulus-response connection that requires no learning).

Pavlov outlined that there are unconditioned responses in the animal by presenting a dog with a bowl of food and then measuring its salivary secretions. In the experiment, Pavlov used a bell as his neutral stimulus. Whenever he gave food to his dogs, he also rang a bell. After a number of repeats of this procedure, he tried the bell on its own. What he found was that the bell on its own now caused an increase in salivation. The dog had learned to associate the bell and the food. This learning created a new behavior. The dog salivated when he heard the bell. Because this response was learned (or conditioned), it is called a conditioned response. The neutral stimulus has become a conditioned stimulus.

This theory came to be known as classical conditioning.

13. Robbers Cave Experiment

Study conducted by: Muzafer and Carolyn Sherif

Study Conducted in 1954 at the University of Oklahoma

Experiment Details: This experiment, which studied group conflict, is considered by most to be outside the lines of what is considered ethically sound.

In 1954 researchers at the University of Oklahoma assigned 22 eleven- and twelve-year-old boys from similar backgrounds into two groups. The two groups were taken to separate areas of a summer camp facility where they were able to bond as social units. The groups were housed in separate cabins, and neither group knew of the other’s existence for an entire week. The boys bonded with their cabin mates during that time.

Once the two groups were allowed to have contact, they showed definite signs of prejudice and hostility toward each other, even though they had been given only a very short time to develop their social group. To increase the conflict between the groups, the experimenters had them compete against each other in a series of activities. This created even more hostility, and eventually the groups refused to eat in the same room.

The final phase of the experiment involved turning the rival groups into friends. The fun activities the experimenters had planned, like shooting firecrackers and watching movies, did not initially work, so they created teamwork exercises in which the two groups were forced to collaborate. At the end of the experiment, the boys decided to ride the same bus home, demonstrating that conflict can be resolved and prejudice overcome through cooperation.

Many critics have compared this study to William Golding’s novel Lord of the Flies as a classic example of prejudice and conflict resolution.

14. Ross’ False Consensus Effect Study

Study conducted by: Lee Ross

Study Conducted in 1977 at Stanford University

Experiment Details: In 1977, a social psychology professor at Stanford University named Lee Ross conducted an experiment that, in lay terms, focuses on how people can incorrectly conclude that others think the same way they do, or form a “false consensus” about the beliefs and preferences of others. Ross conducted the study in order to outline how the “false consensus effect” functions in humans.


In the first part of the study, participants were asked to read about situations in which a conflict occurred and then were told two alternative ways of responding to the situation. They were asked to do three things:

  • Guess which option other people would choose
  • Say which option they themselves would choose
  • Describe the attributes of the person who would likely choose each of the two options

What the study showed was that most of the subjects believed other people would make the same choice as them, regardless of which of the two responses they actually chose. This phenomenon is referred to as the false consensus effect: an individual assumes that other people think the same way they do when they may not. The second observation from this important study is that when participants were asked to describe the attributes of people who would likely make the choice opposite to their own, they made bold and sometimes negative predictions about the personalities of those who did not share their choice.

15. The Schachter and Singer Experiment on Emotion

Study conducted by: Stanley Schachter and Jerome E. Singer

Study Conducted in 1962 at Columbia University

Experiment Details: In 1962 Schachter and Singer conducted a groundbreaking experiment to test their theory of emotion.

In the study, a group of 184 male participants were injected with epinephrine, a hormone that induces arousal, including increased heartbeat, trembling and rapid breathing. The participants were told that they were being injected with a new medication to test their eyesight. The first group was informed of the possible side effects that the injection might cause, while the second group was not. The participants were then placed in a room with someone they thought was another participant but who was actually a confederate in the experiment. The confederate acted in one of two ways: euphoric or angry. Participants who had not been informed about the effects of the injection were more likely to feel either happier or angrier than those who had been informed.

What Schachter and Singer were trying to understand was the ways in which cognition or thoughts influence human emotion. Their study illustrates the importance of how people interpret their physiological states, which form an important component of their emotions. Though their cognitive theory of emotional arousal dominated the field for two decades, it has been criticized for two main reasons: the size of the effect seen in the experiment was not that significant, and other researchers had difficulty repeating the experiment.

16. Selective Attention / Invisible Gorilla Experiment

Study conducted by: Daniel Simons and Christopher Chabris

Study Conducted in 1999 at Harvard University

Experiment Details: In 1999 Simons and Chabris conducted their famous awareness test at Harvard University.

Participants in the study were asked to watch a video and count how many passes occurred between the basketball players on the white team. The video moved at a moderate pace, and keeping track of the passes was a relatively easy task. What most people failed to notice amidst their counting was that in the middle of the test, a man in a gorilla suit walked onto the court and stood in the center before walking off-screen.

The study found that the majority of subjects did not notice the gorilla at all, suggesting that humans often overestimate their ability to multitask effectively. What the study set out to demonstrate is that when people are asked to attend to one task, they focus so strongly on that element that they may miss other important details.

17. Stanford Prison Study

Study conducted by: Philip Zimbardo

Study Conducted in 1971 at Stanford University


Experiment Details: The Stanford Prison Experiment was designed to study the behavior of “normal” individuals when assigned the role of prisoner or guard. College students were recruited to participate and were assigned roles of “guard” or “inmate,” with Zimbardo himself playing the role of the warden. The basement of the psychology building served as the prison, and great care was taken to make it look and feel as realistic as possible.

The prison guards were told to run the prison for two weeks and not to physically harm any of the inmates during the study. After a few days, however, the guards became very abusive verbally toward the inmates, and many of the prisoners became submissive to those in authority roles. The Stanford Prison Experiment had to be cut short because some of the participants displayed troubling signs of breaking down mentally.

Although the experiment was conducted very unethically, many psychologists believe that the findings showed how much human behavior is situational. People will conform to certain roles if the conditions are right. The Stanford Prison Experiment remains one of the most famous psychology experiments of all time.

18. Stanley Milgram Experiment

Study conducted by: Stanley Milgram

Study Conducted in 1961 at Yale University

Experiment Details: This 1961 study was conducted by Yale University psychologist Stanley Milgram. It was designed to measure people’s willingness to obey authority figures when instructed to perform acts that conflicted with their morals. The study was based on the premise that humans will inherently take direction from authority figures from very early in life.

Participants were told they were participating in a study on memory. They were asked to watch another person (an actor) do a memory test. They were instructed to press a button that gave an electric shock each time the person got a wrong answer. (The actor did not actually receive the shocks, but pretended they did).

Participants were told to play the role of “teacher” and administer electric shocks to “the learner” every time they answered a question incorrectly. The experimenters asked the participants to keep increasing the shocks, and most of them obeyed even though the individual completing the memory test appeared to be in great pain and protested. Despite these protests, many participants continued the experiment when the authority figure urged them to, increasing the voltage after each wrong answer until some eventually administered what would have been lethal electric shocks.

This experiment showed that humans are conditioned to obey authority and will usually do so even if it goes against their natural morals or common sense.

19. Surrogate Mother Experiment

Study conducted by: Harry Harlow

Study Conducted from 1957 to 1963 at the University of Wisconsin

Experiment Details: In a series of controversial experiments during the late 1950s and early 1960s, Harry Harlow studied the importance of a mother’s love for healthy childhood development.

In order to do this, he separated infant rhesus monkeys from their mothers a few hours after birth and left them to be raised by two “surrogate mothers.” One of the surrogates was made of wire with an attached bottle for food; the other was made of soft terrycloth but lacked food. The researcher found that the baby monkeys spent much more time with the cloth mother than the wire mother, suggesting that affection plays a greater role than sustenance in childhood development. They also found that the monkeys that spent more time cuddling the soft mother grew up to be healthier.

This experiment showed that love, as demonstrated by physical body contact, is a more important aspect of the parent-child bond than the provision of basic needs. These findings also had implications in the attachment between fathers and their infants when the mother is the source of nourishment.

20. The Good Samaritan Experiment

Study conducted by: John Darley and Daniel Batson

Study Conducted in 1973 at The Princeton Theological Seminary (Researchers were from Princeton University)

Experiment Details: In 1973, John Darley and Daniel Batson created an experiment to investigate the potential causes that underlie altruistic behavior. The researchers set out three hypotheses they wanted to test:

  • People thinking about religion and higher principles would be no more inclined to show helping behavior than those not thinking about such principles.
  • People in a rush would be much less likely to show helping behavior.
  • People who are religious for personal gain would be less likely to help than people who are religious because they want to gain some spiritual and personal insights into the meaning of life.

Student participants were given some religious teaching and instruction. They were then told to travel from one building to the next. Between the two buildings was a man lying injured and appearing to be in dire need of assistance. The first variable being tested was the degree of urgency impressed upon the subjects, with some being told not to rush and others being informed that speed was of the essence.

The results of the experiment were intriguing, with the haste of the subject proving to be the overriding factor. When the subject was in no hurry, nearly two-thirds of people stopped to lend assistance. When the subject was in a rush, this dropped to one in ten.

People who were on the way to deliver a speech about helping others were nearly twice as likely to help as those delivering other sermons. This showed that the thoughts of the individual were a factor in determining helping behavior. Religious beliefs did not appear to make much difference in the results: being religious for personal gain, or as part of a spiritual quest, did not appear to have much impact on the amount of helping behavior shown.

21. The Halo Effect Experiment

Study conducted by: Richard E. Nisbett and Timothy DeCamp Wilson

Study Conducted in 1977 at the University of Michigan

Experiment Details: The Halo Effect states that people generally assume that people who are physically attractive are more likely to:

  • be intelligent
  • be friendly
  • display good judgment

To test this, Nisbett and Wilson designed a study to demonstrate that people have little awareness of the nature of the Halo Effect. They’re not aware that it influences:

  • their personal judgments
  • the production of a more complex social behavior

In the experiment, college students served as the research participants. They were asked to evaluate a psychology instructor after viewing him in a videotaped interview. The students were randomly assigned to one of two groups, and each group was shown a different interview with the same instructor, a native French-speaking Belgian who spoke English with a noticeable accent. In the first video, the instructor presented himself as someone:

  • respectful of his students’ intelligence and motives
  • flexible in his approach to teaching
  • enthusiastic about his subject matter

In the second interview, he presented himself as much more unlikable. He was cold and distrustful toward the students and was quite rigid in his teaching style.

After watching the videos, the subjects were asked to rate the lecturer on his physical appearance, mannerisms, and accent, even though his mannerisms and accent were kept the same in both versions of the video. The subjects rated the professor on an 8-point scale ranging from “like extremely” to “dislike extremely.” Subjects were also told that the researchers were interested in knowing “how much their liking for the teacher influenced the ratings they just made.” Other subjects were asked to identify how much the characteristics they just rated influenced their liking of the teacher.

After responding to the questionnaire, the respondents were puzzled about their reactions to the videotapes and to the questionnaire items. The students had no idea why they gave one lecturer higher ratings. Most said that how much they liked the lecturer had not affected their evaluation of his individual characteristics at all.

The interesting thing about this study is that people can understand the phenomenon, yet remain unaware when it is influencing them. Without realizing it, they let their overall impression of a person color their judgments of that person’s individual traits. Even when this is pointed out, they may still deny that the halo effect is at work.

22. The Marshmallow Test

Study conducted by: Walter Mischel

Study Conducted in 1972 at Stanford University

Experiment Details: In his 1972 Marshmallow Experiment, children ages four to six were taken into a room where a marshmallow was placed in front of them on a table. Before leaving each of the children alone in the room, the experimenter informed them that they would receive a second marshmallow if the first one was still on the table when he returned 15 minutes later. The examiner recorded how long each child resisted eating the marshmallow; researchers later examined whether that delay correlated with the child’s success in adulthood. A small number of the 600 children ate the marshmallow immediately, and about one-third delayed gratification long enough to receive the second marshmallow.

In follow-up studies, Mischel found that those who deferred gratification were significantly more competent and received higher SAT scores than their peers. This characteristic likely remains with a person for life. While this study seems simplistic, the findings outline some of the foundational differences in individual traits that can predict success.

23. The Monster Study

Study conducted by: Wendell Johnson

Study Conducted in 1939 at the University of Iowa

Experiment Details: The Monster Study received this negative title due to the unethical methods that were used to determine the effects of positive and negative speech therapy on children.

Wendell Johnson of the University of Iowa selected 22 orphaned children, some with stutters and some without, and divided them into two groups. The group of children with stutters was placed in positive speech therapy, where they were praised for their fluency. The non-stutterers were placed in negative speech therapy, where they were disparaged for every speech imperfection.

As a result of the experiment, some of the children who received negative speech therapy suffered lasting psychological effects and retained speech problems for the rest of their lives. The study stands as a stark example of the significance of positive reinforcement in education.

The initial goal of the study was to investigate positive and negative speech therapy. However, the implication spanned much further into methods of teaching for young children.

24. Violinist at the Metro Experiment

Study conducted by: Staff at the Washington Post

Study Conducted in 2007 at a Washington D.C. Metro Train Station

Experiment Details: During the study, pedestrians rushed by without realizing that the musician playing at the entrance to the metro stop was Grammy-winning violinist Joshua Bell. Two days earlier, he had sold out a theater in Boston where seats averaged $100. He played one of the most intricate pieces ever written on a violin worth $3.5 million. In the 45 minutes Bell played, only six people stopped to listen for a while. Around 20 gave him money but continued to walk at their normal pace. He collected $32.

The study and the subsequent article, organized by the Washington Post, were part of a social experiment looking at people’s priorities. Gene Weingarten framed the experiment this way: “In a banal setting at an inconvenient time, would beauty transcend?” He later won a Pulitzer Prize for his story. Some of the questions the article addresses are:

  • Do we perceive beauty?
  • Do we stop to appreciate it?
  • Do we recognize the talent in an unexpected context?

As it turns out, many of us are not nearly as perceptive to our environment as we might like to think.

25. Visual Cliff Experiment

Study conducted by: Eleanor Gibson and Richard Walk

Study Conducted in 1959 at Cornell University

Experiment Details: In 1959, psychologists Eleanor Gibson and Richard Walk set out to study depth perception in infants. They wanted to know if depth perception is a learned behavior or if it is something that we are born with. To study this, Gibson and Walk conducted the visual cliff experiment.

They studied 36 infants between the ages of six and 14 months, all of whom could crawl. The infants were placed one at a time on a visual cliff. A visual cliff was created using a large glass table that was raised about a foot off the floor. Half of the glass table had a checker pattern underneath in order to create the appearance of a ‘shallow side.’

In order to create a ‘deep side,’ a checker pattern was placed on the floor below the other half of the table; this side was the visual cliff. The placement of the checker pattern on the floor created the illusion of a sudden drop-off. Researchers placed a foot-wide centerboard between the shallow side and the deep side. Gibson and Walk found the following:

  • Nine of the infants did not move off the centerboard.
  • All of the 27 infants who did move crossed into the shallow side when their mothers called them from the shallow side.
  • Three of the infants crawled off the visual cliff toward their mother when called from the deep side.
  • When called from the deep side, the remaining 24 children either crawled to the shallow side or cried because they could not cross the visual cliff and make it to their mother.

What this study helped demonstrate is that depth perception is likely an inborn trait in humans.

Among these experiments and psychological tests, we see boundaries pushed and theories taking on a life of their own. It is through the endless stream of psychological experimentation that we can see simple hypotheses become guiding theories for those in this field. The greater field of psychology became a formal field of experimental study in 1879, when Wilhelm Wundt established the first laboratory dedicated solely to psychological research in Leipzig, Germany. Wundt was the first person to refer to himself as a psychologist. Since 1879, psychology has grown into a massive collection of methods of practice, and it is also a specialty area in the field of healthcare. None of this would have been possible without these and many other important psychological experiments that have stood the test of time.

About the Author

After earning a Bachelor of Arts in Psychology from Rutgers University and then a Master of Science in Clinical and Forensic Psychology from Drexel University, Kristen began a career as a therapist at two prisons in Philadelphia. At the same time she volunteered as a rape crisis counselor, also in Philadelphia. After a few years in the field she accepted a teaching position at a local college where she currently teaches online psychology courses. Kristen began writing in college and still enjoys her work as a writer, editor, professor and mother.

The 'Honest Truth' About Why We Lie, Cheat And Steal

Chances are, you're a liar. Maybe not a big liar — but a liar nonetheless. That's the finding of Dan Ariely, a professor of psychology and behavioral economics at Duke University. He's run experiments with some 30,000 people and found that very few people lie a lot, but almost everyone lies a little.

Ariely describes these experiments and the results in a new book, The (Honest) Truth About Dishonesty: How We Lie To Everyone — Especially Ourselves. He talks with NPR's Robert Siegel about how society's troubles aren't always caused by the really bad apples; they're caused by the scores of slightly rotting apples who are cheating just a little bit.

Interview Highlights

On the traditional, cost/benefit theory of dishonesty

"The standard view is a cost/benefit view. It says that every time we see something, we ask ourselves: What do I stand to gain from this and what do I stand to lose? Imagine it's a gas station: Going by a gas station, you ask yourself: How much money is in this gas station? If I steal it, what's the chance that somebody will catch me and how much time will I have in prison? And you basically look at the cost and benefit, and if it's a good deal, you go for it."

On why the cost/benefit theory is flawed

"It's inaccurate, first of all. When we do experiments, when we try to tempt people to cheat, we don't find that these three elements — what do we stand to gain, probability of being caught and size of punishment — end up describing much of the result.

"Not only is it a bad descriptor of human behavior, it's also a bad input for policy. Think about it: When we try to curb dishonesty in the world, what do we do? We get more police force, we increase punishment in prison. If those are not the things that people consider when they think about committing a particular crime, then all of these efforts are going to be wasted."

On how small-time cheaters still perceive themselves as good people

"We want to view ourselves as honest, wonderful people and when we cheat ... as long as we cheat just a little bit, we can still view ourselves as good people, but once we start cheating too much ... we can't view ourselves as good people and therefore we stop. So this model of trying to balance the ability to view ourselves as good people on one hand and the ability to cheat on the other hand predicts that people will cheat a little bit and they will still feel good about themselves. ... That's what we see across many, many experiments."

On how only a few people cheat a lot, but a lot of people cheat a little

"Across all of our experiments, we've tested maybe 30,000 people, and we had a dozen or so bad apples and they stole about $150 from us. And we had about 18,000 little rotten apples, each of them just stole a couple of dollars, but together it was $36,000. And if you think about it, I think it's actually a good reflection of what happens in society."
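
The arithmetic behind that comparison is easy to check. A minimal sketch using the rounded figures Ariely gives in the interview (a dozen big cheaters taking about $150 total; 18,000 small cheaters at roughly $2 each) — these are his ballpark numbers, not exact experimental data:

```python
# Rounded figures from the interview, not exact experimental data.
big_cheaters, big_total = 12, 150   # "a dozen or so bad apples" stole ~$150 combined
small_cheaters = 18_000             # "little rotten apples"
small_total = small_cheaters * 2    # ~$2 each

print(small_total)               # 36000
print(small_total // big_total)  # 240 -- small-scale cheating outweighs big-scale 240 to 1
```

The point of the comparison: the aggregate damage from many tiny lies dwarfs the haul of the few big liars.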

On his favorite cheating experiment

"We give people a sheet of paper with 20 simple math problems and we say, 'You have 5 minutes to solve as many of those as you can, and we'll give you $1 per question.' We say, 'Go!' People start, they solve as many as they can, at the end of the five minutes, we say, 'Stop! Please count how many questions you got correctly, and now that you know how many questions you got correctly, go to the back of the room and shred this piece of paper. And once you've finished shredding this piece of paper, come to the front of the room and tell me how many questions you got correctly.'

"Well, people do this, they shred, they come back, and they say they solved on average six problems, we pay them $6, they go home. What the people in the experiment don't know is that we've played with the shredder, and so the shredder only shreds the sides of the page but the main body of the page remains intact. ... What we find is people basically solve four and report six. ... We find that lots of people cheat a little bit; very, very few people cheat a lot."

On a variation of this experiment in which participants cheated twice as much

"In one of the experiments, people did the same thing exactly, finished shredding the piece of paper, but when they came to report, they didn't say, 'Mr. Experimenter, I solved x problems, give me x dollars.' They say, 'Mr. Experimenter, I solved x problems, give me x tokens,' and we paid people with pieces of plastic in terms of money. And then they took these pieces of plastic and they walk 12 feet to the side and exchanged them for dollars. ... The only difference is when people stared somebody else in the eyes and lied, they lied for pieces of plastic and not money. And what happened? Our participants doubled their cheating."

On how our cashless economy may encourage cheaters

"The moment something is one step removed from money ... people can cheat more and [still] feel good about themselves. It basically relieves people from the moral shackles. And, the reason this worries me so much is because if you think about modern society, we are creating lots of cashless economy. We have electronic wallets, we have mortgage-backed securities, we have stock options, and could it be that all of those payment modalities that as they get more and more further from money become easier for us to cheat and be dishonest with them."

On a version of the experiment in which the test administrator takes a cell phone call while giving instructions to the participants, causing them to cheat even more

"I think this goes back to the law of karma, right? So if you ask yourself, how can I rationalize cheating, really the main mechanism in all of our experiments is rationalization. How can you rationalize your actions and still think of yourself as a good person? And if somebody has mistreated you, now you can probably rationalize something to a higher degree."

On the dishonesty that arises from conflicts of interest

"We need to change ... regulation, and it's basically to change conflicts of interest. ... Much like in sports, if you like a particular team and the referee calls against your team, you think the referee is evil, vicious, stupid. ... In the same way, if you have a financial stake in seeing the world in a certain way, you're going to see the world in a certain way. So the first thing I think we need to do is eradicate conflicts of interest."

On the Broken Windows theory of policing — cracking down on minor offenses in an effort to curb major offenses

"There's kind of two ways to think about the Broken Windows theory: one is about cost/benefit analysis and do people do it; the other one is about what ... society around us tells us is acceptable and not acceptable. I actually believe in the second approach for this. So when we go around the world and we ask ourselves what behavior are we willing to engage in/what behavior we're not, we look at other people for a gauge for what is acceptable. In our experiments, we've shown that if we get one person to cheat in an egregious way and other people see them, they start cheating to a higher degree. So, for me, the broken window theory is more as a social signal than fear of being caught."

APS

Cover Story

The Truth About Lying

In one of his many experiments designed to measure people’s rationalization of cheating, Dan Ariely rigged a vending machine to return both candy and the customer’s money. Although people could have filled their pockets with candy without paying a cent, on average they took no more than three or four items, he says. “Nobody took five because [they thought] five would be stealing,” he adds.

God goes to Sarah and says, “You’re going to have a child.” Sarah laughs and responds, “How can I have a child when my husband is so old?” God then goes to Abraham and tells him, “You’re going to have a child.” Abraham responds, “What did Sarah say?” And God lies: “Sarah wondered how [she can] have a child when she is so old.”

The moral of the story: It’s okay to lie for peace at home.

“When you think about it, that’s what dishonesty is all about,” Ariely said in his Fred Kavli Keynote Address at the 2016 APS Annual Convention in Chicago.

Ariely, the James B. Duke Professor of Psychology and Behavioral Economics at Duke University, points out that how we think we would act often strays far from how we actually act in the real world.

“Just to be clear, the prevalent theory of dishonesty from a legal perspective is the idea of cost–benefit analysis,” said Ariely. “It says that when people think about being dishonest, they think about ‘What can I gain? What can I lose?’ and figure out if this is a worthwhile act of dishonesty. If there’s a big cost, we’re not going to be dishonest.”

The idea of cost–benefit analysis does not describe our personal experiences, though. For instance, the theory behind the death penalty is that people considering whether to murder someone will think ahead and realize that committing that crime could result in a death sentence, so they won’t kill. But this is not how people actually function in the real world, Ariely said.

“If we have the wrong theory, our solutions are going to be ineffective,” he added.

Ariely joked that it is difficult to get people to steal millions of dollars to fund studies on dishonesty. So he has employed several different strategies, such as running task-based experiments and conducting qualitative research with criminals.

In one experiment, he and his colleagues had participants roll a die for a monetary reward corresponding to the number on the die. If the die landed on the number 5, the individual was paid $5, for example. Before rolling, though, participants decided which side of the die — top or bottom — determined the dollar amount they were to receive. Participants were instructed not to tell the researcher but to mark “top” or “bottom” on a sheet of paper.

For instance, a die might land with 5 on the bottom and 2 on top. Ariely asked the participant who rolled the die, “Which side did you pick?” If the participant had picked “bottom,” no problem, but if they had picked “top,” they faced a dilemma — should they lie to make more money or tell the truth and make less money?

“When people did this 20 times, we found that they were incredibly lucky,” said Ariely. “Not lucky 100 percent of the time, but maybe 13 or 14 times.”
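
That “luck” can be benchmarked against chance. Assuming a fair die and independent rolls (an assumption made here for illustration, not Ariely’s published analysis), a pre-committed side matches the higher payoff half the time, so an honest participant should average about 10 favorable rolls out of 20:

```python
from math import comb

n, p = 20, 0.5      # 20 rolls; each has a 50/50 chance the chosen side pays more
expected = n * p    # honest reporting should average ~10 "lucky" rolls

# Probability that a single honest participant gets 13 or more lucky rolls of 20.
p_13_plus = sum(comb(n, k) for k in range(13, n + 1)) / 2 ** n

print(expected)             # 10.0
print(round(p_13_plus, 3))  # 0.132
```

A ~13% tail is plausible for any one person, but a group *averaging* 13 or 14 lucky rolls sits far beyond what chance allows, which is what makes the reported pattern a signature of cheating.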

In another die experiment conducted at Duke University, researchers presented participants with the following situation: You can earn either $4 or $40 depending on where the die lands. In every scenario, the experimenter said, “Sorry, you landed on the $4 one.” Then, the experimenter told the participant, “My boss isn’t here, so if you give me the $3 you received just for coming in, I’ll pretend you landed on the $40 one.” Ninety percent of students took the bribe.

In another study, Ariely utilized a vending machine. The machine was set up to say that bags of candy cost 75 cents on the outside, but its mechanism on the inside was set to zero cents. So when people put money in the vending machine, they would get extra bags of candy, and all of their money back. A big sign on the vending machine read, “If there’s something wrong with this machine, please call this number”— in this instance, Ariely’s cell phone number. Nobody called, but nobody took more than four bags of candy.

“The majority took three or four, but nobody took five because five would be stealing,” Ariely said, drawing laughs. “And you think about how people might rationalize this decision: ‘This other vending machine took my money and didn’t give me candy, and this vending machine must be a close relative of that one.’ We’re just sorting out the vending karma in the world.”

In another experiment, participants performed some tasks and then told the experimenter how much money they earned and immediately received that amount. In a variation of the same experiment, they came and asked for tokens instead, then walked 12 feet and exchanged those tokens for money. Participants were twice as likely to cheat when they requested tokens compared with when they asked for money.

“As a society, we’re moving away from tangible representations of money,” said Ariely. “Could it be that, as psychological distance increases, people behave in a worse way but still feel good about themselves? If it does, what are the precautions we should have under those systems?”

Additionally, Ariely talked about the role that conflicts of interest and dishonesty can play in the academic world. In an experiment conducted at Harvard University, the pattern of results confirmed the researchers’ hypothesis except for one outlier. Researchers recalled that the man who represented the outlier was 20 years older than the other participants and also had been intoxicated. When they pulled out his data point, the data was much more uniform.

About a week later, one of Ariely’s students asked, “What if that drunk person had fit the average and wasn’t an outlier?”

“We probably would have never looked,” said Ariely. “There was a particular version of reality that we wanted to see, and we were using our creativity to justify setting this path. We were cheating ourselves.”

To better understand cheating and dishonesty, Ariely also took an anthropological approach to his research and spoke with various criminals. He tells the story of Joe Papp, an Olympic cyclist who went back to school to complete his undergraduate education. When Papp returned to cycling, he felt like he was performing as well as he had before college but that other cyclists were faster. One of Papp’s friends recommended that he see a physician, who wrote Papp a prescription for erythropoietin (EPO), a cancer treatment that increases the production of red blood cells. Papp gave himself the injections, but when there was a shortage of EPO, he imported and distributed EPO for his team and for other teams. He essentially became a drug dealer.

“When you look at crimes, in a lot of the cases, it’s about the slippery slope,” explained Ariely. “You say to yourself, ‘I can’t imagine being a drug dealer.’ But ask yourself, when would you have stopped? Because of the commonality and danger of the first step, what is the difference between people who commit crimes and those who don’t? Is it just missed opportunity? We find that it’s all about the ability to rationalize dishonesty.”

But Ariely also shared results of experiments that, by priming people to think about their personal morals or ethics, tilt their behaviors in a more honest direction. He recounted experiments in which he and his colleagues asked one group of study participants to recall the Ten Commandments, and the other group to recall 10 books they had read in high school. The latter group largely engaged in widespread but moderate cheating when given subsequent reward-based tasks designed to measure honesty. But the group that recalled the Ten Commandments didn’t cheat at all. The result was the same when they reran the experiment on a group of self-declared atheists who were asked to swear on the Bible.

Those findings have plenty of real-world applications, some of which already are being tested or implemented. One of the most noteworthy, Ariely pointed out, is having people put their signature at the top rather than the bottom of various documents (e.g., insurance forms); they’re essentially verifying that the information they’re providing is true before they have a chance to fudge it.

“So there is hope,” he says, “and I think as long as we understand where dishonesty comes from, we can do something about it.”

psychology experiments lie

Im interested in using this as an external source for an essay. Is there any information, such as the author and publication date, available. Please let me know . Thank you for your time.

psychology experiments lie

So if we are required to leave our name and email are we more likely to make our comments honest and objective?

FYI – Dan does an excellent documentary on Netflix on this topic.

psychology experiments lie

I need the citations to this article. By the way, is the spelling correct for “die” or is it actually spell as “dice”. Thank you.

psychology experiments lie

Please have Dan Ariely read Genesis 18 properly. God never went to Sarah but to Abraham. Sarah eavesdropped and laughed at God’s promise of a son. God asked Abraham why Sarah laughed and SARAH lied!

No point in starting your entire speech with a lie. It discredits everything else you have to say.

psychology experiments lie

your story about Abramham and sarah is incorrect here, Sarah over heard the comment from three strangers who Abraham had invited to a meal, that she would be in child within the next year when they return, she laughed while behind the tent due to her old age and her husbands age, it says the Lord asked why she had laughed in private behind her tent/ she denied laughing and Lied, You

APS regularly opens certain online articles for discussion on our website. Effective February 2021, you must be a logged-in APS member to post comments. By posting a comment, you agree to our Community Guidelines and the display of your profile information, including your name and affiliation. Any opinions, findings, conclusions, or recommendations present in article comments are those of the writers and do not necessarily reflect the views of APS or the article’s author. For more information, please see our Community Guidelines .

Please login with your APS account to comment.

psychology experiments lie



The Truth about Lying

You can’t spot a liar just by looking, but psychologists are zeroing in on methods that might actually work.

Police thought that 17-year-old Marty Tankleff seemed too calm after finding his mother stabbed to death and his father mortally bludgeoned in the family’s sprawling Long Island home. Authorities didn’t believe his claims of innocence, and he spent 17 years in prison for the murders.

Yet in another case, detectives thought that 16-year-old Jeffrey Deskovic seemed too distraught and too eager to help detectives after his high school classmate was found strangled. He, too, was judged to be lying and served nearly 16 years for the crime.

One man was not upset enough. The other was too upset. How can such opposite feelings both be telltale clues of hidden guilt?

They’re not, says psychologist Maria Hartwig, a deception researcher at John Jay College of Criminal Justice at the City University of New York. The men, both later exonerated, were victims of a pervasive misconception: that you can spot a liar by the way they act. Across cultures, people believe that behaviors such as averted gaze, fidgeting and stuttering betray deceivers.

In fact, researchers have found little evidence to support this belief despite decades of searching. “One of the problems we face as scholars of lying is that everybody thinks they know how lying works,” says Hartwig, who coauthored a study of nonverbal cues to lying in the Annual Review of Psychology. Such overconfidence has led to serious miscarriages of justice, as Tankleff and Deskovic know all too well. “The mistakes of lie detection are costly to society and people victimized by misjudgments,” says Hartwig. “The stakes are really high.”

Tough to tell

Psychologists have long known how hard it is to spot a liar. In 2003, psychologist Bella DePaulo, now affiliated with the University of California, Santa Barbara, and her colleagues combed through the scientific literature, gathering 116 experiments that compared people’s behavior when lying and when telling the truth. The studies assessed 102 possible nonverbal cues, including averted gaze, blinking, talking louder (a nonverbal cue because it does not depend on the words used), shrugging, shifting posture and movements of the head, hands, arms or legs. None proved reliable indicators of a liar, though a few were weakly correlated, such as dilated pupils and a tiny increase — undetectable to the human ear — in the pitch of the voice.

Three years later, DePaulo and psychologist Charles Bond of Texas Christian University reviewed 206 studies involving 24,483 observers judging the veracity of 6,651 communications by 4,435 individuals. Neither law enforcement experts nor student volunteers were able to pick true from false statements better than 54 percent of the time — just slightly above chance. In individual experiments, accuracy ranged from 31 to 73 percent, with the smaller studies varying more widely. “The impact of luck is apparent in small studies,” Bond says. “In studies of sufficient size, luck evens out.”

This size effect suggests that the greater accuracy reported in some of the experiments may just boil down to chance, says psychologist and applied data analyst Timothy Luke at the University of Gothenburg in Sweden. “If we haven’t found large effects by now,” he says, “it’s probably because they don’t exist.”
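Bond's point that luck "evens out" in larger studies is a statistical one, and can be illustrated with a toy simulation (a hedged sketch, not part of either review; only the 54 percent figure is taken from the text, and the function names and study sizes are invented for illustration):

```python
import random

random.seed(1)

TRUE_ACCURACY = 0.54  # pooled lie-detection accuracy reported by Bond and DePaulo

def observed_accuracy(n_judgments):
    """Simulate one study: each judgment is correct with probability 0.54."""
    correct = sum(random.random() < TRUE_ACCURACY for _ in range(n_judgments))
    return correct / n_judgments

def accuracy_spread(n_judgments, n_studies=1000):
    """Min and max observed accuracy across many simulated studies of one size."""
    results = [observed_accuracy(n_judgments) for _ in range(n_studies)]
    return min(results), max(results)

small = accuracy_spread(30)    # small studies scatter widely around 54 percent
large = accuracy_spread(3000)  # large studies cluster tightly around 54 percent
print("n=30:", small, " n=3000:", large)
```

Rerunning with different seeds shows the same pattern: small simulated studies can stray well outside the pooled figure, much like the 31 to 73 percent range reported above, while large ones barely move from 54 percent.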

Police experts, however, have frequently made a different argument: that the experiments weren’t realistic enough. After all, they say, volunteers — mostly students — instructed to lie or tell the truth in psychology labs do not face the same consequences as criminal suspects in the interrogation room or on the witness stand. “The ‘guilty’ people had nothing at stake,” says Joseph Buckley, president of John E. Reid and Associates, which trains thousands of law enforcement officers each year in behavior-based lie detection. “It wasn’t real, consequential motivation.”

Samantha Mann, a psychologist at the University of Portsmouth, UK, thought that such police criticism had a point when she was drawn to deception research 20 years ago. To delve into the issue, she and colleague Aldert Vrij first went through hours of videotaped police interviews of a convicted serial killer and picked out three known truths and three known lies. Then Mann asked 65 English police officers to view the six statements and judge which were true, and which false. Since the interviews were in Dutch, the officers judged entirely on the basis of nonverbal cues.

The officers were correct 64 percent of the time — better than chance, but still not very accurate, she says. And the officers who did worst were those who said they relied on nonverbal stereotypes like “liars look away” or “liars fidget.” In fact, the killer maintained eye contact and did not fidget while deceiving. “This guy was clearly very nervous, no doubt,” Mann says, but he controlled his behavior to strategically counter the stereotypes.

In a later study, also by Mann and Vrij, 52 Dutch police officers did no better than chance at distinguishing true from false statements made by family members who had murdered their relatives but denied it in anguished displays during the televised press conferences used in the study. Notably, the officers who performed worst were those who felt that the emotional displays were genuine. But what did that signify? “If a husband killed his wife, he could be upset for all kinds of reasons, like remorse or getting caught,” says Mann. “Nonverbal behavior is so idiosyncratic. If you focus on the nonverbal behavior like emotions, you will trip up.”

Confirming these results on a large scale years later, Hartwig and Bond reviewed the literature for studies comparing people’s abilities to detect high- and low-stakes lies. They found no evidence that people were any better at detecting lies told by criminals or wrongly accused suspects in police investigations than those told by laboratory volunteers.

Something to hide

From serial killers to students fibbing in laboratory experiments, the lies in all these experiments were spoken. In a study published in 2019, Mann examined nonverbal deceit, such as when someone is trying to conceal illicit activity — a type of deception relevant to detecting bombers or smugglers. She recruited 52 university student volunteers and asked half of them to transport a laptop said to contain sensitive photographs on a ferry ride between two cities. The volunteers were told to try to blend into the crowd and not look “suspicious,” because people would be trying to identify them. The other half of the group were given a regular cell phone to transport, with no instructions to conceal what they were doing.

When Mann showed videos of the ferry rides to 104 other volunteers and asked them to pick out the “smugglers,” the spotters did no better than chance. In interviews afterward, the “smugglers” said they were nervous, but they consciously tried to act normal and control their nerves with tactics such as listening to music or using their phones.

Next, Mann raised the stakes. Half of a new group of 60 volunteers were given an envelope of Russian, Egyptian and Korean currency to conceal, while the other half didn’t “smuggle” anything. But this time, Mann sent two researchers onto the ferry to pace around and scrutinize the passengers, appearing to compare their faces to photos on a cell phone.

This time, 120 observers trying to pick out the “smugglers” on video guessed correctly just 39.2 percent of the time — well below chance. The reason, Mann says, is that the “smugglers” consciously made an effort to look normal, while the “innocent” control volunteers just acted naturally. Their surprise at the unexpected scrutiny looked to the observers like a sign of guilt.

The finding that deceivers can successfully hide nervousness fills in a missing piece in deception research, says psychologist Ronald Fisher of Florida International University, who trains FBI agents. “Not too many studies compare people’s internal emotions with what others notice,” he says. “The whole point is, liars do feel more nervous, but that’s an internal feeling as opposed to how they behave as observed by others.”

Studies like these have led researchers to largely abandon the hunt for nonverbal cues to deception. But are there other ways to spot a liar? Today, psychologists investigating deception are more likely to focus on verbal cues, and particularly on ways to magnify the differences between what liars and truth-tellers say.

For example, interviewers can strategically withhold evidence longer, allowing a suspect to speak more freely, which can lead liars into contradictions. In one experiment, Hartwig taught this technique to 41 police trainees, who then correctly identified liars about 85 percent of the time, as compared to 55 percent for another 41 recruits who had not yet received the training. “We are talking significant improvements in accuracy rates,” says Hartwig.

Another interviewing technique taps spatial memory by asking suspects and witnesses to sketch a scene related to a crime or alibi. Because this enhances recall, truth-tellers may report more detail. In a simulated spy mission study published by Mann and her colleagues last year, 122 participants met an “agent” in the school cafeteria, exchanged a code, then received a package. Afterward, participants instructed to tell the truth about what happened gave 76 percent more detail about experiences at the location during a sketching interview than those asked to cover up the code-package exchange. “When you sketch, you are reliving an event — so it aids memory,” says study coauthor Haneen Deeb, a psychologist at the University of Portsmouth.

The experiment was designed with input from UK police, who regularly use sketching interviews and work with psychology researchers as part of the nation’s switch to non-guilt-assumptive questioning, which officially replaced accusation-style interrogations there in the 1980s and 1990s after scandals involving wrongful convictions and abuse.

Slow to change

In the US, though, such science-based reforms have yet to make significant inroads among police and other security officials. The US Department of Homeland Security’s Transportation Security Administration, for example, still uses nonverbal deception clues to screen airport passengers for questioning. The agency’s secretive behavioral screening checklist instructs agents to look for supposed liars’ tells such as averted gaze — considered a sign of respect in some cultures — and prolonged stare, rapid blinking, complaining, whistling, exaggerated yawning, covering the mouth while speaking and excessive fidgeting or personal grooming. All have been thoroughly debunked by researchers.

With agents relying on such vague, contradictory grounds for suspicion, it’s perhaps not surprising that passengers lodged 2,251 formal complaints between 2015 and 2018 claiming that they’d been profiled based on nationality, race, ethnicity or other reasons. Congressional scrutiny of TSA airport screening methods goes back to 2013, when the US Government Accountability Office — an arm of Congress that audits, evaluates and advises on government programs — reviewed the scientific evidence for behavioral detection and found it lacking, recommending that the TSA limit funding and curtail its use. In response, the TSA eliminated the use of stand-alone behavior detection officers and reduced the checklist from 94 to 36 indicators, but retained many scientifically unsupported elements like heavy sweating.

In response to renewed Congressional scrutiny, the TSA in 2019 promised to improve staff supervision to reduce profiling. Still, the agency continues to see the value of behavioral screening. As a Homeland Security official told congressional investigators, “common sense” behavioral indicators are worth including in a “rational and defensible security program” even if they do not meet academic standards of scientific evidence. In a statement to Knowable, TSA media relations manager R. Carter Langston said that “TSA believes behavioral detection provides a critical and effective layer of security within the nation’s transportation system.” The TSA points to two separate behavioral detection successes in the last 11 years that prevented three passengers from boarding airplanes with explosive or incendiary devices.

But, says Mann, without knowing how many would-be terrorists slipped through security undetected, the success of such a program cannot be measured. And, in fact, in 2015 the acting head of the TSA was reassigned after Homeland Security undercover agents in an internal investigation successfully smuggled fake explosive devices and real weapons through airport security 95 percent of the time.

In 2019, Mann, Hartwig and 49 other university researchers published a review evaluating the evidence for behavioral analysis screening, concluding that law enforcement professionals should abandon this “fundamentally misguided” pseudoscience, which may “harm the life and liberty of individuals.”

Hartwig, meanwhile, has teamed with national security expert Mark Fallon, a former special agent with the US Naval Criminal Investigative Service and former Homeland Security assistant director, to create a new training curriculum for investigators that is more firmly based in science. “Progress has been slow,” Fallon says. But he hopes that future reforms may save people from the sort of unjust convictions that marred the lives of Jeffrey Deskovic and Marty Tankleff.

For Tankleff, stereotypes about liars have proved tenacious. In his years-long campaign to win exoneration and recently to practice law, the reserved, bookish man had to learn to show more feeling “to create a new narrative” of wronged innocence, says Lonnie Soury, a crisis manager who coached him in the effort. It worked, and Tankleff finally won admittance to the New York bar in 2020. Why was showing emotion so critical? “People,” says Soury, “are very biased.”

Editor’s note: This article was updated on March 25, 2021, to correct the last name of a crisis manager quoted in the story. Their name is Lonnie Soury, not Lonnie Stouffer.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews.


SciTechDaily

Exposing Liars by Distraction – Science Reveals a New Method of Lie Detection

A new method of lie detection shows that lie-tellers who are made to multitask while being interviewed are easier to detect.

It has been clearly established that lying during interviews consumes more cognitive energy than telling the truth. Now, a new study by the University of Portsmouth has found that investigators who used this knowledge to their advantage, by asking a suspect to carry out an additional, secondary task while being questioned, were more likely to expose liars. The extra brain power required to concentrate on a secondary task (other than lying) was particularly challenging for lie-tellers.

In this experiment, the secondary task used was to recall a seven-digit car registration number. The secondary task was only found to be effective if lie tellers were led to believe that it was important.

Professor Aldert Vrij, from the Department of Psychology at the University of Portsmouth, who designed the experiment, said: “In the last 15 years we have shown that lies can be detected by outsmarting lie tellers. We demonstrated that this can be done by forcing lie tellers to divide their attention between formulating a statement and a secondary task.

“Our research has shown that truths and lies can sound equally plausible as long as lie tellers are given a good opportunity to think what to say. When the opportunity to think becomes less, truths often sound more plausible than lies. Lies sounded less plausible than truths in our experiment, particularly when the interviewees also had to carry out a secondary task and were told that this task was important.”

The 164 participants in the experiment were first asked to rate their support for or opposition to various societal topics that were in the news. They were then randomly allocated to a truth or lie condition and interviewed about the three topics they felt most strongly about. Truth tellers were instructed to report their true opinions, whereas lie tellers were instructed to lie about their opinions during the interviews.

Those doing the secondary task were given a seven-digit car registration number and instructed to recall it to the interviewer. Half of them received the additional instruction that, if they could not remember the car registration number during the interview, they might be asked to write down their opinions after the interview.

Participants were given the opportunity to prepare themselves for the interview and were told it was important to come across as convincing as possible during the interviews – which was incentivized by being entered into a prize draw.

The results revealed that lie tellers’ stories sounded less plausible and less clear than truth tellers’ stories, particularly when lie tellers were given the secondary task and told that it was important.

Professor Vrij said: “The pattern of results suggests that the introduction of secondary tasks in an interview could facilitate lie detection but such tasks need to be introduced carefully. It seems that a secondary task will only be effective if lie tellers do not neglect it. This can be achieved by either telling interviewees that the secondary task is important, as demonstrated in this experiment, or by introducing a secondary task that cannot be neglected (such as gripping an object, holding an object into the air, or driving a car simulator). Secondary tasks that do not fulfil these criteria are unlikely to facilitate lie detection.”

The research was published in the International Journal of Psychology and Behaviour Analysis.

Reference: “The Effects of a Secondary Task on True and False Opinion Statements” by Aldert Vrij, Haneen Deeb, Sharon Leal and Ronald P. Fisher, 28 March 2022, International Journal of Psychology and Behaviour Analysis.


PLOS ONE

Telling Lies: The Irrepressible Truth?

Emma J. Williams

School of Psychology, Cardiff University, Cardiff, United Kingdom

Lewis A. Bott

John Patrick, Michael B. Lewis

Conceived and designed the experiments: EJW LAB MBL JP. Performed the experiments: EJW. Analyzed the data: EJW. Contributed reagents/materials/analysis tools: EJW LAB MBL JP. Wrote the paper: EJW LAB MBL JP.

Telling a lie takes longer than telling the truth but precisely why remains uncertain. We investigated two processes suggested to increase response times, namely the decision to lie and the construction of a lie response. In Experiments 1 and 2, participants were directed or chose whether to lie or tell the truth. A colored square was presented and participants had to name either the true color of the square or lie about it by claiming it was a different color. In both experiments we found that there was a greater difference between lying and telling the truth when participants were directed to lie compared to when they chose to lie. In Experiments 3 and 4, we compared response times when participants had only one possible lie option to a choice of two or three possible options. There was a greater lying latency effect when questions involved more than one possible lie response. Experiment 5 examined response choice mechanisms through the manipulation of lie plausibility. Overall, results demonstrate several distinct mechanisms that contribute to additional processing requirements when individuals tell a lie.

Introduction

People lie surprisingly often, a behavior that requires a number of complex processes [1]. For example, 40% of adults have reported telling a lie at least once per day [2]. The majority of these lies are likely to be trivial in nature, serving a communicative function [3]–[5]; however, others can have more drastic consequences, such as those told by criminal witnesses and suspects [6]–[10]. Despite the apparent prevalence of lie-telling within society, lying is a complicated behavior that requires breaking the normal, default rules of communication [11]. The liar must first of all decide not to assert the truth, and then must assert an alternative statement that is plausible and appears informative to the listener, all the while concealing any outward signs of nervousness. Such a pragmatic feat requires cognitive processes in addition to those used when telling the truth. In this article we investigate what those processes might be. As such, we are less interested in the intent to instil a false belief in another’s mind and more interested in the necessary and universal cognitive processes associated with making a statement that is not true. The research presented here may be far removed from an aggressive interrogation where lives or liberty are at stake, but the fundamental cognitive processes that take place when someone either tells the truth or constructs a falsehood will have some aspects in common regardless of the situation. The aim of the current research is to better understand these cognitive processes.

Our starting point is to examine the reasons given in the literature for why lying appears to be more difficult than telling the truth. Longer lie times, for example, must be indicative of additional cognitive processes involved in lying compared to telling the truth. Based on a framework developed in 2003 [1], we will discuss three processes that have been implicated in lying and summarise the empirical evidence in favour of each.

Suppression of the truth

Our default communicative stance is to tell the truth. Without the assumption that speakers utter the truth most of the time, it is difficult to see how efficient communication could ever occur [11]. This suggests that when people wish to lie in response to a question they will need to intentionally suppress the default, truthful response, which should increase the difficulty of lying relative to telling the truth.

There is indeed plenty of empirical evidence consistent with the claim that telling lies involves suppressing the truth. For example, many researchers have found longer response times for lying relative to telling the truth [1], [12]–[17], and there is neuroscientific evidence that brain regions active in lying overlap with brain regions associated with general response inhibition [18]–[22].

A number of these studies have been based around a lie detection technique known as the Concealed Information Test (CIT) [23]. This typically involves the presentation of a variety of different images or words via a computer screen. Some of these stimuli relate to previously learned information, known as probes, whereas others are irrelevant items. In practical situations, individuals may be asked the identity of a murder weapon, with the probe item being an image of the actual murder weapon (i.e., a knife) embedded within a series of irrelevant images (i.e., a gun, a hammer, a baseball bat). Participants are instructed to deny recognition of all items. If participants have concealed knowledge and recognise the murder weapon, they are expected to respond differentially to probe and irrelevant items. Although traditionally used to examine physiological responses, such as skin conductance [16] and event-related potentials [24]–[26], this paradigm has recently been used with response times to successfully discriminate “guilty” from “innocent” participants, with guilty participants taking longer to deny recognition of probes than irrelevant items [16], [27], [28]. It has been argued, however, that such paradigms measure the possession of concealed knowledge rather than deception per se [14], and therefore may not allow us to fully elucidate the distinct processes involved in responding to questions deceptively.
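The response-time logic of the CIT can be sketched as a simple scoring rule (a hypothetical illustration only, not the published procedure; the item names, response times, and 50 ms threshold are invented for the example): a participant is flagged as having concealed knowledge when their denials of probe items are, on average, sufficiently slower than their denials of irrelevant items.

```python
from statistics import mean

def cit_score(trials):
    """trials: (item_type, response_time_ms) pairs, item_type 'probe' or 'irrelevant'.
    Returns mean probe RT minus mean irrelevant RT."""
    probe = [rt for kind, rt in trials if kind == "probe"]
    irrelevant = [rt for kind, rt in trials if kind == "irrelevant"]
    return mean(probe) - mean(irrelevant)

def flag_concealed_knowledge(trials, threshold_ms=50.0):
    """Flag a participant whose probe denials are slower by more than the threshold."""
    return cit_score(trials) > threshold_ms

# A participant who recognizes the probe (the knife) denies it more slowly.
guilty = [("probe", 720.0), ("probe", 748.0),
          ("irrelevant", 615.0), ("irrelevant", 630.0), ("irrelevant", 610.0)]
# A participant with no concealed knowledge responds similarly to all items.
innocent = [("probe", 620.0), ("probe", 605.0),
            ("irrelevant", 612.0), ("irrelevant", 625.0), ("irrelevant", 608.0)]

print(flag_concealed_knowledge(guilty), flag_concealed_knowledge(innocent))
```

Real analyses use many trials per item and standardized effect sizes rather than a fixed millisecond cutoff; the sketch only shows why slower probe denials separate “guilty” from “innocent” responders.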

These findings have meant that recent cognitive models of deception have incorporated both the automatic activation of the truth and its resultant suppression as additional processes that contribute to longer response times for liars [1], [20], [29]–[31]. For example, the Activation-Decision-Construction Model (ADCM) [1], [31] claims that following a question, relevant information (in particular, the truth) is automatically activated in long-term memory [32]. This information is then made consciously available in working memory [33]. In order to respond to a question deceptively, cognitive resources are required to inhibit the truthful response. Similarly, the Working Model of Deception (WMD) [30] highlights response inhibition as a pre-requisite to responding to a question deceptively.

While the need to suppress the truth is undeniably an important component of why lying is more difficult than telling the truth, there are several other reasons that have received less attention in the literature and that might also contribute. These are discussed below.

The decision to lie

Assuming that people tell the truth by default [11], they must make a conscious choice to lie. The decision to lie is therefore likely to be an additional cognitive process associated with lying that takes time to execute. Indeed, current models of how we lie include a lie decision component. For example, the Working Model of Deception (WMD) [30] assumes that when an individual hears a question to which they may respond deceptively, executive control processes are used to determine the appropriate response (i.e., lie or truth), with a decision being made based on the likely risks and benefits involved. Similarly, the Activation Decision Construction Model Revised (ADCM-R) [34] considers individuals who have previously decided to lie to particular questions and have rehearsed an answer. In these cases, the model states that a decision is still required because individuals must remind themselves of their decision to lie when that particular question is heard.

Despite the inclusion of decision components in the models, surprisingly little work has specifically investigated how people make the decision to lie. This is perhaps because it is experimentally much easier to instruct people when to lie than to allow them to choose. We can find only a few papers that have investigated the decision process [21], [31]. The first of these [31] presented participants with a selection of neutral questions and questions probing embarrassing information. Participants were instructed to lie to certain questions, such as those regarding their employment history, and to tell the truth to others, such as those regarding what they did on Sunday morning. For general questions, however, they were instructed to answer truthfully unless a question probed embarrassing information about which they would normally lie to a stranger, in which case they should lie. In this condition, participants had to decide for themselves when to lie and when to tell the truth. The experiment demonstrated that more time was needed to respond when individuals chose to lie than when they had been instructed to lie, and both took longer than telling the truth, consistent with the idea that deciding how to respond adds to the cognitive processing load. However, it is difficult to be certain whether the elevated response times were due to evaluating whether a question was embarrassing or to deciding how to respond.

The second of these papers [21] allowed participants to choose whether to lie or tell the truth to computer-generated yes/no questions regarding an embarrassing past life event, although participants were asked to achieve an approximate balance between truths and lies over the course of the experiment. Brain activity was recorded using neuroimaging techniques rather than behavioural measures. As when individuals have been instructed how to respond [12], [15], [20], [35], lying showed increased activation of the ventrolateral prefrontal cortices (implicated in deceptive capabilities [20]) compared to truth-telling. However, because there was no direct comparison between trials where participants chose how to respond and trials where they were instructed, little can be concluded about the decision process itself.

Construction of the lie

Lies and truths also differ in the way in which they are constructed. Often more than one possible lie is available, in which case the particular lie produced must be explicitly chosen from a range of alternatives. For a lie to be convincing, it must be plausible and consistent with previous information, so selecting such a lie introduces additional constraints. Truths, on the other hand, seem to be generated automatically, without the need to select “which” truth, since stimulus questions must merely be evaluated in relation to known information [36]. The procedures needed to choose which lie to use and to verify its plausibility may be costly to operate.

One study [31] directly tested whether the added complexity of lie construction contributes to elevated lie response times. The approach was to manipulate whether participants responded to open-ended questions, such as “What color is your hair?”, or yes/no questions, such as “Is your hair brown?” (Although we appreciate that differing definitions of open-ended questions exist, for clarity we use the same terms as the above-cited paper.) It was argued that more lie construction was needed to respond to open-ended questions than to yes/no questions, because open-ended questions required explicit retrieval of information from long-term memory, whereas yes/no questions merely required the production of an affirmation or denial. If lie construction contributes to longer lie response times, then lying to open-ended questions should be more difficult than lying to yes/no questions. Consistent with these predictions, longer lie response times were observed in the open-ended condition than in the yes/no condition [31]. There are a number of issues that make the interpretation of this result difficult, however. First, while lying to open-ended questions was slow relative to yes/no questions, telling the truth was also slow. It is therefore not clear whether the effect relates to lie construction or to the difficulty of responding to open-ended questions in general. Second, the content of the questions was not equated across the yes/no and open-ended conditions. For example, response times to questions such as “Do you like chocolate?” were compared with questions such as “How many credit cards do you own?” Differences in response times could therefore be explained by differences in the ease of accessing the relevant information, rather than by the question types per se.

While there has been no direct evidence about how people assess the plausibility of potential lies, there is indirect evidence that complex lies are costly to generate: if a person needs to monitor the plausibility of a lie, this will be more difficult for more complex lies. First, studies investigating the effects of making lies more complex have found that such lies are easier to detect. For example, asking participants to recall events in reverse order [10] and using interview techniques that require longer answers to questions [37] have both increased discrimination between liars and truth tellers. That lies are easier to detect when they are more complex suggests that extra resources are needed to construct a plausible lie.

Second, if lie construction independently contributes to the processing difference between lying and truth-telling, individuals who have had the opportunity to rehearse or prepare a lie response should require less processing time than unprepared liars. Several studies have found evidence that this is the case. A review of the literature conducted in 1981 found that the response time difference between lying and truth-telling occurred only when participants had not rehearsed a response [17]. A recent meta-analysis of 158 cues to deception similarly found that longer response times for liars showed a significant effect size only when participants were not given the opportunity to prepare their lie [38]. Paradigms incorporating an explicit period of rehearsal have likewise shown smaller response time differences between rehearsed lies and truths than between unrehearsed lies and truths [34].

In summary, we have reviewed the evidence for three processes involved in lying that are not involved in telling the truth. There is substantial evidence that the first process, the suppression of the truth, contributes to the extra costs involved in lying, but the evidence for the other processes is weaker. Our study therefore concentrates on testing whether the decision to lie and the construction of the lie contribute to the greater difficulty of lying, as distinct from suppressing the truth. In doing so, we hope to understand in more detail what cognitive processes are involved when people lie.

Cognitive load

The aspects of lying described above all arguably add to the cognitive load of the task, and imposing additional cognitive load has proved effective in lie detection research. As noted above, making lies more complex makes them easier to detect: asking participants to recall an event in reverse order [10] or using interview techniques that require longer answers to questions [37] increases discrimination between liars and truth tellers.

Although cognitive load provides a basis for current theoretical treatments of deception, its underlying mechanisms and processes are not fully understood. The cognitive load approach holds that telling a lie is cognitively more demanding than telling the truth and produces behaviour that reveals this additional mental effort, such as a decrease in body movements and an increase in response time. It does not, however, explain precisely why deception is more cognitively challenging, or which particular processes are involved in any deceptive encounter. This is what the current study aims to explore.

The Current Study

Our paradigm involved presenting participants with a colored square and asking them to lie or tell the truth about its color, with vocal onset time as the dependent measure. This paradigm allowed us to focus on two main aspects of the lie process, namely the suppression of factually truthful information and the production of an alternative, false response, since both should be required when falsely describing the color of a square. In Experiments 1 and 2 we investigated the decision to lie by comparing trials in which participants chose whether to lie or tell the truth with trials in which they were instructed. In this way, we could evaluate whether the process of deciding to lie carried over into the lie itself. In real-world settings a person decides to lie rather than being directed to lie, so it is important to know whether the differences observed when people decide to lie match those observed when they are directed to lie. In Experiments 3, 4 and 5 we investigated the lie construction process by comparing one possible lie response with a choice of two or three lie response possibilities, and by manipulating the plausibility of particular lie responses.

The color-naming paradigm that we have developed is different from the paradigms generally used in lie research. For example, in previous studies, participants have watched a simulated crime and lied about the protagonist [39], or been questioned by an interviewer regarding their background and instructed to lie about certain details [40]. The reason for the difference in methodology is that most previous research into lying has been concerned with lie detection, whereas we are interested in the underlying cognitive processes. Deception researchers, understandably, are interested in the measure that best distinguishes lies from truths, whether that is skin conductance [41], facial expressions [42], or offline measures such as linguistic analyses [43], none of which are necessarily indicative of cognitive processes.

When researchers have used more traditional cognitive markers of deceit, such as response times, the emphasis has been on discovering whether a difference between lies and truths exists and how it compares with other ways of differentiating deception [16]. Our experiments, by contrast, were designed to isolate the individual components of lying, which required eliminating as much variability as possible. We therefore removed factors such as the stress associated with lying and the incentive to lie, which by their nature may affect the process variably [14], [15], [44], [38]. We consider the processes investigated here – the suppression of the truth and the production of alternatives – to be involved in every instance of lying and therefore fundamental to the cognition of lying. Stress, the incentive to lie, and other situational factors need to be considered beyond the basic cognitive processes examined here.

Experiment 1

There were two goals for Experiment 1. The first was to establish whether our paradigm produced results consistent with the past literature on lying; specifically, that lie responses are slower than true responses [15], [45]. The second was to investigate the effects of deciding to lie by manipulating whether participants chose to lie or were directed to lie. Thus, prior to the presentation of the colored square, participants were either given an instruction to lie or tell the truth, or were given a choice between the two. On the latter trials, participants input their decision (lie or truth) on the keyboard. Once the square was presented, participants responded vocally with either the true color of the square or a lie about its color. We reasoned that the decision-making process would be involved in the choice condition but not in the directed condition, and that this would be reflected in differences in lying latency.

Different decision processes make different predictions about the interaction between the type of instruction (directed or choice) and the honesty of the response (truth or lie). We consider two possibilities. First, the decision to lie could be a departure from the normal, truth-telling state. Deciding to lie, rather than adhering to the default truth, would therefore require extra processing effort. This is the basic idea behind the decision components of the ADCM [34] and the WMD [30]. If the decision to lie is more difficult than the decision to tell the truth, participants should need relatively longer to lie than to tell the truth in the choice condition compared to the directed condition. In short, there should be an interaction between instruction and honesty, with a larger lie/truth difference in the choice condition. Second, deciding to lie could be no different from deciding to tell the truth. In that case, having to make the decision would not affect the size of the lie/truth difference in response times. Having to choose a response would generally be more difficult than being directed, so longer overall latencies might be expected in the choice condition than in the directed condition, alongside longer lie than truth latencies. Under this account, only main effects of type of instruction and honesty would be expected.

Participants

Twenty-one Cardiff University undergraduate psychology students volunteered for this study in exchange for course credit. Of these, 20 were female. Participants had a mean age of 19.52 (SD = 0.68; range = 18–21) and spoke English as their first language. For this experiment, and all subsequent experiments reported here, ethical approval was granted by the School of Psychology Ethics Committee at Cardiff University. In accordance with this, informed written and oral consent was obtained from all participants prior to the experimental task.

A 2 x 2 within-subjects design was used, with the independent variables being honesty of response (lie vs. truth) and type of instruction (choice vs. directed). The dependent variable was response time. A total of 192 trials were included, with 64 from the directed to lie condition, 64 from the directed to tell the truth condition and 64 from the choice condition. The order of trials was randomised for each participant.
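The trial list described above (192 trials: 64 directed-lie, 64 directed-truth, 64 choice, in random order) could be generated along these lines. This is a minimal sketch, not the authors' code; in particular, the equal red/blue split and the shuffling routine are our assumptions rather than reported details:

```python
import random

def make_trials(n_directed_lie=64, n_directed_truth=64, n_choice=64, seed=None):
    """Build a randomised trial list pairing an instruction cue with a square
    color. The 50/50 red/blue split is assumed, not stated in the paper."""
    rng = random.Random(seed)
    cues = (["LIE"] * n_directed_lie
            + ["TRUTH"] * n_directed_truth
            + ["CHOICE"] * n_choice)
    trials = [(cue, rng.choice(["red", "blue"])) for cue in cues]
    rng.shuffle(trials)  # fresh random order per participant
    return trials
```

Each participant would then receive a freshly shuffled list, satisfying the per-participant randomisation the design calls for.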

The experiment progressed as a series of trials, each of which began with the presentation of one of three words in the centre of the computer screen (LIE, TRUTH or CHOICE). Participants indicated that they understood by pressing the ‘T’ key when presented with the word ‘TRUTH’, the ‘L’ key when presented with the word ‘LIE’, and either the ‘T’ or ‘L’ key when presented with the word ‘CHOICE’, according to whether they chose to lie or tell the truth. Participants were asked to choose to lie and to choose to tell the truth at least 10 times each, to enable data from both responses to be collected. The word remained on the screen until the participant pressed the appropriate button and was then replaced with either a blue or a red square. Participants then had to say either the true color of the square or lie about its color by claiming that it was the opposite color (e.g., blue if it was red). Voice key responses were recorded via a clip microphone. Examples of a directed trial and a choice trial are presented in Figure 1. After the vocal response was made, the next trial began after 500 ms. Instructions were presented on the screen and emphasised the importance of responding both as quickly and as accurately as possible. Participants took part in a practice block of 12 trials identical to the main trials. The question ‘What color is the square?’ was visually presented prior to both the practice block and the block of main trials. All stimuli were presented on a black background, with the squares being of equal size and the text presented in Arial font, size 40.

[Figure 1: pone.0060713.g001.jpg]

Two participants were removed from the analysis because they failed to follow the instruction to choose to lie at least 10 times in the choice condition. All participants chose to tell the truth at least 10 times.

We treated response times greater than 2 s (approximately 3 SDs above the grand mean) as outliers in all of the experiments reported in this paper. Response times longer than this represented an excessively long time to retrieve the name of a color, and this cut-off eliminated a similar number of outliers across conditions. There were 103 outliers in total (less than 3%), 95 of which resulted from microphone problems (the microphone failed to pick up the initial answer). No responses were less than 100 ms. Inaccurate responses (132) were also removed from the analysis. There were 13 (2.0%) inaccurate responses in the choice lie condition and 53 (7.9%) in the choice truth condition, χ²(1) = 25.6, p < .05. There were 36 (2.7%) errors in the directed lie condition and 30 (2.2%) in the directed truth condition, χ²(1) = 0.6, p > .05. In total, 235 out of 3,648 data points were removed from the analysis.

Mean response times for the four possible treatment combinations are presented in Figure 2. In contrast to either of the hypotheses considered above, there appears to be a large difference between truths and lies in the directed condition but not in the choice condition. To test this pattern we conducted a repeated-measures ANOVA with factors of type of instruction and honesty of response. We found a main effect of honesty, with true responses faster than lie responses, F(1,18) = 7.89, p < .05, η² = .31, and a main effect of type of instruction, with responses in the choice condition longer than in the directed condition, F(1,18) = 17.28, p < .001, η² = .49. The interaction was also significant, F(1,18) = 9.97, p < .005, η² = .36. The faster production of true than lie statements was significant in the directed condition (Directed–Truth: M = 758.85, SD = 111.08; Directed–Lie: M = 822.98, SD = 110.86; F(1,18) = 21.88, p < .001, η² = .51), but not in the choice condition (Choice–Truth: M = 854.02, SD = 118.12; Choice–Lie: M = 857.39, SD = 109.83; F(1,18) = 0.40, p = .84, η² < .01, CI = [–32, 38]).
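When both within-subject factors have exactly two levels, each F in a repeated-measures ANOVA of this kind reduces to a one-sample test on per-subject contrast scores, with F(1, n−1) = t². A minimal sketch of that computation follows; the cell means in the test are fabricated for illustration and are not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def contrast_f(scores):
    """F(1, n-1) for testing that the mean of per-subject contrast scores is
    zero; for two-level within-subject factors this equals the RM-ANOVA F."""
    n = len(scores)
    t = mean(scores) / (stdev(scores) / sqrt(n))
    return t * t

def rm_anova_2x2(cells):
    """cells: one tuple per subject of cell means (a1b1, a1b2, a2b1, a2b2).
    Returns F ratios for main effect A, main effect B, and the A x B interaction."""
    a = [(c[0] + c[1]) / 2 - (c[2] + c[3]) / 2 for c in cells]   # main effect A
    b = [(c[0] + c[2]) / 2 - (c[1] + c[3]) / 2 for c in cells]   # main effect B
    ab = [(c[0] - c[1]) - (c[2] - c[3]) for c in cells]          # interaction
    return contrast_f(a), contrast_f(b), contrast_f(ab)
```

Statistical packages report the same F values for 2×2 within-subjects designs; the contrast formulation simply makes explicit that each effect is a paired comparison across subjects.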

[Figure 2: pone.0060713.g002.jpg]

Note: Error bars are standard error.

When directed to lie or tell the truth, participants in our experiment needed on average around 64 ms longer to lie than to tell the truth. This result demonstrates that our paradigm produces data consistent with previous research investigating response time and lying [1], [15], [31]. One way in which this result extends previous work, however, is that the role of the lie construction process was minimal in our experiment: participants did not have to consider what an appropriate lie response might be (the only possible lie response was the alternate color), nor did they have to construct a convincing lie sentence. The most likely explanation for the longer lie times is therefore that participants needed time to suppress the truth when lying.

The main aim of Experiment 1 was to investigate the effects of deciding to lie over being directed to lie. We were interested in whether there was a cost associated with deciding to lie in particular [34] or a general cost associated with having to choose a response rather than being directed. Surprisingly, the findings of Experiment 1 were not consistent with either of these possibilities. Although we observed an interaction between honesty of response and type of instruction, the difference between lying and telling the truth was significantly greater in the directed condition than in the choice condition; indeed, there was no significant difference between lying and telling the truth in the choice condition, and there were significantly more errors for truths in that condition. Before discussing the theoretical implications of these findings, however, we consider one factor that could have obscured the lie/truth difference in the choice condition.

Participants were slower overall when they had to choose their response type than when they were directed, and they made more errors in the choice condition. In the choice condition, participants pressed a button to indicate their choice, whereas in the directed condition they saw the word “truth” or “lie”. Participants therefore received a visual prompt regarding the response type in the directed condition but not in the choice condition. Greater uncertainty about the expected response in the choice condition could therefore explain the longer overall latencies, which could in turn have obscured honesty differences. We address these problems in Experiment 2 by providing a visual prompt in both the choice and directed conditions.

Experiment 2

Experiment 2 used a similar design to Experiment 1 except that participants were given a visual reminder of their decision in the choice condition, just as they were in the directed condition.

Twenty-three Cardiff University students were paid for participation in the experiment. Of these, 14 were female. Participants had a mean age of 21.65 (SD = 4.59; range = 18–37) and spoke English as their first language.

The design of the experiment was the same as in Experiment 1, except that we increased the total number of trials to 200 to ensure an equal number of choice and directed trials overall (100 in the choice condition, 50 in the directed-lie condition and 50 in the directed-truth condition).

The task was a modified version of that described in Experiment 1 and involved the presentation of one of two words in the centre of the computer screen (READY or CHOICE). When the word ‘READY’ was presented, participants were instructed to press the space bar. When the word ‘CHOICE’ was presented, participants could press either the ‘T’ or the ‘L’ key, depending on whether they had chosen to tell the truth (T) or lie (L). On a ‘READY’ trial, the key press was followed by either the letter ‘L’ (for lie) or ‘T’ (for truth) presented in the centre of the screen for one second. On a ‘CHOICE’ trial, the key press was followed by a visual reminder of the key pressed: either an ‘L’ or a ‘T’ presented in the centre of the screen for one second. A colored square then appeared on the screen and the participant reported its true color or lied about it. The time taken to do this was recorded via a voice key. Examples of a directed and a choice trial are presented in Figure 3. The presentation of the visual prompt was the only aspect of the procedure that differed from Experiment 1.

[Figure 3: pone.0060713.g003.jpg]

One participant was removed from the analysis for failing to follow the instruction to choose to lie at least 10 times, leaving a final sample of 22. There were 100 outliers (2.3%) in total, 67 of which resulted from microphone problems; these were removed from the analysis. No responses were less than 100 ms. Inaccurate responses (126) were also removed. There were 25 (2.3%) inaccurate responses in the choice lie condition and 53 (4.8%) in the choice truth condition, χ²(1) = 10.4, p < .05. There were 28 (2.5%) errors in the directed lie condition and 20 (1.8%) in the directed truth condition, χ²(1) = 1.4, p > .05. In total, 226 out of 4,400 data points were removed from the analysis.

Mean response times for the four possible treatment combinations are presented in Figure 4. Overall, telling a lie took longer than telling the truth, F(1,21) = 84.66, p < .001, η² = .80. Choosing how to respond took longer than being directed, F(1,21) = 5.55, p < .05, η² = .21. There was also a significant interaction between the type of instruction and honesty of response, F(1,21) = 5.93, p < .05, η² = .22, such that there was a greater difference between lying and telling the truth in the directed condition (Directed–Truth: M = 668.73, SD = 142.87; Directed–Lie: M = 763.06, SD = 159.57) than in the choice condition (Choice–Truth: M = 707.83, SD = 152.75; Choice–Lie: M = 769.94, SD = 167.12). This shows a similar pattern to Experiment 1, where the lie–truth difference was larger in the directed condition. Simple main effects analysis found that the effect of honesty of response was present in the directed condition, F(1,21) = 80.30, p < .001, η² = .79, and, in contrast to Experiment 1, it was also present in the choice condition, F(1,21) = 31.82, p < .001, η² = .60. Participants also took longer to respond when they chose to tell the truth compared to when they were directed to tell the truth, F(1,21) = 16.65, p < .001, η² = .44, whereas there were no differences in response times when individuals chose to lie compared to when they were directed to lie, F(1,21) = 0.25, p = .62, η² = .01, CI = [–21, 35].

[Figure 4: pone.0060713.g004.jpg]

The results of Experiment 2 provide further support for the finding that telling a lie takes significantly longer than telling the truth. In contrast to the findings of Experiment 1, this occurred both when individuals were directed in their response and when they chose it. Furthermore, the overall cost of choosing a response was much reduced and was restricted to truthful responses: lying was no slower when chosen than when directed. These findings suggest that the extra overall processing cost of making a choice in Experiment 1 was likely due to participants having difficulty recalling their chosen response type. Nonetheless, we observed a significant interaction between type of instruction and honesty of response, and an increase in errors for truths in the choice condition, just as in Experiment 1. The response time difference between lying and telling the truth was smaller when participants chose their response than when they were directed. In particular, participants were slower to respond with the truth when they chose the response than when they were directed to do so, whereas lying was much less affected by the choice manipulation. No explanation based on retrieval of the decision can be invoked, because the visual prompt was identical in both conditions. The choice condition did, however, provide slightly more preparation time, because the interval between the participant making the choice and pressing the appropriate key adds to the 1000 ms preparation period available in both conditions. The fact that there was still a significant difference between the time to lie and the time to tell the truth means that this additional preparation time does not negate the key findings.

Neither of the decision-making mechanisms that we discussed in Experiment 1 was borne out by the data. It is not the case that telling the truth is always the default option and that people have to choose to lie but not to tell the truth; otherwise we would have observed larger differences between truths and lies in the choice condition than in the directed condition. Nor is it the case that needing to choose a response is simply more difficult overall than being directed to respond. The decision mechanism involved in choosing whether to lie is therefore more complex than previously thought [34]. Our suggestion for how the decision mechanism functions is as follows. First, we assume that when people lie they must necessarily suppress the truthful response. This accounts for longer latencies for lies relative to truths in both choice and directed conditions. In addition, when people have to make an active decision about how to respond, the evaluation of the competing response possibilities is likely to invoke conflict-monitoring processes. The conflict of choosing between a truth and a lie response, compared with no such evaluation being required in the directed condition, leads to overall longer response times in the choice condition. This evaluation of competing responses is represented overtly when participants choose between a T and an L response on the keyboard. Once individuals have considered these competing possibilities and made a response decision, the alternative, unused response will then require suppression. This suppression requires additional processing time for both lie and truth responses. Since liars are already suppressing the alternative response (the truth) on directed trials, this suppression represents an additional process on choice trials only for truth tellers, who now have to suppress a lie response.

It should be noted, however, that the findings of these two experiments relate specifically to questions for which only one alternative to the truth is available, such as yes/no questions. These findings have yet to be confirmed with questions involving more than one lie response option, although there is no reason to believe that the overall pattern of findings relating to the decision process would differ.

Experiment 3

In Experiments 1 and 2 participants did not have a choice about which lie they told. When the square was red, for example, they had to lie with “blue”, and vice versa. The lie construction element was therefore minimal. Lying is often more complicated than this, however, because liars have to construct a lie from a range of alternatives, as discussed in the Introduction. Experiment 3 investigated which parts of the lie construction process contribute to longer response times.

We manipulated the range of lie and truth responses available to participants. In one condition, the square could be one of two colors, as in Experiments 1 and 2. This is similar to a yes/no question, such as “Is your hair brown?” In the other condition the square could be one of three colors, similar to a more open-ended question, such as “What color is your hair?” The three-color trials therefore required a choice about which lie to use, whereas the two-color trials did not. All participants were directed whether to lie, as in the directed conditions of Experiments 1 and 2. If the need to choose a lie contributes to the greater difficulty of lying, longer lie response times should be observed in the three-color lie condition than in the two-color lie condition. Alternatively, longer response times might be observed in the three-color condition for both lie and truth responses.

Thirty-six Cardiff University students participated in this study in exchange for payment. Of these, 26 were female. Participants had a mean age of 21.83 (SD = 3.60; range = 18–38) and spoke English as their first language.

We used a 2 x 2 design with honesty of response (lie vs. truth) and number of response possibilities (two-color vs. three-color) as within-subjects factors. The dependent variable was response time. The paradigm consisted of two blocks of trials. The two-color block showed participants one of two colored squares and their lie response could only be the opposite color (hence one possible answer). The three-color block showed participants one of three colored squares and their lie response could be either of the other two colors (therefore a choice of two possible answers). The order of these blocks was counterbalanced across participants to minimise order effects. The color pair that participants were given in the two-color block (red/green, green/blue, blue/red) was also counterbalanced across participants so that all color combinations were present in all conditions. Participants took part in a practice block of 12 trials identical to the main trials. A total of 202 main trials were used in the paradigm: 100 in the two-color condition and 102 in the three-color condition.

As in Experiment 1, the task involved the presentation of one of two words in the centre of the computer screen (LIE or TRUTH) and participants indicated that they understood by pressing the ‘T’ key when presented with the word ‘TRUTH’ and the ‘L’ key when presented with the word ‘LIE’. A colored square (blue, red or green) was then presented. Participants were required to lie or tell the truth about the color seen. Responses were recorded using a voice key. An example trial is shown in Figure 5 .

[Figure 5: example trial sequence (image pone.0060713.g005.jpg)]

There were 181 outliers (2.5%) in total, 62 of which were the result of microphone problems; no responses were faster than 100 ms. All outliers were removed from the analysis, as were inaccurate responses (175 in total). There were 38 (2.1%) inaccurate responses in the two-color lie condition and 50 (2.8%) in the two-color truth condition, χ2(1) = 1.7, p > .05. There were 51 (2.7%) inaccuracies in the three-color lie condition and 36 (2.0%) in the three-color truth condition, χ2(1) = 2.6, p > .05. Altogether, 356 out of 7,272 data points were removed from the analysis.
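
The accuracy comparisons above are Pearson chi-square tests on 2 × 2 counts of inaccurate vs. accurate trials. A minimal stdlib sketch, assuming 1,800 lie and 1,800 truth trials in the two-color block (36 participants × 50 trials per cell, which is consistent with the reported 2.1% and 2.8% rates):

```python
def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]] (1 df)."""
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    n = row1 + row2
    chi2 = 0.0
    for obs, row, col in [(a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)]:
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Two-color block: 38 of 1800 lie trials vs 50 of 1800 truth trials inaccurate
# (the 1800 denominators are our assumption, not stated in the text).
chi2 = pearson_chi2_2x2(38, 1800 - 38, 50, 1800 - 50)
print(round(chi2, 1))  # 1.7, matching the value reported in the text
```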

Mean response times for the four possible treatment conditions are presented in Figure 6. A repeated measures ANOVA was conducted with factors of honesty of response and number of response possibilities. Consistent with Experiment 2, telling a lie took longer than telling the truth, F(1,35) = 139.79, p < .001, η2 = .80. There was also a main effect of number of response possibilities, F(1,35) = 4.11, p < .05, η2 = .10, and a significant interaction, F(1,35) = 31.78, p < .001, η2 = .48, showing that the lie-truth difference was significantly larger in the three-color condition than in the two-color condition. Simple main effects analysis revealed that the effect of honesty of response was significant in the two-color condition, F(1,35) = 46.51, p < .001, η2 = .57, and in the three-color condition, F(1,35) = 112.02, p < .001, η2 = .76. The interaction was driven by longer response times for lying in the three-color condition than in the two-color condition (two-color lie: M = 866.16, SD = 153.13; three-color lie: M = 937.41, SD = 153.07; F(1,35) = 12.51, p < .001, η2 = .26), with no effect of number of possible responses on truthful responding (two-color truth: M = 812.86, SD = 141.86; three-color truth: M = 807.94, SD = 122.67; F(1,35) = 0.11, p = .74, η2 < .01, CI = [−25, 35]).
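
In a 2 × 2 within-subjects design like this one, the interaction F(1, n−1) is equivalent to the squared one-sample t on each participant's difference-of-differences, (lie3 − truth3) − (lie2 − truth2). A stdlib sketch with made-up per-participant cell means (not the real data):

```python
import math

def one_sample_t(xs, mu=0.0):
    """One-sample t statistic for mean(xs) against mu (df = n - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)   # unbiased variance
    return (mean - mu) / math.sqrt(var / n)

# Hypothetical per-participant condition means in ms (four participants).
lie3, truth3 = [940, 930, 950, 929], [800, 810, 805, 815]
lie2, truth2 = [870, 860, 880, 855], [810, 815, 805, 820]

# Interaction contrast: does the lie-truth gap grow with more lie options?
dod = [(l3 - t3) - (l2 - t2)
       for l3, t3, l2, t2 in zip(lie3, truth3, lie2, truth2)]
t = one_sample_t(dod)
F = t ** 2   # the repeated-measures interaction F(1, n-1) equals t squared
```

A positive t here means the lie-truth difference is larger in the three-color condition, which is the pattern the ANOVA above reports.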

[Figure 6: mean response times for the four treatment conditions (image pone.0060713.g006.jpg)]

In order to identify whether participants used one particular color more often than any other, we also examined which colors participants chose when they lied in the three-color condition. Red was chosen 33% of the time, blue 35% of the time and green 31% of the time. However, none of the colors was chosen more often than would be expected by chance, ts(35) < 1.40, ps > .18.

In Experiment 3 we found that lying takes longer than telling the truth in both color conditions. More interestingly, we also found that there was a greater difference between lying and telling the truth in the three-color condition compared to the two-color condition. The interaction was driven by a significant increase in the time taken to lie to three-color compared with two-color questions and a nonsignificant difference in the time taken to tell the truth, consistent with the claim that lie construction is a costly process. Unlike other studies that have tested the difference between different question-types [31] , our findings cannot be explained by differences in question content across conditions.

There are at least two explanations for why we observed a larger cost of lying in the three-color condition compared to the two-color condition. The first is that participants had to choose a lie in the three-color condition but not in the two-color condition (the lie was simply the one remaining option in the two-color condition). Having to make any kind of choice may have slowed participants down. The second is that participants could have been evaluating each of the possible lie responses in turn for their acceptability. Because there were twice as many possible lie responses in the three-color condition compared to the two-color condition, participants would have had to evaluate twice as many possibilities in the three-color condition than the two-color condition. There may be both a fixed cost of choosing and a cost to evaluating each alternative, or there could be one or other. In Experiment 4 we test whether participants evaluate each alternative.

Experiment 4

If participants evaluate each of the possible lie responses in turn, expanding the range of possible lie options should continue to add time onto lie latencies. Conversely, if the cost we observed is a choice cost, expanding the range of options should not result in a proportional increase in lie latencies (there would be a single choice cost regardless of the number of possible lie responses). Experiment 4 tested these explanations by comparing trials with two possible lie responses (a three-color condition, as in Experiment 3) against trials with three possible lie responses (a four-color condition).
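
The two accounts can be expressed as toy response-time models. This sketch is ours, with all parameter values (in ms) purely illustrative, but it captures the contrasting predictions: a fixed choice cost is flat once there is more than one lie option, whereas a per-option evaluation cost keeps growing.

```python
# Illustrative components of a lie response time (values are assumptions).
BASE, SUPPRESS = 750, 60      # truth retrieval + suppressing the truth
CHOICE_COST = 70              # fixed cost of having to choose at all
EVAL_COST = 35                # cost of evaluating each candidate lie in turn

def lie_rt_choice_model(n_options):
    """Choice-cost account: one fixed cost whenever a choice is required."""
    return BASE + SUPPRESS + (CHOICE_COST if n_options > 1 else 0)

def lie_rt_eval_model(n_options):
    """Evaluation-cost account: time grows with every candidate considered."""
    return BASE + SUPPRESS + EVAL_COST * n_options
```

Experiment 4's logic follows directly: under the choice-cost account, two-option (three-color) and three-option (four-color) lies are equally slow, while under the evaluation-cost account the four-color lies should be slower still.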

Thirty-two Cardiff University students participated in this study in exchange for course credit. Of these, 29 were female. Participants had a mean age of 18.94 (SD = 0.95; range = 18–21) and spoke English as their first language.

We used a 2 × 2 within-subjects design, with honesty of response (lie vs. truth) and number of response possibilities (three-color vs. four-color) as factors. The dependent variable was response time. The paradigm consisted of two blocks of trials. The three-color block showed participants one of three colored squares, and their lie response could be either of the other two colors (hence two possible answers). The four-color block showed participants one of four colored squares, and their lie response could be any of the other three colors (hence three possible answers). The order of these blocks was counterbalanced across participants to prevent order effects. The colors that participants were given in the three-color block (red/green/blue, green/blue/purple, blue/purple/red or purple/red/green) were also counterbalanced across participants so that all color combinations were present in all conditions. Participants took part in a practice block of 12 trials identical to the main trials. A total of 202 main trials were used in the paradigm.

The procedure was identical to that used in Experiment 3 except that participants saw one of four colored squares in the four-color condition.

There were 174 outliers (2.7%) in total, 78 of which were due to microphone problems; no responses were faster than 100 ms. All outliers were removed from the analysis, as were inaccurate responses (260 in total). There were 69 (4.3%) inaccurate responses in the three-color lie condition and 75 (4.7%) in the three-color truth condition, χ2(1) = 0.3, p > .05. There were 59 (3.7%) inaccuracies in the four-color lie condition and 57 (3.6%) in the four-color truth condition, χ2(1) = 0.1, p > .05. Altogether, 434 out of 6,464 data points were removed from the analysis.

Mean response times for the four possible treatment combinations are presented in Figure 7. A repeated measures ANOVA was conducted with factors of honesty of response and number of response possibilities. This found a significant main effect of honesty of response, with true responses being faster than lie responses, F(1,31) = 117.06, p < .001, η2 = .79. However, in contrast to the findings of Experiment 3, a further increase in the number of possible lie responses did not affect response times in either the truth condition (three-color truth: M = 728.96, SD = 121.51; four-color truth: M = 726.17, SD = 106.90; F(1,31) = 0.04, p = .84, η2 < .01, CI = [−25, 30]) or the lie condition (three-color lie: M = 875.34, SD = 171.42; four-color lie: M = 888.39, SD = 148.72; F(1,31) = 0.35, p = .56, η2 < .05, CI = [−58, 32]), nor was the interaction between number of response possibilities and honesty of response significant, F(1,31) = 0.57, p = .46, η2 < .02, showing that the lie-truth difference was not significantly larger in the four-color condition than in the three-color condition. A power analysis revealed that if the interaction were as large as that found in Experiment 3, i.e., η2 = .26, we would have had a 99% chance of finding the effect.

[Figure 7: mean response times for the four treatment combinations (image pone.0060713.g007.jpg)]

As in Experiment 3, we investigated how participants chose their lie response. In the three-color block, participants chose red 36% of the time, blue 31% of the time, green 31% of the time and purple 28% of the time. One-sample t-tests found that purple was used less often than would be expected by chance, t(23) = 2.53, p < .05, but that red, blue and green were not, ts(23) < 1.70, ps > .11. In the four-color block, participants chose red 29% of the time, blue 20% of the time, green 27% of the time and purple 18% of the time. One-sample t-tests found that red was used more often than chance, t(31) = 2.28, p < .05, whereas blue, t(31) = 3.18, p < .005, and purple, t(31) = 3.58, p < .001, were used less often than chance. The use of green did not differ significantly from chance, t(31) = 0.83, p = .41.

The results of Experiment 4 support previous findings of increased response times when individuals lie compared to when they tell the truth, regardless of the number of possible lie responses available. We also found that the number of possible lie responses did not significantly affect response times when individuals told the truth, consistent with the results of Experiment 3. Unlike Experiment 3, however, in this experiment no significant differences were demonstrated when individuals lied in the three-color compared to the four-color block and a power analysis indicated that we had a 99% chance of detecting an effect of the same size as that observed in Experiment 3. The processing time difference between questions with multiple response possibilities and those with only one response option is therefore likely to be due to the cost of choosing between lies in working memory, and not due to costs associated with evaluating each possible lie response in turn. We are not arguing that participants will never consider additional lie options in turn (or that lie times will never increase with options greater than three); rather, that the cost of having to choose per se will always be at least part of the extra cost of lying in multiple lie contexts.

It can be argued that individuals use a variety of strategies when generating lies in authentic settings, such as manipulating truthful information [38], and that because our paradigm prevents this, its results may not generalize to authentic settings. Indeed, our paradigm severely limits the available lie responses. However, three points should be considered here. Firstly, many situations require individuals to complete the relatively simple task of choosing a lie response from a predetermined set of possibilities. For example, if asked the color of someone’s hair, individuals can choose between a predetermined set of acceptable hair colors in creating their lie response. Secondly, there are certain situations in which lies are entirely false and do not involve any manipulation of the truth, such as denying recognising a well-known acquaintance. Thirdly, using a different color as the lie response is itself to some extent an alteration of the truth; in this sense, all lies involve a degree of alteration of truthful information, regardless of the specific context of the individual lie. Further considerations relating to lie selection, specifically the differing plausibility and acceptability of particular lies, are addressed in Experiment 5.

Experiment 5

In our previous experiments we showed that choosing between multiple lie responses increases response time. For most lies, however, some responses will be more plausible than others, and the successful liar needs to consider this when selecting a response. The more plausible a response, the more likely it is to be chosen above other possibilities, since plausibility increases the likelihood that a lie will be believed. Implausible responses, like the truth, are unacceptable answers to questions and must therefore be suppressed alongside truthful information. What makes the task even more difficult is that a particular response is not necessarily implausible per se; its plausibility depends on the question asked and the context (much like the truth). For example, “On the moon” would be a perfectly plausible (or truthful) answer to some questions, just not to one about the location of the stolen money. Overall then, in any deceptive interaction there will be particular lies that cannot be used if the deception is to be successful. This discrimination of plausible from implausible lies can be considered a form of rule constraint, with limitations on the particular responses that can be effectively used.

We are not aware of any evidence, however, that directly addresses the question of how implausible responses are discriminated from plausible responses, or how they are suppressed when people lie. One possibility is that plausibility computations are carried out in long term memory and that only plausible responses are transferred to working memory to be articulated. The ADCM assumes a similar process. An alternative, however, is that since lying is arguably an act that works against standard communicative principles [11] , plausibility constraints may have to be implemented at a higher level than other language mechanisms. In order to override the use of truthful information when answering a question, lying may involve explicit, goal-oriented suppression of the default response. This may require distinct processes to be implemented in working memory. Experiment 5 was designed to test between these two accounts.

Participants engaged in a color naming task similar to Experiments 3 and 4. The difference was that in Experiment 5 we introduced constraints on which lies (colors) participants could use. Specifically, we told participants that they would have to name squares of three different colors (red, green and blue) either truthfully or untruthfully, but that they were not allowed to lie with one of the colors (red, say). We therefore had lie and truth trials. In the truth trials participants simply named whatever color was presented, whether green, blue or red. The lie trials were broken down according to the plausibility constraint. When the colored square was the disallowed lie color (red), participants had a choice of two lie possibilities (blue and green). We refer to these as lie control trials because the lie possibilities were the same as if no constraint had been introduced. When the square was one of the allowed lie colors (green, say), participants could not say the prohibited lie color (red) and hence had to use the other lie color (blue). These were lie constraint trials.

If plausibility constraints are implemented in long-term memory, only allowable responses would be transferred into working memory. In the lie control trials this would mean two potential lie responses (green and blue), but in the lie constraint trials only one possible response would be available (green or blue, depending on the square). From Experiment 3 we know that lying with two possible responses is more difficult than lying with only one, hence RTs in the lie control trials should be longer than those in the lie constraint trials. Alternatively, if plausibility constraints are implemented in working memory, participants would have two lie responses in working memory in both conditions. They would then have to explicitly suppress the disallowed lie response in the lie constraint condition, which should take additional time, as it did when participants suppressed the truthful response throughout Experiments 1–4. RTs in the lie constraint condition should therefore be longer than in the lie control condition.
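
The opposed predictions can be written as a toy model. This is our illustration with assumed millisecond values, not fitted parameters: under long-term-memory filtering, constraint trials leave only one lie candidate (so no choice cost), while under working-memory suppression both candidates are always active (so the choice cost applies everywhere and constraint trials pay an extra suppression cost).

```python
# Illustrative RT components (ms values are assumptions, not estimates).
BASE_LIE = 820        # retrieve and suppress the truth, then respond
CHOICE_COST = 60      # choosing between two active lie candidates
WM_SUPPRESS = 50      # explicitly rejecting a disallowed lie in working memory

def predict(account, trial):
    """account: 'ltm' (filtering in long-term memory) or 'wm' (suppression
    in working memory); trial: 'control' or 'constraint'."""
    if account == "ltm":
        # Constraint trials leave only one candidate, so no choice is needed.
        return BASE_LIE + (CHOICE_COST if trial == "control" else 0)
    # WM account: both candidates enter working memory on every trial, so
    # the choice cost is always paid and constraint adds suppression time.
    return BASE_LIE + CHOICE_COST + (WM_SUPPRESS if trial == "constraint" else 0)
```

The accounts thus predict opposite orderings: control slower than constraint under the LTM account, constraint slower than control under the WM account.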

Thirty undergraduate psychology students volunteered for this study in exchange for course credit. Of these, 29 were female. Participants had a mean age of 20 (SD = 3.2; range = 18–33) and spoke English as their first language.

A 2 × 2 within-subjects design was used, with honesty of response (truth vs. lie) and plausibility (constraint vs. control) as factors. The dependent variable was response time, measured in milliseconds (ms). A total of 408 trials were included in the main experimental task: 68 from the lie control condition, 68 from the truth control condition, 136 from the lie constraint condition and 136 from the truth constraint condition. The order of trials was randomised for each participant.

A paradigm similar to that of Experiments 3 and 4 was used, with the presentation of either the word TRUTH or LIE in the centre of the computer screen. Once again, participants pressed the ‘T’ key when presented with the word TRUTH and the ‘L’ key when presented with the word LIE. This was followed by the presentation of either a blue, green or red square. As before, participants then had to say either the true color of the square or lie by claiming that it was a different color. Prior to the main trials, participants completed a short practice block containing 4 trials.

In contrast with our previous experiments, participants were instructed that they could only use two of the presented colors as their lie response and could not use the third color as a lie answer (e.g., participants could lie with green or blue but not red). The particular color (red, blue or green) that participants were instructed against using as a lie was counterbalanced across participants.

There were 264 outliers (2.2%) in total, 256 of which were due to microphone problems; no responses were faster than 100 ms. All outliers were removed from the analysis, as were inaccurate responses (363 in total). Overall, there were 55 (2.7%) inaccurate responses when participants lied in the control condition and 53 (2.6%) when participants told the truth in the control condition, χ2(1) = 0.1, p > .05. There were 162 (4.0%) when participants lied in the constraint condition and 93 (2.3%) when participants told the truth in the constraint condition, χ2(1) = 19.2, p < .001. In total, 627 out of 11,970 data points were removed from the analysis.

A repeated measures ANOVA was conducted with honesty (truth vs. lie) and plausibility (constraint vs. control) as within-subjects factors. A main effect of honesty was demonstrated, F(1,29) = 145.52, p < .001, η2 = .83, such that lie response times were significantly longer than truth response times for both control and constraint trials. In addition, there was a main effect of plausibility, F(1,29) = 14.89, p < .005, η2 = .34, and a significant interaction between honesty and plausibility, F(1,29) = 23.27, p < .001, η2 = .44, such that the lie-truth difference was significantly larger in the control condition than in the constraint condition. This interaction was due to significantly longer response times when participants lied in the control condition compared with the constraint condition (lie control: M = 909.56, SD = 175.51; lie constraint: M = 860.16, SD = 151.06; F(1,29) = 40.48, p < .001, η2 = .58), a finding consistent with constraints being applied in long-term memory. Little difference was shown between the two conditions when individuals told the truth (truth control: M = 762.73, SD = 148.29; truth constraint: M = 774.53, SD = 156.15; F(1,29) = 2.06, p = .162, η2 = .07). Mean response times for the four possible treatment combinations are shown in Figure 8.

[Figure 8: mean response times for the four treatment combinations (image pone.0060713.g008.jpg)]

The main effect of honesty of response shown in our previous experiments was also demonstrated in Experiment 5, with lying taking longer than telling the truth in both the constraint and control conditions. Two main predictions were considered regarding the choice between lie possibilities in relation to response plausibility. These focused on whether implausible lies entered working memory and were considered in the decision process, or whether such responses were inhibited prior to this in long-term memory systems. Our findings support the latter hypothesis because there were significantly longer lie responses in lie control trials compared to lie constraint trials. If both implausible and plausible lies were transferred to, and active in working memory, then a choice would be required between them (as seen in Experiment 3). This would result in little response time difference between the lie control and lie constraint conditions, since a choice would be required between two possible responses in both conditions. Our findings suggest instead that the implausible lie response is inhibited prior to this decision process, so a decision between the two possibilities is not required (since only one color can be plausibly used). This supports the suggestion (consistent with the ADCM) that implausible lies are inhibited in long-term memory and only plausible lies enter working memory systems.

General Discussion

The aim of the current study was to investigate the cognitive processes that occur when people lie. Telling a lie typically takes longer than telling the truth and we were interested in understanding why. We organised our experiments around three potential contributing factors: suppressing a truthful response; the decision to lie; and the construction of a lie. We now summarize our results and describe their implications with respect to these factors.

Suppression of the truthful response

In all of our experiments in which participants were instructed to lie, lying response times were longer than truthful response times. More interestingly, we observed this result under conditions in which many of the factors that are usually considered to slow down lying were absent. In particular, participants did not need to construct a plausible lie (in Experiments 1 and 2 only one possible lie response was available) nor did they need time to decide to lie (Experiments 3, 4 and 5 removed the decision process completely). According to models such as the ADCM Revised [34] , the only process left to explain longer lie response times is that the truthful response needs to be suppressed. Our experiments therefore provide direct evidence that suppression of the truthful response is a contributing factor to longer lie response times.

While we agree that suppression is part of the explanation, it is important to outline the different mechanisms by which suppression might lead to slower response times. One possibility is that lying is a multi-stage, serial process: the truthful response is retrieved and enters working memory first; it is then rejected (because a lie is needed); and a lie response is then retrieved. Telling the truth, in contrast, is a single-stage process in which the truthful response is retrieved and enters working memory. Under this account, the difference in response times between lies and truths is due to having to retrieve two responses in the lie condition (the truth and the lie) but only one in the truth condition (the truth). An alternative but similar proposal is that lying involves rejecting a response, whereas telling the truth does not; perhaps rejection is a conscious process that takes time.
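
The serial-stage account reduces to stage counting. A toy sketch of our own, with illustrative stage durations: truth-telling runs one retrieval stage, lying runs three (retrieve truth, reject it, retrieve the lie), and the lie-truth difference is simply the summed duration of the extra stages.

```python
# Toy serial-stage account of lying; stage durations (ms) are assumptions.
STAGE_MS = {"retrieve_truth": 600, "reject_truth": 90, "retrieve_lie": 160}

def rt(stages):
    """Total response time is the sum of the serially executed stages."""
    return sum(STAGE_MS[s] for s in stages)

truth_rt = rt(["retrieve_truth"])                                # one stage
lie_rt = rt(["retrieve_truth", "reject_truth", "retrieve_lie"])  # three stages
lie_cost = lie_rt - truth_rt   # predicted lie-truth difference (rejection + second retrieval)
```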

A more distinct alternative is that the processes underlying suppression of the truth occur in parallel in long-term memory, rather than serially in short-term memory. Assuming that response time is determined by variation in activation levels across the response possibilities (with large differences in activation levels being associated with short response times), reducing the activation of the truthful response might reduce overall variation in activation levels. This would make it more difficult to generate a response when lying than when telling the truth because it would be more difficult to select one response over the others. While this might explain why lying takes longer than telling the truth on some occasions, it is unlikely to be a general explanation. First, recent brain imaging research has found increased activation of brain areas associated with working memory when individuals lie [22]. The extra cost of lying cannot therefore be restricted to long-term memory under all circumstances. Second, lying involves deliberately choosing not to say the truth [46]. Since working memory is typically associated with conscious awareness [47], lying should involve truthful responses entering working memory (and being suppressed in working memory).
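
The parallel-activation account can also be sketched. In this toy model of ours, selection time shrinks as the gap between the winning response's activation and its strongest competitor grows; the 1/gap form and all numbers are illustrative assumptions, not parameters from the literature.

```python
# Toy parallel-activation account: suppressing the truth in long-term memory
# shrinks the activation gap between candidates, slowing response selection.
def selection_time(activations, base=500.0, k=100.0):
    """Response time falls as the winner's lead over the runner-up grows."""
    ordered = sorted(activations, reverse=True)
    gap = ordered[0] - ordered[1]
    return base + k / gap

# Truth-telling: the truth is highly active, the lie candidate weakly active.
truth_rt = selection_time([1.0, 0.2])
# Lying: suppression pulls the truth's activation down toward the lie's,
# leaving a small gap, so selecting the lie takes longer.
lie_rt = selection_time([0.7, 0.5])
```

The model reproduces the qualitative claim in the text: compressing activation levels makes it harder to select one response over the others, lengthening lie latencies.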

The two types of suppression that we have identified may both be correct but apply under different circumstances. Serial suppression in working memory is likely to be the more standard, day-to-day type of suppression in which a speaker lies to an unexpected question on a single occasion. However, if a speaker has to lie on multiple occasions to the same question, or they are in a situation in which lying is likely to be common and expected, they may be able to suppress truthful answers in long-term memory, almost “forgetting” the truth because the lie response has been so frequently associated with a given question.

The decision to lie

Experiments 1 and 2 tested the role of the decision process by comparing response times in trials in which participants chose to lie with trials in which they were directed to lie. While we found effects of deciding to lie in both of our experiments, we discovered that there was a much greater cost to deciding to tell the truth than deciding to lie, relative to the cost of being directed in the response. Thus, although it has been suggested that the decision contribution to elevated lie response times is at least partially determined by the difficulty of lying [34], our data show that this process also occurs for decisions related to truthful responses. Our general view is therefore that there is no cost of deciding to lie per se, but there is a cost to choosing to depart from the norm for that context. Most of the time when people lie they will be departing from a truth-telling context, which is likely to incur a cost, but in some contexts, e.g., interrogation situations or playing poker, delays may be experienced when the decision is taken to tell the truth.

One caveat to our conclusion is that when people choose to lie they often do so on the basis of the question that they are asked, whereas in our experiments the choice was internally driven. For example, a person may choose to lie to questions about the whereabouts of a suspect but not about their own activities. Evaluating the content of the question is a component of the decision process which is not included in our task. It could therefore be that the evaluation component of the decision process contributes to elevated lie latencies. However, we feel that this cost is also caused by a departure from the normal communicative stance. This is because if the person would normally tell the truth, the question needs to be evaluated in order to decide to lie, but if the person expects to lie, the question needs to be evaluated in order to decide whether to tell the truth. Thus, the departure from the norm is the causal factor, not the decision to lie.

We also observed longer response times when participants told the truth in the choice condition compared to the directed condition. This occurred across both experiments and therefore was not related to differential visual availability of the response type across conditions. As a consequence of this effect, the difference between lying and telling the truth was greatly diminished in the choice conditions (to the extent that we did not observe a significant difference in Experiment 1). What is different about choosing to lie compared to being directed to lie? One hypothesis is that choosing to lie means considering both lie and truthful responses. For example, when deciding whether to lie to a red square, the responses “blue” (the lie) and “red” (the truth) become activated. Consequently, in our study, there was a small (or nonexistent) response time difference between truthful and lie responses in the choice condition because both responses were highly activated whichever response was made. In contrast, being directed to tell the truth means that only the truthful response becomes activated (there is no need to consider and suppress the lie response), whereas being directed to lie means that both the truth and the lie response become activated (the truth is always activated). In other words, both response types were activated in the choice-lie, directed-lie, and choice-truth conditions, but only the truth was activated in the directed-truth condition.

Finally, these results should be considered in relation to practical situations. In almost all lie detection work, participants are directed to lie or tell the truth rather than choosing to do so, whereas when people lie in everyday situations they choose to lie rather than being directed. Our experiments show that the difference between lying and telling the truth is much smaller when participants are given a choice. This should certainly be considered in further work targeted at more practical settings, since chosen lies may be less detectable by automated lie detection techniques.

The construction of a lie

There is a strong intuition that lying takes longer than telling the truth because lies need to be constructed whereas truths do not. Yet, the evidence we reviewed in the Introduction was inconclusive about why, or even whether, this was the case. Our experiments make two novel contributions to understanding the construction component of lying.

First, having to make a choice about which lie to use from many arbitrary possibilities is difficult. Experiments 3 and 4 demonstrated that when participants had to choose a lie they were slow at responding, but, crucially, the same range of response options did not slow truthful responses. Even after hundreds of trials, and with only two choices, participants experienced difficulty in making an arbitrary choice when they were forced to lie. It seems that part of what makes lying difficult is resolving all of the inconsequential decisions that are needed in order to construct a story. When telling the truth, the “decisions” are determined by fact, or by memory, and are therefore relatively resource-free.

Second, and somewhat conversely, when there is a clear preference about which lie is the most appropriate, lying is relatively easy. In Experiment 5 we found that when participants were prevented from using one lie response out of two (but were required to use both responses when stating the truth), participants behaved as if there were only one possible lie available. Rejection of the implausible lie occurred in long-term memory, as if no choice between lies was necessary. One caveat to this result is that our effects were obtained over many trials with the same plausibility constraint applied on each occasion. It may be that making plausibility assessments in unrehearsed lie situations is much more difficult. We leave this investigation to future research, however.

Our results on lie construction additionally make one suggestion that contrasts with previous claims that yes/no questions provide better indicators of deceit than open-ended questions [1], [31], [34]. These claims are based on findings of greater response time differences between lies and truths when participants lied to yes/no compared to open-ended questions. In contrast, we found a greater difference for questions with more than one possible lie response. We suggest that the different patterns arose because different methodologies were used across studies. In our experiments, participants answered the same type of question in both conditions and the truthful answer was equally accessible across conditions. In the papers cited above, however, different types of questions were asked across conditions and the truthful answer could have been more difficult to retrieve in the open-ended questions (hence truthful response times were longer in the open-ended condition). While we agree that the difficulty of retrieving truthful information contributes to the response time difference between lies and truths, we consider this issue orthogonal to the issue of yes/no vs. open-ended questioning. The results of our experiments on lie construction suggest that an interviewee may need more time to lie to an open-ended question than to a yes/no question, ceteris paribus, because they need to choose which lie to use in the open-ended case but not in the yes/no case. Before any firm conclusions can be drawn regarding the effect of question type on the optimisation of deception detection, however, the likely accessibility of truthful information and the situational context should be further examined.

Limitations and future directions

The paradigm that we used appears quite different to the usual methods of investigating how people lie [10], [48]. For example, participants were not asked to lie about personal information, nor was there an interlocutor present asking questions. Further, there was no incentive to lie, which should have meant that there were no stress effects. We argued in the introduction that the method we employed is a powerful technique without which we would not have been able to address the detailed processing questions discussed above. It is important, however, to consider the relationship between our task and lying outside of the laboratory.

Similar to many cognitive experiments [15], [21], [29], [49], our paradigm did not require participants to engage in the direct deception of another individual. They were producing verbal responses recorded by a computer, and there was no human “addressee” to fool. While this procedure means that participants may have felt that the task was different to lying in everyday life, they were performing operations that must necessarily be present in even the simplest of lies, independently of both the intention and motivation to deceive. What is important is that participants in our study intentionally and knowingly produced falsehoods. While there are situations in which a person can knowingly produce falsehoods without lying (e.g., when both parties are aware of the falsehood), there are very few situations in which lies are produced without falsehoods [50]. Clearly, however, it is possible that the effects found in our experiments may interact with, or be overshadowed by, the affective components of lying, such as guilt, stress or negative emotions in general. Future studies may be able to test these interactions by, for example, inducing negative moods in participants in the laboratory [51], [52].

Atypically for research in deception, participants in the current study had to lie when a representation of the truth was in front of them. For example, participants had to lie “red” when the truth, a yellow square, was present on the screen (compare this with a study in which participants are asked to lie about having performed an everyday act [53]). One likely effect of having the visual stimulus on the screen would be to make it more difficult to suppress the truthful response when lying. This design therefore maximised the suppression effect so that we could manipulate particular components of the lie process. Despite the likelihood of larger effects, however, there is no reason why the overall difficulty should have interacted with the difference between choosing to lie and being directed to lie (Experiments 1 and 2) or the difference between one and two or three plausible and implausible lie possibilities (Experiments 3, 4 and 5). Both lying about a visual stimulus and lying about the content of memory involve suppression of the truthful response, and the experiments reported here investigated this suppression. Furthermore, participants were not presented with the color name (i.e., a possible response), only a colored square. This meant that the truthful response still needed to be recalled from memory, just as if we had asked them what they were up to the night before last.

Lastly, we acknowledge that only a single cue to deception was used as a measure of cognitive load. Although response times are a well regarded measure of cognitive processing, other researchers have recommended the use of multiple cues to detect deceit [54] , including blink rate [55] and body movements [56] , and this should be considered in practical lie detection settings.

Despite the wealth of research investigating lying in general, such as lie detection [37], the social psychology of lying [3], [4] and the linguistics and philosophy of lying [50], very little work has been conducted on how we lie. Our study has tried to address this imbalance by investigating why people take longer to lie than to tell the truth. We come to three conclusions. First, lying involves suppressing truthful information, and suppressing or rejecting a default response increases response time. Second, there can be costs associated with choosing to tell the truth, just as there can be with choosing to lie. We therefore maintain that the decision to depart from the normal type of communication can be costly, and while this will often be a cost associated with a decision to lie, it is not an obligatory component of lying. Lastly, lying often requires more choice in generating a response than telling the truth. There is typically only one truth but there are many possible lie options. Making a choice about which lie to use is difficult and contributes to the longer time needed to tell a lie.

Acknowledgments

Emma J. Williams, Lewis A. Bott, John Patrick and Michael B. Lewis are at the School of Psychology, Cardiff University, Cardiff, UK. We would like to thank Peter Talbot-Jones of EADS for his support and encouragement in this research.

Funding Statement

This work was conducted as part of a PhD study undertaken by the first author. The PhD was funded by the European Aeronautic Defence and Space Foundation Wales (grant number RCPS400; http://eadsfoundation.com/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.


Many people think that liars will give themselves away through nervous mannerisms like shrugging, blinking or smiling. But the scientific evidence tells a different story.

The truth about lying

You can’t spot a liar just by looking — but psychologists are zeroing in on methods that might actually work

By Jessica Seigel 03.25.2021


Police thought that 17-year-old Marty Tankleff seemed too calm after finding his mother stabbed to death and his father mortally bludgeoned in the family’s sprawling Long Island home. Authorities didn’t believe his claims of innocence, and he spent 17 years in prison for the murders.

Yet in another case, detectives thought that 16-year-old Jeffrey Deskovic seemed too distraught and too eager to help detectives after his high school classmate was found strangled. He, too, was judged to be lying and served nearly 16 years for the crime.

One man was not upset enough. The other was too upset. How can such opposite feelings both be telltale clues of hidden guilt?

They’re not, says psychologist Maria Hartwig, a deception researcher at John Jay College of Criminal Justice at the City University of New York. The men, both later exonerated, were victims of a pervasive misconception: that you can spot a liar by the way they act. Across cultures, people believe that behaviors such as averted gaze, fidgeting and stuttering betray deceivers.

In fact, researchers have found little evidence to support this belief despite decades of searching. “One of the problems we face as scholars of lying is that everybody thinks they know how lying works,” says Hartwig, who coauthored a study of nonverbal cues to lying in the Annual Review of Psychology. Such overconfidence has led to serious miscarriages of justice, as Tankleff and Deskovic know all too well. “The mistakes of lie detection are costly to society and people victimized by misjudgments,” says Hartwig. “The stakes are really high.”

Tough to tell

Psychologists have long known how hard it is to spot a liar. In 2003, psychologist Bella DePaulo, now affiliated with the University of California, Santa Barbara, and her colleagues combed through the scientific literature, gathering 116 experiments that compared people’s behavior when lying and when telling the truth. The studies assessed 102 possible nonverbal cues, including averted gaze, blinking, talking louder (a nonverbal cue because it does not depend on the words used), shrugging, shifting posture and movements of the head, hands, arms or legs. None proved reliable indicators of a liar, though a few were weakly correlated, such as dilated pupils and a tiny increase — undetectable to the human ear — in the pitch of the voice.

Three years later, DePaulo and psychologist Charles Bond of Texas Christian University reviewed 206 studies involving 24,483 observers judging the veracity of 6,651 communications by 4,435 individuals. Neither law enforcement experts nor student volunteers were able to pick true from false statements better than 54 percent of the time — just slightly above chance. In individual experiments, accuracy ranged from 31 to 73 percent, with the smaller studies varying more widely. “The impact of luck is apparent in small studies,” Bond says. “In studies of sufficient size, luck evens out.”
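Bond’s point about luck can be illustrated with a toy simulation (my own sketch, not part of the DePaulo and Bond analysis): if every judge truly succeeds 54 percent of the time, small studies will still report accuracies scattered far from 54 percent purely by chance, while large studies converge on the true rate — much like the 31 to 73 percent spread observed across individual experiments.

```python
import random

random.seed(1)

def study_accuracy(n_judgments, true_rate=0.54):
    """Simulate one study: each of n_judgments lie/truth calls is
    correct with probability true_rate; return observed accuracy."""
    hits = sum(random.random() < true_rate for _ in range(n_judgments))
    return hits / n_judgments

# Run 1,000 simulated studies at each size and report the spread.
for n in (20, 200, 2000):
    results = [study_accuracy(n) for _ in range(1000)]
    print(f"n={n:4d}: min={min(results):.0%}  max={max(results):.0%}")
```

Running this shows the range of observed accuracies shrinking sharply as study size grows, even though the underlying skill never changes — Bond’s “luck evens out” in numerical form.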

This size effect suggests that the greater accuracy reported in some of the experiments may just boil down to chance, says psychologist and applied data analyst Timothy Luke at the University of Gothenburg in Sweden. “If we haven’t found large effects by now,” he says, “it’s probably because they don’t exist.”

Common wisdom has it that you can spot a liar by how they sound or act. But when scientists looked at the evidence, they found that very few cues actually had any significant relationship to lying or truth-telling. Even the few associations that were statistically significant were not strong enough to be reliable indicators.

Police experts, however, have frequently made a different argument: that the experiments weren’t realistic enough. After all, they say, volunteers — mostly students — instructed to lie or tell the truth in psychology labs do not face the same consequences as criminal suspects in the interrogation room or on the witness stand. “The ‘guilty’ people had nothing at stake,” says Joseph Buckley, president of John E. Reid and Associates, which trains thousands of law enforcement officers each year in behavior-based lie detection. “It wasn’t real, consequential motivation.”

Samantha Mann, a psychologist at the University of Portsmouth, UK, thought that such police criticism had a point when she was drawn to deception research 20 years ago. To delve into the issue, she and colleague Aldert Vrij first went through hours of videotaped police interviews of a convicted serial killer and picked out three known truths and three known lies. Then Mann asked 65 English police officers to view the six statements and judge which were true and which were false. Since the interviews were in Dutch, the officers judged entirely on the basis of nonverbal cues.

The officers were correct 64 percent of the time — better than chance, but still not very accurate, she says. And the officers who did worst were those who said they relied on nonverbal stereotypes like “liars look away” or “liars fidget.” In fact, the killer maintained eye contact and did not fidget while deceiving. “This guy was clearly very nervous, no doubt,” Mann says, but he controlled his behavior to strategically counter the stereotypes.


In 1990, five young men were convicted of raping a jogger in New York’s Central Park the year before, after police disbelieved their claims of innocence. The men, popularly known as the Central Park Five, were completely exonerated of the crime and released in 2002 after years in prison. Here, they appear at an awards luncheon of the American Civil Liberties Union in 2019.


In a later study, also by Mann and Vrij, 52 Dutch police officers did no better than chance at distinguishing true and false statements given by family members who’d murdered their relatives but denied it in anguished displays during televised press conferences used in the study. Notably, officers who performed the worst were those who felt that the emotional displays were genuine. But what did that signify? “If a husband killed his wife, he could be upset for all kinds of reasons, like remorse or getting caught,” says Mann. “Nonverbal behavior is so idiosyncratic. If you focus on the nonverbal behavior like emotions, you will trip up.”

Confirming these results on a large scale years later, Hartwig and Bond reviewed the literature for studies comparing people’s abilities to detect high- and low-stakes lies. They found no evidence that people were any better at detecting lies told by criminals or wrongly accused suspects in police investigations than those told by laboratory volunteers.

Something to hide

From serial killers to students fibbing in laboratory experiments, the lies in all these experiments were spoken. In a study published in 2019, Mann examined nonverbal deceit, such as when someone is trying to conceal illicit activity — a type of deception relevant to detecting bombers or smugglers. She recruited 52 university student volunteers and asked half of them to transport a laptop said to contain sensitive photographs on a ferry ride between two cities. The volunteers were told to try to blend into the crowd and not look “suspicious,” because people would be trying to identify them. The other half of the group were given a regular cell phone to transport, with no instructions to conceal what they were doing.

When Mann showed videos of the ferry rides to 104 other volunteers and asked them to pick out the “smugglers,” the spotters did no better than chance. In interviews afterward, the “smugglers” said they were nervous, but they consciously tried to act normal and control their nerves with tactics such as listening to music or using their phones.

Next, Mann raised the stakes. Half of a new group of 60 volunteers were given an envelope of Russian, Egyptian and Korean currency to conceal, while the other half didn’t “smuggle” anything. But this time, Mann sent two researchers onto the ferry to pace around and scrutinize the passengers, appearing to compare their faces to photos on a cell phone.

This time, 120 observers trying to pick out the “smugglers” on video guessed correctly just 39.2 percent of the time — well below chance. The reason, Mann says, is that the “smugglers” consciously made an effort to look normal, while the “innocent” control volunteers just acted naturally. Their surprise at the unexpected scrutiny looked to the observers like a sign of guilt.

The finding that deceivers can successfully hide nervousness fills in a missing piece in deception research, says psychologist Ronald Fisher of Florida International University, who trains FBI agents. “Not too many studies compare people’s internal emotions with what others notice,” he says. “The whole point is, liars do feel more nervous, but that’s an internal feeling as opposed to how they behave as observed by others.”

Studies like these have led researchers to largely abandon the hunt for nonverbal cues to deception. But are there other ways to spot a liar? Today, psychologists investigating deception are more likely to focus on verbal cues, and particularly on ways to magnify the differences between what liars and truth-tellers say.


Marty Tankleff, in white sweater, being released from prison after serving 17 years wrongfully convicted of murdering his parents. Officials thought that Tankleff’s claims of innocence must have been lies because he didn’t show enough emotion. There is no good evidence that you can reliably spot a liar by the way they act.


For example, interviewers can strategically withhold evidence longer, allowing a suspect to speak more freely, which can lead liars into contradictions. In one experiment, Hartwig taught this technique to 41 police trainees, who then correctly identified liars about 85 percent of the time, as compared to 55 percent for another 41 recruits who had not yet received the training. “We are talking significant improvements in accuracy rates,” says Hartwig.

Another interviewing technique taps spatial memory by asking suspects and witnesses to sketch a scene related to a crime or alibi. Because this enhances recall, truth-tellers may report more detail. In a simulated spy mission study published by Mann and her colleagues last year, 122 participants met an “agent” in the school cafeteria, exchanged a code, then received a package. Afterward, participants instructed to tell the truth about what happened gave 76 percent more detail about experiences at the location during a sketching interview than those asked to cover up the code-package exchange. “When you sketch, you are reliving an event — so it aids memory,” says study coauthor Haneen Deeb, a psychologist at the University of Portsmouth.

The experiment was designed with input from UK police, who regularly use sketching interviews and work with psychology researchers as part of the nation’s switch to non-guilt-assumptive questioning, which officially replaced accusation-style interrogations in the 1980s and 1990s in that country after scandals involving wrongful conviction and abuse.


Slow to change

In the US, though, such science-based reforms have yet to make significant inroads among police and other security officials. The US Department of Homeland Security’s Transportation Security Administration, for example, still uses nonverbal deception clues to screen airport passengers for questioning. The agency’s secretive behavioral screening checklist instructs agents to look for supposed liars’ tells such as averted gaze — considered a sign of respect in some cultures — and prolonged stare, rapid blinking, complaining, whistling, exaggerated yawning, covering the mouth while speaking and excessive fidgeting or personal grooming. All have been thoroughly debunked by researchers.

With agents relying on such vague, contradictory grounds for suspicion, it’s perhaps not surprising that passengers lodged 2,251 formal complaints between 2015 and 2018 claiming that they’d been profiled based on nationality, race, ethnicity or other reasons. Congressional scrutiny of TSA airport screening methods goes back to 2013, when the US Government Accountability Office — an arm of Congress that audits, evaluates and advises on government programs — reviewed the scientific evidence for behavioral detection and found it lacking, recommending that the TSA limit funding and curtail its use. In response, the TSA eliminated the use of stand-alone behavior detection officers and reduced the checklist from 94 to 36 indicators, but retained many scientifically unsupported elements like heavy sweating.


An officer of the US Transportation Security Administration watches travelers at an airport. The agency still uses behavioral indicators to pick out suspicious people, even though this has little scientific basis.


In response to renewed Congressional scrutiny, the TSA in 2019 promised to improve staff supervision to reduce profiling. Still, the agency continues to see the value of behavioral screening. As a Homeland Security official told congressional investigators, “common sense” behavioral indicators are worth including in a “rational and defensible security program” even if they do not meet academic standards of scientific evidence. In a statement to Knowable, TSA media relations manager R. Carter Langston said that “TSA believes behavioral detection provides a critical and effective layer of security within the nation’s transportation system.” The TSA points to two separate behavioral detection successes in the last 11 years that prevented three passengers from boarding airplanes with explosive or incendiary devices.

But, says Mann, without knowing how many would-be terrorists slipped through security undetected, the success of such a program cannot be measured. And, in fact, in 2015 the acting head of the TSA was reassigned after Homeland Security undercover agents in an internal investigation successfully smuggled fake explosive devices and real weapons through airport security 95 percent of the time.

In 2019, Mann, Hartwig and 49 other university researchers published a review evaluating the evidence for behavioral analysis screening, concluding that law enforcement professionals should abandon this “fundamentally misguided” pseudoscience, which may “harm the life and liberty of individuals.”

Hartwig, meanwhile, has teamed with national security expert Mark Fallon, a former special agent with the US Naval Criminal Investigative Service and former Homeland Security assistant director, to create a new training curriculum for investigators that is more firmly based in science. “Progress has been slow,” Fallon says. But he hopes that future reforms may save people from the sort of unjust convictions that marred the lives of Jeffrey Deskovic and Marty Tankleff.

For Tankleff, stereotypes about liars have proved tenacious. In his years-long campaign to win exoneration and recently to practice law, the reserved, bookish man had to learn to show more feeling “to create a new narrative” of wronged innocence, says Lonnie Soury, a crisis manager who coached him in the effort. It worked, and Tankleff finally won admittance to the New York bar in 2020. Why was showing emotion so critical? “People,” says Soury, “are very biased.”

Editor’s note: This article was updated on March 25, 2021, to correct the last name of a crisis manager quoted in the story. His name is Lonnie Soury, not Lonnie Stouffer.

10.1146/knowable-032421-1


Susan Krauss Whitbourne Ph.D.

The Truth in the Newest Theory on Lying

A new approach shows the 4 thought processes that liars use to try to fool you.

Posted July 31, 2021 | Reviewed by Tyler Woods

  • Lying is a common feature of everyday life, leading researchers to propose that "everybody lies."
  • Cognitive psychology proposes that liars use four steps to produce their falsehoods.
  • A new study tests this cognitive model of deception by watching how liars behave in the lab.


Lying is a topic that has risen in prominence as ordinary people try to figure out who to trust in their political leaders and scientific experts. You hear commentary on “The Big Lie” referring to the claim by Republicans that the 2020 U.S. presidential election was rigged. You probably also hear a considerable amount of debate regarding the COVID-19 vaccine. According to coronavirus conspiracy theories, there are claims that the vaccine might actually change your DNA, allow the government to track you, or just plain not work.

Closer to home, you may have people in your life who subscribe to these beliefs, or you may feel confused yourself about whether they have any validity. Apart from these potential “mega” lies, there can also be far smaller but still insidious lies that your friends, relatives, or coworkers seem to be guilty of committing. You hear excuses that you’re not sure whether to believe, claims that appear to be a bit of a stretch, and even gossip about others that seems both cruel and outlandish. A friend tells you that another friend is having an affair. But you know this friend well enough to take the news with a grain of salt. Or should you? Maybe you’ve missed some obvious cues that your potentially cheating friend isn’t as trustworthy as you thought.

A New Theory of Deception

According to Louisiana Tech University’s Jeffrey Walczyk and Natalie Cockrell (2021), when researchers put deception under the microscope in lab settings, they typically define the behavior of lying as “intentionally erring and the inhibition of truth.” For example, a participant chooses a set of 5 out of 10 pictures and either lies or tells the truth when prompted to say whether they have a given picture or not. This intentional type of deception, responding incorrectly, doesn’t include what may happen in real life when people try to use what they know about their targets as the basis for a fabrication. For example, what that gossiper is telling you about your friend’s partner isn’t just “incorrect”; it is an attempt to manipulate you into believing something bad about this person as a mean and jealous ploy.

The theory of deception that the Louisiana Tech researchers test is intended to account both for intentional errors (such as the pictures) and “purposely inducing false beliefs in others to achieve social goals” (i.e., the gossiper).

The ADCAT model, as described by Walczyk and Cockrell, stands for “Activation-Decision-Construction-Action Theory.” Rather than just accounting for what you might consider a lazy lie, such as intentionally providing a wrong answer, ADCAT explains what happens when people lie in “high stakes” social contexts, where the deceiver lies rather than admit a truth that would carry significant negative consequences. Examples of these high-stakes lies include making something up in a job interview to conceal some ugly truth from the past, or making a false excuse to a romantic partner to cover up bad behavior such as a one-night fling or spending too much money on a little gift for oneself.

The Four Components of Deception

Taking apart the pieces of ADCAT, the Louisiana Tech researchers describe each as follows:

Activation: When you’re asked to provide information, such as the details of a certain night by your partner, you first have to retrieve that information from your long-term memory. If you think that this information might be incriminating, then you have to take the extra step of leaving out the details or just making something up that sounds plausible and won’t get you in trouble. Maybe in a job interview, you’re asked about a period of time not listed on your resume. Now you have to think of some reason other than what you guess the interviewer might think is a problem, such as having taken a few years off just to travel. The added “cognitive load” means that you won’t respond automatically but instead might take a minute or two to figure out what to say.

Decision: At this point, having conjured up your cover-up, you have to choose whether or not to use it in your answer to the interviewer’s question. How much will you lose by admitting that you just wanted to bum around for a while without any responsibilities? Or could telling the truth make you appear to have a fun, adventurous side? According to ADCAT, if the cost of honesty is higher than the reward, you will lie. You’ll give some other reason, such as having to care for a sick relative. In part, though, you’re also trying to judge how the interviewer will react. “Potential lies judged to be implausible to targets will be strongly inhibited,” notes the Louisiana Tech research team.

Construction: According to the “plausibility principle,” if you decide to lie, you’ll modify the truth according to what you think the other person will believe, attempting to have it conform to some established social norm. Caring for an ill relative is consistent with cultural expectations. If you don’t think you can pull this off, though, you’ll scramble your lie together from bits and pieces of information about your past and what you know about caregiving. Again, this takes time. Therefore, if you know you’ll have some explaining to do, you might prepare your lie ahead of time, so your response will be quick and convincing. However, one little probing question and all bets are off, as you’ll see shortly.

Action: Now that your lie is ready to go, you have to figure out what demeanor to put on so that the words will have their intended effect. Most people believe that liars seem stiff, shifty, and uncomfortable, so they’ll try to appear relaxed when coming up with their falsehoods. The risk is that they “self-regulate too much,” according to the authors, causing others to regard them with suspicion.
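The decision step described above can be sketched as a simple expected-value rule. This is a toy illustration only — the function name, the 0-to-1 scales, and the inhibition threshold are my own assumptions, not part of ADCAT as published: lie when honesty is expected to cost more than the lie, and suppress implausible lies outright.

```python
def decide_to_lie(cost_of_truth, cost_if_caught, plausibility,
                  inhibition_threshold=0.3):
    """Toy expected-value sketch of ADCAT's decision step.

    All inputs are subjective estimates on a 0..1 scale (an
    illustrative assumption, not a scale from the model itself).
    """
    # Per the model, lies judged implausible to the target
    # are strongly inhibited: don't even consider them.
    if plausibility < inhibition_threshold:
        return False
    # A believable lie is unlikely to be caught, so its expected
    # cost shrinks as plausibility rises.
    expected_cost_of_lying = cost_if_caught * (1 - plausibility)
    # Lie only when honesty costs more than the lie is expected to.
    return cost_of_truth > expected_cost_of_lying

# The "sick relative" excuse: honesty seems costly, the lie plausible.
print(decide_to_lie(cost_of_truth=0.8, cost_if_caught=0.9,
                    plausibility=0.7))  # True: the rule predicts lying
```

With the same costs but a far-fetched excuse (plausibility 0.2, below the threshold), the sketch returns False — the implausible lie is suppressed, matching the quote from the research team.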

Putting ADCAT to the Test

To test the utility of ADCAT in analyzing people’s behavior while lying in the lab, Walczyk and Cockrell recruited 81 undergraduate participants for an experiment in which they instructed the students to provide truthful or deceptive answers based on a combination of factual and autobiographical information. The researchers created three conditions varying in the instructions to be truthful or deceptive. You can get an idea of what participants experienced in this study by putting yourself in their place under these conditions:


Truthful: Respond “quickly and truthfully” to questions such as:

  • Is the shape of the Earth a square?
  • Are pigs a type of bird?
  • Are you currently a college student?
  • How many hours are in the day?

Intentional erring: Read each question one word at a time. When you get to the end of the question, give an answer that is intentionally incorrect:

  • Are apples a type of meat?
  • Does gravity pull us toward Earth?
  • How many tails do most dogs have?
  • Do lightbulbs run on electricity?

Lie plausibly: When you get to the last word of a question, imagine you are communicating with another adult who does not know the truth and whom you wish to deceive by answering with a plausible deception:

  • Do cows lay eggs?
  • What is the name of a common type of fruit?
  • Do you have a belly button?
  • Is California part of the United States?

What type of lies did you come up with in those last two conditions? How long did it take you to come up with your responses? The research team used time taken to respond and compliance with instructions as the key outcome variables of the study.

As the authors predicted, participants’ response times corresponded to ADCAT: they took longest in the plausible-lie condition, particularly when the questions weren’t simple yes-or-no items. Importantly, as in previous studies, participants also took longer in the intentional-erring condition than in the truth-telling condition, though not as long as in the plausible-lie condition.

In trying to lie when you answered these questions, did you also notice that it was not that easy to come up with a falsehood of either type? How would you answer falsely to the question “Do you have a belly button?” Is there any condition under which the answer could be “no”? What might that be? Indeed, in looking at compliance data from the participants, the research team noted that people couldn’t always follow the direction to lie when the lie would be this blatant.

How to Use ADCAT in Your Own Deception Detection

Turning now to the ways that you could use the ADCAT model in your own truth-telling experiments, the findings suggest that you pay careful attention to how long it takes for the other person to come up with an answer, especially to what should be a straightforward question. You can also think for yourself about what might motivate someone to lie. If it’s a lie that’s volunteered (such as a piece of gossip), be prepared to ask detailed follow-up questions, especially ones that aren’t just a simple yes or no. This will force the person to come up with more lies that veer further and further from the truth while also becoming increasingly inconsistent.

As you get the person’s answer, see whether they’re trying to read your reaction. In a case as emotionally laden and potentially significant as a friend’s infidelity, don’t let your face show a strong reaction, as that will give the perpetrator more cues to guide their answers. A clear implication of ADCAT is that liars use what’s called “theory of mind” to discern how to wind their way through the supposed “facts” they provide. Reading a potential liar's body language can also be helpful, especially if they seem to be trying too hard to look relaxed.

To sum up, sniffing out deception is always a challenging task. You’ll achieve greater success by getting into the mental processes of those who try to pull you into their version of the truth.


Walczyk, J. J., & Cockrell, N. F. (2021). To err is human but not deceptive. Memory & Cognition. doi: 10.3758/s13421-021-01197-8

Susan Krauss Whitbourne Ph.D.

Susan Krauss Whitbourne, Ph.D., is a Professor Emerita of Psychological and Brain Sciences at the University of Massachusetts Amherst. Her latest book is The Search for Fulfillment.



The Stanford Prison Experiment was massively influential. We just learned it was a fraud.

The most famous psychological studies are often wrong, fraudulent, or outdated. Textbooks need to catch up.

by Brian Resnick


The Stanford Prison Experiment, one of the most famous and compelling psychological studies of all time, told us a tantalizingly simple story about human nature.

The study took paid participants and assigned them to be “inmates” or “guards” in a mock prison at Stanford University. Soon after the experiment began, the “guards” began mistreating the “prisoners,” implying evil is brought out by circumstance. The authors, in their conclusions, suggested innocent people, thrown into a situation where they have power over others, will begin to abuse that power. And people who are put into a situation where they are powerless will be driven to submission, even madness.

The Stanford Prison Experiment has been included in many, many introductory psychology textbooks and is often cited uncritically. It’s the subject of movies, documentaries, books, television shows, and congressional testimony.

But its findings were wrong. Very wrong. And not just due to its questionable ethics or lack of concrete data — but because of deceit.

  • Philip Zimbardo defends the Stanford Prison Experiment, his most famous work 

A new exposé published by Medium, based on previously unpublished recordings of Philip Zimbardo, the Stanford psychologist who ran the study, and interviews with his participants, offers convincing evidence that the guards in the experiment were coached to be cruel. It also shows that the experiment’s most memorable moment — of a prisoner descending into a screaming fit, proclaiming, “I’m burning up inside!” — was the result of the prisoner acting. “I took it as a kind of an improv exercise,” one of the guards told reporter Ben Blum. “I believed that I was doing what the researchers wanted me to do.”

The findings have long been subject to scrutiny — many think of them as more of a dramatic demonstration, a sort of academic reality show, than a serious bit of science. But these new revelations incited an immediate response. “We must stop celebrating this work,” personality psychologist Simine Vazire tweeted in response to the article. “It’s anti-scientific. Get it out of textbooks.” Many other psychologists have expressed similar sentiments.

(Update: Since this article was published, the journal American Psychologist has published a thorough debunking of the Stanford Prison Experiment that goes beyond what Blum found in his piece. There’s even more evidence that the “guards” knew the results Zimbardo wanted to produce and were trained to meet his goals. It also provides evidence that the conclusions of the experiment were predetermined.)

Many of the classic show-stopping experiments in psychology have lately turned out to be wrong, fraudulent, or outdated. And in recent years, social scientists have begun to reckon with the truth that their old work needs a redo, the “replication crisis.” But there’s been a lag — in the popular consciousness and in how psychology is taught by teachers and textbooks. It’s time to catch up.

Many classic findings in psychology have been reevaluated recently


The Zimbardo prison experiment is not the only classic study that has been recently scrutinized, reevaluated, or outright exposed as fraud. Recently, science journalist Gina Perry found that the infamous “Robbers Cave” experiment of the 1950s — in which young boys at summer camp were essentially manipulated into joining warring factions — was a do-over of a failed earlier version of the experiment, which the scientists never mentioned in an academic paper. That’s a glaring omission. It’s wrong to throw out data that refutes your hypothesis and publicize only data that supports it.

Perry has also revealed inconsistencies in another major early work in psychology: the Milgram electroshock test, in which participants were told by an authority figure to deliver seemingly lethal doses of electricity to an unseen hapless soul. Her investigations show some evidence of researchers going off the study script and possibly coercing participants to deliver the desired results. (Somewhat ironically, the new revelations about the prison experiment also show the power an authority figure — in this case Zimbardo himself and his “warden” — has in manipulating others to be cruel.)

  • The Stanford Prison Experiment is based on lies. Hear them for yourself.

Other studies have been reevaluated for more honest, methodological snafus. Recently, I wrote about the “marshmallow test,” a series of studies from the early ’90s that suggested the ability to delay gratification at a young age is correlated with success later in life. New research finds that if the original marshmallow test authors had had a larger sample size and greater research controls, their results would not have been the showstoppers they were in the ’90s. I can list many more textbook psychology findings that have either not replicated or are currently in the midst of a serious reevaluation.

  • Social priming: People who read “old”-sounding words (like “nursing home”) were more likely to walk slowly — showing how our brains can be subtly “primed” with thoughts and actions.
  • The facial feedback hypothesis: Merely activating muscles around the mouth caused people to become happier — demonstrating how our bodies tell our brains what emotions to feel.
  • Stereotype threat: Minorities and maligned social groups don’t perform as well on tests due to anxiety about confirming a negative stereotype of their group.
  • Ego depletion: The idea that willpower is a finite mental resource.

Alas, the past few years have brought about a reckoning for these ideas and social psychology as a whole.

Many psychological theories have been debunked or diminished in rigorous replication attempts. Psychologists are now realizing it’s more likely that false positives will make it through to publication than inconclusive results. And they’ve realized that experimental methods commonly used just a few years ago aren’t rigorous enough. For instance, it used to be commonplace for scientists to publish experiments that sampled about 50 undergraduate students. Today, scientists realize this is a recipe for false positives, and strive for sample sizes in the hundreds, ideally drawn from a more representative subject pool.

Nevertheless, in so many of these cases, scientists have moved on and corrected errors, and are still doing well-intentioned work to understand the heart of humanity. For instance, work on one of psychology’s oldest fixations — dehumanization, the ability to see another as less than human — continues with methodological rigor, helping us understand the modern-day maltreatment of Muslims and immigrants in America.

In some cases, time has shown that flawed original experiments offer worthwhile reexamination. The original Milgram experiment was flawed. But at least its study design — which brings in participants to administer shocks (not actually carried out) to punish others for failing at a memory test — is basically repeatable today with some ethical tweaks.

And it seems like Milgram’s conclusions may hold up: In a recent study, many people found demands from an authority figure to be a compelling reason to shock another. However, it’s possible, due to something known as the file-drawer effect, that failed replications of the Milgram experiment have not been published. Replication attempts at the Stanford prison study, on the other hand, have been a mess.

In science, too often, the first demonstration of an idea becomes the lasting one — in both pop culture and academia. But this isn’t how science is supposed to work at all!

Science is a frustrating, iterative process. When we communicate it, we need to get beyond the idea that a single, stunning study ought to stand the test of time. Scientists know this as well, but their institutions have often discouraged them from replicating old work in favor of pursuing new, exciting, attention-grabbing studies. (Journalists are part of the problem too, imbuing small, insignificant studies with more importance and meaning than they’re due.)

Thankfully, there are researchers thinking very hard, and very earnestly, on trying to make psychology a more replicable, robust science. There’s even a whole Society for the Improvement of Psychological Science devoted to these issues.

Follow-up results tend to be less dramatic than original findings , but they are more useful in helping discover the truth. And it’s not that the Stanford Prison Experiment has no place in a classroom. It’s interesting as history. Psychologists like Zimbardo and Milgram were highly influenced by World War II. Their experiments were, in part, an attempt to figure out why ordinary people would fall for Nazism. That’s an important question, one that set the agenda for a huge amount of research in psychological science, and is still echoed in papers today.

Textbooks need to catch up

Psychology has changed tremendously over the past few years. Many studies used to teach the next generation of psychologists have been intensely scrutinized and found to be in error. But troublingly, the textbooks have not been updated accordingly.

That’s the conclusion of a 2016 study in Current Psychology. “By and large,” the study explains (emphasis mine):

introductory textbooks have difficulty accurately portraying controversial topics with care or, in some cases, simply avoid covering them at all. ... readers of introductory textbooks may be unintentionally misinformed on these topics.

The study authors — from Texas A&M and Stetson universities — gathered a stack of 24 popular introductory psych textbooks and began looking for coverage of 12 contested ideas or myths in psychology.

The ideas — like stereotype threat, the Mozart effect, and whether there’s a “narcissism epidemic” among millennials — have not necessarily been disproven. Nevertheless, there are credible and noteworthy studies that cast doubt on them. The list also included some urban legends — like the one about the brain using only 10 percent of its potential at any given time, and a debunked story about how bystanders refused to help a woman named Kitty Genovese while she was being murdered.

The researchers then rated the texts on how they handled these contested ideas. The results found a troubling amount of “biased” coverage on many of the topic areas.


But why wouldn’t these textbooks include more doubt? Replication, after all, is a cornerstone of any science.

One idea is that textbooks, in the pursuit of covering a wide range of topics, aren’t meant to be authoritative on these individual controversies. But something else might be going on. The study authors suggest these textbook authors are trying to “oversell” psychology as a discipline, to get more undergraduates to study it full time. (I have to admit that it might have worked on me back when I was an undeclared undergraduate.)

There are some caveats to mention with the study: One is that the 12 topics the authors chose to scrutinize are completely arbitrary. “And many other potential issues were left out of our analysis,” they note. Also, the textbooks included were printed in the spring of 2012; it’s possible they have been updated since then.

Recently, I asked on Twitter how intro psychology professors deal with inconsistencies in their textbooks. Their answers were simple. Some have decided to get rid of textbooks altogether (which saves students money) and focus on teaching individual articles. Others have a solution that’s just as simple: “You point out the wrong, outdated, and less-than-replicable sections,” said Daniël Lakens, a professor at Eindhoven University of Technology in the Netherlands. He offered a useful example of one of the slides he uses in class.

Anecdotally, Illinois State University professor Joe Hilgard said he thinks his students appreciate “the ‘cutting-edge’ feeling from knowing something that the textbook didn’t.” (Also, who really, earnestly reads the textbook in an introductory college course?)

And it seems this type of teaching is catching on. A (not perfectly representative) recent survey of 262 psychology professors found more than half said replication issues impacted their teaching. On the other hand, 40 percent said they hadn’t been affected. So whether students are exposed to the recent reckoning is all up to the teachers they have.

If it’s true that textbooks and teachers are still neglecting to cover replication issues, then I’d argue they are actually underselling the science. To teach the “replication crisis” is to teach students that science strives to be self-correcting. It would instill in them the value that science ought to be reproducible.

Understanding human behavior is a hard problem. Finding out the answers shouldn’t be easy. If anything, that should give students more motivation to become the generation of scientists who get it right.

“Textbooks may be missing an opportunity for myth busting,” the Current Psychology study’s authors write. That’s, ideally, what young scientists ought to learn: how to bust myths and find the truth.

Further reading: Psychology’s “replication crisis”

  • The replication crisis, explained. Psychology is currently undergoing a painful period of introspection. It will emerge stronger than before.
  • The “marshmallow test” said patience was a key to success. A new replication tells us s’more.
  • The 7 biggest problems facing science, according to 270 scientists
  • What a nerdy debate about p-values shows about science — and how to fix it
  • Science is often flawed. It’s time we embraced that.


American Psychological Association Logo

The truth about lies

Almost all patients tell some lies while in therapy. But what patients keep hidden might reveal more than therapists think.

By Alyssa Shaffer

May 2019, Vol 50, No. 5

Print version: page 38



Practicing psychologists typically believe that their offices are safe spaces, places where patients can feel comfortable sharing their deepest, most intimate thoughts and feelings without judgment, and work toward resolution and healing. Yet a surprisingly high percentage of patients—if not nearly all—admit that they have either lied to or not been completely truthful with their therapists.

"It’s not just common, it’s ubiquitous," notes Barry Farber, PhD, a professor in the clinical psychology program at Columbia University’s Teachers College and the editor of the Journal of Clinical Psychology: In Session. "Lying is inevitable in psychotherapy," he says.

Everyone shades the truth sometimes, whether it’s telling a friend that color really does look good on her or making up an excuse as to why you were late for dinner at your in-laws. "We are always deciding what we are going to say and what we may conceal from others," says Farber. And it seems time spent in a therapist’s office isn’t an exception.

Farber isn’t just speculating—he’s studied this topic for decades. In a survey of 547 psychotherapy clients, 93 percent said they had consciously lied at least once to their therapist (Counselling Psychology Quarterly, Vol. 29, No. 1, 2016). In a second survey, 84 percent said this dishonesty continued on a regular basis.

And while therapists might suspect that they can tell when patients are being less than truthful, research shows this is not the case. In Farber’s study, 73 percent of respondents reported that "the truth about their lies had never been acknowledged in therapy." Only 3.5 percent of patients owned up to the lies voluntarily, and in another 9 percent of cases the therapists uncovered the untruth, notes Farber, who reports on this and related research in a new book, "Secrets and Lies in Psychotherapy," with co-authors Matt Blanchard, PhD, and Melanie Love, MS. "It seems therapists aren’t particularly good at detecting lies," Farber says.

What's not being said

Patients tend to lie or not be entirely truthful to their therapists on a wide range of topics, but the researchers were surprised at some of the most common areas of misinformation. "The most commonly lied-about topics were often very subtle," observes co-author Blanchard, a clinical psychologist at New York University. More than half of the respondents (54 percent) in the first study reported minimizing their psychological distress when in therapy, pretending to feel happier and healthier than they really were. This minimizing was nearly twice as common as all other forms of dishonesty, the authors report. The second most commonly reported lie—similar to the first, though somewhat more focused—was minimizing the severity of their symptoms, reported by 39 percent of the sample.

The third most commonly reported lie was concealing or hiding thoughts about suicide, reported by 31 percent of the respondents, and the fourth was minimizing or hiding insecurities and self-doubts. (See a list of more common lies on the next page.) In all, six of the 20 most common lies were about the clients’ experience of therapy itself, such as pretending to find therapy effective.


Clients devote a good deal of their resources (both time and money) to therapy, so what’s the impetus for hiding the truth? Researchers say it all depends on the lie itself. For the high percentage of clients who are either minimizing their distress or saying that therapy is going better than they really think it is, it’s likely a combination of things. "This ‘distress minimization,’ or acting happier or healthier than they may really feel, may come from not wanting to upset the therapist or be seen as a complainer," says Blanchard. "But it may also be a way to protect themselves from a painful realization of how bad things may actually be. There’s this idea that ‘talking about how I’m doing makes me feel more depressed,’ or that they can’t admit a painful situation to themselves, let alone say it out loud."

For patients who are hiding thoughts of suicide or drug use, the primary reason is likely a fear of the consequences if the truth does come out. "About 70 percent of people who had concealed thoughts of suicide worried about being carted off to the hospital—yet most of them didn’t appear to be suicidal to the point where most clinicians would be forced to take that action," says Blanchard. "Many clients simply didn’t understand the triggers for hospitalization."

The same may be true for drug use, with patients concerned about being coerced into rehab. "Telling you I smoke weed isn’t that big of a deal, but I’m not sure I might want to tell you about the cocaine or OxyContin habit I’ve developed," says Farber.

Then, too, there is the idea of shame—especially as it relates to sex. "Many clients are motivated by shame and embarrassment to lie or hide the truth about this topic," says co-author Melanie Love. "There was also concern that the therapist might judge them or simply not understand where they were coming from."

Some patients were also concerned that if they admitted certain thoughts or feelings to their therapists, it would have an outsize effect on the rest of their therapy. "Some clients think that if I let my therapist know I have an occasional thought of suicide, it will be all he wants to talk about and we will never get to anything else," says Farber.

It’s also important for therapists to recognize the difference between a secret and a lie. The two are related but distinct, says Ellen Marks, PhD, an associate psychologist with University Health Services at the University of Wisconsin–Madison, who has conducted research in this area. "While they both may include a level of deception, a secret is an act of omission, while a lie is an act of deception," she notes.

This can be an important distinction, she adds, especially when it comes to clients revealing secrets during therapy or choosing to keep them to themselves. In Marks’s research, 41 percent of clients concealed at least one secret, while 85 percent disclosed at least one secret (Journal of Counseling Psychology, Vol. 66, No. 1, 2019).

"We have to recognize that keeping secrets may not be a bad thing all of the time," she says. "We need to let go of our expectations that clients share everything with us." Instead, she says, by focusing on what patients do choose to share and establishing the therapist as a trustworthy confidante, "if and when the time is right, the space will be there for the client to share the secret."

Moving forward

So, what can psychologists do about lies in therapy? "In some cases, the best action is to do nothing," says Farber. For example, he says, a therapist might want to keep silent "if the client has explicitly told you that he or she needs to go at his or her own pace on this particular topic and doesn’t want to be rushed into discussing something difficult before he or she is ready, or if you have the sense that pursuing the truth—even gently—means the client may leave therapy altogether." The therapist may also find that a minor lie, such as why the client was late for a session, is better dealt with only if it occurs again or is part of a pattern that needs to be addressed.

But there are steps therapists can take to keep their sessions on track and their clients as honest as possible.

Be up front about the disclosure process. "Clients mentioned that they want therapists to be more active in explaining the process of disclosure," says Love, a predoctoral psychology intern at Temple University. "They would like a therapist to outline what might happen if they were to talk about this topic." Helping to explain why disclosure is valuable for treatment and what the client may gain from it—as well as exploring the idea that clients may experience certain emotions that motivate avoidance—can all be key.

This communication can and should begin early, even in the intake process, says Love. "Taking the temperature of what clients may be ready for and planting the seeds of what types of topics you may be covering is important," she notes.

For patients who may worry about discussing any thoughts of suicide, explaining the limits of confidentiality as clearly and openly as possible can be especially helpful. Knowing what triggers the process of hospitalization may help those who worry about this step if they have suicidal thoughts. Help keep patients safe and comfortable by educating them on what may or may not require a higher level of care.

Ask direct questions. Clients are often willing to discuss almost anything but may be hesitant to take the first step, especially around a topic they find shameful. Therapists who don’t introduce challenging topics can (inadvertently) communicate to the client that these areas are off-limits, according to Farber and his co-authors. Instead, they write, therapists should "model for clients that all topics are discussable in therapy."

The research bears this out. "In our second survey, 46 percent of clients reported they would have been more honest if the therapist had asked direct questions," says Blanchard. "As therapists, we don’t want to be seen as pushy because it’s not our job to be interrogator[s], but there are times when the therapist may need to lead a client toward disclosure with direct questions."

In some cases, questions that elicit a simple "yes" or "no" response may be the easiest way to move things forward. "We may be trained to ask open-ended questions, but this isn’t always the best approach," adds Blanchard.

Providing positive feedback when clients are more open is also important, especially when it comes to reducing some of the shame that may be associated with disclosures on topics that may be perceived as taboo. Ultimately, the authors say, this will strengthen the relationship between patient and therapist.

Be mindful about how you come off. Authenticity is important, especially in therapy, so it’s vital to come across to patients as both understanding and genuine. "For the most part, therapists need to balance curiosity with acceptance and understanding of clients’ limits for disclosure at any one time," the authors note. Using language that feels comfortable and authentic can help, as can being conscious of your own tone. A therapist who comes across as too eager, who overreacts emotionally, or, conversely, who acts completely unaffected, as if a topic is ho-hum, can lead a patient to shut down.

Some of the female respondents to the survey reported they were worried their female therapists would be especially judgmental of what they might reveal. "One of the most desired interventions was to normalize that it’s OK to talk about certain subjects in therapy and provide a rationale of why it may be helpful," explains Love.

Circle back to certain topics. Patients tend to drop what Farber calls "a doorknob comment," an off-handed comment at the end of a session that indicates there’s a deeper topic involved. "A good therapist is sensitive to this type of comment and will make a note that it may be worth revisiting at a future time," says Farber.

The need to revisit tough topics can also change over time, since some patients will want to wait until they are further into therapy before they feel comfortable discussing such topics; others will give some small indication that they might be hiding something and wait to see how the therapist reacts. It can help to start with a broader topic and narrow it down based on patient cues—such as asking more about relationships in general before getting into details about sexual issues, or broaching symptoms of depression before talking specifically about suicidal thoughts, says Farber.

A therapist may also need to be attuned to body language or other cues suggesting that the patient may not be entirely truthful on a topic. "Take note if you notice that a client feels uncomfortable on a certain topic, and then wait for the right time to talk about it," advises Blanchard. "A lot happens around a person’s eyes, so I will often watch someone’s eyes for a reaction and notice if something is registering that he or she may not be willing to share."

Acknowledge difficulties. Therapy isn’t easy, and therapists sometimes need to take a moment and address that fact, both to themselves and to their patients. "It is sometimes difficult to get to the difficult part," says Farber. Often, it’s important to deal with the resistance to the topic before the topic itself. "It can be helpful to say, ‘We should talk about this more, it feels important,’ or even, ‘I understand it can be difficult to talk about this—let’s not talk about this issue, but why it’s hard to talk about it.’"

For patients who may be worried that their responses may elicit unwanted action by the therapist (such as hospitalization for suicidal thoughts or recommendations for rehab for an alcohol or drug issue), it’s especially important to address these concerns up front. "We need to be sensitive about how to address these issues," says Farber.

The bottom line

It seems inevitable that patients will lie to their therapists, but there is a bright side, says Blanchard. "With time and patience, we can create conditions where clients can be comfortable disclosing their feelings."

And sometimes, perhaps, not being truthful may play its own part in the therapy process.

"Although we most often consider concealment and lies as inevitably problematic, in minimal doses these behaviors are not only inevitable, but can help individuals create more effective narratives about their lives," says Farber. "That, in turn, improves their sense of self and their ability to engage with others."

In fact, most therapists should be prepared to acknowledge that they may never really know what’s happening inside a patient’s mind. Even when it may be obvious that a client is hiding something, ultimately it is his or her own prerogative whether or not to share. "It’s not in our interest to be punitive—clients have the right to lie all they want to their therapists," says Blanchard. "Honest disclosure is at the heart of all psychotherapy, but if someone feels like they need to lie, that may also be important." 

Secrets and Lies in Psychotherapy Farber, B.A., et al. APA, 2019

Client Concealment and Disclosure of Secrets in Outpatient Psychotherapy Baumann, E.C., & Hill, C.E. Counselling Psychology Quarterly, 2016

The Experience of Secrecy Slepian, M., et al. Journal of Personality and Social Psychology, 2017

Working With Client Lies and Concealment Farber, B.A. APA, 2019 www.apa.org/pubs/videos/4310003

Top 10 lies (with percentages).

  • How bad I really feel (54%)
  • The severity of my symptoms (39%)
  • My thoughts about suicide (31%)
  • My insecurities and doubts about myself (31%)
  • Pretending to like my therapist’s comments (29%)
  • My use of drugs or alcohol (29%)
  • Why I missed appointments/was late (29%)
  • Pretending to find therapy more effective than I do (29%)
  • Pretending to be more hopeful than I really am (27%)
  • Things I have done that I regret (26%)

More From Forbes

A psychologist explores 6 types of lies, and how they affect us.

The intricate web of deception results in unseen burdens on well-being. Here’s how your lies affect you.

Whether it is a white lie, gray lie, real lie or a small, inconsequential lie, everyone indulges in some form of lying across their lifetime. Its ubiquity seems unaffected by its moral disapproval and potential to harm one’s reputation and relationships. Although the direct consequences of a lie are usually minimal when it goes undetected and unpunished, there may still be a psychological cost associated with it.

A delicate balance exists between honesty and deception that involves a careful consideration of the advantages to be gained that cannot be achieved by truthful means. People are often tempted to lie when the potential benefits outweigh the potential costs. Which brings us to:

Decoding The Motives Behind Lying

A 2018 study described the psychological process behind lies on the basis of two factors: the beneficiary and the motivation. The decision to lie is influenced by the beneficiary or the person who will benefit from the lie. The motivation behind the lie can be to either obtain a desirable outcome or prevent an undesirable outcome. Researchers came up with six types of lies based on the reasons that lead people to be dishonest:

  • Self-oriented beneficial lies. These lies are told to obtain positive outcomes for oneself. For example, claiming that a sum of money found is one’s own.
  • Self-oriented protective lies. These lies are directed at avoiding a negative outcome or loss for oneself. For example, falsely denying hitting another car while parking.
  • Other-oriented beneficial lies. These lies are aimed at securing positive outcomes or achieving gains for others. For example, lying to a supervisor to support a co-worker’s claim of illness.
  • Other-oriented protective lies. These lies are spoken to protect others from loss or negative outcomes. For example, falsely telling one’s parents that one is doing well to prevent them from worrying.
  • Pareto beneficial lies. These lies are told to benefit the liar as well as another person. For example, falsifying the results in one's group project to get a better grade.
  • Pareto protective lies. These lies are spoken to prevent loss to oneself and another person. For example, a team manager telling superiors at work that they could not meet an important deadline due to technical issues, rather than blaming their team for not completing the task or taking personal accountability.

Irrespective of why people choose to lie, the psychological burden of being deceptive weighs heavily on the conscience. Even if the lie goes undetected, the process of lying itself can be inherently stressful.

Unveiling The Hidden Costs Of Deception

Lying can have a substantial impact on one’s well-being. Research shows that people with a tendency to conceal the truth are more preoccupied with their lie and experience higher levels of negative emotions and lower life and relationship satisfaction.

The liar might find themselves consumed by the fear of the recipient discovering the truth. This fear may stem from guilt, paranoia or the ramifications of deception for one’s integrity and their relationship with the recipient. The extent to which people fear discovery can influence how preoccupied they are with the lie and the level of negative emotions they subsequently feel.

A 2023 study examined the psychological consequences of telling lies. Liars were affected by their lies in the following ways:

  • Lower self-esteem. Liars had lower self-esteem than those who spoke the truth. Additionally, lying on any given day decreased the person’s self-esteem compared to their self-esteem on the previous day as well as their average self-esteem level.
  • Higher negative affect. Researchers found that individuals who lied experienced the negative emotions of nervousness, regret, discomfort, unhappiness, guilt, embarrassment, shame and anger to a greater extent than those who were truthful.
  • Lower positive affect. In addition to evaluating negative emotions, researchers assessed liars for four positive emotions. People who lied experienced less comfort, happiness, relief and pride than their truthful counterparts.

The psychological costs of lying are profound and extend to various facets of well-being. These detrimental effects emphasize the importance of honesty in maintaining a healthy sense of self and positive relationships with others. It can be difficult to resist the alluring pull of lying, but overcoming this challenge is possible through cultivating self-awareness and seeking expert help.

Wondering if your lies are impacting your well-being? Take this survey to find out: Survey of Pathological Lying Behaviors

Mark Travers



15 Famous Experiments and Case Studies in Psychology


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Psychology has seen thousands upon thousands of research studies over the years. Most of these studies have helped shape our current understanding of human thoughts, behavior, and feelings.

The psychology case studies in this list are considered classic examples of psychological case studies and experiments, which are still being taught in introductory psychology courses up to this day.

Some studies, however, were so shocking and controversial that you’d probably wonder why they were ever conducted. Imagine participating in an experiment for a small reward or extra class credit, only to be left scarred for life. These kinds of studies, though, paved the way for a more ethical approach to studying psychology and for the implementation of research standards such as the use of debriefing in psychology research .

Case Study vs. Experiment

Before we dive into the list of the most famous studies in psychology, let us first review the difference between case studies and experiments.

A case study:

  • is an in-depth study and analysis of an individual, group, community, or phenomenon. The results of a case study cannot be applied to the whole population, but they can provide insights for further studies.
  • often uses qualitative research methods such as observations, surveys, and interviews.
  • is often conducted in real-life settings rather than in controlled environments.

An experiment:

  • is a type of study done on a sample of randomly selected participants, the results of which can be generalized to the whole population.
  • often uses quantitative research methods that rely on numbers and statistics.
  • is conducted in controlled environments, wherein some things or situations are manipulated.

See Also: Experimental vs Observational Studies

Famous Experiments in Psychology

1. The Marshmallow Experiment

Psychologist Walter Mischel conducted the marshmallow experiment at Stanford University in the 1960s to early 1970s. It was a simple test that aimed to define the connection between delayed gratification and success in life.

The instructions were fairly straightforward: children ages 4 to 6 were presented with a marshmallow on a table and told that they would receive a second one if they could wait 15 minutes without eating the first.

About one-third of the 600 participants succeeded in delaying gratification to receive the second marshmallow. Mischel and his team followed up on these participants in the 1990s, learning that those who had the willpower to wait for a larger reward experienced more success in life in terms of SAT scores and other metrics.

This case study also supported self-control theory , a theory in criminology that holds that people with greater self-control are less likely to end up in trouble with the law!

The classic marshmallow experiment, however, was debunked in a 2018 replication study done by Tyler Watts and colleagues.

This more recent experiment had a larger group of participants (900) and better representation of the general population in terms of race and ethnicity. The researchers found that the ability to wait for a second marshmallow depends not on willpower alone but largely on the economic background and social status of the participants.

2. The Bystander Effect

In 1964, Kitty Genovese was murdered in the neighborhood of Kew Gardens, New York. It was reported that there were up to 38 witnesses and onlookers in the vicinity of the crime scene, but nobody did anything to stop the murder or call for help.

This tragedy was the catalyst that inspired social psychologists Bibb Latane and John Darley to investigate the phenomenon now called the bystander effect , or bystander apathy .

Subsequent investigations showed that this story was exaggerated and inaccurate, as there were actually only about a dozen witnesses, at least two of whom called the police. But the case of Kitty Genovese led to various studies that aim to shed light on the bystander phenomenon.

Latane and Darley tested bystander intervention in an experimental study . Participants were asked to answer a questionnaire inside a room, and they would either be alone or with two other participants (who were actually actors or confederates in the study). Smoke would then come out from under the door. The reaction time of participants was tested — how long would it take them to report the smoke to the authorities or the experimenters?

The results showed that participants who were alone in the room reported the smoke faster than participants who were with two passive others. The study suggests that the more onlookers are present in an emergency situation, the less likely someone would step up to help, a social phenomenon now popularly called the bystander effect.

3. Asch Conformity Study

Have you ever made a decision against your better judgment just to fit in with your friends or family? The Asch Conformity Studies will help you understand this kind of situation better.

In this experiment, groups of participants were shown a reference line alongside three comparison lines of different lengths and asked to identify which comparison line matched the reference. However, only one true participant was present in each group; the rest were actors, most of whom deliberately gave the wrong answer.

Results showed that participants often went along with the group’s wrong answer, even though they could plainly see which line was correct. When asked why they gave the wrong answer, they said that they didn’t want to be branded as strange or peculiar.

This study goes to show that there are situations in life when people prefer fitting in to being right. It also shows that there is power in numbers: a group’s decision can overwhelm a person and make them doubt their own judgment.

4. The Bobo Doll Experiment

The Bobo Doll Experiment was conducted by Dr. Albert Bandura, the proponent of social learning theory .

Back in the 1960s, the Nature vs. Nurture debate was a popular topic among psychologists. Bandura contributed to this discussion by proposing that human behavior is mostly influenced by environmental rather than genetic factors.

In the Bobo Doll Experiment, children were divided into three groups: one group was shown a video in which an adult acted aggressively toward the Bobo Doll, the second group was shown a video in which an adult played non-aggressively with the Bobo Doll, and the third group served as the control group and was shown no video.

The children were then led to a room with different kinds of toys, including the Bobo Doll they had seen in the video. Results showed that the children tended to imitate the adults in the video: those shown the aggressive model acted aggressively toward the Bobo Doll, while those shown the passive model displayed less aggression.

While the Bobo Doll Experiment can no longer be replicated because of ethical concerns, it has laid out the foundations of social learning theory and helped us understand the degree of influence adult behavior has on children.

5. Blue Eye / Brown Eye Experiment

Following the assassination of Martin Luther King Jr. in 1968, third-grade teacher Jane Elliott conducted an experiment in her class. Although not a formal experiment in controlled settings, A Class Divided is a good example of a social experiment to help children understand the concept of racism and discrimination.

The class was divided into two groups: blue-eyed children and brown-eyed children. For one day, Elliott gave preferential treatment to her blue-eyed students, giving them more attention and pampering them with rewards. The next day, it was the brown-eyed students’ turn to receive extra favors and privileges.

As a result, whichever group of students was given preferential treatment performed exceptionally well in class, had higher quiz scores, and recited more frequently; students who were discriminated against felt humiliated, answered poorly in tests, and became uncertain with their answers in class.

This study is now widely taught in sociocultural psychology classes.

6. Stanford Prison Experiment

One of the most controversial and widely cited studies in psychology is the Stanford Prison Experiment , conducted by Philip Zimbardo in the basement of the Stanford psychology building in 1971. The study was designed to test whether the abusive behavior seen in prisons stems from the personality traits of prisoners and guards or from the prison environment itself.

The participants in the experiment were college students who were randomly assigned as either a prisoner or a prison guard. The prison guards were then told to run the simulated prison for two weeks. However, the experiment had to be stopped in just 6 days.

The prison guards abused their authority and harassed the prisoners through verbal and physical means. The prisoners, on the other hand, showed submissive behavior. Zimbardo decided to stop the experiment because the prisoners were showing signs of emotional and physical breakdown.

Although the experiment was never completed, the results strongly suggested that people can easily slip into a social role when others expect them to, especially when that role is highly stereotyped .

7. The Halo Effect

Have you ever wondered why toothpastes and other dental products are endorsed in advertisements by celebrities more often than dentists? The Halo Effect is one of the reasons!

The Halo Effect shows how one favorable attribute of a person can gain them positive perceptions in other attributes. In the case of product advertisements, attractive celebrities are also perceived as intelligent and knowledgeable of a certain subject matter even though they’re not technically experts.

The Halo Effect originated in a classic study done by Edward Thorndike in the early 1900s. He asked military commanding officers to rate their subordinates based on different qualities, such as physical appearance, leadership, dependability, and intelligence.

The results showed that high ratings on one particular quality influenced the ratings of other qualities, producing a halo effect of overall high ratings. The opposite also applied: a negative rating in one quality correlated with negative ratings in other qualities.

Later experiments on the Halo Effect, conducted in various formats, supported Thorndike’s original findings. The phenomenon suggests that our perception of a person’s overall personality is heavily influenced by whichever single quality we focus on.

8. Cognitive Dissonance

There are experiences in our lives when our beliefs and behaviors do not align with each other and we try to justify them in our minds. This is cognitive dissonance , which was studied in an experiment by Leon Festinger and James Carlsmith back in 1959.

In this experiment, participants had to complete a series of boring and repetitive tasks, such as spending an hour turning pegs on a wooden board. After completing the tasks, they were paid either $1 or $20 to tell the next participants that the tasks were extremely fun and enjoyable. Afterwards, participants were asked to rate the experiment. Those who were given $1 rated the experiment as more interesting and fun than those who received $20.

The results showed that those who received a smaller incentive to lie experienced cognitive dissonance — $1 wasn’t enough incentive for that one hour of painstakingly boring activity, so the participants had to justify that they had fun anyway.

Famous Case Studies in Psychology

9. Little Albert

In 1920, behaviorists John B. Watson and Rosalie Rayner experimented on a 9-month-old baby to test whether fear could be instilled in humans through classical conditioning.

The study was so controversial that it became a fixture of psychology textbooks and syllabi, where it serves as a classic example of unethical research done in the name of science.

In one of the experiments, Little Albert was presented with a harmless stimulus, a white rat, which he wasn’t scared of at first. But every time Little Albert saw the white rat, the researchers would make a startling noise by striking a steel bar with a hammer. After about six pairings, Little Albert learned to fear the rat even without the sound.

Little Albert developed signs of fear to different objects presented to him through classical conditioning . He even generalized his fear to other stimuli not present in the course of the experiment.

10. Phineas Gage

Phineas Gage is such a celebrity in Psych 101 classes, even though the way he rose to popularity began with a tragic accident. He was a resident of Central Vermont and worked in the construction of a new railway line in the mid-1800s. One day, an explosive went off prematurely, sending a tamping iron straight into his face and through his brain.

Gage fortunately survived the accident, something that is considered a medical feat even to this day. He even managed to find work as a stagecoach driver afterward. However, his family and friends reported that his personality changed so much that “he was no longer Gage” (Harlow, 1868).

New evidence on the case of Phineas Gage has since come to light, thanks to modern scientific studies and medical tests. However, there are still plenty of mysteries revolving around his brain damage and subsequent recovery.

11. Anna O.

Anna O., a social worker and feminist of German Jewish descent, was one of the first patients to receive psychoanalytic treatment.

Her real name was Bertha Pappenheim and she inspired much of Sigmund Freud’s works and books on psychoanalytic theory, although they hadn’t met in person. Their connection was through Joseph Breuer, Freud’s mentor when he was still starting his clinical practice.

Anna O. suffered from paralysis, personality changes, hallucinations, and rambling speech, but her doctors could not find the cause. Joseph Breuer was then called to her house for intervention and he performed psychoanalysis, also called the “talking cure”, on her.

Breuer would tell Anna O. to say anything that came to her mind, such as her thoughts, feelings, and childhood experiences. It was noted that her symptoms subsided by talking things out.

However, Breuer later referred Anna O. to the Bellevue Sanatorium, where she recovered and went on to become a renowned writer and an advocate for women and children.

12. Patient HM

H.M., or Henry Gustav Molaison, was a severe amnesiac who had been the subject of countless psychological and neurological studies.

Henry was 27 when he underwent brain surgery to treat the epilepsy he had experienced since childhood. In an unfortunate turn of events, the surgery left him amnesic and unable to form new long-term memories.

He was then regarded as someone living solely in the present, forgetting an experience as soon as it happened and only remembering bits and pieces of his past. Over the years, his amnesia and the structure of his brain had helped neuropsychologists learn more about cognitive functions .

Suzanne Corkin, a researcher, writer, and good friend of H.M., published a book about his life. Entitled Permanent Present Tense , the book is both a memoir and a case study following the struggles and joys of Henry Gustav Molaison.

13. Chris Sizemore

Chris Sizemore gained celebrity status in the psychology community when she was diagnosed with multiple personality disorder, now known as dissociative identity disorder.

Sizemore had several alter egos, which included Eve Black, Eve White, and Jane. Various papers about her stated that these alter egos formed as a coping mechanism against the traumatic experiences she underwent in her childhood.

Sizemore said that although she has succeeded in unifying her alter egos into one dominant personality, there were periods in the past experienced by only one of her alter egos. For example, her husband married her Eve White alter ego and not her.

Her story inspired her psychiatrists to write a book about her, entitled The Three Faces of Eve , which was then turned into a 1957 movie of the same title.

14. David Reimer

When David was just 8 months old, he lost his penis because of a botched circumcision operation.

Psychologist John Money then advised Reimer’s parents to raise him as a girl instead, naming him Brenda. His gender reassignment was supported by subsequent surgery and hormonal therapy.

Money described Reimer’s gender reassignment as a success, but problems started to arise as Reimer was growing up. His boyishness was not completely subdued by the hormonal therapy. When he was 14 years old, he learned about the secrets of his past and he underwent gender reassignment to become male again.

Reimer became an advocate for children going through the same difficult situation he had been through. His life ended when he was 38, when he took his own life.

15. Kim Peek

Kim Peek was the inspiration behind Rain Man , an Oscar-winning movie about an autistic savant character played by Dustin Hoffman.

The movie was released in 1988, a time when autism wasn’t widely known and acknowledged yet. So it was an eye-opener for many people who watched the film.

In reality, Kim Peek was a non-autistic savant. He was exceptionally intelligent despite the brain abnormalities he was born with. He was like a walking encyclopedia, knowledgeable about travel routes, US zip codes, historical facts, and classical music. He also read and memorized approximately 12,000 books in his lifetime.

This list of experiments and case studies in psychology is just the tip of the iceberg! There are still countless interesting psychology studies that you can explore if you want to learn more about human behavior and dynamics.

You can also conduct your own mini-experiment or participate in a study conducted in your school or neighborhood. Just remember that there are ethical standards to follow so as not to repeat the lasting physical and emotional harm done to Little Albert or the Stanford Prison Experiment participants.

Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70 (9), 1–70. https://doi.org/10.1037/h0093718

Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. The Journal of Abnormal and Social Psychology, 63 (3), 575–582. https://doi.org/10.1037/h0045925

Elliott, J., Yale University., WGBH (Television station : Boston, Mass.), & PBS DVD (Firm). (2003). A class divided. New Haven, Conn.: Yale University Films.

Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. The Journal of Abnormal and Social Psychology, 58 (2), 203–210. https://doi.org/10.1037/h0041593

Haney, C., Banks, W. C., & Zimbardo, P. G. (1973). A study of prisoners and guards in a simulated prison. Naval Research Review , 30 , 4-17.

Latane, B., & Darley, J. M. (1968). Group inhibition of bystander intervention in emergencies. Journal of Personality and Social Psychology, 10 (3), 215–221. https://doi.org/10.1037/h0026570

Mischel, W. (2014). The Marshmallow Test: Mastering self-control. Little, Brown and Co.

Thorndike, E. (1920) A Constant Error in Psychological Ratings. Journal of Applied Psychology , 4 , 25-29. http://dx.doi.org/10.1037/h0071663

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3 (1), 1.



A New Report Says Stanford's Most Famous Psychology Experiment Is a 'Fraud,' a 'Sham,' and a 'Lie'

You were almost certainly taught this 50-year-old experiment's findings. But what really happened?


It's probably the most famous psychological experiment ever to come out of Stanford University. Now, 50 years later, critics are saying the entire thing was "a fraud."

If you took a psychology class in college, I guarantee you studied this one. If not, you've heard of it. And, its findings have been trumpeted before members of Congress and other policymakers for years.

But what if the entire thing was a sham?  

Below, we'll describe the experiment, its impact, and why it's suddenly become highly controversial to the point that critics are throwing around a word they rarely have the courage to use: "lie."

The Stanford Prison Experiment

The year was 1971, and a Stanford professor named Philip Zimbardo recruited college students to play the roles of inmates and guards in a mock jail. The experiment was supposed to run 14 days, but it was reportedly shut down early when both jailers and the jailed began to take their roles too seriously.

Ultimately, the Stanford Prison Experiment became very widely known, and it was used to demonstrate that people who are given power will often naturally abuse it, and that people who have all power stripped from them will often fall into despair, regardless of circumstances.

Now, it's facing withering criticism. As Vox put it in a summary recently, the Stanford Prison Experiment's "findings were wrong. Very wrong. And not just due to its questionable ethics or lack of concrete data -- but because of deceit."

'The Lifespan of a Lie'

The study has been controversial over the years, but the recent focus is the result of the work of a Ph.D. and journalist named Ben Blum. Among his findings, based on recently uncovered footage and audio recordings, along with a French filmmaker's work on the subject, Blum says:

  • A "prisoner" who famously had a breakdown after hours in fact was just fine--but acting--as he admitted in an interview with Blum last summer.
  • "Guards" who supposedly began acting sadistically of their own accord (basically the entire main takeaway of the experiment) had in fact been coached and told to be mean.
  • "Guards" who supposedly came up with their own strict rules for the prisoners in fact copied them from an earlier, much shorter "fake jail" experiment--or learned them from a former San Quentin inmate who served as a consultant on the project.

"The most famous psychology study of all time was a sham," Blum wrote in his more than 7,000-word exposé, published on Medium, entitled "The Lifespan of a Lie." "Why can't we escape the Stanford Prison Experiment?"

Okay ... But why lie?

Before we go further, we need to disclose a weird circumstance that explains how Blum came to look at Zimbardo's work. It turns out that Blum is the cousin of a former U.S. Army Ranger named Alex Blum, who was arrested and convicted of bank robbery in 2009.

Alex Blum received an extraordinarily lenient sentence, based partly on the testimony of an expert witness psychologist. That expert witness's name? You guessed it--Philip Zimbardo.

You might call this a conflict of interest--except that the target of Ben Blum's investigation is the same person who helped his cousin, and Blum's report is anything but helpful to Zimbardo.

It truly is a damning takedown. Blum says he uncovered video and audio evidence that undermines many of Zimbardo's claims--even his testimony before Congress.

All of which leaves an obvious question: Why lie?

Why would Zimbardo embellish his study's findings, and why would participants go along with it?

Blum's explanation for this largely comes down to the most pedestrian reasons.

Students playing the roles of guards and prisoners were worried about things like getting into graduate school, he says, and simply played along. 

And Zimbardo himself wasn't prepared for the impact his experiment would have--in part because a national dialogue about prison conditions was sparked at exactly the same time as his experiment, Blum claims.

In short, nobody involved thought it would be remembered or cited for as long as it has been. After interviewing Zimbardo, Blum seems to conclude that the experiment became the psychologist's life's work almost in spite of him--and that defending it is exhausting for him.

"After my talk with you, I'm not going to do any interviews about it. It's just a waste of time," Blum quotes Zimbardo as saying. "It's the most famous study in the history of psychology at this point. ... I'm not going to defend it anymore. The defense is its longevity."


The Most Famous Social Psychology Experiments Ever Performed

Social experiments often seek to answer questions about how people behave in groups or how the presence of others impacts individual behavior. Over the years, social psychologists have explored these questions by conducting experiments.

The results of some of the most famous social psychology experiments remain relevant (and often quite controversial) today. Such experiments give us valuable information about human behavior and how group influence can impact our actions in social situations.

At a Glance

Some of the most famous social psychology experiments include Asch's conformity experiments, Bandura's Bobo doll experiments, the Stanford prison experiment, and Milgram's obedience experiments. Some of these studies are quite controversial for various reasons, including how they were conducted, serious ethical concerns, and what their results suggested.

The Asch Conformity Experiments

What do you do when you know you're right but the rest of the group disagrees with you? Do you bow to group pressure?

In a series of famous experiments conducted during the 1950s, psychologist Solomon Asch demonstrated that people would give the wrong answer on a test to fit in with the rest of the group.

In Asch's famous conformity experiments, people were shown a line and then asked to select a line of a matching length from a group of three. Asch also placed confederates in the group who would intentionally choose the wrong lines.

The results revealed that when other people picked the wrong line, participants were likely to conform and give the same answers as the rest of the group.

What the Results Revealed

While we might like to believe that we would resist group pressure (especially when we know the group is wrong), Asch's results revealed that people are surprisingly susceptible to conformity.

Not only did Asch's experiment teach us a great deal about the power of conformity, but it also inspired a whole host of additional research on how people conform and obey, including Milgram's infamous obedience experiments.

The Bobo Doll Experiment

Does watching violence on television cause children to behave more aggressively? In a series of experiments conducted during the early 1960s, psychologist Albert Bandura set out to investigate the impact of observed aggression on children's behavior.

In his Bobo doll experiments, children would watch an adult interacting with a Bobo doll. In one condition, the adult model behaved passively toward the doll, but in another, the adult would kick, punch, strike, and yell at the doll.

The results revealed that children who watched the adult model behave violently toward the doll were likelier to imitate the aggressive behavior later on.

The Impact of Bandura's Social Psychology Experiment

The debate over the degree to which violence on television, movies, gaming, and other media influences children's behavior continues to rage on today, so it perhaps comes as no surprise that Bandura's findings are still so relevant.

The experiment has also helped inspire hundreds of additional studies exploring the impacts of observed aggression and violence.

The Stanford Prison Experiment

During the early 1970s, Philip Zimbardo set up a fake prison in the basement of the Stanford Psychology Department, recruited participants to play prisoners and guards, and played the role of the prison warden.

The experiment was designed to look at the effect that a prison environment would have on behavior, but it quickly became one of the most famous and controversial experiments of all time.

Results of the Stanford Prison Experiment

The Stanford prison experiment was initially slated to last a full two weeks. It ended after just six days. Why? Because the participants became so enmeshed in their assumed roles, the guards became almost sadistically abusive, and the prisoners became anxious, depressed, and emotionally disturbed.

While the Stanford prison experiment was designed to look at prison behavior, it has since become an emblem of how powerfully people are influenced by situations.  

Ethical Concerns

Part of the notoriety stems from the study's treatment of the participants. The subjects were placed in a situation that created considerable psychological distress. So much so that the study had to be halted less than halfway through the experiment.

The study has long been upheld as an example of how people yield to the situation, but critics have suggested that the participants' behavior may have been unduly influenced by Zimbardo himself in his capacity as the mock prison's "warden."  

Recent Criticisms

The Stanford prison experiment has long been controversial due to the serious ethical concerns of the research, but more recent evidence casts serious doubts on the study's scientific merits.

An examination of study records indicates participants faked their behavior to either get out of the experiment or "help" prove the researcher's hypothesis. The experimenters also appear to have encouraged certain behaviors to help foster more abusive behavior.

The Milgram Experiments

Following the trial of Adolf Eichmann for war crimes committed during World War II, psychologist Stanley Milgram wanted to better understand why people obey. "Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?" Milgram wondered.

The results of Milgram's controversial obedience experiments were astonishing and continue to be both thought-provoking and controversial today.

What the Social Psychology Experiment Involved

The study involved ordering participants to deliver increasingly painful shocks to another person. While the victim was simply a confederate pretending to be injured, the participants fully believed that they were giving electrical shocks to the other person.

Even when the victim was protesting or complaining of a heart condition, 65% of the participants continued to deliver painful, possibly fatal shocks on the experimenter's orders.

Obviously, no one wants to believe that they are capable of inflicting pain or torture on another human being simply on the orders of an authority figure. The results of the obedience experiments are disturbing because they reveal that people are much more obedient than they may believe.

Controversy and Recent Criticisms

The study is also controversial because it suffers from ethical concerns, primarily the psychological distress it created for the participants. More recent analyses point to additional problems that call the study's findings into question.

Some participants were coerced into continuing against their wishes. Many participants appeared to have guessed that the learner was faking their responses, and other variations showed that many participants refused to continue the shocks.

What This Means For You

There are many interesting and famous social psychology experiments that can reveal a lot about our understanding of social behavior and influence. However, it is important to be aware of the controversies, limitations, and criticisms of these studies. More recent research may reflect differing results. In some cases, the re-evaluation of classic studies has revealed serious ethical and methodological flaws that call the results into question.

Jeon HL. The environmental factor within the Solomon Asch Line Test. International Journal of Social Science and Humanity. 2014;4(4):264-268. doi:10.7763/IJSSH.2014.V4.360

Bandura and Bobo. Association for Psychological Science.

Zimbardo PG. The Stanford Prison Experiment: a simulation study on the psychology of imprisonment.

Le Texier T. Debunking the Stanford Prison Experiment. Am Psychol. 2019;74(7):823-839. doi:10.1037/amp0000401

Blum B. The lifespan of a lie. Medium.

Baker PC. Electric Schlock: Did Stanley Milgram's famous obedience experiments prove anything? Pacific Standard.

Perry G. Deception and illusion in Milgram's accounts of the obedience experiments. Theory Appl Ethics. 2013;2(2):79-92.

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

11+ Psychology Experiment Ideas (Goals + Methods)


Have you ever wondered why some days you remember things easily, while on others you keep forgetting? Or why certain songs make you super happy and others just…meh?

Our minds are like big, mysterious puzzles, and every day we're finding new pieces to fit. One of the coolest ways to explore our brains and the way they work is through psychology experiments.

A psychology experiment is a special kind of test or activity researchers use to learn more about how our minds work and why we behave the way we do.

It's like a detective game where scientists ask questions and try out different clues to find answers about our feelings, thoughts, and actions. These experiments aren't just for scientists in white coats but can be fun activities we all try to discover more about ourselves and others.

Some of these experiments have become so famous, they’re like the celebrities of the science world! Like the Marshmallow Test, where kids had to wait to eat a yummy marshmallow, or Pavlov's Dogs, where dogs learned to drool just hearing a bell.

Let's look at a few examples of psychology experiments you can do at home.

What Are Some Classic Experiments?

Imagine a time when the mysteries of the mind were being uncovered in groundbreaking ways. During these moments, a few experiments became legendary, capturing the world's attention with their intriguing results.


The Marshmallow Test

One of the most talked-about experiments of the 20th century was the Marshmallow Test, conducted by Walter Mischel in the late 1960s at Stanford University.

The goal was simple but profound: to understand a child's ability to delay gratification and exercise self-control.

Children were placed in a room with a marshmallow and given a choice: eat the marshmallow now or wait 15 minutes and receive two as a reward. Many kids struggled with the wait, some devouring the treat immediately, while others demonstrated remarkable patience.

But the experiment didn’t end there. Years later, Mischel discovered something astonishing. The children who had waited for the second marshmallow were generally more successful in several areas of life, from school achievements to job satisfaction!

While this experiment highlighted the importance of teaching patience and self-control from a young age, it wasn't without its criticisms. Some argued that a child's background, upbringing, or immediate surroundings might play a significant role in their choices.

Moreover, there were concerns about the ethics of judging a child's potential success based on a brief interaction with a marshmallow.

Pavlov's Dogs

Traveling further back in time and over to Russia, another classic experiment took the world by storm. Ivan Pavlov, in the early 1900s, wasn't initially studying learning or behavior. He was exploring the digestive systems of dogs.

But during his research, Pavlov stumbled upon a fascinating discovery. He noticed that by ringing a bell every time he fed his dogs, they eventually began to associate the bell's sound with mealtime. So much so, that merely ringing the bell, even without presenting food, made the dogs drool in anticipation!

This reaction demonstrated the concept of "conditioning" - where behaviors can be learned by linking two unrelated stimuli. Pavlov's work revolutionized the world's understanding of learning and had ripple effects in various areas like animal training and therapy techniques.

Pavlov came up with the term classical conditioning, which is still used today. Other psychologists have developed more nuanced types of conditioning that help us understand how people learn to perform different behaviors.

Classical conditioning is the process by which a neutral stimulus becomes associated with a meaningful stimulus, leading to the same response. In Pavlov's case, the neutral stimulus (bell) became associated with the meaningful stimulus (food), leading the dogs to salivate just by hearing the bell.
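Psychologists often model this kind of associative learning with a simple update rule. The sketch below uses a Rescorla-Wagner-style update (a standard textbook model, not anything Pavlov himself computed) to show the bell's predictive strength growing over repeated bell-food pairings; the learning rate here is an arbitrary illustrative value:

```python
# Toy Rescorla-Wagner-style update: the bell's associative
# strength v climbs toward a maximum of 1.0 with each pairing.
learning_rate = 0.3  # illustrative value, not from Pavlov's data
v = 0.0  # the bell starts out predicting nothing

for pairing in range(10):
    # Each bell-food pairing closes part of the remaining gap,
    # so the "surprise" (prediction error) shrinks every trial.
    v += learning_rate * (1.0 - v)

print(f"Associative strength after 10 pairings: {v:.3f}")
```

The curve this produces rises steeply at first and then levels off, which matches the everyday intuition that the first few pairings do most of the teaching.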

Modern thinkers often critique Pavlov's methods from an ethical standpoint. The dogs, crucial to his discovery, may not have been treated with today's standards of care and respect in research.

Both these experiments, while enlightening, also underline the importance of conducting research with empathy and consideration, especially when it involves living beings.

What is Ethical Experimentation?

The tales of Pavlov's bells and Mischel's marshmallows offer us not just insights into the human mind and behavior but also raise a significant question: At what cost do these discoveries come?

Ethical experimentation isn't just a fancy term; it's the backbone of good science. When we talk about ethics, we're referring to the moral principles that guide a researcher's decisions and actions. But why does it matter so much in the realm of psychological experimentation?

An example of an experiment that had major ethical issues is the Monster Study. Conducted in 1939, it investigated why children develop a stutter.

The major issue with it is that the psychologists treated some of the children poorly over a period of five months, telling them things like “You must try to stop yourself immediately. Don’t ever speak unless you can do it right.”

You can imagine how that made the children feel!

This study helped create guidelines for ethical treatment in experiments. The guidelines include:

Respect for Individuals: Whether it's a dog in Pavlov's lab or a child in Mischel's study room, every participant—human or animal—deserves respect. They should never be subjected to harm or undue stress. For humans, informed consent (knowing what they're signing up for) is a must. This means that if a child is participating, they, along with their guardians, should understand what the experiment entails and agree to it without being pressured.

Honesty is the Best Policy: Researchers have a responsibility to be truthful. This means not only being honest with participants about the study but also reporting findings truthfully, even if the results aren't what they hoped for. There can be exceptions if an experiment will only succeed if the participants aren't fully aware, but it has to be approved by an ethics committee .

Safety First: No discovery, no matter how groundbreaking, is worth harming a participant. The well-being and mental, emotional, and physical safety of participants is paramount. Experiments should be designed to minimize risks and discomfort.

Considering the Long-Term: Some experiments might have effects that aren't immediately obvious. For example, while a child might seem fine after participating in an experiment, they could feel stressed or anxious later on. Ethical researchers consider and plan for these possibilities, offering support and follow-up if needed.

The Rights of Animals: Just because animals can't voice their rights doesn't mean they don't have any. They should be treated with care, dignity, and respect. This means providing them with appropriate living conditions, not subjecting them to undue harm, and considering alternatives to animal testing when possible.

While the world of psychological experiments offers fascinating insights into behavior and the mind, it's essential to tread with care and compassion. The golden rule? Treat every participant, human or animal, as you'd wish to be treated. After all, the true mark of a groundbreaking experiment isn't just its findings but the ethical integrity with which it's conducted.

So, even if you're experimenting at home, please keep in mind the impact your experiments could have on the people and beings around you!

Let's get into some ideas for experiments.

1) Testing Conformity

Our primary aim with this experiment is to explore the intriguing world of social influences, specifically focusing on how much sway a group has over an individual's decisions. This social influence is called groupthink.

Humans, as social creatures, often find solace in numbers, seeking the approval and acceptance of those around them. But how deep does this need run? Does the desire to "fit in" overpower our trust in our own judgments?

This experiment not only provides insights into these questions but also touches upon the broader themes of peer pressure, societal norms, and individuality. Understanding this could shed light on various real-world situations, from why fashion trends catch on to more critical scenarios like how misinformation can spread.

Method: This idea is inspired by the classic Asch Conformity Experiments. Here's a simple way to try it:

  • Assemble a group of people (about 7-8). Only one person will be the real participant; the others will be in on the experiment.
  • Show the group a picture of three lines of different lengths and another line labeled "Test Line."
  • Ask each person to say out loud which of the three lines matches the length of the "Test Line."
  • Unknown to the real participant, the other members will intentionally choose the wrong line. This is to see if the participant goes along with the group's incorrect choice, even if they can see it's wrong.
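If you run several rigged rounds, you can summarize the outcome as a conformity rate: the fraction of trials on which the real participant echoed the group's wrong answer. A minimal sketch in Python, with invented trial data for illustration:

```python
# One entry per rigged trial: the group's deliberately wrong
# answer and what the real participant said. Data is invented.
trials = [
    {"group": "A", "participant": "A"},  # went along with the group
    {"group": "C", "participant": "B"},  # trusted their own eyes
    {"group": "B", "participant": "B"},  # went along with the group
]

conformed = sum(1 for t in trials if t["participant"] == t["group"])
rate = conformed / len(trials)
print(f"Conformed on {conformed} of {len(trials)} trials ({rate:.0%})")
```

Tallying it this way makes it easy to compare participants, or to compare a rigged group against a control round where nobody gives a wrong answer on purpose.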

Real-World Impacts of Groupthink

Groupthink is more than just a science term; we see it in our daily lives:

Decisions at Work or School: Imagine being in a group where everyone wants to do one thing, even if it's not the best idea. People might not speak up because they're worried about standing out or being the only one with a different opinion.

Wrong Information: Ever heard a rumor that turned out to be untrue? Sometimes, if many people believe and share something, others might believe it too, even if it's not correct. This happens a lot on the internet.

Peer Pressure: Sometimes, friends might all want to do something that's not safe or right. People might join in just because they don't want to feel left out.

Missing Out on New Ideas: When everyone thinks the same way and agrees all the time, cool new ideas might never get heard. It's like always coloring with the same crayon and missing out on all the other bright colors!

2) Testing Color and Mood


We all have favorite colors, right? But did you ever wonder if colors can make you feel a certain way? Color psychology is the study of how colors can influence our feelings and actions.

For instance, does blue always calm us down? Does red make us feel excited or even a bit angry? By exploring this, we can learn how colors play a role in our daily lives, from the clothes we wear to the color of our bedroom walls.

  • Find a quiet room and set up different colored lights or large sheets of colored paper: blue, red, yellow, and green.
  • Invite some friends over and let each person spend a few minutes under each colored light or in front of each colored paper.
  • After each color, ask your friends to write down or talk about how they feel. Are they relaxed? Energized? Happy? Sad?

Researchers have always been curious about this. Some studies have shown that colors like blue and green can make people feel calm, while colors like red might make them feel more alert or even hungry!

Real-World Impacts of Color Psychology

Ever noticed how different places use colors?

Hospitals and doctors' clinics often use soft blues and greens. This might be to help patients feel more relaxed and calm.

Many fast food restaurants use bright reds and yellows. These colors might make us feel hungry or want to eat quickly and leave.

Classrooms might use a mix of colors to help students feel both calm and energized.

3) Testing Music and Brainpower

Think about your favorite song. Do you feel smarter or more focused when you listen to it? This experiment seeks to understand the relationship between music and our brain's ability to remember things. Some people believe that certain types of music, like classical tunes, can help us study or work better. Let's find out if it's true!

  • Prepare a list of 10-15 things to remember, like a grocery list or names of places.
  • Invite some friends over. First, let them try to memorize the list in a quiet room.
  • After a short break, play some music (try different types like pop, classical, or even nature sounds) and ask them to memorize the list again.
  • Compare the results. Was there a difference in how much they remembered with and without music?
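Once everyone's scores are in, comparing the two conditions is just a matter of averaging. A small sketch assuming a 15-item list; the scores below are invented placeholders:

```python
# Items recalled out of 15 per friend, in each condition.
# These scores are invented placeholders, not real data.
quiet_scores = [11, 9, 13, 10]
music_scores = [12, 10, 14, 11]

def mean(xs):
    return sum(xs) / len(xs)

diff = mean(music_scores) - mean(quiet_scores)
print(f"Quiet: {mean(quiet_scores):.2f}  Music: {mean(music_scores):.2f}  "
      f"Difference: {diff:+.2f}")
```

With only a handful of friends, a small difference could easily be luck, so treat the comparison as a conversation starter rather than proof.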

The "Mozart Effect" is a popular idea. Some studies in the past suggested that listening to Mozart's music might make people smarter, at least for a little while. But other researchers think the effect might not be specific to Mozart; it could be that any music we enjoy boosts our mood and helps our brain work better.

Real-World Impacts of Music and Memory

Think about how we use music:

  • Study Sessions: Many students listen to music while studying, believing it helps them concentrate better.
  • Workout Playlists: Gyms play energetic music to keep people motivated and help them push through tough workouts.
  • Meditation and Relaxation: Calm, soothing sounds are often used to help people relax or meditate.

4) Testing Dreams and Food

Ever had a really wild dream and wondered where it came from? Some say that eating certain foods before bedtime can make our dreams more vivid or even a bit strange.

This experiment is all about diving into the dreamy world of sleep to see if what we eat can really change our nighttime adventures. Can a piece of chocolate or a slice of cheese transport us to a land of wacky dreams? Let's find out!

  • Ask a group of friends to keep a "dream diary" for a week. Every morning, they should write down what they remember about their dreams.
  • For the next week, ask them to eat a small snack before bed, like cheese, chocolate, or even spicy foods.
  • They should continue writing in their "dream diary" every morning.
  • At the end of the two weeks, compare the dream notes. Do the dreams seem different during the snack week?

The link between food and dreams isn't super clear, but some people have shared personal stories. For example, some say that spicy food can lead to bizarre dreams. Scientists aren't completely sure why, but it could be related to how food affects our body temperature or brain activity during sleep.

A cool idea related to this experiment is that of vivid dreams, which are very clear, detailed, and easy to remember. Some people are even able to control their vivid dreams, or say that they feel as real as daily, waking life!

Real-World Impacts of Food and Dreams

Our discoveries might shed light on:

  • Bedtime Routines: Knowing which foods might affect our dreams can help us choose better snacks before bedtime, especially if we want calmer sleep.
  • Understanding Our Brain: Dreams can be mysterious, but studying them can give us clues about how our brains work at night.
  • Cultural Beliefs: Many cultures have myths or stories about foods and dreams. Our findings might add a fun twist to these age-old tales!

5) Testing Mirrors and Self-image

Stand in front of a mirror. How do you feel? Proud? Shy? Curious? Mirrors reflect more than just our appearance; they might influence how we think about ourselves.

This experiment delves into the mystery of self-perception. Do we feel more confident when we see our reflection? Or do we become more self-conscious? Let's take a closer look.

  • Set up two rooms: one with mirrors on all walls and another with no mirrors at all.
  • Invite friends over and ask them to spend some time in each room doing normal activities, like reading or talking.
  • After their time in both rooms, ask them questions like: "Did you think about how you looked more in one room? Did you feel more confident or shy?"
  • Compare the responses to see if the presence of mirrors changes how they feel about themselves.

Studies have shown that when people are in rooms with mirrors, they can become more aware of themselves. Some might stand straighter, fix their hair, or even change how they behave. The mirror acts like an audience, making us more conscious of our actions.

Real-World Impacts of Mirrors and Self-perception

Mirrors aren't just for checking our hair. Ever wonder why clothing stores have so many mirrors? They might help shoppers visualize themselves in new outfits, encouraging them to buy.

Mirrors in gyms can motivate people to work out with correct form and posture. They also help us see progress in real-time!

And sometimes, looking in a mirror can be a reminder to take care of ourselves, both inside and out.

But remember, what we look like isn't as important as how we act in the world or how healthy we are. Some people claim that having too many mirrors around can actually make us more self-conscious and distract us from the good parts of ourselves.

Some studies are showing that mirrors can actually increase self-compassion, amongst other things. Like any tool, mirrors can be both good and bad, depending on how we use them!

6) Testing Plants and Talking


Have you ever seen someone talking to their plants? It might sound silly, but some people believe that plants can "feel" our vibes and that talking to them might even help them grow better.

In this experiment, we'll explore whether plants can indeed react to our voices and if they might grow taller, faster, or healthier when we chat with them.

  • Get three similar plants, placing each one in a separate room.
  • Every day for a month, talk to the first plant, saying positive things like "You're doing great!" or singing to it.
  • Say negative things to the second plant, like "You're not growing fast enough!"
  • Don't talk to the third plant at all; let it be your "silent" control group.
  • Water all plants equally and make sure they all get the same amount of light.
  • At the end of the month, measure the growth of each plant and note any differences in their health or size.

The idea isn't brand new. Some experiments from the past suggest plants might respond to sounds or vibrations. Some growers play music for their crops, thinking it helps them flourish.

Even if talking to our plants doesn't have an impact on their growth, it can make us feel better! Sometimes, if we are lonely, talking to our plants can help us feel less alone. Remember, they are living too!

Real-World Impacts of Talking to Plants

If plants do react to our voices, gardeners and farmers might adopt new techniques, like playing music in greenhouses or regularly talking to plants.

Taking care of plants and talking to them could become a recommended activity for reducing stress and boosting mood.

And if plants react to sound, it gives us a whole new perspective on how connected all living things might be.

7) Testing Virtual Reality and Senses

Virtual reality (VR) seems like magic, doesn't it? You put on a headset and suddenly, you're in a different world! But how does this "new world" affect our senses? This experiment wants to find out how our brains react to VR compared to the real world. Do we feel, see, or hear things differently? Let's get to the bottom of this digital mystery!

  • You'll need a VR headset and a game or experience that can be replicated in real life (like walking through a forest). If you don't have a headset yourself, there are virtual reality arcades now!
  • Invite friends to first experience the scenario in VR.
  • Afterwards, replicate the experience in the real world, like taking a walk in an actual forest.
  • Ask them questions about both experiences: Did one seem more real than the other? Which sounds were clearer? Which colors were brighter? Did they feel different emotions?
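Once you've collected everyone's answers, a quick tally shows which experience won on each question. Here's a tiny Python sketch with invented answers to "Which felt more real?":

```python
from collections import Counter

# Invented answers to "Which experience felt more real?"
answers = ["real world", "real world", "VR", "real world", "VR", "real world"]

tally = Counter(answers)
print(tally.most_common())  # most frequent answer first
```

You can make a separate tally like this for each question you asked.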

As VR becomes more popular, scientists have been curious about its effects. Some studies show that our brains can sometimes struggle to tell the difference between VR and reality. That's why some people might feel like they're really "falling" in a VR game even though they're standing still.

Real-World Impacts of VR on Our Senses

Schools might use VR to teach lessons, like taking students on a virtual trip to ancient Egypt. Understanding how our senses react in VR can also help game designers create even more exciting and realistic games.

Doctors could use VR to help patients overcome fears or to provide relaxation exercises. Therapists already use this method to help patients who have serious phobias. It's called exposure therapy, which basically means slowly exposing someone (or yourself) to the thing they fear, starting from very far away and gradually getting closer.

For instance, if someone is afraid of snakes, you might show them images of snakes first. Once they are comfortable with the pictures, they might sit in a room knowing there is a snake in the next room. Once they are okay with that, they might use a VR headset to see a snake in the same room with them, though of course there is no actual snake there.

8) Testing Sleep and Learning

We all know that feeling of trying to study or work when we're super tired. Our brains feel foggy, and it's hard to remember stuff. But how exactly does sleep (or lack of it) influence our ability to learn and remember things?

With this experiment, we'll uncover the mysteries of sleep and see how it can be our secret weapon for better learning.

  • Split participants into two groups.
  • Ask both groups to study the same material in the evening.
  • One group goes to bed early, while the other stays up late.
  • The next morning, give both groups a quiz on what they studied.
  • Compare the results to see which group remembered more.
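Comparing the two groups is just a matter of averaging the quiz scores. This short Python sketch uses invented scores to show the idea:

```python
from statistics import mean

# Invented quiz scores out of 10 for each group.
early_bedtime = [8, 9, 7, 9, 8]
stayed_up_late = [6, 7, 5, 7, 6]

difference = mean(early_bedtime) - mean(stayed_up_late)
print(f"The early-bedtime group scored {difference:.1f} points higher on average.")
```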

Sleep and its relation to learning have been explored a lot. Scientists believe that during sleep, especially deep sleep, our brains sort and store new information. This is why sometimes, after a good night's rest, we might understand something better or remember more.

Real-World Impacts of Sleep and Learning

Understanding the power of sleep can help:

  • Students: If they know the importance of sleep, students might plan better, mixing study sessions with rest, especially before big exams.
  • Workplaces: Employers might consider more flexible hours, understanding that well-rested employees learn faster and make fewer mistakes.
  • Health: Regularly missing out on sleep can have other bad effects on our health. So, promoting good sleep is about more than just better learning.

9) Testing Social Media and Mood

Have you ever felt different after spending time on social media? Maybe happy after seeing a friend's fun photos, or a bit sad after reading someone's tough news.

Social media is a big part of our lives, but how does it really affect our mood? This experiment aims to shine a light on the emotional roller-coaster of likes, shares, and comments.

  • Ask participants to note down how they're feeling: are they happy, sad, excited, or bored?
  • Have them spend a set amount of time (like 30 minutes) on their favorite social media platforms.
  • After the session, ask them again about their mood. Did it change? Why?
  • Discuss what they saw or read that made them feel that way.
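Because you ask each person about their mood twice, you can subtract the "before" rating from the "after" rating to see how much each person changed. The ratings below are invented examples:

```python
# Invented mood ratings (1-10) before and after a 30-minute session.
before = [6, 7, 5, 8, 6]
after = [5, 8, 4, 8, 5]

# A positive change means mood improved; negative means it dropped.
changes = [a - b for b, a in zip(before, after)]
average_change = sum(changes) / len(changes)
print(f"Average mood change: {average_change:+.1f}")
```

Looking at the individual changes, not just the average, can reveal that social media lifts some people's moods while lowering others'.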

Previous research has shown mixed results. Some studies suggest that seeing positive posts can make us feel good, while others say that too much time on social media can make us feel lonely or left out.

Real-World Impacts of Social Media on Mood

Understanding the emotional impact of social media can help users understand their feelings and take breaks if needed. Knowing is half the battle! Additionally, teachers and parents can guide young users on healthy social media habits, like limiting time or following positive accounts.

And if it's shown that social media does impact mood, social media companies can design friendlier, less stressful user experiences.

But even if the social media companies don't change things, we can still change our social media habits to make ourselves feel better.

10) Testing Handwriting or Typing

Think about the last time you took notes. Did you grab a pen and paper or did you type them out on a computer or tablet?

Both ways are popular, but there's a big question: which method helps us remember and understand better? In this experiment, we'll find out if the classic art of handwriting has an edge over speedy typing.

  • Divide participants into two groups.
  • Present a short lesson or story to both groups.
  • One group will take notes by hand, while the other will type them out.
  • After some time, quiz both groups on the content of the lesson or story.
  • Compare the results to see which note-taking method led to better recall and understanding.
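One simple way to compare the groups is to look at each group's median (middle) quiz score, which is less thrown off by one unusually high or low scorer than the average is. The scores here are invented:

```python
from statistics import median

# Invented quiz scores out of 10 for each note-taking group.
handwriting = [7, 8, 6, 9, 8]
typing = [6, 7, 7, 6, 8]

print("Handwriting median:", median(handwriting))
print("Typing median:", median(typing))
```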

Studies have shown some interesting results. While typing can be faster and allows for more notes, handwriting might boost memory and comprehension because it engages the brain differently, making us process the information as we write.

Importantly, each person might find one or the other works better for them. This could be useful in understanding our learning habits and what instructional style would be best for us.

Real-World Impacts of Handwriting vs. Typing

Knowing the pros and cons of each method can:

  • Boost Study Habits: Students can pick the method that helps them learn best, especially during important study sessions or lectures.
  • Work Efficiency: In jobs where information retention is crucial, understanding the best method can increase efficiency and accuracy.
  • Tech Design: If we find out more about how handwriting benefits us, tech companies might design gadgets that mimic the feel of writing while combining the advantages of digital tools.

11) Testing Money and Happiness


We often hear the saying, "Money can't buy happiness," but is that really true? Many dream of winning the lottery or getting a big raise, believing it would solve all problems.

In this experiment, we dig deep to see if there's a real connection between wealth and well-being.

  • Survey a range of participants, from those who earn a little to those who earn a lot, about their overall happiness. You can keep it to your friends and family, but that might not be as accurate as surveying a wider group of people.
  • Ask them to rank things that bring them joy and note whether they believe more money would boost their happiness. You could try two versions: one where you give them a set list to rank (gardening, spending time with friends, reading books, learning, and so on), and one where you leave a blank list they fill in with their own ideas.
  • Study the data to find patterns or trends about income and happiness.

Some studies have found money can boost happiness, especially when it helps people out of tough financial spots. But after reaching a certain income, extra dollars usually do not add much extra joy.

In fact, some psychologists have found that once people earn enough to comfortably cover their needs (and some of their wants), extra income adds little extra happiness. One well-known estimate put that point at roughly $75,000 a year, though of course it depends on the cost of living and how many people are in the family.

Real-World Impacts of Money and Happiness

If we can understand the link between money and joy, it might help folks choose jobs they love over jobs that just pay well. And instead of buying things, people might spend on experiences, like trips or classes, that make lasting memories.

Most importantly, we all might spend more time on hobbies, friends, and family, knowing they're big parts of what makes life great.

Some people are hoping that with Artificial Intelligence being able to do a lot of the less well-paying jobs, people might be able to do work they enjoy more, all while making more money and having more time to do the things that make them happy.

12) Testing Temperature and Productivity

Have you ever noticed how a cold classroom or office makes it harder to focus? Or how on hot days, all you want to do is relax? In this experiment, we're going to find out if the temperature around us really does change how well we work.

  • Find a group of participants and a room where you can change the temperature.
  • Set the room to a chilly temperature and give the participants a set of tasks to do.
  • Measure how well and quickly they do these tasks.
  • The next day, make the room comfortably warm and have them do similar tasks.
  • Compare the results to see if the warmer or cooler temperature made them work better.
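Since the same people work in both rooms, you can check each participant's two scores side by side and count how many did better in the warm room. The task counts below are invented:

```python
# Invented task counts for the same five participants on both days.
chilly_room = [12, 15, 10, 14, 11]
warm_room = [14, 16, 12, 13, 15]

improved = sum(1 for cold, warm in zip(chilly_room, warm_room) if warm > cold)
print(f"{improved} of {len(chilly_room)} participants did better in the warm room.")
```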

Some studies have shown that people can work better when they're in a room that feels just right, not too cold or hot. Being too chilly can make fingers slow, and being too warm can make minds wander.

What temperature is "just right"? It won't be the same for everyone, but most people find it's between 70 and 73 degrees Fahrenheit (21-23 degrees Celsius).

Real-World Implications of Temperature and Productivity

If we can learn more about how temperature affects our work, teachers might set classroom temperatures to help students focus and learn better, offices might adjust temperatures to get the best work out of their teams, and at home, we might find the best temperature for doing homework or chores quickly and well.

Interestingly, temperature also has an impact on our sleep quality. Most people find slightly cooler rooms to be better for good sleep. While the daytime temperature between 70-73F is good for productivity, a nighttime temperature around 65F (18C) is ideal for most people's sleep.

Psychology is like a treasure hunt, where the prize is understanding ourselves better. With every experiment, we learn a little more about why we think, feel, and act the way we do. Some of these experiments might seem simple, like seeing if colors change our mood or if being warm helps us work better. But even the simple questions can have big answers that help us in everyday life.

Remember, while doing experiments is fun, it's also important to always be kind and think about how others feel. We should never make someone uncomfortable just for a test. Instead, let's use these experiments to learn and grow, helping to make the world a brighter, more understanding place for everyone.
