
Violence, Media Effects, and Criminology

  • Nickie D. Phillips, Department of Sociology and Criminal Justice, St. Francis College
  • https://doi.org/10.1093/acrefore/9780190264079.013.189
  • Published online: 27 July 2017

Debate surrounding the impact of media representations on violence and crime has raged for decades and shows no sign of abating. Over the years, the targets of concern have shifted from film to comic books to television to video games, but the central questions remain the same. What is the relationship between popular media and audience emotions, attitudes, and behaviors? While media effects research covers a vast range of topics—from the study of its persuasive effects in advertising to its positive impact on emotions and behaviors—of particular interest to criminologists is the relationship between violence in popular media and real-life aggression and violence. Does media violence cause aggression and/or violence?

The study of media effects is informed by a variety of theoretical perspectives and spans many disciplines including communications and media studies, psychology, medicine, sociology, and criminology. Decades of research have amassed on the topic, yet there is no clear agreement about the impact of media or about which methodologies are most appropriate. Instead, there continues to be disagreement about whether media portrayals of violence are a serious problem and, if so, how society should respond.

Conflicting interpretations of research findings inform and shape public debate around media effects. Although there seems to be a consensus among scholars that exposure to media violence impacts aggression, there is less agreement around its potential impact on violence and criminal behavior. While a few criminologists focus on the phenomenon of copycat crimes, most rarely engage with whether media directly causes violence. Instead, they explore broader considerations of the relationship between media, popular culture, and society.

  • media exposure
  • criminal behavior
  • popular culture
  • media violence
  • media and crime
  • copycat crimes

Media Exposure, Violence, and Aggression

On Friday, July 22, 2016, a gunman killed nine people at a mall in Munich, Germany. The 18-year-old shooter was subsequently characterized by the media as being under psychiatric care and harboring at least two obsessions: one with mass shootings, including that of Anders Breivik, who ultimately killed 77 people in Norway in 2011, and the other with video games. A Los Angeles, California, news report stated that the gunman was “an avid player of first-person shooter video games, including ‘Counter-Strike,’” while another headline similarly declared, “Munich gunman, a fan of violent video games, rampage killers, had planned attack for a year” (CNN Wire, 2016; Reuters, 2016). This high-profile incident was hardly the first to link popular culture to violent crime. Notably, in the aftermath of the 1999 Columbine shooting massacre, for example, media sources implicated and later discredited music, video games, and a gothic aesthetic as causal factors in the crime (Cullen, 2009; Yamato, 2016). Other, more recent, incidents have echoed similar claims suggesting that popular culture has a nefarious influence on consumers.

Media violence and its impact on audiences are among the most researched and examined topics in communications studies (Hetsroni, 2007). Yet debate over whether media violence causes aggression and violence persists, particularly in response to high-profile criminal incidents. Blaming video games, and other forms of media and popular culture, for contributing to violence is not a new phenomenon. However, interpreting media effects can be difficult because commentators often assert a grand consensus that understates the more contradictory and nuanced interpretations of the data.

In fact, there is a consensus among many media researchers that media violence has an impact on aggression although its impact on violence is less clear. For example, in response to the shooting in Munich, Brad Bushman, professor of communication and psychology, avoided pinning the incident solely on video games, but in the process supported the assertion that video gameplay is linked to aggression. He stated,

While there isn’t complete consensus in any scientific field, a study we conducted showed more than 90% of pediatricians and about two-thirds of media researchers surveyed agreed that violent video games increase aggression in children. (Bushman, 2016)

Others, too, have reached similar conclusions with regard to other media. In 2008, psychologist John Murray summarized decades of research, stating, “Fifty years of research on the effect of TV violence on children leads to the inescapable conclusion that viewing media violence is related to increases in aggressive attitudes, values, and behaviors” (Murray, 2008, p. 1212). Scholars Glenn Sparks and Cheri Sparks similarly declared that,

Despite the fact that controversy still exists about the impact of media violence, the research results reveal a dominant and consistent pattern in favor of the notion that exposure to violent media images does increase the risk of aggressive behavior. (Sparks & Sparks, 2002, p. 273)

In 2014, psychologist Wayne Warburton more broadly concluded that the vast majority of studies have found “that exposure to violent media increases the likelihood of aggressive behavior in the short and longterm, increases hostile perceptions and attitudes, and desensitizes individuals to violent content” (Warburton, 2014, p. 64).

Criminologists, too, are sensitive to the impact of media exposure. For example, Jacqueline Helfgott summarized the research:

There have been over 1000 studies on the effects of TV and film violence over the past 40 years. Research on the influence of TV violence on aggression has consistently shown that TV violence increases aggression and social anxiety, cultivates a “mean view” of the world, and negatively impacts real-world behavior. (Helfgott, 2015, p. 50)

In his book, Media Coverage of Crime and Criminal Justice, criminologist Matthew Robinson stated, “Studies of the impact of media on violence are crystal clear in their findings and implications for society” (Robinson, 2011, p. 135). He cited studies on childhood exposure to violent media leading to aggressive behavior as evidence. In his pioneering book Media, Crime, and Criminal Justice, criminologist Ray Surette concurred that media violence is linked to aggression, but offered a more nuanced interpretation. He stated,

a small to modest but genuine causal role for media violence regarding viewer aggression has been established for most beyond a reasonable doubt . . . There is certainly a connection between violent media and social aggression, but its strength and configuration is simply not known at this time. (Surette, 2011, p. 68)

The uncertainties about the strength of the relationship, and the lack of evidence linking media violence to real-world violence, are often lost in news media accounts of high-profile violent crimes.

Media Exposure and Copycat Crimes

While many scholars do seem to agree that there is evidence that media violence—whether that of film, TV, or video games—increases aggression, they disagree about its impact on violent or criminal behavior (Ferguson, 2014; Gunter, 2008; Helfgott, 2015; Reiner, 2002; Savage, 2008). Nonetheless, it is violent incidents that most often prompt speculation that media causes violence. More specifically, violence that appears to mimic portrayals of violent media tends to ignite controversy. For example, the idea that films contribute to violent crime is not a new assertion. Films such as A Clockwork Orange, Menace II Society, Set It Off, and Child’s Play 3 have been linked to crimes, and at least eight murders have been linked to Oliver Stone’s 1994 film Natural Born Killers (Bracci, 2010; Brooks, 2002; PBS, n.d.). Nonetheless, pinpointing a direct, causal relationship between media and violent crime remains elusive.

Criminologist Jacqueline Helfgott defined copycat crime as a “crime that is inspired by another crime” (Helfgott, 2015, p. 51). The idea is that offenders model their behavior on media representations of violence, whether real or fictional. One case, in particular, illustrated how popular culture, media, and criminal violence converge. On July 20, 2012, James Holmes entered the midnight premiere of The Dark Knight Rises, the third film in the massively successful Batman trilogy, in a movie theater in Aurora, Colorado. He shot and killed 12 people and wounded 70 others. At the time, the New York Times described the incident,

Witnesses told the police that Mr. Holmes said something to the effect of “I am the Joker,” according to a federal law enforcement official, and that his hair had been dyed or he was wearing a wig. Then, as people began to rise from their seats in confusion or anxiety, he began to shoot. The gunman paused at least once, several witnesses said, perhaps to reload, and continued firing. (Frosch & Johnson, 2012)

The dyed hair, Holmes’s alleged comment, and the fact that the incident occurred at a popular screening led many to speculate that the shooter was influenced by the earlier film in the trilogy, and reignited debate about the impact of media violence. The Daily Mail pointed out that Holmes may have been motivated by a 25-year-old Batman comic in which a gunman opens fire in a movie theater—thus further suggesting the iconic villain served as motivation for the attack (Graham & Gallagher, 2012). Perceptions of the “Joker connection” fed into the notion that popular media has a direct causal influence on violent behavior, even as press reports later indicated that Holmes had not, in fact, made reference to the Joker (Meyer, 2015).

A week after the Aurora shooting, the New York Daily News published an article detailing a “possible copycat” crime. A suspect was arrested in his Maryland home after making threatening phone calls to his workplace. The article reported that the suspect stated, “I am a [sic] joker” and “I’m going to load my guns and blow everybody up.” In their search, police found “a lethal arsenal of 25 guns and thousands of rounds of ammunition” in the suspect’s home (McShane, 2012).

Though criminologists are generally skeptical that those who commit violent crimes are motivated solely by media violence, there does seem to be some evidence that media may be influential in shaping how some offenders commit crime. In his study of serious and violent juvenile offenders, criminologist Ray Surette found that “about one out of three juveniles reports having considered a copycat crime and about one out of four reports actually having attempted one.” He concluded that “those juveniles who are self-reported copycats are significantly more likely to credit the media as both a general and personal influence.” Surette contended that though violent offenses garner the most media attention, copycat criminals are more likely to be career criminals and to commit property crimes rather than violent crimes (Surette, 2002, pp. 56, 63; Surette, 2011).

Discerning which crimes may be classified as copycat crimes is a challenge. Jacqueline Helfgott suggested they occur on a “continuum of influence.” On one end, she said, media plays a relatively minor role as a “component of the modus operandi” of the offender, while on the other end, “personality disordered media junkies” have difficulty distinguishing reality from violent fantasy. According to Helfgott, various factors such as individual characteristics, characteristics of media sources, relationship to media, demographic factors, and cultural factors are influential. Overall, scholars suggest that rather than pushing unsuspecting viewers to commit crimes, media more often influences how, rather than why, someone commits a crime (Helfgott, 2015; Marsh & Melville, 2014).

Given the public interest, there is relatively little research devoted to exactly what copycat crimes are and how they occur. Part of the problem in studying these types of crimes is the difficulty of defining and measuring the concept. In an effort to clarify and empirically measure the phenomenon, Surette offered a scale that included seven indicators of copycat crimes: time order (media exposure must occur before the crime); time proximity (a five-year cut-off point of exposure); theme consistency (“a pattern of thought, feeling or behavior in the offender which closely parallels the media model”); scene specificity (mimicking a specific scene); repetitive viewing; self-editing (repeated viewing of a single scene while “the balance of the film is ignored”); and offender statements and second-party statements indicating the influence of media. Findings demonstrated that cases are often prematurely, if not erroneously, labeled as “copycat.” Surette suggested that use of the scale offers a more precise way for researchers to objectively measure trends and frequency of copycat crimes (Surette, 2016, p. 8).

Media Exposure and Violent Crimes

Overall, a causal link between media exposure and violent criminal behavior has yet to be validated, and most researchers steer clear of making such causal assumptions. Instead, many emphasize that media does not directly cause aggression and violence so much as operate as a risk factor among other variables (Bushman & Anderson, 2015; Warburton, 2014). In their review of media effects, Brad Bushman and psychologist Craig Anderson concluded,

In sum, extant research shows that media violence is a causal risk factor not only for mild forms of aggression but also for more serious forms of aggression, including violent criminal behavior. That does not mean that violent media exposure by itself will turn a normal child or adolescent who has few or no other risk factors into a violent criminal or a school shooter. Such extreme violence is rare, and tends to occur only when multiple risk factors converge in time, space, and within an individual. (Bushman & Anderson, 2015, p. 1817)

Surette, however, argued that there is no clear linkage between media exposure and criminal behavior—violent or otherwise. In other words, a link between media violence and aggression does not necessarily mean that exposure to violent media causes violent (or nonviolent) criminal behavior. Though there are thousands of articles addressing media effects, many of these consist of reviews or commentary about prior research findings rather than original studies (Brown, 2007; Murray, 2008; Savage, 2008; Surette, 2011). Fewer still are studies that specifically measure media violence and criminal behavior (Gunter, 2008; Strasburger & Donnerstein, 2014). In their meta-analysis investigating the link between media violence and criminal aggression, scholars Joanne Savage and Christina Yancey did not find support for the assertion. Instead, they concluded,

The study of most consequence for violent crime policy actually found that exposure to media violence was significantly negatively related to violent crime rates at the aggregate level . . . It is plain to us that the relationship between exposure to violent media and serious violence has yet to be established. (Savage & Yancey, 2008, p. 786)

Researchers continue to measure the impact of media violence across various forms of media and generally stop short of drawing a direct causal link, favoring more indirect effects. For example, one study examined the increase of gun violence in films over the years and concluded that violent scenes provide scripts for youth that justify gun violence and that, in turn, may amplify aggression (Bushman, Jamieson, Weitz, & Romer, 2013). But others report contradictory findings. Patrick Markey and colleagues studied the relationship between rates of homicide and aggravated assault and gun violence in films from 1960 to 2012 and found that, over the years, violent content in films increased while crime rates declined. After controlling for age shifts, poverty, education, incarceration rates, and economic inequality, the relationships remained statistically non-significant (Markey, French, & Markey, 2015, p. 165). Psychologist Christopher Ferguson likewise failed to find a relationship between violent content in films and video games and societal violence (Ferguson, 2014).

Another study, by Gordon Dahl and Stefano DellaVigna, examined violent films from 1995 to 2004 and found that decreases in violent crime coincided with attendance at violent blockbuster movies. Here, it was not the content that was alleged to impact crime rates, but instead what the authors called “voluntary incapacitation,” or the shifting of daily activities from potential criminal behavior to movie attendance. The authors concluded, “For each million people watching a strongly or mildly violent movie, respectively, violent crime decreases by 1.9% and 2.1%. Nonviolent movies have no statistically significant impact” (Dahl & DellaVigna, p. 39).

High-profile cases over the last several years have shifted public concern toward the perceived danger of video games, but research demonstrating a link between video games and criminal violence remains scant. The American Psychological Association declared that “research demonstrates a consistent relation between violent video game use and increases in aggressive behavior, aggressive cognitions and aggressive affect, and decreases in prosocial behavior, empathy and sensitivity to aggression . . .” but stopped short of claiming that video games impact criminal violence. According to Breuer and colleagues, “While all of the available meta-analyses . . . found a relationship between aggression and the use of (violent) video games, the size and interpretation of this connection differ largely between these studies . . .” (APA, 2015; Breuer et al., 2015; DeCamp, 2015). Further, psychologists Patrick Markey, Charlotte Markey, and Juliana French conducted four time-series analyses investigating the relationship between video game habits and assault and homicide rates. The studies measured rates of violent crime, annual and monthly video game sales, Internet searches for video game walkthroughs, and rates of violent crime occurring after the release dates of popular games. The results showed no relationship between video game habits and rates of aggravated assault and homicide. Instead, there was some indication of decreases in crime (Markey, Markey, & French, 2015).

Another longitudinal study failed to find that video game use predicted aggression, instead finding support for the “selection hypothesis”—that physically aggressive individuals (aged 14–17) were more likely to choose media content that contained violence than those slightly older, aged 18–21. Additionally, the researchers concluded,

that violent media do not have a substantial impact on aggressive personality or behavior, at least in the phases of late adolescence and early adulthood that we focused on. (Breuer, Vogelgesang, Quandt, & Festl, 2015, p. 324)

Overall, the lack of a consistent finding that media exposure causes violent crime may not be particularly surprising, given that studies linking media exposure, aggression, and violence suffer from a host of general criticisms. By way of explanation, social theorist David Gauntlett maintained that researchers frequently employ problematic definitions of aggression and violence, use questionable methodologies, rely too much on fictional violence, neglect the social meaning of violence, and assume the third-person effect—that is, assume that other, vulnerable people are impacted by media, but “we” are not (Ferguson & Dyck, 2012; Gauntlett, 2001).

Others, such as scholars Martin Barker and Julian Petley, flatly reject the notion that violent media exposure is a causal factor in aggression and/or violence. In their book Ill Effects, the authors stated instead that it is simply “stupid” to ask “what are the effects of [media] violence” without taking context into account (p. 2). They counter what they describe as moral campaigners who advance the idea that media violence causes violence. Instead, Barker and Petley argue that audiences interpret media violence in a variety of ways based on their histories, experiences, and knowledge, and as such, it makes little sense to claim that media “cause” violence (Barker & Petley, 2001).

Given the seemingly inconclusive and contradictory findings of media effects research, to say that the debate can, at times, be contentious is an understatement. One article published in European Psychologist queried “Does Doing Media Violence Research Make One Aggressive?” and lamented that the debate had devolved into an ideological one (Elson & Ferguson, 2013). Another academic journal published a special issue devoted to video games and youth, including a transcript of exchanges between two scholars to demonstrate that a “peaceful debate” was, in fact, possible (Ferguson & Konijn, 2015).

Nonetheless, in this debate, the stakes are high and the policy consequences profound. After examining over 900 published articles, publication patterns, prominent authors and coauthors, and disciplinary interest in the topic, scholar James Anderson argued that prominent media effects scholars, whom he deems the “causationists,” had developed a cottage industry dependent on funding by agencies focused primarily on the negative effects of media on children. Anderson argued that such a focus presents media as a threat to family values and ultimately operates as a zero-sum game. As a result, attention and resources are diverted toward media and away from other priorities that are essential to understanding aggression, such as social disadvantage, substance abuse, and parental conflict (Anderson, 2008, p. 1276).

Theoretical Perspectives on Media Effects

Understanding how media may impact attitudes and behavior has been the focus of media and communications studies for decades. Numerous theoretical perspectives offer insight into how, and to what extent, the media impacts the audience. As scholar Jenny Kitzinger documented in 2004, there are generally two ways to approach the study of media effects. One is to foreground the power of media—that is, to suggest that the media holds powerful sway over viewers. The other is to foreground the power and heterogeneity of the audience and to recognize that it is composed of active agents (Kitzinger, 2004).

The notion of an all-powerful media can be traced to the influence of scholars affiliated with the Institute for Social Research, or Frankfurt School, in the 1930s and 1940s, and to proponents of mass society theory. The institute was originally founded in Germany but later moved to the United States. Criminologist Yvonne Jewkes outlined how mass society theory assumed that members of the public were susceptible to media messages. This, theorists argued, was a result of rapidly changing social conditions and industrialization that produced isolated, impressionable individuals “cut adrift from kinship and organic ties and lacking moral cohesion” (Jewkes, 2015, p. 13). In this historical context, in the era of World War II, the impact of Nazi propaganda was particularly resonant. Here, the media was believed to exhibit a unidirectional flow, operating as a powerful force influencing the masses. The most useful metaphor for this perspective described the media as a “hypodermic syringe” that could “‘inject’ values, ideas and information directly into the passive receiver producing direct and unmediated ‘effects’” (Jewkes, 2015, pp. 16, 34). Though the hypodermic syringe model seems simplistic today, the idea that the media is all-powerful continues to inform contemporary public discourse around media and violence.

Concern about the power of media captured the attention of researchers interested in its purported negative impact on children. In one of the earliest series of studies in the United States, conducted during the late 1920s and 1930s, researchers attempted to quantitatively measure media effects through the Payne Fund Studies. For example, they investigated how film, a relatively new medium, impacted children’s attitudes and behaviors, including antisocial and violent behavior. At the time, the Payne Fund Studies’ findings fueled the notion that children were indeed negatively influenced by films. This prompted the film industry to adopt a self-imposed code regulating content (Sparks & Sparks, 2002; Surette, 2011). Not everyone agreed with the approach. In fact, the methodologies employed in the studies received much criticism, and ultimately the movement was branded a moral crusade to regulate film content. Scholars Garth Jowett, Ian Jarvie, and Kathryn Fuller wrote about the significance of the studies,

We have seen this same policy battle fought and refought over radio, television, rock and roll, music videos and video games. Their researchers looked to see if intuitive concerns could be given concrete, measurable expression in research. While they had partial success, as have all subsequent efforts, they also ran into intractable problems . . . Since that day, no way has yet been found to resolve the dilemma of cause and effect: do crime movies create more crime, or do the criminally inclined enjoy and perhaps imitate crime movies? (Jowett, Jarvie, & Fuller, 1996, p. 12)

As the debate continued, more sophisticated theoretical perspectives emerged. Efforts to empirically measure the impact of media on aggression and violence continued, albeit with equivocal results. In the 1950s and 1960s, psychological behaviorism—understanding psychological motivations through observable behavior—became a prominent lens through which to view the causal impact of media violence. This type of research was exemplified by Albert Bandura’s Bobo doll studies, which demonstrated that children exposed to aggressive behavior, either observed in real life or on film, behaved more aggressively than those in control groups who were not exposed to the behavior. The assumption derived was that children learn through exposure and imitate behavior (Bandura, Ross, & Ross, 1963). Though influential, the Bandura experiments were nevertheless heavily criticized. Some argued the laboratory conditions under which children were exposed to media were not generalizable to real-life conditions. Others challenged the assumption that children absorb media content in an unsophisticated manner, unable to distinguish between fantasy and reality. In fact, later studies did find children to be more discerning consumers of media than popularly believed (Gauntlett, 2001).

Hugely influential in our understanding of human behavior, the concept of social learning has been at the core of more contemporary understandings of media effects. For example, scholar Christopher Ferguson noted that the General Aggression Model (GAM), rooted in social learning and cognitive theory, has for decades been a dominant model for understanding how media impacts aggression and violence. GAM describes the idea that “aggression is learned by the activation and repetition of cognitive scripts coupled with the desensitization of emotional responses due to repeated exposure.” However, Ferguson noted that its usefulness has been debated, and he advocated a paradigm shift (Ferguson, 2013, pp. 65, 27; Krahé, 2014).

Though the methodologies of the Payne Fund Studies and the Bandura studies were heavily criticized, concern over media effects continued to be tied to larger moral debates, including the fear of moral decline and concern over the welfare of children. Most notably, in the 1950s, psychiatrist Fredric Wertham warned of the dangers of comic books, a hugely popular medium at the time, and their impact on juveniles. Based on anecdotes and his clinical experience with children, Wertham argued that images of graphic violence and sexual debauchery in comic books were linked to juvenile delinquency. Though he was far from the only critic of comic book content, his criticisms reached the masses and gained further notoriety with the publication of his 1954 book, Seduction of the Innocent. Wertham described the comic book content thusly,

The stories have a lot of crime and gunplay and, in addition, alluring advertisements of guns, some of them full-page and in bright colors, with four guns of various sizes and descriptions on a page . . . Here is the repetition of violence and sexiness which no Freud, Krafft-Ebing or Havelock Ellis ever dreamed could be offered to children, and in such profusion . . . I have come to the conclusion that this chronic stimulation, temptation and seduction by comic books, both their content and their alluring advertisements of knives and guns, are contributing factors to many children’s maladjustment. (Wertham, 1954, p. 39)

Wertham’s work was instrumental in shaping public opinion and policies about the dangers of comic books. Concern about the impact of comics reached its apex in 1954 with the hearings of the United States Senate Judiciary Subcommittee on Juvenile Delinquency. Wertham testified before the committee, arguing that comics were a leading cause of juvenile delinquency. Ultimately, the protest of graphic content in comic books by various interest groups contributed to the implementation of the publishers’ self-censorship code, the Comics Code Authority, which essentially designated select books as “safe” for children (Nyberg, 1998). The code remained in place for decades, though it was eventually relaxed and, decades later, phased out by the two most dominant publishers, DC and Marvel.

Wertham’s work, however influential on the comics industry, was ultimately panned by academics. Although scholar Bart Beaty characterized Wertham’s position as more nuanced, if not progressive, than the mythology that followed him, Wertham was broadly dismissed as a moral reactionary (Beaty, 2005; Phillips & Strobl, 2013). The most damning criticism of Wertham’s work came decades later, from Carol Tilley’s examination of Wertham’s files. She concluded that in Seduction of the Innocent,

Wertham manipulated, overstated, compromised, and fabricated evidence—especially that evidence he attributed to personal clinical research with young people—for rhetorical gain. (Tilley, 2012, p. 386)

Tilley linked Wertham’s approach to that of the Frankfurt theorists who deemed popular culture a social threat and contended that Wertham was most interested in “cultural correction” rather than scientific inquiry (Tilley, 2012, p. 404).

Over the decades, concern about the moral impact of media remained, while theoretical and methodological approaches to media effects studies continued to evolve (Rich, Bickham, & Wartella, 2015). In what many consider a sophisticated development, theorists began to view the audience as more active and multifaceted than the mass society perspective allowed (Kitzinger, 2004). One perspective, based on a “uses and gratifications” model, assumes that rather than a passive audience being injected with values and information, a more active audience selects and “uses” media in response to its needs and desires. Studies of uses and gratifications take into account how the choice of media is influenced by one’s psychological and social circumstances. In this context, media provides a variety of functions for consumers, who may engage with it to gather information, reduce boredom, seek enjoyment, or facilitate communication (Katz, Blumler, & Gurevitch, 1973; Rubin, 2002). This approach differs from earlier views in that it privileges the perspective and agency of the audience.

Another approach, cultivation theory, gained momentum among researchers in the 1970s and has been of particular interest to criminologists. It focuses on how television viewing shapes viewers’ attitudes toward social reality. The theory was first introduced by communications scholar George Gerbner, who argued for the importance of understanding the messages that long-term viewers absorb. Rather than examine the effect of specific content within any given program, cultivation theory,

looks at exposure to massive flows of messages over long periods of time. The cultivation process takes place in the interaction of the viewer with the message; neither the message nor the viewer are all-powerful. (Gerbner, Gross, Morgan, Signorielli, & Shanahan, 2002, p. 48)

In other words, he argued, television viewers are, over time, exposed to messages about the way the world works. As Gerbner and colleagues stated, “continued exposure to its messages is likely to reiterate, confirm, and nourish—that is, cultivate—its own values and perspectives” (p. 49).

One of the most well-known consequences of heavy media exposure is what Gerbner termed the “mean world” syndrome. He coined the term based on studies finding that long-term exposure to media violence among heavy television viewers “tends to cultivate the image of a relatively mean and dangerous world” (p. 52). Inherent in Gerbner’s view was that media representations are separate and distinct entities from “real life.” That is, it is the distorted representations of crime and violence that cultivate the notion that the world is a dangerous place. In this context, Gerbner found that heavy television viewers are more likely to be fearful of crime and to overestimate their chances of being a victim of violence (Gerbner, 1994).

Though there is evidence in support of cultivation theory, the strength of the relationship between media exposure and fear of crime is inconclusive. This is in part due to the recognition that audience members are not homogeneous. Instead, researchers have found that there are many factors that impact the cultivating process. These include, but are not limited to, “class, race, gender, place of residence, and actual experience of crime” (Reiner, 2002; Sparks, 1992). Or, as Ted Chiricos and colleagues remarked in their study of crime news and fear of crime, “The issue is not whether media accounts of crime increase fear, but which audiences, with which experiences and interests, construct which meanings from the messages received” (Chiricos, Eschholz, & Gertz, 1997, p. 354).

Other researchers found that exposure to media violence produces a desensitizing effect: as viewers consume more violent media, they become less empathetic and more psychologically and emotionally numb when confronted with actual violence (Bartholow, Bushman, & Sestir, 2006; Carnagey, Anderson, & Bushman, 2007; Cline, Croft, & Courrier, 1973; Fanti, Vanman, Henrich, & Avraamides, 2009; Krahé et al., 2011). Other scholars, such as Henry Giroux, however, point out that our contemporary culture is awash in violence and “everyone is infected.” From this perspective, the focus is not on certain individuals whose exposure to violent media leads to a desensitization to real-life violence, but rather on the notion that violence so permeates society that it has become normalized in ways that are divorced from ethical and moral implications. Giroux wrote,

While it would be wrong to suggest that the violence that saturates popular culture directly causes violence in the larger society, it is arguable that such violence serves not only to produce an insensitivity to real life violence but also functions to normalize violence as both a source of pleasure and as a practice for addressing social issues. When young people and others begin to believe that a world of extreme violence, vengeance, lawlessness, and revenge is the only world they inhabit, the culture and practice of real-life violence is more difficult to scrutinize, resist, and transform . . . (Giroux, 2015)

For Giroux, the danger is that the normalization of violence has become a threat to democracy itself. In our culture of mass consumption shaped by neoliberal logics, depoliticized narratives of violence have become desired forms of entertainment and are presented in ways that express tolerance for some forms of violence while delegitimizing other forms of violence. In their book, Disposable Futures, Brad Evans and Henry Giroux argued that as the spectacle of violence perpetuates fear of inevitable catastrophe, it reinforces expansion of police powers, increased militarization and other forms of social control, and ultimately renders marginalized members of the populace disposable (Evans & Giroux, 2015, p. 81).

Criminology and the “Media/Crime Nexus”

Most criminologists and sociologists who focus on media and crime are generally either dismissive of the notion that media violence directly causes violence or conclude that findings are more complex than traditional media effects models allow, preferring to focus attention on the impact of media violence on society rather than individual behavior (Carrabine, 2008; Ferrell, Hayward, & Young, 2015; Jewkes, 2015; Kitzinger, 2004; Marsh & Melville, 2014; Rafter, 2006; Sternheimer, 2003, 2013; Surette, 2011). Sociologist Karen Sternheimer forcefully declared that “media culture is not the root cause of American social problems, not the Big Bad Wolf, as our ongoing public discussion would suggest” (Sternheimer, 2003, p. 3). Sternheimer rejected the idea that media causes violence and argued that a false connection has been forged between media, popular culture, and violence. Like others critical of a singular focus on media, Sternheimer posited that overemphasis on the perceived dangers of media violence serves as a red herring that directs attention away from the actual causes of violence rooted in factors such as poverty, family violence, abuse, and economic inequalities (Sternheimer, 2003, 2013). Similarly, in her Media and Crime text, Yvonne Jewkes stated that U.K. scholars tend to reject findings of a causal link because the studies are too reductionist; criminal behavior cannot be reduced to a single causal factor such as media consumption. Echoing Gauntlett’s critiques of media effects research, Jewkes stated that simplistic causal assumptions ignore “the wider context of a lifetime of meaning-making” (Jewkes, 2015, p. 17).

Although they most often reject a “violent media cause violence” relationship, criminologists do not dismiss the notion of media as influential. To the contrary, over the decades much criminological interest has focused on the construction of social problems, the ideological implications of media, and media’s potential impact on crime policies and social control. Eamonn Carrabine noted that the focus of concern is not whether media directly causes violence but on “how the media promote damaging stereotypes of social groups, especially the young, to uphold the status quo” (Carrabine, 2008, p. 34). Theoretically, these foci have been traced to the influence of cultural and Marxist studies. For example, criminologists frequently focus on how social anxieties and class inequalities impact our understandings of the relationship between media violence and attitudes, values, and behaviors. Influential works in the 1970s, such as Policing the Crisis: Mugging, the State, and Law and Order by Stuart Hall et al. and Stanley Cohen’s Folk Devils and Moral Panics, shifted criminological critique toward understanding media as a hegemonic force that reinforces state power and social control (Brown, 2011; Carrabine, 2008; Cohen, 2005; Garland, 2008; Hall et al., 2013/1973). Since that time, moral panic has become a common framework applied to public discourse around a variety of social issues, including road rage, child abuse, popular music, sex panics, and drug abuse, among others.

Into the 21st century, advances in technology, including increased use of social media, shifted the ways that criminologists approach the study of media effects. Scholar Sheila Brown traced how research in criminology evolved from a focus on “media and crime” to what she calls the “media/crime nexus” that recognizes that “media experience is real experience” (Brown, 2011, p. 413). In other words, many criminologists began to reject as fallacy what social media theorist Nathan Jurgenson deemed “digital dualism,” or the notion that we have an “online” existence that is separate and distinct from our “off-line” existence. Instead, we exist simultaneously both online and offline, an

augmented reality that exists at the intersection of materiality and information, physicality and digitality, bodies and technology, atoms and bits, the off and the online. It is wrong to say “IRL” [in real life] to mean offline: Facebook is real life. (Jurgenson, 2012)

The changing media landscape has been of particular interest to cultural criminologists. Michelle Brown recognized the omnipresence of media as significant in terms of methodological preferences and urged a move away from a focus on causality and predictability toward a more fluid approach that embraces the complex, contemporary media-saturated social reality characterized by uncertainty and instability (Brown, 2007).

Cultural criminologists have indeed rejected direct, causal relationships in favor of the recognition that social meanings of aggression and violence are constantly in transition, flowing through the media landscape, where “bits of information reverberate and bend back on themselves, creating a fluid porosity of meaning that defines late-modern life, and the nature of crime and media within it.” In other words, there is no linear relationship between crime and its representation. Instead, crime is viewed as inseparable from the culture in which our everyday lives are constantly re-created in loops and spirals that “amplify, distort, and define the experience of crime and criminality itself” (Ferrell, Hayward, & Young, 2015, pp. 154–155). As an example of this shift in understanding media effects, criminologist Majid Yar proposed that we consider how the transition from being primarily consumers to primarily producers of content may serve as a motivating mechanism for criminal behavior. Here, Yar is suggesting that the proliferation of user-generated content via media technologies such as social media (i.e., the desire “to be seen” and to manage self-presentation) has a criminogenic component worthy of criminological inquiry (Yar, 2012). Shifting attention toward the media/crime nexus and away from traditional media effects analyses opens possibilities for a deeper understanding of the ways that media remains an integral part of our everyday lives and inseparable from our understandings of and engagement with crime and violence.

Over the years, from films to comic books to television to video games to social media, concerns over media effects have shifted along with changing technologies. While there seems to be some consensus that exposure to violent media impacts aggression, there is little evidence showing its impact on violent or criminal behavior. Nonetheless, high-profile violent crimes continue to reignite public interest in media effects, particularly with regard to copycat crimes.

At times, academic debate around media effects remains contentious, and one’s academic discipline informs the study and interpretation of media effects. Criminologists and sociologists are generally reluctant to attribute violence and criminal behavior directly to exposure to violent media. They are, however, not dismissive of the impact of media on attitudes, social policies, and social control, as evidenced by the myriad studies on moral panics and other research that addresses the relationship between media, social anxieties, gender, race, and class inequalities. Scholars who study media effects are also sensitive to the historical context of the debates and the ways that moral concerns shape public policies. The self-regulating codes of the film industry and the comic book industry have led scholars to be wary of hyperbole and policy overreach in response to claims of media effects. Future research will continue to explore ways that changing technologies, including increasing use of social media, will impact our understandings and perceptions of crime as well as criminal behavior.

Further Reading

  • American Psychological Association. (2015). Resolution on violent video games. Retrieved from http://www.apa.org/about/policy/violent-video-games.aspx
  • Anderson, J. A., & Grimes, T. (2008). Special issue: Media violence. Introduction. American Behavioral Scientist, 51(8), 1059–1060.
  • Berlatsky, N. (Ed.). (2012). Media violence: Opposing viewpoints. Farmington Hills, MI: Greenhaven.
  • Elson, M., & Ferguson, C. J. (2014). Twenty-five years of research on violence in digital games and aggression. European Psychologist, 19(1), 33–46.
  • Ferguson, C. (Ed.). (2015). Special issue: Video games and youth. Psychology of Popular Media Culture, 4(4).
  • Ferguson, C. J., Olson, C. K., Kutner, L. A., & Warner, D. E. (2014). Violent video games, catharsis seeking, bullying, and delinquency: A multivariate analysis of effects. Crime & Delinquency, 60(5), 764–784.
  • Gentile, D. (2013). Catharsis and media violence: A conceptual analysis. Societies, 3(4), 491–510.
  • Huesmann, L. R. (2007). The impact of electronic media violence: Scientific theory and research. Journal of Adolescent Health, 41(6), S6–S13.
  • Huesmann, L. R., & Taylor, L. D. (2006). The role of media violence in violent behavior. Annual Review of Public Health, 27(1), 393–415.
  • Krahé, B. (Ed.). (2013). Special issue: Understanding media violence effects. Societies, 3(3).
  • Media Violence Commission, International Society for Research on Aggression (ISRA). (2012). Report of the Media Violence Commission. Aggressive Behavior, 38(5), 335–341.
  • Rich, M., & Bickham, D. (Eds.). (2015). Special issue: Methodological advances in the field of media influences on children. Introduction. American Behavioral Scientist, 59(14), 1731–1735.
References

  • American Psychological Association (APA). (2015, August 13). APA review confirms link between playing violent video games and aggression. Retrieved from http://www.apa.org/news/press/releases/2015/08/violent-video-games.aspx
  • Anderson, J. A. (2008). The production of media violence and aggression research: A cultural analysis. American Behavioral Scientist, 51(8), 1260–1279.
  • Bandura, A., Ross, D., & Ross, S. A. (1963). Imitation of film-mediated aggressive models. The Journal of Abnormal and Social Psychology, 66(1), 3–11.
  • Barker, M., & Petley, J. (2001). Ill effects: The media violence debate (2d ed.). London: Routledge.
  • Bartholow, B. D., Bushman, B. J., & Sestir, M. A. (2006). Chronic violent video game exposure and desensitization to violence: Behavioral and event-related brain potential data. Journal of Experimental Social Psychology, 42(4), 532–539.
  • Beaty, B. (2005). Fredric Wertham and the critique of mass culture. Jackson: University Press of Mississippi.
  • Bracci, P. (2010, March 12). The police were sure James Bulger’s ten-year-old killers were simply wicked. But should their parents have been in the dock? Retrieved from http://www.dailymail.co.uk/news/article-1257614/The-police-sure-James-Bulgers-year-old-killers-simply-wicked-But-parents-dock.html
  • Breuer, J., Vogelgesang, J., Quandt, T., & Festl, R. (2015). Violent video games and physical aggression: Evidence for a selection effect among adolescents. Psychology of Popular Media Culture, 4(4), 305–328.
  • Brooks, X. (2002, December 19). Natural born copycats. Retrieved from http://www.theguardian.com/culture/2002/dec/20/artsfeatures1
  • Brown, M. (2007). Beyond the requisites: Alternative starting points in the study of media effects and youth violence. Journal of Criminal Justice and Popular Culture, 14(1), 1–20.
  • Brown, S. (2011). Media/crime/millennium: Where are we now? A reflective review of research and theory directions in the 21st century. Sociology Compass, 5(6), 413–425.
  • Bushman, B. (2016, July 26). Violent video games and real violence: There’s a link but it’s not so simple. Retrieved from http://theconversation.com/violent-video-games-and-real-violence-theres-a-link-but-its-not-so-simple?63038
  • Bushman, B. J., & Anderson, C. A. (2015). Understanding causality in the effects of media violence. American Behavioral Scientist, 59(14), 1807–1821.
  • Bushman, B. J., Jamieson, P. E., Weitz, I., & Romer, D. (2013). Gun violence trends in movies. Pediatrics, 132(6), 1014–1018.
  • Carnagey, N. L., Anderson, C. A., & Bushman, B. J. (2007). The effect of video game violence on physiological desensitization to real-life violence. Journal of Experimental Social Psychology, 43(3), 489–496.
  • Carrabine, E. (2008). Crime, culture and the media. Cambridge, U.K.: Polity.
  • Chiricos, T., Eschholz, S., & Gertz, M. (1997). Crime, news and fear of crime: Toward an identification of audience effects. Social Problems, 44(3), 342–357.
  • Cline, V. B., Croft, R. G., & Courrier, S. (1973). Desensitization of children to television violence. Journal of Personality and Social Psychology, 27(3), 360–365.
  • CNN Wire. (2016, July 24). Officials: 18-year-old suspect in Munich attack was obsessed with mass shootings. Retrieved from http://ktla.com/2016/07/24/18-year-old-suspect-in-munich-shooting-played-violent-video-games-had-mental-illness-officials/
  • Cohen, S. (2005). Folk devils and moral panics (3d ed.). New York: Routledge.
  • Cullen, D. (2009). Columbine. New York: Hachette.
  • Dahl, G., & DellaVigna, S. (2012). Does movie violence increase violent crime? In N. Berlatsky (Ed.), Media violence: Opposing viewpoints (pp. 36–43). Farmington Hills, MI: Greenhaven.
  • DeCamp, W. (2015). Impersonal agencies of communication: Comparing the effects of video games and other risk factors on violence. Psychology of Popular Media Culture, 4(4), 296–304.
  • Elson, M., & Ferguson, C. J. (2013). Does doing media violence research make one aggressive? European Psychologist, 19(1), 68–75.
  • Evans, B., & Giroux, H. (2015). Disposable futures: The seduction of violence in the age of spectacle. San Francisco: City Lights Publishers.
  • Fanti, K. A., Vanman, E., Henrich, C. C., & Avraamides, M. N. (2009). Desensitization to media violence over a short period of time. Aggressive Behavior, 35(2), 179–187.
  • Ferguson, C. J. (2013). Violent video games and the Supreme Court: Lessons for the scientific community in the wake of Brown v. Entertainment Merchants Association. American Psychologist, 68(2), 57–74.
  • Ferguson, C. J. (2014). Does media violence predict societal violence? It depends on what you look at and when. Journal of Communication, 65(1), E1–E22.
  • Ferguson, C. J., & Dyck, D. (2012). Paradigm change in aggression research: The time has come to retire the general aggression model. Aggression and Violent Behavior, 17(3), 220–228.
  • Ferguson, C. J., & Konijn, E. A. (2015). She said/he said: A peaceful debate on video game violence. Psychology of Popular Media Culture, 4(4), 397–411.
  • Ferrell, J., Hayward, K., & Young, J. (2015). Cultural criminology: An invitation. Thousand Oaks, CA: SAGE.
  • Frosch, D., & Johnson, K. (2012, July 20). 12 are killed at showing of Batman movie in Colorado. Retrieved from http://www.nytimes.com/2012/07/21/us/shooting-at-colorado-theater-showing-batman-movie.html
  • Garland, D. (2008). On the concept of moral panic. Crime, Media, Culture, 4(1), 9–30.
  • Gauntlett, D. (2001). The worrying influence of “media effects” studies. In Ill effects: The media violence debate (2d ed.). London: Routledge.
  • Gerbner, G. (1994). TV violence and the art of asking the wrong question. Retrieved from http://www.medialit.org/reading-room/tv-violence-and-art-asking-wrong-question
  • Gerbner, G., Gross, L., Morgan, M., Signorielli, N., & Shanahan, J. (2002). Growing up with television: Cultivation processes. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 43–67). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Giroux, H. (2015, December 25). America’s addiction to violence. Retrieved from http://www.counterpunch.org/2015/12/25/americas-addiction-to-violence-2/
  • Graham, C., & Gallagher, I. (2012, July 20). Gunman who massacred 12 at movie premiere used same drugs that killed Batman star Heath Ledger. Retrieved from http://www.dailymail.co.uk/news/article-2176377/James-Holmes-Colorado-shooting-Gunman-used-drugs-killed-Heath-Ledger.html
  • Gunter, B. (2008). Media violence: Is there a case for causality? American Behavioral Scientist, 51(8), 1061–1122.
  • Hall, S., Critcher, C., Jefferson, T., Clarke, J., & Roberts, B. (2013/1973). Policing the crisis: Mugging, the state and law and order. Hampshire, U.K.: Palgrave.
  • Helfgott, J. B. (2015). Criminal behavior and the copycat effect: Literature review and theoretical framework for empirical investigation. Aggression and Violent Behavior, 22(C), 46–64.
  • Hetsroni, A. (2007). Four decades of violent content on prime-time network programming: A longitudinal meta-analytic review. Journal of Communication, 57(4), 759–784.
  • Jewkes, Y. (2015). Media & crime. London: SAGE.
  • Jowett, G., Jarvie, I., & Fuller, K. (1996). Children and the movies: Media influence and the Payne Fund controversy. Cambridge, U.K.: Cambridge University Press.
  • Jurgenson, N. (2012, June 28). The IRL fetish. Retrieved from http://thenewinquiry.com/essays/the-irl-fetish/
  • Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and gratifications research. The Public Opinion Quarterly, 37(4), 509–523.
  • Kitzinger, J. (2004). Framing abuse: Media influence and public understanding of sexual violence against children. London: Polity.
  • Krahé, B. (2014). Restoring the spirit of fair play in the debate about violent video games. European Psychologist, 19(1), 56–59.
  • Krahé, B., Möller, I., Huesmann, L. R., Kirwil, L., Felber, J., & Berger, A. (2011). Desensitization to media violence: Links with habitual media violence exposure, aggressive cognitions, and aggressive behavior. Journal of Personality and Social Psychology, 100(4), 630–646.
  • Markey, P. M., French, J. E., & Markey, C. N. (2015). Violent movies and severe acts of violence: Sensationalism versus science. Human Communication Research, 41(2), 155–173.
  • Markey, P. M., Markey, C. N., & French, J. E. (2015). Violent video games and real-world violence: Rhetoric versus data. Psychology of Popular Media Culture, 4(4), 277–295.
  • Marsh, I., & Melville, G. (2014). Crime, justice and the media. New York: Routledge.
  • McShane, L. (2012, July 27). Maryland police arrest possible Aurora copycat. Retrieved from http://www.nydailynews.com/news/national/maryland-cops-thwart-aurora-theater-shooting-copycat-discover-gun-stash-included-20-weapons-400-rounds-ammo-article-1.1123265
  • Meyer, J. (2015, September 18). The James Holmes “Joker” rumor. Retrieved from http://www.denverpost.com/2015/09/18/meyer-the-james-holmes-joker-rumor/
  • Murray, J. P. (2008). Media violence: The effects are both real and strong. American Behavioral Scientist, 51(8), 1212–1230.
  • Nyberg, A. K. (1998). Seal of approval: The history of the comics code. Jackson: University Press of Mississippi.
  • PBS. (n.d.). Culture shock: Flashpoints: Theater, film, and video: Stanley Kubrick’s A Clockwork Orange. Retrieved from http://www.pbs.org/wgbh/cultureshock/flashpoints/theater/clockworkorange.html
  • Phillips, N. D., & Strobl, S. (2013). Comic book crime: Truth, justice, and the American way. New York: New York University Press.
  • Rafter, N. (2006). Shots in the mirror: Crime films and society (2d ed.). New York: Oxford University Press.
  • Reiner, R. (2002). Media made criminality: The representation of crime in the mass media. In R. Reiner, M. Maguire, & R. Morgan (Eds.), The Oxford handbook of criminology (pp. 302–340). Oxford: Oxford University Press.
  • Reuters. (2016, July 24). Munich gunman, a fan of violent video games, rampage killers, had planned attack for a year. Retrieved from http://www.cnbc.com/2016/07/24/munich-gunman-a-fan-of-violent-video-games-rampage-killers-had-planned-attack-for-a-year.html
  • Rich, M., Bickham, D. S., & Wartella, E. (2015). Methodological advances in the field of media influences on children. American Behavioral Scientist, 59(14), 1731–1735.
  • Robinson, M. B. (2011). Media coverage of crime and criminal justice. Durham, NC: Carolina Academic Press.
  • Rubin, A. (2002). The uses-and-gratifications perspective of media effects. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 525–548). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Savage, J. (2008). The role of exposure to media violence in the etiology of violent behavior: A criminologist weighs in. American Behavioral Scientist, 51(8), 1123–1136.
  • Savage, J., & Yancey, C. (2008). The effects of media violence exposure on criminal aggression: A meta-analysis. Criminal Justice and Behavior, 35(6), 772–791.
  • Sparks, R. (1992). Television and the drama of crime: Moral tales and the place of crime in public life. Buckingham, U.K.: Open University Press.
  • Sparks, G., & Sparks, C. (2002). Effects of media violence. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (2d ed., pp. 269–286). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Sternheimer, K. (2003). It’s not the media: The truth about pop culture’s influence on children. Boulder, CO: Westview.
  • Sternheimer, K. (2013). Connecting social problems and popular culture: Why media is not the answer (2d ed.). Boulder, CO: Westview.
  • Strasburger, V. C., & Donnerstein, E. (2014). The new media of violent video games: Yet same old media problems? Clinical Pediatrics, 53(8), 721–725.
  • Surette, R. (2002). Self-reported copycat crime among a population of serious and violent juvenile offenders. Crime & Delinquency, 48(1), 46–69.
  • Surette, R. (2011). Media, crime, and criminal justice: Images, realities and policies (4th ed.). Belmont, CA: Wadsworth.
  • Surette, R. (2016). Measuring copycat crime. Crime, Media, Culture, 12(1), 37–64.
  • Tilley, C. L. (2012). Seducing the innocent: Fredric Wertham and the falsifications that helped condemn comics. Information & Culture, 47(4), 383–413.
  • Warburton, W. (2014). Apples, oranges, and the burden of proof—putting media violence findings into context. European Psychologist, 19(1), 60–67.
  • Wertham, F. (1954). Seduction of the innocent. New York: Rinehart.
  • Yamato, J. (2016, June 14). Gaming industry mourns Orlando victims at E3—and sees no link between video games and gun violence. Retrieved from http://www.thedailybeast.com/articles/2016/06/14/gamers-mourn-orlando-victims-at-e3-and-see-no-link-between-gaming-and-gun-violence.html
  • Yar, M. (2012). Crime, media and the will-to-representation: Reconsidering relationships in the new media age. Crime, Media, Culture, 8(3), 245–260.

Related Articles

  • Intimate Partner Violence
  • The Extent and Nature of Gang Crime
  • Intersecting Dimensions of Violence, Abuse, and Victimization

Printed from Oxford Research Encyclopedias, Criminology and Criminal Justice. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 28 August 2024

Violence in the media: Psychologists study potential harmful effects

Early research on the effects of viewing violence on television—especially among children—found a desensitizing effect and the potential for aggression. Is the same true for those who play violent video games?

Television and video violence

Virtually since the dawn of television, parents, teachers, legislators, and mental health professionals have wanted to understand the impact of television programs, particularly on children. Of special concern has been the portrayal of violence, particularly given psychologist Albert Bandura’s work in the 1970s on social learning and the tendency of children to imitate what they see.

As a result of 15 years of “consistently disturbing” findings about the violent content of children’s programs, the Surgeon General’s Scientific Advisory Committee on Television and Social Behavior was formed in 1969 to assess the impact of violence on the attitudes, values, and behavior of viewers. The resulting report and a follow-up report in 1982 by the National Institute of Mental Health identified these major effects of seeing violence on television:

  • Children may become less sensitive to the pain and suffering of others.
  • Children may be more fearful of the world around them.
  • Children may be more likely to behave in aggressive or harmful ways toward others.

Research by psychologists L. Rowell Huesmann, Leonard Eron, and others starting in the 1980s found that children who watched many hours of violence on television when they were in elementary school tended to show higher levels of aggressive behavior when they became teenagers. By observing these participants into adulthood, Huesmann and Eron found that the ones who’d watched a lot of TV violence when they were 8 years old were more likely to be arrested and prosecuted for criminal acts as adults.

Interestingly, being aggressive as a child did not predict watching more violent TV as a teenager, suggesting that TV watching could be a cause rather than a consequence of aggressive behavior. However, later research by psychologists Douglas Gentile and Brad Bushman, among others, suggested that exposure to media violence is just one of several factors that can contribute to aggressive behavior.

Other research has found that exposure to media violence can desensitize people to violence in the real world and that, for some people, watching violence in the media becomes enjoyable and does not result in the anxious arousal that would be expected from seeing such imagery.

Video game violence

The advent of video games raised new questions about the potential impact of media violence, since the video game player is an active participant rather than merely a viewer. Ninety-seven percent of adolescents ages 12–17 play video games—on a computer, on consoles such as the Wii, PlayStation, and Xbox, or on portable devices such as Game Boys, smartphones, and tablets. A Pew Research Center survey in 2008 found that half of all teens reported playing a video game “yesterday,” and those who played every day typically did so for an hour or more.

Many of the most popular video games, such as “Call of Duty” and “Grand Theft Auto,” are violent; however, as video game technology is relatively new, there are fewer empirical studies of video game violence than other forms of media violence. Still, several meta-analytic reviews have reported negative effects of exposure to violence in video games.

A 2010 review by psychologist Craig A. Anderson and others concluded that “the evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior.” Anderson’s earlier research showed that playing violent video games can increase a person’s aggressive thoughts, feelings, and behavior both in laboratory settings and in daily life. “One major conclusion from this and other research on violent entertainment media is that content matters,” says Anderson.

Other researchers, including psychologist Christopher J. Ferguson, have challenged the position that video game violence harms children. While his own 2009 meta-analytic review reported results similar to Anderson’s, Ferguson contends that laboratory results have not translated into real-world, meaningful effects. He also claims that much of the research into video game violence has failed to control for other variables, such as mental health and family life, that may have affected the results. His work has found that children who are already at risk may be more likely to choose to play violent video games. According to Ferguson, these other risk factors, as opposed to the games themselves, cause aggressive and violent behavior.

The APA launched an analysis in 2013 of peer-reviewed research on the impact of media violence and is reviewing its policy statements in this area.

Anderson, C. A., Ihori, N., Bushman, B. J., Rothstein, H. R., Shibuya, A., Swing, E. L., Sakamoto, A., & Saleem, M. (2010). Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: A meta-analytic review. Psychological Bulletin, Vol. 136, No. 2.

Anderson, C. A., Carnagey, N. L., & Eubanks, J. (2003). Exposure to violent media: The effects of songs with violent lyrics on aggressive thoughts and feelings. Journal of Personality and Social Psychology, Vol. 84, No. 5.

Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life. Journal of Personality and Social Psychology, Vol. 78, No. 4.

Ferguson, C. J. (2011). Video games and youth violence: A prospective analysis in adolescents. Journal of Youth and Adolescence, Vol. 40, No. 4.

Gentile, D. A., & Bushman, B. J. (2012). Reassessing media violence effects using a risk and resilience approach to understanding aggression. Psychology of Popular Media Culture, Vol. 1, No. 3.

Huesmann, L. R., & Eron, L. D. (1986). Television and the aggressive child: A cross-national comparison. Hillsdale, NJ: Erlbaum.

Huesmann, L. R., Moise-Titus, J., Podolski, C. L., & Eron, L. D. (2003). Longitudinal relations between children’s exposure to TV violence and their aggressive and violent behavior in young adulthood: 1977–1992. Developmental Psychology, Vol. 39, No. 2, 201–221.

Huston, A. C., Donnerstein, E., Fairchild, H., Feshbach, N. D., Katz, P. A., Murray, J. P., Rubinstein, E. A., Wilcox, B., & Zuckerman, D. (1992). Big world, small screen: The role of television in American society. Lincoln, NE: University of Nebraska Press.

Krahé, B., Möller, I., Huesmann, L. R., Kirwil, L., Felber, J., & Berger, A. (2011). Desensitization to media violence: Links with habitual media violence exposure, aggressive cognitions, and aggressive behavior. Journal of Personality and Social Psychology, Vol. 100, No. 4.

Murray, J. P. (1973). Television and violence: Implications of the Surgeon General’s research program. American Psychologist, Vol. 28, 472–478.

National Institute of Mental Health (1982). Television and behavior: Ten years of scientific progress and implications for the eighties, Vol. 1. Rockville, MD: U.S. Department of Health and Human Services.


The effects of violent media content on aggression

Affiliations: 1 Department of Psychology, University of Copenhagen, Øster Farimagsgade 2A, 1353 Copenhagen C, Denmark; 2 Department of Psychology, Iowa State University, 112 Lagomarcino Hall, Ames, IA 50011, USA. PMID: 29279205. DOI: 10.1016/j.copsyc.2017.04.003.

Decades of research have shown that violent media exposure is one risk factor for aggression. This review presents findings from recent cross-sectional, experimental, and longitudinal studies, demonstrating the triangulation of evidence within the field. Importantly, this review also illustrates how media violence research has started to move away from merely establishing the existence of media effects and instead has begun to investigate the mechanisms underlying these effects and their limitations. Such studies range from investigations into cross-cultural differences to neurophysiological effects, and the interplay between media, individual, and contextual factors. Although violent media effects have been well-established for some time, they are not monolithic, and recent findings continue to shed light on the nuances and complexities of such effects.



The Influence of Media Violence on Intimate Partner Violence Perpetration: An Examination of Inmates’ Domestic Violence Convictions and Self-Reported Perpetration

Samantha M. Gavin

1 Department of Sociology and Criminology, St. Bonaventure University, 3261 West State Street, Plassmann Room A1, St. Bonaventure, NY 14778 USA

Nathan E. Kruis

2 Department of Criminal Justice, Penn State Altoona, 3000 Ivyside Park, Cypress Building, Room 101E, Altoona, PA 16601 USA

Research suggests that the representation of violence against women in the media has resulted in an increased acceptance of attitudes favoring domestic violence. While prior work has investigated the relationship between violent media exposure and violent crime, there has been little effort to empirically examine the relationship between specific forms of violent media exposure and the perpetration of intimate partner violence. Using data collected from a sample of 148 inmates, the current study seeks to help fill these gaps in the literature by examining the relationship between exposure to various forms of pleasurable violent media and the perpetration of intimate partner violence (i.e., conviction and self-reported). At the bivariate level, results indicate a significant positive relationship between exposure to pleasurable television violence and self-reported intimate partner abuse. However, this relationship is reduced to insignificant levels in multivariable modeling. Endorsement of domestic violence beliefs and victimization experience were found to be the strongest predictors of intimate partner violence perpetration. Potential policy implications based on findings are discussed within.

Introduction

In the United States, more than 12 million men and women become victims of domestic violence each year [ 76 ]. In fact, every minute, roughly 20 Americans are victimized at the hands of an intimate partner [ 3 ]. Although both men and women are abused by an intimate partner, women have a higher likelihood of such abuse, with those ages 18–34 years being at the highest risk of victimization. Moreover, it is estimated that approximately 1 in 4 women and 1 in 7 men experience violence at the hands of an intimate partner at some point in their lifetime [ 77 ].

According to the United Nations Commission on the Status of Women [ 78 ], the representation of violence against women in the media has greatly increased over the years. Recent research suggests that women are commonly depicted as victims and sex objects in the media [ 12 , 69 ]. By portraying women in this way, media such as pornography, pornographic movies, and music videos have been found to increase attitudes supportive of violence, specifically sexual violence, against women. Notably, research suggests that the media’s portrayal of women as sex objects and victims tends to foster societal attitudes that are accepting of domestic violence, particularly violence against women [ 40 , 43 , 46 , 69 ].

Understanding the influence of media violence on an individual’s perceptions of domestic violence could clarify the factors that contribute to domestic violence tendencies, as well as how to lessen them. Beyond the general influence of media violence on domestic violence perceptions, specific forms of media can be compared in terms of how strongly each influences those perceptions. Comparing how exposure to media violence and exposure to media aggression each shape domestic violence perceptions allows for an overall picture of how violent media influence domestic violence perpetration. Accordingly, the present study seeks to provide an empirical assessment of the relationship between violent media exposure and the perpetration of intimate partner abuse.

Literature Review

Although it is commonly believed that exposure to media violence 1 causes an individual to become violent, research has cast doubt on this belief, finding that violent media does not directly influence violent behavior at a strong, statistically significant level [ 2 , 4 , 21 , 22 , 65 , 85 ]. In relation to media aggression 2 and domestic violence perceptions, however, research has demonstrated a relationship between the two variables [ 11 , 12 , 23 , 35 , 39 , 41 , 47 ], such that increased exposure to media aggression, for example, video games and movies depicting aggression toward women, leads individuals to become more accepting of aggression toward women.

Domestic Violence

According to the United States Department of Justice [ 75 ], domestic violence is defined as “a pattern of abusive behavior in any relationship that is used by one partner to gain or maintain power and control over another intimate partner” (para. 1). The types of abuse that can occur between intimate partners include emotional/psychological, verbal, physical, sexual, and financial abuse [ 42 , 84 ], as well as digital abuse.

In the United States alone, domestic violence hotlines receive approximately 20,000 calls per day [ 51 ], and at least five million incidents occur each year [ 34 ]. During the COVID-19 pandemic, the likelihood of domestic violence incidents increased while victims’ ability to call and report decreased [ 18 ], as individuals were locked down at home, laid off, or working from home. Of the 1 in 4 women and 1 in 7 men who experience domestic violence at the hands of an intimate partner, 1 in 3 and 1 in 4, respectively, have experienced physical abuse [ 3 ], with 1 in 7 women and 1 in 25 men sustaining injuries from the abuse [ 51 ]. In addition, 1 in 10 women have been raped by an intimate partner, while the true extent of male rape victimization is relatively unknown [ 51 ]. Even though domestic violence crimes make up approximately 15% of all reported violent crimes [ 77 ], almost half go unreported [ 57 ] for various reasons (e.g., concerns about privacy, a desire to protect the offender, fear of reprisal [ 19 ], and the victim’s relationship to the perpetrator [ 20 ]).

There are several risk factors that increase an individual’s likelihood of perpetrating domestic violence. Individuals who witnessed domestic violence between their parents [ 1 , 17 , 44 , 49 , 61 , 73 ], or were abused as children themselves [ 32 , 44 , 68 , 71 , 79 , 81 ], are more likely to perpetrate domestic violence than individuals who did not witness or experience such abuse. Research has found men who witnessed abuse between their parents had higher risk ratios for committing intimate partner violence themselves [ 61 ] and were more likely to engage in such violence [ 49 ], than men who did not witness such violence as children. Research has also shown that male adolescents who witnessed mother-to-father violence were more likely to engage in dating violence themselves [ 73 ]. Similarly, scholars have found women who witnessed intimate partner violence between their parents were over 1.5 times more likely to engage in such violence themselves [ 49 ], and adolescent girls were more likely to engage in dating violence when they witnessed violence between their parents [ 73 ]. Child abuse victims were more likely to perpetrate intimate partner violence as they aged, with 23-year-olds demonstrating a significant relationship compared to 21-year-olds [ 44 ], and males who identified as child abuse victims were found to be four times more likely to engage in such violence than males who had no history of such abuse [ 49 ]. Overall, both males and females who experienced child-family violence 3 were more likely to engage in both reciprocal and nonreciprocal intimate partner violence [ 49 ].

Research has also found that being diagnosed with conduct disorder as a child or antisocial personality disorder as an adult increases the likelihood of domestic violence perpetration [ 7 , 8 , 17 , 31 , 45 , 81 ], with antisocial personality disorder mediating the link between child abuse and later intimate partner violence perpetration [ 81 ]. Additionally, individuals who demonstrate antisocial characteristics during adolescence are at an elevated risk of engaging in domestic violence as adults [ 45 ]. Another key factor influencing domestic violence perpetration is holding hostile attitudes and beliefs [ 5 , 37 , 48 , 49 , 70 ], with such attitudes being more predictive of intimate partner abuse than conduct problems [ 8 ]. Both men and women who approve of intimate partner violence are more likely to engage in or reciprocate such violence compared to those without such perceptions [ 49 ].

Media Violence and Crime

Media Violence and Behavior

It has long been speculated that media violence is directly related to violent behavior and the perpetration of violent crime, such as intimate partner abuse [ 6 , 14 , 33 , 50 ]. However, research has found only weak evidence of a correlation between exposure to media violence and crime, with most studies in this area reporting Pearson’s r correlations below 0.4 [ 2 , 21 , 22 , 64 , 65 , 85 ]. In fact, Savage [ 64 ] determined that exposure to violent activities through the media does not have a statistically significant relationship with crime perpetration. Likewise, Ferguson and colleagues’ [ 21 , 22 ] work supported these findings, indicating that “exposure to television [violence] and video game violence were not significant predictors of violent crime” [ 21 ] (p. 396).

More recently, Savage and Yancey [ 65 ] conducted a meta-analysis of thirty-two studies that tested the relationship between media violence (i.e., television or film) and criminal aggression. Lester (1989), Kruttschnitt, Heath, and Ward (1986), Lagerspetz and Viemerö (1986), Phillips (1983), Berkowitz and Macaulay (1971), and Steuer, Applefield, and Smith (1971) were among the evaluated studies. Collectively, Savage and Yancey [ 65 ] concluded that a relationship between violent media exposure and criminal aggression had not been established in the existing scholarly literature, although there was evidence of a slight positive effect of media violence on criminal aggression for males. The authors also noted several limitations among the evaluated studies that question the generalizability of the findings. As such, more work needs to be done in this area before firm conclusions can be drawn about the relationship between violent media exposure and violent behaviors.
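For readers unfamiliar with effect-size conventions, the reason correlations below 0.4 are read as weak is that the squared correlation gives the share of variance in one variable statistically accounted for by the other. A minimal Python sketch (the numbers are made up for illustration, not drawn from any study cited here):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# An r of 0.4 (the upper bound reported in most of these studies) means the
# two variables share only r**2 = 16% of their variance; an r of 0.2 shares 4%.
shared_variance = 0.4 ** 2  # 0.16
```

Whether a given shared-variance figure is practically meaningful is, of course, part of the ongoing debate these studies describe.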

Media Aggression and Violence Against Women

Although research has demonstrated a weak or absent correlation between media violence and violent behavior, it has found a moderate positive correlation between exposure to media aggression and domestic violence perceptions. Such research has found significant relationships between exposure to media aggression and a variety of delinquent perceptions, ranging from views on rape [ 47 , 67 ] to views on domestic violence [ 11 , 12 , 39 ]: views that support and accept the rape of women and abusive tendencies toward an intimate partner.

For instance, Malamuth and Check [ 47 ] examined how exposure to movies containing high levels of violence and sexual content, especially misogynistic content, influenced viewers’ perceptions. Individuals who watched such content were more likely to hold rape-supportive attitudes than individuals who were not exposed to such movies. Simpson Beck and colleagues [ 67 ] found that rape-supportive attitudes were more common among individuals who played video games that sexually objectified and degraded women. Such individuals were more likely to accept the beliefs that rape is acceptable behavior and that it is the woman’s fault if she is raped, compared to individuals who did not play such video games.

Relatedly, Cundiff [ 12 ] classified the songs on the Billboard Hot 100 chart between 2000 and 2010 into categories such as rape/sexual assault, demeaning language, physical violence, and sexual conquest, and found that the objectification and control of women were common themes throughout these songs. Surveying individuals about their exposure to such music revealed a positive correlation between exposure to suggestive music and misogynous thinking [ 12 ]. Further, Fischer and Greitemeyer [ 23 ] found that individuals who listened to more aggressive music were more likely to hold negative views of women and to act more aggressively toward them. Likewise, Kerig [ 39 ] and Coyne and colleagues [ 11 ] found that individuals who are exposed to higher levels of media aggression are more likely to perpetrate domestic violence offenses. This suggests that increased exposure to media aggression influences an individual’s perceptions of domestic violence and could, subsequently, influence the perpetration of domestic violence.

Cultivation Theory

A theoretical explanation for a relationship between violent media exposure and the perpetration of violent crime can be found in Cultivation Theory. Cultivation Theory assumes that “when people are exposed to media content or other socialization agents, they gradually come to cultivate or adopt beliefs about the world that coincide with the images they have been viewing or messages they have been hearing” [ 28 ] (p. 22). Essentially, this cultivation manifests in individuals mistaking their “world reality” for the “media reality,” thus increasing the likelihood of violence [ 26 ] (p. 350). Individuals who are exposed to violent media are more likely to perceive their reality as filled with the same level of violence, increasing the likelihood that they will act violently themselves. By identifying their own reality with the “media reality,” individuals create their own social constructs and begin to believe that the violence demonstrated in the media is acceptable in life as well.

This cultivation and social construction based on media is demonstrated by Kahlor and Eastin’s [ 38 ] examination of the influence of television shows on rape myth acceptance. Individuals who watched soap operas demonstrated rape myth acceptance and an “overestimation of false rape accusations,” while individuals who watched crime shows were less likely to demonstrate rape myth acceptance [ 38 ] (p. 215). This demonstrates how the type of television show an individual watches can influence how and what individuals learn from such viewing.

In relation to domestic violence perceptions, individuals who are exposed to violence in intimate relationships, or to sexual aggression, whether through the media or in real life, are more likely to support or accept such actions over time [ 12 , 28 ]. A longitudinal study conducted by Williams [ 82 ] examined cultivation effects on individuals who play video games. Individuals who played video games at higher rates began to fear dangers they experienced through the games, demonstrating how individuals adopt beliefs based on their media exposure. Therefore, according to Cultivation Theory, individuals who are exposed to higher levels of violent media are likely to learn from the media and act based on this learning [ 12 , 28 , 82 ]. In relation to domestic violence, it is then reasonable to hypothesize that individuals who are exposed to higher levels of media violence are more likely to become supportive or accepting of domestic violence.

Limitations of Previous Work

While prior research has explored the relationship between exposure to media aggression and domestic violence perceptions [ 11 , 12 , 23 , 39 , 47 , 67 ], to date, we are unaware of research that has focused specifically on the relationship between one’s level of exposure to media violence and domestic violence perceptions. As a result, the relationship between violent media exposure and domestic violence has yet to be fully examined. Further, research focusing specifically on media aggression, media violence, and violence perpetration has predominantly focused on specific types of media (i.e., video games, movies, songs), often with the media platform and materials provided to study participants by the researchers. To date, little research has investigated multiple forms of self-selected exposure to violent media and criminal perpetration. Moreover, previous research has failed to examine the effects of the pleasure gained from such exposure, as we speculate that individuals will be less likely to consume media they find unpleasurable. Subsequently, the effects of media exposure are largely dependent on one’s disposition toward the content – which, admittedly, can over time be shaped by the content itself. Thus, we suggest that prior tests focusing exclusively on simulated exposure without consideration of pleasure have been incomplete.

Moreover, while there are scales that measure domestic violence perceptions (e.g., The Perceptions of and Attitudes Toward Domestic Violence Questionnaire – Revised (PADV-R), The Definitions of Domestic Violence Scale, The Attitudes Toward the Use of Interpersonal Violence – Revised Scale, and various others compiled by Flood [ 25 ]), they are very specific in nature, making it difficult to use them outside the specific contexts for which they were created. In fact, these scales often fail to examine the actual perceptions an individual holds toward domestic violence, and when they do, they tend to examine domestic violence perpetrated by men and not by women. Thus, the current study sought to help fill some of these gaps in the literature.

Current Focus

There were three overarching goals driving the current project:

  • First, we sought to create a psychometrically sound scale capable of measuring intrinsic endorsement of domestic violence beliefs.
  • Second, we were interested in assessing the relationship between intrinsic endorsement of domestic violence beliefs and domestic violence perpetration.
  • Third, we wanted to explore the relationship between various types of violent media exposure (i.e., video game, movie, television) and domestic violence perpetration.

Data and Method

The data used in this study came from a sample of incarcerated offenders in two jails in New York and a prison in West Virginia. A sample of 148 convicted offenders 4 was surveyed between April 2018 and September 2018 using face-to-face convenience sampling. At the prison, a student intern asked inmates with whom she came into contact whether they would be willing to take the survey; these surveys were administered individually. At the first jail, all inmates participating in educational classes were asked by the researcher to take the survey, and these surveys were administered in a group setting. A sign-up sheet was also placed in each pod for inmates to sign up for participation; each individual on the list was brought to a room occupied only by the researcher, and surveys were administered individually. At the second jail, correctional officers made an announcement in one of the pods asking those interested in participating to let them know; these inmates were individually brought to a room occupied by the researcher, with a plexiglass wall between them. The surveys were administered on paper, with a researcher present to answer any questions participants had throughout the process. Because the study involved a vulnerable population, confidentiality was key: no correctional staff were allowed in the room while surveys were taken, and informed consent documents were kept separate from the surveys. Respondents were informed that participation was completely voluntary and that their information would not be shared with law enforcement or anyone within the jail.

Dependent Variables

Domestic Violence Perpetration

Two measures were used to assess domestic violence perpetration. First, participants were asked if they had been convicted of a domestic violence offense. While this is a good indication of domestic violence perpetration, it is not the “best” measure, as many persons who commit domestic violence are never convicted of the crime. As such, we employed a second measure of domestic violence perpetration. Specifically, participants were also asked if they had ever abused an intimate partner. Response categories were a dichotomous “yes” (1) or “no” (0).

Independent Variables

Endorsement of Domestic Violence Beliefs

We were interested in assessing the relationship between the intrinsic endorsement of domestic violence beliefs and domestic violence perpetration. Unfortunately, at the time of the study, the research team was not aware of any psychometrically sound measure of intrinsic endorsement of domestic violence beliefs available in the scholarly literature. Thus, we sought to create one. Specifically, we used an eighteen-item self-report scale to capture respondents’ intrinsic support of domestic violence. Some items included, “A wife sometimes deserves to be hit by her husband,” “A husband who makes his wife jealous on purpose deserves to be hit,” and “A wife angry enough to hit her husband must really love him.” Respondents were asked to indicate their level of agreement with the eighteen items on a five-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). Responses were summed to create a scale measure of intrinsic support of domestic violence beliefs, with higher scores indicative of greater support of domestic violence. As indicated in Table 1, these items loaded onto one latent factor in an Exploratory Factor Analysis (EFA) and demonstrated good internal consistency (Cronbach’s alpha = 0.974).

Results from exploratory factor analysis (EFA) for endorsement of domestic violence beliefs

Item (Factor 1 loading)
If a wife does not like her husband's friends, she should stop him from seeing them: .536
When a wife is mad at her husband, it is okay for her to call him names: .666
A wife sometimes deserves to be hit by her husband: .819
A husband angry enough to hit his wife must love her very much: .875
When a husband does not like his wife's family, he should stop her from seeing them: .831
When a wife is mad at her husband, it is okay for her to hit him: .871
A husband who makes his wife jealous on purpose deserves to be hit: .860
A husband sometimes deserves to be hit by his wife: .741
When a husband is mad at his wife, it is okay for him to yell at her: .783
When a husband is mad at his wife, it is okay for him to hit her: .904
When a husband does not like his wife's friends, he should stop her from seeing them: .786
A wife who makes her husband jealous on purpose deserves to be hit: .922
When a husband is mad at his wife, it is okay for him to throw things at her: .932
If a wife does not like her husband's family, she should stop him from seeing them: .877
When a wife is mad at her husband, it is okay for her to throw things at him: .893
When a husband is mad at his wife, it is okay for him to call her names: .869
A wife angry enough to hit her husband must really love him: .879
When a wife is mad at her husband, it is okay for her to yell at him: .756
Eigenvalue: 12.624
Variance (%): 70.136

KMO = .928 (p = .000). The scree plot indicated a clear break at the second factor, suggesting a one-factor solution. Extraction method: principal axis. α = .974
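The scale-construction and internal-consistency steps described above can be illustrated with a short script. The sketch below is illustrative only: it uses synthetic Likert responses driven by a single latent factor (not the study's data) and computes Cronbach's alpha for an 18-item, five-point scale the way the authors describe, summing items into a total scale score.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Synthetic data: one latent factor drives all 18 five-point Likert items
latent = rng.normal(size=(200, 1))
noise = rng.normal(scale=0.5, size=(200, 18))
items = np.clip(np.round(3 + latent + noise), 1, 5)

scale_scores = items.sum(axis=1)  # summed scale, possible range 18-90
alpha = cronbach_alpha(items)     # high alpha expected for a one-factor scale
```

Because the synthetic items share one strong latent factor, the computed alpha is high, mirroring the .974 reported for the 18-item scale.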

Violent media exposure

Prior research assessing the relationship between violent media exposure and crime has found mixed results [ 2 , 11 , 12 , 21 – 23 , 39 , 47 , 64 , 65 , 67 , 85 ]. However, most of this work has employed only one measure of media exposure and has ignored the pleasure that one may receive from violent media, that is, whether viewers enjoy the content. In an attempt to fill these gaps in the literature, we considered three types of violent media exposure: (1) video games, (2) movies, and (3) television. Consistent with recommendations made by Savage and Yancey [ 65 ], our measures include an estimate of both media exposure (e.g., time) and a rating of violence. Specifically, participants were asked to report the number of hours that they spent playing video games, watching movies, and watching television each week. Next, they were asked to indicate the percentage of violence (0–100%) in the games, movies, and television they played and watched. Additionally, participants were asked to report how pleasurable they found the video games, movies, and television that they played and watched (coded 0 = “Not Pleasurable” through 10 = “Very Pleasurable”). For each medium, responses to the three questions were multiplied together to create a scale of pleasurable violent media exposure, with higher scores indicative of greater pleasurable violent media exposure.
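The construction of this exposure score can be sketched in a few lines. Note that this is a literal, hypothetical reading of the multiplication described above, with the percentage treated as a proportion; the article does not specify the exact scaling, and the published descriptives (maximum of 200) suggest the authors' final scale may have been rescaled differently.

```python
def pleasurable_violent_exposure(hours_per_week: float,
                                 percent_violent: float,
                                 pleasure: float) -> float:
    """Multiply the three self-report responses into one exposure score.

    hours_per_week: hours spent with the medium each week
    percent_violent: reported violent content, 0-100 (treated here as a proportion)
    pleasure: 0 ("Not Pleasurable") through 10 ("Very Pleasurable")
    """
    return hours_per_week * (percent_violent / 100.0) * pleasure

# e.g., 10 hours/week of 50%-violent television rated 8/10 on pleasure
tv_score = pleasurable_violent_exposure(10, 50, 8)  # 40.0
```

Under this construction, a score of zero means either no exposure, no violent content, or no pleasure derived from it, which matches the logic of the measure: only violence that is both consumed and enjoyed contributes.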

Control Variables

Four measures were used as control variables in this study: (1) age, (2) sex, (3) race, and (4) domestic violence victimization, given that such victimization may be related to victims’ own perpetration of intimate partner violence. To measure victimization, participants were asked if they had ever been abused by an intimate partner; responses were dichotomous, with 1 = “yes” and 0 = “no.” Age was a continuous variable ranging from 18 to 95 years old. Sex and race were dichotomous variables (i.e., 1 = “male” or “white” and 0 = “female” or “other”).

Analytic Strategy

Data analysis proceeded in three key stages. First, all data were cleaned and coded, and univariate analyses were conducted to assess measures of central tendency and dispersion. Missing data were assessed using Little’s Missing Completely at Random (MCAR) test. The significant MCAR test ( p  < 0.05) indicated that data were not missing completely at random, and as such, it would be inappropriate to impute the missing data for multivariable analyses. An Exploratory Factor Analysis (EFA) was also run to support the creation of our intrinsic endorsement of domestic violence beliefs scale. A Principal Axis Factor Analysis (PAFA) was selected as the EFA technique because the constructs are latent. Second, bivariate analyses were run to support the construction of multivariable models. Third, multivariable models were constructed. Given the dichotomous coding of the two outcome measures assessing domestic violence perpetration, we used logistic regression as the primary multivariate analysis.
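As a rough illustration of the modeling stage, the sketch below fits a logistic regression by Newton-Raphson and computes Nagelkerke's pseudo-R² (the fit statistic reported in Table 4) on synthetic data. This is not the authors' code, and an actual analysis would normally use a statistics package; the point is only to make the reported quantities concrete.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Fit a logistic regression by Newton-Raphson; returns [intercept, slopes...]."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-Xd @ beta))
        H = Xd.T @ (Xd * (p * (1 - p))[:, None])    # observed information matrix
        beta += np.linalg.solve(H, Xd.T @ (y - p))  # Newton step
    return beta

def log_lik(X, y, beta):
    Xd = np.column_stack([np.ones(len(y)), X])
    p = 1 / (1 + np.exp(-Xd @ beta))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def nagelkerke_r2(X, y, beta):
    """Nagelkerke's pseudo-R^2: Cox-Snell R^2 rescaled to a 0-1 range."""
    n = len(y)
    null_X = np.empty((n, 0))  # intercept-only (null) model
    ll1 = log_lik(X, y, beta)
    ll0 = log_lik(null_X, y, fit_logit(null_X, y))
    r2_cs = 1 - np.exp(2 * (ll0 - ll1) / n)
    return r2_cs / (1 - np.exp(2 * ll0 / n))

# Demonstration on synthetic data with one predictor (true coefficient = 1.5)
rng = np.random.default_rng(42)
x = rng.normal(size=(300, 1))
y = (rng.random(300) < 1 / (1 + np.exp(-1.5 * x[:, 0]))).astype(float)
beta = fit_logit(x, y)
r2 = nagelkerke_r2(x, y, beta)
```

Exponentiating each fitted coefficient yields the odds ratios reported alongside the unstandardized coefficients in Table 4.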

Descriptive Information

Table 2 displays the demographic information for the sample, as well as descriptive statistics for the key variables of interest. As indicated in Table 2, the sample had an average age of 35.81 years. Most participants were male (91%) and identified as white (77%). About 45 percent of the sample reported being a victim of domestic violence. Regarding violent media exposure, participants indicated greater exposure to pleasurable violence in movies ( M  = 36.40, sd  = 41.87) than to pleasurable violence in television ( M  = 29.73, sd  = 38.73) and video games ( M  = 22.58, sd  = 38.66). In the aggregate, participants did not show much intrinsic support for domestic violence ( M  = 34.02, sd  = 18.22). However, 16.2 percent of the sample had been convicted of a domestic violence offense and 34.5 percent admitted to abusing an intimate partner.

Table 2. Descriptive statistics ( N  = 148)

Variable: M (%), SD, Minimum, Maximum
Age: 35.81, 12.36, 18, 95
Male: 91.0%
White: 77.0%
Been abused: 44.6%
Video game violence: 22.58, 38.66, 0, 200
Movie violence: 36.40, 41.87, 0, 200
Television violence: 29.73, 38.73, 0, 200
Endorsement of DV: 34.02, 18.22, 18, 88
DV (conviction): 16.2%
DV (self-report): 34.5%

Bivariate Correlations

Table 3 displays the results from zero-order correlations between variables of interest. As indicated in Table 3, only one variable, endorsement of domestic violence beliefs ( r  = 0.202, p  < 0.05), was statistically significantly correlated with a domestic violence conviction at the bivariate level. Results suggest that as one’s intrinsic support for domestic violence increases, so too does the likelihood of having been convicted of a domestic violence offense. Interestingly, this variable was not statistically significantly correlated with the “self-reported” measure of domestic violence perpetration ( r  = 0.101, p  > 0.05). However, four other variables were found to be statistically significantly correlated with participants’ self-reported domestic violence perpetration: being male ( r  =  − 0.177, p  < 0.05), being a victim of domestic violence ( r  = 0.637, p  < 0.01), television violence ( r  = 0.179, p  < 0.05), and having a domestic violence conviction ( r  = 0.182, p  < 0.05). Results indicate that males were less likely to report abusing an intimate partner than were females. Further, results show that those who had been a victim of domestic violence, those who had greater exposure to pleasurable television violence, and those who had been convicted of a domestic violence offense were more likely to report abusing an intimate partner than their counterparts. The weak correlation between our two dependent measures supports the use of the two separate multivariable models reported below.

Table 3. Correlations

Variables (lower triangle; each row lists correlations with the preceding variables; diagonal = 1):
1. Age: —
2. Male: −.153
3. White: −.038, .110
4. Been abused: .050, −.299**, −.059
5. Video game violence: −.041, .071, −.176*, −.002
6. Movie violence: −.027, −.106, −.112, .224**, .598**
7. Television violence: .124, −.292**, −.142, .262**, .401**, .714**
8. Endorsement of DV: −.013, −.077, .054, .016, .007, −.081, −.056
9. DV conviction: .140, −.054, .022, .122, .058, .067, .114, .202*
10. DV self-report: .071, −.177*, −.111, .637**, −.052, .150, .179*, .101, .182*

Pearson product-moment correlations are reported. Two-tailed significance is reported

* p  ≤ .05, ** p  ≤ .01

Multivariable Models

Table 4 shows the results from logistic regression models estimating domestic violence convictions and self-reported domestic violence perpetration. The first model in Table 4 assessed the correlates of having a domestic violence conviction. Overall, the model fit the data well and explained about 14 percent of the variance in having a domestic violence conviction (Nagelkerke’s R² = 0.142). However, there was only one statistically significant predictor in that model: endorsement of domestic violence beliefs ( b  = 0.033, p  < 0.05). Results show that a one-unit increase in intrinsic support for domestic violence was associated with a 3.3 percent increase in the odds of being convicted of a domestic violence offense (OR = 1.033).

Table 4. Logistic regression analyses predicting domestic violence

Variable: Conviction model (b, SE, OR) | Self-report model (b, SE, OR)
Age: .024, .022, 1.025 | .030, .022, 1.030
Male: −.837, 1.094, .433 | −.478, .788, .620
White: .017, .704, 1.017 | .574, .639, 1.775
Been abused: −.457, .600, .633 | −3.533***, .673, .029
Video game violence: .001, .009, 1.001 | −.008, .009, .992
Movie violence: −.005, .012, .995 | .000, .011, 1.000
Television violence: .011, .011, 1.011 | .009, .010, 1.009
Endorsement of DV: .033*, .014, 1.033 | .004, .015, 1.004
Nagelkerke’s R²: .142 | .548

Unstandardized coefficients are presented, OR  = odds ratio. DV  = “Domestic Violence”

* p  < .05, ** p  < .01, *** p  < .001

The second model in Table 4 depicts the results from the logistic regression model estimating self-reported domestic violence perpetration. Overall, the model fit the data well and explained nearly 55 percent of the variance in abusing an intimate partner (Nagelkerke’s R² = 0.548). Interestingly, endorsement of domestic violence beliefs was not a significant predictor in this model ( b  = 0.004, p  > 0.05). In fact, the only statistically significant predictor in that model was our measure of domestic violence victimization ( b  =  − 3.533, p  < 0.001). Results suggest that, controlling for all other relevant factors, the odds that victims of domestic violence reported abusing an intimate partner were 34.48 times lower than the odds for non-victims (OR = 0.029). It is important to note that none of the measures of exposure to pleasurable media violence were related to either of our measures of domestic violence perpetration [ 74 , 83 ].
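The odds-ratio interpretations above follow directly from exponentiating the logistic coefficients; a quick check (the slight difference between the text's 34.48 and the value computed here reflects rounding of the published coefficient):

```python
import math

# Odds ratio = exp(b) for a logistic regression coefficient (log-odds)
or_endorsement = math.exp(0.033)     # conviction model: ~1.034 per one-unit increase
or_victimization = math.exp(-3.533)  # self-report model: ~0.029
times_lower = 1 / or_victimization   # ~34, the "times less likely" phrasing
```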

Much of the prior work assessing the relationship between exposure to violent media and crime perpetration has ignored the pleasure component of media exposure and failed to assess multiple forms of violent media simultaneously while controlling for the endorsement of criminogenic beliefs and other relevant factors (e.g., prior victimization). The current exploratory project sought to help fill these gaps in the literature. Specifically, the current project had three main goals: (1) to establish a psychometrically sound measure of intrinsic support for domestic violence, (2) to assess the relationship between intrinsic support for domestic violence and domestic violence perpetration, and (3) to analyze the relationship between pleasurable violent media exposure and two different measures of domestic violence perpetration (i.e., conviction and “self-report”) while controlling for appropriate covariates (e.g., prior victimization, endorsement of domestic violence, etc.). Our research, using data from a sample of convicted offenders ( N  = 148), yielded several key findings worth further consideration.

First, results from the Exploratory Factor Analysis showed that we were able to create a psychometrically sound measure of intrinsic support for domestic violence. We encourage other researchers to adopt this 18-item measure in future projects as both a predictor and an outcome measure. Future research should also explore how these beliefs develop. Perhaps more importantly, through our data analyses we were able to establish a relationship between intrinsic support for domestic violence and being convicted of a domestic violence offense. That is, our results show that offenders who hold beliefs favoring the emotional and physical abuse of an intimate partner are more likely to have been convicted of a domestic violence offense than those who do not hold such views. This finding suggests that, to help prevent domestic violence, researchers and practitioners need to develop strategies to avert, disrupt, or reverse the internalization of such beliefs. We suggest that targeting adolescents who are at risk of experiencing child abuse or witnessing abuse between their parents may help prevent such individuals from internalizing the acceptance of such beliefs and reduce the chances that they will grow up to perpetrate domestic violence, as prior research indicates that they are a high-risk group (Footnote 5) [ 36 , 52 ]. For partners who have already engaged in violence toward one another, cognitive behavioral therapy programs, such as Behavioral Couples Therapy [ 53 , 54 , 62 ], are effective at changing domestic violence perceptions and reducing future violence [ 29 , 63 ].

Second, we did not find much support for a relationship between violent media exposure and domestic violence perpetration, calling media cultivation effects into question. At the bivariate level, pleasurable violent television exposure exhibited a small, positive association with self-reported intimate partner abuse ( r  < 0.20) [ 9 ]. This finding suggests that, at the bivariate level, as one’s exposure to pleasurable violent television increases, so too does the likelihood of self-reporting abuse of an intimate partner. However, this relationship was reduced to insignificance in multivariable modeling controlling for age, gender, race, endorsement of domestic violence beliefs, and prior victimization. In fact, no measure of pleasurable violent media exposure was significantly related to domestic violence perpetration in multivariable modeling. Thus, the current study supports prior research indicating no relationship between media violence and violent crime perpetration [ 21 , 22 , 64 , 65 ], and suggests that other variables (i.e., endorsement of domestic violence beliefs and victimization), not violent media, drive individuals to commit violent crimes.

Third, our work highlights the important role prior victimization plays in criminal perpetration. Interestingly, at the bivariate level, domestic violence victimization at the hands of an intimate partner was unrelated to a domestic violence conviction, but significantly and positively related to admitting to abusing an intimate partner. In fact, the relationship between being a victim of domestic violence and admitting to abusing an intimate partner was very strong ( r  = 0.637) [ 9 ]. This finding suggests that individuals who have been previously victimized by an intimate partner are at an increased likelihood of abusing an intimate partner themselves. However, in multivariable modeling, this relationship switched directions, and prior victimization was found to be negatively related to self-reported domestic violence perpetration. In fact, with the addition of appropriate statistical controls, our findings suggest that those who had been abused by an intimate partner were more than 34 times less likely to report abusing an intimate partner. This is an interesting and difficult finding to interpret because it opposes prior work indicating that victimization experiences, especially among the young [ 32 , 44 , 71 , 81 ], and witnessing domestic violence [ 1 , 17 , 44 , 49 , 61 , 68 , 73 , 79 ], can be positively related to perpetration. Initially, we speculated that the observed relationship was due to controlling for the endorsement of domestic violence beliefs. However, the significant negative relationship between victimization and perpetration persisted in auxiliary analyses that removed the endorsement variable from statistical modeling. Thus, we offer two plausible explanations for the observed relationship.
First, this finding may reflect some form of empathy that serves as a protective factor against domestic violence perpetration, controlling for other relevant factors such as demographics, endorsement of domestic violence beliefs, and pleasurable violent media exposure. That is, victims of domestic violence understand the horrific pain caused by intimate partner abuse and, in an attempt to avoid inflicting such pain on their partner, refrain from acting out aggressively against them. Second, this finding may simply be the result of sampling error. No relationship was found between domestic violence conviction and domestic violence victimization in statistical modeling. As such, the relationship found between domestic violence victimization and self-reported domestic violence perpetration could be an artifact of the measure being self-reported; that is, victims of domestic violence may be less willing than non-victims to admit to domestic violence perpetration. Future research should explore these two hypotheses further.

Limitations

There are several limitations to our study that warrant disclosure. First, the results reported above come from a small convenience sample of offenders incarcerated in New York and West Virginia. Thus, the findings from this exploratory study are not generalizable beyond these parameters. Second, the data had temporal ordering constraints: the dependent and independent variables were collected at the same time. Accordingly, our use of the term “predictor” in multivariable modeling is better read as “correlate.” Due to these temporal ordering issues, it is unknown whether individuals prone to violence seek out violent media, or whether violent media causes such individuals to become violent. Future research should employ probabilistic sampling techniques, collect data from more urban sites, and use longitudinal research designs. Third, our measures of violent media exposure were not ideal. Notably, while more robust than prior estimates, our measures captured general media violence across three different types of media: television, movies, and video games. Future researchers should examine the impact of specific types of violence depicted in media, such as domestic violence, on specific types of violent crimes.

Future work should also take steps to better explore this relationship through a theoretical lens, such as Cultivation Theory, the “mean world” hypothesis, and catharsis effects. Future work may also benefit from approaching this topic inductively, by asking respondents to list the media they consume and then exploring the relationship between that media consumption and various forms of crime. For instance, it may be prudent to explore the relationship between exposure to types of pornography and acceptance of domestic violence beliefs, and subsequently, perpetration rates. This could provide further evidence of a media cultivation or catharsis effect. Lastly, the survey questions used wording pertaining to “husband” and “wife,” thereby limiting the scope of domestic violence captured to spousal relationships. Future research should reword the survey to examine perceptions of domestic violence between intimate partners generally, not just between spouses.

The relationship between exposure to violent media and crime perpetration is complex. Results from the current study suggest that exposure to various forms of pleasurable violent media is unrelated to domestic violence perpetration. When considering domestic violence perpetration, prior victimization experience and endorsement of domestic violence beliefs appear to be significant correlates worthy of future exploration and policy development.

This project received no funding for any element of the work, including study design, data collection, data analysis, or manuscript preparation.

Declarations

The authors declare that they have no conflict of interest.

All research was conducted within the framework of the first author’s Institutional Review Board.

The study was approved by the institutional review board at the West Virginia Wesleyan College. The study was performed in accordance with the ethical standards as laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

All participants were given and signed written informed consent documents prior to submitting data used in this study. They agreed to have their data collected and the findings published.

1 Media violence is defined as various forms of media (i.e., television, music, video games, movies, Internet), that contain or portray acts of violence [ 10 ].

2 Media aggression, for the purpose of this study, is defined as various forms of media that contain or portray acts of aggression. Aggression is defined as: “[1)] a forceful action or procedure (such as an unprovoked attack), especially when intended to dominate or master; [2)] the practice of making attacks or encroachments; [and 3)] hostile, injurious, or destructive behavior or outlook, especially when caused by frustration” [ 13 ].

3 A combined measure of childhood physical abuse victimization and witnessing violence between parents [ 49 ].

4 Four respondents did not provide their biological sex.

5 Programs effective at reducing the likelihood of violence include, but are not limited to [ 50 ], Safe Dates [ 27 ], The Fourth R: Strategies for Healthy Teen Relationships [ 81 ], Expect Respect Support Groups [ 58 ], Nurse Family Partnership [ 15 , 55 , 56 ], Child Parent Centers [ 59 , 60 ], Multidimensional Treatment Foster Care [ 16 , 24 , 30 ], Shifting Boundaries [ 72 ], and Multisystemic Therapy [ 66 , 80 ].

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Samantha M. Gavin, Email: sgavin@sbu.edu

Nathan E. Kruis, Email: nek132@psu.edu

The Influence of Media Violence on Intimate Partner Violence Perpetration: An Examination of Inmates’ Domestic Violence Convictions and Self-Reported Perpetration

  • Original Article
  • Published: 20 June 2021
  • Volume 39, pages 177–197 (2022)

  • Samantha M. Gavin 1 &
  • Nathan E. Kruis   ORCID: orcid.org/0000-0002-2076-314X 2  

Research suggests that the representation of violence against women in the media has resulted in an increased acceptance of attitudes favoring domestic violence. While prior work has investigated the relationship between violent media exposure and violent crime, there has been little effort to empirically examine the relationship between specific forms of violent media exposure and the perpetration of intimate partner violence. Using data collected from a sample of 148 inmates, the current study seeks to help fill these gaps in the literature by examining the relationship between exposure to various forms of pleasurable violent media and the perpetration of intimate partner violence (i.e., conviction and self-reported). At the bivariate level, results indicate a significant positive relationship between exposure to pleasurable television violence and self-reported intimate partner abuse. However, this relationship is reduced to insignificant levels in multivariable modeling. Endorsement of domestic violence beliefs and victimization experience were found to be the strongest predictors of intimate partner violence perpetration. Potential policy implications based on findings are discussed within.


Introduction

In the United States, more than 12 million men and women become victims of domestic violence each year [ 76 ]. In fact, every minute, roughly 20 Americans are victimized at the hands of an intimate partner [ 3 ]. Although both men and women are abused by an intimate partner, women have a higher likelihood of such abuse, with those ages 18–34 years being at the highest risk of victimization. Moreover, it is estimated that approximately 1 in 4 women and 1 in 7 men experience violence at the hands of an intimate partner at some point in their lifetime [ 77 ].

According to the United Nations Commission on the Status of Women [ 78 ], the representation of violence against women in the media has greatly increased over the years. Recent research suggests that women are commonly depicted as victims and sex objects in the media [ 12 , 69 ]. Media that portray women in this way, such as pornography, pornographic movies, and music videos, have been found to increase attitudes supportive of violence, specifically sexual violence, against women. Notably, research suggests that the media’s portrayal of women as sex objects and victims tends to foster societal attitudes that are accepting of domestic violence, particularly violence against women [ 40 , 43 , 46 , 69 ].

Understanding the influence of media violence on individuals’ perceptions of domestic violence could clarify the factors that contribute to domestic violence tendencies, as well as how to lessen such tendencies. Beyond the overall influence of media violence on domestic violence perceptions, specific forms of media can also be compared in terms of the level of influence each has on such perceptions. Comparing how exposure to media violence and exposure to media aggression each shape domestic violence perceptions allows for an overall perspective on how violent media influence domestic violence perpetration. Accordingly, the present study seeks to provide an empirical assessment of the relationship between violent media exposure and the perpetration of intimate partner abuse.

Literature Review

Although it is widely believed that exposure to media violence Footnote 1 causes an individual to become violent, research has cast doubt on this belief, finding that violent media exposure is not a strong or consistent statistically significant predictor of violent behavior [ 2 , 4 , 21 , 22 , 65 , 85 ]. In relation to media aggression Footnote 2 and domestic violence perceptions, however, research has demonstrated a relationship between the two variables [ 11 , 12 , 23 , 35 , 39 , 41 , 47 ], such that increased exposure to media aggression, for example, video games and movies depicting aggression toward women, influences individuals to become more accepting of aggression toward women.

Domestic Violence

According to the United States Department of Justice [ 75 ], domestic violence is defined as “a pattern of abusive behavior in any relationship that is used by one partner to gain or maintain power and control over another intimate partner” (para. 1). Abuse between intimate partners can take several forms: emotional/psychological, verbal, physical, sexual, and financial [ 42 , 84 ], as well as digital.

In the United States alone, domestic violence hotlines receive approximately 20,000 calls per day [ 51 ], with at least five million incidents occurring each year [ 34 ]. During the COVID-19 pandemic, the likelihood of domestic violence incidents increased while victims’ ability to call and report decreased [ 18 ], as individuals were locked down at home, laid off, or working from home. Of the 1 in 4 women and 1 in 7 men who experience domestic violence at the hands of an intimate partner, 1 in 3 and 1 in 4, respectively, have experienced physical abuse [ 3 ], with 1 in 7 women and 1 in 25 men sustaining injuries from the abuse [ 51 ]. In addition, 1 in 10 women have been raped by an intimate partner, while the true extent of male rape victimization is relatively unknown [ 51 ]. Even though domestic violence crimes make up approximately 15% of all reported violent crimes [ 77 ], almost half go unreported [ 57 ] for various reasons (i.e., concerns about privacy, desire to protect the offender, fear of reprisal [ 19 ], relationship to the perpetrator [ 20 ]).

There are several risk factors that increase an individual’s likelihood of perpetrating domestic violence. Individuals who witnessed domestic violence between their parents [ 1 , 17 , 44 , 49 , 61 , 73 ], or were abused as children themselves [ 32 , 44 , 68 , 71 , 79 , 81 ], are more likely to perpetrate domestic violence than individuals who did not witness or experience such abuse. Research has found men who witnessed abuse between their parents had higher risk ratios for committing intimate partner violence themselves [ 61 ] and were more likely to engage in such violence [ 49 ], than men who did not witness such violence as children. Research has also shown that male adolescents who witnessed mother-to-father violence were more likely to engage in dating violence themselves [ 73 ]. Similarly, scholars have found women who witnessed intimate partner violence between their parents were over 1.5 times more likely to engage in such violence themselves [ 49 ], and adolescent girls were more likely to engage in dating violence when they witnessed violence between their parents [ 73 ]. Child abuse victims were more likely to perpetrate intimate partner violence as they aged, with 23-year-olds demonstrating a significant relationship compared to 21-year-olds [ 44 ], and males who identified as child abuse victims were found to be four times more likely to engage in such violence than males who had no history of such abuse [ 49 ]. Overall, both males and females who experienced child-family violence Footnote 3 were more likely to engage in both reciprocal and nonreciprocal intimate partner violence [ 49 ].

Research has also found that being diagnosed with conduct disorder as a child or antisocial personality disorder as an adult, also increases the likelihood of domestic violence perpetration [ 7 , 8 , 17 , 31 , 45 , 81 ], with antisocial personality disorder being a mediating factor between child abuse and later intimate partner violence perpetration [ 81 ]. Additionally, individuals who demonstrate antisocial characteristics during adolescence are at an elevated risk of engaging in domestic violence as adults [ 45 ]. Another key factor that influences domestic violence perpetration is having hostile attitudes and beliefs [ 5 , 37 , 48 , 49 , 70 ], with such attitudes being more of a predictive factor of intimate partner abuse than conduct problems [ 8 ]. Both men and women who approve of intimate partner violence are more likely to engage in or reciprocate such violence compared to those without such perceptions [ 49 ].

Media Violence and Crime

Media violence and behavior.

It has long been speculated that media violence is directly related to violent behavior and the perpetration of violent crime, such as intimate partner abuse [ 6 , 14 , 33 , 50 ]. However, research has found very weak evidence of a correlation between exposure to media violence and crime, with Pearson’s r correlations of less than 0.4 reported in most studies in this area [ 2 , 21 , 22 , 64 , 65 , 85 ]. In fact, Savage [ 64 ] determined that exposure to violent activities through the media does not have a statistically significant relationship with crime perpetration. Likewise, Ferguson and colleagues’ [ 21 , 22 ] work supported these findings, indicating that “exposure to television [violence] and video game violence were not significant predictors of violent crime” [ 21 ] (p. 396).

More recently, Savage and Yancey [ 65 ] conducted a meta-analysis of thirty-two studies that tested the relationship between media violence (i.e., television or film) and criminal aggression. Lester (1989), Kruttschnitt, Heath, and Ward (1986), Lagerspetz and Viemerö (1986), Phillips (1983), Berkowitz and Macaulay (1971), and Steuer, Applefield, and Smith (1971) were among the evaluated studies. Collectively, Savage and Yancey [ 65 ] concluded that a relationship between violent media exposure and criminal aggression had not been established in the existing scholarly literature, although there was evidence of a slight, positive effect of media violence on criminal aggression for males. The authors also noted several limitations among the evaluated studies that question the generalizability of the findings. As such, more work is needed in this area before firm conclusions can be drawn about the relationship between violent media exposure and violent behavior.

Media aggression and violence against women

Although research has demonstrated a weak or nonexistent correlation between media violence and violent behavior, it has found a moderate positive correlation between exposure to media aggression and domestic violence perceptions. Such research has identified significant relationships between exposure to media aggression and a variety of harmful perceptions, ranging from views on rape [ 47 , 67 ] to views on domestic violence [ 11 , 12 , 39 ]. These views support or excuse the rape of women and abusive behavior toward an intimate partner.

For instance, Malamuth and Check [ 47 ] examined how exposure to movies containing high levels of violence and sexual content, especially misogynistic content, influenced viewers’ perceptions. Individuals who watched such content were more likely to hold rape-supportive attitudes than individuals who were not exposed to such movies. Simpson Beck and colleagues [ 67 ] found that rape-supportive attitudes were more common among individuals who played video games that sexually objectified and degraded women. Such individuals were more likely to believe that rape is acceptable behavior and that it is the woman’s fault if she is raped, compared to individuals who did not play such video games.

Relatedly, Cundiff [ 12 ] classified the songs on the Billboard Hot 100 chart between 2000 and 2010 into categories such as rape/sexual assault, demeaning language, physical violence, and sexual conquest, and found that the objectification and control of women were common themes throughout these songs. Surveying individuals about their exposure to such music, Cundiff [ 12 ] found a positive correlation between exposure to suggestive music and misogynous thinking. Further, Fischer and Greitemeyer [ 23 ] found that individuals who listened to more aggressive music were more likely to hold negative views of women and to act more aggressively toward them. Likewise, Kerig [ 39 ] and Coyne and colleagues [ 11 ] found that individuals exposed to higher levels of media aggression are more likely to perpetrate domestic violence offenses. This suggests that increased exposure to media aggression influences an individual’s perceptions of domestic violence and could subsequently influence the perpetration of domestic violence.

Cultivation Theory

A theoretical explanation for a relationship between violent media exposure and the perpetration of violent crime can be found in Cultivation Theory. Cultivation Theory assumes that “when people are exposed to media content or other socialization agents, they gradually come to cultivate or adopt beliefs about the world that coincide with the images they have been viewing or messages they have been hearing” [ 28 ] (p. 22). Essentially, this cultivation manifests in individuals mistaking their “world reality” for the “media reality,” thus increasing the likelihood of violence [ 26 ] (p. 350). Individuals who are exposed to violent media are more likely to perceive their own reality as filled with the same level of violence, increasing the likelihood that they will act violently themselves. By identifying their reality with the “media reality,” individuals create their own social constructs and begin to believe that the violence depicted in the media is acceptable in life as well.

This media-based cultivation and social construction is demonstrated in Kahlor and Eastin’s [ 38 ] examination of the influence of television shows on rape myth acceptance. Individuals who watched soap operas demonstrated rape myth acceptance and an “overestimation of false rape accusations”, while individuals who watched crime shows were less likely to demonstrate rape myth acceptance [ 38 ] (p. 215). This demonstrates how the type of television show an individual watches can influence how and what the viewer learns from such viewing.

In relation to domestic violence perceptions, individuals who are exposed to violence in intimate relationships, or to sexual aggression, whether through the media or in real life, are more likely to support or accept such actions over time [ 12 , 28 ]. A longitudinal study by Williams [ 82 ] examined cultivation effects on individuals who play video games and found that those who played at higher rates began to fear the dangers they experienced through the games, demonstrating how individuals adopt beliefs based on their media exposure. According to Cultivation Theory, then, individuals who are exposed to higher levels of violent media are likely to learn from that media and act on this learning [ 12 , 28 , 82 ]. It is therefore reasonable to hypothesize that individuals who are exposed to higher levels of media violence are more likely to become supportive or accepting of domestic violence.

Limitations of Previous Work

While prior research has explored the relationship between exposure to media aggression and domestic violence perceptions [ 11 , 12 , 23 , 39 , 47 , 67 ], we are unaware of research that has focused specifically on the relationship between one’s level of exposure to media violence and domestic violence perceptions. As a result, the relationship between violent media exposure and domestic violence has yet to be fully examined. Further, research focusing specifically on media aggression, media violence, and violence perpetration has predominantly focused on specific types of media (i.e., video games, movies, songs), often with the media platform and materials provided to study participants by researchers. To date, little research has investigated multiple forms of self-exposure to violent media and criminal perpetration. Moreover, previous research has failed to examine the effects of the pleasure gained from such exposure, and we speculate that individuals will be less likely to consume media they find unpleasurable. Subsequently, the effects of media exposure are largely dependent on one’s disposition toward the content – which, admittedly, can over time be shaped by the content itself. Thus, we suggest that prior tests focusing exclusively on simulated exposure without consideration of pleasure have been incomplete.

Moreover, while there are scales that measure domestic violence perceptions (e.g., The Perceptions of and Attitudes Toward Domestic Violence Questionnaire – Revised (PADV-R), The Definitions of Domestic Violence Scale, The Attitudes toward the Use of Interpersonal Violence – Revised Scale, and various others compiled by Flood [ 25 ]), they are very specific in nature, making them difficult to use outside the contexts for which they were created. Furthermore, these scales rarely examine the actual perceptions an individual holds toward domestic violence, and when they do, they tend to examine domestic violence perpetrated by men, not women. Thus, the current study sought to help fill some of these gaps in the literature.

Current Focus

There were three overarching goals driving the current project:

First, we sought to create a psychometrically sound scale capable of measuring intrinsic endorsement of domestic violence beliefs.

Second, we were interested in assessing the relationship between intrinsic endorsement of domestic violence beliefs and domestic violence perpetration.

Third, we wanted to explore the relationship between various types of violent media exposure (i.e., video game, movie, television) and domestic violence perpetration.

Data and Method

The data used in this study came from a sample of incarcerated offenders in two jails in New York and a prison in West Virginia. A convenience sample of 148 convicted offenders Footnote 4 was surveyed face-to-face between April 2018 and September 2018. At the prison, a student intern asked inmates with whom she came into contact if they would be willing to take the survey; these surveys were administered individually. At the first jail, all inmates participating in educational classes were asked by the researcher to take the survey, with surveys administered in a group setting. A sign-up sheet was also placed in each pod for inmates interested in participating, and each individual on the list was brought to a room occupied only by the researcher, with surveys administered individually. At the second jail, correctional officers made an announcement in one of the pods asking those interested in participating to let them know; these inmates were individually brought to a room occupied by the researcher, with a plexiglass wall between them. The surveys were administered on paper, with a researcher present to answer any questions participants had throughout the survey process. Because we were working with a vulnerable population, confidentiality was key: no correctional staff were allowed in the room while surveys were taken, and informed consent documents were kept separate from the surveys. Respondents were informed that their participation was completely voluntary and that their information would not be shared with law enforcement or anyone within the jail.

Dependent Variables

Domestic violence perpetration.

Two measures were used to assess domestic violence perpetration. First, participants were asked if they had been convicted of a domestic violence offense. While this is a good indication of domestic violence perpetration, it is not the “best” measure, as many persons who commit domestic violence are never convicted of the crime. As such, we employed a second measure of domestic violence perpetration. Specifically, participants were also asked if they had ever abused an intimate partner. Response categories were a dichotomous “yes” (1) or “no” (0).

Independent Variables

Endorsement of domestic violence beliefs.

We were interested in assessing the relationship between the intrinsic endorsement of domestic violence beliefs and domestic violence perpetration. Unfortunately, at the time of the study, the research team was not aware of any psychometrically sound measure of intrinsic endorsement of domestic violence beliefs available in the scholarly literature. Thus, we sought to create one. Specifically, we used an eighteen-item self-report scale to capture respondents’ intrinsic support of domestic violence. Some items included, “A wife sometimes deserves to be hit by her husband,” “A husband who makes his wife jealous on purpose deserves to be hit,” and “A wife angry enough to hit her husband must really love him.” Respondents were asked to indicate their level of agreement with the eighteen items on a five-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). Responses were summed to create a scale measure of intrinsic support of domestic violence beliefs with higher scores indicative of greater support of domestic violence. As indicated in Table 1 , these items loaded onto one latent factor in an Exploratory Factor Analysis (EFA) and demonstrated good internal consistency (Cronbach’s alpha = 0.974).
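To make the scoring concrete, the sketch below (not the study's actual code) sums hypothetical Likert responses into a scale score and computes Cronbach's alpha. The response matrix is invented and shortened to three items for readability; the study's scale uses eighteen items scored 1–5 in the same way.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 4 respondents x 3 items on a 1-5 Likert scale
responses = np.array([
    [1, 2, 1],
    [5, 4, 5],
    [2, 2, 3],
    [4, 5, 4],
])
scale_scores = responses.sum(axis=1)  # higher score = greater endorsement
alpha = cronbach_alpha(responses)     # internal consistency of the items
```

Summing item responses and reporting alpha on the summed scale mirrors the construction reported above (alpha = 0.974 for the actual 18-item scale).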

Violent media exposure

Prior research assessing the relationship between violent media exposure and crime has found mixed results [ 2 , 11 , 12 , 21 , 22 , 23 , 39 , 47 , 64 , 65 , 67 , 85 ]. However, most of this work has employed only one measure of media exposure and has ignored the pleasure that one may receive from violent media – that is, whether one enjoys the content. In an attempt to fill these gaps in the literature, we considered three types of violent media exposure: (1) video games, (2) movies, and (3) television. Consistent with recommendations made by Savage and Yancey [ 65 ], our measures include an estimate of both media exposure (e.g., time) and a rating of violence. Specifically, participants were asked to report the number of hours they spent playing video games, watching movies, and watching television each week. Next, they were asked to indicate the percentage of violence (0–100%) in the games, movies, and television they played and watched. Additionally, participants were asked to report how pleasurable they found the video games, movies, and television that they played and watched (coded 0 = “Not Pleasurable” through 10 = “Very Pleasurable”). Responses to the three questions in each media block were multiplied together to create a scale measure of pleasurable violent media exposure, with higher numbers indicative of greater pleasurable violent media exposure.
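Each composite measure therefore reduces to a simple product per medium. A minimal sketch with hypothetical respondent values (the variable names are ours, not the survey's, and the violence percentage is expressed here as a proportion):

```python
def exposure_score(hours: float, violent_prop: float, pleasure: int) -> float:
    """Weekly hours x proportion of violent content x pleasure rating (0-10)."""
    return hours * violent_prop * pleasure

# Hypothetical respondent: 10 hours of television per week, 40% of which
# they rate as violent, with a pleasure rating of 8 out of 10
tv_score = exposure_score(hours=10, violent_prop=0.40, pleasure=8)  # → 32.0
```

A respondent who reports zero hours, zero violent content, or zero pleasure thus receives a score of zero for that medium, which is consistent with the scale's intent of capturing *pleasurable* violent exposure specifically.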

Control Variables

Four measures were used as control variables in this study: (1) age, (2) sex, (3) race, and (4) domestic violence victimization, as research has rarely examined whether such victimization is related to victims’ own perpetration of intimate partner violence. Age was a continuous variable ranging from 18 to 95 years old. Sex and race were dichotomous variables (i.e., 1 = “male” or “white” and 0 = “female” or “other”). For domestic violence victimization, participants were asked if they had ever been abused by an intimate partner; responses were dichotomous with 1 = “yes” and 0 = “no.”

Analytic Strategy

Data analysis proceeded in three key stages. First, all data were cleaned and coded, and univariate analyses were conducted to assess measures of central tendency and dispersion. Missing data were assessed using Little’s Missing Completely at Random (MCAR) test. The significant MCAR test ( p  < 0.05) indicated that data were not missing completely at random, and as such, it would be inappropriate to impute the missing data for multivariable analyses. An Exploratory Factor Analysis (EFA) was also run to support the creation of our intrinsic endorsement of domestic violence beliefs scale; a Principal Axis Factor Analysis (PAFA) was selected as the EFA technique because the constructs are latent. Second, bivariate analyses were run to support the construction of multivariable models. Third, multivariable models were constructed. Given the dichotomous coding of the two outcome measures assessing domestic violence perpetration, we used logistic regression as the primary multivariate analysis.
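To illustrate the final stage, the sketch below fits a logistic regression by gradient ascent on simulated data. It is a pure-NumPy stand-in for the statistical software actually used in the study; the predictor, outcome, and coefficients are all simulated, not the study's data.

```python
import numpy as np

def fit_logit(X: np.ndarray, y: np.ndarray, lr=0.1, n_iter=5000) -> np.ndarray:
    """Fit logistic regression (intercept + slopes) by gradient ascent."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))          # predicted probabilities
        beta += lr * X.T @ (y - p) / len(y)      # log-likelihood gradient step
    return beta

# Simulated data: one continuous predictor (e.g., a belief-scale score) and a
# binary outcome generated with a true slope of 1.0
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (rng.random(200) < 1 / (1 + np.exp(-(0.5 + 1.0 * x)))).astype(float)

beta = fit_logit(x.reshape(-1, 1), y)
odds_ratio = np.exp(beta[1])  # exp(b): multiplicative change in odds per unit
```

The recovered slope is positive and its exponential exceeds 1, matching how the coefficients in Table 4 are interpreted as odds ratios.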

Descriptive Information

Table 2 displays the demographic information for the sample, as well as the descriptive statistics for key variables of interest. As indicated in Table 2 , overall, the sample had an average age of 35.81 years. Most participants were male (91%) and identified as white (77%). About 45 percent of the sample reported being a victim of domestic violence. Regarding violent media exposure, participants indicated greater exposure to pleasurable violence in movies ( M  = 36.40, sd  = 41.87) than to pleasurable violence in television ( M  = 29.73, sd  = 38.73) and video games ( M  = 22.58, sd  = 38.66). In the aggregate, participants did not show much intrinsic support for domestic violence ( M  = 34.02, sd  = 18.22). However, 16.2 percent of the sample had been convicted of a domestic violence offense and 34.5 percent had admitted to abusing an intimate partner.

Bivariate Correlations

Table 3 displays the results from zero-order correlations between variables of interest. As indicated in Table 3 , only one variable, endorsement of domestic violence beliefs ( r  = 0.202, p  < 0.05), was statistically significantly correlated with a domestic violence conviction at the bivariate level. Results suggest that as one’s intrinsic support for domestic violence increases, so too does the likelihood that they have been convicted of a domestic violence offense. Interestingly, this variable was not statistically significantly correlated with the “self-reported” measure of domestic violence perpetration ( r  = 0.101, p  > 0.05). However, four other variables were found to be statistically significantly correlated with a participant’s self-reported domestic violence perpetration. These variables included being a male ( r  =  − 0.177, p  < 0.05), being a victim of domestic violence ( r  = 0.637, p  < 0.01), television violence ( r  = 0.179, p  < 0.05), and having a domestic violence conviction ( r  = 0.182, p  < 0.05). Results indicate that males were less likely to report abusing an intimate partner than were females. Further, results show that those who had been a victim of domestic violence, those who had greater exposure to pleasurable television violence, and those who had been convicted of a domestic violence offense, were more likely to report abusing an intimate partner than those in reference groups. The weak correlation between our two dependent measures supports the use of the two separate multivariable models reported below.
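The zero-order correlations in Table 3 are ordinary Pearson's r values; when one variable is a 0/1 indicator, this is equivalent to the point-biserial correlation. A toy illustration with invented numbers (chosen so that respondents with convictions tend to score higher on the belief scale):

```python
import numpy as np

# Hypothetical belief-scale scores and a binary conviction indicator
scores    = np.array([20, 55, 34, 70, 18, 62, 41, 80])
convicted = np.array([ 0,  1,  0,  1,  0,  0,  0,  1])

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal entry is
# Pearson's r between the two variables (point-biserial here)
r = np.corrcoef(scores, convicted)[0, 1]
```

A positive r here corresponds to the reported pattern: higher intrinsic support for domestic violence goes with a higher likelihood of a conviction.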

Multivariable Models

Table 4 shows the results from logistic regression models estimating domestic violence convictions and self-reported domestic violence perpetration. The first model in Table 4 assessed the correlates of having a domestic violence conviction. Overall, the model fit the data well and explained about 14 percent of the variance in having a domestic violence conviction (Nagelkerke’s R 2  = 0.142). However, there was only one statistically significant predictor in that model, endorsement of domestic violence beliefs ( b  = 0.033, p  < 0.05). Results show that a one-unit increase in intrinsic support for domestic violence multiplied the odds of being convicted of a domestic violence offense by 1.033.

The second model in Table 4 depicts the results from the logistic regression model estimating self-reported domestic violence perpetration. Overall, the model fit the data well and explained nearly 55 percent of the variance in abusing an intimate partner (Nagelkerke’s R 2  = 0.548). Interestingly, endorsement of domestic violence beliefs was not a significant predictor in this model ( b  = 0.004, p  > 0.05). In fact, the only statistically significant predictor in that model was our measure of domestic violence victimization ( b  =  − 3.533, p  < 0.001). Results suggest that, controlling for all other relevant factors, victims of domestic violence were 34.48 times less likely to report abusing an intimate partner than were non-victims. It is important to note that none of the measures of exposure to pleasurable media violence were related to either of our measures of domestic violence perpetration [ 74 , 83 ].
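The odds-ratio interpretations in both models follow directly from exponentiating the logistic coefficients; small discrepancies from the figures reported above reflect rounding of the printed coefficients:

```python
import math

# Model 1: b = 0.033 for endorsement of domestic violence beliefs
or_beliefs = math.exp(0.033)   # ≈ 1.034: each additional scale point
                               # multiplies the conviction odds by about 1.03

# Model 2: b = -3.533 for domestic violence victimization
or_victim = math.exp(-3.533)   # ≈ 0.029: victims' odds of self-reported abuse
inverse   = 1 / or_victim      # ≈ 34: "about 34 times less likely"
```

Exponentiating a negative coefficient yields an odds ratio below 1, which is why the victimization effect is reported as a reduction in the odds of self-reported perpetration.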

Discussion

Much of the prior work assessing the relationship between exposure to violent media and crime perpetration has ignored the pleasure component of media exposure and failed to assess multiple forms of violent media simultaneously while controlling for the endorsement of criminogenic beliefs and other relevant factors (e.g., prior victimization). The current exploratory project sought to help fill these gaps in the literature. Specifically, the current project had three main goals: (1) to establish a psychometrically sound measure of intrinsic support for domestic violence, (2) to assess the relationship between intrinsic support for domestic violence and domestic violence perpetration, and (3) to analyze the relationship between pleasurable violent media exposure and two different measures of domestic violence perpetration (i.e., conviction and “self-report”) while controlling for appropriate covariates (e.g., prior victimization, endorsement of domestic violence, etc.). Our research, using data from a sample of convicted offenders ( N  = 148), yielded several key findings worth further consideration.

First, results from the Exploratory Factor Analysis showed that we were able to create a psychometrically sound measure of intrinsic support for domestic violence. We encourage other researchers to adopt this 18-item measure in future projects as both a predictor and an outcome measure. Future research should also explore how these beliefs develop. Perhaps more importantly, through our data analyses we were able to establish a relationship between intrinsic support for domestic violence and being convicted of a domestic violence offense. That is, our results show that offenders who hold beliefs favoring the emotional and physical abuse of an intimate partner are more likely to have been convicted of a domestic violence offense than those who do not hold such views. This finding suggests that, to help prevent domestic violence, researchers and practitioners need to develop strategies to avert, disrupt, or reverse the internalization of such beliefs. We suggest that targeting adolescents who are at risk of experiencing child abuse or witnessing abuse between their parents may help prevent such individuals from internalizing such beliefs and reduce the chances that they will grow up to perpetrate domestic violence, as prior research indicates that they are a high-risk group Footnote 5 [ 36 , 52 ]. For partners who have already engaged in violence toward one another, cognitive behavioral therapy programs, such as Behavioral Couples Therapy [ 53 , 54 , 62 ], are effective at changing domestic violence perceptions and reducing future violence [ 29 , 63 ].

Second, we did not find much support for a relationship between violent media exposure and domestic violence perpetration, calling the effects of media cultivation into question. At the bivariate level, pleasurable violent television exposure exhibited a small, positive effect on self-reported intimate partner abuse ( r  < 0.20) [ 9 ], suggesting that as one’s exposure to pleasurable violent television increases, so too does the likelihood of self-reporting intimate partner abuse. However, this relationship was reduced to insignificance in multivariable modeling controlling for age, gender, race, endorsement of domestic violence beliefs, and prior victimization. In fact, no measure of pleasurable violent media exposure was significantly related to domestic violence perpetration in multivariable modeling. Thus, the current study supports prior research indicating no relationship between media violence and violent crime perpetration [ 21 , 22 , 64 , 65 ], and suggests that other variables (i.e., endorsement of domestic violence beliefs and victimization), not violent media, drive individuals to commit violent crimes.

Third, our work highlights the important role prior victimization plays in criminal perpetration. Interestingly, at the bivariate level, domestic violence victimization at the hands of an intimate partner was unrelated to a domestic violence conviction, but significantly and positively related to admitting to abusing an intimate partner. In fact, the relationship between being a victim of domestic violence and admitting to abusing an intimate partner was very strong ( r  = 0.637) [ 9 ]. This finding suggests that individuals who have been victimized by an intimate partner are at an increased likelihood of abusing an intimate partner themselves. However, in multivariable modeling, this relationship switched directions, and prior victimization was found to be negatively related to self-reported domestic violence perpetration. In fact, with the addition of appropriate statistical controls, our findings suggest that those who had been abused by an intimate partner were more than 34 times less likely to report abusing an intimate partner. This finding is difficult to interpret because it opposes prior work indicating that victimization experiences, especially among the young [ 32 , 44 , 71 , 81 ], and witnessing domestic violence [ 1 , 17 , 44 , 49 , 61 , 68 , 73 , 79 ], can be positively related to perpetration. Initially, we speculated that the observed relationship resulted from controlling for the endorsement of domestic violence beliefs. However, the significant negative relationship between victimization and perpetration persisted in auxiliary analyses that removed the endorsement variable from statistical modeling. Thus, we offer two plausible explanations for the observed relationship.
First, this finding may reflect some form of empathy that serves as a protective factor against domestic violence perpetration, controlling for other relevant factors such as demographics, endorsement of domestic violence beliefs, and pleasurable violent media exposure. That is, victims of domestic violence understand the pain caused by intimate partner abuse and, to avoid inflicting such pain on their own partners, refrain from acting aggressively toward them. Second, this finding may simply be the result of sampling error. No relationship was found between domestic violence conviction and domestic violence victimization in statistical modeling. As such, the relationship between domestic violence victimization and self-reported perpetration could merely reflect the fact that the measure was self-reported; that is, victims of domestic violence may be less willing than non-victims to admit to perpetration. Future research should explore these findings in relation to these two hypotheses.

Limitations

There are several limitations to our study that warrant disclosure. First, the results reported above come from a small convenience sample of offenders incarcerated in New York and West Virginia. Thus, the findings from this exploratory study are not generalizable beyond these parameters. Second, the data had temporal ordering constraints: the dependent and independent variables were collected at the same time. Accordingly, our use of the term “predictor” in multivariable modeling is better read as “correlate.” Due to these temporal ordering issues, it is unknown whether individuals prone to violence seek out violent media, or whether violent media causes such individuals to become violent. Future research should employ probabilistic sampling techniques, collect data from more urban sites, and use longitudinal research designs. Third, our measures of violent media exposure were not ideal. Notably, while more robust than prior estimates, our measures captured general media violence across three different types of media (television, movies, and video games). Future researchers should examine the impact of specific types of violence depicted in media, such as domestic violence, on specific types of violent crimes.

Future work should also explore this relationship through theoretical lenses such as Cultivation Theory, the “mean world” hypothesis, and catharsis effects. Future work may also benefit from approaching this topic inductively, asking respondents to list the media they consume and then exploring the relationship between that consumption and various forms of crime. For instance, it may be prudent to explore the relationship between exposure to types of pornography and acceptance of domestic violence beliefs, and subsequently, perpetration rates. This could provide further evidence of a media cultivation or catharsis effect. Lastly, the survey questions used wording pertaining to “husband” and “wife,” thereby limiting the range of domestic violence captured. Future research should reword the survey to examine perceptions of domestic violence between intimate partners generally, not just between spouses.

The relationship between exposure to violent media and crime perpetration is complex. Results from the current study suggest that exposure to various forms of pleasurable violent media is unrelated to domestic violence perpetration. When considering domestic violence perpetration, prior victimization experience and endorsement of domestic violence beliefs appear to be significant correlates worthy of future exploration and policy development.

Media violence is defined as various forms of media (i.e., television, music, video games, movies, Internet), that contain or portray acts of violence [ 10 ].

Media aggression, for the purpose of this study, is defined as various forms of media that contain or portray acts of aggression. Aggression is defined as: “[1)] a forceful action or procedure (such as an unprovoked attack), especially when intended to dominate or master; [2)] the practice of making attacks or encroachments; [and 3)] hostile, injurious, or destructive behavior or outlook, especially when caused by frustration” [ 13 ].

A combined measure of childhood physical abuse victimization and witnessing violence between parents [ 49 ].

Four respondents did not provide their biological sex.

Programs effective at reducing the likelihood of violence include, but are not limited to [ 50 ], Safe Dates [ 27 ], The Fourth R: Strategies for Healthy Teen Relationships [ 81 ], Expect Respect Support Groups [ 58 ], Nurse Family Partnership [ 15 , 55 , 56 ], Child Parent Centers [ 59 , 60 ], Multidimensional Treatment Foster Care [ 16 , 24 , 30 ], Shifting Boundaries [ 72 ], and Multisystemic Therapy [ 66 , 80 ].

Aldarondo, E., & Sugarman, D. B. (1996). Risk marker analysis of the cessation and persistence of wife assault. Journal of Consulting and Clinical Psychology, 64 (5), 1010–1019.

Anderson, C. A., Berkowitz, L., Donnerstein, E., Huesmann, L. R., Johnson, J. D., Linz, D., Malamuth, N. M., & Wartella, E. (2003). The influence of media violence on youth. Psychological Science in the Public Interest, 4 (3), 81–110.

Black, M. C., Basile, K. C., Breiding, M. J., Smith, S. G., Walters, M. L., Merrick, M. T., Chen, J., & Stevens, M. R. (2011). The national intimate partner and sexual violence survey (NISVS): 2010 summary report . Atlanta, GA: National center for injury prevention and control, centers for disease control and prevention.

Browne, K. D. (2005). The influence of violent media on children and adolescents: A public health approach. The Lancet, 365 (9460), 702–710.

Brownridge, D. A., Chan, K. L., Hiebert-Murphy, D., Ristock, J., Tiwari, A., Leung, W. C., & Santos, S. C. (2008). The elevated risk for non-lethal post-separation violence in Canada: A comparison of separated, divorced, and married women. Journal of Interpersonal Violence, 23 (1), 117–135.

Bushman, B. J., & Anderson, C. A. (2001). Media violence and the American public: Scientific facts versus media misinformation. American Psychologist, 56 (6–7), 477–489.

Capaldi, D. M., & Clark, S. (1998). Prospective family predictors of aggression toward female partners for at-risk young men. Developmental Psychology, 34 (6), 1175–1188.

Capaldi, D. M., Dishion, T. J., Stoolmiller, M., & Yoerger, K. (2001). Aggression toward female partners by at-risk young men: The contribution of male adolescent friendships. Developmental Psychology, 37 (1), 61–73.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum.


Council on Communications and Media. (2009). Media violence. Pediatrics, 124 (5), 1495–1503. https://doi.org/10.1542/peds.2009-2146

Coyne, S. M., Nelson, D. A., Graham-Kevan, N., Keister, E., & Grant, D. M. (2010). Mean on the screen: Psychopathy, relationship aggression, and aggression in the media. Personality & Individual Differences, 48 (3), 288–293. https://doi.org/10.1016/j.paid.2009.10.018

Cundiff, G. (2013). The influence of rap/hip-hop music: A mixed-method analysis on audience perceptions of misogynistic lyrics and the issue of domestic violence. The Elon Journal of Undergraduate Research in Communications, 4 (1), 71–93.

DeBord, K. (n.d.). Childhood aggression: Where does it come from? How can it be managed? . Raleigh, North Carolina: College of Agriculture and Life Sciences, North Carolina State University, School of Agriculture and Environmental and Allied Sciences, North Carolina A&T State University. Retrieved from http://www.ces.ncsu.edu/depts/fcs/pdfs/fcs_504.pdf .

Duck, J. M., & Mullin, B.-A. (1995). The perceived impact of the mass media: Reconsidering the third person effect. European Journal of Social Psychology, 25 (1), 77–93.

Eckenrode, J., Campa, M., Luckey, D. W., Henderson, C. R., Jr., Cole, R., Kitzman, H., Anson, E., Sidora-Arcoleo, K., Powers, J., & Olds, D. L. (2010). Long-term effects of prenatal and infancy nurse home visitation on the life course of youths: 19-year follow-up of a randomized trial. Archives of Pediatric and Adolescent Medicine, 164 (1), 9–15.

Eddy, J. M., Whaley, R. B., & Chamberlain, P. (2004). The prevention of violent behavior by chronic and serious male juvenile offenders: A 2-year-follow-up of a randomized clinical trial. Journal of Emotional and Behavioral Disorders, 12 (1), 2–8.

Ehrensaft, M. K., Cohen, P., Brown, J., Smailes, E., Chen, H. N., & Johnson, J. G. (2003). Intergenerational transmission of partner violence: A 20-year prospective study. Journal of Consulting and Clinical Psychology, 71 (4), 741–753.

Evans, M. L., Lindauer, M., & Farrell, M. E. (2020). A pandemic within a pandemic – intimate partner violence during Covid-19. The New England Journal of Medicine, 383 , 2302–2304. https://doi.org/10.1056/NEJMp2024046

Felson, R. B., Messner, S. F., Hoskin, A. W., & Deane, G. (2002). Reasons for reporting and not reporting domestic violence to the police. Criminology, 40 (3), 617–648. https://doi.org/10.1111/j.1745-9125.2002.tb00968.x

Felson, R. B., & Paré, P. P. (2005). The reporting of domestic violence and sexual assault by nonstrangers to the police. Journal of Marriage and Family, 67 (3), 597–610.

Ferguson, C. J., Cruz, A. M., Martinez, D., Rueda, S. M., Ferguson, D. E., & Negy, C. (2008). Personality, parental, and media influences on aggressive personality and violent crime in young adults. Journal of Aggression, Maltreatment & Trauma, 17 (4), 395–414. https://doi.org/10.1080/10926770802471522

Ferguson, C. J., Rueda, S. M., Cruz, A. M., Ferguson, D. E., Fritz, S., & Smith, S. M. (2008). Violent video games and aggression: Causal relationship or byproduct of family violence and intrinsic violence motivation? Criminal Justice and Behavior, 35 (3), 311–332. https://doi.org/10.1177/0093854807311719

Fischer, P., & Greitemeyer, T. (2006). Music and aggression: The impact of sexual aggressive song lyrics on aggression-related thoughts, emotions, and behavior toward the same and opposite sex. Personality and Social Psychology Bulletin, 32 (9), 1165–1176. https://doi.org/10.1177/0146167206288670

Fischer, P. A., & Gilliam, K. S. (2012). Multidimensional treatment foster care: An alternative to residential treatment for high risk children and adolescents. Psychosocial Intervention, 21 (2), 195–203.

Flood, M. (2008). Measures for the assessment of dimensions of violence against women: A compendium . Retrieved from https://www.svri.org/sites/default/files/attachments/2016-01-12/measures.pdf .

Florea, M. (2013). Media violence and the cathartic effect. Procedia – Social and Behavioral Sciences, 92 , 349–353. https://doi.org/10.1016/j.sbspro.2013.08.683

Foshee, V. A., Reyes, L. M., Agnew-Brune, C. B., Simon, T. R., Vagi, K. J., Lee, R. D., & Suchindran, C. (2014). The effects of the evidence-based safe dates dating abuse prevention program on other youth violence outcomes. Prevention Science, 15 (6), 907–916.

Gerbner, G., Gross, L., Morgan, M., & Signorielli, N. (1994). Growing up with television: The cultivation perspective. In J. Bryant & D. Zillman (Eds.), Media effects: Advances in theory and research. Lawrence Erlbaum Associates Inc.

Gondolf, E. W. (1997). Evaluating batterer counseling programs: A difficult task showing some effects and implications. Aggression & Violent Behavior, 9 , 605–631.

Hahn, R. A., Bilukha, O., Lowry, J., Crosby, A. E., Fullilove, M. T., Liberman, A., Moscicki, E., Snyder, S., Tuma, F., Corso, P., & Schofield, A. (2005). The effectiveness of therapeutic foster care for the prevention of violence: A systematic review. American Journal of Preventive Medicine, 28 (1), 72–90.

Hamel, J. (2005). Gender-inclusive treatment of intimate partner abuse: A comprehensive approach . Springer Publishing Company.

Herrenkohl, T. I., Mason, W. A., Kosterman, R., Lengua, L. J., Hawkins, J. D., & Abbott, R. D. (2004). Pathways from physical childhood abuse to partner violence in young adulthood. Violence and Victims, 19 (2), 123–136.

Hoffner, C., & Buchana, M. (2002). Parents’ responses to television violence: The third-person perception, parental mediation, and support for censorship. Media Psychology, 4 (3), 231–252.

Huecker, M. R., King, K. C., Jordan, G. A., & Smock, W. (2021). Domestic violence. Treasure Island, FL: StatPearls [Internet] Publishing.

Huesmann, L. R., Moise-Titus, J., Podolski, C. L., & Eron, L. D. (2003). Longitudinal relations between children’s exposure to TV violence and their aggressive and violent behavior in young adulthood: 1977–1992. Developmental Psychology, 39 (2), 201–221. https://doi.org/10.1037/0012-1649.39.2.201

Jennings, W. G., Okeem, C., Piquero, A. R., Sellers, C. S., Theobald, D., & Farrington, D. P. (2017). Dating and intimate partner violence among young persons ages 15–30: Evidence from a systematic review. Aggression and Violent Behavior, 33, 107–125.

Johnson, H. (2001). Contrasting views of the role of alcohol in cases of wife assault. Journal of Interpersonal Violence, 16 (1), 54–72.

Kahlor, L. A., & Eastin, M. S. (2011). Television’s role in the culture of violence toward women: A study of television viewing and the cultivation of rape myth acceptance in the United States. Journal of Broadcasting & Electronic Media, 55 (2), 215–231. https://doi.org/10.1080/08838151.2011.56608

Kerig, P. K. (2010). Adolescent dating violence in context: Introduction and overview. Journal of Aggression, Maltreatment & Trauma, 19 , 465–468. https://doi.org/10.1080/10926771.2010.495033

Kilbourne, J. (1999). Can’t buy my love: How advertising changes the way we think and feel . Simon and Schuster.

Kotrla, B. (2007). Sex and violence: Is exposure to media content harmful to children? Children & Libraries: The Journal of the Association for Library Service to Children, 5 (2), 50–52.

Kurst-Swanger, K., & Petcosky, J. L. (2003). Violence in the home: Multidisciplinary perspectives . Oxford University Press.


Lanis, K., & Covell, K. (1995). Images of women in advertisements: Effects on attitudes related to sexual aggression. Sex Roles, 32 (9/10), 639–649.

Linder, J. R., & Collins, W. A. (2005). Parent and peer predictors of physical aggression and conflict management in romantic relationships in early adulthood. Journal of Family Psychology, 19 (2), 252–262.

Lussier, P., Farrington, D. P., & Moffitt, T. E. (2009). Is the antisocial child father of the abusive man? A 40-year prospective longitudinal study on the developmental antecedents of intimate partner violence. Criminology, 47 (3), 741–780.

MacKay, N. J., & Covell, K. (1997). The impact of women in advertisements on attitudes toward women. Sex Roles, 36 (9/10), 573–583.

Malamuth, N. M., & Check, J. V. P. (1981). The effects of mass media exposure on acceptance of violence against women: A field experiment. Journal of Research in Personality, 15 (4), 436–446.

Markowitz, F. E. (2001). Attitudes and family violence: Linking intergenerational and cultural theories. Journal of Family Violence, 16 (2), 205–218.

McKinney, C. M., Caetano, R., Ramisetty-Mikler, S., & Nelson, S. (2009). Childhood family violence and perpetration and victimization of intimate partner violence: Findings from a national population-based study of couples. Annals of Epidemiology, 19 (1), 25–32.

McQuail, D. (1979). The influence and effects of mass media. In J. Curran, M. Gurevitch, & J. Woollacott (Eds.), Mass communication and society. Sage Publications Inc.

National Coalition against Domestic Violence (NCADV). (2020). Domestic violence . Retrieved from https://assets.speakcdn.com/assets/2497/domestic_violence2.pdf .

Niolon, P. H., Kearns, M., Dills, J., Rambo, K., Irving, S., Armstead, T. L., & Gilbert, L. (2017). Preventing intimate partner violence across the lifespan: A technical package of programs, policies, and practices. Atlanta, GA: Division of Violence Prevention, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention.

O’Farrell, T. J., Fals-Stewart, W., Murphy, M., & Murphy, C. M. (2003). Partner violence before and after individually based alcoholism treatment for male alcoholic patients. Journal of Consulting and Clinical Psychology, 71 (1), 92–102.

O’Farrell, T. J., Murphy, C. M., Stephan, S. F., Fals-Stewart, W., & Murphy, M. (2004). Partner violence before and after couples-based alcoholism treatment for male alcoholic patients: The role of treatment involvement and abstinence. Journal of Consulting and Clinical Psychology, 72 (2), 202–217.

Olds, D. L., Eckenrode, J., Henderson, C. R., Kitzman, H., Powers, J., Cole, R., Sidora, K., Morris, P., Pettit, L. M., & Luckey, D. (1997). Long-term effects of home visitation on maternal life course and child abuse and neglect: Fifteen-year follow-up of a randomized trial. Journal of the American Medical Association, 278 (8), 637–643.

Olds, D. L., Henderson, C. R., Cole, R., Eckenrode, J., Kitzman, H., Luckey, D., Pettit, L., Sidora, K., Morris, P., & Powers, J. (1998). Long-term effects of Nurse Home Visitation on children’s criminal and antisocial behavior: 15-year follow-up of a randomized controlled trial. Journal of the American Medical Association, 280 (14), 1238–1244.

Reaves, B. A. (2017). Police response to domestic violence, 2006–2015 . Washington, D.C.: U.S. Department of Justice, Office of Justice Programs, Bureau of Justice Statistics.

Reidy, D. E., Holland, K. M., Cortina, K., Ball, B., & Rosenbush, B. (2017). Expect respect support groups: A dating violence prevention program for high-risk youth. Preventive Medicine, 100 , 235–242.

Reynolds, A. J., Temple, J. A., Ou, S. R., Robertson, D. L., Mersky, J. P., Topitzes, J. W., & Niles, M. D. (2007). Effects of a school-based early childhood intervention on adult health and well-being: A 19-year-follow-up of low-income families. Archives of Pediatrics and Adolescent Medicine, 161 (8), 730–739.

Reynolds, A. J., Temple, J. A., Robertson, D. L., & Mann, E. A. (2001). Long-term effects of an early childhood intervention on educational achievement and juvenile arrest: A 15-year follow-up of low-income children in public schools. Journal of the American Medical Association, 285 (18), 2339–2346.

Roberts, A. L., Gilman, S. E., Fitzmaurice, G., Decker, M. R., & Koenen, K. C. (2010). Witness of intimate partner violence in childhood and perpetration of intimate partner violence in adulthood. Epidemiology, 21 (6), 809–818.

Ruff, S., McComb, J. L., Coker, C. J., & Sprenkle, D. H. (2010). Behavioral couples therapy for the treatment of substance abuse: A substantive and methodological review of O’Farrell, Fals-Stewart, and colleagues’ program of research. Family Process, 49 (4), 439–456.

Saunders, D. G. (2008). Group interventions for men who batter: A summary of program descriptions and research. Violence and Victims, 23 (2), 156–172.

Savage, J. (2004). Does viewing violent media really cause criminal violence? A methodological review. Aggression and Violent Behavior, 10 (1), 99–128.

Savage, J., & Yancey, C. (2008). The effects of media violence exposure on criminal aggression: A meta-analysis. Criminal Justice and Behavior, 35 (6), 772–791. https://doi.org/10.1177/0093854808316487

Sawyer, A. M., & Borduin, C. M. (2011). Effects of multisystemic therapy through midlife: A 21.9-year-follow-up to a randomized clinical trial with serious violent juvenile offenders. Journal of Consulting and Clinical Psychology, 79 (5), 643–652.

Simpson Beck, V., Boys, S., Rose, C., & Beck, E. (2012). Violence against women in video games: A prequel or sequel to rape myth acceptance? Journal of Interpersonal Violence, 27 (15), 3016–3031. https://doi.org/10.1177/0886260512441078

Smith, S. G., Chen, J., Basile, K. C., Gilbert, L. K., Merrick, M. T., Patel, N., Walling, M., & Jain, A. (2017). The National Intimate Partner and Sexual Violence Survey (NISVS): 2010–2012 state report. Atlanta, GA: National Center for Injury Prevention and Control, Centers for Disease Control and Prevention.

Stankiewicz, J. M., & Rosselli, F. (2008). Women as sex objects and victims in print. Sex Roles, 58 , 579–589.

Sugarman, D. B., Aldarondo, E., & Boney-McCoy, S. (1996). Risk marker analysis of husband-to-wife violence: A continuum of aggression. Journal of Applied Social Psychology, 26 (4), 313–337.

Swinford, S. P., DeMaris, A., Cernkovich, S. A., & Giordano, P. C. (2000). Harsh physical discipline in childhood and violence in later romantic involvements: The mediating role of problem behaviors. Journal of Marriage and the Family, 62 (2), 508–519.

Taylor, B. G., Stein, N. D., Mumford, E. A., & Woods, D. (2013). Shifting boundaries: An experimental evaluation of a dating violence prevention program in middle schools. Prevention Science, 14 (1), 64–76.

Temple, J. R., Shorey, R. C., Tortolero, S. R., Wolfe, D. A., & Stuart, G. L. (2013). Importance of gender and attitudes about violence in the relationship between exposure to interparental violence and the perpetration of teen dating violence. Child Abuse & Neglect, 37 (5), 343–352.

The National Domestic Violence Hotline. (2013). Statistics . Retrieved from http://www.thehotline.org/is-this-abuse/statistics/ .

The United States Department of Justice. (2013). Domestic violence. Retrieved from http://www.ovw.usdoj.gov/domviolence.htm .

Tiesman, H. M., Gurka, K. K., Konda, S., Coben, J. H., & Amandus, H. E. (2012). Workplace homicides among U.S. women: The role of intimate partner violence. Annals of Epidemiology, 22 (4), 277–284.

Truman, J. L., & Morgan, R. E. (2014). Nonfatal domestic violence, 2003–2012. Washington, D.C.: U.S. Department of Justice, Office of Justice Programs, Bureau of Justice Statistics.

United Nations Commission on the Status of Women. (1996). Elimination of stereotyping in the mass media: Report of the Secretary General. Retrieved from http://www.un.org/documents/ecosoc/cn6/1996/ecn61996-4.htm

Vagi, K. J., Rothman, E. F., Latzman, N. E., Tharp, A. T., Hall, D. M., & Breiding, M. J. (2013). Beyond correlates: A review of risk and protective factors for adolescent dating violence perpetration. Journal of Youth and Adolescence, 42 (4), 633–649.

Wagner, D. V., Borduin, C. M., Sawyer, A. M., & Dopp, A. R. (2014). Long-term prevention of criminality in siblings of serious and violent juvenile offenders: A 25-year follow-up to a randomized clinical trial of Multisystemic Therapy. Journal of Consulting and Clinical Psychology, 82 (3), 492–499.

White, H. R., & Widom, C. D. (2003). Intimate partner violence among abused and neglected children in young adulthood: The mediating effects of early aggression, antisocial personality, hostility and alcohol problems. Aggressive Behavior, 29 , 332–345.

Williams, D. (2006). Virtual cultivation: Online worlds, offline perceptions. Journal of Communication, 56 (1), 69–87. https://doi.org/10.1111/j.1460-2466.2006.00004.x

Wolfe, D. A., Crooks, C., Jaffe, P., Chiodo, D., Hughes, R., Ellis, W., Stitt, L., & Donner, A. (2009). A school-based program to prevent adolescent dating violence: A cluster randomized trial. Archives of Pediatrics & Adolescent Medicine, 163 (8), 692–699.

Women’s Center & Shelter of Greater Pittsburgh. (2009). Domestic violence. Retrieved from http://www.ecspittsburgh.org/page.aspx?pid=354

Wood, W., Wong, F. Y., & Chachere, J. G. (1991). Effects of media violence on viewers’ aggression in unconstrained social interaction. Psychological Bulletin, 109 (3), 371–383.


This project received no funding for any element of the project, including study design, data collection, data analysis, or manuscript preparation.

Author information

Authors and affiliations.

Department of Sociology and Criminology, St. Bonaventure University, 3261 West State Street, Plassmann Room A1, St. Bonaventure, NY, 14778, USA

Samantha M. Gavin

Department of Criminal Justice, Penn State Altoona, 3000 Ivyside Park, Cypress Building, Room 101E, Altoona, PA, 16601, USA

Nathan E. Kruis


Corresponding author

Correspondence to Nathan E. Kruis .

Ethics declarations

Conflict of interest.

The authors declare that they have no conflict of interest.

Ethical approval

All research was conducted within the framework of the first author’s Institutional Review Board.

Human and animal participants

The study was approved by the institutional review board at West Virginia Wesleyan College. The study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

Consent to participate

All participants were given and signed written informed consent documents prior to submitting data used in this study. They agreed to have their data collected and findings from it published.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article

Gavin, S.M., Kruis, N.E. The Influence of Media Violence on Intimate Partner Violence Perpetration: An Examination of Inmates’ Domestic Violence Convictions and Self-Reported Perpetration. Gend. Issues 39 , 177–197 (2022). https://doi.org/10.1007/s12147-021-09284-5


Accepted: 10 June 2021

Published: 20 June 2021

Issue Date: June 2022


  • Intimate partner violence
  • Media violence
  • Domestic violence
  • Partner abuse

The Impact of Media Violence on Child and Adolescent Aggression

  • August 2023
  • Journal of Education Humanities and Social Sciences 18:70-76
  • CC BY-NC 4.0

FactCheck.org

The Facts on Media Violence

By Vanessa Schipani

Posted on March 8, 2018

In the wake of the Florida school shooting, politicians have raised concern over the influence of violent video games and films on young people, with the president claiming they’re “shaping young people’s thoughts.” Scientists still debate the issue, but the majority of studies show that extensive exposure to media violence is a risk factor for aggressive thoughts, feelings and behaviors.


The link between media violence and mass shootings is yet more tenuous. Compared with acts of aggression and violence, mass shootings are relatively rare events, which makes conducting conclusive research on them difficult.

President Donald Trump first raised the issue during a meeting on school safety with local and state officials, which took place a week after the shooting at Marjory Stoneman Douglas High School in Parkland, Florida. The shooter, 19-year-old Nikolas Cruz, reportedly played violent video games obsessively.

Trump, Feb. 22: We have to look at the Internet because a lot of bad things are happening to young kids and young minds, and their minds are being formed. And we have to do something about maybe what they’re seeing and how they’re seeing it. And also video games. I’m hearing more and more people say the level of violence on video games is really shaping young people’s thoughts. And then you go the further step, and that’s the movies. You see these movies, they’re so violent.

Trump discussed the issue again with members of Congress on Feb. 28 during another meeting on school safety. During that discussion, Tennessee Rep. Marsha Blackburn claimed mothers have told her they’re “very concerned” that “exposure” to entertainment media has “desensitized” children to violence.

Iowa Sen. Chuck Grassley also said during the meeting: “[Y]ou see all these films about everybody being blown up. Well, just think of the impact that makes on young people.”

The points Trump and members of Congress raise aren’t unfounded, but the research on the subject is complex. Scientists who study the effect of media violence have taken issue with how the popular press has portrayed their work, arguing that the nuance of their research is often left out.

In a 2015 review of the scientific literature on video game violence, the American Psychological Association elaborates on this point.

APA, 2015: News commentators often turn to violent video game use as a potential causal contributor to acts of mass homicide. The media point to perpetrators’ gaming habits as either a reason they have chosen to commit their crimes or as a method of training. This practice extends at least as far back as the Columbine massacre (1999). … As with most areas of science, the picture presented by this research is more complex than is usually depicted in news coverage and other information prepared for the general public.

Here, we break down the facts — nuance included — on the effect of media violence on young people.

Is Media Violence a Risk Factor for Aggression?

The 2015 report by the APA on video games is a good place to start. After systematically going through the scientific literature, the report’s authors “concluded that violent video game use has an effect on aggression.”

In particular, the authors explain that this effect manifests as an increase in aggressive behaviors, thoughts and feelings and a decrease in helping others, empathy and sensitivity to aggression. Though limited, evidence also suggests that “higher amounts of exposure” to video games are linked to “higher levels of aggression,” the report said.

The report emphasized that “aggression is a complex behavior” caused by multiple factors, each of which increases the likelihood that an individual will be aggressive. “Children who experience multiple risk factors are more likely to engage in aggression,” the report said.

The authors came to their conclusions because researchers have consistently found the effect across three different kinds of studies: cross-sectional studies, longitudinal studies and laboratory experiments. “One method’s limits are offset by another method’s strengths,” the APA report explains, so only together can they be used to infer a causal relationship.

Cross-sectional studies find correlations between different phenomena at one point in time. They’re relatively easy to conduct, but they can’t provide causal evidence because correlations can be spurious . For example, an increase in video game sales might correlate with a decrease in violent crime, but that doesn’t necessarily mean video games prevent violent crime. Other unknown factors might also be at play.
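The spurious-correlation pitfall described above can be demonstrated with a small simulation. This is entirely synthetic data; the variable names are illustrative and not drawn from any study discussed here. A hidden confounder drives both variables, producing a strong cross-sectional correlation with no causal link between them.

```python
import random

random.seed(0)

# Synthetic confounder: some unobserved factor that pushes "game sales" up
# and "violent crime" down at the same time.
n = 200
confounder = [random.gauss(0, 1) for _ in range(n)]
game_sales = [c + random.gauss(0, 0.5) for c in confounder]      # rises with confounder
violent_crime = [-c + random.gauss(0, 0.5) for c in confounder]  # falls with confounder

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(game_sales, violent_crime)
print(f"cross-sectional correlation: {r:.2f}")  # strongly negative, yet neither causes the other
```

The correlation comes out strongly negative even though, by construction, neither variable influences the other; this is exactly why a one-shot correlation cannot establish causation.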

Longitudinal panel studies collect data on the same group over time, sometimes for decades. They’re used to investigate long-term effects, such as whether playing video games as a child might correlate with aggression as an adult. These studies also measure other risk factors for aggression, such as harsh discipline from parents, with the aim of singling out the effect of media violence. For this reason, these studies provide better evidence for causality than cross-sectional studies, but they are more difficult to conduct.
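The logic of "singling out" the media effect by also measuring other risk factors can be sketched the same way. In this synthetic example (again, the variable names and coefficients are illustrative assumptions, not results from any cited study), a shared risk factor produces a raw correlation between media exposure and adult aggression that vanishes once that factor is controlled for:

```python
import random

random.seed(1)

# Synthetic setup: "harsh discipline" raises BOTH childhood media exposure
# and later aggression, while media exposure itself contributes nothing.
n = 1000
discipline = [random.gauss(0, 1) for _ in range(n)]
media_exposure = [d + random.gauss(0, 1) for d in discipline]
adult_aggression = [0.8 * d + random.gauss(0, 1) for d in discipline]  # true media effect: zero

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def residuals(ys, xs):
    """Residuals of a simple least-squares regression of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]

raw = pearson_r(media_exposure, adult_aggression)
# Partial correlation: correlate what's left after removing the risk factor.
partial = pearson_r(residuals(media_exposure, discipline),
                    residuals(adult_aggression, discipline))
print(f"raw correlation:          {raw:.2f}")   # looks like a media effect
print(f"controlling for discipline: {partial:.2f}")  # near zero
```

The raw correlation looks like a media effect, but the partial correlation is near zero, which mirrors how longitudinal studies use measured covariates to separate a genuine media effect from shared risk factors.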

Laboratory experiments manipulate one phenomenon — in this case, exposure to media violence — and keep all others constant. Because of their controlled environment, experiments provide strong evidence for a causal effect. But for the same reason, laboratory studies may not accurately reflect how people act in the real world.

This brings us to why debate still exists among scientists studying media violence. Some researchers have found that the experimental evidence backing the causal relationship between playing video games and aggression might not be as solid as it seems.

Last July, Joseph Hilgard, an assistant professor of psychology at Illinois State University, and others published a study in the journal Psychological Bulletin that found that laboratory experiments on the topic may be subject to publication bias. This means that studies that show the effect may be more likely to be published than those that don’t, skewing the body of evidence.

After Hilgard corrected for this bias, the effect of violent video games on aggressive behavior and emotions did still exist, but it was reduced, perhaps even to near zero. However, the effect on aggressive thoughts remained relatively unaffected by this publication bias. The researchers also found that cross-sectional studies weren’t subject to publication bias. They didn’t examine longitudinal studies, which have shown that youth who play more violent video games are more likely to report aggressive behavior over time.
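Why publication bias can inflate, or even manufacture, an average effect is easy to see in a toy simulation. This is purely illustrative and not a reanalysis of Hilgard's data: many small "experiments" each estimate a true effect of zero, but only results that come out positive and statistically significant get "published."

```python
import random
import statistics

random.seed(2)

true_effect = 0.0          # by construction, there is no real effect
n_studies, n_per_study = 500, 25

all_means, published_means = [], []
for _ in range(n_studies):
    sample = [random.gauss(true_effect, 1) for _ in range(n_per_study)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n_per_study ** 0.5
    all_means.append(mean)
    if mean / se > 1.96:   # the publication filter: positive and "significant" only
        published_means.append(mean)

print(f"average effect, all studies:    {statistics.mean(all_means):+.3f}")
print(f"average effect, published only: {statistics.mean(published_means):+.3f}")
```

Averaging every study recovers the true effect of roughly zero, while the "published" subset shows a clearly positive average, which is why meta-analysts apply corrections of the kind Hilgard used before trusting a pooled estimate.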

Hilgard looked at a 2010 literature review by Craig A. Anderson, the director of the Center for the Study of Violence at Iowa State University, and others. Published in Psychological Bulletin, this review influenced the APA’s report.

In response, Anderson took a second look at his review and found that the effect of violent video games on aggression was smaller than he originally thought, but not as small as Hilgard found. For this reason, he argued the effect was still a “societal concern.”

To be clear, Hilgard is arguing that there’s more uncertainty in the field than originally thought, not that video games have no effect on aggression. He’s also not the first to find that research on video games may be suffering from publication bias.

But what about movies and television? Reviews of the literature on these forms of media tend to be less recent, Kenneth A. Dodge, a professor of psychology and neuroscience at Duke University, told us by email.

Dodge, also one of the authors of the 2015 APA study, pointed us to one 1994 review of the literature on television published in the journal Communication Research that concluded that television violence also “increases aggressiveness and antisocial behavior.” Dodge told us he’s “confident” the effect this analysis and others found “would hold again today.”

Dodge also pointed us to a 2006 study that reviewed the literature on violent video games, films, television and other media together. “Most contemporary studies start with the premise that children are exposed [to violence] through so many diverse media that they start to group them together,” said Dodge.

Published in JAMA Pediatrics, the review found that exposure to violent media increases the likelihood of aggressive behavior, thoughts and feelings. The review also found that media exposure decreases the likelihood of helping behavior. All of these effects were “modest,” the researchers concluded.

Overall, most of the research suggests media violence is a risk factor for aggression, but some experts in the field still question whether there’s enough evidence to conclusively say there’s a link.

Is Violent Media a Risk Factor for Violence?

There’s even less evidence to suggest media violence is a risk factor for criminal violence.

“In psychological research, aggression is usually conceptualized as behavior that is intended to harm another,” while “[v]iolence can be defined as an extreme form of physical aggression,” the 2015 APA report explains. “Thus, all violence is aggression, but not all aggression is violence.”

The APA report said studies have been conducted on media violence’s relationship with “criminal violence,” but the authors “did not find enough evidence of sufficient utility to evaluate whether” there’s a solid link to violent video game use.

This lack of evidence is due, in part, to the fact that there are ethical limitations to conducting experiments on violence in the laboratory, especially when it comes to children and teens, the report explains. That leaves only evidence from cross-sectional studies and longitudinal studies. So what do those studies say?

One longitudinal study, published in the journal Developmental Psychology in 2003, found that, out of 153 males, those who watched the most violent television as children were more likely 15 years later “to have pushed, grabbed, or shoved their spouses, to have responded to an insult by shoving a person,” or “to have been convicted of a crime” during the previous year. Girls who watched the most violent television were also more likely to commit similar acts as young women. These effects persisted after controlling for other risk factors for aggression, such as parental aggression and intellectual ability.

A 2012 cross-sectional study that Anderson, at Iowa State, and others published in the journal Youth Violence and Juvenile Justice did find that the amount of violent video games juvenile delinquents played correlated with how many violent acts they had committed over the past year. The violent acts included gang fighting, hitting a teacher, hitting a parent, hitting other students and attacking another person.

However, a 2008 review of the literature published in the journal Criminal Justice and Behavior concluded that “the effects of exposure to media violence on criminally violent behavior have not been established.” But the authors clarify: “Saying that the effect has not been established is not the same as saying that the effect does not exist.”

In contrast to the APA report, Anderson and a colleague argue in a 2015 article published in American Behavioral Scientist  that “research shows that media violence is a causal risk factor not only for mild forms of aggression but also for more serious forms of aggression, including violent criminal behavior.”

Why did Anderson and his colleagues come to different conclusions than the APA? He told us that the APA “did not include the research literature on TV violence,” and excluded “several important studies on video game effects on violent behavior published since 2013.”

In their 2015 article, Anderson and his colleague clarify that, even if there is a link, it “does not mean that violent media exposure by itself will turn a normal child or adolescent who has few or no other risk factors into a violent criminal or a school shooter.” They add, “Such extreme violence is rare, and tends to occur only when multiple risk factors converge in time, space, and within an individual.”

Multiple experts we spoke with did point to one factor unique to the United States that they argue increases the risk of mass shootings and lethality of violence in general — access to guns.

For example, Anderson told us by email: “There is a pretty strong consensus among violence researchers in psychology and criminology that the main reason that U.S. homicide rates are so much higher than in most Western democracies is our easy access to guns.”

Dodge, at Duke, echoed Anderson’s point. “The single most obvious and probably largest difference between a country like the US that has many mass shootings and other developed countries is the easy access to guns,” he said.

So while scientists disagree about how much evidence is needed to support a causal link between media violence and real-world violence, Trump and other politicians’ concerns aren’t unfounded.

Editor’s note: FactCheck.org is also based at the University of Pennsylvania’s Annenberg Public Policy Center. Hilgard, now at Illinois State, was a postdoctoral fellow at the APPC.

FactCheck.org

National Coalition Against Censorship

Fact Sheet on Media Violence

This should not be surprising: media violence is so pervasive in our lives, and comes in so many different contexts and styles, that it is impossible to make accurate generalizations about its real-world effects based on experiments in a laboratory, or on studies that simply find statistical correlations between media-viewing and aggressive behavior.

Of course, the First Amendment would be a significant barrier to censoring violent images and ideas even if social science had in fact produced statistical evidence of adverse effects. But it is important for the ongoing debate on this issue that the real facts about media violence studies are understood.

• No one seriously doubts that the mass media have profound effects on our attitudes and behavior. But the effects vary tremendously, depending on the different ways that media content is presented, and the personality, background, intelligence, and life experience of the viewer.

• Although many people believe that media violence causes aggression, it’s doubtful that this can ever be proved by the methods of social science. For one thing, violent images and ideas come in too many different styles and contexts for researchers to be able to make meaningful generalizations about effects.1

• Somewhere between 200 and 300 laboratory experiments, field studies, and correlational studies have been done on media violence (not thousands, as some activists have claimed), and their results are dubious and inconsistent. In some cases, experimenters have manipulated disappointing results until they came up with at least one positive finding; then proclaimed that the experiment supported their hypothesis that media violence causes aggression. Some experiments have found more aggressive behavior after viewing nonviolent shows like Sesame Street and Mr. Rogers’ Neighborhood.2

• Professor Jonathan Freedman of the University of Toronto, an independent expert who reviewed the media violence literature in the 1980s, concluded that the research did not “provide either strong or consistent support for the hypothesis that exposure to media violence causes aggression or crime. Rather, the results have been extremely inconsistent and weak.”3 Updating his research in 2002, Freedman reported that fewer than half the studies support a causal effect.4

• For the minority of experiments that have yielded positive results, the explanation probably has more to do with the general arousal effect of violent entertainment than with viewers actually imitating violent acts. Laboratory experiments, moreover, do not measure real aggression but other behaviors that the researchers consider “proxies” for real aggression – popping balloons, giving noise blasts, hitting Bobo dolls, or other forms of aggressive play.5

• Laboratory experiments also suffer from the “experimenter demand effect” – subjects responding to what they think the researcher wants. They know that behavior is permitted in the lab that would be unacceptable in the real world.6

• Because of the weakness of laboratory experiments in predicting behavior, psychologists have undertaken “field experiments” that more accurately replicate the real world. Freedman reported that the overwhelming majority of field experiments found no adverse effects on behavior from exposure to media violence.7

• Some correlational studies show a “link” or “association” between the subjects’ amount of violent TV viewing and real-world aggressive behavior. But a link or association does not establish causation. It is likely that a combination of factors (level of intelligence, education, social background and attitudes, genetic predisposition, and economic status) accounts for both the entertainment preferences and the behavior.8

• Some correlational studies do not even focus on violent TV but simply examine overall amount of television viewing. This reinforces the probability that people whose cultural and activity choices are limited and who thus watch excessive amounts of TV also may have a more limited range of responses to conflict situations.9

• Violence has been a subject in literature and the arts since the beginning of human civilization. In part, this simply reflects the unfortunate realities of the world. But it’s also likely that our fascination with violence satisfies some basic human needs. The adrenaline rush and the satisfactions of imagination, fantasy, and vicarious adventure probably explain why millions of nonviolent people enjoy violent entertainment.10

• Because the mass media present violence in so many different ways (news, sports, action movies, cartoons, horror movies, documentaries, war stories with pacifist themes), it is particularly difficult to generalize about its impact. Even social scientists who believe that violent entertainment has adverse effects don’t agree on what kinds of violent images or ideas are harmful. Some point to cartoons; others point to movies in which a violent hero is rewarded; others fault the gory focus of television news.

• Every federal appellate court that has addressed the issue has rejected the claim that social science research shows adverse effects from violent content in entertainment. In a June 2011 decision striking down a law that restricted minors’ access to violent video games, the Supreme Court noted that research studies “do not prove that violent video games cause minors to act aggressively (which would at least be a beginning).” Instead, “nearly all of the research is based on correlation, not evidence of causation, and most of the studies suffer from significant, admitted flaws in methodology. … They show at best some correlation between exposure to violent entertainment and minuscule real-world effects, such as children’s feeling more aggressive or making louder noises in the few minutes after playing a violent game than after playing a nonviolent game.”10a

• There have been instances where criminals or others engaged in violent behavior have imitated specific aspects of a violent movie or TV show. But the fact that millions of other viewers have not engaged in imitation suggests that predisposition is the important factor, and that if the bad actors had not seen that particular movie or show, they would have imitated something else. It is impossible to predict which episodes or descriptions will be imitated by unstable individuals, and equally impossible to ban every book, movie, magazine article, song, game, or other cultural product that somebody might imitate.11

• There is much that is pernicious, banal, and crude in popular culture — not all of it violent. The best ways to address concerns about bad media messages of all types are media literacy education, prompt attention to danger signs for violent behavior in schools, workplaces, and other venues, and increased funding for creative, educational, nonviolent TV programming.12
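The point above about correlational studies and confounding factors can be illustrated with a toy simulation. This is a sketch with made-up numbers, not data from any study: a single background trait (here called "risk") drives both violent-TV viewing and aggressive behavior, so the two are correlated even though neither causes the other.

```python
import random

random.seed(42)

# Hypothetical data: "risk" is a confounder that influences BOTH
# viewing and aggression; there is no causal path between the two.
n = 10_000
risk = [random.gauss(0, 1) for _ in range(n)]
viewing = [r + random.gauss(0, 1) for r in risk]     # viewing <- risk + noise
aggression = [r + random.gauss(0, 1) for r in risk]  # aggression <- risk + noise

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# The observed correlation is close to 0.5 despite zero causation
# between viewing and aggression in the generating process.
print(round(corr(viewing, aggression), 2))
```

A study that measured only `viewing` and `aggression` would find a robust "link"; only by also measuring the background trait could the confound be detected, which is the fact sheet's methodological point.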

*This article previously appeared on the Free Expression Policy Project, which existed from 2000-2017.

1. See,  e.g.,  Jeffrey Goldstein, ed.  Why We Watch: The Attractions of Violent Entertainment  (1998); Henry Jenkins, “Lessons From Littleton: What Congress Doesn’t Want to Hear About Youth and Media.”  Harper’s,  August 1999; National Research Council, National Academy of Sciences, Understanding and Preventing Violence (A. Reiss, Jr. & J. Roth, eds.) (1993), pp. 101-02; Marjorie Heins,  Not in Front of the Children: “Indecency,” Censorship, and the Innocence of Youth  (2001), pp. 228-53.

2. See Jib Fowles,  The Case for Television Violence  (1999); Richard Rhodes, “The Media-Violence Myth,”  Rolling Stone , Nov. 23, 2000, p. 55; Jonathan Freedman,  Media Violence and Its Effect on Aggression: Assessing the Scientific Evidence  (2002). For studies examining the effect of  Sesame Street  and  Mr. Rogers’ Neighborhood , see Kenneth Gadow & Joyce Sprafkin, “Field Experiments of Television Violence with Children: Evidence for an Environmental Hazard?” 83(3)  Pediatrics  399 (1989); Joyce Sprafkin, Kenneth Gadow & Patricia Grayson, “Effects of Viewing Aggressive Cartoons on the Behavior of Learning Disabled Children.” 28(3)  Journal of Child Psychology & Psychiatry  387 (1987).

3. Jonathan Freedman, Executive Summary, “Media Violence and Aggression: A Review of the Research,” University of Toronto Manuscript (March 2001).

4. Jonathan Freedman,  Media Violence and Its Effect on Aggression: Assessing the Scientific Evidence  (2002).

5. See Jeffrey Goldstein, “Does Playing Violent Video Games Cause Aggressive Behavior?” Paper presented at U. of Chicago “Playing By the Rules” Conference, Oct. 27, 2001, p. 5; Stuart Fischoff, “Psychology’s Quixotic Quest for the Media-Violence Connection,” 4(4) Journal of Media Psychology (1999), http://www.calstatela.edu/faculty/sfischo/violence.html (accessed 9/20/02) (question about grant termination used as measure of aggression); Craig Anderson & Karen Dill, “Video Games and Aggressive Thoughts, Feelings, and Behavior in the Laboratory and in Life,” 78(4) Journal of Personality & Social Psychology 772 (2000) (using as proxies noise blasts and recognizing “aggressive” words); Ellen Wolock, “Is There a Reasonable Approach to Handling Violence in Video Games?” Children’s Software Revue, July/Aug. 2002 (“aggressivity” measured through “increase in heart rate and blood pressure, negative responses on questionnaires, toy choice, etc.”); Craig Emes, “Is Mr. Pac Man Eating Our Children? A Review of the Effect of Video Games on Children,” 42 Canadian Journal of Psychiatry 409, 413 (1997) (reliability and validity of procedures used to measure aggression “are questionable”).

6. See Jonathan Freedman, Media Violence and Its Effect on Aggression,  supra , pp. 49-51, 80-83; Guy Cumberbatch, “Video Violence: Villain or Victim?” (Video Standards Council, UK, 2001), www.videostandards.org.uk/video_violence.htm (accessed 9/13/02) (quoting “one shrewd four year-old who, on arriving at the laboratory, … was heard to whisper to her mother, ‘Look mummy! There’s the doll we have to hit!”); Joanne Savage, “The Criminologist’s Perspective,” in  Violence and the Media  (Freedom Forum, 2001), p. 28 (“it is possible that showing subjects violent material creates an atmosphere of permissiveness and encourages them to be more aggressive”).

7. Jonathan Freedman,  Media Violence and Its Effect on Aggression: Assessing the Scientific Evidence  (2002), pp. 106-107.

8. See generally Federal Trade Commission,  Marketing Violent Entertainment to Children: A Review of Self-Regulation and Industry Practices in the Motion Picture, Music Recording & Electronic Game Industries,  Appendix A – “A Review of Research on the Impact of Violence in Entertainment Media” (Sept. 2000); Jonathan Freedman,  Media Violence and Its Effect on Aggression: Assessing the Scientific Evidence  (2002).

9. See Brandon Centerwall, “Television and Violence: The Scale of the Problem and Where to Go From Here,” 267 JAMA 3059 (1992); Sissela Bok,  Mayhem – Violence as Public Entertainment  (1998), p. 86; Franklin Zimring & Gordon Hawkins,  Crime is Not the Problem – Lethal Violence in America  (1997), pp. 133-34, 239-43; Committee on Communications & Media Law, “Violence in the Media: A Position Paper,” 52  Record  273, 292-93 (Association of the Bar, City of New York, 1997).

10. See, e.g., Bruno Bettelheim,  The Uses of Enchantment – The Meaning and Importance of Fairy Tales  (1975); John Sommerville,  The Rise and Fall of Childhood  (1982), pp. 136-38; Jean Piaget,  Play, Dreams and Imitation in Childhood  (1962), pp. 132-33, 158; Erik Erikson,  Childhood and Society  (1950), p. 215; Henry Jenkins, “Lessons From Littleton: What Congress Doesn’t Want to Hear About Youth and Media,”  Harper’s,  August 1999; David Blum, “Embracing Fear as Fun To Practice for Reality: Why People Like to Terrify Themselves,”  New York Times , Oct. 30, 1999, p. B11; Norbert Elias & Eric Dunning,  Quest for Excitement: Sport and Leisure in the Civilizing Process  (1986), p. 89; Jeffrey Arnett, “The Soundtrack of Restlessness – Musical Preferences and Reckless Behavior Among Adolescents.” 7(3)  Journal of Adolescent Research  328 (July 1992); Jeffrey Arnett. “Adolescents and heavy metal music: From the mouths of metalheads.” 23  Youth & Society  76 (Sept. 1991).

10a. See Why Nine Court Defeats Haven’t Stopped States From Trying to Restrict Violent Video Games and Requiem For California’s Video Game Law.

11. See John Douglas & Mark Olshaker,  The Anatomy of Motive  (1999), pp. 82-87; Stuart Fischoff, “Psychology’s Quixotic Quest for the Media-Violence Connection,” 4(4)  Journal of Media Psychology  (1999), http://www.calstatela.edu/faculty/sfischo/violence.html (accessed 9/20/02).

12. See  Media Literacy: An Alternative to Censorship  (Free Expression Policy Project, 2003).

American Amusement Machine Association v. Kendrick.  244 F.3d 572. U.S. Court of Appeals for the Seventh Circuit, 2001.

Jeffrey Arnett. “The Soundtrack of Restlessness – Musical Preferences and Reckless Behavior Among Adolescents.” 7(3)  Journal of Adolescent Research  328 (July 1992).

Jeffrey Arnett. “Adolescents and heavy metal music: From the mouths of metalheads.” 23  Youth & Society  76 (Sept. 1991).

Martin Barker & Julian Petley, eds.,  Ill Effects: The Media Violence Debate.  1997.

Lillian Bensley & Juliet Van Eenwyk, “Video Games and Real-Life Aggression: Review of the Literature,” 29 Journal of Adolescent Health 244 (2001).

Hubert Blalock, Jr. “Multiple Causation, Indirect Measurement and Generalizability in the Social Sciences,” 68  Synthese – An International Journal for Epistemology, Methodology, and Philosophy of Science  13 (1986).

David Blum. “Embracing Fear as Fun To Practice for Reality: Why People Like to Terrify Themselves.”  New York Times , Oct. 30, 1999, p. B11.

David Buckingham.  Moving Images – Understanding children’s emotional responses to television.  Manchester University Press, 1996.

Committee on Communications & Media Law. “Violence in the Media: A Position Paper.” 52(3)  Record  of The Association of the Bar, City of New York 273 (1997).

Thomas Cook  et al.  “The Implicit Assumptions of Television Research: An Analysis of the 1982 NIMH Report on Television and Behavior.” 47(2)  Public Opinion Quarterly  161 (1983).

Guy Cumberbatch, “Video Violence: Villain or Victim?” Video Standards Council, UK, 2001, www.videostandards.org.uk/video_violence.htm (accessed 9/13/02).

Kevin Durkin,  Computer Games: their effects on young people: A Review.  Office of Film & Literature Classification (Australia), 1995.

Kevin Durkin,  Computer Games and Australians Today.  Australia Office of Film & Literature Classification, 1999.

Craig Emes, “Is Mr. Pac Man Eating Our Children? A Review of the Effect of Video Games on Children,” 42  Canadian Journal of Psychiatry  409 (1997).

Erik Erikson.  Childhood and Society.  Norton, 1950.

Eclipse Enterprises v. Gulotta , 134 F.3d 63. U.S. Court of Appeals for the Second Circuit, 1997.

Federal Trade Commission,  Marketing Violent Entertainment to Children: A Review of Self-Regulation and Industry Practices in the Motion Picture, Music Recording & Electronic Game Industries,  Appendix A – “A Review of Research on the Impact of Violence in Entertainment Media.” Federal Trade Commission, Sept. 2000.

Stuart Fischoff, “Psychology’s Quixotic Quest for the Media-Violence Connection,” 4(4)  Journal of Media Psychology  (1999), http://www.calstatela.edu/faculty/sfischo/violence.html (accessed 9/20/02).

Jib Fowles,  Why Viewers Watch: A Reappraisal of Television’s Effects.  Sage Publications, 1991.

Jib Fowles,  The Case for Television Violence.  Sage Publications, 1999.

Jonathan Freedman,  Media Violence and Its Effect on Aggression: Assessing the Scientific Evidence.  University of Toronto Press, 2002.

Jonathan Freedman. “Viewing Television Violence Does Not Make People More Aggressive.” 22  Hofstra Law Review  833 (1994).

Jonathan Freedman. “Television Violence and Aggression: What Psychologists Should Tell the Public,” in  Psychology and Social Policy  (Peter Suedfeld & Philip Tetlock, eds.). Hemisphere, 1991.

Jonathan Freedman. “Television Violence and Aggression: A Rejoinder.” 100(3)  Psychological Bulletin  372 (1986).

Jonathan Freedman. “Effect of Television Violence on Aggressiveness.” 96  Psychological Bulletin  227 (1984).

Kenneth Gadow & Joyce Sprafkin. “Field Experiments of Television Violence with Children: Evidence for an Environmental Hazard?” 83(3)  Pediatrics  399 (1989).

Henry Giroux.  Fugitive Cultures: Race, Violence, and Youth . Routledge, 1996.

Todd Gitlin. “The Real Problem is Violence, Not Violence on Television,” 22  Hofstra Law Review  885 (1994).

Jeffrey Goldstein, ed.  Why We Watch: The Attractions of Violent Entertainment. Oxford University Press, 1998.

Barrie Gunter.  Dimensions of Television Violence . St. Martin’s Press, 1985.

Barrie Gunter & Adrian Furnham. “Perceptions of television violence: Effects of programme genre and type of violence on viewers’ judgments of violent portrayals.” 23  British Journal of Social Psychology  155 (1984).

Judith Rich Harris.  The Nurture Assumption: Why Children Turn Out the Way They Do . Free Press, 1998.

Marjorie Heins.  Violence and the Media: An exploration of cause, effect, and the First Amendment.  Freedom Forum First Amendment Center, 2001.

Marjorie Heins.  Not in Front of the Children: “Indecency,” Censorship, and the Innocence of Youth.  Rutgers University Press, 2007.

J.C. Herz.  Joystick Nation: How Video Games Ate Our Quarters, Won Our Hearts, and Rewired Our Minds.  Little Brown, 1997.

Paul Humphreys. “Causation in the Social Sciences: An Overview,” 68  Synthese – An International Journal for Epistemology, Methodology, and Philosophy of Science  1 (1986).

Henry Jenkins. “Lessons From Littleton: What Congress Doesn’t Want to Hear About Youth and Media.”  Harper’s,  August 1999.

Gerard Jones.  Killing Monsters: Why Children Need Fantasy, Super Heroes, and Make-Believe Violence.  Basic Books. 2002.

Robert Kaplan. “TV Violence and Aggression Revisited Again.” 37(5)  American Psychologist  589 (1982).

Robert Kaplan & Robert Singer. “Television Violence and Viewer Aggression: A Reexamination of the Evidence,” 32(4)  Journal of Social Issues  35 (1976).

Herbert Kay. “Weaknesses in the Television-Causes-Aggression Analysis by Eron et al.” 27(10)  American Psychologist  970 (1972).

Jonathan Kellerman.  Savage Spawn – Reflections on Violent Children . Ballantine, 1999.

Thomas Krattenmaker & Lucas Powe, Jr. “Televised Violence: First Amendment Principles and Social Science Theory.” 64  Virginia Law Review  1123 (1978).

Lawrence Kurdek. “Gender differences in the psychological symptomatology and coping strategies of young adolescents.” 7  Journal of Early Adolescence  395 (1987).

Judith Levine.  Shooting the Messenger: Why Censorship Won’t Stop Violence. Media Coalition, 2000.

Konrad Lorenz.  On Aggression.  Harcourt Brace & World, 1963.

Mike A. Males.  Framing Youth: Ten Myths About the Next Generation.  Common Courage Press, 1999.

Mike A. Males.  The Scapegoat Generation: America’s War on Adolescents. Common Courage Press, 1996.

Rollo May.  Power and Innocence – A Search for the Sources of Violence.  Norton, 1972.

William McGuire. “The Myth of Massive Media Impact: Savagings and Salvagings,” in  Public Communication and Behavior , Vol. 1 (George Comstock, ed.). Academic Press, 1986.

Steven Messner, “Television Violence and Violent Crime: An Aggregate Analysis,” 33(3)  Social Problems  218 (1986).

J. Ronald Milavsky et al.  Television and Aggression: A Panel Study.  Academic Press, 1982.

National Research Council, National Academy of Sciences,  Understanding and Preventing Violence  (A. Reiss, Jr. & J. Roth, eds.) National Academy Press. 1993.

Debra Niehoff.  The Biology of Violence.  Free Press, 1999.

Marcia Pally.  Sex & Sensibility – Reflections on Forbidden Mirrors and the Will to Censor . Ecco Press, 1994.

Jean Piaget.  Play, Dreams and Imitation in Childhood.  Norton, 1962.

Albert Reiss, Jr. & Jeffrey Roth, eds.  Understanding and Preventing Violence.  National Research Council/National Academy Press, 1993.

Richard Rhodes. “The Media Violence Myth.” American Booksellers Foundation for Free Expression, 2000.  http://www.abffe.org/myth1.htm .

Richard Rhodes. “The Media Violence Myth.”  Rolling Stone , Nov. 23, 2000, p. 55.

Richard Rhodes.  Why they Kill: The Discoveries of a Maverick Criminologist. Vintage, 2000.

David Sohn. “On Eron on Television Violence and Aggression.” 37  American Psychologist  1292 (1982).

Birgitte Holm Sørensen & Carsten Jessen, “It Isn’t Real: Children, Computer Games, Violence and Reality,” in  Children in the New Media Landscape  (C. Von Feilitzen & U. Carlsson, eds.) UNESCO. 2000.

Joyce Sprafkin, Kenneth Gadow & Patricia Grayson. “Effects of Viewing Aggressive Cartoons on the Behavior of Learning Disabled Children.” 28(3)  Journal of Child Psychology & Psychiatry  387 (1987).

Wendy Steiner.  The Scandal of Pleasure – Art in an Age of Fundamentalism . University of Chicago Press, 1995.

The Lancet. “Guns, lies, and videotape” (editorial).  The Lancet , Aug. 14, 1999, p. 525.

United States, Surgeon General.  Youth Violence: A Report of the Surgeon General.  Department of Health & Human Services, 2001.

Oene Wiegman  et al. Television viewing related to aggressive and prosocial behavior.  Stichting Voor Onderzoek van het Onderwijs, 1986.

Oene Wiegman et al. “A Longitudinal Study of the Effects of Television Viewing on Aggressive and Prosocial Behaviors.” 31  British Journal of Social Psychology  147 (1992).

Franklin Zimring & Gordon Hawkins.  Crime is Not the Problem – Lethal Violence in America.  Oxford University Press, 1997.

Media Violence Exposure Scale (MVES)

The Media Violence Exposure Scale (MVES) is a 25-item questionnaire used to measure an individual's exposure to violence in various forms of media, including television, movies, video games, and music. The scale was developed by Krahé and Möller in 2010 and has been widely used in research on media violence.

The MVES questionnaire consists of the following 25 items:

In the past year, how often have you watched violent movies (e.g., action movies, horror movies)?

In the past year, how often have you watched violent TV shows (e.g., crime dramas, police shows)?

In the past year, how often have you watched violent news reports (e.g., coverage of wars, natural disasters)?

In the past year, how often have you played violent video games (e.g., first-person shooter games)?

In the past year, how often have you listened to violent music (e.g., heavy metal, rap)?

In the past year, how often have you read violent books or comics?

How often do you watch violent movies?

How often do you watch violent TV shows?

How often do you watch violent news reports?

How often do you play violent video games?

How often do you listen to violent music?

How often do you read violent books or comics?

How many hours a week do you spend watching violent movies?

How many hours a week do you spend watching violent TV shows?

How many hours a week do you spend watching violent news reports?

How many hours a week do you spend playing violent video games?

How many hours a week do you spend listening to violent music?

How many hours a week do you spend reading violent books or comics?

How often do you watch violent movies alone?

How often do you watch violent TV shows alone?

How often do you watch violent news reports alone?

How often do you play violent video games alone?

How often do you listen to violent music alone?

How often do you read violent books or comics alone?

How often do you watch violent movies with others?

Participants respond to each item on a 5-point Likert scale, ranging from “never” (1) to “very often” (5). Items 7–12 are reverse-scored before summing (1 becomes 5, 2 becomes 4, and so on), so higher raw responses on these items are entered as lower values.

Scoring for the MVES is calculated by summing the responses to each item. The total score can range from 25 to 125, with higher scores indicating greater exposure to media violence.

Criticisms of the MVES include concerns about its reliance on self-report measures, its lack of differentiation between different types of violence, and the potential for social desirability bias in responses.
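The scoring rules described above (25 items, responses of 1–5, items 7–12 reverse-scored, totals summed to a 25–125 range) can be sketched in a short function. This is an illustrative implementation of the description given here, not code from the scale’s authors.

```python
def score_mves(responses):
    """Total an MVES response set per the description above.

    `responses`: list of 25 integers, each 1 ("never") to 5
    ("very often"), in questionnaire order. Items 7-12 (indices
    6-11) are reverse-scored before summing. Returns 25-125.
    """
    if len(responses) != 25:
        raise ValueError("expected 25 item responses")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("each response must be between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        if 6 <= i <= 11:       # items 7-12: reverse-score
            total += 6 - r     # 1 <-> 5, 2 <-> 4, 3 unchanged
        else:
            total += r
    return total
```

For example, a respondent answering “sometimes” (3) to every item scores 75, the midpoint of the 25–125 range, since reverse-scoring leaves a response of 3 unchanged.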

Krahé, B., & Möller, I. (2010). Long-term effects of media violence on aggression and empathy among German adolescents. Journal of Applied Developmental Psychology, 31(5), 401-409.

Krahé, B., Möller, I., Huesmann, L. R., Kirwil, L., Felber, J., & Berger, A. (2011). Desensitization to media violence: Links with habitual media violence exposure, aggressive cognitions, and aggressive behavior. Journal of Personality and Social Psychology, 100(4), 630-646.


Why Many Parents and Teens Think It’s Harder Being a Teen Today

Is it harder these days to be a teen? Or do today’s teenagers have it easier than those of past generations? We asked the following question of 1,453 U.S. parents and teens: Compared with 20 years ago, do you think being a teenager today is harder, easier or about the same?

Parents and teens most often say it’s harder to be a teen today, though parents are far more likely to say this.

Far fewer say it’s easier now, or that it’s about the same. Teens, though, are more likely than parents to say they are unsure.

But why? We asked those who say teen life has gotten harder or easier to explain in their own words why they think so.

Why parents say it’s harder being a teen today

Chart: Technology, especially social media, is the top reason parents think it’s harder being a teen today.

There are big debates about how teenagers are faring these days. And technology’s impact is often at the center of these conversations.

Prominent figures, including the U.S. Surgeon General, have been vocal about the harmful effects technology may be having on young people.

These concerns ring true for the parents in our survey. A majority blame technology – and especially social media – for making teen life more difficult.

Among parents who say it’s harder being a teen today, about two-thirds cite technology in some way. This includes 41% who specifically name social media.

While some mention social media in broad terms, others bring up specific experiences that teens may have on these platforms, such as feeling pressure to act or look a certain way or having negative interactions there. Parents also call out the downsides of being constantly connected through social media.

Pew Research Center has a long history of studying the attitudes and experiences of U.S. teens and parents, especially when it comes to their relationships with technology.

For this analysis, the Center conducted an online survey of 1,453 U.S. teens and parents from Sept. 26 to Oct. 23, 2023, through Ipsos. Ipsos invited one parent from each of a representative set of households with parents of teens in the desired age range from its  KnowledgePanel . The KnowledgePanel is a probability-based web panel recruited primarily through national, random sampling of residential addresses. Parents were asked to think about one teen in their household. (If there were multiple teens ages 13 to 17 in the household, one was randomly chosen.) After completing their section, the parent was asked to have this chosen teen come to the computer and complete the survey in private.

The survey is weighted to be representative of two different populations: 1) parents with teens ages 13 to 17, and 2) teens ages 13 to 17 who live with parents. For each of these populations, the survey is weighted to be representative by age, gender, race and ethnicity, household income and other categories.

Parents and teens were first asked whether they think it is harder, easier, or about the same to be a teen now than it was 20 years ago. Those who answered that it was easier or harder were then asked an open-ended question to explain why they answered the way they did. Center researchers developed a coding scheme categorizing the written responses, coded all responses, then grouped them into the themes explored in this data essay. Quotations may have been lightly edited for grammar, spelling and clarity.
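The coding step described above can be sketched as simple keyword-based theme tagging, where one open-ended response may count toward several themes (as in the essay’s overlapping percentages). The themes and keywords below are illustrative assumptions, not Pew’s actual coding scheme, which was developed and applied by human coders.

```python
# Hypothetical theme dictionary; Pew's real coding scheme is not public
# in this form, so these categories and keywords are illustrative only.
THEMES = {
    "social media": ["social media", "instagram", "tiktok"],
    "technology (general)": ["phone", "internet", "technology", "online"],
    "pressures/expectations": ["pressure", "expectation", "compete"],
}

def code_response(text):
    """Return the set of themes whose keywords appear in one response."""
    lowered = text.lower()
    return {theme for theme, keywords in THEMES.items()
            if any(k in lowered for k in keywords)}

def theme_shares(responses):
    """Share of responses mentioning each theme; a single response can
    count toward multiple themes, so shares need not sum to 1."""
    counts = {theme: 0 for theme in THEMES}
    for r in responses:
        for theme in code_response(r):
            counts[theme] += 1
    return {theme: counts[theme] / len(responses) for theme in THEMES}
```

In practice, human coders and a written codebook are used for reliability; a keyword pass like this only approximates that process.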

Here are the questions used in this analysis among parents and among teens, along with responses, and the survey methodology.

This research was reviewed and approved by an external institutional review board (IRB), Advarra, an independent committee of experts specializing in helping to protect the rights of research participants.

“Social media is a scourge for society, especially for teens. They can’t escape social pressures and are constantly bombarded by images and content that makes them feel insecure and less than perfect, which creates undue stress that they can’t escape.” FATHER, 40s

“Kids are being told what to think and how to feel based on social media.” MOTHER, 40s

Parents name other forms of technology, but at much lower rates. Roughly one-in-ten parents who think being a teen is harder today specifically say the internet (11%) or smartphones (7%) contribute to this.

“Teens are online and they are going to encounter everything offered – positive and negative. Unfortunately, the negative can do major damage, as in cyberbullying, for example.” MOTHER, 30s

Another 26% say technology in general or some other specific type of technology (e.g., video games or television) makes teens’ lives harder today.

“Technology has changed the way people communicate. I can see how kids feel very isolated.” FATHER, 40s

Parents also raise a range of reasons that do not specifically reference technology, with two that stand out: more pressures placed on teens and the country or world being worse off than in the past. Among parents who think it’s harder to be a teen today, 16% say it’s because of the pressures and expectations young people face. These include teens feeling like they have to look or act a certain way or perform at a certain level.

“The competition is more fierce in sports and academics and the bar seems to be higher. Everything is more over-the-top for social activities too. It’s not as simple as it was.” MOTHER, 50s

A similar share (15%) says teen life is harder because the country or world has changed in a bad way, whether due to political issues or to shifts in morals and values.

“Now it is more difficult to instill values, principles, good customs and good behavior, since many bad vices are seen in some schools and public places.” MOTHER, 50s

Other reasons that do not mention technology are less common. For example, roughly one-in-ten of these parents or fewer mention violence and drugs, bullying, and exposure to bad influences.

Why parents say it’s easier being a teen today

Chart: Parents largely point to technology as a reason it’s easier being a teen today

Teens today have a seemingly endless choice of technologies at their disposal, whether it be smartphones, video games or generative AI. And while relatively few parents say teens’ lives are easier today, those who do largely point to technology.

Among parents who say it is easier being a teen today, roughly six-in-ten mention technology as a reason.

Some reference a specific type of technology, like the internet (14%). Another 8% cite smartphones, and 3% cite social media.

“Although the internet can be toxic, it also opens up so many avenues for connection, learning and engagement.” MOTHER, 50s

“We didn’t have smartphones when I was a teenager. Nowadays, teenagers have all the answers in the palm of their hand.” FATHER, 40s

A fair portion (47%) mention technology broadly or name another specific type of technology.

“Technology has improved exponentially, giving access to the whole world at your fingertips.” FATHER, 30s

Some other reasons that emerge do not mention technology specifically. For instance, 18% of parents who say it’s easier being a teen today think this is because there are fewer pressures and expectations on teenagers than in the past.

“Teens today have been shown more leniency; they barely hold themselves responsible.” MOTHER, 40s

And one-in-ten say it’s easier because teens have access to more resources and information.

 “When I was a teen, I had to carry so many books and binders everywhere while my daughter can just have her school laptop. She can complete research easily with internet access on her school device.” MOTHER, 30s

Why teens say it’s harder being a teen today

Chart: Increased pressures and social media stand out as reasons teens say it’s harder to be a teen today

Most teens use social media, and some do so almost constantly. But they also see these sites as a reason teens’ lives are harder today than 20 years ago.

In addition, teens point to the pressures and expectations that are placed on them.

Among teens who say it’s harder to be a teenager today than in the past, roughly four-in-ten mention technology as a reason. This includes a quarter who specifically name social media. Some mention these sites broadly; others link them to harmful experiences like increased pressures to look a certain way or negative interactions with others.

“Social media tells kids what to do and say. And if you aren’t up on it, you look like the fool and become like an outcast from lots of people.” TEEN GIRL

“Social media was not a part of my parents’ teenage lives and I feel that they did not have to ‘curate’ themselves and be a certain way in order to fit [in] as it is today.” TEEN GIRL

Few specifically mention the internet (6%) or smartphones (3%) as reasons. About one-in-ten (11%) cite technology broadly or another type of technology.

“For one thing, my phone is a huge distraction. It takes up so much of my time just looking at stuff that doesn’t even mean anything to me.” TEEN GIRL

Teens name several reasons that do not specifically mention technology – most prominently, the increased pressures and expectations placed on them. Roughly three-in-ten of those who say teen life is harder today (31%) say it’s because of these pressures and expectations.  

“We have so much more homework and pressure from other kids. We are always being looked at by everyone. We can’t escape.” TEEN GIRL

“Adults expect too much from us. We need to get good grades, do extracurricular activities, have a social life, and work part time – all at the same time.” TEEN BOY

Another 15% say it’s harder because the world is worse off today, due to such things as political issues, values being different or the country having declined in some way.

“Teenagers are less able to afford vehicles, rent, etc. and basic living necessities, and are therefore not able to move out for years after they graduate high school and even college.” TEEN BOY

Other reasons that don’t mention technology – including violence and drugs, bullying, and mental health problems – are named by 8% of these teens or fewer.

Why teens say it’s easier being a teen today

Chart: Technology is the top reason why teens think it’s easier being a teen today

Teens also see ways that technology makes life better, whether that’s helping them pursue hobbies, express their creativity or build skills. Overall, few think teens’ lives are easier today than 20 years ago, but those who do largely say technology is a reason.

Six-in-ten teens who say teen life is easier today reference technology in some way. This includes 14% who mention the internet and 12% who mention phones. Just 3% name social media.

“[Teens 20 years ago] didn’t have internet available anywhere and they also didn’t have smartphones to be able to use whenever needed.” TEEN BOY

This also includes 46% who reference technology in general or some other specific type of technology.

“Tech has made it easier to connect with friends.” TEEN BOY

These teens also name reasons that don’t specifically mention technology, including 14% who say life is easier because there are fewer pressures and expectations for people their age.

“Twenty years ago there was probably more pressure to become an adult sooner and get things like a job, a learner’s permit, etc.” TEEN GIRL

And the same share says having more resources available to them has made life easier.

“Nowadays, we have help to deal with your physical and mental well-being, and we have specialists/therapists that we can talk to about our feelings and emotions.” TEEN GIRL

Smaller shares say it’s due to the country and world being better off today (4%) or people being nicer to each other (3%).

How parents and teens compare

Chart: Teens and parents cite social media and pressures at different rates as reasons teen life is harder today

Parents and teens are mostly in agreement on what makes growing up today harder than in the past.

But the rates at which they cite certain factors, like social media or facing pressures, differ.

Among those who say being a teen today is harder, 65% of parents believe it’s because of technology in some way. This drops to 39% among teens.

This divide also stands out when it comes to social media specifically (41% vs. 25%).

Teens, on the other hand, are more likely than parents to describe issues related to overachieving or having to look a certain way. Among those who say teen life is harder today, 31% of teens cite pressures and expectations as a reason, compared with 16% of parents.

Still, there are areas in which parents and teens are in sync. For example, similar shares cite the country or world being worse today (15% each) and violence and drugs (8% each) as reasons life today for teens is harder.

And among those who say being a teen today is easier, roughly six-in-ten parents (59%) and teens (60%) mention technology in some way.


Find out more

This project benefited greatly from the contributions of Director of Internet and Technology Research Monica Anderson and Research Assistants Eugenie Park and Olivia Sidoti. It also benefited from Communications Manager Haley Nolan, Editorial Assistant Anna Jackson and Copy Editor Rebecca Leppert.

Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder.

Follow these links for more of our work on teens and technology:

  • Teens, social media and technology
  • Screen time among teens and parents
  • Views of social media policies for minors
  • Teens’ use of ChatGPT for schoolwork
  • Teens and video games
  • Cellphone distraction in the classroom
  • Parents’ worries about explicit content, time-wasting on social media

Find more reports and blog posts related to internet and technology on our topic page.


88 Media Violence Essay Topic Ideas & Examples

  • 🏆 Best Media Violence Topic Ideas & Essay Examples
  • ⭐ Most Interesting Media Violence Topics to Write About
  • 📑 Good Research Topics About Media Violence
  • ❓ Research Questions About Violence in the Media

  • Media Violence Effect on Youth and Its Regulation It is also important to note that the more important the media puts on violence, the more people are tempted to engage in it for the sake of attention.
  • Media Violence and Aggressive Behavior From one perspective, it is said that the person will learn to like the violence and use it in real life.
  • Media Violence and Importance of Media Literacy Media literacy is the public’s ability to access, decode, evaluate and transmit a message from media. Improved media literacy and education will enable the responsible consumption of information.
  • Relation Between Media Violence and the Death of George Floyd The media coverage of the death of George Floyd exposed the prevalence of police brutality against people of color, which led to nationwide protests.
  • Violence in Media: Contribution to Public Violence Present scholarship affords a more intricate integration flanking the media and community, with the media on engendering in rank from a structure of associations as well as manipulation and with personal definitions and analysis of […]
  • Fear in News and Violence in Media In the proposed paper I intend to present the prevailing fear in American society and which has been produced by news media and the rise of a “problem frame” which is used to delineate this […]
  • Media Violence, Its Reasons and Consequences Regarding the matters of media violence, first of all, it is necessary to mention, that this term is usually regarded in two senses: Information that is provided without any will or determination by the recipient […]
  • The Media Violence Debate and the Risks It Holds for Social Science On the other hand, research on the matter is inconclusive showing that the correlation between violence and aggression varies from null to weak.
  • Media Violence and Aggression Risk Factors The topic of exposure to violence in mass media and a consequent probability of developing more aggressive behaviors is widely investigated and discussed in the literature.
  • Violence in Media and Accepted Norm in Society At the same time, these concerned groups represent the stratum that has the most power in influencing the spreading of media violence and mitigating its effects. The government can ensure that rules and regulations […]
  • Media Violence Laws and Their Effectiveness Thesis statement: With the increasing levels of criminally assaulting behavior in the USA and other countries caused by media violence, it is assumed that the relevant laws have a significant potential for reducing the scale […]
  • Canadian Media Violence, Pornography, Free Speech To fill the gap, the researchers developed a critical analysis of the problem in Canada based on the concept of “moral panic” and a study on the coverage of youth violence in the Canadian media.
  • Does Exposure to Media Violence Promote Aggressive Behavior? One of the major changes that have been prominent in the social environment is the satiety of the mass media. It is incorrect to focus on the irregularities witnessed in the studies whilst the researches […]
  • Research of Violence in the Media The left frontal lobe of the participants was analyzed and found to be more active in the control group than in the exposed group. Exposure of children to violence in the mass media leads to […]
  • Effects of Violence Media on Aggression In case a child is exposed to continuous violent media, chances are high that such a child would develop a deviant behavior, which might lead to the development of aggressive behavior.
  • The Main Cause of Increasing Violent Behavior Among Youths Is Violence in the Media Although the question is controversial, it is possible to state that the media promoting violent films, video games, and music is the cause for increasing violent behaviours because the media provokes the young people’s reflection […]
  • The Effects of Media Violence on People Despite the fact that there is some evidence that, lengthy exposure to violent media increases aggressive behavior in people, this exposure alone cannot cause people to become violent and aggressive for there is no established […]
  • Media Violence and Altruism Consistent presence of children in violent media avenues is a major factor that results to increased aggression even as they grow up. In this case, there is a close link of social aggressive behavior with […]
  • Media Violence and Its Effect on Children’s Aggression
  • Brutal Legacies: Media Violence and America’s Youth
  • Media Violence Should Be Restricted by Government and Does Cause Real-World
  • Children and the Effects of Media Violence
  • Reasons Why Children Suffer From Media Violence
  • Communication as the Easiest Way to Eliminate Media Violence on Children
  • Correlation Between Media Violence and Aggression
  • Defining Criteria for Evaluating Media Violence
  • Media Violence and Its Effects on Society
  • Correlation Between Media Violence, Video Games, and Aggressive Behavior
  • Juvenile Crime and the Influence of Media Violence
  • Linking Media Violence and Negative Behavior
  • Media Violence Affecting Our Mental Stability
  • The Link Between Media Violence and Aggressive Behavior in Children and Teens
  • The Relationships Between Media Violence and Crime Violence
  • Media Violence and Effects on the American Family
  • Correlation Between Media Violence and School Shootings
  • Media Violence and How It Affects Our Conscience
  • Linking Media Violence and the Violent Male Adolescents
  • Media Violence and Its Contributions to Aggressive Behavior in Our Society
  • The Controversy About Media Violence and Violent Video Games
  • Media Violence and Its Effects on School, Grades, and Social Activities
  • Analysis of the Problem Associated With Media Violence
  • Media Violence and Its Impact on Increasing Violence in Young People
  • Relationship Between Video Games and Television Media Violence
  • Media Violence and the Effect It Has on Actual Behavior
  • Television and Media Violence: Is Aggressive Behavior Linked to TV Violence?
  • Media Violence: Censorship Not Needed
  • Television and Media Violence – TV Violence and Common Sense
  • Media Violence Does Not Cause Violent Behavior
  • Television and the Effects of Media Violence on Society
  • Media Violence Increases the Risk of Aggressive Behavior Among Children
  • The American Battle Against the Culture of Media Violence
  • Media Violence May Increase Behavioral Violence
  • The Assumptions Regarding the Myth of Media Violence
  • Media Violence? Media Whatever You Want
  • The Growing Concerns Over Media Violence and Its Effect on Society
  • Media Violence: Not the Real Culprit for the Problems of Society
  • U.S. Population Consumes Much Media Violence
  • Media Violence Turning Good Kids Bad: Fact or Fiction?
  • What Is the Impact of Media Violence on Mental Health?
  • What Is the Contribution of Media Violence to Aggressive and Violent Behavior in Our Society?
  • How Common Is Concern About the Effects of Violence in Media, Video Games, the Internet, and Television?
  • What Are the Ways to Deal with Stress and Violence in the Media?
  • Should the Government Limit Violence in the Media?
  • How Does Violence-Based Media Affect Human Behavior?
  • Why Do Video Games Cause Less Violence Than Other Forms of Media?
  • How Do Media Violence and Advertising Affect the Minds of Young Children and Adults?
  • Does Violence in the Media Increase the Risk of Aggressive Behavior in Children?
  • What Is the Relationship Between Media Violence and Crime?
  • How Does Media Violence Affect Deviant Behavior, Particularly Criminal Behavior?
  • What Does Research Say About the Relationship Between Media Violence and Aggressive Behavior?
  • Is Aggressive Behavior Related to Media Violence?
  • Which Hypothesis Explains That Violence in the Media Causes More Aggressive Behavior?
  • How Does Family Conflict Increase the Effect of Media Violence Exposure on Adolescent Aggression?
  • What Are the Ethical Issues Related to the Portrayal of Violence in the Media?
  • To What Extent Does Media Violence Lead to Aggression?
  • What Are the Negative Consequences of Media Violence for Today’s Youth?
  • How Is America Dealing with a Culture of Media Violence?
  • Is Violence in the Media the Real Culprit of Society’s Problems?
  • What Is the Relationship Between Substance Abuse, Media Violence, School Violence, and Family Violence?
  • Does Intense Media Coverage of Violence Contribute to Its Spread in Our Society?
  • Is Communication the Easiest Way to End Media Violence Against Children?
  • What Are the Clear Connections Between Violence in the Media and Violence in Society?
  • To What Extent Do Sociologists Agree That Violence in the Media Leads to Violence in Real Life?
  • What Explanations Are Offered for Media Violence Against Women?
  • Does Violence in the Media Contribute to Violent Behavior Among Youth?
  • Is It Fact or Fiction That Media Violence Makes Good Kids Bad?

IvyPanda. (2023, March 27). 88 Media Violence Essay Topic Ideas & Examples. https://ivypanda.com/essays/topic/media-violence-essay-topics/



Q&A: Understanding and Preventing Youth Firearm Violence

Jessika Bottiani discusses her research on the significant disparities in youth firearm violence and how understanding those gaps can help future prevention efforts.

Leslie Booren

August 26, 2024

This summer the United States Surgeon General Dr. Vivek Murthy released a landmark advisory on firearm violence , declaring it a public health crisis. According to the advisory, gun violence reaches across the lifespan and is currently the leading cause of death for children and adolescents in America.

Researchers at Youth-Nex, the UVA Center to Promote Effective Youth Development, have been examining some of the root causes of youth firearm violence disparities to better understand this crisis and how future prevention efforts may work.

Recently, the Society for Research on Adolescence (SRA) recognized Dr. Jessika Bottiani, an associate research professor at the UVA School of Education and Human Development and faculty affiliate at Youth-Nex, and her co-authors with the 2024 Social Policy Publication Award for a paper on the prevention of youth firearm violence disparities . SRA highlighted this review as work that should be read by all policymakers.

We sat down with Bottiani to learn more about this research review.

Q: Your paper examined research on youth firearm violence and firearm risk. What did you find?

A: Our review and synthesis of data demonstrated striking differences in firearm risk across intersectional identities. We separated out different types of firearm violence (e.g., homicide, suicide, injury), which revealed distinctions in risk across demographic groups, most saliently gun homicide among Black boys and young men in urban settings.


A staggering degree of inequity in firearm fatalities is shouldered by Black boys and young men in this country, where the rate of firearm homicide is more than 20 times higher among Black boys and young men ages 15-24 than for white boys and young men in the same age groups. We also saw higher rates of gun suicide among white and Indigenous American boys and young men in rural areas of the United States.

When we examined rates by geography, we identified intersectional differences in risk that are important for policymakers to understand. For example, we saw that higher rates of firearm homicide among Black boys and young men were most salient in urban areas of the Midwest and the South. Overlaying data onto maps demonstrated how young male suicide by firearm is also clustered geographically: in rural counties in the Midwest and West for Indigenous young males, and in rural counties in the West for white male youth (who have the second-highest rate of suicide by firearm after Indigenous young males).

Q: Why was a review of the research specifically focused on disparities in youth firearm violence needed?

A: A lot of systematic and scoping reviews on firearm violence had come out in the literature around this time, but none of them focused on understanding why Black boys and young men in urban areas were so disproportionately affected, or why we were also seeing gaps affecting rural White boys and young men. This paper presented data that revealed the degree of these disparities and tried to understand the root causes.

We don’t pay enough attention to the role of racist historical policies and regulations that have calcified into today’s racially segregated geographies and poverty. With this paper, we wanted to reveal the way in which youth gun violence is inextricably bound to the history of race, place, and culture in the United States. The paper also delves into cultural norms around guns and masculinity. We feel insights on these aspects of context are vital for understanding how to address youth firearm violence.

Q: What future prevention efforts do you suggest in your paper?

A: We put forth a number of evidence-based solutions for settings ranging from emergency rooms to schools to address firearm violence at the individual level. Yet perhaps more importantly, we also provide suggestions for tackling the structural and sociocultural factors that underlie firearm violence.

At the community level, our recommendations range from violence interrupters to programs and policies that seek to disrupt racial segregation and redress housing inequities. We also note the potential for media campaigns addressing sociocultural norms to be a tool for prevention.

We provided a review of gun restriction and safety policies and their potential effectiveness in addressing youth firearm violence (while also acknowledging the political climate wherein such policies have been increasingly challenged). We point out that some recent firearm-related policies, purportedly race-neutral in their language, have had harmful impacts specifically on communities and people of color.

Individual level interventions or policies that seek to address only one piece of the puzzle are bound to be ineffective at scale. Rather, what is required are multisector, place-based initiatives that address structural factors related to poverty and the built environment in under-resourced segregated neighborhoods.


Day One: Placebo Workshop: Translational Research Domains and Key Questions

July 11, 2024

Welcome Remarks

ERIN KING: All right. We'll go ahead and get started. On behalf of the co-chairs and the NIMH planning committee, I'd like to welcome you to the NIMH Placebo Workshop: Translational Research Domains and Key Questions.

Before we begin, I'm going to quickly go through a few housekeeping items. All attendees have been entered into the workshop in listen-only mode with cameras disabled. You can submit your questions via the Q&A box at any time during the presentation; be sure to address your question to the speaker you'd like to have respond. For more information on today's speakers, their biographies can be found on the event registration website.

If you have technical difficulties hearing or viewing the workshop, please note these in the Q&A box and our technicians will work to fix the problem. You can also send an email to [email protected]. And we'll put that email address in the chat box. This workshop will be recorded and posted to the NIMH event website for later viewing.

Now I'd like to turn it over to the acting NIMH Director, Dr. Shelli Avenevoli for opening remarks.

I think the audio is still out. If we can restart the video with the audio turned up.

TOR WAGER: That was some placebo audio. I think I might be able to share my screen and get the audio to come up on the video. So maybe I can try that. Hopefully you can see this okay. Let's see if it comes through.

SHELLI AVENEVOLI: Good morning. I'm excited to be here today to kick off the NIMH Placebo Workshop. I am currently the Acting Director of NIMH, and I look forward to serving in this role while NIMH conducts a national search for the next NIMH Director.

Today we are bringing together experts in neurobiology, clinical trials and regulatory science to examine placebo effects in drug devices and psychosocial interventions. NIMH has long understood that the placebo phenomenon is highly active in studies of mental illness. Understanding how to design and interpret clinical trial results as well as placebo neurobiological mechanisms have been important research questions that still have significant gaps. Subsequently, I'm eager to learn what you believe are the most important questions of placebo research and how they might be answered. This is no small charge, I understand. But our organizers have designed a carefully thought out agenda to help facilitate our success.

The workshop is organized into domains that aim to identify those important questions. I'm looking forward to hearing a historical review of the successes and failures around mitigating the placebo response in both academic and industry research. This includes historical perspectives in drug and device trials, understanding psychosocial aspects of the placebo response and measuring and mitigating the placebo effect.

Clearly, several perspectives will be discussed during these presentations. It will be exciting to hear your individual views as well as the panel discussions. I'd like to thank Doctors Tor Wager and Cristina Cusin, the co-chairs of the workshop, as well as the rest of the planning committee for their work in organizing this excellent agenda.

I will now turn it over to Dr. Tor Wager. Thank you.

Introduction and Workshop Overview

TOR WAGER: Okay. Hi, everybody. Sorry the audio didn't turn out as well as we had hoped, but I hope you could still hear it to some degree. And I just want to say I'm really delighted to have you all here. And I'm really delighted that NIMH has decided to organize this workshop and has worked so hard in planning it.

I'd like to thank my co-chair Cristina and also the NIMH co-leads Erin King and Doug Meinecke as well as the rest of the team that's been working really hard on preparing this meeting, including Meg Grabb and Laura Rowland and Alex Talkovsky, Mi Hillefors and Arina Knowlton.

My job for the next few minutes is just to give you a brief overview of the -- some of the main concepts in the placebo field altogether. And I'm going to start really at the very, very beginning.

The workshop goals are really to understand how placebo and nocebo effects impact clinical trial design and outcomes; to understand some of the psychological, neurobiological, and social mechanisms that underlie placebo effects.

And we'd like to think together to use this understanding to help to identify and maximize therapeutic effects of drugs and devices. And that means better clinical trial designs, better identification of outcomes, and also to harness placebo mechanisms in clinical care alongside active treatments so that we don't think of only specific treatments, we think of treatments as having psychological and psychosocial components as well as active drug or device components.

And to go back to the very, very beginning, my colleague Ted Kaptchuk once wrote that the history of medicine is the history of the placebo effect. So this is the Ebers Papyrus, circa 1500 BCE, and it documents hundreds of ancient medications that are now thought to be little better than or no better than placebo effects. Some of them we recognize today like, for example, opium, the ingredient of opiates; and wormwood, the ingredient of absinthe for headache.

If you were poisoned, you might be treated with crushed-up emerald or bezoar stone, which is undigested material from the intestines of animals. You might be treated with human sweat and tapeworms and feces, moss scraped from the skull of a hanged criminal, or powdered Egyptian mummy, among many other treatments. And what all of these have in common is that none of them, or very few of them, have active ingredients in terms of specific effects, but they all act on the mind and brain of the perceiver. And so there is something about the beliefs and the imagination of the person that has made these treatments persist for many, many centuries.

And this provides both a challenge and an opportunity. I'm going to introduce the challenge with this clinical trial of a gene therapy for Parkinson's disease, AAV2-neurturin, which was an industry-funded trial. And they went out two years. This is a genetic manipulation intervention for Parkinson's disease. And what you see here is an improvement in motor scores on the UPDRS Part III. And if you look, people getting the active treatment got substantially better within the first six months and they stayed better for two years.

And this seems great. But the problem is that this trial failed. And the failure resulted in the drug company being sold off and this treatment may never see the light of day. And that's because people in the placebo group also got better and stayed better for two years. And there was no drug placebo difference.

And this is really shocking to me because Parkinson's is a neurodegenerative disorder. And so it's very surprising to see changes of this magnitude last this long. So the opportunity is in harnessing these psychosocial processes and the active ingredients that go into placebo responses like this. And the challenge, of course, is that placebo responses can mask effects of treatment in the way that we've seen here.

And this is not a unique occurrence. In many cases, there are treatments that are widely used, that are Medicare reimbursed, that turn out, after they are tested later, to be no better than placebo in randomized clinical trials. This includes arthroscopic knee surgery for arthritis, vertebroplasty, and epidural steroid injections, which are still practiced widely every day. Some other interesting ones, like stents for angina, which is chest pain. And also some recent high-profile failures to beat placebo after very promising initial results in emerging treatments, like the gene therapy for Parkinson's disease that I mentioned before and deep brain stimulation for depression.

A recent interesting case is the reversal of FDA approval for phenylephrine which is a very common nasal decongestant. It's the most widely used decongestant on the market. Almost $2 billion in sales. So it turns out, it may not be better than the placebo. One of the problems is that in some areas like, for example, in chronic pain, placebo effects are growing across time but drug effects are not. And so the drug placebo gap is shrinking and fewer and fewer treatments are then getting to market and getting through clinical trials.

And that's particularly true, in this study by Alexander Tuttle, in the United States. So as an example, surgery has been widely practiced first in an open-label way, where people know what they are getting. And it was only much later that people started to go back and do trials where patients would get a sham surgery that was blinded, or just a superficial incision, so the person doesn't know that they are not getting the real surgery. And those sham surgeries in many cases have effects that are substantial, and in some cases as large or nearly as large as the active drug effects.

So this is what we call placebo response which is overall improvement on placebo. It doesn't mean that the sham surgery or other placebo treatment caused them to get better.

And so if we think about what the placebo response is, it's a mixture of interesting and uninteresting effects, including regression to the mean: people fluctuate in their symptoms over time, and they sometimes tend to enroll when their symptoms are high. There is sampling bias and selective attrition. There are natural history effects. And then there is the placebo effect, which we'll define as a causal effect of the placebo context.

And the simplest way to identify a placebo effect is to compare placebo treatment with a natural history or no treatment group in a randomized trial. So here in this three-arm trial, a parallel groups trial, what you see is the typical way of identifying the effect is the active drug effect comparing active treatment to placebo. And you need to compare placebo to the natural history group to identify the placebo effect here.
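The arithmetic of that three-arm design can be sketched like this; the group means here are hypothetical, not from any trial discussed in the talk:

```python
# Sketch of effect identification in a three-arm parallel-groups trial.
# Mean symptom improvement per arm (hypothetical numbers).
natural_history = 2.0   # no-treatment arm: natural course, regression to the mean
placebo_arm     = 6.0   # placebo arm: natural history + placebo effect
drug_arm        = 9.0   # active arm: natural history + placebo effect + drug effect

placebo_response = placebo_arm                    # overall improvement on placebo
placebo_effect = placebo_arm - natural_history    # causal effect of the placebo context
drug_effect = drug_arm - placebo_arm              # specific effect of the active drug

print(placebo_response, placebo_effect, drug_effect)  # 6.0 4.0 3.0
```

The point of the sketch is that the "placebo response" (6.0) overstates the causal "placebo effect" (4.0) unless a natural history arm is included.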

And if we look at those studies that do such comparisons, we can see that there are many effects across different areas. And those effects are active brain body responses or mental responses to the treatment in context. And so there are many ingredients. It's not the placebo drug or stimulation or device itself, of course, that has the effect. It's the suggestions and the context surrounding that.

And there are many types of cues. There are verbal suggestions and information; there are place cues; there are social cues, including body language and touch. There are specific treatment cues that are associated with the drugs. And there is a rich internal context: expectations about treatment outcomes, interpretations of what symptoms mean and of the meaning of the therapeutic context and the care context, as well as engagement of emotions and memories, and what I'm calling here precognitive associations that are learned or conditioned responses in the brain and the body.

So there is a large family of placebo effects; not one, but many placebo effects. They operate both via conscious and unconscious means. They are embedded in the nervous system through learning processes. And an idea here is that the meaning of the treatment and the symptom to the person is really the key. What are the implications of the cues and the symptoms and the whole context for future well-being?

So if we look at studies that have isolated placebo effects compared to no treatment, we see that there are many studies and many systematic reviews and meta-analyses: in many types of clinical pain; in depression; in Parkinson's disease, in motor symptoms as well as other symptoms; in anxiety, including social anxiety in particular and general anxiety; in substance misuse and perceived drug effects; some effects in schizophrenia; potentially some effects in asthma, and that is a tricky one, with conflicting results that we could talk about; and effects on sleep and cognitive function and more. So these effects are really widespread.

There have been some attempts to decompose these into, you know, how large are the effects of placebo versus the effects of active drugs. And so if you look at pharmacotherapy for depression, at least in one analysis here by Irving Kirsch, half of the overall benefit, the active treatment response, is placebo. A very small proportion is specific drug effect. And about a quarter of it is people who would have gotten better anyway; they recover spontaneously from depression. That's natural history.

So the placebo effect is a large part of the overall therapy response. And this mirrors what's called common factors in psychotherapy, and this holds for mood and anxiety disorders, substance use disorders, and more. Common factors are those therapeutic elements that are shared across many treatments, across drug and therapy: providing listening and social support, positive engagement, and positive expectations. And in the analysis here, the common factors were also responsible for the lion's share of the therapeutic effects of psychotherapy.

So in one sense you can say that placebo effects are really powerful; they can affect many kinds of outcomes. But there is continuing controversy, I would say, even though these competing "New York Times" headlines are somewhat old now. And this latter headline came out after a landmark meta-analysis by Hróbjartsson and Gøtzsche in 2001, which they've updated several times since then.

And what they found is consistent with what I said. There are significant placebo effects in the domains that they were powered to detect. But they discounted those; they said it's probably due to reporting bias and other kinds of biases. So this is a key question: which outcomes count as important?

So here is an example from a fairly recent study of expectancy effects in anxiety. They compared people getting an SSRI in a typical open-label way, which is the blue line, with people who got a hidden SSRI and didn't know that they were getting it. And that difference is a placebo-like effect, or an expectancy effect.

There was a substantial drop in anxiety that was caused by the knowledge that people were being treated. So the question is, does that actually count as a meaningful effect? And, you know, I think it's right to debate and discuss this. It relates to an idea that I'll call, heuristically, depth: this effect might simply be people telling us what we want to hear. That's a communication bias, or a so-called demand characteristic, which has been studied since the '50s.

It could be an effect on how people feel and on their decision making about how they report feelings. It could be an effect on the construction of anxiety in the brain. Or it could be a deeper effect, potentially, on some kind of lower-level pathophysiology, some kind of effect on the organic causes of anxiety.

So the gold standard has been to look for these organic causes. And it gets very tricky when you define outcomes in terms of symptoms, as is true with pain, depression-related symptoms, anxiety-related symptoms, and more in mental health. In pain, what the field has been trying to do is to look at pathways that are involved in early perceptual effects of nociception, and at those central circuits that are involved in constructing the pain experience, to ask if those are affected. And this is probably the most developed area in the human neuroscience of placebo effects. We see reduced responses to painful events in many relevant areas, including, in some studies, spinal cord regions that are known to give rise to nociceptive input to the brain.

There are increases in activity in putative pain control systems that send descending projections down to the spinal cord. And there is release of endogenous opioids with placebo treatment in some of those pain control systems and in other areas of the frontal cortex and forebrain. So these are all causal effects of placebo treatment that seem to be relevant for the construction of pain.

And what is remarkable is that the areas of the frontal cortex that are most reliably influenced by placebo, including the medial prefrontal cortex and the insula and other areas, really are not just involved in pain, of course. They are part of systems that are involved in high-level predictive control of motivation, decision making, and perception.

So an emerging concept is this idea that what these circuits are for and what a lot of our brain is for in general is forming a predictive model of what is going to happen to us, what situation do we find ourselves in. So these cortical circuits are important for representing hidden states that we have to infer. And that's another way of saying meaning. Therefore, understanding what the meaning of events is. If it's an eye gaze, what is the meaning of that look? If it's a movement, what is the underlying meaning of the movement?

And it's that underlying situation model, predictive model that guides how we respond to a situation and what we learn from experience. So these systems in the brain that are influenced by placebo provide joint control over perception, over behavior and decision making including whether we choose to smoke or not smoke or eat more or eat less. And the body through the autonomic and neuroendocrine and immune systems. So broadly speaking, there is this joint control.

So this is one example where we can get closer to pathophysiology with some forms of placebo effects. And this is forebrain control over all of the various brainstem and spinal centers that are important for particular kinds of regulation of the body: the respiratory muscles, the heart, the intestines, and immune responses as well. When we look in the brain, the most consistent correlates in meta-analyses of immune changes in the body are areas that seem to play central roles in placebo effects as well, like the ventromedial prefrontal cortex.

And another important development in this, and aspect of this, is the idea of parallel models in nonhuman animals and in humans, particularly those that use classical conditioning. So there are many kinds of pharmacological conditioning in which a cue is paired with a drug over time, usually over several days. And then the cues alone can come to elicit effects that sometimes mimic drug effects and sometimes are compensatory responses that oppose them.

And one of the most famous was the phenomenon of conditioned immunosuppression that was first published by Bob Ader in 1976 in Science and has since been developed quite a lot. So this is from a review by Manfred Schedlowski's group, which is a very comprehensive review of different kinds of immunosuppressive responses. And the point I want to make here is that there is increasing evidence that the insular cortex, as an example, is really important for storing memories about context that then get translated into effects on cellular immunity that are relevant for the trajectory of health and disease in broad ways. And those areas of the insula are similar to those that are involved in placebo effects in humans on pain, itch, cough, disgust, and other conditions as well. So there is the potential here for memories that are stored in the cortex to play out in very important ways in the body. And that can influence mental health directly and indirectly as well.

And I want us to move toward wrapping up here with a couple of ideas about why these effects should exist. Why do we have placebo effects in the first place? And two ideas are that we need them for two reasons. One is predictive control. What we need an evolved, highly developed brain for is to anticipate threats and opportunities in the environment and respond in advance. So we don't respond to the world as it is. We really respond to the world as it could be, or as we think it will be.

And the second principle is causal inference. What is less relevant is the particular sensory signals that are hitting our sensory apparatus at any one time. What is really more important is the underlying state of the body and the world, what is happening.

Just to illustrate those things, one example from Peter Sterling is this very complicated machinery for regulating blood pressure when you stand up and when you are under psychological stress. And we need this complex set of machinery in order to predict what the future metabolic demands are. So our blood pressure, like other systems, essentially responds in advance of challenges. And that's how stress gets into a lot of our physiology.

An example of the second is a simple example from vision. If you look at these two squares that we circled here, you can see they probably look like they are different colors. One is brighter and one is darker. But if I just take away the context, you can see that the squares are exactly the same color. And so you don't see the color of the light hitting your retina. What you see is your brain's guess about the underlying color of the paint or the color of the cubes that discounts illumination and factors it out as a cause. So what our perceptual systems are doing is causal inference.

So with pain, itch, or nausea, for example, or other symptoms, or mood or motivation, you don't feel your skin or your stomach or your body in a direct way. Your brain is making a guess about the underlying state from multiple types of information. And this really starts with our memories and past associations and our projections about the future.

So I'm using pain as an example because we study it a lot. But the idea is that the pain really starts with these projections about the future. And there is a representation in the brain of the current state of threat and safety, if you will. Nociceptive input from the body plays some role in that, but it's really the central construction that integrates other forms of context, what is the look, what kind of support are you getting, that together determines what we end up feeling.

And there are different kind of responses that are linked to different parts of that system. But the idea of suffering and well being, of fatigue and motivation, all of those things I think are related to the current state.

There are many open questions. You know, one is: which outcomes count as important for determining whether an intervention is meaningful? Can we separate changes in decision making and suffering from response biases that we really shouldn't consider important for clinical research?

Secondly, can we identify outcomes affected by real treatments, drugs and devices, but not placebos? And how can we use those outcomes in clinical trials to advance the regulatory front as well as the scientific front?

Third, what kinds of experimental designs will help us separate specific effects from these broader context effects? And is this a reasonable goal? Can we actually separate them, or do they often work together or synergize with one another? So do they interact?

Fourth, can we predict who will be a placebo responder from personality, genetics perhaps, or brain responses? Can we use this to maximize our treatment effects in clinical trials and improve the pipeline? And, you know, it's unclear whether that is possible.

And finally, how can we use all of these factors we've discussed alongside other treatments that are current medical treatments to improve outcomes?

With that, I'm just going to introduce the rest of today. I realize we're a little bit long getting started; hopefully we can make up some time here. But now we're going to start our first session, which is about perspectives on placebo in drug trials, from Michael Detke, Ni Khin, and Tiffany Farchione. So this is going to be about the history and state of how placebo effects interface with the regulatory environment.

Then we'll take a break. And after that we'll continue to the rest of the sessions. So without further ado, I would like to turn it over to Mike. Thank you.

Historic Perspectives on Placebo in Drug Trials

MICHAEL DETKE: I think Ni is going before me. Correct, Ni?

NI AYE KHIN: Yes, I am.

MICHAEL DETKE: Okay, thank you.

NI AYE KHIN: I'll do the first part for the historical perspective.

Hi, I'm Ni Khin. And I'll be talking about historical perspective on placebo response in drug trials.

My disclaimer slide. Although I'm currently an employee of Neurocrine Biosciences, part of the presentation today is work conducted during my tenure with the U.S. Food and Drug Administration.

The presentation reflects my own views and should not be attributed to any of the organizations I was or am currently affiliated with.

Let me start with a brief overview of what the FDA requires for drug approval. FDA regulations require substantial evidence of effectiveness, consisting of evidence from adequate and well-controlled trials.

The usual interpretation is that this requires two positive randomized controlled clinical trials. However, in the drug approval process we use a holistic approach in reviewing clinical efficacy and safety from clinical trials. So the FDA sees data from both successful and unsuccessful studies, positive and negative, as a package when industry drug sponsors submit New Drug Application packages to the agency. The efficacy results generally come from shorter-term efficacy data. And safety data follow the ICH requirement: about 1,500 patients overall, 300 to 600 exposed for six months, and at least 100 patients for a year. Generally the maintenance efficacy trials, also known as relapse prevention trials, are conducted mostly post-approval in the U.S.

So the data that I'm presenting come from a pooled analysis of data submitted to the agency in support of New Drug Applications. Why did we undertake that data mining effort? As you know, the high rate of placebo response and the decline in treatment effect over time in psychiatry were the major concerns. At the time we did this analysis, there were increasing numbers of trials at clinical trial sites outside the U.S., and we were looking into the applicability of such data from non-U.S. sites to the U.S. population.

So we did an exploratory analysis of pooled efficacy data from two different psychiatric indications, major depressive disorder and schizophrenia. We had both trial-level and subject-level data. For depression, across the application packages, we have the Hamilton Depression Rating Scale as the common primary or key secondary efficacy rating scale. And in the schizophrenia application packages we have the PANSS, the Positive and Negative Syndrome Scale.

So we were looking at those two endpoint measures, did some exploratory analyses, and then summarized the findings. The processes and challenges experienced in our effort looking into these databases will also be shared today.

Let me start with the depression trial-level data that we looked at. It consisted of 81 short-term RCTs spanning about 25 years, mainly SSRI and SNRI antidepressants. Across those 81 short-term controlled trials, the total number of subjects was over 20,000, with 81% enrolled at U.S. sites. And as you can see here, the majority were white and female, and the mean age was around 43 years. Baseline HAMD scores were approximately 24. And the average dropout rate in these trials was approximately 33%.

We explored treatment effect and trial success rate based on the questions raised about the applicability of data from non-U.S. sites to the U.S. population. These are the overall results that we published in a 2011 paper. We noticed that both the placebo and drug groups from non-U.S. sites tended to show a larger change from baseline in HAMD-17 total scores than those observed in the U.S.

You can see in the left-hand column that the non-U.S.-site placebo response is approximately 9.5 points and the U.S. is about 8. But the drug response was also slightly larger at non-U.S. sites and slightly lower in the U.S. So if you take the drug-placebo difference, the average is about the same for data coming from both U.S. and non-U.S. sites: about a 2.5-point difference in HAMD total.
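The point being made here is that regional differences in raw improvement can cancel out in the drug-placebo contrast. A minimal sketch: the placebo changes (~9.5 and ~8 points) are from the talk, but the drug-arm changes below are illustrative assumptions chosen so the difference comes out near 2.5 points in both regions:

```python
# HAMD-17 change from baseline by region and arm.
# Placebo values from the talk; drug values are hypothetical illustrations.
change = {
    "non_US": {"drug": 12.0, "placebo": 9.5},
    "US":     {"drug": 10.5, "placebo": 8.0},
}

for region, arms in change.items():
    # The treatment effect is the drug-placebo difference, not raw improvement.
    treatment_effect = arms["drug"] - arms["placebo"]
    print(region, treatment_effect)  # 2.5 points in each region
```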

So what we see overall, over 25 years of antidepressant trials, is an increase in highly variable placebo responses across trials and a slight decline in treatment effect, moving from approximately a three-point drug-placebo difference in HAMD total toward a two-point difference. And the trial success rate was slightly lower, 55% versus 50%.

As part of that analysis we also looked at any difference in the data between fixed and flexible doses. About 95% of the trials in the database utilized a flexible dosing regimen. Placebo responses were quite similar, and the treatment effect was slightly larger for flexible doses as compared to fixed doses.

And we pointed out that in our analysis we used the number of trials, rather than the number of treatment arms, as the denominator in the calculation. So there was a slightly higher trial success rate for fixed-dose trials, 57%, versus 50% for flexible-dose trials.

Some of you may already know that there was an earlier paper published by Arif Khan and his group on a similar database, but with datasets coming from trials conducted between 1985 and 2000.

That analysis showed a success rate of 61% for flexible-dose studies versus 33% for fixed-dose studies. Khan used the number of treatment arms as the denominator, and if you look at the results that way, flexible dose is also 60% compared to 31% for fixed dose. However, in our larger database, for data from trials conducted after 2000, that is 2001 to 2008, our findings still favor the fixed-dose design, with a success rate around 60% for fixed-dose arms compared to 34% for flexible-dose arms. So we think that in the more recent trials, the success rate of fixed-dose studies is likely higher.

In addition to trial-level data, we also looked into subject-level data from these trials. For the subject-level data we started with data from 24 randomized controlled trials and then expanded to 45. And the main thing we were looking at was what we could use as a responder definition. Do we need a HAMD total cutoff?

From that analysis we noticed that overall, a 50% change from baseline is sufficient to define responder status, and a HAMD total cutoff is not necessary. Whether you use percent change, a HAMD total cutoff, or both, you would capture more or less the same subjects as meeting responder status.
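The percent-change responder rule described above can be sketched as follows; the 50% threshold is from the talk, while the subject data here are made up for illustration:

```python
# Responder status by percent reduction from baseline on the HAMD total score.
def is_responder(baseline, endpoint, threshold=0.50):
    """A reduction of >= 50% from baseline counts as response."""
    return (baseline - endpoint) / baseline >= threshold

# Hypothetical (baseline, endpoint) HAMD totals; baselines near the ~24 mean
# reported for these trials.
subjects = [(24, 10), (24, 14), (26, 13), (22, 20)]
responders = [is_responder(b, e) for b, e in subjects]
print(responders)  # [True, False, True, False]
```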

Another item that we looked into was the optimal trial duration. Generally, eight-week trials are the ones that give overall successful trial results, and we looked into whether shortening to six weeks would give similar results. The answer was somewhere in between: you could maybe shorten the trial if you could see the two-point difference at week six.

Another item we looked into was time to treatment discontinuation, instead of change from baseline, as the primary efficacy endpoint. And the data were not supportive of time to treatment discontinuation as an alternative primary endpoint for drug trials.

So I'm going to cover a little bit about efficacy results from maintenance efficacy trials, also known as relapse prevention trials, where we usually use a randomized withdrawal design.

These are generally not a regulatory requirement in the U.S. But if the agency sees that a maintenance efficacy study would be needed, we communicate that with the drug sponsor before the application comes in.

As you can see on this slide, these longer-term maintenance efficacy studies are generally designed with open-label treatment for approximately 12 weeks. Once subjects meet stable responder status, they are randomized into a double-blind randomized withdrawal phase, with half continuing on the drug and the other half switched to placebo. The endpoint generally used is the time to relapse or the relapse rate. We looked at trial-level data from 15 randomized controlled maintenance (randomized withdrawal) trials conducted between 1987 and 2012. You can see that the demographic disposition is more or less the same as for the short-term trials. The average number of subjects per study is around 500. The mean HAMD score at baseline prior to open label is more or less the same, and the mean HAMD total score at randomization, after subjects meet responder status, is 9.4.

The response and relapse criteria used in these studies varied among studies, and the stabilization period varied as well. Regardless of that, these drugs are approved based on short-term studies; you also see maintenance efficacy based on the results of these studies.

This is just an overall slide that shows the duration of open label, the open-label response criteria, the response rate, the double-blind study period, the relapse criteria, and the placebo and drug relapse rates, with roughly a 50% reduction in relapse seen with drug treatment.

These results were published. Overall, I just want to summarize by saying that almost all the trials were successful. In the open-label phase, the mean treatment response is about 52%. Among those meeting responder status and going into the double-blind randomized withdrawal phase, there is on average a 50% reduction in relapse rate for the drug treatment group as compared to placebo. And in that paper we have a side-by-side comparison of subject-level data in terms of relapse survival analysis, Kaplan-Meier curves.
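The "50% reduction in relapse rate" is a relative reduction of the drug arm's relapse rate against the placebo arm's. A minimal sketch, with illustrative rates rather than numbers from any specific trial:

```python
# Relative reduction in relapse rate in the double-blind randomized
# withdrawal phase, drug vs. placebo. Rates here are hypothetical; the talk
# reports an average ~50% relative reduction across these maintenance trials.
placebo_relapse = 0.40   # fraction of placebo-switched subjects who relapse
drug_relapse = 0.20      # fraction of drug-continued subjects who relapse

relative_reduction = (placebo_relapse - drug_relapse) / placebo_relapse
print(relative_reduction)  # 0.5, i.e. a 50% relative reduction
```

In the actual trials this comparison is typically made with time-to-relapse survival analysis (the Kaplan-Meier curves mentioned above) rather than raw proportions.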

Let me summarize a little bit about the schizophrenia trial data. We did a pooled analysis of 32 randomized placebo-controlled short-term clinical trials conducted between 1991 and 2009, mainly atypical antipsychotics. This slide shows the number of subjects along with mean age and demographic distribution, and the mean baseline PANSS total score.

We reported the observed increasing placebo response, stable drug response, and declining treatment effect over time in the North America region. One thing we noticed was that the treatment effect decreased as body weight increased in North American trial patients. FDA also conducted an analysis of the post-2009 period, and this slide shows a comparison between pre-2009 and post-2009 trials. The recent trials are predominantly multiregional clinical trials, and the dropout rate is slightly higher. But when you look at the two pooled analyses in combination, the continuing trend of increasing placebo response and decreasing treatment effect still persists over the roughly 25-year period covered by both the depression and schizophrenia analyses.

I'm just going to let folks know a little bit about the challenges in doing these types of pooled analyses: dataset and data-standard issues, because of differences in technology across those times. We do not have subject-level data in the database for trials conducted before 1997.

And of course resources are always an issue. The main point that I would like to bring to everyone's attention is collaboration, collaboration, collaboration in terms of solving this major issue of placebo response.

I'm going to stop here. And I'll let Dr. Mike Detke continue with this topic from industry perspective. Mike.

MICHAEL DETKE: Thanks, Ni. I'm having problems sharing my screen. I got to make this full screen first. Okay, great. Sorry, minor technical problems. Thanks for the introductions, thanks to NIMH for inviting me to present here.

As Ni said very well, my background is industry, and I'll be presenting this from an industry perspective. I've spent 25 years working at a clinical trial site, at big pharma, at small biotech, and at a vendor company, all in CNS clinical development, mostly drugs. I'm also a board-certified psychiatrist and practiced for about 20 years; I still do medicine part time. And I'll mention relevant disclosures as they come up during my talk, because I have worked in these fields a fair bit.

So that being said, there we go. This is just a high level overview of what I'll talk about. And again, from the industry perspective in contrast to the –

ERIN KING: Your camera is off if you want to turn it on.

MICHAEL DETKE: I will turn it on. My apologies. There we go.

So as I said, I'll be presenting from the industry perspective. And for the most part, my definition of placebo response throughout this talk is: if patients got seven points better on placebo and ten points better on drug, the placebo response is seven points. We'll be focusing on that perspective.

And Tor gave a great overview of many other aspects of understanding placebo. And we'll talk and my esteemed co-presenters will talk more about that, too.

But again, I'll give you the historical perspective. And mostly I'm going to try to go through some data, some a little older, some a little newer, on things that have been tried to reduce placebo response and/or improve signal detection, that is, drug-placebo separation, which, especially with a proven effective therapeutic, is probably a better way to look at it. And this is just a list of some of the topics I'll cover. I've got a lot of ground to cover, and this won't be exhaustive. But I'll do my best to get through as much of it as possible for you today.

Dr. Khin already talked about designs including the randomized withdrawal design. Important to keep those in mind. I'll briefly mention a couple of other major designs here that are worth keeping in mind. 

The crossover design has the advantage of much higher statistical power, because the ideal way to use it is to have the patients serve as their own control group. So you're doing within-subject statistics, which makes this much more powerful. You can do a much more statistically powerful study with far fewer patients.
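
To make the power advantage concrete, here is a rough back-of-the-envelope sketch (not from the talk) comparing total sample sizes for a parallel-group versus a crossover design using the standard normal-approximation formulas; the effect size d = 0.4 and within-subject correlation rho = 0.6 are illustrative assumptions.

```python
from math import ceil, sqrt

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # 80% power

def n_parallel_per_arm(d):
    # standard normal-approximation sample size per arm,
    # two-arm parallel-group design
    return ceil(2 * (Z_ALPHA + Z_BETA) ** 2 / d ** 2)

def n_crossover_subjects(d, rho):
    # each subject serves as their own control; the within-subject
    # correlation rho shrinks the variance of the paired difference
    return ceil((Z_ALPHA + Z_BETA) ** 2 * 2 * (1 - rho) / d ** 2)

d = 0.4  # assumed drug-placebo effect size
print(n_parallel_per_arm(d) * 2)      # total N for the parallel design
print(n_crossover_subjects(d, 0.6))   # total N for the crossover
```

With these illustrative numbers the crossover needs roughly a fifth of the subjects, which is the "far fewer patients" point; the washout and carryover caveats still apply.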

A couple of important cons: there can be washout effects with the drugs, pharmacokinetic carryover, or, even if the drug is completely washed out, the patient's depression or whatever might have gotten to a better state that lingers for some time. And because of these carryover effects, you can't be totally certain that the baseline of phase two is the same as the baseline of phase one. And that's an important issue.

But for diseases with stable baselines, and I think in the CNS space things like adult ADHD, this could be something you would consider, perhaps in proof of concept rather than confirmatory, though. I'll leave that to my colleagues from the FDA.

Sequential parallel design. This was introduced a long time ago and has been published on extensively. This is a design where some of the patients get drug in phase one and others get placebo; they are randomized just like in a typical parallel-arm randomized study. However, in a second phase the placebo nonresponders specifically are re-randomized to receive placebo or drug. So this has a couple of advantages.

One is that there are two phases from which you can combine the data. And the other is that this second phase enriches for placebo nonresponders, just like the randomized withdrawal design enriches for drug responders. And this has been published in the literature. This is a slide that hasn't been updated in a while, but even a few years ago results had been reported from quite a few trials.

There was a reduction in placebo response in phase two. The drug-placebo difference improved. And the p values were better and so forth. So this is an important trial design to know about. Dr. Farchione will talk about, I think, one example of this having been used recently. It's a little bit hard because you can't really do within-trial comparisons of different trial designs. That's a limitation.

So these are all cross-trial comparisons, really. And there are some advantages and disadvantages. By using patients twice, you might be able to do the trial with somewhat fewer patients, save money, save time. On the other hand, there are two phases, so in that sense it might take a little longer. So various pros and cons, like anything.
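
As a sketch of how the two phases are combined, published SPCD analyses typically use a prespecified weighted sum of the stage-wise drug-placebo differences; the weight and the numbers below are illustrative assumptions, not values from the talk.

```python
def spcd_estimate(delta1, delta2, w=0.6):
    # prespecified weighted combination of the stage 1 (all comers) and
    # stage 2 (placebo nonresponders only) drug-placebo differences
    return w * delta1 + (1 - w) * delta2

# hypothetical stage-wise differences in scale points
print(round(spcd_estimate(2.0, 3.0), 2))  # 2.4
```

The weight w is fixed in the protocol in advance, which is one of the issues regulators care about, as Dr. Farchione discusses later in the session.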

And then I'm going to talk about placebo lead-in. So historically people did single-blind placebo lead-ins where all patients would get placebo for the first week or so blinded to the patient, not to the staff. And then if they had a high placebo response, they would be excluded from the study.

Typically it was about a week and about a 30% placebo response criterion, but it varied. Trivedi and Rush did a great review of this, over a hundred trials as you can see, and found little evidence that it really reduced placebo response or improved drug-placebo separation. This is some work from my early days in the 2000s at Eli Lilly, when I worked on Cymbalta, duloxetine, for about seven years. We did something called a variable-duration placebo lead-in, where the design, as it was presented to the patients and to the site personnel, was that randomization would occur anytime between visits two and four. That meant they were on placebo for zero, one, or two weeks; usually, in fact, they were on for one week.

This has some pros and cons again, practically. The placebo lead-in adds a week or two of timeline and cost. And the way this was designed, to maintain the blind, the patients that you, air quotes, throw out for having too high a placebo response have to be maintained throughout the study, which costs money and means that your overall N might need to be higher. So there are time and money implications.

When we looked at this, Craig Mallinckrodt, a statistician, published on it. And we found that the average effect size did go up pretty substantially. But you also lost some N when you excluded placebo responders, so the frequency of significant differences did not go up substantially in this analysis.
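
A minimal sketch of the lead-in screening logic described above, assuming hypothetical HAMD scores and the roughly 30% improvement cutoff mentioned; the column names and data are made up for illustration.

```python
import pandas as pd

# hypothetical lead-in data: HAMD at screening and after a
# one-week single-blind placebo lead-in
df = pd.DataFrame({
    "subject": [1, 2, 3, 4],
    "hamd_screen": [24, 26, 22, 28],
    "hamd_week1": [23, 17, 21, 20],
})

# exclude anyone improving 30% or more on placebo during the lead-in
improvement = (df["hamd_screen"] - df["hamd_week1"]) / df["hamd_screen"]
randomized = df[improvement < 0.30]
print(list(randomized["subject"]))  # subject 2 is screened out
```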

Moving on. Dr. Khin referred to this study by Arif Khan where flexible-dose trials did better than fixed-dose. I would say that the database that Dr. Khin presented from the FDA is bigger, with less publication bias and things like that, so I would lean in favor of preferring that. But I would also say that, if you focus on my last bullet point, there is clinical intuition about this. Ask yourself the question: if you had a case of depression, and you could go see a doctor that would only prescribe 20 milligrams of Prozac to every patient, or a doctor that would prescribe 20 milligrams and maybe titrate down if you're having side effects, and maybe titrate up if you're not, which doctor would you rather go to?

So I think on some level it has good face validity that adjusting the dose to individual patients should lead to better efficacy and better assessment of true tolerability and safety, and that should do a better job than adjusting the dose of placebo. But importantly, flexible-dose studies have two arms, one drug with a flexible dose and one placebo, while fixed-dose studies are frequently dose-finding studies with, say, one arm of placebo and maybe three arms of 10, 20, and 40 milligrams of drug. So the number of treatment arms is, practically speaking, confounded with fixed versus flexible dosing. And that may matter. Likewise the percentage randomized to placebo, which again is confounded with the number of arms.

If you do equal randomization in a two-arm study, you have a 50% chance of placebo; in a four-arm study, you've got a 25% chance of placebo. And again, it has good face validity, it makes good sense, that if your chance of getting placebo is much higher, you might have a higher placebo response rate, or if the chance of getting active drug is higher, a higher drug response.

And that is what Papakostas found in a meta-analysis of depression data and Mallinckrodt in a meta-analysis of schizophrenia data. So those were all confounded, and they have pros and cons. And you do need to do some dose finding with your drug anyway. So these are all designs with pros and cons that can lead to better outcomes.

Better scales. This is a simple analysis taken from that same paper with Mallinckrodt that did the double-blind placebo lead-in. We just looked at a pooled set of 22 RCTs; I think these were mostly or all duloxetine studies and depression studies. And the HAMD-17 item scale had an average effect size of about .38. But some of these subscales are five, six, seven, or eight items long, drawn from among the 17 items in the HAMD. In other words, if you throw out half of the data from the HAMD, you could actually get a better effect size. And so this is something to think about, at least in proof of concept. Obviously these subscales would need to be validated for regulatory and other purposes. But it is good to know that there are different approaches.

And too, if you have a drug that you believe, based on earlier clinical data or preclinical data, is more likely to be efficacious in certain symptom domains, that is important, too.

Statistical approaches. This is a little bit dated at this point in time, but there are a lot of important statistical issues to take into account. When I entered the industry, last observation carried forward, LOCF, was the gold standard. There have been a lot of papers published on mixed-model repeated measures, MMRM, which protects better against both false positives and false negatives and gives you better effect sizes, here almost 30 or so percent bigger, which is pretty substantial. And I'll show you that later. So better protection against false positives and false negatives means we've got more true positives and true negatives, which is exactly what we want in therapeutic development.
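
For readers unfamiliar with LOCF, here is a minimal pandas sketch of the imputation it performs (an MMRM, by contrast, would be fit with a mixed-model routine such as `statsmodels`' `mixedlm`); the data frame is hypothetical.

```python
import numpy as np
import pandas as pd

# hypothetical long-format visit data; subject 2 drops out after week 4
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "week":    [0, 4, 8, 0, 4, 8],
    "hamd":    [24.0, 18.0, 15.0, 26.0, 20.0, np.nan],
})

# LOCF: carry each subject's last observed score forward
df["hamd_locf"] = df.groupby("subject")["hamd"].ffill()
print(df["hamd_locf"].tolist())  # subject 2's week-4 score fills week 8
```

The criticism of LOCF is visible even in this toy example: the dropout is treated as if their last score were their endpoint score, which biases the estimate when dropout is related to outcome.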

And I'll talk here now about different implementation strategies during the trial. Central raters, and a lot of people use different terminology here, so my terminology: central ratings are when a rater is remote and actually does the assessment. They are asking the questions, they're hearing the answers, they are asking for clarification, they are doing the scoring, etc. And these raters can be more easily blinded to protocol pressures and more easily independent of pressures to meet enrollment and so on and so forth. Note here, I was previously an employee, stockholder, and consultant to MedAvante, which was one of the companies that pioneered central ratings. I no longer have any stock or any financial conflicts of interest, but I did work with them for a while.

One advantage to centralized ratings on the right is that you can simply use fewer raters which reduces the variance that all of us humans are going to contribute. These people can be trained together more frequently and more consistently. And that can reduce variability, too.

Just some perspective, and Tor presented some nice examples from other therapeutic areas, too. In psychiatry, in CNS, most of our outcomes are subjective and highly variable and probably need to be improved upon in some ways. Despite that, in other areas where there is probably less inherent variability, they have already standardized centralized blinded review, or assessment by at least a second or a third person, for lots of other types of therapeutics. And these are relatively old guidances from the EMA and FDA mandating this in other therapeutic areas.

So then, to get back to the data on centralized ratings, MedAvante was able to conduct about seven studies where they did within-study comparisons of site-based ratings and centralized ratings. And across these seven studies, my interpretation, and you can look at the data, is that about five of seven were green: they clearly showed lower placebo responses or, if there was an effective drug, better drug-placebo separation with centralized ratings. And two showed pretty equivocal or not impressive differences.

And again, I'm a former employee and consultant to MedAvante. Here is one example, a large GAD study that had escitalopram as an active comparator. And you can see the effect size was about twice as big in HAM-A points; the Cohen's d effect size here was about twice as big. And the chart we put together when I was at MedAvante illustrates that a doubling of the Cohen's d effect size means that you can either reduce your sample size by 75% and still have the same statistical power, or you can keep a sample size of, say, N of 100 and your power goes up from about 60% to almost 100%.

The more important way to read these powers is that your chance of a false negative, your chance of killing your drug when you shouldn't have, is 38% with the smaller effect size and less than 1% with the larger one.
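
The arithmetic behind those false-negative numbers can be sketched with the usual normal-approximation power formula; the effect sizes 0.32 and 0.64 below are illustrative values chosen to match the roughly 62% versus >99% power figures, not the actual study values.

```python
from math import erf, sqrt

def power(d, n_per_arm, z_alpha=1.95996):
    # normal-approximation power for a two-arm, equal-allocation study
    z = d * sqrt(n_per_arm / 2) - z_alpha
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(power(0.32, 100), 2))   # ~0.62, i.e., a 38% false-negative risk
print(round(power(0.64, 100), 3))   # ~0.995, i.e., <1% false-negative risk
print(round(power(0.64, 25), 2))    # a quarter of the N, same ~62% power
```

The last line is the 75% sample-size reduction claim: doubling d quarters the required N at constant power, because power depends on d times the square root of N.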

So then there are other approaches beyond having a central rater actually do the assessment remotely. You can have a third party review the work of the site-based raters. MedAvante, their competitors VeraSci, Signant, and others all offer these services now, and other companies do, too. And I don't know of any reason to prefer one versus the other.

So you can review the source documents, audio or video recordings. This looks like it should work. It has good face validity. I've run trials with this. But I'm just not aware of any control data. I haven't seen studies where people have done third-party remote feedback in, say, half the sites or half the raters and not the other half and shown results. If you have those data, please send them to me. I'd love to incorporate those. 

But, as I said, it has good face validity. If you're giving people feedback on the quality of their assessment, the raters should do nothing but improve. There is an effect called the Hawthorne effect: people behave differently when they know they are being monitored. This should work.

And let me talk a little bit about operations. Doing central ratings is pretty burdensome: you have to coordinate between a rater who is somewhere else, maybe in a different time zone, and the patient and the site. It's expensive and labor intensive. Third-party review is less labor intensive because you don't have to review all the recordings, and it can be done not in real time. So it's less burdensome and less expensive.

Not clear exactly how efficacious it is, but it has good face validity. Or just replace those human raters with computers. There have been a lot of different groups that have done work on this. And I'm going to jump right into some data.

These are data from, you'll recognize, duloxetine again. And John Greist was one of the early pioneers in this at a company called Healthcare Technology Systems. And this was done with patient self-report using IVR, interactive voice response. So just basically an old-fashioned keypad on a phone is good enough to do this, and the patients self-report. And for those of you that don't know, separating 30 and 60 milligrams of duloxetine is really hard; we never really saw this with clinical rating scales.

But patients self-rating using a computer saw really nice signal detection, and really rapid signal detection, within days. And this is just another example with a different measure, the PGI. And again, really impressive separation on these. Or, humans are good and computers are good, so why not combine both? Gary Sachs founded a company called Concordant many years ago. It's been merged into other companies and is part of Signant now. And they showed that if you did a clinician rating and a patient self-rating by computer and compared them, you could learn a lot from the points that were discordant. And you could learn a lot about severity ratings but also inclusion/exclusion criteria, diagnosis, things like that. So that's valuable.

Let's talk about professional patients quickly. This is just an anecdote, and I generally stay away from anecdotes, but I found this really compelling. This subject returned to the site with the unused pills from their pill bottle. Unfortunately, he had a pill bottle from a different trial site, same sponsor and protocol. And this is probably a common problem. This is a phase three program in depression where they had up to 4% duplicate subjects, at least in screening. It could be higher; we don't know how big the problem is. But we know it's a tip-of-the-iceberg issue. There probably aren't too many patients bold enough to try to enroll twice at different sites in the same study, but they might enroll sequentially. They might go through multiple screenings until they get in. They might be in different studies by different sponsors for the same or even different indications: in a bipolar study this week, a schizophrenia study next month, and a depression study the month after.

And these patients may or may not be compliant with medications and also protocol features. Anecdotal data on subject selection: there are lots of websites out there that will teach you how to be a bad patient in a clinical trial. And I just want to note, not that it's a bad thing, I love ClinicalTrials.gov, I use it a lot, but any tool, or almost any tool, can be used for good or bad things.

And the reason I mention this to you again, as you are posting your trials on ClinicalTrials.gov you want to be transparent enough to share what you need to share, but you might not want to help them too much with specific details of certain inclusion/exclusion criteria that are subjective and can be, for lack of a better word, faked.

The top three of these are all companies that check for duplicate patients that might be in your study and another study in their database. I've worked with all of them. And worth noting, this is relatively inexpensive; you just have to collect a few demographics on each patient at screening. So the site and patient burden are pretty minimal, too.
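
Conceptually, these duplicate-subject checks can work by fingerprinting a few screening demographics and comparing against a shared database; this is a toy sketch with made-up fields and data, not any vendor's actual algorithm.

```python
import hashlib

def fingerprint(initials, dob, sex, zip3):
    # hash a few screening demographics so sites can check for
    # duplicates without passing identifiable data around directly
    key = f"{initials.upper()}|{dob}|{sex.upper()}|{zip3}"
    return hashlib.sha256(key.encode()).hexdigest()

seen, dupes = set(), []
screens = [
    ("JD", "1980-03-14", "M", "100"),
    ("AB", "1975-07-01", "F", "191"),
    ("JD", "1980-03-14", "M", "100"),  # same person, second site
]
for s in screens:
    fp = fingerprint(*s)
    if fp in seen:
        dupes.append(s)
    seen.add(fp)
print(len(dupes))  # 1 duplicate flagged
```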

And AiCure is really more of a medication adherence platform. But of course the really bad professional patients don't want to take the medications either, so there is some overlap between professional patients per se and medication adherence. Medication adherence: I'm going to go through the rest of this quickly in the interest of time. It is difficult to know with certainty, and not as helpful if done after randomization, certainly if you need intent to treat. But PK collection is important. One way to do it is just PK collection; that is a gold standard that tells you the drug is in the patient's body. I'm going to skip this slide, too.

If half the patients don't take their medicine, you can imagine that the power is very bad. And I did consult with AiCure previously; that's an important disclosure, too. The reason I like AiCure is not so much that I consulted with them; there are many medication adherence platforms out there on the market. This is the only one where I've seen evidence that their platform is consistent with, correlates with, predicts PK values. So if I were you, that's an important question to ask. Then you also have to ask about all of the operational issues, too.
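
To see why nonadherence is so damaging, here is a crude sketch assuming nonadherent patients contribute no drug effect, so the observed effect size shrinks proportionally; the true effect size of 0.5 and N of 100 per arm are illustrative assumptions.

```python
from math import erf, sqrt

def power(d, n_per_arm, z_alpha=1.95996):
    # normal-approximation power for a two-arm comparison
    z = d * sqrt(n_per_arm / 2) - z_alpha
    return 0.5 * (1 + erf(z / sqrt(2)))

d_true = 0.5  # assumed true effect size with full adherence
for adherence in (1.0, 0.75, 0.5):
    # crude assumption: nonadherent patients contribute no drug effect,
    # so the observed effect size is diluted proportionally
    print(adherence, round(power(adherence * d_true, n_per_arm=100), 2))
```

With these made-up numbers, power falls from roughly 94% at full adherence to roughly 42% when half the patients don't take the drug.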

Biomarkers. When we've got biomarkers, they're great. If you've got a PET ligand that can help you narrow down the dose and really demonstrate that you are engaging the target, that's fantastic. This is just an example of a PET ligand. This is another biomarker, hot off the press; it was presented just a few weeks ago at ASCP. And the idea here is basically taking baseline demographics and putting them all into an AI model to see what predicts placebo response and drug-placebo separation.

This is another company that I work with currently so there is that disclosure with as many grains of salt as you believe. We did a blinded analysis of baseline EEGs and identified three clusters in a placebo-controlled Zoloft study.

In the overall study, it just failed to separate. And we identified three distinct clusters: one with a huge Cohen's d effect size and p value, even in a little less than half the population; another cluster that really weren't responders at all; and a third cluster, less than 20% of the population, with fantastic placebo responders and terrible drug responders.

So this needs more validation, like all biomarkers. And I just want to leave this with the point that biomarkers are great as we continue to understand the biology and pathophysiology better, but first we are going to have to validate them against the gold standards, and the current gold standards are variable and biased and imperfect. So, to close on a relatively optimistic note, this is a red/yellow/green summary. Green is good; yellow is questionable; red is probably not worth it. It is my own personal subjective assessment. But the takeaway is that a lot of these things can be helpful, especially when fit for purpose with the therapeutic that you are developing, the phase of development, and your strategic goals for that therapeutic.

So I'll end there. Thank you very much for your attention. Look forward to questions and so forth.

TOR WAGER: Great. Thank you, Mike. For time reasons, we're going to go on to our next speaker. But just to let everybody know, there's a Q&A and people are posting questions there. And our panelists can answer questions there in the Q&A panel as well as in the -- during the discussion phase. So keep the questions coming, thank you.

All right. Dr. Farchione, thank you.

Current State of Placebo in Regulatory Trials

TIFFANY FARCHIONE: Thank you. Let me just get this all cued up here. So thanks, everybody, and good afternoon.

As we've already said, my name is Tiffany Farchione, and I'm the Director of the Division of Psychiatry in the Center for Drug Evaluation and Research at the Food and Drug Administration. So because I'm a Fed, I have no conflicts to disclose.

I'm going to be providing the regulatory perspective of placebo response in psychiatric trials. So, so far today you've heard a little bit of an historical perspective from Dr. Khin, who is actually my former team leader and peer reviewer. And she showed us that not only do we have a high rate of placebo response in psychiatry trials, but the extent of that problem has actually been increasing over time.

And then Dr. Detke just presented some of the strategies that have been proposed for dealing with this problem, and in some cases they are of somewhat limited utility.

So I'm going to talk a little bit about the importance of placebos for regulatory decision making and give a few examples of placebo response mitigation strategies and registration studies. And then I'll go on and talk a bit about placebo response in other disease areas and end with some thoughts on what may ultimately help us to resolve this issue. All right. So I want to start first by expanding a bit on Dr. Khin's presentation and just quickly presenting some updated data. I saw that there was a question either in the chat or the Q&A about depression studies. And honestly, we don't have too much more from what she presented in depression. And also the things that we've approved more recently have different designs, different lengths of treatment and things like that so it makes it hard to combine them with the existing dataset.

But here we've got figures for schizophrenia and bipolar. And they look a little different from each other because I pulled them from a couple of different presentations. But essentially the data points in each figure represent the change from baseline to endpoint on either the PANSS, on the left, or the YMRS, on the right, in clinical trials of atypical antipsychotic medications for the treatment of either schizophrenia or bipolar I disorder.

And the drugs included in these figures are ones for which we have both adult and pediatric data. So on the left you can see that the trend for increasing placebo response over time is also evident in the adolescent trials. And then on the right, we have data from adult and adolescent bipolar I studies, which Dr. Khin didn't present. There are fewer data points on this side than in schizophrenia, and the trend is less obvious from the dots alone. But if you draw in the trend lines, which are here on the figure, you can see that the same phenomenon is also at play in the bipolar studies.

All right. So let's go back to basics for a minute and talk about why we need placebos in clinical trials in the first place. So simply put, placebo-controlled studies are our bread and butter. And in order to support a marketing claim, companies need to provide substantial evidence of effectiveness for their drugs. Ni went over this a little bit as well. This is generally achieved with two positive adequate and well-controlled clinical studies. And the characteristics of adequate and well-controlled studies are outlined in the Code of Federal Regulations.

So there are seven different characteristics listed in the CFR, but one of them states that the study has to use a design that permits a valid comparison with a control to provide a quantitative assessment of the drug effect. So more often than not, that's a placebo control.

And basically, we just need some way to determine that the drug itself is actually doing something. So if the treatment response in the drug arm is greater than the response in the placebo arm, then that difference is assumed to be evidence of a drug effect. But that may be oversimplifying things just a little bit. It's important to remember the difference between an effect and a response. The response is the observed result, like the change from baseline on a PANSS or a MADRS score. The drug effect can be one component of that, but adherence to the drug, timing of the assessment, and other factors also influence the observed response.

And yes, a portion of the drug response is probably attributable to placebo effect. Same thing with placebo response. Yes, the placebo effect itself is a component of the response observed. But you also have things like the natural history of the disease or regression to the mean or, you know, when we talk about adjunctive treatment, it could be that the other treatment is part of that effect. All of those play a role in the observed response in a study.

So what exactly is it that can account for the placebo response rate in our clinical trials? Dr. Detke went over several of these examples earlier. But let's start with expectancy, and this is a big one. If folks expect to have some benefit from a drug that they're taking, they oftentimes do experience some benefit. The structure of a clinical trial can also contribute to the placebo response. Folks are being seen on a regular basis; they have a caring clinician that they interact with routinely. Those things can in and of themselves be somewhat therapeutic.

The fact that we use subjective outcome assessment is another aspect of this that I want to highlight. Because in psychiatry trials, we can't draw labs or order a scan to ensure that we have the right patients in our trials or to objectively assess their response to the drug. What we have are clinician interviews and patient reported outcomes. And oftentimes these outcome assessments involve a report from a patient that is then being filtered through a clinician's interpretation and then translated into a score on a scale. So there is a lot of room for variability in that.

The distal nature of that assessment from the actual biological underpinnings of the disease can be problematic, and it's certainly prone to misinterpretation and to biases also. So again, Dr. Detke also mentioned how enrolling inappropriate participants can impact placebo response. If you have folks in a trial who don't actually belong in the trial, whether that's the professional patients that he finished with, or folks who just don't quite meet the inclusion criteria, or who have been misdiagnosed somewhere along the line, any number of things, that's going to increase the variability in your study and could potentially result in increasing the placebo response. Of course, there are lots of other factors that can contribute to the placebo response, but because Dr. Detke spent a lot of time on this already, I just wanted to highlight these few.

So next I want to talk a little bit about ways in which we could potentially manage the placebo response in clinical trials. First, I want to present one option that we actually have not yet accepted for new drugs in psychiatry, but it's an option that actually takes placebo out of the equation entirely. We have a bunch of approved antidepressants, a bunch of approved antipsychotics. So at this point you might be asking why we can't just do non-inferiority studies and attempt to demonstrate that the new drug is no worse than some approved drug.

So the complicating factor here is that conducting a non-inferiority study requires defining a non-inferiority margin. And in a non-inferiority study, you are trying to show that the amount by which the test drug is inferior to the active control is less than that prespecified non-inferiority margin, which is M1.

And M1 is estimated based on the past performance of the active control. But, unfortunately, because of the secular increase of placebo response over time, we can't really estimate M1. It's a moving target. So even though we have things that have been approved in the past, we don't know that the margin by which the active drug was superior to placebo in the clinical trial that supported its approval is the same margin that would be observed today under similar circumstances. So because we can't set a non-inferiority margin, we can't do non-inferiority trials, at least not for regulatory purposes in psychiatry.
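
The mechanics of the non-inferiority test itself are simple; the hard part, as noted, is justifying the margin M1. A minimal sketch with hypothetical numbers:

```python
def non_inferior(diff, se, margin, z=1.95996):
    # diff = test drug minus active control on the efficacy scale;
    # non-inferiority is concluded when the lower bound of the 95% CI
    # for the difference stays above -margin (i.e., above -M1)
    lower = diff - z * se
    return lower > -margin

# hypothetical numbers: test drug 0.5 points worse, SE of 1.0
print(non_inferior(-0.5, 1.0, margin=3.0))  # True
print(non_inferior(-0.5, 1.0, margin=1.0))  # False
```

The same data pass or fail depending entirely on the margin, which is why an unestimable, drifting M1 makes the whole approach unusable for regulatory purposes in psychiatry.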

Another strategy that's been employed in a few trials at this point is sequential parallel comparison design. And again, Dr. Detke went over this briefly so you have some idea of the principles behind this already. Now recall that this is a design in which you have two stages. And the first is intended to weed out the placebo responders so that in the second stage the drug placebo difference is amplified --

So there are some statistical concerns with this type of study design related to the relative weights of the two stages and the impact of dropouts. But we have had one application where trials that employed this kind of trial design made it to the New Drug Application stage. And this application was presented at an advisory committee meeting back in November of 2018, so there is publicly available information for me to share even though the application ultimately was not approved.

This was for a fixed-dose combination of buprenorphine and samidorphan, intended for the adjunctive treatment of major depressive disorder. Now the figure on the right-hand side was taken directly from the AC briefing book. And it shows diagrams of three studies in which SPCD was employed as part of the clinical development program.

The important thing to observe here is that you do in fact have a large placebo response in stage one and a much smaller placebo response in stage two. But what we don't see is the expected amplification of the drug placebo difference in stage two.

So as I said at the advisory committee meeting, either SPCD isn't working or the drug isn't working. So regardless of the outcome here, the important take home point is that we were able to file an application with SPCD in it. We had reached agreement with the applicant on the weights for the two stages and the analyses. And there weren't many dropouts in stage one of the studies. So we were able to overcome two of the big hurdles for this design in this program.

But if we receive another application with SPCD in the future, we're going to have to look at those issues again because they really are trial specific. So we'd advise sponsors to use consistent stage lengths and to reach agreement with us in advance on the primary endpoint and other critical trial features. And even if we reach agreement on all of those things, we're still not going to be able to agree a priori that the study will be acceptable, because some of the things that we're concerned about will remain open questions until we have the data in hand.

I already mentioned that there weren't many dropouts in stage one here, but you don't know that until stage one is done. So even if we do accept the design and the study is positive and all of these issues are resolved, labeling is still going to be super complicated if you have an SPCD.

[AUDIO INTERRUPTION] end up writing a label for this one.

All right. So moving from complicated to something much more straightforward. This is a table taken from the clinical study section of the valbenazine label. This is the data that supported the approval of valbenazine for the treatment of tardive dyskinesia. The studies that supported this application provide a good example of one of the strategies to mitigate placebo response that has been successful. And that's the use of blinded central raters.

In this study, the raters were blinded to treatment assignment and also to visit number. And using the blinded central raters was feasible here because the symptoms of tardive dyskinesia are directly observable and can even be captured on video. So they can be rated by the remote central raters fairly easily.

And then you'll note here that the change from baseline on the AIMS in the placebo arms was basically negligible.

All right. So I think it's also important to bear in mind that this phenomenon of placebo response in clinical trials is not something that's unique to psychiatry. We see it in multiple other areas of medicine. It's ultimately the reason that we have placebo controlled studies in the first place.

We do expect to see some response in a placebo group. Folks get something that they think could be an active drug and, lo and behold, they have some response. If you want to show that the observed response is, in fact, related to the active treatment, though, it's important that you demonstrate that folks on the investigational drug are doing better than folks on the placebo.

So for the next couple of slides, I'm going to show some examples of what we see in other disease areas and speculate a bit on why the placebo response rate in those trials is higher or lower than what we're used to seeing.

And I'll caveat this by noting that I pulled my examples from the most recent Office of New Drugs annual report, and I haven't done a deep dive to see if other drugs behave similarly or if my speculation here bears out consistently. But with those caveats in mind, I'm also going to try to draw some parallels to circumstances in psychiatry trials.

All right. So the first example I have here is from the clinical study section of labeling for zavegepant, which is an intranasal calcitonin gene-related peptide antagonist that's approved for the acute treatment of migraine with or without aura in adults.

The point I want to make with this example is that the endpoint here, pain, is very subjective. So similar to a lot of what we do in psychiatry, the endpoint is relying on patient report of their subjective experience.

Now, in this case, it probably helps somewhat to have a dichotomous endpoint of pain free versus not, rather than asking participants to rate their pain on a Likert scale that would introduce more variability. And honestly, as somebody who gets migraines, I can tell you that pain free is what matters. Like, a little bit of migraine pain is still migraine pain. Like, I don't want to deal with it.

Anyhow, with that kind of subjectivity, it's not too surprising that about 15% of the folks in the placebo group were responders.

Now, if you think back to that slide I showed earlier about contributors to the placebo response, some of this could be placebo effect. Some of it could just be that their migraines were resolving spontaneously within two hours anyways. Regardless, we have a pretty high placebo response rate here.

But we also have a responder rate of almost 24% in the active treatment group and a statistically significant difference on the primary endpoint of pain free at two hours.
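As a rough illustration of how a responder-rate comparison like this is tested, here is a minimal two-proportion z-test sketch. The arm sizes here are made up for illustration; only the approximate 15% and 24% responder rates echo the example above, and this is not the trial's actual prespecified analysis.

```python
import math

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sample z statistic for comparing responder proportions,
    using the pooled-variance standard error. Illustrative only."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical arm sizes of 600 each, with ~15% placebo responders
# and ~24% active-arm responders.
z_stat = two_prop_z(x1=90, n1=600, x2=144, n2=600)
```

Even with a sizable placebo responder rate, a roughly nine-percentage-point difference is easily detectable at these (assumed) sample sizes, which is the basic logic behind a statistically significant responder endpoint.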

On the secondary endpoint of relief from the most bothersome symptom, so things like photophobia, phonophobia, or nausea, both the placebo and the active groups had even higher response rates, but again, a significantly higher response in the active treatment group than in placebo.

So this is from the clinical pharmacology section of that same label. And I want to point out that this is very similar to what a lot of our drugs look like in psychiatry. We describe what the drug does at the receptor level, and then we say that the relationship between that action and the clinical effect on depression or schizophrenia or whatever is unknown. And until we have a better understanding of pathophysiology, that's going to continue to be our approach in labeling.

All right. The next example comes from the clinical study section of labeling for linaclotide oral capsules. And I have to say, when I'm talking outside of my own disease area, hopefully I'm getting these pronunciations right. But anyway, it's a guanylate cyclase-C agonist. The data here supported the irritable bowel syndrome with constipation indication.

And I think this is a really interesting example because we have two different endpoints here. Like our last example, one is a pain endpoint that's likely to be highly responsive to placebo. Again, it's subjective. But unlike the last example, it's not dichotomous. So it requires a bit more interpretation.

The other endpoint is something that's a bit closer to objective. CSBM is complete spontaneous bowel movements. So, clearly, the number of bowel movements is something that can be counted. But the endpoint itself is a little bit of a hybrid because it also involves a subjective report of the sense of completeness of evacuation.

So, interestingly, you see a much higher percentage of placebo subjects meeting the criteria for responder on the fully subjective pain endpoint than you do on the CSBM endpoint.

And I've got to tell you, Section 12 of this label is something that I dream about being able to do for psychiatry. We can only aspire to this, frankly, at this point. The language here very clearly lays out the pathway between the action of the drug and the downstream physiologic effects on constipation. And it even presents an animal model to support the drug's effect on pain. So this suggests that the drug acts on some aspect of the underlying pathophysiology of IBS-C.

All right. So, so far I started with an example of a trial with a subjective endpoint, then went to something that's a little bit more objectively measurable. Here I'm going to show data from the bimekizumab label and the studies that supported its indication for the treatment of moderate to severe plaque psoriasis in adults.

So bimekizumab is a humanized interleukin 17A and F antagonist. The endpoints in the study were Investigator Global Assessment, which is an overall assessment of psoriasis severity, and the Psoriasis Area and Severity Index. Now, you might think that these things are somewhat subjective because they are investigator assessments and, of course, require some interpretation to get to the score on these scales.

But these are assessments of the size and extent of the psoriasis plaques, things that are directly observable. And both scales have anchors that describe what appearance plaques of a given severity would have. So that gives you a framework for how to rate these different lesions.

So even though these are global assessments and you might think of clear and almost clear as being analogous to something like improved or much improved on a CGI, we're really talking about very different things.

Here, both what the patient is experiencing and what the clinician is observing are things that you can see and measure. You're not asking patients whether they feel like their skin is redder; you can see the erythema. And here you can see a much lower rate of placebo response in the studies. When you're directly observing the pathophysiology in question, and it's something that is objective or relatively objectively measurable, you get less placebo response.

All right. And Section 12 of this label isn't quite as definitive as the linaclotide label in terms of directly linking the drug effect to pathophysiology, but it's pretty close. And, again, it's probably a combination of the relatively objective outcome measures and the tight link between drug action and pathophysiology that's contributing to the low placebo response in these trials.

Finally, I want to put up an example that, of course, has been in the news a lot lately. This is from Section 14 of the tirzepatide label, and this is one of the GLP-1 receptor agonist drugs that's indicated for chronic weight management as an adjunct to a reduced-calorie diet and increased physical activity.

Now, there are all sorts of things that can contribute to placebo response in weight management studies. So, for example, the folks who are in these studies are likely to be motivated to lose weight in the first place. They're required to engage in diet and exercise as part of the study. And even though it's difficult, sometimes folks just lose weight.

So even though weight is something that is objectively measurable, there are multiple physiologic and behavioral factors that may contribute to changes in weight. So there's a lot of variability, and it's traditionally been pretty difficult to show improvement in weight loss trials, or at least to show enough improvement that it overcomes the adverse events that are observed in the trials.

Anyways, the primary outcome in these studies was the percent of patients losing at least 5% of their body weight [AUDIO INTERRUPTION]. Now, you'd think that that would be pretty difficult to surpass, but these studies still managed to show a treatment difference because the active treatment works like gangbusters.

So another way to overcome concerns about placebo response is to find something that really has an impressive treatment effect. Then, even if you have a massive placebo response rate, you'll still be able to show a difference. And so far we don't have much of anything with this kind of an effect in psychiatry, unfortunately.

And then again, once again, in Section 12 we have a mechanism of action description that links the drug action directly to the clinical effects. The drug binds to a physiologic regulator of appetite, the person taking the drug eats less. It's pretty straightforward.

All right. So what lessons can we take away from all of this? Ultimately, the point that I want folks to take home from the examples I've shown in psychiatry and in other disease areas is that there are things that we can do to help mitigate the placebo response in our clinical trials. For things like SPCD or other nontraditional study design elements, I would advise sponsors to talk to us early and often. There are still some methodological issues that, you know, need to be overcome, but we're willing to consider SPCD studies as long as we're able to agree on specific aspects of the design and analysis.

Folks can also do things like improving rater training to mitigate some of the variability that's just inherent in asking human beings to assign a rating to something that is subjective.

Still related to measurement, but maybe more of a medium-term than a short-term solution, it could be worthwhile to develop better clinical outcome assessments. The scales that we use in clinical trials now have been around a long time. They were mostly developed by expert consensus, and they're face valid, for sure, and obviously we have precedent for them, but they've been around longer than modern psychometric principles, quite frankly. So developing new ones would potentially be welcome.

Anyways, in terms of other sources of variability, I'd refer back to Dr. Detke's presentation and his comments on the number of sites, enrollment criteria, and so on. Essentially, quality controls on study design and implementation. But ultimately, what's really going to be the game changer here is when we can develop drugs that actually target pathophysiology. That's when we'll finally be able to take some of this variability and subjectivity out of our clinical trials and get much more objective measures.

In the best of all possible worlds, we would have a much better understanding of pathophysiology of psychiatric disorders. We'd be able to develop drugs that target the pathophysiological underpinnings of our diseases, and we would even be able to define study entry criteria more appropriately because we wouldn't be relying on subjective assessments for diagnosis or inclusion.

We'd be able to get that blood test or get that scan that can tell us that, yes, this is, in fact, what's going on here, and this is a patient who is appropriate for this clinical trial.

And I understand that we're, you know, a long way from that today, but I hope that folks will think of this as an aspirational goal, that our current state of understanding is less of a roadblock and more of a call to action.

And so with that, and recognizing that I am the one thing standing between you and our break, I will just say thank you very much for your attention.

TOR WAGER: Okay, wonderful. Thank you to all of our speakers and panelists in this first session.

Let's take a short break. We have some questions in the chat. More questions are coming in. But we have a break now until 1:50. And so I suggest that it's a short break, but we can get back on track and start then in about seven minutes. Okay? Thank you.

TOR WAGER: Okay. Hi, everybody. It's a short break, but thanks for hanging with us here and coming back after this short break.

Current State of Placebo in Device Trials

TOR WAGER: Our next session is going to be led off by Dr. Holly Lisanby and Zhi De Deng on the current state of placebo effects in device trials, and then we'll go for a series of placebo effects in psychosocial trials, and then, after that, the panel discussion. Dr. Lisanby, thank you.

Sham in device trials: Historical perspectives and lessons learned

SARAH “HOLLY” LISANBY: Thank you, Tor. And so these are my disclosures. And as Tor said, I'm going to be talking about placebo in device trials. And so although up until now in the workshop we've been talking about placebo in drug trials, which are typically given either by mouth or intravenous or intranasal, we're now turning our attention to how you would do a placebo in a device trial.

And that's where we use the term sham. So we blind device trials typically by doing a sham procedure. And the idea of sham is that the mode of application of the device and the ancillary effects that the device elicits are meant to be as closely matched as possible but without having active stimulation of the body or the brain specifically.

Now, one of the challenges in blinding device trials using sham procedures is that one sham does not fit all or even most. And let me explain what I mean by that.

There are a growing range of different devices. Here you see the landscape of neuromodulation devices. On the X axis is how invasive they are and on the Y axis is how focal they are. And they all use different forms of stimulation applied to the head or the body. Some are surgically implanted, others are not. And those are just the devices that directly apply energy to the head or cranial nerves.

But there's another space of devices that deliver audio or visual stimuli to affect brain activity indirectly, and these include prescription digital therapeutics and neurofeedback devices.

Now, even within one modality of device, here I'm going to use transcranial magnetic stimulation, or TMS, as an example. We have a broad range of different TMS devices. Here I'm showing you just a few of them. And while they all use rapidly alternating magnetic fields, they differ in how they apply that to the head.

So this device, for example, uses an iron core figure 8 coil. This device uses an air core figure 8 coil. Now, those are pretty similar in terms of the electric field induced in the brain, but this device uses three different types of coil that are called H coils with different coil windings that stimulate very different parts of the brain and have different ancillary effects.

The device on the left uses an air core figure 8 coil, but it has some additional bells and whistles to it. It uses neuronavigation. So there's a camera in the room and a tracker to be able to navigate the TMS coil to a specific spot in the brain that was identified before treatment on the basis of fMRI. And so there's an additional aspect of this procedure. And also it's given with an accelerated schedule, where ten treatments are given a day, each day, for five days.

Now that brings us to some of these ancillary effects of TMS. One is the intensive provider contact in a high-tech environment. And I'm showing you here just a few pictures from our lab. And this is intensive contact. It can range from one session a day for six weeks to ten sessions a day over five days. And this really highlights the importance of blinding, not just for the patient, but also the coil operator and the raters.

Now, there are also sensory components to TMS. It makes a clicking noise, which is induced by the vibration of the coil within the casing. And this is quite loud. Even with earplugs, you can't mask the bone conduction of the sound. And in addition to the sound, TMS can also induce scalp sensations. These sensations can range from just feeling a tapping on your head to scalp discomfort, even to scalp pain.

And TMS can also evoke movements. So even if you're not over the motor cortex, if you're over the frontal cortex, which is the target for depression treatment, this can cause movement in the face or the jaw, which can come from directly stimulating scalp muscles, facial nerves, or cranial nerves.

You can also, depending on the shape of the coil, get some evoked movement from the motor cortex. And this is more common with the more diffuse coils, such as the H coil configurations.

Now, not only are these ancillary effects important for blinding of clinical trials, they also represent important confounds for the physiological studies that we do with TMS, where we want to use TMS to probe brain function, such as coupling TMS with EEG to study evoked potentials or coupling TMS with fMRI.

Now, sham TMS has evolved over the years. I'm showing you in the center of this photograph active TMS, and in the corners are four different types of early forms of sham TMS, which were called coil tilt TMS configurations, where you tilt the coil off the head so that the magnetic field is sort of grazing the scalp. You get some sensation, you get the noise, but you're trying to not stimulate the brain.

Now, while this coil tilt sham does induce some scalp stimulation and clicking, it lacks operator blinding. But even worse than that, what we showed from intracerebral recordings of the electric field induced in the brain by these different coil tilt shams in non-human primates is that, compared to active TMS, which is the top line, one of these four sham coil tilt configurations was almost 75% of the strength of active TMS. And that's the second line from the top with the black circles.

And so some forms of these coil tilt shams were actually biologically active. And that represents a confound when you're trying to interpret the older literature or do meta-analyses of TMS clinical effects.

The next evolution in the step of sham TMS was shielding. And for example, figure 8 coils could have a metal shield between the coil and the head that blocked the flow of the magnetic field. And here, this E shield has both the magnetic shield as well as a printed circuit board on top of the coil that was meant to be fired antiphase with the TMS in order to try to cancel out the magnetic field at the surface of the head.

These types of approaches look and sound like active TMS, they provide operator masking, and they're biologically inactive. However, they don't feel like active TMS. Here you're looking at subjective ratings of scalp pain, muscle twitch, and facial pain, with active TMS in the red and sham in the black. So there's not appropriate masking or matching of these ancillary effects.

But that sham, the E shield sham, was used in the pivotal trial for depression in adults. And that pivotal trial missed its primary endpoint, which is shown here in the yellow box, where active TMS is in the blue line and sham is in the gray line.

Ultimately, TMS became FDA cleared in 2008 for a limited indication based on this post hoc analysis, which I'm showing you here, where about half of the patients in the pivotal trial who had failed only one antidepressant medication in the current episode showed a significant separation between active in the black line and sham in the gray line. However, those who had more failed trials in the current episode, from two to four, did not separate between active and sham.

Subsequently, the label was expanded and CMS coverage determinations have been provided, but that was on the basis of additional evidence, which came from additional randomized controlled trials as well as open label experience and literature reviews.

Now, that same sham has been used in a pivotal trial for TMS for adolescent depression, which also failed its primary endpoint and failed to separate active from sham. Here you see the antidepressant scores on the Y axis with active TMS in the blue and sham in the red, and they were indistinguishable.

And the sham is described in the paper, as I'm showing you here in the quote, and this is another one of these metal shield or E shield shams that did not provide scalp stimulation.

Now, ultimately, FDA did clear TMS down to the age of 15 on the basis of retrospective analysis of real world data that were derived from a registry of over a thousand adolescents over a span of 15 years, all of whom were obviously receiving off label treatment, as well as a literature review. And the status of insurance coverage is to be determined.

The next step in the evolution of sham TMS was scalp stimulation, and that's what we used in the OPT TMS trial of almost 200 patients. And this was the first study to use scalp stimulation. And you see those little patches on her forehead. Those are electrodes through which we administered weak electrical stimulation to the scalp along with auditory masking in order to better mimic the ancillary effects of TMS.

And here you can see the ratings of scalp discomfort and headache were similar between active TMS in the red and this scalp stimulation sham in the black.

We did assess the integrity of the blind in the OPT TMS trial, and we found that the blind was preserved, with a very low percentage of extremely confident correct responses. And we found a separation between active and sham in this study, with 14% remission with active and 5% remission with sham. That was statistically significant.

Shams in the modern era have kept this idea of scalp stimulation and auditory masking, but they come in different versions that are now available as turnkey systems. For example, this sham, which has an active magnetic stimulation on one side of the coil and no stimulation on the other side, but the sides are identical in appearance, and this comes along with an adjustable output for electrical stimulation of the scalp, which is synchronous with the TMS pulses that's built into the system.

Now I'm going to shift from TMS to a different form of stimulation, transcranial direct current stimulation, or tDCS. This is from one of the randomized controlled trials that we conducted of active versus sham tDCS for depression in 130 patients, which failed its primary endpoint.

Now, I'm showing you the depression response on the Y axis for unipolar patients on the left and bipolar patients on the right. And although we did not find active tDCS to be better than sham, we found something curious, which was that sham was better than active, particularly in the unipolar patients. And that caused us to ask, well, what is going on in our sham tDCS intervention?

Here's what our active intervention looked like. We stimulated at 2.5 milliamps continuously over 30 minutes. The sham, which we thought was biologically innocuous, actually had these brief ramp ups and then ramp downs intermittently during the 30 minutes.

But in addition to that, it had a weak current of 0.032 milliamps that was continuous throughout the stimulation. We weren't aware of this continuous stimulation, and it raises the question whether this waveform might have had some biological activity. And certainly when you find sham better than active, one has to ask that question.
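To make the difference between the two arms concrete, here is a minimal sketch of the current profiles just described. The 2.5 mA active level, the 30-minute session, and the 0.032 mA continuous sham current come from the talk; the number, timing, and triangular shape of the sham ramps are hypothetical.

```python
def active_current(t_s: float) -> float:
    """Active arm: constant 2.5 mA over the 30-minute (1800 s) session."""
    return 2.5 if 0 <= t_s <= 1800 else 0.0

def sham_current(t_s: float,
                 ramps=((0, 30), (885, 915), (1770, 1800))) -> float:
    """Sham arm: brief ramp-up/ramp-down bursts (hypothetical timing)
    riding on a continuous 0.032 mA background current."""
    if not 0 <= t_s <= 1800:
        return 0.0
    background = 0.032  # the unintended continuous current, in mA
    for start, end in ramps:
        if start <= t_s <= end:
            mid = (start + end) / 2
            # triangular ramp up to 2.5 mA at the midpoint, then back down
            frac = 1 - abs(t_s - mid) / ((end - start) / 2)
            return background + 2.5 * frac
    return background
```

The point of the sketch is that the sham profile never actually returns to zero between ramps, which is exactly the property that prompted the question about biological activity.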

Now, this question of how to sham tDCS trials has been addressed in the literature. In this study in 2019, they reported that there was a great multiplicity of sham approaches being used in the field. And some of these might have biological action.

Now, in 2018 we had conducted an NIMH-sponsored workshop and published a report from that workshop in which we urged the field to present the rationale for and the effectiveness of the sham stimulation when you do studies. And we observed that this is rarely documented. We also encouraged the field to use blinding checklists during study design, reporting, and assessment of study validity. And we still encourage this. It's still timely.

Now I'm going to move from tDCS to a form of implanted stimulation. So TMS and tDCS are non-surgical. Now we're dealing with a surgically implanted device, vagus nerve stimulation.

So it's a surgically implanted pulse generator, and sham is done by implanting the device but not turning it on. The pivotal trial of VNS for depression failed its primary endpoint, which is shown in the yellow box here. But it was subsequently FDA cleared based on a non-randomized open-label comparison with treatment as usual, as you see here. Insurance coverage was frequently denied, which limited utilization.

More recently, there was a study called the RECOVER trial, which stands for randomized controlled blinded trial, to demonstrate the safety and effectiveness of VNS as an adjunctive therapy versus no stimulation control.

This RECOVER study was designed in accordance with the CMS coverage with evidence development decision memo. The study is not yet published, to my knowledge, but according to a press release from the company that sponsored it, after one year of active VNS versus sham, which was implantation without the device being turned on, this study failed its primary endpoint.

And I'm quoting here from the press release that it failed due to a strong response in the sham group, which they said was unforeseen in the study design. And I would say that we might have foreseen this based on the original pivotal trial, which also failed to differentiate active versus sham.

Now I'm going to move to deep brain stimulation. And this is the randomized controlled trial that we conducted on bilateral subcallosal cingulate DBS for depression. Sham was done by implanting but not turning it on. And this study, in a futility analysis, failed to differentiate between active and sham. So you can see this has been a recurring theme in the studies that I've shown you.

Now, there are some specific challenges to blinding DBS trials. By the time you get to DBS, you're dealing with a very severely ill, depressed population, and that clinical severity raises safety concerns about the relapse that may occur with crossover designs, like crossing over from active to sham.

There are unique things that may unblind the study, such as battery recharging, or batteries that don't need recharging, which could cue a patient. And there's also a need for rigorous safety protocols to protect patients who are so severely ill during their sham phases, due to the risk of clinical worsening.

So, to conclude, sham methodology poses a lot of complex challenges for device trials. One size does not fit all. The interpretation of the literature is complicated by this variability in the sham methodology across studies and across time as the sham approaches have evolved.

Measuring the biological activity of the sham intervention before using it in a clinical trial is important and it is seldom done. And assessing the integrity of the blind is important for patients, operators, and raters. And that's why with sham procedures we need to think about triple blinding, not just double blinding.

And the shortest pathway to regulatory approval, which I gave you in the example of VNS, does not guarantee insurance coverage nor clinical adoption.

Some thoughts about future directions. We could focus on developing next generation active devices that lack these ancillary effects that need to be mimicked by sham. Some examples that you'll hear about from Zhi Deng, who's coming up next, include quiet TMS and controllable pulse TMS. We could conduct studies to validate and characterize the biological actions and expectancy effects of sham interventions. And there's a role for active stimulation of a control brain area as a comparison condition.

These are the members of the Noninvasive Neuromodulation Unit in our lab at NIMH. And I'll just show you this slide noting that we're recruiting, both for jobs and for patients in our trials. And thank you very much, and let me hand it back to you, Tor.

TOR WAGER: Wonderful. Thank you, Holly. All right. I think we have Zhi up next. So please take it away, Zhi.

Challenges and Strategies in Implementing Effective Sham Stimulation for Noninvasive Brain Stimulation Trials

ZHI DE DENG: I will share screen and maximize it. Good day, everyone. Thanks for having me here today. And for the next few minutes, I will discuss the challenges and strategies in implementing effective sham stimulation for noninvasive brain stimulation trials.

Dr. Lisanby has already given a very nice overview as to why this topic is crucial as we strive to improve the validity and reliability of our neurostimulation device trials. I'll be discussing in more depth the physical characterizations, computational modeling, and some measurements that we took of various sham strategies, and discuss their trade-offs in case you are interested in picking, implementing, or improving a sham technique. And I'll be focusing primarily on TMS and tDCS.

Before we proceed, I need to disclose that I am an inventor on patents and patent applications owned by various institutions. Some of them are on brain stimulation technology. Additionally, this work is supported in part by the NIMH Intramural Research Program.

So when we talk about ... is this panel in the way? Let me put that aside.

TOR WAGER: It looks good. I don't think we can see it.

ZHI DE DENG: Okay, good. So when we talk about creating a valid sham TMS, Dr. Lisanby has already mentioned that there are several critical elements that we need to consider.

Firstly, the sham should look and sound like the active TMS to ensure blinding. This means that the visual and auditory cues must be indistinguishable between sham and active conditions.

Secondly, the sham should reproduce the same somatic sensations, such as coil vibrations and scalp nerve and muscle activation. This sensory mimicry is essential to maintain the perception of receiving active stimulation.

And finally, perhaps the most important one: there should be no active brain stimulation, which means that the electric field induced in the brain should be minimized to avoid any therapeutic effects.

For TMS, there are several categories of ways to implement sham, which are loosely categorized into the coil tilt techniques, two coil configurations, and dedicated sham systems. I'm going to describe each of them in some detail next.

So Dr. Lisanby has already covered the coil tilt technique, which was pretty popular in the early days of TMS. By angling the coil 45 or 90 degrees relative to the tangential plane of the head, one can minimize the stimulation to the brain. At least, that was the thought.

It turns out, through modeling and also intracranial recordings of induced voltages, that some of these coil tilt techniques remain biologically active. Here you see simulations of various coil tilt manipulations on a spherical head model. Up here we have the active figure-8 stimulation producing a single focus of electric field directly underneath the center of the figure-8 coil.

When you tilt the coil 45 or 90 degrees and look into the brain, there is considerable residual electric field still induced with these coil tilt techniques.

A better, very clever way, popularized by some groups in Europe doing motor excitability studies, involves a two-coil configuration. You use two TMS coils attached to two different TMS stimulators, and you position these coils perpendicular to each other: one in the active, tangential configuration and one rotated 90 degrees, sitting on top of the active coil.

The advantage of this technique is that you can interleave active and sham TMS pulses in the same protocol, because you are dealing with two different TMS stimulators. In active mode, you simply fire the coil that is closer to the head, in the tangential, active configuration. In sham mode, you fire the coil that sits on top of the active coil.

However, like the coil tilt, this technique effectively involves a spacer: the sham coil's windings sit farther from the head. So the field induced in the brain is less than with the 90-degree coil tilt, but the setup also induces hardly any scalp stimulation. That means the sensation at the scalp level is decreased and not felt by the participants.
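The effect of such a spacer can be illustrated with a crude back-of-the-envelope model. This is not a real E-field simulation (figure-8 coil fields in the head require finite-element modeling); it simply treats the coil as a magnetic dipole whose field falls off roughly as 1/d^3, and all distances are hypothetical:

```python
# Sketch: why a spacer weakens the field reaching scalp and brain.
# Crude magnetic-dipole approximation (field ~ 1/d^3); all values illustrative.

def relative_field(depth_cm: float, spacer_cm: float = 0.0,
                   winding_offset_cm: float = 1.0) -> float:
    """Field at a given tissue depth, relative to the no-spacer scalp field."""
    d_ref = winding_offset_cm                      # windings-to-scalp, no spacer
    d = winding_offset_cm + spacer_cm + depth_cm   # windings-to-tissue, with spacer
    return (d_ref / d) ** 3

scalp = relative_field(depth_cm=0.0, spacer_cm=3.0)   # scalp under a 3 cm spacer
cortex = relative_field(depth_cm=1.5, spacer_cm=3.0)  # ~1.5 cm into the head
print(f"scalp: {scalp:.3f}, cortex: {cortex:.3f}")
```

The point of the sketch is simply that a few centimeters of standoff suppresses the field at both the scalp and the brain, which is why these spacer-based shams lose the scalp sensation along with the brain stimulation.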

Another implementation is a sandwich design, also involving a two-coil setup, with the coils sandwiching a metal shielding plate. In active stimulation mode, one fires the coil closer to the head; in sham mode, one fires the coil farther away. The shield limits the penetration of the magnetic field, resulting in no scalp stimulation as well as no brain stimulation.

The final category of sham systems comprises dedicated sham systems manufactured by different companies, the first of which is a reversed-current sham; Magstim has an implementation of this concept. In active stimulation, the currents in the loops flow in the same direction underneath the center of the coil, so the fields summate there.

In the sham setup, the current in one of the loops is reversed so that the field cancels at the center of the coil. This effectively creates a larger circular or oval coil, and a larger coil has slower field decay with depth, so when you actually look into the brain, substantial electric field stimulation remains.

Another technique mentioned earlier is shielding: by putting a metal shield, such as a mu-metal shield, underneath the coil, you can effectively block all of the field penetration. But this also completely eliminates any scalp stimulation, making the sensation feel different.

Another implementation strategy involves a spacer plus passive shielding. The MagVenture coil, for example, uses a large block coil in which the winding is built into only one side of the block. During active stimulation, one orients the coil so that the active winding is closer to the head; for sham stimulation, one flips the coil over so that the passive shielding is closer to the head and the active winding is farther away.

This shield-plus-spacer technique completely eliminates any brain stimulation, but it also eliminates any scalp stimulation.

A final coil design was invented by our lab several years ago: the quadrupole coil. This implementation splits the figure-8 coil into four loops, and by reversing the coil current direction in the outside loops during sham stimulation, you effectively create a smaller figure-8 coil. As we know, smaller coils have shallower field penetration, so both the scalp stimulation and the brain stimulation are reduced.

How do all of these different sham strategies stack up against each other? The criterion we want to achieve is essentially 100% scalp stimulation relative to the active condition: when we quantify the sham electric field at the scalp, we would like it to reach 100% of the E-field in the active configuration.

When it comes to brain stimulation, the sham E-field should be zero; you don't want any electric field induced in the brain in the sham condition. So one would like to maximize the contrast between scalp stimulation and brain stimulation.

But looking across the coil tilt techniques, the two-coil configurations, and the dedicated sham systems, none of these techniques perfectly achieves what we want. Either you have no brain stimulation but also no scalp stimulation, or you have residual scalp stimulation and brain stimulation at the same time, confounding clinical trial results.
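The trade-off described here can be summarized as a simple contrast score per strategy. The percentages below are illustrative placeholders, not the measured values from the study:

```python
# Sketch: ranking sham strategies by scalp/brain E-field contrast.
# An ideal sham has 100% scalp E-field (same sensation as active) and
# 0% brain E-field. All numbers are hypothetical, for illustration only.

shams = {
    "90-deg coil tilt":       {"scalp": 60, "brain": 40},
    "perpendicular two-coil": {"scalp": 10, "brain": 15},
    "reversed-current sham":  {"scalp": 70, "brain": 50},
    "shield + spacer":        {"scalp": 0,  "brain": 0},
    "quadrupole sham mode":   {"scalp": 30, "brain": 10},
}

def contrast(s: dict) -> int:
    # Reward scalp mimicry, penalize residual brain stimulation.
    return s["scalp"] - s["brain"]

for name, s in sorted(shams.items(), key=lambda kv: contrast(kv[1]), reverse=True):
    print(f"{name:24s} scalp {s['scalp']:3d}%  brain {s['brain']:3d}%  "
          f"contrast {contrast(s):+d}")
```

Under this (deliberately simple) score, a strategy with no scalp and no brain stimulation scores zero, capturing the speaker's point that eliminating brain stimulation by shielding also eliminates the sensory mimicry.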

So these are the primary challenges in implementing sham systems: either incomplete mimicry of the sensory experience, that is, of the scalp stimulation, or too much residual, possibly biologically active, electric field induced in the brain.

So why don't we take a coil that produces no brain stimulation and no scalp stimulation, and add some scalp stimulation back? This is the proposed technique of concurrent cutaneous electrical stimulation, which was used in some of the early clinical trials of TMS. It utilizes two electrodes placed relatively close together, approximately one centimeter edge to edge, underneath the center of the coil.

The placement of the electrodes is such that the current direction in the head matches that of active TMS. The current is mostly shunted through the scalp, but a little of it enters the brain.

Early implementations of this technique used a customized ECT device delivering low-amplitude square pulses synchronized to the TMS pulses. In more modern configurations, the electrical stimulation module is incorporated into a dedicated sham coil, as in the MagVenture setup, for example.

There are several ways to use this electrical stimulation. One is to carefully titrate its intensity to match the active TMS sensation. Alternatively, some labs maximize the intensity of the electrical stimulation and deliver it in both active and sham TMS conditions, to entirely mask scalp sensation in both conditions.

Now, there are some problems with this cutaneous electrical stimulation, the first of which concerns the waveform: what is the waveform of the electrical pulses that accompany the sham TMS pulses? The manufacturer specifies a triangular waveform with a 200 microsecond rise time and a 2 millisecond fall time.

When we actually measure these current pulses, though, the waveform deviates substantially from the triangular waveform specified in the manual. What we actually measured are exponentially decaying waveforms with a much longer tail than the 2 millisecond fall time of the specified triangular waveform.

What's more, if one characterizes the decay constant of this exponential decay and plots it as a function of pulse intensity, one finds that more intense pulses have shorter decay constants and are therefore more pulsatile. If you reduce the electrical intensity, you end up with a pulse waveform that is longer and longer. And I'll tell you why that's important a little bit later.
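The decay-constant characterization described here amounts to fitting tau in i(t) = I0·exp(-t/tau), which can be sketched as a log-linear least-squares fit. The pulse below is synthetic; the 3 ms tau and 6.7 mA peak are just example numbers, not the reported measurements at any particular dial setting:

```python
import numpy as np

# Sketch: estimating the decay constant tau of a measured e-stim pulse
# by a log-linear least-squares fit to its tail. Data here are synthetic.

def fit_tau(t: np.ndarray, i: np.ndarray) -> float:
    """Fit i(t) = I0 * exp(-t / tau); returns tau in the units of t."""
    slope, _intercept = np.polyfit(t, np.log(i), 1)  # log(i) is linear in t
    return -1.0 / slope

t = np.linspace(0, 20e-3, 500)            # 20 ms measurement window
tau_true = 3e-3                           # 3 ms decay constant (example)
i = 6.7e-3 * np.exp(-t / tau_true)        # ~6.7 mA peak (example)
print(f"fitted tau = {fit_tau(t, i) * 1e3:.2f} ms")
```

On real oscilloscope traces one would fit only the post-peak tail and add noise handling, but the log-linear trick is the core of the characterization: a shorter fitted tau means a more pulsatile pulse.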

A second peculiar feature of this system is that the current amplitude is not linear in the dial setting. That is, if you increase the intensity by rotating the dial on the machine, an increase from a setting of 1 to 2 is not the same as a jump from 8 to 9, for example.

And the maximum current at the maximum stimulator setting is upwards of 6.7 milliamps, which is considerably higher than other forms of electrical stimulation such as tDCS, which typically uses 2 milliamps.

There's another issue with the electrical stimulation intensity: it was advertised to scale with TMS intensity. That is, as you dial up the intensity of the TMS pulses, the intensity of the electrical stimulation should also increase.

This is not the case in our measurements. As you can see here, at two different electrical stimulation intensity settings, as we dial the TMS pulse intensity up from 50% to 90%, the amplitudes of the electrical stimulation waveforms don't really change.

Why does pulse shape matter? This has to do with the strength-duration properties of the sensory fibers underneath the TMS coil. Sensory fibers are classified in this rudimentary drawing of sensory nerves that I put up here.

There are A-beta fibers, which are larger-diameter myelinated fibers; they typically conduct faster, and they carry information about vibration, pressure, and touch. A-delta fibers are slightly smaller, about one to five microns in diameter, and they typically carry information about sharper pain. And then we have C fibers, which are unmyelinated and smaller in diameter; because of their slower conduction, they carry information about burning sensations and thermal pain.

I know this is not a very professional drawing of these nerves, and, of course, when it comes to drawing, I am no Rembrandt, but neither was Picasso.

This is actually a more professional drawing, but the important point about the different pulse shapes is that they preferentially activate different kinds of fibers with different time constants. One can actually model that using a nerve model, which I have done here, and we can show that the proportional nerve activation differs across waveforms.

In the left cluster of bars, we see the profile of proportional nerve activation for various TMS waveforms, including biphasic sinusoids, monophasic sinusoids, and controllable-pulse-width waveforms, which are near-rectangular pulses.

These TMS waveforms preferentially activate A-beta and A-delta fibers, contributing to the tapping sensation that you feel with TMS.

But when it comes to electrical stimulation with these exponentially decaying waveforms, you see that they preferentially activate C fibers. Not only that: as you lower the stimulation intensity from maximum to minimum, you preferentially stimulate more and more of the C fibers. That is, as you decrease the amplitude, the tail gets longer and longer, you stimulate more C fibers, and you create more of the burning and tingling sensation that people sometimes report with tDCS, for example, which is uncomfortable for some people.
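Why long-tailed pulses favor slow fibers can be illustrated with a first-order leaky-integrator (RC) membrane model, a standard simplification of strength-duration behavior; this is not the speaker's actual nerve model, and the fiber time constants below are illustrative, not physiological measurements:

```python
import numpy as np

# Sketch: leaky-integrator membrane model illustrating why long-tailed
# exponential pulses favor slow (C-fiber-like) membranes over fast ones.
# Time constants and pulse shapes are illustrative assumptions.

def peak_response(stim: np.ndarray, dt: float, tau_m: float) -> float:
    """Peak of v' = (-v + stim) / tau_m, integrated with forward Euler."""
    v, v_peak = 0.0, 0.0
    for s in stim:
        v += dt * (-v + s) / tau_m
        v_peak = max(v_peak, v)
    return v_peak

dt = 10e-6
t = np.arange(0, 30e-3, dt)
rect = np.where(t < 0.2e-3, 1.0, 0.0)   # brief, TMS-like near-rectangular pulse
expd = np.exp(-t / 5e-3)                # long-tailed exponential e-stim pulse

for name, tau_m in [("A-beta-like (fast)", 0.15e-3), ("C-fiber-like (slow)", 5e-3)]:
    ratio = peak_response(expd, dt, tau_m) / peak_response(rect, dt, tau_m)
    print(f"{name}: exponential/rectangular peak ratio = {ratio:.1f}")
```

The slow membrane barely responds to the brief pulse but integrates the long tail effectively, so the exponential-to-rectangular response ratio is much larger for the C-fiber-like time constant, mirroring the preferential C-fiber activation described above.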

But as you increase the electrical stimulation intensity, yes, the pulses become shorter and feel more pulsatile, but the intensity is higher, so now it feels more painful.

So that does not seem to be a way to achieve a very comfortable setup with this electrical stimulation. More importantly, it does not feel like TMS: the profile of nerve activation is very different from that of a TMS waveform.

So we did not find any perfect sham. The next order of business was to look into the clinical literature: might there be other stimulation parameters, such as intensity, stimulation site, or stimulation protocol, that are predictive of sham response, something that we can modulate and modify?

So we looked into the literature and replicated and extended a previous meta-analysis of randomized controlled TMS trials for depression. The average sample size across these trials is 35 subjects. In terms of stimulation protocol, high-frequency stimulation predominates, with low-frequency stimulation the second largest group.

In terms of intensity, we have a mixture, with most protocols administering either 100%, 110%, or 120% of motor threshold. In terms of stimulation site, most of these clinical trials use the left dorsolateral prefrontal cortex as the treatment target; that single-site stimulation, combined with bilateral dlPFC, accounts for close to 80% of the clinical trials.

In terms of targeting approach, I was surprised to find that we were still using the scalp-based five-centimeter rule, which relies on scalp measurements alone: five centimeters anterior to the motor hotspot along the scalp is taken as the location of the left dorsolateral prefrontal cortex.

In terms of sham type, a lot of the earlier studies, as Dr. Lisanby mentioned, used the coil tilt configuration, either 45 or 90 degrees. In this analysis, those still account for the majority of the studies, and only about a third of the included studies used a dedicated sham coil setup.

Manufacturers, you know, are a mix. In terms of coil type, the studies predominantly use figure-8 coils. And in terms of the number of sessions in these studies, the median is 12 sessions of treatment.

So what did we find? What are the correlates of sham response in these clinical trials? The first thing we found was that the number of sessions is correlated with sham response. Here on the Y axis, we're plotting the percent change from baseline for the primary outcome of the study, typically a depression severity rating, so down is good: antidepressant. And here we see a weak correlation between the number of sessions in a clinical trial and improved sham response.

Over a longer treatment course, participants may develop stronger expectations of improvement, and this continued engagement with the treatment process, plus regular clinic visits and interactions with a healthcare team, can reinforce those expectations, contributing to a sustained and enhanced placebo response that can also accumulate over time.

The second correlate that we found to be significantly associated with sham response is the active response: in any given clinical trial, the higher the active response, the higher the sham response. This correlation between sham and active responses may indicate that the mechanisms driving the placebo effect are also at play in the active treatment response.
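This kind of trial-level association can be computed as a sample-size-weighted Pearson correlation across trials. The sketch below uses made-up trial data, not the actual meta-analytic dataset:

```python
import numpy as np

# Sketch: sample-size-weighted Pearson correlation between active and sham
# response across trials. Trial data below are made up for illustration.

def weighted_corr(x, y, w) -> float:
    x, y, w = map(np.asarray, (x, y, w))
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

# % change from baseline in depression severity (negative = improvement)
active = [-45, -30, -50, -25, -40]   # hypothetical active-arm responses
sham   = [-30, -15, -35, -10, -28]   # hypothetical sham-arm responses
n      = [ 40,  25,  60,  20,  35]   # per-trial sample sizes (weights)
print(f"weighted r = {weighted_corr(active, sham, n):.2f}")
```

Weighting by sample size is one common choice in meta-analysis (inverse-variance weights are another); the qualitative finding is the same either way: trials with larger active responses also tend to show larger sham responses.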

This correlation might also reflect characteristics shared by any form of intervention. The finding underscores the importance of effective blinding, managing participant expectations, and accounting for placebo effects in clinical trial design and interpretation. The final correlate is the effect of time, something that was also mentioned in relation to pain medication a little earlier: Dr. Wager noted that sham response seems to be increasing over time, and we observe this effect as well.

This increase in placebo response with drugs is sometimes hypothesized to be associated with societal changes in attitudes toward certain types of treatments, greater awareness of medical research, increased exposure to healthcare information, and more advertising in general, particularly after approval of a drug or device. All together, these factors can enhance participants' expectations and belief in the efficacy of certain treatments, contributing to a stronger placebo response.

Here we see the same thing with devices. There are also other interpretations of this increased placebo response. Perhaps the demographic characteristics of participants in clinical trials have changed over time; perhaps participants today are more health conscious and more proactively engaged in their healthcare, leading to stronger expectations of treatment options.

It could also be that sham devices and procedures are becoming more realistic, moving from the earlier coil tilt techniques to the more dedicated sham systems of today, which can enhance the belief that one is receiving active treatment. The good news, though, is that the active response is also increasing, although not quite at the same rate, likely attributable to improvements in dosing and targeting techniques.

Speaking of similarities between drugs and devices in their placebo responses, there are also some key differences. A study published last year in Neuromodulation pointed out differential placebo responses between neurostimulation techniques and pharmacotherapy in late-life depression. The time course of the sham/placebo response differs between sham rTMS and placebo pills. Specifically, at the four-week time point, participants receiving sham rTMS showed a significantly greater reduction in their Hamilton Depression Rating Scale scores than those receiving placebo pills, suggesting a stronger early placebo response to neurostimulation than to pharmacotherapy.

But at 12 weeks, the placebo response to drugs starts to catch up, and by the end of the trial at 12 weeks there is no statistically significant difference between the placebo pill response and the sham rTMS response. This is important to consider if we're designing clinical trials to compare drugs versus devices, for example.

So we must think carefully about when to assess the primary outcome, and also employ statistical techniques to account for this time-dependent placebo effect.

Touching on tDCS for a second: we don't really have a lot of work on tDCS. Typical sham protocols in tDCS are implemented by changing the temporal waveform of the stimulation: ramping up at the beginning of the stimulation, and sometimes a ramp up/ramp down again toward the end, to give a transient sense that the brain is being stimulated. Some protocols maintain a constant low intensity, as shown in Dr. Lisanby's slides; these microamp-level currents may or may not be biologically active, and that may confound the results of clinical trials.
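A ramp-style tDCS sham can be sketched as a simple waveform generator. The durations, amplitude, and the optional low-intensity "pedestal" below are illustrative, not any specific device's protocol:

```python
import numpy as np

# Sketch: a ramp-style tDCS sham waveform -- ramp up, brief plateau, ramp
# down, then zero (or a low microamp-level "pedestal") for the remainder of
# the session. All durations and amplitudes are illustrative assumptions.

def sham_tdcs(total_s=1200, ramp_s=30, plateau_s=30, amp_ma=2.0,
              pedestal_ma=0.0, fs=10):
    """Return (time, current-in-mA) arrays sampled at fs Hz."""
    t = np.arange(0, total_s, 1.0 / fs)
    i = np.full_like(t, pedestal_ma)                 # baseline / pedestal
    up = t < ramp_s
    i[up] = amp_ma * t[up] / ramp_s                  # ramp up
    on = (t >= ramp_s) & (t < ramp_s + plateau_s)
    i[on] = amp_ma                                   # brief plateau
    down = (t >= ramp_s + plateau_s) & (t < 2 * ramp_s + plateau_s)
    i[down] = amp_ma * (1 - (t[down] - ramp_s - plateau_s) / ramp_s)  # ramp down
    return t, i

t, i = sham_tdcs()
print(f"peak {i.max():.1f} mA; on for ~{(i > 0).mean() * t[-1]:.0f} s "
      f"of {t[-1]:.0f} s session")
```

Setting `pedestal_ma` to a small nonzero value models the constant-low-intensity variant mentioned above, which is exactly the case where the sham may not be biologically inert.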

NIMH STAFF: Dr. Deng, I'm sorry, but we are going to need to wrap up to give enough time for our following speakers.

ZHI-DE DENG: Okay, wrap up. Sure. Sure. Final slides. We're just going to talk about some of the determinants of sham response in tDCS trials. There seems to be a large sham effect. Some protocols have better blinding than others, and certain electrode placements have lower sham responses. And again, similar to TMS, the sham response in tDCS is correlated with the active tDCS response.

With that, I think I will skip the rest of this talk and, you know, allow questions if you have any.

TOR WAGER: Okay. Thank you. Great. Well, keep putting the questions in the chat. And for our panelists, please keep answering them as you can.

We'll move on to the next session right now which is going to cover placebo effects in psychosocial trials and interpersonal interactions.

So our two speakers are Winfried Rief and Lauren Atlas. I believe, Winfried, you are going to go first so please take it away.

Current State of Placebo in Psychosocial Trials

WINFRIED RIEF: Thank you. First, greetings from Germany. And I'm pleased to be invited to this exciting conference.

I was asked to talk about placebo effects in psychosocial trials. And it is certainly a quite critical question whether we can really apply the placebo construct to psychological therapies and to trials of psychological therapies.

So I first want to highlight why it is complicated to transfer this concept to psychological treatments. But then I will dive into the details of how placebo mechanisms might apply and how we might be able to control them in psychological treatments.

So what is the problem? The problem lies in the definition of psychological treatments: they are treatments designed to utilize psychological mechanisms to treat clinical conditions. But if we consider the definition of placebo effects in medicine, it is pretty similar to, or highly overlapping with, the definition of psychological treatments themselves.

The impact of psychological and contact factors is typically considered the placebo mechanism in medical interventions. We can switch to other attempts to define placebo mechanisms, but then we need a concept of which mechanisms are specific and which are unspecific. And this is quite difficult to define for psychological interventions, because we don't have the very clear active ingredient that we have in drug trials.

A more recent definition defines placebo mechanisms as mechanisms of conditioning and expectation. But this is already a definition of psychological interventions.

And, as you know, CBT started with the concept of using learning mechanisms to improve clinical conditions. So there is an overlap between the definition of placebo mechanisms and the definition of psychological treatments, and therefore it's quite difficult to disentangle the effects.

To provide more insight, I reanalyzed a meta-analysis from Stefan Hofmann's group on depression and anxiety trials, because they only included placebo-controlled trials of psychological interventions. Some of these trials could offer placebo-proof conditions because they also integrated psychoactive drug arms. But most of the trials used control arms involving psychoeducation components, information about the condition, or supportive therapy, which means simply reflecting and supporting emotional well-being.

But some other trials used control interventions that are known to be effective, such as interpersonal psychotherapy, cognitive restructuring, or GRN therapy; that is, they used therapies as control conditions that are known to be effective in other conditions. This shows how difficult it is to define what a good placebo condition is for psychological interventions.

In the first version of this meta-analysis, six years ago, the authors defined a good psychological placebo condition as one that uses an intervention excluding the specific factor and including only the nonspecific factors. Moreover, the mechanisms used in the placebo arm should have been shown to be ineffective for the clinical condition under consideration. And this is already a point that is pretty hard to define in detail when we develop placebo conditions for psychological treatments.

Another attempt, as Tor already mentioned, is to disentangle the variance components of treatment outcome. This approach is associated with names like Bruce Wampold and Michael Lambert. I show here the results of Michael Lambert's analysis: you see that he defines placebo effects as the mere treatment-expectation effect, declares this to be about 50%, and allocates the other parts of the effect to other factors.

We have to be aware that this kind of variance-disentangling analysis is just statistical modeling; it is not a causal investigation of factors. A second shortcoming is that it does not consider the actions of these factors. Therefore the insight that we get from this kind of analysis is limited.

But coming back to psychological treatments, we can say that patients' expectations are powerful predictors of outcome, as we already know from medical interventions. Here are data from a psychological treatment study on chronic pain, which shows response rates of 35-36%, but only for patients with positive outcome expectations before starting treatment. Those with negative outcome expectations have much lower success rates, around 15%. And the relationship between more positive and more negative expectations remains stable over months and years.

So what is the major challenge in defining control conditions for psychological treatments? The first point is that we're unable to truly blind psychological treatments; at the very least, the psychotherapist knows what he or she is doing. And the placebo groups in clinical trials often differ from the active interventions in terms of credibility, that is, the sense of being on a treatment that is as credible as the active treatment.

For some control conditions it's even questionable whether they are a kind of nocebo condition, such as standard medical care or a waiting-list group. If you are randomized to standard medical care or a waiting list, you might be disappointed and not expect much improvement, whereas following the natural course might even be better: you might try some self-help strategies, for instance. Another aspect is that nonspecific effects can sometimes switch to become specific effects, depending on what your treatment is and what your treatment rationale is.

I'll show one example of this effect from one of our studies. We investigated treatment expectations in patients undergoing heart surgery. Before the heart surgery, we did a few sessions to optimize treatment outcome expectations. That means outcome expectations were moved from being a nuisance signal, a placebo effect, to being the target mechanism of our intervention. In this case, the therapist works with the patient to develop positive outcome expectations about what happens after they come through the heart surgery.

We did that in a randomized clinical trial, comparing an expectation-optimization group with two control groups. And we were able to show that if we optimize treatment outcome expectations in cardiac, in heart surgery, patients, these patients really did better six months after surgery. Standard medical care brings little improvement; it mainly provides survival, which is important enough, no question about that. But whether patients really feel better six months after surgery depends on whether they received psychological preoperative preparation.

We also used this approach of optimizing expectations to develop complete psychological treatment programs for patients with depression and with other mental disorders. So let's come to the other part of placebo mechanisms: the nocebo effect. I would like to report on nocebo effects in psychological treatments, but the major problem is that side effects and other adverse effects are only rarely assessed in psychological treatments. This is really a shortcoming.

Here are the top ten side effects of psychological treatments. Many of them involve increased conflicts and problems, but some concern new symptoms that develop. In some of our other studies we even found that symptoms such as suicidal ideation sometimes increase for some patients during psychological treatments. So negative side effects are an issue in psychological treatments, and we need to assess them in order to better understand whether nocebo effects occur.

How do these treatment expectations develop, be they positive or negative? One major factor has already been shown in many placebo trials: pretreatment experience. Here are data from about 300 former psychotherapy users who planned to attend another psychological treatment. You can see that how much improvement patients expect depends mainly on how much improvement they experienced during their last treatment.

And the same holds for negative expectations and for side-effect expectations. Of note, positive clinical outcome expectations are not correlated with negative outcome expectations; that means people can be optimistic and worried at the same time. A critical factor in patients' treatment expectations is the clinician, and we wanted to evaluate the effect of the clinician using an experimental design. Here is our clinician; I will call him Tom. He is explaining to a skeptical patient whether psychological treatments can help or not.

We wanted to modulate this situation, so we first brought all our participants into a situation designed to induce negative treatment outcome expectations. We were quite successful in establishing negative treatment outcome expectations or, as you see here, a reduction of positive outcome expectations. After that, Tom explained to the patient that psychological treatments are helpful for his or her condition. But Tom varied his behavior while always conveying the same information: psychological treatments are powerful at improving your clinical condition.

Sometimes he was warmer and more empathetic; sometimes he showed no signs of competence; sometimes both. You can see that whether the information he wants to convey really has an effect depends mainly on these behavior patterns of the therapist. If the therapist is low in competence and low in warmth, the same information doesn't have any effect, while it can have a very powerful effect if the therapist shows warmth and competence.

So let me conclude these few insights from our placebo research. The distinction between specific and unspecific treatment mechanisms is less clear than in biomedical interventions. But we can still say that expectations also predict outcome in psychological and psychosocial treatments.

The main determinants of treatment expectations are pretreatment experiences, but also the clinician-patient relationship and many other factors. Expectations can be an unspecific factor to be controlled for, but they can also be the focus of an intervention and can really boost treatment effects, and therefore it's really valuable to focus on them.

And, unfortunately, side-effect assessments are typically overlooked in clinical trials; I'll come back to this in a moment. We want to recommend that placebo-controlled trials are needed for psychosocial interventions, but it's more difficult to decide what to include in them. The basic idea is to exclude the active mechanisms, but this is not easily defined, and therefore we need psychological attention conditions that are credible as the control conditions against which psychological treatments are compared.

I would say that we need a variety of trial designs. For very new interventions, it might be justifiable to start with a waiting-list control group or a standard-medical-care group. But if you want to learn more about the treatment, you need more elaborate control group designs. There is no single perfect control condition; you need variations of it. And last but not least, with strong emphasis: side effects, adverse events, and unwanted events need to be assessed in psychological treatments as well.

Finally, let me make two comments. I think placebo-controlled investigations are developed, and have to be developed, to better understand treatment mechanisms. From the patient's view, they are less important: patients want to know the overall efficacy of a treatment, that is, the combination of specific and unspecific effects, the overall package. And we shouldn't lose sight of that.

And second, all these mechanisms we are talking about are not really separable from one another; they typically interact. Expectation effects interact with the development of side effects, which interact with the experience of improvement, which can be attributed to the drug or to the psychological treatment.

So, so far from my side, and I'm happy to hand over to Lauren who will continue to talk about this issue.

TOR WAGER: Wonderful. Thank you, Winfried.

Now we have Lauren Atlas.

LAUREN ATLAS: Thank you. So it's really an honor to be wrapping up this first exciting day of this workshop, and in a way to bring you back to some of the themes that Tor highlighted in his introduction.

So I'll be talking about why I think that we as a field would benefit from taking a social neuroscience approach to placebo analgesia and placebo effects more generally. Tor used the same figure in his introduction to the day, and one of the things I really want to highlight in it is the distinction involving intrapersonal factors -- things like expectations, learning, and the history of associations with different treatments and different clinical contexts. This has really been the foundation of most studies of how placebo effects work, largely because it's quite easy to manipulate things like expectations and learning in the lab and understand how those affect clinical outcomes.

But there has been far less work on the interpersonal processes that support placebo effects. In some ways this is really where we need to be going as a field, because it could be a lot easier to teach clinicians how to enhance patient outcomes than to change what a patient brings to the table. Although, of course, these factors interact, and both are important in determining clinical outcomes.

And the way I like to think about this interplay is really from a social affective neuroscience standpoint. The term social neuroscience has come about over the past couple of decades to describe how we can use neuroscience techniques to understand emotional and interpersonal processes across a variety of domains. Where I think about this in the context of placebo is, first of all, that through neuroscience techniques we can understand how placebo effects are mediated -- whether that involves specific types of outcomes or more general processes that shape placebo effects across domains.

From an affective neuroscience standpoint, we can determine whether the mechanisms of different types of placebo are shared or unique. So, for instance, in the context of placebo analgesia we can ask whether placebo effects are really supported by pain-specific mechanisms, or whether we are looking at the same mechanisms that might also be relevant in placebo effects for depression.

And then finally, from a social standpoint we can really isolate the role of the social context surrounding treatment. A couple of years back I wrote a review looking at placebo effects from this social affective neuroscience standpoint, focusing on the role of expectations, affect, and the social context.

Today I'd like to focus first on mechanistic work using neuroscience to understand how placebo effects are mediated, and second to address the role of the social context surrounding treatment, which I think has implications not only for the study of placebo and clinical outcomes but also for reducing health disparities more generally. And I do want to say that I think the study of placebo can really point to all of the different features of the psychosocial context that influence clinical outcomes.

So this is why I think there is so much we can take from the study of placebo more generally. Turning first to how placebo effects are mediated: throughout the day we've been talking about how expectations associated with treatment outcomes can directly influence clinical outcomes in the form of placebo effects. And as Tor mentioned, if we not only compare treatment arms to placebo groups to isolate drug effects but also include natural history control groups, we can isolate placebo effects on a treatment outcome by controlling for things like regression to the mean.
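[Editor's note: the arithmetic behind this three-arm logic can be sketched with hypothetical numbers; every value below is invented for illustration, not data from any trial discussed here.]

```python
# Hypothetical mean symptom improvements in a three-arm trial
# (all numbers invented for illustration).
drug_arm = 10.0        # active drug + full clinical ritual
placebo_arm = 7.0      # inert pill + full clinical ritual
natural_history = 3.0  # no treatment: regression to the mean, spontaneous change

# Comparing drug to placebo isolates the pharmacological effect;
# comparing placebo to natural history isolates the placebo effect.
drug_effect = drug_arm - placebo_arm
placebo_effect = placebo_arm - natural_history

print(drug_effect, placebo_effect)  # 3.0 4.0
```

Without the natural history arm, the 4.0-point placebo-arm improvement would conflate a genuine placebo effect with the 3.0 points of change that would have happened anyway.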

Now, again this came up earlier, but a meta-analysis of clinical trials that compared placebo with no treatment revealed that there was no placebo effect on binary or objective outcomes, but there was a substantial placebo effect on continuous subjective outcomes, especially in the context of pain. The authors concluded that the fact that placebos had no significant effect on objective continuous outcomes suggests that reporting bias may have been a factor in the trials with subjective outcomes.

So the idea here, when we talk about our model of placebo, is that traditionally we think that things like social dynamics, the psychosocial context surrounding treatment, and cues associated with treatments lead to changes in one's sensory processing or one's bodily state, and based on that one makes a subjective decision about how one is feeling. For instance, a placebo effect in depression might lead to shifts in emotional processing, or a placebo effect in pain would lead to someone reporting less pain. The alternative is that these reports are really driven by report biases.

The idea there is that rather than expectations changing sensory processing, they affect subjective responses directly, perhaps by changing our criterion for calling something painful in the first place. So for over two decades now the field has really focused on asking to what extent these effects are mediated by changes in sensory processing.

And placebo effects in pain are a really ideal way for us to ask this question, because we can objectively manipulate pain in the lab. We can use a device called a thermode, heated to different temperatures, and measure how much pain it elicits. And the targets of nociceptive signals are very well studied, and we know the tracts that transmit this information to the cortex.

These can be visualized using functional magnetic resonance imaging, or fMRI. We see reliable activation in response to nociceptive stimuli in a network of regions often referred to as the pain matrix, including the insula, dorsal anterior cingulate, thalamus, somatosensory cortex, brainstem, and cerebellum.

Now, we used machine learning to identify a pattern of weights, which we call the neurologic pain signature (NPS), that is sensitive and specific to pain and can reliably detect whether something is painful or not and which of two conditions is more painful. So this really provides an opportunity to ask what happens when placebos affect pain. For instance, if we apply an inert topical treatment to a patient's arm before administering a noxious stimulus that they believe the treatment will reduce, does the pain reduction come about through changes in pain-specific brain mechanisms, or do we see shifts in more general mechanisms -- shifts in affect, things like emotion regulation or value-based learning? Maybe people just feel less anxious, and there is nothing specific about pain. That wouldn't really be a problem, because it would also mean that what we're learning about might transfer to other domains.
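[Editor's note: to make the signature idea concrete, a multivariate brain signature is essentially a map of voxel weights, and its "expression" in a given activation map is the dot product of the two. A minimal sketch follows, with a random hypothetical signature standing in for the real NPS; the voxel count, weights, and effect size are all invented.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "signature": one weight per voxel. The real NPS spans
# many thousands of voxels; 1,000 here is purely illustrative.
n_voxels = 1000
weights = rng.normal(size=n_voxels)

# Hypothetical single-subject activation (beta) maps for two conditions;
# the painful condition carries some signature-shaped signal.
beta_painful = rng.normal(size=n_voxels) + 0.2 * weights
beta_control = rng.normal(size=n_voxels)

# Pattern expression = dot product of the weight map with the beta map.
expr_painful = weights @ beta_painful
expr_control = weights @ beta_control

# Forced-choice classification: label the condition with the higher
# pattern expression as the more painful one.
more_painful_detected = expr_painful > expr_control
print(more_painful_detected)
```

A placebo effect "on the NPS" would then mean lower pattern expression under placebo than control for matched stimuli, which is the comparison the combined dataset described next did not find.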

So a couple of years back, nearly all labs that use neuroimaging to study placebo analgesia in the brain combined participant-level data. And what we found is that there was a reliable reduction in pain reports during fMRI scanning when people had a placebo relative to a control treatment that they didn't believe would reduce pain, with a moderate to large effect size.

But there was no reliable placebo effect on the NPS. So this suggests that we're really not seeing placebo effects on this best brain-based biomarker of pain. What do we see placebo effects modulating? It's important for me to say that even though we don't see placebo effects on the NPS, other psychological manipulations -- such as mindfulness, cues that predict different levels of pain, or administering pain-reducing treatments both when subjects know they are receiving them and when they believe they are not -- all did affect NPS responses. So it is possible for psychological manipulations to modulate the NPS, but we didn't see any placebo effect on NPS responses.

We also conducted a meta-analysis of placebo analgesia looking at other published studies. What we found is that there were reliable reductions during pain with placebo administration in the insula, thalamus, and dorsal anterior cingulate. Now, these regions are indeed targets of the nociceptive pathways that I mentioned. However, these regions are also activated by pretty much any salient stimulus in the MRI scanner, as well as by anything involving interoception or attention to the body.

And so I think an important point for the discussion is to what extent these mechanisms, or any of the principles we've been talking about today, are unique to pain or depression or any specific clinical endpoint.

When we looked for regions that showed increases with placebo, we saw increases in the ventromedial prefrontal cortex, dorsolateral prefrontal cortex, and the striatum -- regions that have been implicated in domain-general shifts in affect, things like emotion regulation and learning about valued outcomes.

So in this first half of my talk I demonstrated that placebo effects seem to be mediated by domain-general circuits involved in salience, affective value, and cognitive control. We did not see any placebo effects on the neurologic pain signature pattern. And this really points to the idea that these placebo mechanisms are unlikely to be specific to pain.

However, there are many different labs working on different mechanisms of placebo, and so I think this is an ongoing question that really demands further trials and different comparisons within and across participants.

So now I'd like to turn to the second half of my talk, addressing the role of the social context surrounding treatment. I'm going to talk about this in terms of patients' expectations, providers' assessments of patients' pain, and patients' pain outcomes themselves.

So we were interested in asking whether patients' perceptions of providers impact pain expectations. And we know from work that Winfried and many others have conducted that placebo responses indeed depend on many different factors in the patient-provider relationship, including how a provider treats a patient.

So Ted Kaptchuk and his group showed that a warm provider can lead to reductions in IBS symptoms in an open-label placebo trial. We just heard data on how a provider's warmth and competence can influence outcomes, and this has also been shown in an experimental context by Alia Crum's lab. And finally -- and I'll present this briefly at the end of my talk -- we also know that a patient's perceived similarity to their provider influences pain and placebo effects in simulated clinical interactions.

So a former postdoc in my lab, Liz Necka, was interested in studying this by asking not only whether interactions between patient and provider influence pain expectations, but also whether our first impressions of our providers -- namely in terms of their competence and/or similarity to us -- influence expectations even without actual interactions.

And the reason Liz wanted to do this is that we know from social psychology that people's first impressions are really important for a lot of different behaviors. Simply looking at people's faces and judging their competence can predict the outcomes of elections -- work that has really been led by Alex Todorov and his group.

So these faces are morphed along a dimension of competence. Moving from three standard deviations below the mean to three standard deviations above it, you can see that there are certain features associated with competence and dominance that we use to make judgments about a person's traits. And so Liz asked whether these types of first impressions also influence expectations about pain and treatment outcomes.

We conducted five studies using Amazon's Mechanical Turk. The first studies used those morphed faces from Todorov's group. Importantly, these were just male faces in the first two studies. In our third study, we used the same competence dimensions morphed onto either male or female faces.

We conducted another study in which we removed any cues like hair or clothing and just showed the morphed male or female face itself, between subjects.

And in the final study we used real individual faces that varied in race and ethnicity, again with a between-groups manipulation of sex. Participants first went through a series of trials in which they saw two faces that varied in competence and told us which provider they would prefer for a potentially painful medical intervention. Then they were asked to imagine that provider performing a painful medical procedure on them: how painful would the procedure be, and after the procedure, would they be more likely to use over-the-counter or prescription medication? The assumption was that if the procedure were less painful, they would expect to be more likely to use over-the-counter medication.

We also asked about similarity, but I won't be focusing on that today. So across all of the studies -- this line is chance, and this is that first decision, how likely you are to select the more competent face -- what we found is that participants chose the more competent-looking provider based on those facial features in the first study, and we replicated that in the second study. In the third study we found no difference as a function of the features related to competence, in part because people preferred female doctors who looked less competent based on these features.

In the fourth study we used other individuals' ratings of perceived competence and again found that people selected more competent faces, though this preference held only for the male faces. And when we used real individuals, we again found that other people's ratings of competence predicted somebody's likelihood of selecting that person as their provider, and this was strongest for white providers. We found that competence directly influenced pain expectations in all of the studies except study three: higher competence was associated with less expected pain. All the studies likewise showed that the stronger the competence, the more likely somebody was to say they would use over-the-counter treatment. But we found an interaction with sex, such that competence predicted over-the-counter treatment only for male providers, whereas competent female providers were associated with a higher likelihood of prescription medication rather than over-the-counter.
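[Editor's note: the first decision described above, whether people pick the more competent-looking face above chance, amounts to an exact binomial test. A stdlib-only sketch with invented counts; 70 of 100 trials is an illustrative figure, not the study's data.]

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """Exact one-sided tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Invented data: the more competent-looking face chosen on 70 of 100 trials.
chose_competent, n_trials = 70, 100
p_value = binom_tail(chose_competent, n_trials)
print(p_value < 0.05)  # choice rate reliably above the 0.5 chance level
```

The same test run at exactly the chance rate (50 of 100) would give a tail probability near 0.5, i.e., no evidence of a preference.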

Finally, we found that stereotypes related to race, ethnicity, and gender, which we were able to test in the fifth study, also impacted pain expectations. In study five, expectations about pain varied as a function of provider race: people expected the least pain, and the highest likelihood of over-the-counter medication, from the Asian providers relative to all others. We also found sex differences in expected medication use.

And finally, when we ran a meta-analysis across all the studies, we found that effects of similarity on expected analgesic use were strongest in white participants. This is likely to be an in-group preference, mainly because studies one through four all included white providers. We found no other effects of the perceived demographics themselves.

With the last three minutes or so: we know that not only do patients' stereotypes impact perceptions of providers, but we also know through studies on health disparities that providers' beliefs impact assessments of patients' pain. Peter Mende-Siedlecki ran beautiful studies in this area looking at how race bias in pain assessment may be mediated through perceptual changes. Peter had black or white male actors depict painful or neutral expressions, and he created morphed images ranging from neutral to painful.

And what he found is that white perceivers needed more evidence of a pain expression before labeling pain on black faces relative to white faces. The size of this difference in the likelihood of seeing pain on white relative to black faces also predicted prescribing more analgesics to white relative to black targets, across a number of studies.

We asked whether we saw similar biases in evaluations of real pain by measuring facial reactions to acute pain in 100 healthy individuals who rated pain in response to heat, shock, or a cold water bath. What you can see is that people have very different reactions to pain: this is all roughly the same level of pain, but you see differences in expressiveness.

We're going to be creating a public database that will be available for other researchers to use to study pain assessment in diverse individuals. We had other healthy volunteers view these videos and assess pain. Critically, we selected trials so that there were no differences across target race or gender in the pain or its intensity; all the videos we presented were matched. Subjects saw videos and rated whether the target was in pain or not and how intense the pain was.

And what we found is that perceivers were less likely to ascribe pain to black individuals relative to white individuals. So again, black is here in cyan and white is in pink, women are the hashed lines and men are solid, and these are all trials selected so that everybody is feeling the same amount of pain. This was really driven by a failure to ascribe pain to black male participants when they were experiencing pain, and it was supported by signal detection analysis. We found that these race-based differences in pain assessment correlated with scores on a modern racism scale but did not vary depending on perceiver race or gender. We're now doing a study looking at how this type of bias might be reduced through learning and instructions. Basically, we find that when people are told about a participant's pain after every trial, they are more accurate in judging other people's pain, and that whether or not people receive feedback, pain assessment accuracy improves over time as people practice -- suggesting we may be able to reduce these pain assessment biases through training, and perhaps in clinical samples.
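[Editor's note: signal detection analysis of this kind separates sensitivity (d') from the response criterion (c). A sketch of the standard formulas follows; the hit and false-alarm rates are invented solely to illustrate the criterion-shift pattern described above and are not the study's numbers.]

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def dprime(hit_rate, fa_rate):
    """Sensitivity: d' = z(hits) - z(false alarms)."""
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response criterion: c = -(z(hits) + z(false alarms)) / 2.
    Higher c = more conservative: more evidence needed to say 'pain'."""
    return -(z(hit_rate) + z(fa_rate)) / 2

# Invented rates: "hit" = calling pain when the target is in pain,
# "false alarm" = calling pain when the target is not.
c_white = criterion(0.85, 0.20)
c_black = criterion(0.70, 0.10)

# A stricter criterion for one target group, with similar sensitivity,
# would look like this:
print(c_black > c_white)
print(dprime(0.85, 0.20), dprime(0.70, 0.10))
```

The point of the decomposition is that a perceiver can be equally good at telling pain from no-pain for both groups (similar d') while still requiring more evidence before labeling one group as in pain (higher c).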

And finally, I just want to acknowledge that in this kind of dyadic interaction, we ultimately also want to look at the direct interpersonal interactions that shape placebo analgesia. This has been done in a series of studies of simulated clinical interactions, where healthy volunteers are randomly assigned to act as doctor or patient and administer a placebo to somebody else.

So Andy Chen showed that telling a doctor that a treatment was analgesic affected the patient's pain, and that this was likely mediated through nonverbal communication. Liz Losin, when she was in Tor's lab, showed that the more similarity to or trust in a clinician somebody had, the less pain they experienced. And finally, Steve Anderson, a grad student with Liz Losin, showed that racial concordance between the patient and the provider in a placebo context could reduce pain, particularly in black individuals, and this was also associated with reduced physiological responses.

So just to summarize the second part, on the role of the social context surrounding treatment: I've shown you that first impressions shape pain expectations, that stereotypes impact pain expectations and pain assessment, and that concordance can enhance treatment outcomes.

Finally, just to make clear where I think the path forward is from this social affective neuroscience approach: I believe that further research on how social factors shape clinical outcomes, including placebo effects and placebo analgesia, can help us improve patient-provider interactions, reduce health disparities in general, and maximize beneficial patient outcomes. We need more work distinguishing between domain-specific and domain-general mechanisms of placebo in order to isolate general effects of the clinical context versus targeting disease-specific endpoints. And identifying these domain-specific mechanisms, and the features of both patients and providers, can really help us address the goals of personalized medicine.

So with that, I want to thank the organizers again for the opportunity to present our work, and to acknowledge my former postdoc, Liz Necka, my former PhD student, Troy Dildine, and my current postdoc, Allie Jao, and to mention that we have positions available in my lab. Thank you.

TOR WAGER: All right. Wonderful. Thank you, Lauren. So that concludes the series of presentations for today's webinar. But we're not done yet.

Now we're moving into the panel discussion phase, so it's going to be very exciting, and we'll get a chance to talk about some of the comments you brought up and other things.

So this is moderated by Carolyn Rodriguez and Alexander Talkovsky. So hi, thank you for doing this, and please lead us off.

Panel Discussion

CAROLYN RODRIGUEZ: Yeah, definitely. So it's my pleasure to do this with Alex. My name is Carolyn Rodriguez. I'm a professor at Stanford. And I see there has been a very lively Q&A already, and some of them are being answered. So maybe we'll just popcorn a little bit.

There is one question here which, I think, gets at the fact that what we have been presenting is a lot of human data. So maybe it's just worth noting: are studies in animals free of placebo effects? And, Tor, I see you are typing an answer, but I don't know if you wanted to answer it out loud.

TOR WAGER: Sure. Yeah, I just finished typing my answer. But yeah, it's a good discussion point.

I mean, I think that one of the first studies of placebo effects was by Herrnstein, in 1962 in Science, called "Placebo Effect in the Rat," I think. And there's a resurgence, too, of modern neuroscience work on placebo effects in animals. Greg Corder is going to give a talk on this tomorrow as one of the group of investigators doing this.

So long story short, I think that there are conditioned or learned placebo effects. Pharmacological conditioning -- pairing with a drug cue -- or conditioning with place cues can change the response patterns of animals as well.

It's difficult to know what animals are expecting. But there is quite a bit of circumstantial or other evidence from other places -- even from Robert Rescorla years back, or from Geoff Schoenbaum -- using clever paradigms to suggest that for animals it's really a lot about the information value, and that they are expecting and predicting a lot more than we might at first assume.

So even in those conditioning paradigms there might be something very similar to what we call an internal or mental model, or expectations, that's happening. That's my first answer -- others can jump in here and say more.

CAROLYN RODRIGUEZ: Thank you. Yeah, panelists, feel free to just turn on your videos. Does anybody else want to weigh in on animals and placebo?

Go ahead, Dr. Atlas.

LAUREN ATLAS: I'd be happy to. There is actually a study I love from a former postdoc who worked with me, Anza Lee, from during her PhD. We haven't really talked about the roles of dopamine and opioids so far today, which is interesting because those often dominate our conversations about mechanisms of placebo. But Anza had a really lovely study in which she showed that dopamine was necessary for learning the association between a context and pain relief, while the mu-opioid receptor system was necessary for actually experiencing that pain relief. And so that is a really nice dissociation between the learning and development of the expectation and the actual pain modulation.

So that was a really lovely place where I thought that the preclinical work had some really nice findings for those of us who are doing human studies.

CAROLYN RODRIGUEZ: Wonderful. Thank you. And I think there is still a day two, so stay tuned. There's -- I can see in the agenda there will be more on this.

But a question I think specifically for you was: how does Naloxone influence the NPS? I think you answered it, but if there's anything additional.

LAUREN ATLAS: I think that's a great question. And I actually don't know of any studies that have administered Naloxone and looked at NPS responses.

As for the Naloxone effects on fMRI responses in placebo, I'll just say there's a bit of a file drawer problem there. There are a lot of studies that haven't found effects; we really need everybody to publish their data.

But I think we've shown that there are effects of opioid analgesics on the NPS. I don't think we know anything about blocking the opioid system and its effect on the NPS, but that would be really interesting and important, so that's a great suggestion and question.

CAROLYN RODRIGUEZ: Yeah, I look forward to it. That's a very, very exciting question.

I'm going to hop over to neuromodulation. Dr. Lisanby and Dr. Deng, I think you have already answered a question, which I found fascinating, about whether determining the motor threshold unblinds people. I loved your answer and I just wanted you to say it out loud.

SARAH “HOLLY” LISANBY: Yeah, thank you. I can start, and Zhi might want to comment as well. As you may know, we individualize the intensity of transcranial magnetic stimulation by determining the motor threshold, where we stimulate with single magnetic pulses over the primary motor cortex and measure a muscle twitch in the hand.

And this is real TMS. We do real TMS for motor threshold determination regardless of whether the person is going to be getting active or sham, in order to give them the same level of intensity and so on. You might think, plausibly, that this could unblind them if you then give them sham rTMS with repetitive pulses. It turns out that single pulses do not cause the same amount of scalp pain or discomfort that repetitive trains of stimulation can cause.

Also, the motor cortex is farther away from the facial muscles and facial nerves, so there is less of a noxious effect of stimulating over the motor cortex. Because of these differences, it is a very common occurrence that people think they are getting active rTMS even when they are assigned to sham.

Maybe Zhi may want to comment.

ZHI-DE DENG: No, I totally agree with that. The different protocols feel very different, so being non-naive to one protocol might not necessarily mean that you break the blind.

CAROLYN RODRIGUEZ: Wonderful, thank you so much. Dr. Deng, always appreciate your humor in your presentations so thank you for that.

We're going to move over -- Dr. Detke, I think you messaged that you have a couple of slides that may address some of the questions, particularly Steve Brennan's question about COVID interference, and there was a question about excluding sites with unusual response patterns. So we would love to hear more about that.

I think you are on mute, though. We'd love to hear you.

MICHAEL DETKE: There we go. I have one kind of interesting slide on COVID, although it doesn't get directly at the placebo response.

Let me walk you through it. It's a weird slide, because we've been looking at slides all day where left to right is the duration of the study or the treatment.

Here, as you can see, the X axis is actual calendar months. Focus first on the blue line. The blue line is the ADCS-ADL, which is a scale of activities of daily living. There are questions in it like: have you gone to the grocery store recently? Are you able to do that by yourself? Have you attended doctor's appointments? Things like that.

And the reduction from early 2020 to about the peak of the pandemic -- this change of five points or so -- would be about the biggest drug effect in the history of Alzheimer's; this is an Alzheimer's study. And the change back was even faster, and of a similar, actually slightly larger, magnitude. It was also a huge change.

This is pooled drug and placebo patients, so there is nothing here that tells you about drug effects. But you can see this ADL scale was really impacted by the peak of COVID cases. I'm actually surprised this came out as clean as it did, because about 30% of our patients were in Europe -- Italy, France, Spain -- and as you may recall, the peak of cases there was at a different time than in the U.S.

But I think the takeaway here is that things like COVID can certainly impact assessment scales, and they are especially going to impact scales that specifically ask, hey, have you gone to your doctor's office, when you can't go to the doctor's office. Scales like that are obviously going to be more impacted, and moods and things could be affected, too. So that is one piece of data where I know COVID had a whopping effect on at least one scale.

As for sites over time, there has been a lot of talk and thought about excluding sites with high placebo response, or excluding sites with low drug-placebo separation. Of course, if you do that post hoc, it's certainly not valid. There's a band-pass approach, where you exclude the extreme sites on both ends -- high and low placebo response -- which is somewhat more valid. But my understanding from statisticians is that any of those things increase false positives if you are doing them post hoc.

The other thing to think about when you're thinking about site performance is that, first, sites change over time: they have different raters who might be there for ten years or maybe ten months. And maybe the single most important point on this is to realize that the average depression trial, with 100 or 150 patients per arm, has 80% power to see a separation -- and it's effectively really more like 50% power, as Ni Khin and others have shown.

Now imagine you are looking at a single clinical trial site with ten patients, five per arm. What is the statistical power there? It's close to zero. These are some data that my colleague Dave Debrota at Lilly put together a long time ago, from a huge database of what I think were Prozac depression studies, where many of the studies went back to the same sites that had performed well.

As you can see on this slide, each chart is a site that was in multiple different studies, and their performance -- HAMD change -- over time was no different. This is another study that looks at different investigative sites within the same trial. It's a little bit of a build, but you can see that this site and this site have virtually identical drug responses, the yellow bars -- sorry, that one is supposed to be a little higher. They have almost identical efficacy responses, but this one has a huge placebo response and that one has a tiny placebo response, which is probably because they only had five or six subjects per site and got just two or three huge placebo responders.

So trying to assess site performance in the context of a single trial is pretty hard just because of the Ns. Evaluating performance by site is challenging, and excluding sites for reasons like high placebo response is also challenging. So that's a little bit of context on that.
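[Editor's note: the power argument above can be made concrete with the usual two-sample normal approximation. A sketch, assuming a standardized drug-placebo effect of about d = 0.35, roughly a typical antidepressant effect size; the function and numbers are illustrative, not from the talk's dataset.]

```python
from statistics import NormalDist

def two_sample_power(n_per_arm, d=0.35, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized mean difference d (normal approximation)."""
    nd = NormalDist()
    se = (2 / n_per_arm) ** 0.5          # SE of the standardized difference
    z_crit = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_effect = d / se
    return nd.cdf(z_effect - z_crit) + nd.cdf(-z_effect - z_crit)

print(round(two_sample_power(150), 2))  # a full trial, ~150 per arm
print(round(two_sample_power(5), 2))    # one site, 5 patients per arm
```

With 150 per arm, power is above 80%; with 5 per arm it is under 15%, which is why a single site's apparent "placebo response" is mostly noise.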

CAROLYN RODRIGUEZ: Thank you. Yeah, appreciate that. Question for your colleague Dr. Khin, but maybe for everyone, right?

So there is a question that says: isn't it difficult to say that a two-point difference on a 52-point scale is clinically significant? I know in a lot of slides we were trying to say this is going to be significant, and what is the difference between, you know, these two things. At the end of the day we want to help patients.

And so what can we say about a two-point change in significance?

NI AYE KHIN: So the two-point change is the difference between drug and placebo. Each individual might have a ten-point change, or a 50% change, depending on the individual response. And mostly drug approval is based on statistical significance.

So if there is a two-point difference between drug and placebo on, for example, the Hamilton Depression Scale, that's approximately the between-group difference at which most of the drugs get approved. So, of course, we base drug approval on statistically significant changes. But in the real world, we don't really know what a clinically meaningful change or difference is, right? So that's still an issue.
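To put that two-point between-group difference in perspective, here is a small arithmetic sketch (the HAM-D standard deviation of 7 points is an assumed, illustrative value): it converts the difference to a standardized effect size and the sample size needed to detect it reliably, which is why trials of 100-150 per arm end up underpowered.

```python
# Translating a 2-point drug-placebo HAM-D difference into a
# standardized effect size and an 80%-power sample-size requirement.
import math

diff, sd = 2.0, 7.0            # sd = 7 is an assumed illustrative value
d = diff / sd                  # Cohen's d, roughly 0.29

# n per arm for 80% power, two-sided alpha = 0.05 (normal approximation)
z_alpha, z_beta = 1.96, 0.8416
n_per_arm = math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
print(f"d = {d:.2f}, about {n_per_arm} patients per arm for 80% power")
```

A between-group difference this small is statistically detectable only with large samples, and it says nothing by itself about the much larger within-patient changes Dr. Khin describes.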

So Tiffany might be able to add more on this topic.

TIFFANY FARCHIONE: Yeah, I mean I can add a little bit. So in terms of the depression studies, again, those were conducted before we adopted what we do now.

Like if we have a new indication, a new endpoint, something like that, we're going to ask companies to give us an a priori definition of clinically meaningful within-patient change. And we're looking, like Ni said, at the difference for an individual, not the difference between the drug and placebo, but what matters to patients: how much change do they need to have.

And then they can power their study to see some amount of difference that they think matters. But ultimately we have them anchor their studies to, you know, things like global assessments of functioning. If sponsors are using new endpoints, we have them do qualitative work so that we can understand what a change means on that given scale. There is a lot of additional work that goes into it now. But yeah, it's the within-patient change, not the between-group change, that ultimately matters the most.

CAROLYN RODRIGUEZ: Thank you so much. I felt like it was worth saying out loud. And, Dr. Farchione, I know you've done a lot of wonderful work. I heard you speak at ACNP about kind of more global measurements of functioning and really thinking about patients more globally, right. You can change a little bit on a scale, but does that translate into life functioning, work function, these are the things that we care about for our patients. So thank you both for that.

I see Dr. Rief wants to weigh in and then Dr. Lisanby.

WINFRIED RIEF: Just a small point. Increasingly, the question has to be asked about the benefit-harm ratio. It is an important issue, and it's very good that the question was asked. If the difference is just two points, we have to compare it with the risks and potential side effects. We cannot focus only on the benefits.

TIFFANY FARCHIONE: We always compare it to the risk regardless of the size of that difference.

CAROLYN RODRIGUEZ: All right. Dr. Lisanby.

SARAH “HOLLY” LISANBY: So this is an opportunity to talk about outcome measures.

CAROLYN RODRIGUEZ: Yes.

SARAH “HOLLY” LISANBY: And how sensitive they are to the intervention and also how proximal they are to the intervention with respect to mechanism. These are some points that Dr. Farchione raised in her talk as well. In psychiatry, the degree to which we can have outcome measures that are more proximal to what our intervention does to engage mechanisms, this might help us be able to measure and differentiate active treatment effects versus nonspecific placebo effects.

And this is part of the rationale of the Research Domain Criteria, or RDoC, research platform: to try to look at domains of function, to look at them across levels of analysis, and to have measurements that might not just be a clinical rating scale. It might be a neurocognitive task that's related to the cognitive function that is the target of a therapy, or a physiological measure that might be an intermediate outcome measure.

So I was hoping we might generate some discussion on the panel about regulatory pathways for these other types of outcome measures and how we might think about selecting outcome measures that may be better at differentiating real treatment effects from nonspecific placebo effects.

CAROLYN RODRIGUEZ: Thank you. I see Dr. Wager, I don't know if you had something to add onto Dr. Lisanby's point or if you had a separate question.

TOR WAGER: I would like to add on to that, if I may.

CAROLYN RODRIGUEZ: Okay. Yeah, of course.

TOR WAGER: I think that's a really important question. I'd love to hear people's opinions about it. Especially the FDA, you know, Tiffany's perspective on it.

Because for me to add to that, I just was wondering how strongly the FDA considers pathophysiology in mechanism of action and what counts as mechanism of action. So there are certainly certain pharmacological changes and cellular level changes that obviously seem to matter a lot. But what about fMRI, EEG, other kinds of indirect measures, do they count, have they counted as mechanistic evidence?

TIFFANY FARCHIONE: Yeah, so they haven't counted yet. And in part because, so far with EEG or fMRI, we see group differences, but those aren't the kinds of things that can help predict something for an individual patient.

It just goes back to the whole point about understanding pathophysiology and being able to, you know, not just describe that this drug works on this receptor but also working on this receptor has that relationship downstream to X, Y, and Z effects. And in a clinically meaningful way.

I think ultimately a lot of the things we do in terms of our biomarker qualification program and things like that, understanding not just that a drug has some action or interacts with some sort of biology but in what way and what kind of information does that give you that can help inform the trial or help inform, you know, your assessment of drug effect. That's also important. We're a long way off from being able to put things like that into a drug label I would say.

SARAH “HOLLY” LISANBY: I certainly agree with Dr. Farchione's comments.

And I would like to talk for a moment about devices. There are different regulations and different considerations in drug trial design versus device trial design. And we are already at a stage in the field of devices where individual physiology is on the label. That is the case with the SAINT technology, where individual resting-state functional connectivity MRI is used to target, on a per-patient basis, where to put the TMS coil.

And I would say the jury is still out on the studies that unpack SAINT, to show whether that individualized targeting is essential or whether it's the accelerated intermittent theta burst and the ten treatments a day and so on.

Regardless, it is on the label. It's in the instructions for how to use the product. And so I think that that might be a sign of where things may be going in the future. And when we think about the way focal brain stimulation is administered, whether it's non-invasive or surgically implanted, we're targeting circuits in the brain. And being able to measure the impact of that targeting stimulation on the functioning of that circuit, EEG or fMRI might be the right readout and it might give some evidence.

I think even so, while those measures may be useful in identifying treatments and optimizing their dosing, ultimately I understand from my FDA colleagues that we'll still need to demonstrate that the intervention, whatever it is, improves quality of life and the clinical condition for those patients.

But it may be an important part of getting the treatments to that phase where they could be reviewed by FDA.

CAROLYN RODRIGUEZ: Thank you so much. That's a good point. Anyone else to contribute to that? I don't see any other hands raised.

Maybe I'll pass it to Dr. Talkovsky and see if there are any other questions that you see on the Q&A that we could continue to ask the panel.

ALEXANDER TALKOVSKY: Yeah, there was one that jumped out to me a bit earlier. There was a bit of a discussion about warmth and competence as well as a perceived tradeoff between the two. And also some ideas about manipulating them as experimental variables that I thought was interesting. I saw, Dr. Rief, you had jumped into that discussion, too.

I thought that was an important enough topic that would be worth spending a little bit more time here in the group discussion making sure that everybody sees it. So I'll throw it back to you, Dr. Rief.

If you could maybe even elaborate on the answer you gave in there about warmth and competence and those as experimental variables, too.

WINFRIED RIEF: The major point I want to make is that we have to control these variables. If we don't control them, we risk that they differ between the two or three arms in our trials, and then we cannot interpret the results. That means we have to assess them and make sure that they are comparable between the different treatments. This is something I can really recommend; I think it makes a lot of sense. There are other points where I'm not sure what to recommend. Some people suggest we limit or minimize warmth and competence to minimize potential placebo effects. This is where the tradeoff comes into the game. If we minimize warmth and competence, people are not motivated to participate, they might discontinue treatment, and they are not willing to cope with side effects.

But if we maximize warmth and competence, we risk that placebo effect is bolstering everything. So at this level, at this stage I would say let's try to keep it in an average level. But really assess it and make sure that it's comparable between the different treatment arms.

ALEXANDER TALKOVSKY: Dr. Atlas, I see your hand up.

LAUREN ATLAS: Yeah. I love this question because I think it depends what the goal is. So if the goal is to reduce placebo to find the best benefit of the drug, then yes, you know, in clinical trials when people never see the same rater, for instance, that reduces the likelihood of building relationship. And there's all these different kinds of features that if you really want to minimize placebo then we can use these things in that way.

On the other hand, if the goal is to have the best patient outcomes, then I think we want to do the exact opposite and essentially identify exactly how these features improve patient's wellbeing and heighten them. And so I think really that is part of why I think talking about placebo is so fascinating because it both tells us how to improve patient outcomes and then also reduce them in the context of trials. So I think it really depends kind of what context you're talking about.

ALEXANDER TALKOVSKY: Dr. Rief.

WINFRIED RIEF: Yeah, may I just add a point, because I missed it and Lauren reminded me of it.

Most of us assume that we have to reduce the placebo effects to maximize the difference between placebo and drug effects. And this is an assumption; it is not something that we really know. We have seen studies of antidepressants and SSRIs, and we know of studies of analgesics, showing that if you reduce the placebo mechanisms to a minimum, you are not able to show a difference from the drug afterward, because the drug effects are reduced.

In other words, a good drug needs some minimum of placebo mechanisms to show its full action. Therefore, the idea that minimizing placebo mechanisms increases the difference between placebo and drug is an assumption we have to be concerned about. And maybe for some drugs it's much better to have an average amount of placebo mechanisms.

ALEXANDER TALKOVSKY: Dr. Wager, let's go to you. Then I think we have another question that we want to tackle in the chat after you wrap up.

TOR WAGER: Yep, that sounds good. I see it, too. But just to weigh in on this. Because I think this is one of the most important issues to me. And I think Winfried also just wrote a review about this. And there have been a couple of others. Which is that there is always this tendency to want to screen out placebo responders. It doesn't seem to work very well most of the time in clinical trials.

And if you have a synergistic rather than an additive interaction between an active drug element and a placebo factor like motivation or expectation, then screening out placebo responders also screens out the drug responders.

And so I think there is this opportunity to test this more, to test, you know, jointly the effects of active treatments whether it's neuromodulation or drugs or something else. And factors like expectations or perceived warmth and competence of the care provider.

So I guess I'm wondering if in the neurostimulation world are there many studies like that or any studies like that because they seem to be very separate worlds, right? You either study the device or you study the psychosocial aspects.

SARAH “HOLLY” LISANBY: Well, I can, and maybe others can as well. It's a good point. Lauren, your talk was really beautiful. And my take-home point from it is that in a device trial, even if we're not studying the effect of the device operator, the effect is occurring in the trial.

And so measuring these aspects of the whole context of care I think can help us sort that out. And in order to do that, I think it could be helpful for investigators who are designing device trials to partner with investigators who have that expertise. Also on the question of expertise, I was listening very carefully to the talks about psychosocial interventions, and maybe the ancillary effects of the procedure are like a psychosocial intervention, so we might benefit from mixed-methods approaches that pull from both fields to really better understand what we're doing.

And then there are also trials that use drugs and devices together. So being able to have cross-pollination across the fields I think would be very useful both with respect to our selection of measures to test the integrity of the blind as well as looking at expectancy and even measuring anything about the provider which is usually not done I would just say for device studies. We're usually not even reporting anything about the provider or the perceptions of the subject about the context of their care.

CAROLYN RODRIGUEZ: I wanted to also jump in, in terms of topics. For psychedelic-assisted therapy, Harriet de Wit has a very good question here about special considerations in the testing of placebos. This is something that has come up a lot. And Boris Heifets, among others, has really gotten us to think about different kinds of designs to disguise the effects of ketamine, for example, with general anesthesia. There are other designs, but there are open questions in this space.

So how important is it when you have a very active placebo that can have empathogenic effects or psychedelic effects in terms of the placebo effect?

TIFFANY FARCHIONE: Yeah, I figure I should probably jump in on this one first.

So, you know, I will say that when it comes to the psychedelics, whether it's a classic psychedelic like psilocybin or the empathogen/entactogen types like MDMA, blinding is practically impossible. Folks know if they are on active drug or placebo. And that makes it really challenging to have an adequate and well-controlled study, right?

On the one hand, we still need to have placebo-controlled studies so that we can get as accurate an assessment of the safety of the drug as we can. On the other hand, we've really been struggling to figure out what the best design is. Trying to add some kind of active comparator, choosing something that might mimic some aspect of the psychedelic effect without actually having a treatment effect of any kind, is next to impossible. People still know. People have talked about anything from niacin to benzos, a little bit of this, a little bit of that. They know. They just know.

So the best that we've come up with so far is asking for at least one placebo-controlled study so we can get a clear idea of safety. And we've suggested trying to use complementary designs. For instance, it is still possible to have a dose-response study serve as an adequate and well-controlled study; then there is no placebo there. If you can see a linear increase in treatment effect across a low dose, mid dose, and high dose in that kind of study, that is helpful to us. One of the other things we ask for is some assessment like an unblinding questionnaire: Do you think you got the active drug? Yes or no. Do you think you got placebo?

And then one of the things we're starting to ask for now, in addition to that, is that end-of-study assessment of whether folks thought they were on active drug, not just from the patients but also from the raters. Because a lot of times the raters can figure out what the person was on, too, and that could introduce some bias.

Now we're starting to think about asking for a pre-dose expectancy questionnaire of some kind. And so even if we can't necessarily control for the unblinding issues and the expectancy and everything, at least we can have more data to assess the impact on the study and use those as, you know, covariates in the analyses. But yeah, we don't have the right answer yet. We are learning as we go, and we are learning very rapidly.
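The unblinding questionnaire Dr. Farchione describes is often summarized with a blinding index. Here is a sketch of a simplified per-arm index (a reduced form of the Bang-style index that ignores "don't know" answers; the guess counts are invented for illustration): 0 means guessing at chance, 1 means complete unblinding.

```python
# Simplified per-arm blinding index from end-of-trial guesses.
# Counts below are hypothetical, for illustration only.
def blinding_index(n_guess_drug, n_total, arm):
    """Return 2*p(correct) - 1 for one arm: 0 = random, 1 = fully unblinded."""
    p_guess_drug = n_guess_drug / n_total
    p_correct = p_guess_drug if arm == "drug" else 1 - p_guess_drug
    return 2 * p_correct - 1

# e.g. 18 of 20 drug-arm subjects guess "drug"; 7 of 20 placebo-arm
# subjects guess "drug" (so 13 correctly guess "placebo").
bi_drug = blinding_index(n_guess_drug=18, n_total=20, arm="drug")
bi_placebo = blinding_index(n_guess_drug=7, n_total=20, arm="placebo")
print(bi_drug, bi_placebo)
```

The same tabulation can be applied to rater guesses, which is the extension to raters mentioned above.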

CAROLYN RODRIGUEZ: That may be a plug for NIMH to do like another -- this placebo panel is amazing. We could keep going. I see we have nine minutes left. I'm going to pass it back to Dr. Talkovsky.

And but I know Dr. Lisanby and Dr. Wager have their hands up so I'll pass it back to Alex.

ALEXANDER TALKOVSKY: Thank you. Because we're short on time, with apologies, Dr. Lisanby and Dr. Wager, there is a question I want to address from the Q&A box that I saw a couple of our panelists already addressed in text but seems worth bringing up here as a group.

Are we confident that the placebo effect and specific affect are additive and not interactive?

LAUREN ATLAS: So I'll just -- can I -- oh, sorry.

CAROLYN RODRIGUEZ: Dr. Atlas, yes, that was quick. You won the buzzer.

ALEXANDER TALKOVSKY: Yes, start us off.

LAUREN ATLAS: I had already responded and was putting something in the chat kind of addressing the dose in the same context.

So basically one approach for testing additivity is to use the balanced placebo design, where people receive drug or control and that is crossed with instructions about drug administration. So people receive the drug under open administration, they also receive placebo, and they receive the drug when they believe they are not getting treatment, which is hidden administration.

And this has been tested with nicotine effects on -- so nicotine, caffeine. We've done it in the context of remifentanil. There has been a couple other trials of different analgesics. It was really developed in the context of studies of alcohol.

We found, for instance, that depending on the endpoint, we have different conclusions about additivity. So when it came to pain, we found additive effects on pain. But we found pure drug effects on neurologic pain signature responses during remifentanil regardless of whether people knew they were receiving the drug or not. We found interactions when we looked at effects on intention.

And other groups, Christian’s group, has found interactions when they did the same exact trial but used lidocaine. And then furthermore, this is what I think we were just talking about in the context of doses. If people have unblinding at higher doses then there is going to be less of an effect of the context surrounding it. So expectations could grow with higher drug effects.

So I think that the question of additivity or interactions really may depend on the dose, the specific drug, and the specific endpoint. I don't think we can really conclude that.

And so even though balanced placebo designs require a level of deception, I think there is really an urgent need to understand how expectations combine with drugs to influence outcomes.
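The additivity question in the balanced placebo design comes down to a 2x2 interaction contrast. Here is a hypothetical simulation (not Dr. Atlas's data; the effect sizes are invented) where drug and expectation interact synergistically, and the analysis recovers the interaction:

```python
# Balanced placebo design sketch: drug (yes/no) crossed with
# instruction ("told drug" vs. "told no drug"). A nonzero interaction
# contrast means drug and expectancy effects are not additive.
import numpy as np

rng = np.random.default_rng(1)
n = 50  # subjects per cell (hypothetical)
cells = {}
for drug in (0, 1):
    for told in (0, 1):
        # Synergy: the drug works better when subjects expect treatment.
        mu = 1.0 * drug + 1.0 * told + 1.5 * drug * told
        cells[(drug, told)] = rng.normal(mu, 1.0, n)

# Interaction: (open drug - hidden drug) - (open placebo - hidden placebo)
est = ((cells[(1, 1)].mean() - cells[(1, 0)].mean())
       - (cells[(0, 1)].mean() - cells[(0, 0)].mean()))
se = np.sqrt(sum(c.var(ddof=1) / n for c in cells.values()))
print(f"interaction = {est:.2f} (SE {se:.2f}), z = {est / se:.1f}")
```

Under pure additivity the contrast would hover around zero; here it recovers the built-in synergy, which is the pattern that would make screening out placebo responders also remove drug responders.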

So yeah, I'm really glad somebody asked that question.

CAROLYN RODRIGUEZ: Thank you, Dr. Atlas. I just want to acknowledge Dr. Cristina Cusin who is the other cochair for the panel. She's on, and I want to be mindful of the time and make sure that she and Dr. Wager have the final words or thoughts or if you want to give the panelist the thoughts.

But we wanted to just pass it back to you so you have plenty of time to say any of the things that you wanted to say to wrap things up.

CRISTINA CUSIN: I will leave it to Tor if he has any concluding remarks. My job will be to summarize the wonderful presentations from today and give a brief overview of the meeting tomorrow. It was amazing.

TOR WAGER: Since we have a few minutes left, I would like to go back to what Holly was going to say. We have about five minutes. I'd love to use that time to continue that conversation.

SARAH “HOLLY” LISANBY: I'm assuming that you're referring to the psychedelic question. I agree there is no perfect answer to that and it's very complicated. And there are different views on how to address it.

One of my concerns is therapist unblinding and the potential impact of therapist unblinding on the therapy that is being administered. Because, as we've heard, it's very likely that the patient receiving a psychedelic intervention will be unblinded, and so might the therapist, because they know what a patient going through psychedelic-assisted therapy typically experiences.

And one thought I have about that could be to measure the therapy, record it, quantify adherence to the manual. At least document what is going on in the therapy interaction. That would give you some data that might help you interpret and better understand whether therapist unblinding is impacting the psychosocial aspects of the intervention because we do -- we've heard from the field that the setting and aspects and context of the use of the psychedelic are an important part. So let's measure that, too.

TOR WAGER: It's really interesting. I want to note there is another -- Boris Heifets has put in the chat there is something that is a different take.

There might be more things to discuss about whether it's possible to blind these things in some ways and some diversity of opinions there. But you can see the chat comment and we can think about that.

I have one other question about that which is that to me I understand the unblinding problem and that seems to be something we're all really concerned about. What about what you call a sensitivity analysis type of design which is if you can independently manipulate expectations or context and maybe some of these other kinds of drug manipulations that induce another kind of experience, right, that is not the target drug, then you can see whether the outcomes are sensitive to those things or not.

So for some outcomes, they might -- it might not matter what you think or feel or whether you had a, you know, crazy experience or not. And if it doesn't, then that is ignorable, right? So you can manipulate that independently. You don't have to blind it out of your, you know, main manipulation. Or it might turn out to be that yes, that outcome is very sensitive to those kinds of manipulations. So I was wondering what you think about this kind of design.

TIFFANY FARCHIONE: I'm not quite sure that I followed that entirely.

TOR WAGER: Yeah, it's really like so you have one that is the psychedelic drug and you don't unblind it. But then you do an independent manipulation to try to manipulate the non-specific factors. If it's, you know, having a, you know, sort of unique experience or having a -- yeah, or just treatment expectations.

TIFFANY FARCHIONE: I guess that's the piece I'm not quite understanding because I'm not sure what you would be manipulating and how you would accomplish that.

TOR WAGER: In the simplest way, the expectation piece is simpler because you can induce expectations in other ways as well, right? By, you know, giving people suggestions that it's going to really impact them. Or, for example, a design that we've used is to say okay, everyone is -- you know, if you get this drug it's going to make you, I don't know, you know, it's going to give you these sort of strange experiences. But if it gives you these experiences, that means it's not working for you, that's bad. Another group you say this is a sign that it's working.

So you take the subjective symptoms and give people different instructions that those are going to be either helpful or harmful and see if that matters.

TIFFANY FARCHIONE: Yeah, I mean I think if you are giving different people different instructions now you are introducing a different source of potential variability so that kind of makes me a little bit nervous.

I guess what I would say is that if somebody had, you know, some sort of creative problem solving approach to dealing with this, I'd love to hear about it. I would love to see a proposal and a protocol. I would say it's probably best to do in an exploratory proof of concept way first before trying to implement a bunch of fancy bells and whistles in a pivotal study that would try to support the actual approval of a product.

But again, because we're learning as we go, we do tend to be pretty open to different design ideas here and different strategies. You know, as long as people are being monitored appropriately because that piece we don't really budge on.

CAROLYN RODRIGUEZ: I see we're at time. Maybe give Dr. Lisanby the last word. Just some food for thought: it would be nice to have a toolkit to help clinical trialists think through how to minimize placebo effects. Wish list.

SARAH “HOLLY” LISANBY: Yeah, and I just wanted to add to that last question that this is part of why we're sponsoring this workshop. We want to hear from you what are the gaps in the field, what research needs to be done.

Because we are interested in developing safe and effective interventions, be they psychosocial, drug, device or some combination.

And the research studies that we support use placebos or other forms of control. We're interested in hearing from you where the research gaps are. What sorts of manipulations, like the expectation manipulations you were talking about, Tor, and how to do them. All of those are really interesting research topics. Whether that is in the design of a pivotal trial or not, it doesn't necessarily need to be.

We're interested in mapping that gap space so we can figure out how to be most helpful to the field.

TOR WAGER: That's a great last word. We still have tomorrow to solve it all. Hope you all join us tomorrow. Looking forward to it. Thank you.

(Adjourned)

VIDEO : Shane Prior responds to questions about a 2021 social media post where he referred to domestic violence groups as a 'DV industry'

research questions on media violence

  • X (formerly Twitter)

Shane Prior responds to questions about a 2021 social media post where he referred to domestic violence groups as a 'DV industry'

  • Domestic Violence

Stories from ABC News

Timor-leste government bulldozes homes to make way for pope.

research questions on media violence

Israeli military frees hostage from inside tunnel in Gaza

research questions on media violence

Pacific leaders endorse Australia's policing initiative

research questions on media violence

Labor faces backlash from angry CFMEU members

research questions on media violence

ABC News Verify reveals the disadvantage of Kyiv's long-range firepower

research questions on media violence

IMAGES

  1. (PDF) Psychology’s Multiple Concerns About Research on the Effects of

    research questions on media violence

  2. Survey on Media Violence

    research questions on media violence

  3. (PDF) Media Violence and Social NeuroscienceNew Questions and New

    research questions on media violence

  4. (PDF) Media Violence Research and Youth Violence Data: Why Do They

    research questions on media violence

  5. (PDF) Impact of media violence on aggressive attitude for adolescents

    research questions on media violence

  6. (PDF) Types of Media Violence and Degree of Acceptance in under-18s

    research questions on media violence

COMMENTS

  1. Violence, Media Effects, and Criminology

    Media violence and its impact on audiences are among the most researched and examined topics in communications studies (Hetsroni, 2007). Yet, debate over whether media violence causes aggression and violence persists, particularly in response to high-profile criminal incidents. Blaming video games, and other forms of media and popular culture ...

  2. Twenty Questions (and Answers) About Media Violence and ...

    Robust research has found an association between exposure to media violence and real-life aggression in children and teens. Other effects include desensitization, fear, and attitudes that violence is a means of resolving conflict. Ongoing research finds similar associations between exposure to video game violence and real-life attitude and ...

  3. Twenty Questions (and Answers) About Media Violence and Cyberbullying

    For decades, pediatricians have been concerned about the impact of media on the health and well-being of children and adolescents. Robust research has found an association between exposure to media violence and real-life aggression in children and teens. Other effects include desensitization, fear, and attitudes that violence is a means of resolving conflict. Ongoing research finds similar ...

  4. Twenty Questions (and Answers) About Media Violence and Cyberbullying

    One theory is that there are 5 factors involved in teens' mass shootings, and the more factors at play, the greater the likelihood of a disaster: (1) social isolation; (2) a history of being bullied or abused; (3) mental illness; (4) heavy exposure to first-person shooter video games; and (5) easy access to guns.10.

  5. The Impact of Electronic Media Violence: Scientific Theory and Research

    For better or worse the mass media are having an enormous impact on our children's values, beliefs, and behaviors. Unfortunately, the consequences of one particular common element of the electronic mass media has a particularly detrimental effect on children's well being. Research evidence has accumulated over the past half-century that ...

  6. Violence in the media: Psychologists study potential harmful effects

    The advent of video games raised new questions about the potential impact of media violence, since the video game player is an active participant rather than merely a viewer. 97% of adolescents age 12-17 play video games—on a computer, on consoles such as the Wii, Playstation, and Xbox, or on portable devices such as Gameboys, smartphones, and tablets.

  7. The effects of violent media content on aggression

    The most straightforward of research designs, cross-sectional media violence studies usually involve surveys that assess, at minimum, violent media consumption and aggression. Classic examples of such studies can be found in the 1972 U. S. Surgeon General's report on the impact of televised violence [ 3 , 9 ].

  8. Twenty Questions (and Answers) About Media Violence and Cyberbullying

    The Workgroup on Media Violence and Violent Video Games reviewed numerous meta-analyses and other relevant research from the past 60 years, with an emphasis on violent video game research.

  9. (PDF) Violent Media Effects: Theory and Evidence

    Abstract. Electronic media is an omnipresent form of entertainment in contemporary society. A large body of empirical evidence provides support for the notion that violent media ...

  10. Violent Media in Childhood and Seriously Violent Behavior in

    The current study aims to fill noted research gaps. First, while extant research examines exposure to violence on television and in video games, exposures through other media, such as music, are less well studied yet constitute a large part of youth media diets. ... music, websites of real people, and websites of cartoons. A similar question ...

  11. The effects of violent media content on aggression

    Abstract. Decades of research have shown that violent media exposure is one risk factor for aggression. This review presents findings from recent cross-sectional, experimental, and longitudinal studies, demonstrating the triangulation of evidence within the field. Importantly, this review also illustrates how media violence research has started ...

  12. The Influence of Media Violence on Intimate Partner Violence

    Introduction. In the United States, more than 12 million men and women become victims of domestic violence each year. In fact, every minute, roughly 20 Americans are victimized at the hands of an intimate partner. Although both men and women are abused by an intimate partner, women have a higher likelihood of such abuse, with those ages 18-34 years being at the highest risk of victimization.

  13. The Influence of Media Violence on Intimate Partner Violence ...

    Research suggests that the representation of violence against women in the media has resulted in an increased acceptance of attitudes favoring domestic violence. While prior work has investigated the relationship between violent media exposure and violent crime, there has been little effort to empirically examine the relationship between specific forms of violent media exposure and the ...

  14. The Impact of Media Violence on Child and Adolescent Aggression

    As a result, children and adolescents frequently encounter violence in the media in a variety of forms, which has an effect on their behavior. Previous research has found that exposure to media ...

  15. Media Violence and Anxiety: A Meta-analysis on the Outcomes of

    A major area of interest and academic research now centers on the connection between media violence and its potential impact on psychological and behavioral aspects of people. The complex question of how media violence exposure may affect people's propensity for aggression-based anxiety rests at the junction of these concerns.

  16. The Facts on Media Violence

    Overall, most of the research suggests media violence is a risk factor for aggression, but some experts in the field still question whether there's enough evidence to conclusively say there's ...

  17. Fact Sheet on Media Violence

    Fact Sheet on Media Violence. This Fact Sheet answers some frequently-asked questions about social science research into the effects of media violence. The bottom line is that despite the claims of some psychologists and politicians, the actual research results have been weak and ambiguous. This should not be surprising: media violence is so ...

  18. Effects of violence in mass media

    The study of violence in mass media analyzes the degree of correlation between themes of violence in media sources (particularly violence in video games, television, and films) and real-world aggression and violence over time. Many social scientists support the correlation [1] [2] [3]; however, some scholars argue that media research has methodological problems and that findings are exaggerated.

  19. Report of the Media Violence Commission

    What follows is the final report of the Media Violence Commission, delivered in May 2012. This statement was written by a group of internationally recognized active researchers in the field of media violence to summarize current knowledge about the strength of the link between violent media ...

  20. Media Violence and Aggression among Young Adults

    Keywords: Aggression, Media Violence, Young Adults. Violence has become a major part of life in many schools, homes, and communities. It is especially devastating to children and adolescents who are vulnerable because of emotional, social, and cognitive difficulties. In this new environment, radio, television, movies, videos, video games, and computer networks ...

  21. Media Violence Exposure Scale (MVES)

    The Media Violence Exposure Scale (MVES) is a 25-item questionnaire used to measure an individual's exposure to violence in various forms of media, including television, movies, video games, and music. The scale was developed by Krahé and Möller in 2010 and has been widely used in research on media violence. The MVES questionnaire consists of ...
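Scoring a multi-item instrument like this can be illustrated with a short sketch. The 25-item count comes from the description above, but the 1-5 frequency rating range and the mean-based total are assumptions made for illustration, not the published MVES scoring rules:

```python
# Illustrative scoring for a 25-item media-exposure questionnaire.
# The actual item wording and scoring of the MVES (Krahé & Möller,
# 2010) are not reproduced here; this sketch assumes each item is
# rated on a hypothetical 1-5 frequency scale.

NUM_ITEMS = 25

def exposure_score(responses):
    """Return the mean item rating as an overall exposure score."""
    if len(responses) != NUM_ITEMS:
        raise ValueError(f"expected {NUM_ITEMS} item ratings")
    if any(not 1 <= rating <= 5 for rating in responses):
        raise ValueError("ratings must fall in the 1-5 range")
    return sum(responses) / NUM_ITEMS

# Example: a respondent reporting moderate exposure on every item.
print(exposure_score([3] * NUM_ITEMS))  # -> 3.0
```

Averaging rather than summing keeps the score on the same 1-5 scale as the individual items, which makes scores easier to compare across instruments with different item counts.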
