Ethical Considerations in Research | Types & Examples

Published on October 18, 2021 by Pritha Bhandari. Revised on May 9, 2024.

Ethical considerations in research are a set of principles that guide your research designs and practices. Scientists and researchers must always adhere to a certain code of conduct when collecting data from people.

The goals of human research often include understanding real-life phenomena, studying effective treatments, investigating behaviors, and improving lives in other ways. What you decide to research and how you conduct that research involve key ethical considerations.

These considerations work to

  • protect the rights of research participants
  • enhance research validity
  • maintain scientific or academic integrity

Table of contents

  • Why do research ethics matter?
  • Getting ethical approval for your study
  • Types of ethical issues
  • Voluntary participation
  • Informed consent
  • Anonymity
  • Confidentiality
  • Potential for harm
  • Results communication
  • Examples of ethical failures
  • Frequently asked questions about research ethics

Why do research ethics matter?

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe for research subjects.

You’ll balance pursuing important research objectives with using ethical research methods and procedures. It’s always necessary to prevent permanent or excessive harm to participants, whether inadvertent or not.

Defying research ethics will also lower the credibility of your research because it’s hard for others to trust your data if your methods are morally questionable.

Even if a research idea is valuable to society, it doesn’t justify violating the human rights or dignity of your study participants.


Getting ethical approval for your study

Before you start any study involving data collection with people, you’ll submit your research proposal to an institutional review board (IRB).

An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution’s code of conduct. They check that your research materials and procedures are up to code.

If successful, you’ll receive IRB approval, and you can begin collecting data according to the approved procedures. If you want to make any changes to your procedures or materials, you’ll need to submit a modification application to the IRB for approval.

If unsuccessful, you may be asked to re-submit with modifications or your research proposal may receive a rejection. To get IRB approval, it’s important to explicitly note how you’ll tackle each of the ethical issues that may arise in your study.

Types of ethical issues

There are several ethical issues you should always pay attention to in your research design, and these issues can overlap with each other.

You’ll usually outline ways you’ll deal with each issue in your research proposal if you plan to collect data from participants.

  • Voluntary participation: Your participants are free to opt in or out of the study at any point in time.
  • Informed consent: Participants know the purpose, benefits, risks, and funding behind the study before they agree or decline to join.
  • Anonymity: You don’t know the identities of the participants. No personally identifiable data is collected.
  • Confidentiality: You know who the participants are, but you keep that information hidden from everyone else. You anonymize personally identifiable data so that it can’t be linked to other data by anyone else.
  • Potential for harm: Physical, social, psychological, and all other types of harm are kept to an absolute minimum.
  • Results communication: You ensure your work is free of plagiarism and research misconduct, and you accurately represent your results.

Voluntary participation

Voluntary participation means that all research subjects are free to choose to participate without any pressure or coercion.

All participants are able to withdraw from, or leave, the study at any point without feeling an obligation to continue. Your participants don’t need to provide a reason for leaving the study.

It’s important to make it clear to participants that there are no negative consequences or repercussions to their refusal to participate. After all, they’re taking the time to help you in the research process, so you should respect their decisions without trying to change their minds.

Voluntary participation is an ethical principle protected by international law and many scientific codes of conduct.

Take special care to ensure there’s no pressure on participants when you’re working with vulnerable groups of people who may find it hard to stop the study even when they want to.


Informed consent

Informed consent refers to a situation in which all potential participants receive and understand all the information they need to decide whether they want to participate. This includes information about the study’s benefits, risks, funding, and institutional approval.

You make sure to provide all potential participants with all the relevant information about

  • what the study is about
  • the risks and benefits of taking part
  • how long the study will take
  • your supervisor’s contact information and the institution’s approval number

Usually, you’ll provide participants with a text for them to read and ask them if they have any questions. If they agree to participate, they can sign or initial the consent form. Note that this may not be sufficient for informed consent when you work with particularly vulnerable groups of people.

If you’re collecting data from people with low literacy, make sure to verbally explain the consent form to them before they agree to participate.

For participants with very limited English proficiency, you should always translate the study materials or work with an interpreter so they have all the information in their first language.

In research with children, you’ll often need informed permission for their participation from their parents or guardians. Although children cannot give informed consent, it’s best to also ask for their assent (agreement) to participate, depending on their age and maturity level.

Anonymity

Anonymity means that you don’t know who the participants are and you can’t link any individual participant to their data.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, and videos.

In many cases, it may be impossible to truly anonymize data collection. For example, data collected in person or by phone cannot be considered fully anonymous because some personal identifiers (demographic information or phone numbers) are impossible to hide.

You’ll also need to collect some identifying information if you give your participants the option to withdraw their data at a later stage.

Data pseudonymization is an alternative method where you replace identifying information about participants with pseudonymous, or fake, identifiers. The data can still be linked to participants but it’s harder to do so because you separate personal information from the study data.
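The pseudonymization workflow described above can be sketched in a few lines of code. The sketch below is purely illustrative (the function name, field names, and ID format are invented for this example, not taken from any specific tool): it swaps each participant's name for a random ID and collects the name-to-ID key in a separate table, which in practice would be stored apart from the study data under restricted access.

```python
import secrets


def pseudonymize(records, id_field="name"):
    """Replace each participant's identifying field with a random
    pseudonymous ID.

    Returns the cleaned study data plus a separate key table that
    links pseudonyms back to identities. `records` is a list of
    dicts; `id_field` names the identifying column. Both names are
    illustrative choices for this sketch.
    """
    key_table = {}   # pseudonym -> real identity; store separately
    study_data = []
    for record in records:
        pseudonym = "P-" + secrets.token_hex(4)  # e.g. "P-a3f29c01"
        key_table[pseudonym] = record[id_field]
        # Copy every field except the identifying one.
        cleaned = {k: v for k, v in record.items() if k != id_field}
        cleaned["participant_id"] = pseudonym
        study_data.append(cleaned)
    return study_data, key_table


participants = [
    {"name": "Alice Example", "score": 42},
    {"name": "Bob Example", "score": 37},
]
data, keys = pseudonymize(participants)
# `data` now holds scores keyed only by pseudonyms;
# `keys` is the re-identification table to be kept apart.
```

Because the key table is the only link between pseudonyms and identities, destroying it turns the pseudonymized dataset into an effectively anonymous one, assuming no other identifying fields remain in the records.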

Confidentiality

Confidentiality means that you know who the participants are, but you remove all identifying information from your report.

All participants have a right to privacy, so you should protect their personal data for as long as you store or use it. Even when you can’t collect data anonymously, you should secure confidentiality whenever you can.

Some research designs aren’t conducive to confidentiality, but it’s important to make all attempts and inform participants of the risks involved.

Potential for harm

As a researcher, you have to consider all possible sources of harm to participants. Harm can come in many different forms.

  • Psychological harm: Sensitive questions or tasks may trigger negative emotions such as shame or anxiety.
  • Social harm: Participation can involve social risks, public embarrassment, or stigma.
  • Physical harm: Pain or injury can result from the study procedures.
  • Legal harm: Reporting sensitive data could lead to legal risks or a breach of privacy.

It’s best to consider every possible source of harm in your study as well as concrete ways to mitigate them. Involve your supervisor to discuss steps for harm reduction.

Make sure to disclose all possible risks of harm to participants before the study to get informed consent. If there is a risk of harm, prepare to provide participants with resources or counseling or medical services if needed.

If your survey includes questions that may bring up negative emotions, inform participants about the sensitive nature of the survey beforehand and assure them that their responses will be confidential.

Results communication

The way you communicate your research results can sometimes involve ethical issues. Good science communication is honest, reliable, and credible. It’s best to make your results as transparent as possible.

Take steps to actively avoid plagiarism and research misconduct wherever possible.

Plagiarism

Plagiarism means submitting others’ works as your own. Although it can be unintentional, copying someone else’s work without proper credit amounts to stealing. It’s an ethical problem in research communication because you may benefit at the expense of other researchers.

Self-plagiarism is when you republish or re-submit parts of your own papers or reports without properly citing your original work.

This is problematic because you may benefit from presenting your ideas as new and original even though they’ve already been published elsewhere in the past. You may also be infringing on your previous publisher’s copyright, violating an ethical code, or wasting time and resources by doing so.

In extreme cases of self-plagiarism, entire datasets or papers are sometimes duplicated. These are major ethical violations because they can skew research findings if taken as original data.

For example, you might notice that two published studies from different years have highly similar sample sizes, locations, treatments, and results, and that the studies share one author in common. Such overlap is a red flag for duplicated data.

Research misconduct

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement about data analyses.

Research misconduct is a serious ethical issue because it can undermine academic integrity and institutional credibility. It leads to a waste of funding and resources that could have been used for alternative research.

A notorious example is Andrew Wakefield’s retracted 1998 study claiming a connection between the MMR vaccine and autism. Later investigations revealed that he fabricated and manipulated his data to show a nonexistent link between vaccines and autism. Wakefield also neglected to disclose important conflicts of interest, and his medical license was taken away.

This fraudulent work sparked vaccine hesitancy among parents and caregivers. The rate of MMR vaccinations in children fell sharply, and measles outbreaks became more common due to a lack of herd immunity.

Examples of ethical failures

Research scandals with ethical failures are littered throughout history, but some took place not that long ago.

Some scientists in positions of power have historically mistreated or even abused research participants to investigate research problems at any cost. These participants were prisoners or patients under the researchers’ care, or they otherwise trusted the researchers to treat them with dignity.

To demonstrate the importance of research ethics, we’ll briefly review two research studies that violated human rights in modern history.

During World War II, Nazi doctors conducted brutal medical experiments on concentration camp prisoners without their consent. These experiments were inhumane and resulted in trauma, permanent disabilities, or death in many cases.

After some Nazi doctors were put on trial for their crimes, the Nuremberg Code of research ethics for human experimentation was developed in 1947 to establish a new standard for human experimentation in medical research.

In the Tuskegee Syphilis Study, which began in 1932, US Public Health Service researchers told hundreds of Black men with syphilis that they were receiving free medical care. In reality, the actual goal was to study the effects of the disease when left untreated, and the researchers never informed participants about their diagnoses or the research aims.

Although participants experienced severe health problems, including blindness and other complications, the researchers only pretended to provide medical care.

When treatment became possible in 1943, 11 years after the study began, none of the participants were offered it, despite their health conditions and high risk of death.

Ethical failures like these resulted in severe harm to participants, wasted resources, and lower trust in science and scientists. This is why all research institutions have strict ethical guidelines for performing research.

Frequently asked questions about research ethics

What are ethical considerations in research?
Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication. Scientists and researchers must always adhere to a certain code of conduct when collecting data from others. These considerations protect the rights of research participants, enhance research validity, and maintain scientific integrity.

Why do research ethics matter?
Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

What is the difference between anonymity and confidentiality?
Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations. You can only guarantee anonymity by not collecting any personally identifying information, such as names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos. You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

What is research misconduct?
Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.


Bhandari, P. (2024, May 09). Ethical Considerations in Research | Types & Examples. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/research-ethics/

How Should the Three R's Be Revised and Why?

The Principles of Humane Experimental Technique established what many know today as the “3 R’s” (refinement, reduction, and replacement) when it was published in 1959. Since their formulation, these principles have guided decision-making for many about the uses of nonhuman animal subjects in laboratory-based research. Discussion about how to amend or replace the 3 R’s is ongoing, driven mainly by philosophical ethics approaches to nonhuman animal rights and by scientific advancement. This article explores the merits and drawbacks of possible updates to and interpretations of the 3 R’s.

Russell and Burch published The Principles of Humane Experimental Technique in 1959, which established the “3 R’s” as key principles that govern the use of nonhuman animals in a laboratory setting.[1] Today, the 3 R’s (refinement, reduction, and replacement) make up the most well-known ethical framework for conducting scientific research using nonhuman animals. They are almost universally accepted by responsible scientists throughout the world and form the basis of many legal and regulatory systems that govern laboratory nonhuman animal use.[2-7]

However, it is no longer obvious that the 3 R’s as originally conceived represent a sufficient framework for the use of animals in research. Since the initial formulation of the 3 R’s, there has been considerable discussion about how to amend or replace them, driven in part by the writings of philosophers such as Peter Singer[8] and Tom Regan[9] and by organizations that advocate for nonhuman animal rights. Furthermore, science and scientific methods have advanced in the past 6 decades, and the need for, and use of, nonhuman animals has changed. In addition, the translatability of animal models to human conditions has been called into question.[10] These factors raise the question of how to revise the 3 R’s for modern-day science.

Several lessons have been learned since the implementation of the 3 R’s ethical framework in the mid-20th century. First, the integration of ethical principles into practice requires substantial time: the 3 R’s took over 30 years to take hold within the scientific community.[4] It is therefore unsurprising that the 3 R’s would warrant revision, given their long history. And yet many aspects of the 3 R’s have endured; these aspects are a testament to the original wisdom and utility of the framework and help to account for its worldwide adoption.

A crucial pillar of the 3 R’s is the notion that humane science is necessary for both scientific and ethical reasons. Scientific results obtained from animals deprived of necessities, or from animals experiencing unmitigated pain, stress, or distress, have little to no scientific merit.[11] Scientists must care about animal welfare so that their research can yield meaningful results. Fortunately, most scientists understand that the strength of scientific results is not independent of animal welfare and that good science comes from well-cared-for nonhuman animals.[11,12]

Yet what is meant by humane treatment or well-being of nonhuman animals is less clear. The 3 R’s are premised on a pain-and-stress avoidance model that seeks first to avoid pain and stress by replacing animal models with nonanimal alternatives (an aim that evolved from the original intent of replacing “higher-order” nonhuman animals with “lower-order” ones), then to minimize total pain by reducing the number of animals, and finally to minimize individual pain by refining the pain-inducing procedure. The problem with this utilitarian model is that the consequence of an action cannot be known until the action is taken: although an experiment may yield a very strong positive outcome that could warrant an animal’s subjection to some pain, that outcome cannot be known before the experiment is conducted. Additionally, when applied, the principles can conflict with one another, giving rise to the need to better define each of the 3 R’s.

Although the use of animal models under a 3 R’s ethical framework has yielded substantial scientific progress, there are instances in which animal models have not accurately predicted human responses. Some of these failures indicate that animal model information cannot be relied upon when assessing the toxicity of potential drugs in humans.[13] Furthermore, drugs for some disease types, such as Alzheimer’s, have repeatedly seen successes in animal models and yet failed in human clinical trials.[14] Finally, although the extent of the problem is unknown, there are instances in which a drug could be fatal to nonhuman animals but a major success in humans (eg, aspirin).[13] A retrospective evaluation of the 3 R’s framework suggests that it is insufficient and that a different or modified ethical framework is needed.

Importantly, the particular ethical framework adopted by scientists is only one of the factors that influence their choice of methods. The 3 R’s may predispose a researcher to use a scientifically sound animal alternative, but that choice may be impeded by lack of regulatory approval. The regulatory state is an intermediary between what the scientific literature and ethical analysis support, on the one hand, and what is legally permissible, on the other. Vanda Pharmaceuticals tried to use the former to challenge the latter in filing suit against the US Food and Drug Administration in 2019.[15] The suit was unsuccessful, but it demonstrates a clear instance in which a drug product could not be brought to market without data from nonhuman animal subjects, despite scientific experts determining that animal models were unnecessary.

Finally, although the 3 R’s are codified in the laws of other countries, in the United States they are not explicitly required by law but only incorporated into guidelines by reference, such as through the National Research Council’s Guide for the Care and Use of Laboratory Animals.[16] Some call for the codification of the 3 R’s, while others treat the Guide as synonymous with the law.[17,18] Regardless, it is clear that the 3 R’s are widely used by US-based researchers. We can therefore conclude that ethical mandates need not be enshrined in statutes or regulations but may be captured at a lower level, such as in guidance documents.

Changes are needed not only to the ethical framework but also to the system within which it functions. Training is one of the most important gaps to address. The Animal Welfare Act (AWA) of 1966 requires animal care personnel to be trained in welfare practices.[19] However, Russell and Burch envisioned a far more expansive form of training, which remains an ideal.[1] A 2014 study found that 58% of scientists who had signed up to take a laboratory animal sciences course were not aware of the 3 R’s prior to the course.[20] This survey included career scientists who had been conducting animal research for years. Ethics training for scientists and everyone working with animals in research must occur regularly. Such training should include a survey of normative ethical theories (eg, utilitarianism, deontology, virtue ethics) and cover both how laws and regulations have incorporated some of these ethical approaches and the ethical gaps that remain in the legal framework, such as the failure to require facilities to report all nonhuman animal use numbers so that reduction can be assessed on a larger scale. Most importantly, the training must not be superficial but substantive, employing appropriate pedagogical methods to ensure staff’s engagement in the course, retention of the content, and application of the content in laboratory settings.

But training should not be limited to the 3 R’s directly. Rather, for the ideas of the 3 R’s to be fully accessible, it is necessary to address the knowledge gap between those actively conducting research and those developing innovative technologies. Currently, when a painful procedure will be used, researchers are required to search for an alternative and address why an alternative to nonhuman animal models cannot be used.[21] Yet, in practice, this requirement is met primarily pro forma.[22-25] It can be satisfied by merely checking a box and writing a simple sentence on a form submitted to a local Institutional Animal Care and Use Committee (IACUC), but doing so often does not reflect a concerted effort to identify plausible alternatives. Part of the problem is that the dissemination of information concerning animal alternatives is lacking. The number of alternatives is exploding, but there is no clear pathway for regulatory acceptance of new methods in the United States. This is a major limitation, because new methods will not be widely accepted in science and research without a clear regulatory pathway.

Several ethical frameworks have been proposed to succeed the 3 R’s. Some propose a justice-based model that aims to end nonhuman animal testing completely.[26,27] Some proponents of this model fail to acknowledge that such a transition could not occur overnight, or fail to provide a proposed plan for it.[28] The absence of these two features is a major limitation of these approaches, particularly within the context of a discourse focused on practical application. To remain grounded in the practical, the ethical framework described below focuses on filling a key gap in the 3 R’s model.

Experimental strength has been identified as a key missing component of the 3 R’s model.[10] This criticism stems in part from the shortcomings of the framework’s utilitarian foundations. Interestingly, the IACUC, the body responsible for reviewing nonhuman animal research protocols under the AWA and the Public Health Service Policy, is implicitly instructed to refrain from this type of review.[29] Nevertheless, many may find it difficult in practice to avoid identifying this omission as a weakness of the system.

One proposal to incorporate experimental strength extends the 3 R’s to what has been coined the 3 V’s.[30,31] These additional elements comprise (a) construct validity, (b) internal validity, and (c) external validity. Each element represents a unique aspect that, taken together with the others, provides a better assessment of overall experimental strength. Construct validity refers to the model’s capacity to speak to the scientific objective of interest. Internal validity refers to design rigor (eg, sample size, statistical model, use of control groups). External validity refers to the extent to which the results are widely generalizable or only narrowly applicable. Together, the 3 V’s aim to reduce the occurrence of animal research that provides little to no meaningful information. The 3 V approach is also consistent with the evolution of science since the 3 R’s were first conceived, because it addresses the most pressing problems that confront animal models today.

Science advances when it respects and incorporates ethical principles. The introduction of the 3 R’s marked a fundamental shift in the use of nonhuman animal subjects.[32] However, reviews of the 3 R’s framework over the past several decades indicate room for improvement. For science to truly operate ethically, everyone involved must be taught the principles and express them in their actions, and the principles must be regularly reinforced. Knowing the 3 R’s is only one step, and that alone is insufficient. The 3 R’s can and should catch up with the scientific advances of the past few decades and address the limitations of the framework that have been uncovered. Efforts to develop new replacement models must be given their full chance of success by identifying a clear regulatory approval pathway, and there must be systematic ways to disseminate information about newly available alternatives.

Finally, with all this in mind, a revision of the 3 R’s is warranted. Russell and Burch provided a cornerstone of animal research practice, yet the best models are refined over time as use and experience reveal gaps. The addition of the 3 V’s, which addresses experimental validity, is one possible revision. The moment is ripe to implement changes and strengthen the 3 R’s so that they can continue to be a useful tool for 21st-century science.

AMA J Ethics. 2024;26(9):E724-729.


CME Disclosure Statement: Unless noted, all individuals in control of content reported no relevant financial relationships.

If applicable, all relevant financial relationships have been mitigated.

Conflict of Interest Disclosure: Authors disclosed no conflicts of interest.

The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.

Rebecca Critser, JD, LLM, MA is a postdoctoral fellow at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. She co-chairs the Science and Technology Subcommittee of the American Bar Association Animal Law Committee and co-teaches The Law & Ethics of Animal Testing at Lewis and Clark Law School with Dr Paul Locke; Paul Locke, JD, DrPH is a professor in the Department of Environmental Health and Engineering at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. An environmental health scientist and lawyer, Dr Locke's research, practice, and teaching focus on the intersection of law and science with an emphasis on replacing animals in biomedical research with new, human-relevant, nonanimal (in vitro) methods.

Credit Designation Statement: The American Medical Association designates this Journal-based CME activity activity for a maximum of 1.00  AMA PRA Category 1 Credit (s)™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

Successful completion of this CME activity, which includes participation in the evaluation component, enables the participant to earn up to:

  • 1.00 Medical Knowledge MOC points in the American Board of Internal Medicine's (ABIM) Maintenance of Certification (MOC) program;
  • 1.00 Self-Assessment points in the American Board of Otolaryngology – Head and Neck Surgery’s (ABOHNS) Continuing Certification program;
  • 1.00 MOC points in the American Board of Pediatrics’ (ABP) Maintenance of Certification (MOC) program;
  • 1.00 Lifelong Learning points in the American Board of Pathology’s (ABPath) Continuing Certification program; and
  • 1.00 credit toward the CME of the American Board of Surgery’s Continuous Certification program

It is the CME activity provider's responsibility to submit participant completion information to ACCME for the purpose of granting MOC credit.



National Institute of Environmental Health Sciences

Your Environment. Your Health.

What Is Ethics in Research & Why Is It Important?

by David B. Resnik, J.D., Ph.D.

December 23, 2020

The ideas and opinions expressed in this essay are the author’s own and do not necessarily represent those of the NIH, NIEHS, or US government.


When most people think of ethics (or morals), they think of rules for distinguishing between right and wrong, such as the Golden Rule ("Do unto others as you would have them do unto you"), a code of professional conduct like the Hippocratic Oath ("First of all, do no harm"), a religious creed like the Ten Commandments ("Thou Shalt not kill..."), or wise aphorisms like the sayings of Confucius. This is the most common way of defining "ethics": norms for conduct that distinguish between acceptable and unacceptable behavior.

Most people learn ethical norms at home, at school, in church, or in other social settings. Although most people acquire their sense of right and wrong during childhood, moral development occurs throughout life and human beings pass through different stages of growth as they mature. Ethical norms are so ubiquitous that one might be tempted to regard them as simple common sense. On the other hand, if morality were nothing more than common sense, then why are there so many ethical disputes and issues in our society?


One plausible explanation of these disagreements is that all people recognize some common ethical norms but interpret, apply, and balance them in different ways in light of their own values and life experiences. For example, two people could agree that murder is wrong but disagree about the morality of abortion because they have different understandings of what it means to be a human being.

Most societies also have legal rules that govern behavior, but ethical norms tend to be broader and more informal than laws. Although most societies use laws to enforce widely accepted moral standards, and ethical and legal rules use similar concepts, ethics and law are not the same. An action may be legal but unethical or illegal but ethical. We can also use ethical concepts and principles to criticize, evaluate, propose, or interpret laws. Indeed, in the last century, many social reformers have urged citizens to disobey laws they regarded as immoral or unjust. Peaceful civil disobedience is an ethical way of protesting laws or expressing political viewpoints.

Another way of defining 'ethics' focuses on the disciplines that study standards of conduct, such as philosophy, theology, law, psychology, or sociology. For example, a "medical ethicist" is someone who studies ethical standards in medicine. One may also define ethics as a method, procedure, or perspective for deciding how to act and for analyzing complex problems and issues. For instance, in considering a complex issue like global warming, one may take an economic, ecological, political, or ethical perspective on the problem. While an economist might examine the costs and benefits of various policies related to global warming, an environmental ethicist could examine the ethical values and principles at stake.


Many different disciplines, institutions, and professions have standards for behavior that suit their particular aims and goals. These standards also help members of the discipline to coordinate their actions or activities and to establish the public's trust of the discipline. For instance, ethical standards govern conduct in medicine, law, engineering, and business. Ethical norms also serve the aims or goals of research and apply to people who conduct scientific research or other scholarly or creative activities. There is even a specialized discipline, research ethics, which studies these norms. See Glossary of Commonly Used Terms in Research Ethics and Research Ethics Timeline.

There are several reasons why it is important to adhere to ethical norms in research. First, norms promote the aims of research, such as knowledge, truth, and avoidance of error. For example, prohibitions against fabricating, falsifying, or misrepresenting research data promote the truth and minimize error.


Second, since research often involves a great deal of cooperation and coordination among many different people in different disciplines and institutions, ethical standards promote the values that are essential to collaborative work, such as trust, accountability, mutual respect, and fairness. For example, many ethical norms in research, such as guidelines for authorship, copyright and patenting policies, data sharing policies, and confidentiality rules in peer review, are designed to protect intellectual property interests while encouraging collaboration. Most researchers want to receive credit for their contributions and do not want to have their ideas stolen or disclosed prematurely.

Third, many of the ethical norms help to ensure that researchers can be held accountable to the public. For instance, federal policies on research misconduct, conflicts of interest, human subjects protections, and animal care and use are necessary in order to make sure that researchers who are funded by public money can be held accountable to the public.

Fourth, ethical norms in research also help to build public support for research. People are more likely to fund a research project if they can trust the quality and integrity of research.

Finally, many of the norms of research promote a variety of other important moral and social values, such as social responsibility, human rights, animal welfare, compliance with the law, and public health and safety. Ethical lapses in research can significantly harm human and animal subjects, students, and the public. For example, a researcher who fabricates data in a clinical trial may harm or even kill patients, and a researcher who fails to abide by regulations and guidelines relating to radiation or biological safety may jeopardize his or her health and safety or the health and safety of staff and students.

Codes and Policies for Research Ethics

Given the importance of ethics for the conduct of research, it should come as no surprise that many different professional associations, government agencies, and universities have adopted specific codes, rules, and policies relating to research ethics. Many government agencies have ethics rules for funded researchers.

  • National Institutes of Health (NIH)
  • National Science Foundation (NSF)
  • Food and Drug Administration (FDA)
  • Environmental Protection Agency (EPA)
  • US Department of Agriculture (USDA)
  • Singapore Statement on Research Integrity
  • American Chemical Society, The Chemist Professional’s Code of Conduct
  • Code of Ethics (American Society for Clinical Laboratory Science)
  • American Psychological Association, Ethical Principles of Psychologists and Code of Conduct
  • Statement on Professional Ethics (American Association of University Professors)
  • Nuremberg Code
  • World Medical Association's Declaration of Helsinki

Ethical Principles

The following is a rough and general summary of some ethical principles that various codes address*:

Honesty

Strive for honesty in all scientific communications. Honestly report data, results, methods and procedures, and publication status. Do not fabricate, falsify, or misrepresent data. Do not deceive colleagues, research sponsors, or the public.


Objectivity

Strive to avoid bias in experimental design, data analysis, data interpretation, peer review, personnel decisions, grant writing, expert testimony, and other aspects of research where objectivity is expected or required. Avoid or minimize bias or self-deception. Disclose personal or financial interests that may affect research.

Integrity

Keep your promises and agreements; act with sincerity; strive for consistency of thought and action.


Carefulness

Avoid careless errors and negligence; carefully and critically examine your own work and the work of your peers. Keep good records of research activities, such as data collection, research design, and correspondence with agencies or journals.

Openness

Share data, results, ideas, tools, resources. Be open to criticism and new ideas.


Transparency

Disclose methods, materials, assumptions, analyses, and other information needed to evaluate your research.


Accountability

Take responsibility for your part in research and be prepared to give an account (i.e. an explanation or justification) of what you did on a research project and why.


Intellectual Property

Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data, methods, or results without permission. Give proper acknowledgement or credit for all contributions to research. Never plagiarize.


Confidentiality

Protect confidential communications, such as papers or grants submitted for publication, personnel records, trade or military secrets, and patient records.


Responsible Publication

Publish in order to advance research and scholarship, not to advance just your own career. Avoid wasteful and duplicative publication.


Responsible Mentoring

Help to educate, mentor, and advise students. Promote their welfare and allow them to make their own decisions.


Respect for Colleagues

Respect your colleagues and treat them fairly.


Social Responsibility

Strive to promote social good and prevent or mitigate social harms through research, public education, and advocacy.


Non-Discrimination

Avoid discrimination against colleagues or students on the basis of sex, race, ethnicity, or other factors not related to scientific competence and integrity.

Competence

Maintain and improve your own professional competence and expertise through lifelong education and learning; take steps to promote competence in science as a whole.

Legality

Know and obey relevant laws and institutional and governmental policies.


Animal Care

Show proper respect and care for animals when using them in research. Do not conduct unnecessary or poorly designed animal experiments.


Human Subjects Protection

When conducting research on human subjects, minimize harms and risks and maximize benefits; respect human dignity, privacy, and autonomy; take special precautions with vulnerable populations; and strive to distribute the benefits and burdens of research fairly.

* Adapted from Shamoo A and Resnik D. 2015. Responsible Conduct of Research, 3rd ed. (New York: Oxford University Press).

Ethical Decision Making in Research

Although codes, policies, and principles are very important and useful, like any set of rules, they do not cover every situation, they often conflict, and they require interpretation. It is therefore important for researchers to learn how to interpret, assess, and apply various research rules and how to make decisions and act ethically in various situations. The vast majority of decisions involve the straightforward application of ethical rules. For example, consider the following case:

The research protocol for a study of a drug on hypertension requires the administration of the drug at different doses to 50 laboratory mice, with chemical and behavioral tests to determine toxic effects. Tom has almost finished the experiment for Dr. Q. He has only 5 mice left to test. However, he really wants to finish his work in time to go to Florida on spring break with his friends, who are leaving tonight. He has injected the drug in all 50 mice but has not completed all of the tests. He therefore decides to extrapolate from the 45 completed results to produce the 5 additional results.

Many different research ethics policies would hold that Tom has acted unethically by fabricating data. If this study were sponsored by a federal agency, such as the NIH, his actions would constitute a form of research misconduct , which the government defines as "fabrication, falsification, or plagiarism" (or FFP). Actions that nearly all researchers classify as unethical are viewed as misconduct. It is important to remember, however, that misconduct occurs only when researchers intend to deceive : honest errors related to sloppiness, poor record keeping, miscalculations, bias, self-deception, and even negligence do not constitute misconduct. Also, reasonable disagreements about research methods, procedures, and interpretations do not constitute research misconduct. Consider the following case:

Dr. T has just discovered a mathematical error in his paper that has been accepted for publication in a journal. The error does not affect the overall results of his research, but it is potentially misleading. The journal has just gone to press, so it is too late to catch the error before it appears in print. In order to avoid embarrassment, Dr. T decides to ignore the error.

Dr. T's error is not misconduct nor is his decision to take no action to correct the error. Most researchers, as well as many different policies and codes would say that Dr. T should tell the journal (and any coauthors) about the error and consider publishing a correction or errata. Failing to publish a correction would be unethical because it would violate norms relating to honesty and objectivity in research.

There are many other activities that the government does not define as "misconduct" but which are still regarded by most researchers as unethical. These are sometimes referred to as " other deviations " from acceptable research practices and include:

  • Publishing the same paper in two different journals without telling the editors
  • Submitting the same paper to different journals without telling the editors
  • Not informing a collaborator of your intent to file a patent in order to make sure that you are the sole inventor
  • Including a colleague as an author on a paper in return for a favor even though the colleague did not make a serious contribution to the paper
  • Discussing with your colleagues confidential data from a paper that you are reviewing for a journal
  • Using data, ideas, or methods you learn about while reviewing a grant or a paper without permission
  • Trimming outliers from a data set without discussing your reasons in the paper
  • Using an inappropriate statistical technique in order to enhance the significance of your research
  • Bypassing the peer review process and announcing your results through a press conference without giving peers adequate information to review your work
  • Conducting a review of the literature that fails to acknowledge the contributions of other people in the field or relevant prior work
  • Stretching the truth on a grant application in order to convince reviewers that your project will make a significant contribution to the field
  • Stretching the truth on a job application or curriculum vita
  • Giving the same research project to two graduate students in order to see who can do it the fastest
  • Overworking, neglecting, or exploiting graduate or post-doctoral students
  • Failing to keep good research records
  • Failing to maintain research data for a reasonable period of time
  • Making derogatory comments and personal attacks in your review of an author's submission
  • Promising a student a better grade for sexual favors
  • Using a racist epithet in the laboratory
  • Making significant deviations from the research protocol approved by your institution's Animal Care and Use Committee or Institutional Review Board for Human Subjects Research without telling the committee or the board
  • Not reporting an adverse event in a human research experiment
  • Wasting animals in research
  • Exposing students and staff to biological risks in violation of your institution's biosafety rules
  • Sabotaging someone's work
  • Stealing supplies, books, or data
  • Rigging an experiment so you know how it will turn out
  • Making unauthorized copies of data, papers, or computer programs
  • Owning over $10,000 in stock in a company that sponsors your research and not disclosing this financial interest
  • Deliberately overestimating the clinical significance of a new drug in order to obtain economic benefits

These actions would be regarded as unethical by most scientists and some might even be illegal in some cases. Most of these would also violate different professional ethics codes or institutional policies. However, they do not fall into the narrow category of actions that the government classifies as research misconduct. Indeed, there has been considerable debate about the definition of "research misconduct" and many researchers and policy makers are not satisfied with the government's narrow definition that focuses on FFP. However, given the huge list of potential offenses that might fall into the category "other serious deviations," and the practical problems with defining and policing these other deviations, it is understandable why government officials have chosen to limit their focus.

Finally, situations frequently arise in research in which different people disagree about the proper course of action and there is no broad consensus about what should be done. In these situations, there may be good arguments on both sides of the issue and different ethical principles may conflict. These situations create difficult decisions for researchers, known as ethical or moral dilemmas. Consider the following case:

Dr. Wexford is the principal investigator of a large, epidemiological study on the health of 10,000 agricultural workers. She has an impressive dataset that includes information on demographics, environmental exposures, diet, genetics, and various disease outcomes such as cancer, Parkinson’s disease (PD), and ALS. She has just published a paper on the relationship between pesticide exposure and PD in a prestigious journal. She is planning to publish many other papers from her dataset. She receives a request from another research team that wants access to her complete dataset. They are interested in examining the relationship between pesticide exposures and skin cancer. Dr. Wexford was planning to conduct a study on this topic.

Dr. Wexford faces a difficult choice. On the one hand, the ethical norm of openness obliges her to share data with the other research team. Her funding agency may also have rules that obligate her to share data. On the other hand, if she shares data with the other team, they may publish results that she was planning to publish, thus depriving her (and her team) of recognition and priority. It seems that there are good arguments on both sides of this issue and Dr. Wexford needs to take some time to think about what she should do. One possible option is to share data, provided that the investigators sign a data use agreement. The agreement could define allowable uses of the data, publication plans, authorship, etc. Another option would be to offer to collaborate with the researchers.

The following are some steps that researchers, such as Dr. Wexford, can take to deal with ethical dilemmas in research:

What is the problem or issue?

It is always important to get a clear statement of the problem. In this case, the issue is whether to share information with the other research team.

What is the relevant information?

Many bad decisions are made as a result of poor information. To know what to do, Dr. Wexford needs to have more information concerning such matters as university or funding agency or journal policies that may apply to this situation, the team's intellectual property interests, the possibility of negotiating some kind of agreement with the other team, whether the other team also has some information it is willing to share, the impact of the potential publications, etc.

What are the different options?

People may fail to see different options due to a limited imagination, bias, ignorance, or fear. In this case, there may be other choices besides 'share' or 'don't share,' such as 'negotiate an agreement' or 'offer to collaborate with the researchers.'

How do ethical codes or policies as well as legal rules apply to these different options?

The university or funding agency may have policies on data management that apply to this case. Broader ethical rules, such as openness and respect for credit and intellectual property, may also apply to this case. Laws relating to intellectual property may be relevant.

Are there any people who can offer ethical advice?

It may be useful to seek advice from a colleague, a senior researcher, your department chair, an ethics or compliance officer, or anyone else you can trust. In this case, Dr. Wexford might want to talk to her supervisor and research team before making a decision.

After considering these questions, a person facing an ethical dilemma may decide to ask more questions, gather more information, explore different options, or consider other ethical rules. However, at some point he or she will have to make a decision and then take action. Ideally, a person who makes a decision in an ethical dilemma should be able to justify his or her decision to himself or herself, as well as colleagues, administrators, and other people who might be affected by the decision. He or she should be able to articulate reasons for his or her conduct and should consider the following questions in order to explain how he or she arrived at his or her decision:

  • Which choice will probably have the best overall consequences for science and society?
  • Which choice could stand up to further publicity and scrutiny?
  • Which choice could you not live with?
  • Think of the wisest person you know. What would he or she do in this situation?
  • Which choice would be the most just, fair, or responsible?

After considering all of these questions, one still might find it difficult to decide what to do. If this is the case, then it may be appropriate to consider other ways of making the decision, such as going with a gut feeling or intuition, seeking guidance through prayer or meditation, or even flipping a coin. Endorsing these methods in this context need not imply that ethical decisions are irrational, however. The main point is that human reasoning plays a pivotal role in ethical decision-making but there are limits to its ability to solve all ethical dilemmas in a finite amount of time.

Promoting Ethical Conduct in Science


Most academic institutions in the US require undergraduate, graduate, or postgraduate students to have some education in the responsible conduct of research (RCR). The NIH and NSF have both mandated training in research ethics for students and trainees. Many academic institutions outside of the US have also developed educational curricula in research ethics.

Those of you who are taking or have taken courses in research ethics may be wondering why you are required to have education in research ethics. You may believe that you are highly ethical and know the difference between right and wrong. You would never fabricate or falsify data or plagiarize. Indeed, you also may believe that most of your colleagues are highly ethical and that there is no ethics problem in research.

If you feel this way, relax. No one is accusing you of acting unethically. Indeed, the evidence produced so far shows that misconduct is a very rare occurrence in research, although there is considerable variation among various estimates. The rate of misconduct has been estimated to be as low as 0.01% of researchers per year (based on confirmed cases of misconduct in federally funded research) to as high as 1% of researchers per year (based on self-reports of misconduct on anonymous surveys). See Shamoo and Resnik (2015), cited above.

Clearly, it would be useful to have more data on this topic, but so far there is no evidence that science has become ethically corrupt, despite some highly publicized scandals. Even if misconduct is only a rare occurrence, it can still have a tremendous impact on science and society because it can compromise the integrity of research, erode the public’s trust in science, and waste time and resources. Will education in research ethics help reduce the rate of misconduct in science? It is too early to tell. The answer to this question depends, in part, on how one understands the causes of misconduct. There are two main theories about why researchers commit misconduct. According to the "bad apple" theory, most scientists are highly ethical. Only researchers who are morally corrupt, economically desperate, or psychologically disturbed commit misconduct. Moreover, only a fool would commit misconduct because science's peer review system and self-correcting mechanisms will eventually catch those who try to cheat the system. In any case, a course in research ethics will have little impact on "bad apples," one might argue.

According to the "stressful" or "imperfect" environment theory, misconduct occurs because various institutional pressures, incentives, and constraints encourage people to commit misconduct, such as pressures to publish or obtain grants or contracts, career ambitions, the pursuit of profit or fame, poor supervision of students and trainees, and poor oversight of researchers (see Shamoo and Resnik 2015). Moreover, defenders of the stressful environment theory point out that science's peer review system is far from perfect and that it is relatively easy to cheat the system. Erroneous or fraudulent research often enters the public record without being detected for years. Misconduct probably results from environmental and individual causes, i.e. when people who are morally weak, ignorant, or insensitive are placed in stressful or imperfect environments.

In any case, a course in research ethics can be useful in helping to prevent deviations from norms even if it does not prevent misconduct. Education in research ethics can help people get a better understanding of ethical standards, policies, and issues and improve ethical judgment and decision making. Many of the deviations that occur in research may occur because researchers simply do not know or have never thought seriously about some of the ethical norms of research. For example, some unethical authorship practices probably reflect traditions and practices that have not been questioned seriously until recently. If the director of a lab is named as an author on every paper that comes from his lab, even if he does not make a significant contribution, what could be wrong with that? That's just the way it's done, one might argue. Another example where there may be some ignorance or mistaken traditions is conflicts of interest in research.
A researcher may think that a "normal" or "traditional" financial relationship, such as accepting stock or a consulting fee from a drug company that sponsors her research, raises no serious ethical issues. Or perhaps a university administrator sees no ethical problem in taking a large gift with strings attached from a pharmaceutical company. Maybe a physician thinks that it is perfectly appropriate to receive a $300 finder’s fee for referring patients into a clinical trial.

If "deviations" from ethical conduct occur in research as a result of ignorance or a failure to reflect critically on problematic traditions, then a course in research ethics may help reduce the rate of serious deviations by improving the researcher's understanding of ethics and by sensitizing him or her to the issues.

Finally, education in research ethics should be able to help researchers grapple with the ethical dilemmas they are likely to encounter by introducing them to important concepts, tools, principles, and methods that can be useful in resolving these dilemmas. Scientists must deal with a number of different controversial topics, such as human embryonic stem cell research, cloning, genetic engineering, and research involving animal or human subjects, which require ethical reflection and deliberation.

Learning Goals

  • Learning how to conduct ethical research.

Exploring Experimental Psychology

Research Ethics

4.1 Moral Foundations of Research

Ethics is the branch of philosophy that is concerned with morality—what it means to behave morally and how people can achieve that goal. It can also refer to a set of principles and practices that provide moral guidance in a particular field. There is an ethics of business, medicine, teaching, and of course, scientific research. Many kinds of ethical issues can arise in scientific research, especially when it involves human participants. For this reason, it is useful to begin with a general framework for thinking through these issues.

Weighing Risks Against Benefits

Scientific research in psychology can be ethical only if its risks are outweighed by its benefits. Among the risks to research participants are that a treatment might fail to help or even be harmful, a procedure might result in physical or psychological harm, and their right to privacy might be violated. Among the potential benefits are receiving a helpful treatment, learning about psychology, experiencing the satisfaction of contributing to scientific knowledge, and receiving money or course credit for participating. Scientific research can have risks and benefits to the scientific community and to society too (Rosenthal, 1994). A risk to science is that if a research question is uninteresting or a study is poorly designed, then the time, money, and effort spent on that research could have been spent on more productive research. A risk to society is that research results could be misunderstood or misapplied with harmful consequences. The research that mistakenly linked the measles, mumps, and rubella (MMR) vaccine to autism resulted in both of these kinds of harm. Of course, the benefits of scientific research to science and society are that it advances scientific knowledge and can contribute to the welfare of society.

It is not necessarily easy to weigh the risks of research against its benefits because the risks and benefits may not be directly comparable. For example, it is common for the risks of a study to be primarily to the research participants but the benefits primarily for science or society. Consider, for example, Stanley Milgram’s original study on obedience to authority (Milgram, 1963). The participants were told that they were taking part in a study on the effects of punishment on learning and were instructed to give electric shocks to another participant each time that participant responded incorrectly on a learning task. With each incorrect response, the shock became stronger—eventually causing the other participant (who was in the next room) to protest, complain about his heart, scream in pain, and finally fall silent and stop responding. If the first participant hesitated or expressed concern, the researcher said that he must continue. In reality, the other participant was a confederate of the researcher—a helper who pretended to be a real participant—and the protests, complaints, and screams that the real participant heard were an audio recording that was activated when he flipped the switch to administer the “shocks.” The surprising result of this study was that most of the real participants continued to administer the shocks right through the confederate’s protests, complaints, and screams. Although this is considered one of the most important results in psychology—with implications for understanding events like the Holocaust or the mistreatment of prisoners by US soldiers at Abu Ghraib—it came at the cost of producing severe psychological stress in the research participants.

 Was It Worth It?

Much of the debate over the ethics of Milgram’s obedience study concerns the question of whether the resulting scientific knowledge was worth the harm caused to the research participants. To get a better sense of the harm, consider Milgram’s (1963) own description of it.

 In a large number of cases, the degree of tension reached extremes that are rarely seen in sociopsychological laboratory studies. Subjects were observed to sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh.…Fourteen of the 40 subjects showed definite signs of nervous laughter and smiling. The laughter seemed entirely out of place, even bizarre. Full blown uncontrollable seizures [of laughter] were observed for three subjects. On one occasion we observed a seizure so violently convulsive that it was necessary to call a halt to the experiment (p. 375).

 Milgram also noted that another observer reported that within 20 minutes one participant “was reduced to a twitching, stuttering wreck, who was rapidly approaching the point of nervous collapse” (p. 377).

To Milgram’s credit, he went to great lengths to debrief his participants—including returning their mental states to normal—and to show that most of them thought the research was valuable and were glad to have participated. Still, this research would be considered unethical by today’s standards.

Acting Responsibly and With Integrity

Researchers must act responsibly and with integrity. This means carrying out their research in a thorough and competent manner, meeting their professional obligations, and being truthful. Acting with integrity is important because it promotes trust, which is an essential element of all effective human relationships. Participants must be able to trust that researchers are being honest with them (e.g., about what the study involves), will keep their promises (e.g., to maintain confidentiality), and will carry out their research in ways that maximize benefits and minimize risk. An important issue here is the use of deception. Some research questions (such as Milgram’s) are difficult or impossible to answer without deceiving research participants. Thus acting with integrity can conflict with doing research that advances scientific knowledge and benefits society. We will consider how psychologists generally deal with this conflict shortly.

The scientific community and society must also be able to trust that researchers have conducted their research thoroughly and competently and that they have reported on it honestly. Again, the example at the beginning of the chapter illustrates what can happen when this trust is violated. In this case, other researchers wasted resources on unnecessary follow-up research and people avoided the MMR vaccine, putting their children at increased risk of measles, mumps, and rubella.

Seeking Justice

Researchers must conduct their research in a just manner. They should treat their participants fairly, for example, by giving them adequate compensation for their participation and making sure that benefits and risks are distributed across all participants. For example, in a study of a new and potentially beneficial psychotherapy, some participants might receive the psychotherapy while others serve as a control group that receives no treatment. If the psychotherapy turns out to be effective, it would be fair to offer it to participants in the control group when the study ends.

At a broader societal level, members of some groups have historically faced more than their fair share of the risks of scientific research, including people who are institutionalized, are disabled, or belong to racial or ethnic minorities. A particularly tragic example is the Tuskegee syphilis study conducted by the US Public Health Service from 1932 to 1972 (Reverby, 2009). The participants in this study were poor African American men in the vicinity of Tuskegee, Alabama, who were told that they were being treated for “bad blood.” Although they were given some free medical care, they were not treated for their syphilis. Instead, they were observed to see how the disease developed in untreated patients. Even after the use of penicillin became the standard treatment for syphilis in the 1940s, these men continued to be denied treatment without being given an opportunity to leave the study. The study was eventually discontinued only after details were made known to the general public by journalists and activists. It is now widely recognized that researchers need to consider issues of justice and fairness at the societal level.

 “They Were Betrayed”

In 1997—65 years after the Tuskegee Syphilis Study began and 25 years after it ended—President Bill Clinton formally apologized on behalf of the US government to those who were affected. Here is an excerpt from the apology:

 So today America does remember the hundreds of men used in research without their knowledge and consent. We remember them and their family members. Men who were poor and African American, without resources and with few alternatives, they believed they had found hope when they were offered free medical care by the United States Public Health Service. They were betrayed.

Respecting People’s Rights and Dignity

Researchers must respect people’s rights and dignity as human beings. One element of this is respecting their autonomy—their right to make their own choices and take their own actions free from coercion. Of fundamental importance here is the concept of informed consent. This means that researchers obtain and document people’s agreement to participate in a study after having informed them of everything that might reasonably be expected to affect their decision. Consider the participants in the Tuskegee study. Although they agreed to participate in the study, they were not told that they had syphilis but would be denied treatment for it. Had they been told this basic fact about the study, it seems likely that they would not have agreed to participate. Likewise, had participants in Milgram’s study been told that they might be “reduced to a twitching, stuttering wreck,” it seems likely that many of them would not have agreed to participate. In neither of these studies did participants give true informed consent.

Another element of respecting people’s rights and dignity is respecting their privacy—their right to decide what information about them is shared with others. This means that researchers must maintain confidentiality, which is essentially an agreement not to disclose participants’ personal information without their consent or some appropriate legal authorization.

 Unavoidable Ethical Conflict

It may already be clear that ethical conflict in psychological research is unavoidable. Because there is little, if any, psychological research that is completely risk free, there will almost always be conflict between risks and benefits. Research that is beneficial to one group (e.g., the scientific community) can be harmful to another (e.g., the research participants), creating especially difficult trade-offs. We have also seen that being completely truthful with research participants can make it difficult or impossible to conduct scientifically valid studies on important questions.

Of course, many ethical conflicts are fairly easy to resolve. Nearly everyone would agree that deceiving research participants and then subjecting them to physical harm would not be justified by filling a small gap in the research literature. But many ethical conflicts are not easy to resolve, and competent and well-meaning researchers can disagree about how to resolve them. Consider, for example, an actual study on “personal space” conducted in a public men’s room (Middlemist, Knowles, & Matter, 1976). The researchers secretly observed their participants to see whether it took them longer to begin urinating when there was another man (a confederate of the researchers) at a nearby urinal. While some critics found this to be an unjustified assault on human dignity (Koocher, 1977), the researchers had carefully considered the ethical conflicts, resolved them as best they could, and concluded that the benefits of the research outweighed the risks (Middlemist, Knowles, & Matter, 1977). For example, they had interviewed some preliminary participants and found that none of them was bothered by the fact that they had been observed.

The point here is that although it may not be possible to eliminate ethical conflict completely, it is possible to deal with it in responsible and constructive ways. In general, this means thoroughly and carefully thinking through the ethical issues that are raised, minimizing the risks, and weighing the risks against the benefits. It also means being able to explain one’s ethical decisions to others, seeking feedback on them, and ultimately taking responsibility for them.

Key Takeaways

·         A wide variety of ethical issues arise in psychological research. Thinking them through requires considering how each of four moral principles (weighing risks against benefits, acting responsibly and with integrity, seeking justice, and respecting people’s rights and dignity) applies to each of three groups of people (research participants, science, and society).

·         Ethical conflict in psychological research is unavoidable. Researchers must think through the ethical issues raised by their research, minimize the risks, weigh the risks against the benefits, be able to explain their ethical decisions, seek feedback about these decisions from others, and ultimately take responsibility for them.

4.2  From Moral Principles to Ethics Codes

The general moral principles of weighing risks against benefits, acting with integrity, seeking justice, and respecting people’s rights and dignity provide a useful starting point for thinking about the ethics of psychological research because essentially everyone agrees on them. As we have seen, however, even people who agree on these general principles can disagree about specific ethical issues that arise in the course of conducting research. This is why there also exist more detailed and enforceable ethics codes that provide guidance on important issues that arise frequently. In this section, we begin with a brief historical overview of such ethics codes and then look closely at the one that is most relevant to psychological research—that of the American Psychological Association (APA).

Historical Overview

One of the earliest ethics codes was the Nuremberg Code—a set of 10 principles written in 1947 in conjunction with the trials of Nazi physicians accused of shockingly cruel research on concentration camp prisoners during World War II. It provided a standard against which to compare the behavior of the men on trial—many of whom were eventually convicted and either imprisoned or sentenced to death. The Nuremberg Code was particularly clear about the importance of carefully weighing risks against benefits and the need for informed consent. The Declaration of Helsinki is a similar ethics code that was created by the World Medical Association in 1964. Among the standards that it added to the Nuremberg Code was that research with human participants should be based on a written protocol—a detailed description of the research—that is reviewed by an independent committee. The Declaration of Helsinki has been revised several times, most recently in 2004.

In the United States, concerns about the Tuskegee study and others led to the publication in 1978 of a set of federal guidelines called the Belmont Report. The Belmont Report explicitly recognized the principle of seeking justice, including the importance of conducting research in a way that distributes risks and benefits fairly across different groups at the societal level. The Belmont Report became the basis of a set of laws—the Federal Policy for the Protection of Human Subjects—that apply to research conducted, supported, or regulated by the federal government. An extremely important part of these regulations is that universities, hospitals, and other institutions that receive support from the federal government must establish an institutional review board (IRB)—a committee that is responsible for reviewing research protocols for potential ethical problems. An IRB must consist of at least five people with varying backgrounds, including members of different professions, scientists and nonscientists, men and women, and at least one person not otherwise affiliated with the institution. The IRB helps to make sure that the risks of the proposed research are minimized, the benefits outweigh the risks, the research is carried out in a fair manner, and the informed consent procedure is adequate.

The federal regulations also distinguish three levels of research risk. Exempt research includes research on the effectiveness of normal educational activities, the use of standard psychological measures and surveys of a nonsensitive nature that are administered in a way that maintains confidentiality, and research using existing data from public sources. It is called exempt because the regulations do not apply to it. Minimal risk research exposes participants to risks that are no greater than those encountered by healthy people in daily life or during routine physical or psychological examinations. Minimal risk research can receive an expedited review by one member of the IRB or by a separate committee under the authority of the IRB that can only approve minimal risk research. (Many departments of psychology have such separate committees.) Finally, at-risk research poses greater than minimal risk and must be reviewed by the IRB.

Ethics Codes

The ethics codes discussed in this section are available in their entirety from the Office of Human Subjects Research at the National Institutes of Health. They are all highly recommended and, with the exception of the Federal Policy, short and easy to read.

 ·         The Nuremberg Code

·         The Declaration of Helsinki

·         The Belmont Report

·         Federal Policy for the Protection of Human Subjects


APA Ethics Code

The APA’s Ethical Principles of Psychologists and Code of Conduct (also known as the APA Ethics Code) was first published in 1953 and has been revised several times since then, most recently in 2002. It includes about 150 specific ethical standards that psychologists and their students are expected to follow. Much of the APA Ethics Code concerns the clinical practice of psychology—advertising one’s services, setting and collecting fees, having personal relationships with clients, and so on. For our purposes, the most relevant part is Standard 8: Research and Publication. Here we consider only some of its most important aspects—informed consent, deception, debriefing, the use of nonhuman animal subjects, and scholarly integrity—in more detail. You can read the full APA Ethics Code at http://www.apa.org/ethics/code/index.aspx.

Informed Consent

Standards 8.02 to 8.05 are about informed consent. Again, informed consent means obtaining and documenting people’s agreement to participate in a study, having informed them of everything that might reasonably be expected to affect their decision. This includes details of the procedure, the risks and benefits of the research, the fact that they have the right to decline to participate or to withdraw from the study, the consequences of doing so, and any legal limits to confidentiality. For example, some states require researchers who learn of child abuse or other crimes to report this information to authorities.

Although the process of obtaining informed consent often involves having participants read and sign a consent form, it is important to understand that this is not all it is. Although having participants read and sign a consent form might be enough when they are competent adults with the necessary ability and motivation, many participants do not actually read consent forms or read them but do not understand them. For example, participants often mistake consent forms for legal documents and mistakenly believe that by signing them they give up their right to sue the researcher (Mann, 1994). Even with competent adults, therefore, it is good practice to tell participants about the risks and benefits, demonstrate the procedure, ask them if they have questions, and remind them of their right to withdraw at any time—in addition to having them read and sign a consent form.

Note also that there are situations in which informed consent is not necessary. These include situations in which the research is not expected to cause any harm and the procedure is straightforward or the study is conducted in the context of people’s ordinary activities. For example, if you wanted to sit outside a public building and observe whether people hold the door open for people behind them, you would not need to obtain their informed consent. Similarly, if a college instructor wanted to compare two legitimate teaching methods across two sections of his research methods course, he would not need to obtain informed consent from his students.

Deception

Deception of participants in psychological research can take a variety of forms: misinforming participants about the purpose of a study, using confederates, using phony equipment like Milgram’s shock generator, and presenting participants with false feedback about their performance (e.g., telling them they did poorly on a test when they actually did well). Deception also includes not informing participants of the full design or true purpose of the research even if they are not actively misinformed (Sieber, Iannuzzo, & Rodriguez, 1995). For example, a study on incidental learning—learning without conscious effort—might involve having participants read through a list of words in preparation for a “memory test” later. Although participants are likely to assume that the memory test will require them to recall the words, it might instead require them to recall the contents of the room or the appearance of the research assistant.

Some researchers have argued that deception of research participants is rarely if ever ethically justified. Among their arguments are that it prevents participants from giving truly informed consent, fails to respect their dignity as human beings, has the potential to upset them, makes them distrustful and therefore less honest in their responding, and damages the reputation of researchers in the field (Baumrind, 1985).

Note, however, that the APA Ethics Code takes a more moderate approach—allowing deception when the benefits of the study outweigh the risks, participants cannot reasonably be expected to be harmed, the research question cannot be answered without the use of deception, and participants are informed about the deception as soon as possible. This approach acknowledges that not all forms of deception are equally bad. Compare, for example, Milgram’s study in which he deceived his participants in several significant ways that resulted in their experiencing severe psychological stress with an incidental learning study in which a “memory test” turns out to be slightly different from what participants were expecting. It also acknowledges that some scientifically and socially important research questions can be difficult or impossible to answer without deceiving participants. Knowing that a study concerns the extent to which they obey authority, act aggressively toward a peer, or help a stranger is likely to change the way people behave so that the results no longer generalize to the real world.

Debriefing

Standard 8.08 is about debriefing. This is the process of informing research participants as soon as possible of the purpose of the study, revealing any deception, and correcting any other misconceptions they might have as a result of participating. Debriefing also involves minimizing harm that might have occurred. For example, an experiment on the effects of being in a sad mood on memory might involve inducing a sad mood in participants by having them think sad thoughts, watch a sad video, or listen to sad music. Debriefing would be the time to return participants’ moods to normal by having them think happy thoughts, watch a happy video, or listen to happy music.

Nonhuman Animal Subjects

Standard 8.09 is about the humane treatment and care of nonhuman animal subjects. Although most contemporary research in psychology does not involve nonhuman animal subjects, a significant minority of it does—especially in the study of learning and conditioning, behavioral neuroscience, and the development of drug and surgical therapies for psychological disorders.

The use of nonhuman animal subjects in psychological research is like the use of deception in that there are those who argue that it is rarely, if ever, ethically acceptable (Bowd & Shapiro, 1993). Clearly, nonhuman animals are incapable of giving informed consent. Yet they can be subjected to numerous procedures that are likely to cause them suffering. They can be confined, deprived of food and water, subjected to pain, operated on, and ultimately euthanized. (Of course, they can also be observed benignly in natural or zoolike settings.) Others point out that psychological research on nonhuman animals has resulted in many important benefits to humans, including the development of behavioral therapies for many disorders, more effective pain control methods, and antipsychotic drugs (Miller, 1985). It has also resulted in benefits to nonhuman animals, including alternatives to shooting and poisoning as means of controlling them.

As with deception, the APA acknowledges that the benefits of research on nonhuman animals can outweigh the costs, in which case it is ethically acceptable. However, researchers must use alternative methods when they can. When they cannot, they must acquire and care for their subjects humanely and minimize the harm to them. For more information on the APA’s position on nonhuman animal subjects, see the website of the APA’s Committee on Animal Research and Ethics ( http://www.apa.org/science/leadership/care/index.aspx ).

Scholarly Integrity

Standards 8.10 to 8.15 are about scholarly integrity. These include the obvious points that researchers must not fabricate data or plagiarize. Plagiarism means using others’ words or ideas without proper acknowledgment. Proper acknowledgment generally means indicating direct quotations with quotation marks and providing a citation to the source of any quotation or idea used.

According to the APA Ethics Code, faculty advisers should discuss publication credit—who will be an author and the order of authors—with their student collaborators as early as possible in the research process.

The remaining standards make some less obvious but equally important points. Researchers should not publish the same data a second time as though it were new, they should share their data with other researchers, and as peer reviewers they should keep the unpublished research they review confidential. Note that the authors’ names on published research—and the order in which those names appear—should reflect the importance of each person’s contribution to the research. It would be unethical, for example, to include as an author someone who had made only minor contributions to the research (e.g., analyzing some of the data) or for a faculty member to make himself or herself the first author on research that was largely conducted by a student.

Key Takeaways

·         There are several written ethics codes for research with human participants that provide specific guidance on the ethical issues that arise most frequently. These codes include the Nuremberg Code, the Declaration of Helsinki, the Belmont Report, and the Federal Policy for the Protection of Human Subjects.

·         The APA Ethics Code is the most important ethics code for researchers in psychology. It includes many standards that are relevant mainly to clinical practice, but Standard 8 concerns informed consent, deception, debriefing, the use of nonhuman animal subjects, and scholarly integrity in research.

·         Research conducted at universities, hospitals, and other institutions that receive support from the federal government must be reviewed by an institutional review board (IRB)—a committee at the institution that reviews research protocols to make sure they conform to ethical standards.

·         Informed consent is the process of obtaining and documenting people’s agreement to participate in a study, having informed them of everything that might reasonably be expected to affect their decision. Although it often involves having them read and sign a consent form, it is not equivalent to reading and signing a consent form.

·         Although some researchers argue that deception of research participants is never ethically justified, the APA Ethics Code allows for its use when the benefits of using it outweigh the risks, participants cannot reasonably be expected to be harmed, there is no way to conduct the study without deception, and participants are informed of the deception as soon as possible.

4.3  Putting Ethics Into Practice

In this section, we look at some practical advice for conducting ethical research in psychology. Again, it is important to remember that ethical issues arise well before you begin to collect data and continue to arise through publication and beyond.

Know and Accept Your Ethical Responsibilities

As the American Psychological Association (APA) Ethics Code notes in its introduction, “Lack of awareness or misunderstanding of an ethical standard is not itself a defense to a charge of unethical conduct.” This is why the very first thing that you must do as a new researcher is know and accept your ethical responsibilities. At a minimum, this means reading and understanding the relevant standards of the APA Ethics Code, distinguishing minimal risk from at-risk research, and knowing the specific policies and procedures of your institution—including how to prepare and submit a research protocol for institutional review board (IRB) review. If you are conducting research as a course requirement, there may be specific course standards, policies, and procedures. If any standard, policy, or procedure is unclear—or you are unsure what to do about an ethical issue that arises—you must seek clarification. You can do this by reviewing the relevant ethics codes, reading about how similar issues have been resolved by others, or consulting with more experienced researchers, your IRB, or your course instructor. Ultimately, you as the researcher must take responsibility for the ethics of the research you conduct.

Identify and Minimize Risks

As you design your study, you must identify and minimize risks to participants. Start by listing all the risks, including risks of physical and psychological harm and violations of confidentiality. Remember that it is easy for researchers to see risks as less serious than participants do or even to overlook them completely. For example, one student researcher wanted to test people’s sensitivity to violent images by showing them gruesome photographs of crime and accident scenes. Because she was an emergency medical technician, however, she greatly underestimated how disturbing these images were to most people. Remember too that some risks might apply only to some participants. For example, while most people would have no problem completing a survey about their fear of various crimes, those who have been a victim of one of those crimes might become upset. This is why you should seek input from a variety of people, including your research collaborators, more experienced researchers, and even from nonresearchers who might be better able to take the perspective of a participant.

Once you have identified the risks, you can often reduce or eliminate many of them. One way is to modify the research design. For example, you might be able to shorten or simplify the procedure to prevent boredom and frustration. You might be able to replace upsetting or offensive stimulus materials (e.g., graphic accident scene photos) with less upsetting or offensive ones (e.g., milder photos of the sort people are likely to see in the newspaper). A good example of modifying a research design is a 2009 replication of Milgram’s study conducted by Jerry Burger. Instead of allowing his participants to continue administering shocks up to the 450-V maximum, the researcher always stopped the procedure when they were about to administer the 150-V shock (Burger, 2009). This made sense because in Milgram’s study (a) participants’ severe negative reactions occurred after this point and (b) most participants who administered the 150-V shock continued all the way to the 450-V maximum. Thus the researcher was able to compare his results directly with Milgram’s at every point up to the 150-V shock and also was able to estimate how many of his participants would have continued to the maximum—but without subjecting them to the severe stress that Milgram did. (The results, by the way, were that these contemporary participants were just as obedient as Milgram’s were.)

A second way to minimize risks is to use a pre-screening procedure to identify and eliminate participants who are at high risk. You can do this in part through the informed consent process. For example, you can warn participants that a survey includes questions about their fear of crime and remind them that they are free to withdraw if they think this might upset them. Pre-screening can also involve collecting data to identify and eliminate participants. For example, Burger used an extensive pre-screening procedure involving multiple questionnaires and an interview with a clinical psychologist to identify and eliminate participants with physical or psychological problems that put them at high risk.

A third way to minimize risks is to take active steps to maintain confidentiality. You should keep signed consent forms separate from any data that you collect and in such a way that no individual’s name can be linked to his or her data. In addition, beyond people’s sex and age, you should collect only the personal information that you actually need to answer your research question. If people’s sexual orientation or ethnicity is not clearly relevant to your research question, for example, then do not ask them about it. Be aware also that certain data collection procedures can lead to unintentional violations of confidentiality. When participants respond to an oral survey in a shopping mall or complete a questionnaire in a classroom setting, it is possible that their responses will be overheard or seen by others. If the responses are personal, it is better to administer the survey or questionnaire individually in private or to use other techniques to prevent the unintentional sharing of personal information.
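For data stored electronically, one common way to keep names unlinkable to responses is to replace each name with a random code and store the code-to-name key separately from the de-identified data. The sketch below is only a minimal illustration of that principle, not a complete data-security plan; the function and field names are hypothetical, and a real study would also need secure, access-controlled storage for the key file itself.

```python
import secrets

def pseudonymize(records):
    """Split records into a code-to-name key and a de-identified data set.

    Each participant is assigned a random code. Names appear only in the
    key, which would be stored apart from (and more securely than) the
    response data, so that no name can be linked to a response.
    """
    key = {}    # code -> name; store separately from the data
    data = []   # de-identified responses
    for record in records:
        code = secrets.token_hex(4)
        key[code] = record["name"]
        data.append({"code": code, "response": record["response"]})
    return key, data

# Hypothetical survey responses
records = [
    {"name": "Participant A", "response": 4},
    {"name": "Participant B", "response": 2},
]
key, data = pseudonymize(records)
assert all("name" not in row for row in data)  # no names in the data set
```

The same separation applies to paper records: the signed consent forms play the role of the key file and should never be filed together with coded response sheets.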

Identify and Minimize Deception

Remember that deception can take a variety of forms, not all of which involve actively misleading participants. It is also deceptive to allow participants to make incorrect assumptions (e.g., about what will be on a “memory test”) or simply withhold information about the full design or purpose of the study. It is best to identify and minimize all forms of deception.

Remember that according to the APA Ethics Code, deception is ethically acceptable only if there is no way to answer your research question without it. Therefore, if your research design includes any form of active deception, you should consider whether it is truly necessary. Imagine, for example, that you want to know whether the age of college professors affects students’ expectations about their teaching ability. You could do this by telling participants that you will show them photos of college professors and ask them to rate each one’s teaching ability. But if the photos are not really of college professors but of your own family members and friends, then this would be deception. This deception could easily be eliminated, however, by telling participants instead to imagine that the photos are of college professors and to rate them as if they were.

In general, it is considered acceptable to wait until debriefing before you reveal your research question as long as you describe the procedure, risks, and benefits during the informed consent process. For example, you would not have to tell participants that you wanted to know whether the age of college professors affects people’s expectations about them until the study was over. Not only is this information unlikely to affect people’s decision about whether or not to participate in the study, but it has the potential to invalidate the results. Participants who know that age is the independent variable might rate the older and younger “professors” differently because they think you want them to. Alternatively, they might be careful to rate them the same so that they do not appear prejudiced. But even this extremely mild form of deception can be minimized by informing participants—orally, in writing, or both—that although you have accurately described the procedure, risks, and benefits, you will wait to reveal the research question until afterward. In essence, participants give their consent to be deceived or to have information withheld from them until later.

Weigh the Risks Against the Benefits

Once the risks of the research have been identified and minimized, you need to weigh them against the benefits. This requires identifying all the benefits. Remember to consider benefits to the research participants, to science, and to society. If you are a student researcher, remember that one of the benefits is the knowledge you will gain about how to conduct scientific research in psychology—knowledge you can then use to complete your studies and succeed in graduate school or in your career.

If the research poses minimal risk—no more than in people’s daily lives or routine physical or psychological examinations—then even a small benefit to participants, science, or society is generally considered enough to justify it. If it poses more than minimal risk, then there should be more benefits. If the research has the potential to upset some participants, for example, then it becomes more important that the study be well designed and answer a scientifically interesting research question or have clear practical implications. It would be unethical to subject people to pain, fear, or embarrassment for no better reason than to satisfy one’s personal curiosity. In general, psychological research that has the potential to cause harm that is more than minor or lasts for more than a short time is rarely considered justified by its benefits. Consider, for example, that Milgram’s study—as interesting and important as the results were—would be considered unethical by today’s standards.

Create Informed Consent and Debriefing Procedures

Once you have settled on a research design, you need to create your informed consent and debriefing procedures. Start by deciding whether informed consent is necessary according to APA Standard 8.05. If informed consent is necessary, there are several things you should do. First, when you recruit participants—whether it is through word of mouth, posted advertisements, or a participant pool—provide them with as much information about the study as you can. This will allow those who might find the study objectionable to avoid it. Second, prepare a script or set of “talking points” to help you explain the study to your participants in simple everyday language. This should include a description of the procedure, the risks and benefits, and their right to withdraw at any time. Third, create an informed consent form that covers all the points in Standard 8.02a that participants can read and sign after you have described the study to them. Your university, department, or course instructor may have a sample consent form that you can adapt for your own study. If not, an Internet search will turn up several samples. Remember that if appropriate, both the oral and written parts of the informed consent process should include the fact that you are keeping some information about the design or purpose of the study from them but that you will reveal it during debriefing.

Debriefing is similar to informed consent in that you cannot necessarily expect participants to read and understand written debriefing forms. So again it is best to write a script or set of talking points with the goal of being able to explain the study in simple everyday language. During debriefing, you should reveal the research question and full design of the study. For example, if participants are tested under only one condition, then you should explain what happened in the other conditions. If you deceived your participants, you should reveal this as soon as possible, apologize for the deception, explain why it was necessary, and correct any misconceptions that participants might have as a result. Debriefing is also a good time to provide additional benefits to research participants by giving them relevant practical information or referrals to other sources of help. For example, in a study of attitudes toward domestic abuse, you could provide pamphlets about domestic abuse and referral information to the university counseling center for those who might want it.

Remember to schedule plenty of time for the informed consent and debriefing processes. They cannot be effective if you have to rush through them.

Get Approval

The next step is to get institutional approval for your research based on the specific policies and procedures at your institution or for your course. This will generally require writing a protocol that describes the purpose of the study, the research design and procedure, the risks and benefits, the steps taken to minimize risks, and the informed consent and debriefing procedures. Do not think of the institutional approval process as merely an obstacle to overcome but as an opportunity to think through the ethics of your research and to consult with others who are likely to have more experience or different perspectives than you. If the IRB has questions or concerns about your research, address them promptly and in good faith. This might even mean making further modifications to your research design and procedure before resubmitting your protocol.

Follow Through

Your concern with ethics should not end when your study receives institutional approval. It now becomes important to stick to the protocol you submitted or to seek additional approval for anything other than a minor change. During the research, you should monitor your participants for unanticipated reactions and seek feedback from them during debriefing. One criticism of Milgram’s study is that although he did not know ahead of time that his participants would have such severe negative reactions, he certainly knew after he had tested the first several participants and should have made adjustments at that point (Baumrind, 1985). Be alert also for potential violations of confidentiality. Keep the consent forms and the data safe and separate from each other and make sure that no one, intentionally or unintentionally, has access to any participant’s personal information.

Finally, you must maintain your integrity through the publication process and beyond. Address publication credit—who will be authors on the research and the order of authors—with your collaborators early and avoid plagiarism in your writing. Remember that your scientific goal is to learn about the way the world actually is and that your scientific duty is to report on your results honestly and accurately. So do not be tempted to fabricate data or alter your results in any way. Besides, unexpected results are often as interesting, or more so, than expected ones.

·         It is your responsibility as a researcher to know and accept your ethical responsibilities.

·         You can take several concrete steps to minimize risks and deception in your research. These include making changes to your research design, prescreening to identify and eliminate high-risk participants, and providing participants with as much information as possible during informed consent and debriefing.

·         Your ethical responsibilities continue beyond IRB approval. You need to monitor participants’ reactions, be alert for potential violations of confidentiality, and maintain scholarly integrity through the publication process.

References from Chapter 4

Baumrind, D. (1985). Research using intentional deception: Ethical issues revisited. American Psychologist, 40, 165–174.

Bowd, A. D., & Shapiro, K. J. (1993). The case against animal laboratory research in psychology. Journal of Social Issues, 49, 133–142.

Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64, 1–11.

Haidt, J., Koller, S. H., & Dias, M. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613–628.

Koocher, G. P. (1977). Bathroom behavior and human dignity. Journal of Personality and Social Psychology, 35, 120–121.

Mann, T. (1994). Informed consent for psychological research: Do subjects comprehend consent forms and understand their legal rights? Psychological Science, 5, 140–143.

Middlemist, R. D., Knowles, E. S., & Matter, C. F. (1976). Personal space invasions in the lavatory: Suggestive evidence for arousal. Journal of Personality and Social Psychology, 33, 541–546.

Middlemist, R. D., Knowles, E. S., & Matter, C. F. (1977). What to do and what to report: A reply to Koocher. Journal of Personality and Social Psychology, 35, 122–125.

Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371–378.

Miller, N. E. (1985). The value of behavioral research on animals. American Psychologist, 40, 423–440.

Reverby, S. M. (2009). Examining Tuskegee: The infamous syphilis study and its legacy. Chapel Hill, NC: University of North Carolina Press.

Rosenthal, R. M. (1994). Science and ethics in conducting, analyzing, and reporting psychological research. Psychological Science, 5, 127–133.

Sieber, J. E., Iannuzzo, R., & Rodriguez, B. (1995). Deception methods in psychology: Have they changed in 23 years? Ethics & Behavior, 5, 67–85.



Ethical Considerations In Psychology Research

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Ethics refers to the correct rules of conduct necessary when carrying out research. We have a moral responsibility to protect research participants from harm.

However important the issue under investigation, psychologists must remember that they have a duty to respect the rights and dignity of research participants. This means that they must abide by certain moral principles and rules of conduct.

What are Ethical Guidelines?

In Britain, ethical guidelines for research are published by the British Psychological Society, and in America, by the American Psychological Association. The purpose of these codes of conduct is to protect research participants, the reputation of psychology, and psychologists themselves.

Moral issues rarely yield a simple, unambiguous, right or wrong answer. It is, therefore, often a matter of judgment whether the research is justified or not.

For example, it might be that a study causes psychological or physical discomfort to participants; maybe they suffer pain or perhaps even come to serious harm.

On the other hand, the investigation could lead to discoveries that benefit the participants themselves or even have the potential to increase the sum of human happiness.

Rosenthal and Rosnow (1984) also discuss the potential costs of failing to carry out certain research. Who is to weigh up these costs and benefits? Who is to judge whether the ends justify the means?

Finally, if you are ever in doubt as to whether research is ethical or not, it is worthwhile remembering that if there is a conflict of interest between the participants and the researcher, it is the interests of the subjects that should take priority.

Studies must now undergo an extensive review by an institutional review board (US) or ethics committee (UK) before they are implemented. All UK research requires ethical approval by one or more of the following:

  • Department Ethics Committee (DEC) : for most routine research.
  • Institutional Ethics Committee (IEC) : for non-routine research.
  • External Ethics Committee (EEC) : for research that is externally regulated (e.g., NHS research).

Committees review proposals to assess if the potential benefits of the research are justifiable in light of the possible risk of physical or psychological harm.

These committees may request researchers make changes to the study’s design or procedure or, in extreme cases, deny approval of the study altogether.

The British Psychological Society (BPS) and American Psychological Association (APA) have issued a code of ethics in psychology that provides guidelines for conducting research.  Some of the more important ethical issues are as follows:

Informed Consent

Before the study begins, the researcher must outline to the participants what the research is about and then ask for their consent (i.e., permission) to participate.

An adult (18 years+) with the capacity to consent can agree to participate in a study on their own behalf. Parents/legal guardians of minors can also provide consent to allow their children to participate in a study.

Whenever possible, investigators should obtain the consent of participants. In practice, this means it is not sufficient to get potential participants to say “Yes.”

They also need to know what it is that they agree to. In other words, the psychologist should, so far as is practicable, explain what is involved in advance and obtain the informed consent of participants.

Informed consent must be informed, voluntary, and rational. Participants must be given relevant details to make an informed decision, including the purpose, procedures, risks, and benefits. Consent must be given voluntarily without undue coercion. And participants must have the capacity to rationally weigh the decision.

Components of informed consent include clearly explaining the risks and expected benefits, addressing potential therapeutic misconceptions about experimental treatments, allowing participants to ask questions, and describing methods to minimize risks like emotional distress.

Investigators should tailor the consent language and process appropriately for the study population. Obtaining meaningful informed consent is an ethical imperative for human subjects research.

The voluntary nature of participation should not be compromised through coercion or undue influence. Inducements should be fair and not excessive/inappropriate.

However, it is not always possible to gain informed consent.  Where the researcher can’t ask the actual participants, a similar group of people can be asked how they would feel about participating.

If they think it would be OK, then it can be assumed that the real participants will also find it acceptable. This is known as presumptive consent.

However, a problem with this method is that there might be a mismatch between how people think they would feel/behave and how they actually feel and behave during a study.

In order for consent to be ‘informed,’ consent forms may need to be accompanied by an information sheet for participants setting out information about the proposed study (in lay terms), along with details about the investigators and how they can be contacted.

Special considerations exist when obtaining consent from vulnerable populations with decisional impairments, such as psychiatric patients, intellectually disabled persons, and children/adolescents. Capacity can vary widely so should be assessed individually, but interventions to improve comprehension may help. Legally authorized representatives usually must provide consent for children.

Participants must be given information relating to the following:

  • A statement that participation is voluntary and that refusal to participate will not result in any consequences or any loss of benefits that the person is otherwise entitled to receive.
  • Purpose of the research.
  • All foreseeable risks and discomforts to the participant (if there are any). These include not only physical injury but also possible psychological harm.
  • Procedures involved in the research.
  • Benefits of the research to society and possibly to the individual human subject.
  • Length of time the subject is expected to participate.
  • Person to contact for answers to questions or in the event of injury or emergency.
  • Subjects’ right to confidentiality and the right to withdraw from the study at any time without any consequences.
Debriefing

Debriefing after a study involves informing participants about the purpose, providing an opportunity to ask questions, and addressing any harm from participation. Debriefing serves an educational function and allows researchers to correct misconceptions. It is an ethical imperative.

After the research is over, the participant should be able to discuss the procedure and the findings with the psychologist. They must be given a general idea of what the researcher was investigating and why, and their part in the research should be explained.

Participants must be told if they have been deceived and given reasons why. They must be asked if they have any questions, which should be answered honestly and as fully as possible.

Debriefing should occur as soon as possible and be as full as possible; experimenters should take reasonable steps to ensure that participants understand debriefing.

“The purpose of debriefing is to remove any misconceptions and anxieties that the participants have about the research and to leave them with a sense of dignity, knowledge, and a perception of time not wasted” (Harris, 1998).

The debriefing aims to provide information and help the participant leave the experimental situation in a similar frame of mind as when he/she entered it (Aronson, 1988).

Exceptions may exist if debriefing seriously compromises study validity or causes harm itself, like negative emotions in children. Consultation with an institutional review board guides exceptions.

Debriefing indicates investigators’ commitment to participant welfare. Harms may not be raised in the debriefing itself, so responsibility continues after data collection. Following up demonstrates respect and protects persons in human subjects research.

Protection of Participants

Researchers must ensure that those participating in research will not be caused distress. They must be protected from physical and mental harm. This means you must not embarrass, frighten, offend or harm participants.

Normally, the risk of harm must be no greater than in ordinary life, i.e., participants should not be exposed to risks greater than or additional to those encountered in their normal lifestyles.

The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled, children, etc.), they must receive special care. For example, if studying children, ensure their participation is brief as they get tired easily and have a limited attention span.

Researchers are not always able to accurately predict the risks of taking part in a study, and in some cases, a therapeutic debriefing may be necessary if participants have become disturbed during the research (as happened to some participants in Zimbardo’s prisoner/guard study).

Deception

Deception research involves purposely misleading participants or withholding information that could influence their decision to participate. This method is controversial because it limits informed consent and autonomy, but it can provide valuable knowledge that is otherwise unobtainable.

Types of deception include (i) deliberate misleading, e.g. using confederates, staged manipulations in field settings, deceptive instructions; (ii) deception by omission, e.g., failure to disclose full information about the study, or creating ambiguity.

The researcher should avoid deceiving participants about the nature of the research unless there is no alternative – and even then, this would need to be judged acceptable by an independent expert. However, some types of research cannot be carried out without at least some element of deception.

For example, in Milgram’s study of obedience , the participants thought they were giving electric shocks to a learner when they answered a question wrongly. In reality, no shocks were given, and the learners were confederates of Milgram.

This is sometimes necessary to avoid demand characteristics (i.e., the clues in an experiment that lead participants to think they know what the researcher is looking for).

Another common example is when a stooge or confederate of the experimenter is used (this was the case in both the experiments carried out by Asch ).

According to ethics codes, deception must have strong scientific justification, and non-deceptive alternatives should not be feasible. Deception that causes significant harm is prohibited. Investigators should carefully weigh whether deception is necessary and ethical for their research.

However, participants must be deceived as little as possible, and any deception must not cause distress. Researchers can gauge whether participants are likely to be distressed when the deception is disclosed by consulting culturally relevant groups.

Participants should immediately be informed of the deception without compromising the study’s integrity. Reactions to learning of deception can range from understanding to anger. Debriefing should explain the scientific rationale and social benefits to minimize negative reactions.

If the participant is likely to object or be distressed once they discover the true nature of the research at debriefing, then the study is unacceptable.

If you have gained participants’ informed consent by deception, then they will have agreed to take part without actually knowing what they were consenting to.  The true nature of the research should be revealed at the earliest possible opportunity or at least during debriefing.

Some researchers argue that deception can never be justified and object to this practice as it (i) violates an individual’s right to choose to participate; (ii) is a questionable basis on which to build a discipline; and (iii) leads to distrust of psychology in the community.

Confidentiality

Protecting participant confidentiality is an ethical imperative that demonstrates respect, ensures honest participation, and prevents harms like embarrassment or legal issues. Methods like data encryption, coding systems, and secure storage should match the research methodology.

Participants and the data gained from them must be kept anonymous unless they give their full consent.  No names must be used in a lab report .

Researchers must clearly describe to participants the limits of confidentiality and methods to protect privacy. With internet research, threats exist like third-party data access; security measures like encryption should be explained. For non-internet research, other protections should be noted too, like coding systems and restricted data access.

High-profile data breaches have eroded public trust. Methods that minimize identifiable information can further guard confidentiality. For example, researchers can consider whether birthdates are necessary or just ages.

Generally, reducing personal details collected and limiting accessibility safeguards participants. Following strong confidentiality protections demonstrates respect for persons in human subjects research.
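The point about birthdates versus ages can be made concrete with a small data-minimization sketch: keep only the fields the research question needs, and coarsen quasi-identifiers such as a birthdate into an age band. The function and field names here are illustrative assumptions, not an established procedure.

```python
# Illustrative data-minimization helper: drop direct identifiers and
# replace a birthdate with a 10-year age band (field names are made up).
from datetime import date

def minimize(record, keep=("response",), reference=date(2024, 1, 1)):
    """Return a record with only needed fields; birthdate becomes an age band."""
    out = {k: record[k] for k in keep}
    born = record.get("birthdate")
    if born is not None:
        # Age at the reference date, then rounded down to a 10-year band.
        age = reference.year - born.year - ((reference.month, reference.day) < (born.month, born.day))
        band = (age // 10) * 10
        out["age_band"] = f"{band}-{band + 9}"
    return out

raw = {"name": "A. Participant", "birthdate": date(1990, 6, 15), "response": 4}
clean = minimize(raw)  # → {"response": 4, "age_band": "30-39"}
```

The cleaned record carries no name and no exact birthdate, yet still supports age-related analyses at the level of a decade band, which is often all the research question requires.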

What do we do if we discover something that should be disclosed (e.g., a criminal act)? Researchers have no legal obligation to disclose criminal acts and must determine the most important consideration: their duty to the participant vs. their duty to the wider community.

Ultimately, decisions to disclose information must be set in the context of the research aims.

Withdrawal from an Investigation

Participants should be able to leave a study anytime if they feel uncomfortable. They should also be allowed to withdraw their data. They should be told at the start of the study that they have the right to withdraw.

They should not have pressure placed upon them to continue if they do not want to (a guideline flouted in Milgram’s research).

Participants may feel they shouldn’t withdraw as this may ‘spoil’ the study. Many participants are paid or receive course credits; they may worry they won’t get this if they withdraw.

Even at the end of the study, the participant has a final opportunity to withdraw the data they have provided for the research.

Ethical Issues in Psychology & Socially Sensitive Research

Many psychologists have assumed over the years that if they follow the BPS or APA guidelines when using human participants, their research raises no ethical concerns: all participants leave in a similar state of mind to how they arrived, nobody has been deceived or humiliated, everyone has been debriefed, and no one’s confidentiality has been breached.

But consider the following examples:

a) Caughy et al. (1994) found that middle-class children placed in daycare at an early age generally score lower on cognitive tests than children from similar families reared in the home.

Assuming all guidelines were followed, neither the parents nor the children participating would have been unduly affected by this research. Nobody would have been deceived, consent would have been obtained, and no harm would have been caused.

However, consider the wider implications of this study when the results are published, particularly for parents of middle-class infants who are considering placing their young children in daycare or those who recently have!

b)  IQ tests administered to black Americans show that they typically score 15 points below the average white score.

When black Americans are given these tests, they presumably complete them willingly and are not harmed as individuals. However, when published, findings of this sort seek to reinforce racial stereotypes and are used to discriminate against the black population in the job market, etc.

Sieber & Stanley (1988), the key names in socially sensitive research (SSR), outline four groups that may be affected by psychological research. It is the first group that we are most concerned with:
  • Members of the social group being studied, such as racial or ethnic group. For example, early research on IQ was used to discriminate against US Blacks.
  • Friends and relatives of those participating in the study, particularly in case studies, where individuals may become famous or infamous. Cases that spring to mind would include Genie’s mother.
  • The research team. There are examples of researchers being intimidated because of the line of research they are in.
  • The institution in which the research is conducted.
Sieber and Stanley also suggest there are four main ethical concerns when conducting SSR:
  • The research question or hypothesis.
  • The treatment of individual participants.
  • The institutional context.
  • How the findings of the research are interpreted and applied.

Ethical Guidelines For Carrying Out SSR

Sieber and Stanley suggest the following ethical guidelines for carrying out SSR. There is some overlap between these and research on human participants in general.

Privacy : This refers to people rather than data. Asking people questions of a personal nature (e.g., about sexuality) could offend.

Confidentiality: This refers to data. Information (e.g., about H.I.V. status) leaked to others may affect the participant’s life.

Sound & valid methodology : This is even more vital when the research topic is socially sensitive. Academics can detect flaws in methods, but the lay public and the media often don’t.

When research findings are publicized, people are likely to consider them fact, and policies may be based on them. Examples are Bowlby’s maternal deprivation studies and intelligence testing.

Deception : Causing the wider public to believe something that isn’t true through the findings you report (e.g., that parents are responsible for how their children turn out).

Informed consent : Participants should be made aware of how participating in the research may affect them.

Justice & equitable treatment : Examples of unjust treatment are (i) publicizing an idea that creates prejudice against a group, and (ii) withholding a treatment you believe is beneficial from some participants so that you can use them as controls.

Scientific freedom : Science should not be censored, but there should be some monitoring of sensitive research. The researcher should weigh their responsibilities against their rights to do the research.

Ownership of data : When research findings could be used to make social policies that affect people’s lives, should they be publicly accessible? Sometimes, a party commissions research with its own interests in mind (e.g., an industry, an advertising agency, a political party, or the military).

Some people argue that scientists should be compelled to disclose their results so that other scientists can re-analyze them. If this had happened in Burt’s day, there might not have been such widespread belief in the genetic transmission of intelligence. George Miller (Miller’s Magic 7) famously argued that we should give psychology away.

The values of social scientists : Psychologists can be divided into two main groups: those who advocate a humanistic approach (individuals are important and worthy of study, quality of life is important, intuition is useful) and those advocating a scientific approach (rigorous methodology, objective data).

The researcher’s values may conflict with those of the participant/institution. For example, if someone with a scientific approach was evaluating a counseling technique based on a humanistic approach, they would judge it on criteria that those giving & receiving the therapy may not consider important.

Cost/benefit analysis : It is unethical if the costs outweigh the potential/actual benefits. However, it isn’t easy to assess costs & benefits accurately & the participants themselves rarely benefit from research.

Sieber and Stanley (1988) advise that researchers should not avoid researching socially sensitive issues. Scientists have a responsibility to society to find useful knowledge.

  • They need to take more care over consent, debriefing, etc. when the issue is sensitive.
  • They should be aware of how their findings may be interpreted & used by others.
  • They should make explicit the assumptions underlying their research so that the public can consider whether they agree with these.
  • They should make the limitations of their research explicit (e.g., ‘the study was only carried out on white middle-class American male students,’ ‘the study is based on questionnaire data, which may be inaccurate,’ etc.).
  • They should be careful how they communicate with the media and policymakers.
  • They should be aware of the balance between their obligations to participants and those to society (e.g. if the participant tells them something which they feel they should tell the police/social services).
  • They should be aware of their own values and biases and those of the participants.

Arguments for socially sensitive research (SSR)

  • Psychologists have devised methods to resolve the issues raised.
  • SSR is the most scrutinized research in psychology. Ethics committees reject more SSR than any other form of research.
  • By gaining a better understanding of issues such as gender, race, and sexuality, we are able to gain greater acceptance and reduce prejudice.
  • SSR has been of benefit to society, for example, research into eyewitness testimony (EWT). This has made us aware that EWT can be flawed and should not be used without corroboration. It has also made us aware that the EWT of children is every bit as reliable as that of adults.
  • Most research is still on white middle-class Americans (about 90% of the research quoted in texts). SSR is helping to redress the balance and make us more aware of other cultures and outlooks.

Arguments against SSR

  • Flawed research has been used to dictate social policy and put certain groups at a disadvantage.
  • Research has been used to discriminate against groups in society, such as the sterilization of people in the USA between 1910 and 1920 because they were deemed to be of low intelligence, criminal, or mentally ill.
  • The guidelines used by psychologists to control SSR lack power and, as a result, are unable to prevent indefensible research from being carried out.

American Psychological Association. (2002). American Psychological Association ethical principles of psychologists and code of conduct. www.apa.org/ethics/code2002.html

Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram's "Behavioral study of obedience." American Psychologist, 19(6), 421.

Caughy, M. O. B., DiPietro, J. A., & Strobino, D. M. (1994). Day-care participation as a protective factor in the cognitive development of low-income children. Child Development, 65(2), 457–471.

Harris, B. (1988). Key words: A history of debriefing in social psychology. In J. Morawski (Ed.), The rise of experimentation in American psychology (pp. 188-212). New York: Oxford University Press.

Rosenthal, R., & Rosnow, R. L. (1984). Applying Hamlet's question to the ethical conduct of research: A conceptual addendum. American Psychologist, 39(5), 561.

Sieber, J. E., & Stanley, B. (1988). Ethical and professional dimensions of socially sensitive research. American Psychologist, 43(1), 49.

The British Psychological Society. (2010). Code of Human Research Ethics. www.bps.org.uk/sites/default/files/documents/code_of_human_research_ethics.pdf

Further Information

  • MIT Psychology Ethics Lecture Slides

BPS Documents

  • Code of Ethics and Conduct (2018)
  • Good Practice Guidelines for the Conduct of Psychological Research within the NHS
  • Guidelines for Psychologists Working with Animals
  • Guidelines for ethical practice in psychological research online

APA Documents

APA Ethical Principles of Psychologists and Code of Conduct



NeurIPS Code of Ethics

The Code of Ethics aims to guide the NeurIPS community towards higher standards of ethical conduct as it pertains to elements of research ethics and the broader societal and environmental impact of research submitted to NeurIPS. It outlines conference expectations about the ethical practices that must be adopted by the submitting authors, members of the program and organizing committees. The Code of Ethics complements the NeurIPS Code of Conduct, which focuses on professional conduct and research integrity issues, including plagiarism, fraud and reproducibility concerns. The points described below also inform the NeurIPS Submission Checklist, which outlines more concrete communication requirements.

Potential Harms Caused by the Research Process 

Research involving human subjects or participants:

  • Fair Wages: all human research subjects or participants must receive appropriate compensation. If you make use of crowdsourcing or contract work for a particular task as part of your research project,  you must respect the minimum hourly rate in the region where the work is carried out.
  • Research involving human participants: if the research presented involves direct interactions between the researchers and human participants or between a technical system and human participants, authors are required to follow existing protocols in their institutions (e.g. human subject research accreditation, IRB) and go through the relevant process. In cases when no formal process exists, they can undergo an equivalent informal process (e.g. via their peers or an internal ethics review).
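As a rough illustration of the fair-wages point above, a sanity check like the following can flag underpaid crowdsourcing tasks before a study launches. The pay, task duration, and minimum rate are hypothetical figures, not values from the NeurIPS guidelines.

```python
# Sketch: check that planned crowdsourcing pay meets a regional minimum
# hourly rate. All figures below are hypothetical examples.

def meets_minimum_wage(pay_per_task, minutes_per_task, min_hourly_rate):
    """Return the effective hourly rate and whether it meets the minimum."""
    effective_hourly = pay_per_task * (60 / minutes_per_task)
    return effective_hourly, effective_hourly >= min_hourly_rate

rate, ok = meets_minimum_wage(pay_per_task=1.50, minutes_per_task=5,
                              min_hourly_rate=15.00)
print(f"effective rate: ${rate:.2f}/h, meets minimum: {ok}")
```

In practice, the task duration should come from piloting the task yourself rather than an optimistic estimate, since underestimating time is the usual way effective pay falls below the minimum.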

Data-related concerns:

The points listed below apply to all datasets used for submissions, both for publicly available data and internal datasets.

  • Privacy: Datasets should minimize the exposure of any personally identifiable information, unless informed consent from those individuals is provided to do so. 
  • Consent: Any paper that chooses to create a dataset with real data of real people should ask for the explicit consent of participants, or explain why they were unable to do so.
  • Deprecated datasets: Authors should take care to confirm with dataset creators that a dataset is still available for use. Datasets taken down by the original author (i.e., deemed obsolete or otherwise discontinued) should no longer be used, unless it is for the purposes of audit or critical assessment. For some indication of known deprecated datasets, please refer to the NeurIPS list of deprecated datasets.
  • Copyright and Fair Use: While the norms of fair use and copyright in machine learning research are still evolving, authors must respect the terms of datasets that have defined licenses (e.g. CC 4.0, MIT, etc). 
  • Representative evaluation practice:  When collecting new datasets or making decisions about which datasets to use, authors should assess and communicate the degree to which their datasets are representative of their intended population. Claims of diverse or universal representation should be substantiated by concrete evidence or examples. 
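The representative-evaluation point above can be made concrete with a simple proportion comparison between a dataset and its intended population. The group labels, counts, and population shares below are invented for illustration; a real audit would use the actual demographic categories relevant to the claimed population.

```python
# Sketch: compare group proportions in a dataset against the intended
# population to surface over- and under-representation. All labels and
# figures are hypothetical.

def representation_gaps(dataset_counts, population_shares):
    """Return observed-minus-target share per group (negative = under-represented)."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, target in population_shares.items():
        observed = dataset_counts.get(group, 0) / total
        gaps[group] = observed - target
    return gaps

gaps = representation_gaps(
    dataset_counts={"group_a": 800, "group_b": 150, "group_c": 50},
    population_shares={"group_a": 0.60, "group_b": 0.25, "group_c": 0.15},
)
for group, gap in sorted(gaps.items()):
    print(f"{group}: {gap:+.2f}")
```

A table of such gaps is the kind of concrete evidence the guideline asks for when substantiating claims of diverse or universal representation.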

Societal Impact and Potential Harmful Consequences

Authors should transparently communicate the known or anticipated consequences of research: for instance via the paper checklist or a separate section in a submission.

The following specific areas are of particular concern:

  • Safety: Contributors should consider whether there are foreseeable situations in which their technology can be used to harm, injure or kill people through its direct application, side effects, or potential misuse. We do not accept research whose primary goal is to increase the lethality of weapons systems.
  • Security: Researchers should consider whether there is a risk that applications could open security vulnerabilities or cause serious accidents when deployed in real world environments. If this is the case, they should take concrete steps to recommend or implement ways to protect against such security risks.
  • Discrimination: Researchers should consider whether the technology they developed can be used to discriminate, exclude, or otherwise negatively impact people, including impacts on the provision of services such as healthcare, education or access to credit.  
  • Surveillance: Researchers should consult on local laws or legislation before collecting or analyzing any bulk surveillance data. Surveillance should not be used to predict protected categories, or be used in any way to endanger individual well-being. 
  • Deception & Harassment: Researchers should communicate about whether their approach could be used to facilitate deceptive interactions that would cause harm such as theft, fraud, or harassment, and whether it could be used to impersonate public figures and influence political processes, or as a tool to promote hate speech or abuse.
  • Environment: Researchers should consider whether their research is going to negatively impact the environment by, e.g., promoting fossil fuel extraction, increasing societal consumption or producing substantial amounts of greenhouse gasses.
  • Human Rights: We prohibit circulation of any research work that builds upon or facilitates illegal activity, and we strongly discourage any work that could be used to deny people rights to privacy, speech, health, liberty, security, legal personhood, or freedom of conscience or religion.
  • Bias and fairness:  Contributors should consider any suspected biases or limitations to the scope of performance of models or the contents of datasets and inspect these to ascertain whether they encode, contain or exacerbate bias against people of a certain gender, race, sexuality, or other protected characteristics.

Impact Mitigation Measures 

We propose some reflection and actions taken to mitigate potential harmful consequences from the research project. 

  • Data and model documentation: Researchers should communicate the details of the dataset or the model as part of their submissions via structured templates.
  • Data and model licenses: If releasing data or models, authors should also provide licenses for them. These should include the intended use and limitations of these artifacts, in order to prevent misuse or inappropriate use.
  • Secure and privacy-preserving data storage & distribution : Authors should leverage privacy protocols, encryption and anonymization to reduce the risk of data leakage or theft. Stronger measures should be employed for more sensitive data (e.g., biometric or medical data). 
  • Responsible release and publication strategy: Models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, e.g. by requiring that users adhere to a code of conduct to access the model. Authors of papers exposing a security vulnerability in a system should follow the responsible disclosure procedures of the system owners.
  • Allowing access to research artifacts: When releasing research artifacts, it is important to make accessible the information required to understand these artifacts (e.g. the code, execution environment versions, weights, and hyperparameters of systems) to enable external scrutiny and auditing.
  • Disclose essential elements for reproducibility: Any work submitted to NeurIPS should be accompanied by the information sufficient for the reproduction of results described. This can include the code, data, model weights, and/or a description of the computational resources needed to train the proposed model or validate the results.
  • Ensure legal compliance: Ensure adequate awareness of regional legal requirements. This can be done, for instance, by consulting with law school clinics specializing in intellectual property and technology issues. Additional information is required from authors where legal compliance could not be met due to human rights violations (e.g. freedom of expression, the right to work and education, bodily autonomy, etc.). 
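As one small sketch of the storage and anonymization guidance above, direct identifiers can be replaced with keyed pseudonyms before data is stored or shared. This is only one layer of protection (a real pipeline would add encryption at rest and access controls), and the field names and key handling shown are illustrative assumptions, not a prescribed scheme.

```python
# Sketch: replace a direct identifier with a salted, keyed pseudonym
# (HMAC-SHA-256) before storage. The key must be stored separately from
# the data; the record fields here are hypothetical.

import hmac
import hashlib

SECRET_KEY = b"store-this-key-separately-from-the-data"

def pseudonymize(identifier: str) -> str:
    """Deterministically map an identifier to an opaque token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"participant_email": "alice@example.org", "response": 4}
safe_record = {"participant_id": pseudonymize(record["participant_email"]),
               "response": record["response"]}
print(safe_record)
```

Because the mapping is deterministic, the same participant receives the same token across records, which preserves linkability for analysis without exposing the raw identifier.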

Violations of the Code of Ethics should be reported to [email protected] . NeurIPS reserves the right to reject the presentation of scientific works that violate the Code of Ethics. Note that conference contributors are also obliged to adhere to additional ethical codes or review requirements arising from other stakeholders, such as funders and research institutions.

Further reading

UNDERSTANDING LICENSES

  • Towards Standardization of Data Licenses: The Montreal Data License
  • Behavioral Use Licensing for Responsible AI  
  • Choose an open source license  

MODEL AND DATA DOCUMENTATION TEMPLATES

  • Model Cards for Model Reporting
  • Datasheets for Datasets
  • Using AI Factsheets for AI Governance
  • ML Lifecycle Documentation Practices

SOCIETAL IMPACT

  • Safety: Key Concepts in AI Safety: An Overview
  • Security: SoK: Security and Privacy in Machine Learning
  • Discrimination:  Bias in algorithms – Artificial intelligence and discrimination ; What about fairness, bias and discrimination?
  • Surveillance: The Human Right to Privacy in the Digital Age
  • Deception & Harassment: Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations  
  • Environment: Quantifying the Carbon Emissions of Machine Learning  
  • Human Rights: Technology and Rights | Human Rights Watch  
  • Bias and fairness:  Fairness and machine learning  
  • Dual use problem:  Dual use of artificial-intelligence-powered drug discovery 
  • Data Enrichment: Responsible Sourcing for Data Enrichment
  • Synthetic Media: PAI’s Responsible Practices for Synthetic Media

RELATED ENDEAVORS

  • ACM Code of Ethics  
  • ACL Ethics FAQ
  • ICLR Code of Ethics
  • Responsible Conduct of Research Training  

RELATED RESEARCH COMMUNITIES

  • IEEE SaTML 2023
  • Aies Conference
  • FORC 2022  


  • Published: 29 September 2004

Use of animals in experimental research: an ethical dilemma?

  • V Baumans 1 , 2  

Gene Therapy volume  11 ,  pages S64–S66 ( 2004 ) Cite this article


Mankind has long used animals for food, for transport and as companions. The use of animals in experimental research parallels the development of medicine, which had its roots in ancient Greece (Aristotle, Hippocrates). Under the Cartesian philosophy of the 17th century, experiments on animals could be performed without great moral problems. The discovery of anaesthetics and Darwin's publication on the Origin of Species, defending the biological similarities between man and animal, contributed to the increase of animal experimentation. The increasing demand for high-standard animal models, together with a critical view of the use of animals, led to the development of Laboratory Animal Science in the 1950s, with Russell and Burch's three R's of Replacement, Reduction and Refinement as guiding principles: a field that can be defined as a multidisciplinary branch of science, contributing to the quality of animal experiments and to the welfare of laboratory animals. The increased interest in and concern about animal welfare issues led to legislative regulations in many countries and the establishment of animal ethics committees.



Van Zutphen LFM . History of animal use. In: Van Zutphen LFM, Baumans V, Beynen AC (eds). Principles of Laboratory Animal Science . Elsevier: Amsterdam, 2001, pp 2–5.


Dennis Jr MB . Welfare issues of genetically modified animals. ILAR J 2002; 43 : 100–109.


Russell WMS, Burch RL . The Principles of Humane Experimental Technique . Methuen: London, 1959, Reprinted by UFAW, 1992: 8 Hamilton Close, South Mimms, Potters Bar, Herts EN6 3QD England.


Author information

Authors and affiliations.

Department of Laboratory Animal Science, Utrecht University, Utrecht, The Netherlands

Karolinska Institute, Stockholm, Sweden



Baumans, V. Use of animals in experimental research: an ethical dilemma?. Gene Ther 11 (Suppl 1), S64–S66 (2004). https://doi.org/10.1038/sj.gt.3302371



Understand the principles of Human Research Ethics

Are you supporting or planning to engage in research with or about people, their data or their tissue? A new, self-paced learning module now available on Canvas entitled Human Research Ethics introduces its values, principles and review.

Human research ethics

The module provides information to help you design a human research project and understand how ethics reviewers will consider your design against the guidance provided in the National Statement on Ethical Conduct in Human Research .

Based on, and directly cross-referencing, the National Statement, the module outlines why research ethics review was introduced internationally. It introduces the key agencies and documents associated with Australian ethics review and points out important changes in the 2023 National Statement. These include a refined risk matrix approach and the requirements for granting an exemption from ethics review.

Merrilee Kessler, UTS Research Ethics Coordinator, said that UTS commissioned this module for its research community.

“We wanted to provide a course that introduced the key research ethics concepts such as risk and benefit, consent, recruitment and data management, in a way that helps researchers understand the language and intent of ethics review,” she said.


The module highlights the kinds of responses needed in an ethics application to show that the research design meets the expectations of the National Statement. It looks at current issues like AI, social media in research and Big Data.

“It also includes research that UTS doesn’t come across as often, but which provides great scope for ethical investigation, such as Genomic research and Animal-to-human xenotransplantation,” Merrilee explained.


Keith Heggart, UTS researcher and ethics reviewer, was one of the first to complete the module.

“I think it’s important for early career researchers to understand that going through the ethics process is not just a matter of ticking the right boxes and filling in the right forms. Instead, it’s a thoughtful, nuanced engagement with ideas like justice, beneficence and risk,” he said.

“Done well, careful completion of the ethics approval process improves research projects. This course does a great job of carefully explaining this point, and as such, I recommend it for all early career researchers - and experienced ones too!”


Contributing to research excellence

Importantly, the Human Research Ethics module covers key aspects of the UTS Research Outcomes Capability Framework, including those relating to:

  • Research life cycle: Best practice in research project management from research ethics and integrity, policy and procedures, data management policy and systems to IP management and security.
  • Research leadership: Confidence to champion research integrity and best practice in research project management, mentor successfully and deliver ethical and robust research with integrity.
  • Creativity and innovation: Knowledge of ‘human-centred’ research methods and practices. 
  • Indigenous led knowledges and research: Knowledge and understanding of Indigenous Research Ethics (e.g. AIATSIS, NHMRC, community protocols) and the ability to ensure that research processes and outputs will not harm Indigenous peoples and communities.

“We also look at the relevant UTS policies, procedures and systems,” added Merrilee.

“At the end of the course, participants will have the ability to identify and manage risk and appropriately manage data – including by planning for re-use and reproducibility, archiving and sharing with appropriate audiences.”

About the Human Research Ethics Module

Human Research Ethics  takes approximately two hours to complete and can be accessed as many times as needed once an account has been opened.

A Certificate of Completion is available for download.

Access the training at https://canvas.uts.edu.au/enroll/GEKRK8

Find ethics training and support

  • Visit the Ethics Sharepoint site - Research Ethics and Integrity - Home (sharepoint.com)
  • Make an appointment to attend clinics for Animal Ethics, General Research Ethics or Health Related Research  - Ethics clinics (sharepoint.com)
  • Register for Good Clinical Practice (GCP) training-   GCP training page

Need more information? Get in touch with the Research Ethics team by email [email protected]




Cover Story

Five principles for research ethics

Cover your bases with these ethical strategies

By DEBORAH SMITH

Monitor Staff

January 2003, Vol 34, No. 1

Print version: page 56


Not that long ago, academicians were often cautious about airing the ethical dilemmas they faced in their research and academic work, but that environment is changing today. Psychologists in academe are more likely to seek out the advice of their colleagues on issues ranging from supervising graduate students to how to handle sensitive research data, says George Mason University psychologist June Tangney, PhD.

"There has been a real change in the last 10 years in people talking more frequently and more openly about ethical dilemmas of all sorts," she explains.

Indeed, researchers face an array of ethical requirements: They must meet professional, institutional and federal standards for conducting research with human participants, often supervise students they also teach and have to sort out authorship issues, just to name a few.

Here are five recommendations APA's Science Directorate gives to help researchers steer clear of ethical quandaries:

1. Discuss intellectual property frankly

Academe's competitive "publish-or-perish" mindset can be a recipe for trouble when it comes to who gets credit for authorship. The best way to avoid disagreements about who should get credit and in what order is to talk about these issues at the beginning of a working relationship, even though many people often feel uncomfortable about such topics.

"It's almost like talking about money," explains Tangney. "People don't want to appear to be greedy or presumptuous."

APA's Ethics Code offers some guidance: It specifies that "faculty advisors discuss publication credit with students as early as feasible and throughout the research and publication process as appropriate." When researchers and students put such understandings in writing, they have a helpful tool to continually discuss and evaluate contributions as the research progresses.

However, even the best plans can result in disputes, which often occur because people look at the same situation differently. "While authorship should reflect the contribution," says APA Ethics Office Director Stephen Behnke, JD, PhD, "we know from social science research that people often overvalue their contributions to a project. We frequently see that in authorship-type situations. In many instances, both parties genuinely believe they're right." APA's Ethics Code stipulates that psychologists take credit only for work they have actually performed or to which they have substantially contributed and that publication credit should accurately reflect the relative contributions: "Mere possession of an institutional position, such as department chair, does not justify authorship credit," says the code. "Minor contributions to the research or to the writing for publications are acknowledged appropriately, such as in footnotes or in an introductory statement."

The same rules apply to students. If they contribute substantively to the conceptualization, design, execution, analysis or interpretation of the research reported, they should be listed as authors. Contributions that are primarily technical don't warrant authorship. In the same vein, advisers should not expect ex-officio authorship on their students' work.

Matthew McGue, PhD, of the University of Minnesota, says his psychology department has instituted a procedure to avoid murky authorship issues. "We actually have a formal process here where students make proposals for anything they do on the project," he explains. The process allows students and faculty to more easily talk about research responsibility, distribution and authorship.

Psychologists should also be cognizant of situations where they have access to confidential ideas or research, such as reviewing journal manuscripts or research grants, or hearing new ideas during a presentation or informal conversation. While it's unlikely reviewers can purge all of the information in an interesting manuscript from their thinking, it's still unethical to take those ideas without giving credit to the originator.

"If you are a grant reviewer or a journal manuscript reviewer [who] sees someone's research [that] hasn't been published yet, you owe that person a duty of confidentiality and anonymity," says Gerald P. Koocher, PhD, editor of the journal Ethics and Behavior and co-author of "Ethics in Psychology: Professional Standards and Cases" (Oxford University Press, 1998).

Researchers also need to meet their ethical obligations once their research is published: If authors learn of errors that change the interpretation of research findings, they are ethically obligated to promptly correct the errors in a correction, retraction, erratum or by other means.

To be able to answer questions about study authenticity and allow others to reanalyze the results, authors should archive primary data and accompanying records for at least five years, advises McGue. "Store all your data. Don't destroy it," he says. "Because if someone charges that you did something wrong, you can go back."

"It seems simple, but this can be a tricky area," says Susan Knapp, APA's deputy publisher. "The APA Publication Manual Section 8.05 has some general advice on what to retain and suggestions about things to consider in sharing data."

The APA Ethics Code requires psychologists to release their data to others who want to verify their conclusions, provided that participants' confidentiality can be protected and as long as legal rights concerning proprietary data don't preclude their release. However, the code also notes that psychologists who request data in these circumstances can only use the shared data for reanalysis; for any other use, they must obtain a prior written agreement.

2. Be conscious of multiple roles

APA's Ethics Code says psychologists should avoid relationships that could reasonably impair their professional performance or could exploit or harm others. But it also notes that many kinds of multiple relationships aren't unethical, as long as they're not reasonably expected to have adverse effects.

That notwithstanding, psychologists should think carefully before entering into multiple relationships with any person or group, such as recruiting students or clients as participants in research studies or investigating the effectiveness of a product of a company whose stock they own.

For example, when recruiting students from your Psychology 101 course to participate in an experiment, be sure to make clear that participation is voluntary. If participation is a course requirement, be sure to note that in the class syllabus, and ensure that participation has educative value by, for instance, providing a thorough debriefing to enhance students' understanding of the study. The 2002 Ethics Code also mandates in Standard 8.04b that students be given equitable alternatives to participating in research.

Perhaps one of the most common multiple roles for researchers is being both a mentor and lab supervisor to students they also teach in class. Psychologists need to be especially cautious that they don't abuse the power differential between themselves and students, say experts. They shouldn't, for example, use their clout as professors to coerce students into taking on additional research duties.

By outlining the nature and structure of the supervisory relationship before supervision or mentoring begins, both parties can avoid misunderstandings, says George Mason University's Tangney. It's helpful to create a written agreement that includes both parties' responsibilities as well as authorship considerations, intensity of the supervision and other key aspects of the job.

"While that's the ideal situation, in practice we do a lot less of that than we ought to," she notes. "Part of it is not having foresight up front of how a project or research study is going to unfold."

That's why experts also recommend that supervisors set up timely and specific methods to give students feedback and keep a record of the supervision, including meeting times, issues discussed and duties assigned.

If psychologists do find that they are in potentially harmful multiple relationships, they are ethically mandated to take steps to resolve them in the best interest of the person or group while complying with the Ethics Code.

3. Follow informed-consent rules

When done properly, the consent process ensures that individuals are voluntarily participating in the research with full knowledge of relevant risks and benefits.

"The federal standard is that the person must have all of the information that might reasonably influence their willingness to participate in a form that they can understand and comprehend," says Koocher, dean of Simmons College's School for Health Studies.

APA's Ethics Code mandates that psychologists who conduct research should inform participants about:

The purpose of the research, expected duration and procedures.

Participants' rights to decline to participate and to withdraw from the research once it has started, as well as the anticipated consequences of doing so.

Reasonably foreseeable factors that may influence their willingness to participate, such as potential risks, discomfort or adverse effects.

Any prospective research benefits.

Limits of confidentiality, such as data coding, disposal, sharing and archiving, and when confidentiality must be broken.

Incentives for participation.

Who participants can contact with questions.

Experts also suggest covering the likelihood, magnitude and duration of harm or benefit of participation, emphasizing that their involvement is voluntary and discussing treatment alternatives, if relevant to the research.

Keep in mind that the Ethics Code includes specific mandates for researchers who conduct experimental treatment research. Specifically, they must inform individuals about the experimental nature of the treatment, services that will or will not be available to the control groups, how participants will be assigned to treatments and control groups, available treatment alternatives and compensation or monetary costs of participation.

If research participants or clients are not competent to evaluate the risks and benefits of participation themselves--for example, minors or people with cognitive disabilities--then the person who's giving permission must have access to that same information, says Koocher.

Remember that a signed consent form doesn't mean the informing process can be glossed over, say ethics experts. In fact, the APA Ethics Code says psychologists can skip informed consent in only two instances: when permitted by law or federal or institutional regulations, or when the research would not reasonably be expected to distress or harm participants and involves one of the following:

The study of normal educational practices, curricula or classroom management methods conducted in educational settings.

Anonymous questionnaires, naturalistic observations or archival research for which disclosure of responses would not place participants at risk of criminal or civil liability or damage their financial standing, employability or reputation, and for which confidentiality is protected.

The study of factors related to job or organization effectiveness conducted in organizational settings for which there is no risk to participants' employability, and confidentiality is protected.

If psychologists are precluded from obtaining full consent at the beginning--for example, if the protocol includes deception, recording spontaneous behavior or the use of a confederate--they should be sure to offer a full debriefing after data collection and provide people with an opportunity to reiterate their consent, advise experts.

The code also says psychologists should make reasonable efforts to avoid offering "excessive or inappropriate financial or other inducements for research participation when such inducements are likely to coerce participation."

4. Respect confidentiality and privacy

Upholding individuals' rights to confidentiality and privacy is a central tenet of every psychologist's work. However, many privacy issues are idiosyncratic to the research population, writes Susan Folkman, PhD, in "Ethics in Research with Human Participants" (APA, 2000). For instance, researchers need to devise ways to ask whether participants are willing to talk about sensitive topics without putting them in awkward situations, say experts. That could mean they provide a set of increasingly detailed interview questions so that participants can stop if they feel uncomfortable.

And because research participants have the freedom to choose how much information about themselves they will reveal and under what circumstances, psychologists should be careful when recruiting participants for a study, says Sangeeta Panicker, PhD, director of the APA Science Directorate's Research Ethics Office. For example, it's inappropriate to obtain contact information of members of a support group to solicit their participation in research. However, you could give your colleague who facilitates the group a letter to distribute that explains your research study and provides a way for individuals to contact you, if they're interested.

Other steps researchers should take include:

Discuss the limits of confidentiality. Give participants information about how their data will be used, what will be done with case materials, photos and audio and video recordings, and secure their consent.

Know federal and state law. Know the ins and outs of state and federal law that might apply to your research. For instance, the Goals 2000: Education Act of 1994 prohibits asking children about religion, sex or family life without parental permission.

Another example is that, while most states only require licensed psychologists to comply with mandatory reporting laws, some laws also require researchers to report abuse and neglect. That's why it's important for researchers to plan for situations in which they may learn of such reportable offenses. Generally, research psychologists can consult with a clinician or their institution's legal department to decide the best course of action.

Take practical security measures. Be sure confidential records are stored in a secure area with limited access, and consider stripping them of identifying information, if feasible. Also, be aware of situations where confidentiality could inadvertently be breached, such as having confidential conversations in a room that's not soundproof or putting participants' names on bills paid by accounting departments.

Think about data sharing before research begins. If researchers plan to share their data with others, they should note that in the consent process, specifying how they will be shared and whether data will be anonymous. For example, researchers could have difficulty sharing sensitive data they've collected in a study of adults with serious mental illnesses because they failed to ask participants for permission to share the data. Or developmental data collected on videotape may be a valuable resource for sharing, but unless a researcher asked permission back then to share videotapes, it would be unethical to do so. When sharing, psychologists should use established techniques when possible to protect confidentiality, such as coding data to hide identities. "But be aware that it may be almost impossible to entirely cloak identity, especially if your data include video or audio recordings or can be linked to larger databases," says Merry Bullock, PhD, associate executive director in APA's Science Directorate.
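The "coding data to hide identities" technique the article mentions can be sketched in a few lines. This is a minimal illustration, not the article's prescribed procedure; the field names and participants are invented. The essential point the code captures is that the shared file carries only study codes, while the code-to-identity key is kept separately and securely:

```python
import secrets

def pseudonymize(rows, id_field="name"):
    """Replace each identifier with a random study code.

    Returns the coded rows plus a code-to-identity key, which should be
    stored separately (and securely) from the shared data.
    """
    key = {}          # study code -> original identity; keep under lock
    coded_rows = []
    for row in rows:
        code = "P-" + secrets.token_hex(4)   # random 8-hex-digit code
        key[code] = row[id_field]
        coded = dict(row)                    # copy so the original is untouched
        coded[id_field] = code
        coded_rows.append(coded)
    return coded_rows, key

# Invented example participants (not data from any study discussed here):
participants = [
    {"name": "Jane Doe", "score": 12},
    {"name": "John Roe", "score": 9},
]
shared, linkage_key = pseudonymize(participants)
# 'shared' can be distributed; 'linkage_key' stays with the researcher.
```

As Bullock's caveat notes, coding like this may still be insufficient when records can be linked to other databases or contain audio or video, so it is a baseline measure, not a guarantee of anonymity.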

Understand the limits of the Internet. Since Web technology is constantly evolving, psychologists need to be technologically savvy to conduct research online and cautious when exchanging confidential information electronically. If you're not an Internet whiz, get the help of someone who is. Otherwise, it may be possible for others to tap into data that you thought were properly protected.

5. Tap into ethics resources

One of the best ways researchers can avoid and resolve ethical dilemmas is to know both what their ethical obligations are and what resources are available to them.

"Researchers can help themselves make ethical issues salient by reminding themselves of the basic underpinnings of research and professional ethics," says Bullock. Those basics include:

The Belmont Report. Released by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in 1979, the report provided the ethical framework for ensuing human participant research regulations and still serves as the basis for human participant protection legislation (see Further Reading).

APA's Ethics Code , which offers general principles and specific guidance for research activities.

Moreover, despite the sometimes tense relationship researchers can have with their institutional review boards (IRBs), these groups can often help researchers think about how to address potential dilemmas before projects begin, says Panicker. But psychologists must first give their IRBs the information they need to properly understand a research proposal.

"Be sure to provide the IRB with detailed and comprehensive information about the study, such as the consent process, how participants will be recruited and how confidential information will be protected," says Bullock. "The more information you give your IRB, the better educated its members will become about behavioral research, and the easier it will be for them to facilitate your research."

As cliché as it may be, says Panicker, thinking positively about your interactions with an IRB can help smooth the process for both researchers and the IRBs reviewing their work.

Further reading

American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57 (12).

Sales, B.D., & Folkman, S. (Eds.). (2000). Ethics in research with human participants . Washington, DC: American Psychological Association.

APA's Research Ethics Office in the Science Directorate; Web site: APA Science.

The National Institutes of Health (NIH) offers educational materials on human subjects.

NIH Bioethics Resources Web site.

The Department of Health and Human Services' (DHHS) Office of Research Integrity Web site.

DHHS Office of Human Research Protections Web site.

The 1979 Belmont Report on protecting human subjects.

Association for the Accreditation of Human Research Protection Programs Web site: www.aahrpp.org.


How Ethical Behavior Is Considered in Different Contexts: A Bibliometric Analysis of Global Research Trends

Keywords: experimental research ethics

1. Introduction

2. Literature Review
2.1. Ethical Behavior
2.2. Bibliometric
3. Methodology
4.1. Countries and Their Concerns about Ethical Behavior
4.2. Key Themes in Research Terms
4.3. Bibliographic Coupling Analysis
4.3.1. Journals
4.3.2. Authors
4.4. Co-Citation Analysis
4.4.1. Publications
4.4.2. Journals
4.4.3. Authors
5. Discussion
5.1. Ethical Behavior in Consumption
5.2. Ethical Behavior in Leadership
5.2.1. Social Learning Theory (SLT)
5.2.2. Social Exchange Theory (SET)
Transformational Leadership
Authentic Leadership
Spiritual Leadership
5.3. Ethical Behavior in Business

  • Focus on social responsibility;
  • Emphasis on honesty and fairness;
  • Focus on “Golden Rules”;
  • Values that are consistent with a person’s behavior or religious beliefs;
  • Obligations, responsibilities, and rights towards dedicated or enlightened work;
  • Philosophy of good or bad;
  • Ability to clarify issues in decision making;
  • Focus on personal conscience;
  • Systems or theories of justice that question the quality of one’s relationships;
  • The relationship of the means to ends;
  • Concern with integrity, what should be, habits, logic, and principles of Aristotle;
  • Emphasis on virtue, leadership, confidentiality, judgment of others, putting God first, topicality, and publicity.

Values, Business Ethics, and Corporate Social Responsibility (CSR)

5.4. Ethical Behavior in the Medical Context
5.4.1. Autonomy
5.4.2. Beneficence
5.4.3. Non-Maleficence
5.4.4. Fairness
5.5. Ethical Behavior in Education
5.5.1. Violation of School/University Regulation
5.5.2. Selfishness
5.5.3. Cheating
5.5.4. Computer Ethics
5.6. Ethical Context in Organization
5.6.1. Context of Organizational Ethical Climate
5.6.2. Context of Organizational Ethical Culture
6. Conclusions
7. Limitations and Future Research
Author Contributions
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
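The bibliographic coupling and co-citation measures named in the section headings can be illustrated with a short sketch. The citation data here is invented (papers "A"-"C" and references "r1"-"r5" are assumptions, not data from the study): bibliographic coupling counts the references two papers share, while co-citation counts how often two references are cited together by later papers.

```python
from itertools import combinations

# Toy citation data (invented for illustration): paper -> set of cited refs.
citations = {
    "A": {"r1", "r2", "r3"},
    "B": {"r2", "r3", "r4"},
    "C": {"r1", "r5"},
}

def coupling_strength(p, q, cites):
    """Bibliographic coupling: number of references both papers cite."""
    return len(cites[p] & cites[q])

def cocitation_counts(cites):
    """Co-citation: how many citing papers mention each pair of references together."""
    counts = {}
    for refs in cites.values():
        for pair in combinations(sorted(refs), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return counts

coupling_strength("A", "B", citations)      # A and B share r2 and r3 -> 2
cocitation_counts(citations)[("r2", "r3")]  # r2 and r3 cited together by A and B -> 2
```

A full bibliometric analysis computes these pair counts over thousands of records and then clusters the resulting similarity network, which is how author clusters like those in the tables below are typically derived.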

Cluster 1: Ethical Behavior in Organization and Business
  • Andreas Chatzidakis (Royal Holloway University): Ethical consumption
  • John Peloza (Kentucky University): Responsibility
  • Sean Valentine (Louisiana Tech University): Ethical business, human management, and behavior in an organization
  • Linda Treviño (Pennsylvania State University): Behavior in organizations and ethics, behavior in organizations and ethical business
  • Gary R. Weaver (Delaware University): Moral awareness, ethical behavior in organizations

Cluster 2: Ethical Behavior in Leadership
  • Bruce Avolio (Washington University): Ethical communication of leadership, strategic leadership from individual to global
  • Deanne N. Den Hartog (Amsterdam University): Leadership behavior in the organization, dynamic, international management
  • Jennifer J. Kish-Gephart (Massachusetts-Amherst University): Behavioral ethics, diversity, social inequality, behavior, business ethics
  • Fred O. Walumbwa (Arizona State University's W.P.): Authentic leadership

Cluster 3: Nervous, Deep Brain Stimulation, and Depression
  • Laura B. Dunn (Stanford University): Scientific and ethical issues related to deep brain stimulation for mood, behavioral, and thought disorders, ethics of schizophrenia, treatment of depression
  • Benjamin D. Greenberg (Brown University): Psychiatry, neuroscience, anxiety-related features, deep brain stimulation, treatment-resistant depression
  • Joseph J. Fin (Rockefeller University, Weill Cornell Medical College): Consciousness disorders, deep brain stimulation, neurotechnology, neuroethics
  • Thomas E. Schlaepfer (The Johns Hopkins University): Deep brain stimulation, depression, anxiety, neurobiology

Cluster 4: Ethical Culture
  • Marcus Dickson (Wayne State University): Underlying leadership theories generalizing culture and multiculturalism, the influence of culture on leadership and organizations
  • Mary A. Keating (Trinity College Dublin): Multicultural management, ethics, human resource management
  • Gillian S. Martin (College Dublin): Leadership culture change
  • Christian Resick (Drexel University): Teamwork, personality, organizational culture and conformity, ethical leadership, and ethical-related organizational environment

Cluster 5: Moral Psychology
  • Michael C. Gottieb and Mitchell M. Handelsman (The University of Texas Southwestern Medical Center & University of Kansas): The Ethical Dilemma in Psychotherapy; Ethical Psychologist Training: A Self-Awareness Question for Effective Psychotherapists: Helping Good Psychotherapists Become Even Better; APA Handbook of Ethics in Psychology
  • Samuel L. Knapp (Dartmouth College): Physiological sustainability

Cluster 6: Ethical Issues in Health Care, Especially Concerned with the Knowledge of Nurses
  • Jang, In-sun (Sungshin Women's University): Ethical decision-making model for nurses, nursing students, telehealth technology, research topics on family care between Korea and other countries
  • Park, Eun-jun (Sejong University): Nursing students, beliefs in knowledge and health, Korean nursing students, nurses' organizational culture, health-related behavior
Cluster 1: Psychology, TPB, Theory of the Stages of Moral Development, the Development of Behavior in the Context of Makeup
  • Icek Ajzen (Massachusetts Amherst University): TPB
  • Shelby D. Hunt (Texas Technology University): Marketing research
  • O.C. Ferrell (Auburn University): Ethical marketing, social responsibility
  • Scott J. Vitell (Mississippi University): Business administration, social psychology, marketing, management
  • Lawrence Kohlberg: Theory of the stages of moral development
  • Anusorn Singhapakdi (Old Dominion University, Mississippi University): Marketing with subfields in consumer behavior and econometrics

Cluster 2: Social Cognitive Theory, Ethical Behavior in Leadership
  • Albert Bandura (Stanford University): Behaviorism and cognitive psychology, social learning theory originator, theoretical structure of self-efficacy
  • Michael E. Brown (Sam and Irene Black School of Business, Penn State-Erie, The Behrend College): Behavioral leadership, ethics, ethical leadership, moral conflict
  • David M. Mayer (Michigan University): Behavioral ethics, leadership ethics, organizational behavior
  • Philip Podsakoff (Florida University): Citizen organization, behavioral organization, research methods leadership

Cluster 3: Psychological, Emotional, and Unethical Behavior
  • Francesca Gino (Harvard Business School): Unethical, dishonest behavior
  • Jonathan Haidt (NYU-Stern): Ethical psychology, political psychology, positive psychology, business ethics
  • Ann E. Tenbrunsel (Notre Dame University): Psychology of ethical decision making and the ethical infrastructure in organizations; examining why employees, leaders, and students behave unethically despite their best intentions
  • Karl Aquino (British Columbia University): Ethics, forgiveness, victims, emotions

Cluster 4: Ethical Behavior in Business and Organization
  • Theresa Jones: Ecological light pollution, chemical communication, immune function, history features, mating
  • Linda Treviño (Pennsylvania State University): Organizational behavior and business ethics
  • Gary R. Weaver (Delaware University): Behavioral ethics in organizations
  • Bart Victor (Vanderbilt University): The organizational basis of an ethical work environment
  • Dietz, Joerg, Sandra L. Robinson, Robert Folger, Robert A. Baron, and Martin Schulz. 2003. The impact of community violence and an organization’s procedural justice climate on workplace aggression. Academy of Management Journal 46: 317–26. [ Google Scholar ] [ CrossRef ]
  • Dufresne, Ronald L. 2004. An action learning perspective on effective implementation of academic honor codes. Group & Organization Management 29: 201–18. [ Google Scholar ] [ CrossRef ]
  • Duggan, Patrick S., Gail Geller, Lisa A. Cooper, and Mary Catherine Beach. 2006. The moral nature of patient-centeredness: Is it “just the right thing to do”. Patient Education and Counseling 62: 271–6. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ellemers, Naomi, Stefano Pagliaro, Manuela Barreto, and Colin Wayne Leach. 2008. Is it better to be moral than smart? The effects of morality and competence norms on the decision to work at group status improvement. Journal of Personality and Social Psychology 95: 1397–410. [ Google Scholar ] [ CrossRef ]
  • Engelen, Bart. 2019. Ethical criteria for health-promoting nudges: A case-by-case analysis. The American Journal of Bioethics 19: 48–59. [ Google Scholar ] [ CrossRef ]
  • Erokhin, Vasilii, Kamel Mouloudj, Ahmed C. Bouarar, Smail Mouloudj, and Tianming Gao. 2024. Investigating Farmers’ Intentions to Reduce Water Waste through Water-Smart Farming Technologies. Sustainability 16: 4638. [ Google Scholar ] [ CrossRef ]
  • Fan, Guanhua, Zhenhua Lin, Yizhen Luo, Maohuai Chen, and Liping Li. 2019. Role of community health service programs in navigating the medical ethical slippery slope—A 10-year retrospective study among medical students from southern China. BMC Medical Education 19: 240. [ Google Scholar ] [ CrossRef ]
  • Ferreira, Karine Araújo, Larissa Almeida Flávio, and Lasara Fabrícia Rodrigues. 2018. Postponement: Bibliometric analysis and systematic review of the literature. International Journal of Logistics Systems and Management 30: 69–94. [ Google Scholar ] [ CrossRef ]
  • Ferrell, Odies C., and Larry G. Gresham. 1985. A contingency framework for understanding ethical decision making in marketing. Journal of Marketing 49: 87–96. [ Google Scholar ] [ CrossRef ]
  • Field, R. H. George, and Michael A. Abelson. 1982. Climate: A reconceptualization and proposed model. Human Relations 35: 181–201. [ Google Scholar ] [ CrossRef ]
  • Fishbein, Martin, and Icek Ajzen. 1977. Belief, attitude, intention, and behavior: An introduction to theory and research. Philosophy and Rhetoric 10: 130–32. [ Google Scholar ]
  • Fornell, Claes, and David F. Larcker. 1981. Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research 18: 39–50. [ Google Scholar ] [ CrossRef ]
  • Frederick, William C. 1960. The growing concern over business responsibility. California Management Review 2: 54–61. [ Google Scholar ] [ CrossRef ]
  • Friend, Scott B., Fernando Jaramillo, and Jeff S. Johnson. 2020. Ethical climate at the frontline: A meta-analytic evaluation. Journal of Service Research 23: 116–38. [ Google Scholar ] [ CrossRef ]
  • Fry, Louis W. 2003. Toward a theory of spiritual leadership. The Leadership Quarterly 14: 693–727. [ Google Scholar ] [ CrossRef ]
  • Gamarra, María Pilar, and Michele Girotto. 2022. Ethical behavior in leadership: A bibliometric review of the last three decades. Ethics & Behavior 32: 124–46. [ Google Scholar ] [ CrossRef ]
  • Gatersleben, Birgitta, Niamh Murtagh, Megan Cherry, and Megan Watkins. 2019. Moral, wasteful, frugal, or thrifty? Identifying consumer identities to understand and manage pro-environmental behavior. Environment and Behavior 51: 24–49. [ Google Scholar ] [ CrossRef ]
  • Goebel, Sebastian, and Barbara E. Weißenberger. 2017. The relationship between informal controls, ethical work climates, and organizational performance. Journal of Business Ethics 141: 505–28. [ Google Scholar ] [ CrossRef ]
  • Graham, Melody A. 1994. Cheating at small colleges: An examination of student and faculty attitudes and behaviors. Journal of College Student Development 35: 255–60. [ Google Scholar ]
  • Gram-Hanssen, Kirsten. 2021. Conceptualising ethical consumption within theories of practice. Journal of Consumer Culture 21: 432–49. [ Google Scholar ] [ CrossRef ]
  • Greene, Joshua D., Brian Sommerville, Leigh E. Nystrom, John M. Darley, and Jonathan D. Cohen. 2001. An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105–8. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Greenleaf, Robert K. 2002. Servant Leadership: A Journey into the Nature of Legitimate Power and Greatness . Mahwah: Paulist Press. [ Google Scholar ]
  • Gregory-Smith, Diana, Danae Manika, and Pelin Demirel. 2017. Green intentions under the blue flag: Exploring differences in EU consumers’ willingness to pay more for environmentally-friendly products. Business Ethics: A European Review 26: 205–22. [ Google Scholar ] [ CrossRef ]
  • Grobler, Sonja, and Anton Grobler. 2021. Ethical leadership, person-organizational fit, and productive energy: A South African sectoral comparative study. Ethics & Behavior 31: 21–37. [ Google Scholar ] [ CrossRef ]
  • Haidt, Jonathan. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108: 814–34. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Haines, Russell, and Lori N. K. Leonard. 2007. Individual characteristics and ethical decision-making in an IT context. Industrial Management & Data Systems 107: 5–20. [ Google Scholar ] [ CrossRef ]
  • Harding, Trevor S., Matthew J. Mayhew, Cynthia J. Finelli, and Donald D. Carpenter. 2007. The Theory of Planned Behavior as a Model of Academic Dishonesty in Engineering and Humanities Undergraduates. Ethics & Behavior 17: 255–79. [ Google Scholar ] [ CrossRef ]
  • Harrison, Rob, Terry Newholm, and Deirdre Shaw. 2005. The Ethical Consumer . New York: SAGE Publications. [ Google Scholar ] [ CrossRef ]
  • Harvey, Paul, Mark J. Martinko, and Nancy Borkowski. 2017. Justifying deviant behavior: The role of attributions and moral emotions. Journal of Business Ethics 141: 779–95. [ Google Scholar ] [ CrossRef ]
  • Hassan, Louise M., Edward Shiu, and Deirdre Shaw. 2016. Who says there is an intention–behaviour gap? Assessing the empirical evidence of an intention–behaviour gap in ethical consumption. Journal of Business Ethics 136: 219–36. [ Google Scholar ] [ CrossRef ]
  • Hirth-Goebel, Tabea Franziska, and Barbara E. Weißenberger. 2019. Management accountants and ethical dilemmas: How to promote ethical intention? Journal of Management Control 30: 287–322. [ Google Scholar ] [ CrossRef ]
  • Hofmann, David A., and Barbara Mark. 2006. An investigation of the relationship between safety climate and medication errors as well as other nurse and patient outcomes. Personnel Psychology 59: 847–69. [ Google Scholar ] [ CrossRef ]
  • Hong, Eun-Sil, and Hyo-Yeon Shin. 2010. Ethical consumption and related variables of college students. Korean Journal of Family Management 13: 1–25. [ Google Scholar ]
  • Hong, Yeon Geum, and Insook Song. 2008. A study of cases of ethical consumption in the analysis of purchasing motives of environmentally-friendly agriculture products. Journal of Consumption Culture 11: 23–42. [ Google Scholar ] [ CrossRef ]
  • Huang, Wen-yeh, Ching-Yun Huang, and Alan J. Dubinsky. 2014. The impact of guanxi on ethical perceptions: The case of Taiwanese salespeople. Journal of Business-to-Business Marketing 21: 1–17. [ Google Scholar ] [ CrossRef ]
  • Hunt, Shelby D., and Scott Vitell. 1986. A general theory of marketing ethics. Journal of Macromarketing 6: 5–16. [ Google Scholar ] [ CrossRef ]
  • Husser, Jocelyn, Jean-Marc Andre, and Véronique Lespinet-Najib. 2019. The impact of locus of control, moral intensity, and the microsocial ethical environment on purchasing-related ethical reasoning. Journal of Business Ethics 154: 243–61. [ Google Scholar ] [ CrossRef ]
  • Johnson, Kevin J., Joé T. Martineau, Saouré Kouamé, Gokhan Turgut, and Serge Poisson-de-Haro. 2018. On the unethical use of privileged information in strategic decision-making: The effects of peers’ ethicality, perceived cohesion, and team performance. Journal of Business Ethics 152: 917–29. [ Google Scholar ] [ CrossRef ]
  • Johnson, Olivia, and Veena Chattaraman. 2019. Conceptualization and measurement of millennial’s social signaling and self-signaling for socially responsible consumption. Journal of Consumer Behaviour 18: 32–42. [ Google Scholar ] [ CrossRef ]
  • Jones, Thomas M. 1991. Ethical decision making by individuals in organizations: An issue-contingent model. Academy of Management Review 16: 366–95. [ Google Scholar ] [ CrossRef ]
  • Kacmar, Kacmar, K. Michele, Daniel G. Bachrach, Kenneth J. Harris, and Suzanne Zivnuska. 2011. Fostering good citizenship through ethical leadership: Exploring the moderating role of gender and organizational politics. Journal of Applied Psychology 96: 633–42. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kang, Jee-Won, and Young Namkung. 2018. The effect of corporate social responsibility on brand equity and the moderating role of ethical consumerism: The case of Starbucks. Journal of Hospitality & Tourism Research 42: 1130–51. [ Google Scholar ] [ CrossRef ]
  • Kanungo, Rabindra N. 2001. Ethical values of transactional and transformational leaders. Canadian Journal of Administrative Sciences 18: 257–65. [ Google Scholar ] [ CrossRef ]
  • Kautish, Pradeep, Justin Paul, and Rajesh Sharma. 2019. The moderating influence of environmental consciousness and recycling intentions on green purchase behavior. Journal of Cleaner Production 228: 1425–36. [ Google Scholar ] [ CrossRef ]
  • Keck, Natalija, Steffen R. Giessner, Niels Van Quaquebeke, and Erica Kruijff. 2020. When do followers perceive their leaders as ethical? A relational models perspective of normatively appropriate conduct. Journal of Business Ethics 164: 477–93. [ Google Scholar ] [ CrossRef ]
  • Kelly, Paul, Simon J. Marshall, Hannah Badland, Jacqueline Kerr, Melody Oliver, Aiden R. Doherty, and Charlie Foster. 2013. An ethical framework for automated, wearable cameras in health behavior research. American journal of Preventive Medicine 44: 314–19. [ Google Scholar ] [ CrossRef ]
  • Kelman, Herbert C., and V. Lee Hamilton. 1989. Crimes of Obedience: Toward a Social Psychology of Authority and Responsibility . New Haven: Yale University Press. [ Google Scholar ]
  • Kerse, Gökhan. 2021. A leader indeed is a leader in deed: The relationship of ethical leadership, person–organization fit, organizational trust, and extra-role service behavior. Journal of Management & Organization 27: 601–20. [ Google Scholar ] [ CrossRef ]
  • Kim, Hyelin Lina, Yinyoung Rhou, Muzaffer Uysal, and Nakyung Kwon. 2017. An examination of the links between corporate social responsibility (CSR) and its internal consequences. International Journal of Hospitality Management 61: 26–34. [ Google Scholar ] [ CrossRef ]
  • Kim, Junghwan, and Mei-Po Kwan. 2021. The impact of the COVID-19 pandemic on people’s mobility: A longitudinal study of the US from March to September of 2020. Journal of Transport Geography 93: 103039. [ Google Scholar ] [ CrossRef ]
  • Kim, Jungsun Sunny, Hak Jun Song, and Choong-Ki Lee. 2016. Effects of corporate social responsibility and internal marketing on organizational commitment and turnover intentions. International Journal of Hospitality Management 55: 25–32. [ Google Scholar ] [ CrossRef ]
  • Kirsch, Roxanne E., Corrine R. Balit, Franco A. Carnevale, Jos M. Latour, and Victor Larcher. 2018. Ethical, cultural, social, and individual considerations prior to transition to limitation or withdrawal of life-sustaining therapies. Pediatric Critical Care Medicine 19: 10–18. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kirsch, Roxanne, and David Munson. 2018. Ethical and end of life considerations for neonates requiring ECMO support. Seminars in Perinatology 42: 129–37. [ Google Scholar ] [ CrossRef ]
  • Klein, Katherine J., and Joann Speer Sorra. 1996. The challenge of innovation implementation. Academy of Management Review 21: 1055–80. [ Google Scholar ] [ CrossRef ]
  • Kohlberg, Lawrence. 1976. Moral stages and moralization: The cognitive-development approach. In Moral Development and Behavior: Theory and Research and Social Issues . Edited by Thomas Lickona. New York: Holt, Rienhart, and Winston, pp. 31–53. [ Google Scholar ]
  • Kohlberg, Lawrence. 1981. The Philosophy of Moral Development: Moral Stages and the Idea of Justice. New York: Harper & Row. [ Google Scholar ]
  • Köseoglu, Mehmet Ali, Yasin Sehitoglu, Gary Ross, and John A. Parnell. 2016. The evolution of business ethics research in the realm of tourism and hospitality: A bibliometric analysis. International Journal of Contemporary Hospitality Management 28: 1598–621. [ Google Scholar ] [ CrossRef ]
  • Laratta, Rosario. 2011. Ethical climate and accountability in nonprofit organizations: A comparative study between Japan and the UK. Public Management Review 13: 43–63. [ Google Scholar ] [ CrossRef ]
  • Lawrence, Dana J. 2007. The four principles of biomedical ethics: A foundation for current bioethical debate. Journal of Chiropractic Humanities 14: 34–40. [ Google Scholar ] [ CrossRef ]
  • Leung, Xi Y., Jie Sun, and Billy Bai. 2017. Bibliometrics of social media research: A co-citation and co-word analysis. International Journal of Hospitality Management 66: 35–45. [ Google Scholar ] [ CrossRef ]
  • Lewis, Phillip V. 1985. Defining ‘business ethics’: Like nailing jello to a wall. Journal of Business Ethics 4: 377–83. [ Google Scholar ] [ CrossRef ]
  • Liao, Hui, and Deborah E. Rupp. 2005. The impact of justice climate and justice orientation on work outcomes: A cross-level multifoci framework. Journal of Applied Psychology 90: 242–56. [ Google Scholar ] [ CrossRef ]
  • Liu, Yongdan, Matthew Tingchi Liu, Andrea Pérez, Wilco Chan, Jesús Collado, and Ziying Mo. 2020. The importance of knowledge and trust for ethical fashion consumption. Asia Pacific Journal of Marketing and Logistics 33: 1154–94. [ Google Scholar ] [ CrossRef ]
  • Luthans, Fred, and Bruce J. Avolio. 2003. Authentic leadership: A positive developmental approach. In Positive Organizational Scholarship . Edited by Kim S. Cameron, Jane E. Dutton and Robert E. Quinn. San Francisco: Barrett-Koehler, pp. 241–61. [ Google Scholar ]
  • Machlup, Fritz, and Una Mansfield. 1983. Cultural Diversity in Studies of Information. In The Study of Information: Interdisciplinary Messages . Edited by F. Machlup and U. Mansfield. New York: John Wiley and Sons, pp. 3–59. [ Google Scholar ]
  • Martin, Sean R., Jennifer J. Kish-Gephart, and James R. Detert. 2014. Blind forces: Ethical infrastructures and moral disengagement in organizations. Organizational Psychology Review 4: 295–325. [ Google Scholar ] [ CrossRef ]
  • Mas-Tur, Alicia, Norat Roig-Tierno, Shikhar Sarin, Christophe Haon, Trina Sego, Mustapha Belkhouja, Alan Porter, and José M. Merigó. 2021. Co-citation, bibliographic coupling and leading authors, institutions and countries in the 50 years of Technological Forecasting and Social Change. Technological Forecasting and Social Change 165: 120487. [ Google Scholar ] [ CrossRef ]
  • May, Douglas R., Young K. Chang, and Ruodan Shao. 2015. Does ethical membership matter? Moral identification and its organizational implications. Journal of Applied Psychology 100: 681–94. [ Google Scholar ] [ CrossRef ]
  • Mayer, David M. 2014. A review of the literature on ethical climate and culture. In The Oxford Handbook of Organizational Climate and Culture . Edited by Benjamin Schneider and Karen M. Barbera. New York: Oxford University Press, pp. 415–40. [ Google Scholar ] [ CrossRef ]
  • Mayer, David M., Maribeth Kuenzi, Rebecca Greenbaum, Mary Bardes, and Rommel (Bombie) Salvador. 2009. How low does ethical leadership flow? Test of a trickle-down model. Organizational Behavior and Human Decision Processes 108: 1–13. [ Google Scholar ] [ CrossRef ]
  • Mazar, Nina, On Amir, and Dan Ariely. 2008. The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research 45: 633–44. [ Google Scholar ] [ CrossRef ]
  • McCain, Katherine W. 1990. Mapping authors in intellectual space: A technical overview. Journal of the American Society for Information Science (1986–1996) 41: 433–43. [ Google Scholar ] [ CrossRef ]
  • McGuire, Joseph W. 1963. Factors Affecting the Growth of Manufacturing Firms . Seattle: University of Washington, Bureau of Business Research. [ Google Scholar ]
  • McKay, Patrick F., Derek R. Avery, and Mark A. Morris. 2009. “A tale of two climates: Diversity climate from subordinates” and managers’ perspectives and their role in store unit sales performance. Personnel Psychology 62: 767–91. [ Google Scholar ] [ CrossRef ]
  • McMurtry, Kim. 2001. E-cheating: Combating a 21st Century challenge. T.H.E. Journal 29: 36–38. [ Google Scholar ]
  • Mihelič, Katarina Katja, and Barbara Culiberg. 2014. Turning a blind eye: A study of peer reporting in a business school setting. Ethics & Behavior 24: 364–81. [ Google Scholar ] [ CrossRef ]
  • Mitchell, Marie S., Scott J. Reynolds, and Linda K. Treviño. 2020. The study of behavioral ethics within organizations: A special issue introduction. Personnel Psychology 73: 5–17. [ Google Scholar ] [ CrossRef ]
  • Mohi Ud Din, Qaiser, and Li Zhang. 2023. Unveiling the mechanisms through which leader integrity shapes ethical leadership behavior: Theory of planned behavior perspective. Behavioral Sciences 13: 928. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Moore, Celia, David M. Mayer, Flora F. T. Chiang, Craig Crossley, Matthew J. Karlesky, and Thomas A. Birtch. 2019. Leaders matter morally: The role of ethical leadership in shaping employee moral cognition and misconduct. Journal of Applied Psychology 104: 123–45. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Mouloudj, Kamel, and Ahmed Chemseddine Bouarar. 2023. Investigating predictors of medical students’ intentions to engagement in volunteering during the health crisis. African Journal of Economic and Management Studies 14: 205–22. [ Google Scholar ] [ CrossRef ]
  • Mouloudj, Kamel, Anuli Njoku, Dachel Martínez Asanza, Ahmed Chemseddine Bouarar, Marian A. Evans, Smail Mouloudj, and Achouak Bouarar. 2023. Modeling Predictors of Medication Waste Reduction Intention in Algeria: Extending the Theory of Planned Behavior. International Journal of Environmental Research and Public Health 20: 6584. [ Google Scholar ] [ CrossRef ]
  • Mulki, Jay Prakash, Fernando Jaramillo, Shavin Malhotra, and William B. Locander. 2012. Reluctant employees and felt stress: The moderating impact of manager decisiveness. Journal of Business Research 65: 77–83. [ Google Scholar ] [ CrossRef ]
  • Mumford, Michael D., Ginamarie M Scott, Blaine Gaddis, and Jill M Strange. 2002. Leading creative people: Orchestrating expertise and relationships. The Leadership Quarterly 13: 705–50. [ Google Scholar ] [ CrossRef ]
  • Murrell, Vicki S. 2014. The failure of medical education to develop moral reasoning in medical students. International Journal of Medical Education 5: 219–25. [ Google Scholar ] [ CrossRef ]
  • Nathanson, Craig, Delroy L. Paulhus, and Kevin M. Williams. 2006. Predictors of a behavioral measure of scholastic cheating: Personality and competence but not demographics. Contemporary Educational Psychology 31: 97–122. [ Google Scholar ] [ CrossRef ]
  • Nga, Joyce K. H., and Evelyn W. S. Lum. 2013. An investigation into unethical behavior intentions among undergraduate students: A Malaysian study. Journal of Academic Ethics 11: 45–71. [ Google Scholar ] [ CrossRef ]
  • Njoku, Anuli, Kamel Mouloudj, Ahmed Chemseddine Bouarar, Marian A. Evans, Dachel Martínez Asanza, Smail Mouloudj, and Achouak Bouarar. 2024. Intentions to create green start-ups for collection of unwanted drugs: An empirical study. Sustainability 16: 2797. [ Google Scholar ] [ CrossRef ]
  • Noh, Yoon Goo, and Se Young Kim. 2024. Factors of hospital ethical climate among hospital nurses in Korea: A systematic review and meta-analysis. Healthcare 12: 372. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Onesti, Gianni. 2023. Exploring the impact of leadership styles, ethical behavior, and organizational identification on workers’ well-being. Administrative Sciences 13: 149. [ Google Scholar ] [ CrossRef ]
  • Orellano, Anabel, Carmen Valor, and Emilio Chuvieco. 2020. The Influence of Religion on Sustainable Consumption: A Systematic Review and Future Research Agenda. Sustainability 12: 7901. [ Google Scholar ] [ CrossRef ]
  • Overall, Jeffrey, and Steven Gedeon. 2023. Rational Egoism virtue-based ethical beliefs and subjective happiness: An empirical investigation. Philosophy of Management 22: 51–72. [ Google Scholar ] [ CrossRef ]
  • Pan, Yue, and John R. Sparks. 2012. Predictors, consequence, and measurement of ethical judgments: Review and meta-analysis. Journal of Business Research 65: 84–91. [ Google Scholar ] [ CrossRef ]
  • Pepper, Miriam, Tim Jackson, and David Uzzell. 2009. An examination of the values that motivate socially conscious and frugal consumer behaviours. International Journal of Consumer Studies 33: 126–36. [ Google Scholar ] [ CrossRef ]
  • Petrovskaya, Irina, and Fazli Haleem. 2021. Socially responsible consumption in Russia: Testing the theory of planned behavior and the moderating role of trust. Business Ethics, the Environment & Responsibility 30: 38–53. [ Google Scholar ] [ CrossRef ]
  • Piccolo, Ronald F., Rebecca Greenbaum, Deanne N. den Hartog, and Robert Folger. 2010. The relationship between ethical leadership and core job characteristics. Journal of Organizational Behavior 31: 259–78. [ Google Scholar ] [ CrossRef ]
  • Podsakoff, Philip M., Scott B. MacKenzie, Jeong-Yeon Lee, and Nathan P. Podsakoff. 2003. Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology 88: 879–903. [ Google Scholar ] [ CrossRef ]
  • Potter, William Gray. 1981. Introduction to bibliometrics. Library Trends 30: 5–7. [ Google Scholar ]
  • Rasool, Shahid, Roberto Cerchione, and Jari Salo. 2020. Assessing ethical consumer behavior for sustainable development: The mediating role of brand attachment. Sustainable Development 28: 1620–31. [ Google Scholar ] [ CrossRef ]
  • Reave, Laura. 2005. Spiritual values and practices related to leadership effectiveness. The Leadership Quarterly 16: 655–87. [ Google Scholar ] [ CrossRef ]
  • Resick, Christian J., Gillian S. Martin, Mary A. Keating, Marcus W. Dickson, Ho Kwong Kwan, and Chunyan Peng. 2011. What ethical leadership means to me: Asian, American, and European perspectives. Journal of Business Ethics 101: 435–57. [ Google Scholar ] [ CrossRef ]
  • Rest, James R. 1986. Moral Development: Advances in Research and Theory . New York: Praeger. [ Google Scholar ]
  • Rey-Martí, Andrea, Domingo Ribeiro-Soriano, and Daniel Palacios-Marqués. 2016. A bibliometric analysis of social entrepreneurship. Journal of Business Research 69: 1651–55. [ Google Scholar ] [ CrossRef ]
  • Richardson, Hettie A., and Robert J. Vandenberg. 2005. Integrating managerial perceptions and transformational leadership into a work-unit level model of employee involvement. Journal of Organizational Behavior 26: 561–89. [ Google Scholar ] [ CrossRef ]
  • Roberts, James A. 1993. Sex differences in socially responsible consumers’ behavior. Psychological Reports 73: 139–48. [ Google Scholar ] [ CrossRef ]
  • Robertson, Christopher J. 2008. An analysis of 10 years of business ethics research in strategic management journal: 1996–2005. Journal of Business Ethics 80: 745–53. [ Google Scholar ] [ CrossRef ]
  • Rodriguez-Rad, Carlos. J., and Encarnacion Ramos-Hidalgo. 2018. Spirituality, consumer ethics, and sustainability: The mediating role of moral identity. Journal of Consumer Marketing 35: 51–63. [ Google Scholar ] [ CrossRef ]
  • Romani, Simona, Silvia Grappi, and Richard P. Bagozzi. 2016. Corporate socially responsible initiatives and their effects on consumption of green products. Journal of Business Ethics 135: 253–64. [ Google Scholar ] [ CrossRef ]
  • Ross, Michael W. 2005. Typing, doing, and being: Sexuality and the Internet. Journal of Sex Research 42: 342–52. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Roy, Achinto, Alexander Newman, Heather Round, and Sukanto Bhattacharya. 2024. Ethical culture in organizations: A review and agenda for future research. Business Ethics Quarterly 34: 97–138. [ Google Scholar ] [ CrossRef ]
  • Runes, Dagobert D. 1964. Dictionary of Philosophy . Patterson: Littlefields, Adams and Co. [ Google Scholar ]
  • Saini, Supreet. 2013. Academic ethics at the undergraduate level: Case study from the formative years of the institute. Journal of Academic Ethics 11: 35–44. [ Google Scholar ] [ CrossRef ]
  • Sánchez-González, Irene, Irene Gil-Saura, and María Eugenia Ruiz-Molina. 2020. Ethically Minded Consumer Behavior, Retailers’ Commitment to Sustainable Development, and Store Equity in Hypermarkets. Sustainability 12: 8041. [ Google Scholar ] [ CrossRef ]
  • Schminke, Marshall, Anke Arnaud, and Maribeth Kuenzi. 2007. The power of ethical work climates. Organizational Dynamics 36: 171–86. [ Google Scholar ] [ CrossRef ]
  • Schwepker, Charles H., Jr. 2001. Ethical climate’s relationship to job satisfaction, organizational commitment, and turnover intention in the salesforce. Journal of Business Research 54: 39–52. [ Google Scholar ] [ CrossRef ]
  • Şendağ, Serkan, Mesut Duran, and M. Robert Fraser. 2012. Surveying the extent of involvement in online academic dishonesty (e-dishonesty) related practices among university students and the rationale students provide: One university’s experience. Computers in Human Behavior 28: 849–60. [ Google Scholar ] [ CrossRef ]
  • Shapira-Lishchinsky, Orly, and Zehava Rosenblatt. 2010. School ethical climate and teachers’ voluntary absence. Journal of Educational Administration 48: 164–81. [ Google Scholar ] [ CrossRef ]
  • Singhapakdi, Anusorn. 1993. Ethical perceptions of marketers: The interaction effects of Machiavellianism and organizational ethical culture. Journal of Business Ethics 12: 407–18. [ Google Scholar ] [ CrossRef ]
  • Smith, Samantha, and Angela Paladino. 2010. Eating clean and green? Investigating consumer motivations towards the purchase of organic food. Australasian Marketing Journal 18: 93–104. [ Google Scholar ] [ CrossRef ]
  • Snider, Jamie, Ronald Paul Hill, and Diane Martin. 2003. Corporate social responsibility in the 21st century: A view from the world’s most successful firms. Journal of Business Ethics 48: 175–87. [ Google Scholar ] [ CrossRef ]
  • Soler-Costa, Rebeca, Pablo Lafarga-Ostáriz, Marta Mauri-Medrano, and Antonio-José Moreno-Guerrero. 2021. Netiquette: Ethic, Education, and Behavior on Internet—A Systematic Literature Review. International Journal of Environmental Research and Public Health 18: 1212. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Soltani, Bahram. 2014. The anatomy of corporate fraud: A comparative analysis of high profile American and European corporate scandals. Journal of Business Ethics 120: 251–74. [ Google Scholar ] [ CrossRef ]
  • Starr, Martha A. 2009. The social economics of ethical consumption: Theoretical considerations and empirical evidence. The Journal of Socio-Economics 38: 916–25. [ Google Scholar ] [ CrossRef ]
  • Stern, Paul C., Thomas Dietz, and Gregory A. Guagnano. 1995. The new ecological paradigm in social-psychological context. Environment and Behavior 27: 723–43. [ Google Scholar ] [ CrossRef ]
  • Su, Hsin-Ning, and Pei-Chun Lee. 2010. Mapping knowledge structure by keyword co-occurrence: A first look at journal papers in Technology Foresight. Scientometrics 85: 65–79. [ Google Scholar ] [ CrossRef ]
  • Sumanth, John J., and Sean T. Hannah. 2014. An Integration and Exploration of Ethical and Authentic Leadership Antecedents. In Advances in Authentic and Ethical Leadership . Edited by Linda L. Neider and Chester A. Schriesheim. Charlotte: Information Age Publishing, pp. 25–74. [ Google Scholar ]
  • Tafolli, Festim, and Sonja Grabner-Kräuter. 2020. Employee perceptions of corporate social responsibility and organizational corruption: Empirical evidence from Kosovo. Corporate Governance 20: 1349–70. [ Google Scholar ] [ CrossRef ]
  • Tenbrunsel, Ann E., and Kristin Smith-Crowe. 2008. 13 ethical decision making: Where we’ve been and where we’re going. Academy of Management Annals 2: 545–607. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe. 1986. Ethical decision making in organizations: A person-situation interactionist model. Academy of Management Review 11: 601–17. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe, and Gary R. Weaver. 2001. Organizational justice and ethics program “follow-through”: Influences on employees’ harmful and helpful behavior. Business Ethics Quarterly 11: 651–71. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe, and Katherine A. Nelson. 2021. Managing Business Ethics: Straight Talk about How to Do it Right . Hoboken: John Wiley & Sons. [ Google Scholar ]
  • Treviño, Linda Klebe, and Michael Brown. 2004. Managing to be ethical: Debunking five business ethics myths. Academy of Management Perspectives 18: 69–81. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe, and Stuart A. Youngblood. 1990. Bad apples in bad barrels: A causal analysis of ethical decision-making behavior. Journal of Applied Psychology 75: 378–85. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe, Gary R. Weaver, and Scott J. Reynolds. 2006. Behavioral ethics in organizations: A review. Journal of Management 32: 951–90. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe, Kenneth D. Butterfield, and Donald L. McCabe. 1998. The ethical context in organizations: Influences on employee attitudes and behaviors. Business Ethics Quarterly 8: 447–76. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe, Laura Pincus Hartman, and Michael Brown. 2000. Moral person and moral manager: How executives develop a reputation for ethical leadership. California Management Review 42: 128–42. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe, Michael Brown, and Laura Pincus Hartman. 2003. A qualitative investigation of perceived executive ethical leadership: Perceptions from inside and outside the executive suite. Human Relations 56: 5–37. [ Google Scholar ] [ CrossRef ]
  • Treviño, Linda Klebe, Niki A. Den Nieuwenboer, and Jennifer J. Kish-Gephart. 2014. (Un) ethical behavior in organizations. Annual Review of Psychology 65: 635–60. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Tsai, Feng-Chiao, Yu-Tsung Huang, Luan-Yin Chang, and Jin-Town Wang. 2008. Pyogenic liver abscess as endemic disease, Taiwan. Emerging Infectious Diseases 14: 1592–600. [ Google Scholar ] [ CrossRef ]
  • Tsalikis, John, and David J. Fritzsche. 2013. Business Ethics: A Literature Review with a Focus on Marketing Ethics. In Citation Classics from the Journal of Business Ethics . Edited by Alex C. Michalos and Deborah C. Poff. Advances in Business Ethics Research. Dordrecht: Springer, vol. 2, pp. 337–404. [ Google Scholar ] [ CrossRef ]
  • Ülkü, Tugay, and Musa Said Döven. 2021. The moderator role of ethical climate upon the effect between health personnel’s Machiavellian tendencies and whistleblowing intention: The case of Eskişehir. İş Ahlakı Dergisi 14: 125–66. [ Google Scholar ] [ CrossRef ]
  • Underwood, Jean, and Attila Szabo. 2003. Academic offences and e-learning: Individual propensities in cheating. British Journal of Educational Technology 34: 467–77. [ Google Scholar ] [ CrossRef ]
  • Uusitalo, Outi, and Reetta Oksanen. 2004. Ethical consumerism: A view from Finland. International Journal of Consumer Studies 28: 214–21. [ Google Scholar ] [ CrossRef ]
  • Valentine, Sean, and Tim Barnett. 2007. Perceived organizational ethics and the ethical decisions of sales and marketing personnel. Journal of Personal Selling & Sales Management 27: 373–88. [ Google Scholar ] [ CrossRef ]
  • Val-Laillet, David, Esther Aarts, Bernhard M. Weber, Marco M. D. Ferrari, Valentina Quaresima, Luke Edward Stoeckel, Miguel Alonso-Alonso, Michel Albert Audette, Charles Henri Malbert, and Eric M. Stice. 2015. Neuroimaging and neuromodulation approaches to study eating behavior and prevent and treat eating disorders and obesity. NeuroImage: Clinical 8: 1–31. [ Google Scholar ] [ CrossRef ]
  • Vallaster, Christine, Sascha Kraus, José M. Merigó Lindahl, and Annika Nielsen. 2019. Ethics and entrepreneurship: A bibliometric study and literature review. Journal of Business Research 99: 226–37. [ Google Scholar ] [ CrossRef ]
  • Van Eck, Nees, and Ludo Waltman. 2010. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84: 523–38. [ Google Scholar ] [ CrossRef ]
  • Van Quaquebeke, Niels, Jan U. Becker, Niko Goretzki, and Christian Barrot. 2019. Perceived ethical leadership affects customer purchasing intentions beyond ethical marketing in advertising due to moral identity self-congruence concerns. Journal of Business Ethics 156: 357–76. [ Google Scholar ] [ CrossRef ]
  • Vardaman, James M., Maria B. Gondo, and David G. Allen. 2014. Ethical climate and pro-social rule breaking in the workplace. Human Resource Management Review 24: 108–18. [ Google Scholar ] [ CrossRef ]
  • Victor, Bart, and John B. Cullen. 1988. The organizational bases of ethical work climates. Administrative Science Quarterly 33: 101–25. [ Google Scholar ] [ CrossRef ]
  • Vitell, Scott J., Robert Allen King, Katharine Howie, Jean-François Toti, Lumina Albert, Encarnación Ramos Hidalgo, and Omneya Yacout. 2016. Spirituality, moral identity, and consumer ethics: A multi-cultural study. Journal of Business Ethics 139: 147–60. [ Google Scholar ] [ CrossRef ]
  • Vošner, Helena Blažun, Samo Bobek, Simona Sternad Zabukovšek, and Peter Kokol. 2017. Openness and information technology: A bibliometric analysis of literature production. Kybernetes 46: 750–66. [ Google Scholar ] [ CrossRef ]
  • Walker, Margaret Urban. 2007. Moral Understandings: A Feminist Study in Ethics . New York: Oxford University Press. [ Google Scholar ]
  • Wang, Yau-De, and Hui-Hsien Hsieh. 2012. Toward a better understanding of the link between ethical climate and job satisfaction: A multilevel analysis. Journal of Business Ethics 105: 535–45. [ Google Scholar ] [ CrossRef ]
  • Wartick, Steven L., and Philip L. Cochran. 1985. The evolution of the corporate social performance model. Academy of Management Review 10: 758–69. [ Google Scholar ] [ CrossRef ]
  • Wiegmann, Alex, and Michael R. Waldmann. 2014. Transfer effects between moral dilemmas: A causal model theory. Cognition 131: 28–43. [ Google Scholar ] [ CrossRef ]
  • Wood, Donna J. 1991. Corporate social performance revisited. Academy of Management Review 16: 691–718. [ Google Scholar ] [ CrossRef ]
  • Yadav, Rambalak. 2016. Altruistic or egoistic: Which value promotes organic food consumption among young consumers? A study in the context of a developing nation. Journal of Retailing and Consumer Services 33: 92–97. [ Google Scholar ] [ CrossRef ]
  • Yardley, Jennifer, Melanie Domenech Rodríguez, Scott C. Bates, and Johnathan Nelson. 2009. True confessions?: Alumni’s retrospective reports on undergraduate cheating behaviors. Ethics & Behavior 19: 1–14. [ Google Scholar ] [ CrossRef ]
  • Ye, Nan, Tung-Boon Kueh, Lisong Hou, Yongxin Liu, and Hang Yu. 2020. A bibliometric analysis of corporate social responsibility in sustainable development. Journal of Cleaner Production 272: 122679. [ Google Scholar ] [ CrossRef ]
  • Yukl, Gary. 2006. Leadership in Organizations , 6th ed. Upper Saddle River: Pearson Prentice Hall. [ Google Scholar ]
  • Zandi, GholamReza, Imran Shahzad, Muhammad Farrukh, and Sebastian Kot. 2020. Supporting Role of Society and Firms to COVID-19 Management among Medical Practitioners. International Journal of Environmental Research and Public Health 17: 7961. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Zemigala, Marcin. 2019. Tendencies in research on sustainable development in management sciences. Journal of Cleaner Production 218: 796–809. [ Google Scholar ] [ CrossRef ]
  • Zhang, Guangxi, Jianan Zhong, and Muammer Ozer. 2020. Status threat and ethical leadership: A power-dependence perspective. Journal of Business Ethics 161: 665–85. [ Google Scholar ] [ CrossRef ]
  • Zhang, Qian, Bee Lan Oo, and Benson Teck Heng Lim. 2019. Drivers, motivations, and barriers to the implementation of corporate social responsibility practices by construction enterprises: A review. Journal of Cleaner Production 210: 563–84. [ Google Scholar ] [ CrossRef ]
  • Zohar, Dov. 2010. Thirty years of safety climate research: Reflections and future directions. Accident Analysis & Prevention 42: 1517–22. [ Google Scholar ]


Objectives | Method
Country | Bibliographic coupling
Keyword | Co-occurrence
Publication | Bibliographic coupling and co-citation
Journal | Bibliographic coupling and co-citation
Author | Bibliographic coupling and co-citation
Cluster (Number of Keywords) | Theme of Research about Ethical Behavior in the Context | Context | Keywords
1 (146) | Concerns about health problems | Medical | Care; health; depression; cancer; medicine; stress; quality-of-life; risk; burnout; children; COVID-19; vulnerability; care; human-rights; psychology; life; family; HIV; suicide; bioethics; health-care; nurse
2 (75) | Management work of leaders | Leadership | Performance; ethical leadership; model; ethical decision-making; job-satisfaction; ethical climate; employee voice; work; transformational leadership; abusive supervision
3 (54) | Consumer behavior toward products of a socially responsible firm | Consumption | Corporate social-responsibility; corporate social responsibility; planned behavior; consumers; intentions; consumption; green; consumer behavior; product; welfare; animal welfare; responsibility; sustainability
4 (51) | Understanding the process of making an ethical decision | Ethical decision-making | Ethics; judgment; decision making; power; empathy; morality; emotion; dilemmas; psychologists; dynamics; intuition; negotiation; willingness
5 (37) | Students’ behavior in education | Academic | Education; students; organization; managers; depletion; misconduct; integrity; cheating; academic dishonesty; unethical behavior
6 (30) | Activities in corporations (business, management) | Corporate | Behavior; business ethics; codes; management; entrepreneurship; work climate; financial performance; human resource management; stakeholder theory
7 (23) | Factors considered in marketing | Marketing | Marketing ethics; consumer ethics; religiosity; collectivism; decision-making; idealism; social responsibility; culture; strategy
8 (6) | Spirituality and virtue affecting ethical behavior in Indian firms | Spiritual | Firms; India; philosophy; spirituality; virtue; workplace spirituality
Cluster Representative Publications
Cluster 1 (435 publications)
Medical Context
( ); ( ); ( ); ( ); ( ); ( ); ( ); ( ); ( )
Cluster 2 (131 Publications)
Ethical Behavior in Consumption
( ); ( ); ( ); ( ); ( )
Cluster 3 (129 Publications)
Moral Development, Ethical Perception, Moral Judgment, and Ethical Decision Making
( ); ( ); ( ); ( )
Cluster 4 (119 Publications)
Ethical Behavior in Leadership
( ); ( ); ( ); ( ); ( ); ( ); ( ); ( ); ( )
Cluster 5 (78 Publications)
Ethical Behavior in Business: Corporate Social Responsibility
( ); ( ); ( ); ( ); ( )
Cluster 6 (64 Publications)
(Un)Ethical Behavior in Organizational Context
( ); ( ); ( ); ( ); ( ); ( ); ( ); ( )
Cluster 7 (27 Publications)
(Un)Ethical Behavior in Educational Context
( ); ( ); ( ); ( ); ( )
Cluster 8 (16 Publications)
Ethical Climate in Organizational Context
( ); ( ); ( ); ( ); ( ); ( ); ( )
Journal | Country | Publications | SJR 2021 | Quartile
Journal of Business Ethics (1982) | Netherlands | 143 | 2.44 | Q1
Journal of Applied Psychology (1917) | UK | 24 | 6.45 | Q1
Ethics and Behavior (1991) | USA | 17 | 0.44 | Q2
Sustainability (2009) | Switzerland | 17 | 0.66 | Q1
Science and Engineering Ethics (1995) | Netherlands | 15 | 1.07 | Q1
Frontiers in Psychology (2010) | Switzerland | 10 | 0.87 | Q1
Academic Medicine (1964) | USA | 10 | 1.66 | Q1
Business Ethics Quarterly (1996) | UK | 9 | 1.54 | Q1
Journal of Business Research (1973) | USA | 9 | 2.32 | Q1
Personnel Review (1971) | UK | 5 | 0.89 | Q2
Business Ethics (1992) | UK | 5 | 0.93 | Q1
Cluster Representative Research
Cluster 1 (37 publications)
Ethical Decision Making
( ); ( ); ( ); ( ); ( ); ( ); ( ); ( ); ( )
Cluster 2 (34 publications)
Ethical Leadership
( ); ( ); ( ); ( ); ( ); ( ); ( ); ( ); ( )
Cluster 3 (23 publications)
Ethical Judgment, Moral Development, and Ethical Behavior in an Organization
( ); ( ); ( ); ( ); ( ); ( ); ( ); ( )
Cluster 4 (6 publications)
Ethical Climate
( ); ( ); ( )
Journal | Country | Citations | SJR 2021 | Quartile
Journal of Business Ethics (1982) | Netherlands | 4775 | 2.44 | Q1
Journal of Applied Psychology (1917) | USA | 1326 | 6.45 | Q1
Academy of Management Review (1978) | USA | 1006 | 7.62 | Q1
Academy of Management Journal (1975) | USA | 908 | 10.87 | Q1
Journal of Personality and Social Psychology (1965) | USA | 895 | 3.7 | Q1
Leadership Quarterly (1990) | USA | 639 | 4.91 | Q1
Organizational Behavior and Human Decision Processes (1985) | USA | 577 | 2.83 | Q1
Journal of Business Research (1973) | USA | 538 | 2.32 | Q1
Journal of Management (1975) | USA | 525 | 2.12 | Q1
Journal of Marketing (1969) | USA | 522 | 7.46 | Q1
Science (1880) | USA | 375 | 14.59 | Q1
Business Ethics (1992) | UK | 358 | 0.93 | Q1
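The two counting schemes behind these journal tables differ in direction: bibliographic coupling links two publications that cite the same earlier work, while co-citation links two earlier works that are cited together by later publications. A minimal sketch with invented reference lists (illustrative only, not the study's data):

```python
def coupling_strength(refs_a, refs_b):
    """Bibliographic coupling: how many references two papers share."""
    return len(set(refs_a) & set(refs_b))

def cocitation_count(work_x, work_y, citing_papers):
    """Co-citation: how many papers cite both works in the same reference list."""
    return sum(1 for refs in citing_papers
               if work_x in refs and work_y in refs)

# Illustrative reference lists for two hypothetical papers.
paper_a = ["Trevino1986", "Victor1988", "Rest1986"]
paper_b = ["Trevino1986", "Victor1988", "Brown2005"]

print(coupling_strength(paper_a, paper_b))  # 2 shared references
print(cocitation_count("Trevino1986", "Victor1988",
                       [paper_a, paper_b, ["Brown2005"]]))  # co-cited by 2 papers
```

Summing these counts over every pair of journals (or publications) yields the link weights that the clustering step operates on.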
A. Bibliographic Coupling Analysis | B. Co-Citation Analysis | C. Key Context
Cluster 2 (131 publications): Ethical Behavior in Consumption | Cluster 1 (37 publications): Ethical Decision Making | Consumption
Cluster 4 (119 publications): Ethical Behavior in Leadership | Cluster 2 (34 publications): Ethical Leadership | Leadership
Cluster 3 (129 publications): Moral Development, Ethical Perception, Moral Judgment, and Ethical Decision Making; Cluster 5 (78 publications): Ethical Behavior in Business: Corporate Social Responsibility | Cluster 3 (23 publications): Ethical Judgment, Moral Development, and Ethical Behavior in Organizations | Business
Cluster 6 (64 publications): (Un)Ethical Behavior in Organizational Contexts; Cluster 8 (16 publications): Ethical Climate in Organizational Contexts | Cluster 4 (6 publications): Ethical Climate | Organization
Cluster 1 (435 publications): Medical Contexts | — | Medical
Cluster 7 (27 publications): (Un)Ethical Behavior in Educational Contexts | — | Education
Main Concept | Explanation | Authors
Altruistic consumption | Customers choose forms of consumption that are not environmentally friendly | ( ); ( )
Exchanging behavior | Using the ethical values of the exchange product | ( ); ( )
Fair trade (FT) practice | Includes (1) willingness to pay more, (2) guidance by universalism, benevolence, self-direction and stimulation, (3) self-identity, (4) emphasis on fair trade branding in products, and (5) cultural influences | ( ); ( )
Frugal consumption | Customers are less interested in shopping, with more physical repair and product reuse, and longer product life | ( ); ( )
Green consumption | Customers drive communities and practices at the national level, which forces manufacturers to adhere to environmentally friendly products | ( ); ( )
Socially conscious consumption behavior | Considers equity among environmental issues (e.g., use of used products), health (e.g., building low-waste communities) and social issues (e.g., donating unused products) | ( ); ( )
Socially responsible consumption behavior | Includes buying behavior (e.g., buying used products), non-buying behavior (e.g., discouraging purchase of products using raw materials), and post-purchase behavior (e.g., selling fully functional used products at lower market prices) | ( ); ( )
Spiritual and moral consumption | Consumer spiritual practices promote ethical consumption | ( ); ( )

Share and Cite

Vu Lan Oanh, L.; Tettamanzi, P.; Tien Minh, D.; Comoli, M.; Mouloudj, K.; Murgolo, M.; Dang Thu Hien, M. How Ethical Behavior Is Considered in Different Contexts: A Bibliometric Analysis of Global Research Trends. Adm. Sci. 2024 , 14 , 200. https://doi.org/10.3390/admsci14090200

Vu Lan Oanh, Le, Patrizia Tettamanzi, Dinh Tien Minh, Maurizio Comoli, Kamel Mouloudj, Michael Murgolo, and Mai Dang Thu Hien. 2024. "How Ethical Behavior Is Considered in Different Contexts: A Bibliometric Analysis of Global Research Trends" Administrative Sciences 14, no. 9: 200. https://doi.org/10.3390/admsci14090200


Analyzing the ethical and societal impacts of proposed research

An interdisciplinary Stanford team has created and seeks to scale a new Ethics and Society Review (ESR) that prompts researchers seeking funding to consider the ethical and societal impacts of their research and how to mitigate potential harms.

Currently, Institutional Review Boards (IRBs) are the main form of ethical review for research done in the United States that involves human subjects. IRBs are groups designated to review and monitor research and ensure the rights and welfare of the people taking part in the study. According to the U.S. Food and Drug Administration’s regulations, IRBs have the authority to approve, deny, or require modifications to a research project, but their scope is limited to assessing the impact of the research on the individuals in the study.

The newly proposed ESR fills a critical need by considering how proposed research could have harmful as well as positive effects on society. Consider, for example, the effects of AI algorithms on fairness in sentencing or on who is prioritized for treatments, or the effects of a proposed technology on privacy. If there are risks of negative effects, or known negative effects, how might these be anticipated and mitigated?

Earlier this year, the ESR was tested in a pilot program that reviewed proposals submitted by researchers seeking funding from the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The first faculty review panel included experts from fields including anthropology, communication, computer science, history, management science and engineering, medicine, philosophy, political science, and sociology. A paper published December 28 in the Proceedings of the National Academy of Sciences (PNAS) details the findings and how the ESR could be applied in other areas of research and at institutions elsewhere.

Here, four of the paper’s six co-authors, Michael Bernstein, associate professor of Computer Science in the School of Engineering; Margaret Levi, the Sara Miller McCune Director of the Center for Advanced Study in the Behavioral Sciences (CASBS); David Magnus, the Thomas A. Raffin Professor of Medicine and Biomedical Ethics at Stanford Medicine; and Debra Satz, the Vernon R. and Lysbeth Warren Anderson Dean of the School of Humanities and Sciences, discuss how the ESR came to be, why it’s needed, and the impact they hope it will have.


What is the process for the ethics and society review that you propose?

Bernstein: The engine that we usually associate with ethics review—the Institutional Review Board, or IRB—is explicitly excluded from considering long-range societal impact. So, for example, artificial intelligence projects can be pursued, published, and shared without engaging in any structured ethical or societal reflection. But even if many of these projects do not need to engage with IRBs, they need to apply for funding. The ESR is designed as a gate to funding: funding from collaborating grant programs isn't released until the researchers complete the ESR process.

Levi: The ESR depends on a partnership with a funding group that is willing to release funds to successful proposals only after the project investigators provide a statement outlining any problematic ethical implications or societal consequences of their research. Of particular interest to the review panel are mitigation strategies. If the outline is adequate, the funds are released. If the panel deems it necessary, there is iterated discussion with the panel to help figure out where there are problems, trade-offs that need to be addressed, and appropriate mitigation steps. This is more of a collaborative than a compliance model.

Why do we need an ethics review and why is the focus on potential impacts to society important?

Satz: Our current review processes do a good job of protecting individuals from unnecessary risks or harms. But some of our social problems do not show up directly as harms or risks to individuals but instead to social institutions and the general social fabric that knits our lives together. New technologies are upending the way we work and live in both positive and negative ways. Some of the negative effects are not inevitable; they depend on design choices that we can change.

Magnus: Because this is not part of the IRB process, it is easy for researchers to focus solely on the risks to individual participants without considering the broader implications of their research. For example, a project that was developing wearable robotic devices did a great job of considering all of the relevant risks that research participants would be exposed to and how to mitigate them. But it gave no consideration to the literature on the importance of taking the technology’s downstream implications into account in the design process, for example privacy issues that are likely to arise in real-world settings but not in the laboratory research setting.

An interdisciplinary group of authors worked on this paper. How did that come about?

Satz: The problems posed by new technologies require input from many fields of knowledge, working together. The problems cannot be adequately addressed by ethicists or philosophers pronouncing from “on high”—removed from those creating and thinking about technology and science. We have found that deliberation among computer scientists, philosophers, political scientists, and others yields a deeper understanding of the challenges and provides better guidance for improving our practices.

Levi: All four of the faculty have been active, in different domains, in promoting standards for research that take into account ethical and societal implications, not just harms to individual subjects and participants. Within Stanford’s Ethics, Society, and Technology Hub, CASBS has been coordinating the implementation and evaluation of the ESR. Betsy Rajala, the program director of CASBS, and Charla Waeiss, a consultant to CASBS, have been the key players and are full partners in the writing of the PNAS paper.

Bernstein: What initially catalyzed this effort was an email that Debra Satz sent about a (rejected) grant that we were on, where she mentioned that IRBs were focused on risks to human subjects rather than risks to human society. Her comment gave words to much of the uncertainty I had faced in my career as a computer scientist, and it rattled around in my brain until I translated it into the basic concept of the ESR—ethics and societal review connected to grant funding. I quickly connected with Margaret Levi, who had been pursuing similar goals in the social sciences and had a strong interest in societal impacts of AI. We pitched it to the leadership of Stanford's HAI; they connected us with David Magnus, who has vast experience in ethics review, and the four of us were off to the races.

Did any of the results surprise you?

Bernstein: Two results surprised me. First, I expected substantial pushback from researchers along the lines of "you're adding red tape!" However, all the respondents to our survey were willing to submit to the ESR again in the future. Second, over half of researchers felt that the process had positively influenced the design of their research project. For a fairly lightweight process to benefit the design of half of projects was a huge—and very pleasant—surprise to me.

What’s next for the ESR?

Magnus: The biggest challenge is to find a way to make this scalable. It is one thing to do an ESR for 35 or 40 proposals; it is quite another to do 400 or 4,000. We hope this scaffolding will make it easier for researchers to think through the ethical and social issues raised by their research and identify strategies to mitigate any problems. [We also hope this process] becomes a routine part of research.

Levi: We also are eager to collaborate with other universities and firms to see how best to transfer our process broadly. In addition, we are considering ways to help researchers when they discover new ethical implications or societal consequences in the process of their research. In terms of improving the ESR, our plans are two-fold. First, we are determining ways to staff and support the faculty panels so that we are not misusing or over-demanding of faculty time. Second, and perhaps most importantly, we are building the scaffolding that will inform and transform thinking so that considering the ethical implications and societal consequences becomes second nature.

Animal experimentation: A look into ethics, welfare and alternative methods

  • November 2017
  • Revista da Associação Médica Brasileira 63(11):923-928
  • Marcos Fernandes (Universidade Federal de Goiás)
  • Aline Pedroso (Universidade Federal de Goiás)


Public Health Notes

Research Ethics: Definition, Principles and Advantages

October 13, 2020 | Kusum Wagle | Epidemiology


What is Research Ethics?

  • Ethics are the set of rules that govern our expectations of our own and others’ behavior.
  • Research ethics are the ethical guidelines that govern how scientific research should be conducted and disseminated.
  • Research ethics set the standards of conduct for scientific researchers; they are the guidelines for conducting research responsibly.
  • Research that involves human subjects or contributors raises distinctive and complex ethical, legal, social and administrative concerns.
  • Research ethics is specifically concerned with the examination of ethical issues that arise when individuals are involved as participants in a study.
  • A research ethics committee/Institutional Review Board (IRB) reviews whether the research is ethical enough to protect the rights, dignity and welfare of the respondents.

Objectives of Research Ethics:

  • The first and most comprehensive objective: to protect human participants and their dignity, rights and welfare.
  • The second objective: to ensure that research is conducted in a manner that serves the interests of individuals, groups and/or society as a whole.
  • The third objective: to examine specific research activities and projects for their ethical soundness, considering issues such as the management of risk, protection of privacy and the process of obtaining informed consent.

Principles of Research Ethics:


The general principles of research ethics are:

  • Honesty: Being honest with the beneficiaries and respondents, honest about the findings and methodology of the research, and honest with other direct and indirect stakeholders.
  • Integrity: Ensuring sincerity, fulfilling agreements and promises, and not creating false expectations or making false promises.
  • Objectivity: Avoiding bias in experimental design, data analysis, data interpretation, peer review, and other aspects of research.
  • Beneficence: Maximizing the benefits to participants; the ethical obligation to maximize possible benefits and to minimize possible harms to the respondents.
  • Non-maleficence: Doing no harm; minimizing harms or risks to humans; ensuring privacy, autonomy and dignity.
  • Responsible publication: Publishing responsibly to promote the uptake of research and knowledge; no duplicate publication.
  • Anonymity: Keeping the participant anonymous; not revealing the name, caste or any other information about the participants that may reveal their identity.
  • Confidentiality: Protecting confidential information, such as personnel records.
  • Non-discrimination: Avoiding discrimination on the basis of age, sex, race, ethnicity or other factors that violate human rights and are not related to the study.
  • Openness: Being open to sharing results, data and other resources, and accepting encouraging comments and constructive feedback.
  • Carefulness: Being careful about possible errors and biases.
  • Respect for intellectual property: Giving credit to the intellectual property of others, always paraphrasing when referring to others’ writing, and never plagiarizing.
  • Justice: The obligation to distribute benefits and burdens fairly, to treat equals equally, and to give reasons for differential treatment based on widely accepted criteria for just ways to distribute benefits and burdens.

Broad Categorization of Principles of Research Ethics:

Broadly categorizing, there are mainly five principles of research ethics:

1. MINIMIZING THE RISK OF HARM

It is necessary to minimize any sort of harm to the participants. There are a number of forms of harm that participants can be exposed to:

  • Physical harm to participants.
  • Psychological distress and embarrassment.
  • Social disadvantage.
  • Violation of participants’ confidentiality and privacy.

In order to minimize the risk of harm, the researcher/data collector should:

  • Obtain informed consent from participants.
  • Protect the anonymity and confidentiality of participants.
  • Avoid deceptive or misleading practices when designing the research.
  • Provide participants with the right to withdraw.

2. OBTAINING INFORMED CONSENT 

One of the fundamentals of research ethics is the notion of informed consent.

Informed consent means that a person knowingly, voluntarily and intelligently gives consent to participate in research.

Informed consent means that the participants should be well-informed about the:

  • Introduction and objective of the research
  • Purpose of the discussion
  • Anticipated advantages, benefits/harm from the research (if any)
  • Use of research
  • Their role in research
  • Methods which will be used to protect anonymity and confidentiality of the participant
  • Freedom to not answer any question/withdraw from the research
  • Whom to contact if the participant needs additional information about the research
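In a data-collection workflow, the checklist above can be enforced mechanically before an interview record is accepted. A minimal sketch follows; the field names are illustrative, not a standard:

```python
# Elements of informed consent that the checklist above requires.
# Field names are hypothetical, chosen for this illustration.
REQUIRED_CONSENT_ELEMENTS = {
    "objective_explained",
    "purpose_explained",
    "benefits_and_harms_explained",
    "use_of_research_explained",
    "role_explained",
    "anonymity_methods_explained",
    "right_to_withdraw_explained",
    "contact_person_given",
}

def missing_consent_elements(record: dict) -> set:
    """Return checklist items that are absent or not affirmed in a consent record."""
    return {e for e in REQUIRED_CONSENT_ELEMENTS if not record.get(e)}

record = {e: True for e in REQUIRED_CONSENT_ELEMENTS}
record["contact_person_given"] = False
print(sorted(missing_consent_elements(record)))  # ['contact_person_given']
```

A record is accepted only when the returned set is empty; anything else is sent back to the interviewer to complete.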

3. PROTECTING ANONYMITY AND CONFIDENTIALITY

Protecting the anonymity and confidentiality of research participants is another practical component of research ethics.

Protecting anonymity: It means keeping the participant anonymous. It involves not revealing the name, caste or any other information about the participants that may reveal his/her identity.

Maintaining confidentiality: It refers to ensuring that the information given by a participant is kept confidential and not shared with anyone except the research team, and that it is kept secret from other people.
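One standard-library way to honor both points at once is keyed pseudonymization: direct identifiers are replaced in the analysis dataset by codes that only the research team, which holds the key, could link back to individuals. A minimal sketch (key handling is simplified for illustration):

```python
import hashlib
import hmac

# Illustrative only: in practice the key is generated randomly and
# stored separately from the data.
SECRET_KEY = b"store-this-key-separately-from-the-data"

def pseudonym(participant_name: str) -> str:
    """Replace a direct identifier with a stable, non-reversible code."""
    digest = hmac.new(SECRET_KEY, participant_name.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:12]  # short code used in place of the name

# The same participant always maps to the same code...
assert pseudonym("Jane Doe") == pseudonym("Jane Doe")
# ...but different participants get different codes.
assert pseudonym("Jane Doe") != pseudonym("John Doe")
```

Because the codes are stable, responses from the same participant can still be linked across interviews, while anyone without the key cannot recover names from the published dataset.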

4. AVOIDING MISLEADING PRACTICES

  • The researcher should avoid all deceptive and misleading practices that might misinform the respondent.
  • This includes avoiding activities such as communicating wrong messages, giving false assurances, and providing false information.

5. PROVIDING THE RIGHT TO WITHDRAW

  • Participants must have the right to withdraw at any point of the research.
  • When a respondent decides to withdraw from the research, they should not be pressured or coerced in any manner to prevent them from withdrawing.

Apart from the ethics mentioned above, other ethical aspects that must be considered while doing research are:

Protection of vulnerable groups of people:

  • Vulnerability refers to a diminished capacity to protect one's own interests and well-being. Vulnerable groups include captive populations (detainees, institutionalized persons, students, etc.), mentally ill persons, elderly people, children, the critically ill or dying, the poor, people with learning disabilities, and those who are sedated or unconscious.
  • Their participation in research requires particular care, owing to their limited capacity to give informed consent and their need for additional protection and sensitivity from the researcher, as they are at greater risk of being deceived, exposed or forced to participate.

Skills of the researcher:

  • Researchers should have the basic skills and knowledge needed for the specific study to be carried out and be conscious of the limits of their personal competence in research.
  • Any lack of knowledge in the area under research must be clearly stated.
  • Inexperienced researchers should work under qualified supervision, which must be reviewed by an ethics committee.

Advantages of Research Ethics:

  • Research ethics promote the aims of research, such as knowledge, truth and the avoidance of error.
  • They increase trust between the researcher and the respondent.
  • Adhering to ethical principles protects the dignity, rights and welfare of research participants.
  • Researchers can be held accountable and answerable for their actions.
  • Ethics promote social and moral values.
  • Ethical standards uphold the values that are vital to collaborative work, such as trust, accountability, mutual respect and fairness.
  • Ethical norms in research also help build public support for research: people are more likely to trust a research project if they can trust its quality and integrity.

Limitations of Research Ethics:

For subjects:

  • Risks to physical integrity, including those linked with experimental drugs and procedures and with other interventions used in the study (e.g. procedures used to monitor research participants, such as blood sampling, X-rays or lumbar punctures).
  • Psychological risks: for example, a questionnaire may pose a risk if it concerns traumatic events or especially distressing experiences.
  • Social, legal and economic risks: for example, if personal information collected during a study is unintentionally released, participants might face a threat of discrimination and stigmatization.
  • Certain ethnic or indigenous groups may suffer discrimination, stigmatization or other burdens because of research, typically if members of those groups are identified as having a greater-than-usual risk of developing a specific disease.
  • The research may also affect the existing health system: for example, human and financial resources devoted to research may divert attention from other pressing health care needs in the community.

How can we ensure ethics at different steps of research?

The following process helps to ensure ethics at different steps of research:

  • Gather the facts and discuss intellectual property openly
  • Define the ethical issues
  • Identify the affected parties (stakeholders)
  • Identify the consequences
  • Identify the obligations (principles, rights, justice)
  • Consider your character and integrity
  • Think creatively about potential actions
  • Respect privacy and confidentiality
  • Decide on the appropriate ethical action and be prepared to deal with divergent points of view.






The Nuremberg Code isn’t just for prosecuting Nazis − its principles have shaped medical ethics to this day


Director of the Center for Health Law, Ethics & Human Rights, Boston University

Disclosure statement

George J Annas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Boston University provides funding as a founding partner of The Conversation US.


After World War II, Nuremberg, Germany, was the site of trials of Nazi officials charged with war crimes and crimes against humanity. The Nuremberg trials were landmarks in the development of international law. But one of them has also been applied in peacetime: the “ Medical Trial ,” which has helped to shape bioethics ever since.

Twenty Nazi physicians and three administrators were tried for committing lethal and torturous human experimentation , including freezing prisoners in ice water and subjecting them to simulated high-altitude experiments. Other Nazi experiments included infecting prisoners with malaria, typhus and poisons and subjecting them to mustard gas and sterilization. These criminal experiments were conducted mostly in the concentration camps and often ended in the death of the subjects.

Lead prosecutor Telford Taylor, an American lawyer and general in the U.S. Army, argued that such deadly experiments were more accurately classified as murder and torture than anything related to the practice of medicine. A review of the evidence, including physician expert witnesses and testimony from camp survivors , led the judges to agree. The verdicts were handed down on Aug. 20, 1947.

As part of their judgment, the American judges drafted what has become known as The Nuremberg Code , which set forth key requirements for ethical treatment and medical research. The code has been widely recognized for, among other things, being the first major articulation of the doctrine of informed consent. Yet its guidelines may not be enough to protect humans against new potentially “species-endangering” research today.

10 key values

The code consists of 10 principles that the judges ruled must be followed as both a matter of medical ethics and a matter of international human rights law.

The first and most famous sentence stands out: “The voluntary consent of the human subject is absolutely essential.”

In addition to voluntary and informed consent, the code also requires that subjects have a right to withdraw from an experiment at any time. The other provisions are designed to protect the health of the subjects, including that the research must be done only by a qualified investigator, follow sound science, be based on preliminary research on animals and ensure adequate health and safety protection of subjects.

The trial’s prosecutors, physicians and judges formulated the code by working together. As they did, they also set the early agenda for a new field: bioethics. The guidelines also describe a scientist-subject relationship that obligates researchers to do more than act in what they think is the best interests of subjects, but to respect the subject’s human rights and protect their welfare. These rules essentially replace the paternalistic model of the Hippocratic oath with a human rights approach.


Under President Dwight D. Eisenhower, who had been the commanding general in Europe, the U.S. Department of Defense adopted the code’s principles in 1953 – one sign of its influence. Its fundamental consent principle is also summarized in the U.N.’s International Covenant on Civil and Political Rights , which declares that “no one shall be subjected without his free consent to medical or scientific experimentation.”

Yet some physicians tried to distance themselves from the Nuremberg Code because its source was judicial rather than medical, and because they did not want to be linked in any way to the Nazi physicians on trial at Nuremberg.

The World Medical Association, a physicians group set up after the Nuremberg Doctors Trial, formulated its own set of ethical guidelines , named the “ Helsinki Declaration .” As with Hippocrates, Helsinki permitted exceptions to informed consent, such as when the physician-researcher thought that silence was in the best medical interest of the subject.

The Nuremberg Code was written by judges to be applied in the courtroom. Helsinki was written by physicians for physicians.

There have been no international trials on human experimentation since Nuremberg, even in the International Criminal Court, so the text of the Nuremberg Code remains unchanged.

New research, new procedures?

The code has been a major focus of my work on health law and bioethics , and I spoke in Nuremberg on its 50th and 75th anniversaries, at conferences sponsored by the International Physicians for the Prevention of Nuclear War. Both events celebrated the Nuremberg Code as a human rights proclamation.


I remain a strong supporter of the Nuremberg Code and believe that following its precepts is both an ethical and a legal obligation of physician researchers. Yet the public can’t expect Nuremberg to protect it against all types of scientific research or weapons development.

Soon after the U.S. dropped atomic bombs over Hiroshima and Nagasaki – two years before the Nuremberg trials began – it became evident that our species was capable of destroying itself.

Nuclear weapons are only one example. Most recently, international debate has focused on new potential pandemics, but also on “gain-of-function” research, which sometimes adds lethality to an existing bacterium or virus to make it more dangerous. The goal is not to harm humans but rather to try to develop a protective countermeasure. The danger, of course, is that a super harmful agent “escapes” from the laboratory before such a countermeasure can be developed.

I agree with the critics who argue that at least some gain-of-function research is so dangerous to our species that it should be outlawed altogether. Innovations in artificial intelligence and climate engineering could also pose lethal dangers to all humans, not just some humans. Our next question is who gets to decide whether species-endangering research should be done, and on what basis?

I believe that species-endangering research should require multinational, democratic debate and approval. Such a mechanism would be one way to make the survival of our own endangered species more likely – and ensure we are able to celebrate the 100th anniversary of the Nuremberg Code.

  • Medical ethics
  • Human research ethics
  • Informed consent
  • Biomedical ethics
  • Ethical question
  • War crimes trials
  • Non-human experimentation



Title: Raising AI Ethics Awareness through an AI Ethics Quiz for Software Practitioners

Abstract: Today, ethical issues surrounding AI systems are increasingly prevalent, highlighting the critical need to integrate AI ethics into system design to prevent societal harm. Raising awareness and fostering a deep understanding of AI ethics among software practitioners is essential for achieving this goal. However, research indicates a significant gap in practitioners' awareness and knowledge of AI ethics and ethical principles. While much effort has been directed toward helping practitioners operationalise AI ethical principles such as fairness, transparency, accountability, and privacy, less attention has been paid to raising initial awareness, which should be the foundational step. Addressing this gap, we developed a software-based tool, the AI Ethics Quiz, to raise awareness and enhance the knowledge of AI ethics among software practitioners. Our objective was to organise interactive workshops, introduce the AI Ethics Quiz, and evaluate its effectiveness in enhancing awareness and knowledge of AI ethics and ethical principles among practitioners. We conducted two one-hour workshops (one in-person and one online) involving 29 software practitioners. Data was collected through a pre-quiz questionnaire, the AI Ethics Quiz, and a post-quiz questionnaire. The anonymous responses revealed that the quiz significantly improved practitioners' awareness and understanding of AI ethics. Additionally, practitioners found the quiz engaging and reported it created a meaningful learning experience regarding AI ethics. In this paper, we share insights gained from conducting these interactive workshops and introducing the AI Ethics Quiz to practitioners. We also provide recommendations for software companies and leaders to adopt similar initiatives, which may help them enhance practitioners' awareness and understanding of AI ethics.
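The abstract describes the AI Ethics Quiz only at a high level. As a rough, hypothetical sketch of the kind of tool it refers to (the question texts, scoring scheme, and pre/post comparison below are illustrative assumptions, not the authors' actual implementation), a minimal multiple-choice quiz with score tracking might look like:

```python
# Hypothetical sketch of a minimal AI-ethics quiz, inspired by (not taken
# from) the tool described in the abstract. Questions and scoring are
# illustrative assumptions.

QUESTIONS = [
    {
        "prompt": "Which principle concerns explaining how an AI system "
                  "reaches its decisions?",
        "options": ["Fairness", "Transparency", "Privacy"],
        "answer": "Transparency",
    },
    {
        "prompt": "Which principle concerns who is responsible when an AI "
                  "system causes harm?",
        "options": ["Accountability", "Privacy", "Transparency"],
        "answer": "Accountability",
    },
]

def score_quiz(responses):
    """Count correct answers; `responses` holds one option string per
    question, in question order."""
    return sum(1 for q, r in zip(QUESTIONS, responses) if r == q["answer"])

def awareness_gain(pre_score, post_score):
    """Simple pre/post comparison, mirroring the pre- and post-quiz
    questionnaires used in the workshops."""
    return post_score - pre_score

print(score_quiz(["Transparency", "Accountability"]))  # 2
print(awareness_gain(1, 2))                            # 1
```

In the study itself the quiz was delivered during interactive workshops and bracketed by pre- and post-quiz questionnaires; the `awareness_gain` helper above simply mirrors that pre/post comparison in code.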
Comments: 37 pages, 11 figures, 2 tables
Subjects: Software Engineering (cs.SE)


  • J Prev Med Hyg
  • v.63(2 Suppl 3); 2022 Jun

Ethical considerations regarding animal experimentation

AYSHA KARIM KIANI

1 Allama Iqbal Open University, Islamabad, Pakistan

2 MAGI EUREGIO, Bolzano, Italy

DEREK PHEBY

3 Society and Health, Buckinghamshire New University, High Wycombe, UK

GARY HENEHAN

4 School of Food Science and Environmental Health, Technological University of Dublin, Dublin, Ireland

RICHARD BROWN

5 Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada

PAUL SIEVING

6 Department of Ophthalmology, Center for Ocular Regenerative Therapy, School of Medicine, University of California at Davis, Sacramento, CA, USA

PETER SYKORA

7 Department of Philosophy and Applied Philosophy, University of St. Cyril and Methodius, Trnava, Slovakia

ROBERT MARKS

8 Department of Biotechnology Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel

BENEDETTO FALSINI

9 Institute of Ophthalmology, Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario A. Gemelli-IRCCS, Rome, Italy

NATALE CAPODICASA

10 MAGI BALKANS, Tirana, Albania

STANISLAV MIERTUS

11 Department of Biotechnology, University of SS. Cyril and Methodius, Trnava, Slovakia

12 International Centre for Applied Research and Sustainable Technology, Bratislava, Slovakia

LORENZO LORUSSO

13 UOC Neurology and Stroke Unit, ASST Lecco, Merate, Italy

DANIELE DONDOSSOLA

14 Center for Preclincal Research and General and Liver Transplant Surgery Unit, Fondazione IRCCS Ca‘ Granda Ospedale Maggiore Policlinico, Milan, Italy

15 Department of Pathophysiology and Transplantation, Università degli Studi di Milano, Milan, Italy

GIANLUCA MARTINO TARTAGLIA

16 Department of Biomedical, Surgical and Dental Sciences, Università degli Studi di Milano, Milan, Italy

17 UOC Maxillo-Facial Surgery and Dentistry, Fondazione IRCCS Ca Granda, Ospedale Maggiore Policlinico, Milan, Italy

MAHMUT CERKEZ ERGOREN

18 Department of Medical Genetics, Faculty of Medicine, Near East University, Nicosia, Cyprus

MUNIS DUNDAR

19 Department of Medical Genetics, Erciyes University Medical Faculty, Kayseri, Turkey

SANDRO MICHELINI

20 Vascular Diagnostics and Rehabilitation Service, Marino Hospital, ASL Roma 6, Marino, Italy

DANIELE MALACARNE

21 MAGI’S LAB, Rovereto (TN), Italy

GABRIELE BONETTI

ASTRIT DAUTAJ, KEVIN DONATO, MARIA CHIARA MEDORI, TOMMASO BECCARI

22 Department of Pharmaceutical Sciences, University of Perugia, Perugia, Italy

MICHELE SAMAJA

23 MAGI GROUP, San Felice del Benaco (BS), Italy

STEPHEN THADDEUS CONNELLY

24 San Francisco Veterans Affairs Health Care System, University of California, San Francisco, CA, USA

DONALD MARTIN

25 Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, SyNaBi, Grenoble, France

ASSUNTA MORRESI

26 Department of Chemistry, Biology and Biotechnology, University of Perugia, Perugia, Italy

ARIOLA BACU

27 Department of Biotechnology, University of Tirana, Tirana, Albania

KAREN L. HERBST

28 Total Lipedema Care, Beverly Hills California and Tucson Arizona, USA

MYKHAYLO KAPUSTIN

29 Federation of the Jewish Communities of Slovakia

LIBORIO STUPPIA

30 Department of Psychological, Health and Territorial Sciences, School of Medicine and Health Sciences, University "G. d'Annunzio", Chieti, Italy

LUDOVICA LUMER

31 Department of Anatomy and Developmental Biology, University College London, London, UK

GIAMPIETRO FARRONATO

MATTEO BERTELLI

32 MAGISNAT, Peachtree Corners (GA), USA

Animal experimentation is widely used around the world for the identification of the root causes of various diseases in humans and animals and for exploring treatment options. Among the several animal species, rats, mice and purpose-bred birds comprise almost 90% of the animals that are used for research purpose. However, growing awareness of the sentience of animals and their experience of pain and suffering has led to strong opposition to animal research among many scientists and the general public. In addition, the usefulness of extrapolating animal data to humans has been questioned. This has led to Ethical Committees’ adoption of the ‘four Rs’ principles (Reduction, Refinement, Replacement and Responsibility) as a guide when making decisions regarding animal experimentation. Some of the essential considerations for humane animal experimentation are presented in this review along with the requirement for investigator training. Due to the ethical issues surrounding the use of animals in experimentation, their use is declining in those research areas where alternative in vitro or in silico methods are available. However, so far it has not been possible to dispense with experimental animals completely and further research is needed to provide a road map to robust alternatives before their use can be fully discontinued.

How to cite this article: Kiani AK, Pheby D, Henehan G, Brown R, Sieving P, Sykora P, Marks R, Falsini B, Capodicasa N, Miertus S, Lorusso L, Dondossola D, Tartaglia GM, Ergoren MC, Dundar M, Michelini S, Malacarne D, Bonetti G, Dautaj A, Donato K, Medori MC, Beccari T, Samaja M, Connelly ST, Martin D, Morresi A, Bacu A, Herbst KL, Kapustin M, Stuppia L, Lumer L, Farronato G, Bertelli M. Ethical considerations regarding animal experimentation. J Prev Med Hyg 2022;63(suppl.3):E255-E266. https://doi.org/10.15167/2421-4248/jpmh2022.63.2S3.2768

Introduction

Animal model-based research has been performed for a very long time. Ever since the 5th century B.C., reports of experiments involving animals have been documented, but an increase in the frequency of their utilization has been observed since the 19th century [ 1 ]. Most institutions for medical research around the world use non-human animals as experimental subjects [ 2 ]. Such animals might be used for research experimentations to gain a better understanding of human diseases or for exploring potential treatment options [ 2 ]. Even those animals that are evolutionarily quite distant from humans, such as Drosophila melanogaster , Zebrafish ( Danio rerio ) and Caenorhabditis elegans , share physiological and genetic similarities with human beings [ 2 ]; therefore animal experimentation can be of great help for the advancement of medical science [ 2 ].

For animal experimentation, the major assumption is that the animal research will be of benefit to humans. There are many reasons that highlight the significance of animal use in biomedical research. One of the major reasons is that animals and humans share the same biological processes. In addition, vertebrates have many anatomical similarities (all vertebrates have lungs, a heart, kidneys, liver and other organs) [ 3 ]. Therefore, these similarities make certain animals more suitable for experiments and for providing basic training to young researchers and students in different fields of biological and biomedical sciences [ 3 ]. Certain animals are susceptible to various health problems that are similar to human diseases such as diabetes, cancer and heart disease [ 4 ]. Furthermore, there are genetically modified animals that are used to obtain pathological phenotypes [ 5 ]. A significant benefit of animal experimentation is that test species can be chosen that have a much shorter life cycle than humans. Therefore, animal models can be studied throughout their life span and for several successive generations, an essential element for the understanding of disease progression along with its interaction with the whole organism throughout its lifetime [ 6 ].

Animal models often play a critical role in helping researchers who are exploring the efficacy and safety of potential medical treatments and drugs. They help to identify any dangerous or undesired side effects, such as birth defects, infertility, toxicity, liver damage or any potential carcinogenic effects [ 7 ]. Currently, U.S. Federal law, for example, requires that non-human animal research is used to demonstrate the efficacy and safety of any new treatment options before proceeding to trials on humans [ 8 ]. Of course, it is not only humans who benefit from this research and testing, since many of the drugs and treatments that are developed for humans are routinely used in veterinary clinics, which helps animals live longer and healthier lives [ 4 ].

COVID-19 AND THE NEED FOR ANIMAL MODELS

When COVID-19 struck, there was a desperate need for research on the disease, its effects on the brain and body and on the development of new treatments for patients with the disease. Early in the disease it was noticed that those with the disease suffered a loss of smell and taste, as well as neurological and psychiatric symptoms, some of which lasted long after the patients had “survived” the disease [ 9-15 ]. As soon as the pandemic started, there was a search for appropriate animal models in which to study this unknown disease [ 16 , 17 ]. While genetically modified mice and rats are the basic animal models for neurological and immunological research [ 18 , 19 ], the need to understand COVID-19 led to a range of animal models: from fruit flies [ 20 ] and Zebrafish [ 21 ] to large mammals [ 22 , 23 ] and primates [ 24 , 25 ]. And it was not just one animal model that was needed, but many, because different aspects of the disease are best studied in different animal models [ 16 , 25 , 26 ]. There is also a need to study the transmission pathways of the zoonosis: where does it come from, what are the animal hosts and how is it transferred to humans [ 27 ]?

There has been a need for animal models for understanding the pathophysiology of COVID-19 [ 28 ], for studying the mechanisms of transmission of the disease [ 16 ], for studying its neurobiology [ 29 , 30 ] and for developing new vaccines [ 31 ]. The sudden onset of the COVID-19 pandemic has highlighted the fact that animal research is necessary, and that the curtailment of such research has serious consequences for the health of both humans and animals, both wild and domestic [ 32 ]. As highlighted by Adhikary et al. [ 22 ] and Genzel et al. [ 33 ], the coronavirus has made clear the necessity of animal research, and the danger to our ability to survive future such pandemics if animal research is not fully supported. Genzel et al. [ 33 ], in particular, take issue with the proposal for a European ban on animal testing. Finally, there is a danger in bypassing animal research in developing new vaccines for diseases such as COVID-19 [ 34 ]. The purpose of this paper is to show that, while animal research is necessary for the health of both humans and animals, there is a need to carry out such experimentation in a controlled and humane manner. The use of alternatives to animal research, such as cultured human cells and computer modeling, may be a useful adjunct to animal studies, but such methods will need to become more readily accessible to researchers, and they are not a replacement for animal experimentation.

Pros and cons of animal experimentation

Arguments against animal experimentation.

A fundamental question surrounding this debate is to ask whether it is appropriate to use animals for medical research. Is our acceptance that animals have a morally lower value or standard of life just a case of speciesism [ 35 ]? Nowadays, most people agree that animals have a moral status and that needlessly hurting or abusing pets or other animals is unacceptable. This represents something of a change from the historical point of view where animals did not have any moral status and the treatment of animals was mostly subservient to maintaining the health and dignity of humans [ 36 ].

Animal rights advocates strongly argue that the moral status of non-human animals is similar to that of humans, and that animals are entitled to equality of treatment. In this view, animals should be treated with the same level of respect as humans, and no one should have the right to force them into any service or to kill them or use them for their own goals. One aspect of this argument claims that moral status depends upon the capacity to suffer or enjoy life [ 37 ].

In terms of suffering and the capacity of enjoying life, many animals are not very different from human beings, as they can feel pain and experience pleasure [ 38 ]. Hence, they should be given the same moral status as humans and deserve equivalent treatment. Supporters of this argument point out that according animals a lower moral status than humans is a type of prejudice known as “speciesism” [ 38 ]. Among humans, it is widely accepted that being a part of a specific race or of a specific gender does not provide the right to ascribe a lower moral status to the outsiders. Many advocates of animal rights deploy the same argument, that being human does not give us sufficient grounds to declare animals as being morally less significant [ 36 ].

ARGUMENTS IN FAVOR OF ANIMAL EXPERIMENTATION

Those who support animal experimentation have frequently made the argument that animals cannot be elevated to be seen as morally equal to humans [ 39 ]. Their main argument is that the use of the terms “moral status” or “morality” is debatable. They emphasize that we must not make the error of defining a quality or capacity associated with an animal by using the same adjectives used for humans [ 39 ]. Since, for the most part, animals do not possess humans’ cognitive capabilities and lack full autonomy (animals do not appear to rationally pursue specific goals in life), it is argued that they cannot be included in the moral community [ 39 ]. It follows from this line of argument that, if animals do not possess the same rights as human beings, their use in research experimentation can be considered appropriate [ 40 ]. European and American legislation support this kind of approach, provided that animal welfare is respected.

Another aspect of this argument is that the benefits to human beings of animal experimentation compensate for the harm caused to animals by these experiments.

In other words, animal harm is morally insignificant compared to the potential benefits to humans. Essentially, supporters of animal experimentation claim that human beings have a higher moral status than animals and that animals lack certain fundamental rights accorded to humans. The potential violations of animal rights during animal research are, in this way, justified by the greater benefits to mankind [ 40 , 41 ]. A way to evaluate when experiments are morally justified was published in 1986 by Bateson, who developed Bateson’s Cube [ 42 ]. The Cube has three axes: suffering, certainty of benefit and quality of research. If the research is high-quality, beneficial, and does not inflict suffering, it will be acceptable. By contrast, painful, low-quality research with a lower likelihood of success will not be acceptable [ 42 , 43 ].

Impact of experimentations on animals

Ability to feel pain and distress.

Like humans, animals have certain physical as well as psychological characteristics that make their use in experimentation controversial [ 44 ].

In the last few decades, many studies have increased knowledge of animal awareness and sentience: they indicate that animals have greater potential to experience damage than previously appreciated and that current rights and protections need to be reconsidered [ 45 ]. In recent times, scientists as well as ethicists have broadly acknowledged that animals can also experience distress and pain [ 46 ]. Potential sources of such harm arising from their use in research include disease, basic physiological needs deprivation and invasive procedures [ 46 ]. Moreover, social deprivation and lack of the ability to carry out their natural behaviors are other causes of animal harm [ 46 ]. Several studies have shown that, even in response to very gentle handling and management, animals can show marked alterations in their physiological and hormonal stress markers [ 47 ].

Although suffering and pain are subjective experiences, several multi-disciplinary studies have provided clear evidence of animals experiencing pain and distress. In particular, some animal species can express pain in ways similar to humans owing to shared psychological, neuroanatomical and genetic characteristics [ 48 ]. Similarly, animals resemble humans in their developmental, genetic and environmental risk factors for psychopathology. For instance, in many species, it has been shown that fear operates within a less organized subcortical neural circuit than pain [ 49 , 50 ]. Various types of depression and anxiety disorders, such as posttraumatic stress disorder, have also been reported in mammals [ 51 ].

PSYCHOLOGICAL CAPABILITIES OF ANIMALS

Some researchers have suggested that, besides their ability to experience physical and psychological pain and distress, some animals also exhibit empathy, self-awareness and language-like capabilities, as well as tool-related cognition, pleasure-seeking and advanced problem-solving skills [ 52 ]. Moreover, mammals and birds exhibit playful behavior, an indicator of the capacity to experience pleasure. Other taxa such as reptiles, cephalopods and fishes have also been observed to display playful behavior, which is why current legislation prescribes the use of environmental enrichment [ 53 ]. The capacity for self-awareness, as assessed by mirror self-recognition, has been reported in magpies, chimpanzees and other apes, and certain cetaceans [ 54 ]. Recently, another study revealed that crows can create and use tools in ways that involve episodic-like memory formation and retrieval. These findings suggest that crows and related species show evidence of flexible learning strategies, causal reasoning, prospection and imagination similar to behavior observed in great apes [ 55 ]. Such observations highlight the challenges involved in resolving the ethical dilemmas around animal experimentation [ 56 , 57 ].

Ethics, principles and legislation in animal experimentation

Ethics in animal experimentation.

Legislation around animal research is based on the idea of the moral acceptability of the proposed experiments under specific conditions [ 58 ]. Research ethics ensures the proper treatment of experimental animals [ 58 ]. To avoid undue suffering of animals, it is important to follow ethical considerations during animal studies [ 1 ]. Providing the best possible humane care to these animals is important from both an ethical and a scientific point of view [ 1 ]. Poor animal care can compromise experimental outcomes [ 1 ]. Thus, if experimental animals are mistreated, the scientific conclusions drawn from the experiments may be unsound and difficult to replicate, and replicability is a hallmark of scientific research [ 1 ]. At present, most ethical guidelines work on the assumption that animal experimentation is justified because of the significant potential benefits to human beings. These guidelines are often permissive of animal experimentation regardless of the damage to the animal as long as human benefits are achieved [ 59 ].

PRINCIPLE OF THE 4 RS

Although animal experimentation has resulted in many discoveries and helped in understanding numerous aspects of biological science, its use in various sectors is strictly controlled. In practice, a proposed set of animal experiments is usually considered by a multidisciplinary ethics committee before work can commence [ 60 ]. This committee reviews the research protocol and judges its suitability. National and international laws govern the use of animal experimentation in research, and these laws are mostly based on the universal doctrine presented by Russell and Burch (1959) known as the principle of the 3 Rs. The 3 Rs are Reduction, Refinement and Replacement, and they are applied to protocols surrounding the use of animals in research. Some researchers have proposed a fourth “R”, Responsibility, covering responsibility for the experimental animal as well as for the social and scientific status of animal experiments [ 61 ]. Thus, animal ethics committees commonly review research projects with reference to the 4 Rs principles [ 62 ].

The first “R”, Reduction means that the experimental design is examined to ensure that researchers have reduced the number of experimental animals in a research project to the minimum required for reliable data [ 59 ]. Methods used for this purpose include improved experimental design, extensive literature search to avoid duplication of experiments [ 35 ], use of advanced imaging techniques, sharing resources and data, and appropriate statistical data analysis that reduce the number of animals needed for statistically significant results [ 2 , 63 ].
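One concrete way the Reduction principle is applied is through a prospective power analysis that fixes the smallest group size able to detect a given effect. The sketch below uses the standard normal approximation for a two-group comparison of means, n = 2(z_{1-α/2} + z_{1-β})² / d², where d is the standardized effect size. It relies only on the Python standard library and is an illustrative calculation, not a method prescribed by the guidelines cited here:

```python
from math import ceil
from statistics import NormalDist

def min_group_size(effect_size: float, alpha: float = 0.05,
                   power: float = 0.80) -> int:
    """Smallest number of animals per group needed to detect a
    standardized effect size d in a two-sample comparison of means,
    using the normal approximation n = 2 * (z_alpha + z_beta)^2 / d^2."""
    if effect_size <= 0:
        raise ValueError("effect size must be positive")
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # quantile corresponding to desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A large effect (d = 1.0) at alpha = 0.05 and 80% power needs 16 per group;
# halving the effect size roughly quadruples the requirement.
print(min_group_size(1.0))   # 16
print(min_group_size(0.5))   # 63
```

Planning group sizes this way at the design stage, rather than adding animals until significance is reached, is exactly the kind of scrutiny the Reduction principle calls for.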

The second “R”, Refinement involves improvements in procedure that minimize the harmful effects of the proposed experiments on the animals involved, such as reducing pain, distress and suffering in a manner that leads to a general improvement in animal welfare. This might include for example improved living conditions for research animals, proper training of people handling animals, application of anesthesia and analgesia when required and the need for euthanasia of the animals at the end of the experiment to curtail their suffering [ 63 ].

The third “R”, Replacement refers to approaches that replace or avoid the use of experimental animals altogether. These approaches involve the use of in silico methods (computerized techniques and software) and in vitro methods like cell and tissue culture testing, as well as relative replacement methods that use invertebrates like nematode worms and fruit flies, or microorganisms, in place of vertebrates and higher animals [ 1 ]. Examples of the proper application of these first three “R” principles are the use of alternative sources of blood, the exploitation of commercially used animals for scientific research, proper training without the use of animals and the use of specimens from previous experiments for further research [ 64-67 ].

The fourth “R”, Responsibility refers to concerns around promoting animal welfare by improvements in experimental animals’ social life, development of advanced scientific methods for objectively determining sentience, consciousness, experience of pain and intelligence in the animal kingdom, as well as effective involvement in the professionalization of the public discussion on animal ethics [ 68 ].

OTHER ASPECTS OF ANIMAL RESEARCH ETHICS

Other research ethics considerations include having a clear rationale and reasoning for the use of animals in a research project. Researchers must have reasonable expectation of generating useful data from the proposed experiment. Moreover, the research study should be designed in such a way that it should involve the lowest possible sample size of experimental animals while producing statistically significant results [ 35 ].

All individual researchers who handle experimental animals should be properly trained in handling the particular species involved in the research study. The animals’ pain, suffering and discomfort should be minimized [ 69 ]. Animals should be given proper anesthesia when required, and surgical procedures should not be repeated on the same animal whenever possible [ 69 ]. The procedures for humane handling and care of experimental animals should be explicitly detailed in the research study protocol. Moreover, whenever required, aseptic techniques should be properly followed [ 70 ]. During the research, anesthetization and surgical procedures on experimental animals should only be performed by professionally skilled individuals [ 69 ].

The Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, issued by the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), are designed to improve the documentation surrounding research involving experimental animals [ 70 ]. The checklist provided covers the information required in the various sections of a manuscript, i.e. study design, ethical statements, experimental procedures, experimental animals and their housing and husbandry, and more [ 70 ].

It is critical to follow the highest ethical standards when performing animal experiments. Indeed, most journals refuse to publish research data that lack proper ethical considerations [ 35 ].

INVESTIGATORS’ ETHICS

Since animals have sensitivity levels similar to those of human beings in terms of pain, anguish, survival instinct and memory, it is the responsibility of the investigator to closely monitor the animals used and to identify any sign of distress [ 71 ]. No justification can rationalize the absence of anesthesia or analgesia in animals that undergo invasive surgery during research [ 72 ]. Investigators are also responsible for giving high-quality care to the experimental animals, including the supply of a nutritious diet, easy access to water, prevention of and relief from pain, disease and injury, and housing facilities appropriate for the animal species [ 73 ]. A research experiment is not permitted if the harm caused to the animal exceeds the value of the knowledge gained by that experiment: no scientific advancement based on the destruction and suffering of another living being can be justified. Besides ensuring the welfare of the animals involved, investigators must also follow the applicable legislation [ 74 , 75 ].

To promote the welfare of experimental animals in England, an animal protection society, the Society for the Prevention of Cruelty to Animals (now the Royal Society for the Prevention of Cruelty to Animals), was established in 1824 with the aim of preventing cruelty to animals [ 76 ].

ANIMAL WELFARE LAWS

Legislation for the protection of animals in research has long been established. In 1876, the British Parliament passed the Cruelty to Animals Act for animal protection. Russell and Burch (1959) presented the ‘3 Rs’ principles of Replacement, Reduction and Refinement for the use of animals in research [ 61 ]. Almost seven years later, the U.S.A. also adopted regulations for the protection of experimental animals by enacting the Laboratory Animal Welfare Act of 1966 [ 60 ]. In Brazil, the Arouca Law (Law No. 11,794/08) regulates the use of animals in scientific research experiments [ 76 ].

These laws define the breeding conditions, and regulate the use of animals for scientific research and teaching purposes. Such legal provisions control the use of anesthesia, analgesia or sedation in experiments that could cause distress or pain to experimental animals [ 59 , 76 ]. These laws also stress the need for euthanasia when an experiment is finished, or even during the experiment if there is any intense suffering for the experimental animal [ 76 ].

Several national and international organizations have been established to develop alternative techniques so that animal experimentation can be avoided, such as the UK-based National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) ( www.nc3rs.org.uk ), the European Centre for the Validation of Alternative Methods (ECVAM) [ 77 ], the Universities Federation for Animal Welfare (UFAW) ( www.ufaw.org.uk ), the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) [ 78 ], and the Center for Alternatives to Animal Testing (CAAT) ( www.caat.jhsph.edu ). The Brazilian ‘Arouca Law’ also constitutes a milestone, as it created the ‘National Council for the Control of Animal Experimentation’ (CONCEA), which deals with the legal and ethical issues related to the use of experimental animals in scientific research [ 76 ].

Although national and international laws and guidelines provide basic protections for experimental animals, the current regulations have some significant discrepancies. In the U.S., the Animal Welfare Act excludes rats, mice and purpose-bred birds, even though these species comprise almost 90% of the animals used for research purposes [ 79 ]. On the other hand, cats and dogs receive special attention along with extra protection. While the U.S. Animal Welfare Act ignores birds, mice and rats, the U.S. guidelines that govern federally funded research ensure protections for all vertebrates [ 79 , 80 ].

Living conditions of animals

Choice of the animal model.

Based on all the above laws and regulations and in line with the deliberations of ethical committees, every researcher must follow certain rules when dealing with animal models.

Before starting any experimental work, thorough research should be carried out during the study design phase so that the unnecessary use of experimental animals is avoided. Nevertheless, certain research studies may have compelling reasons for the use of animal models, such as the investigation of human diseases and toxicity tests. Moreover, animals are also widely used in the training of health professionals as well as in training doctors in surgical skills [ 1 , 81 ].

Researchers should be well aware of the specific traits of the animal species they intend to use in an experiment, such as its developmental stages, physiology, nutritional needs, reproductive characteristics and specific behaviors. Animal models should be selected on the basis of the study design and the biological relevance of the animal [ 1 ].

Typically, in early research, non-mammalian models are used to gain rapid insights into research problems such as the identification of gene function or the recognition of novel therapeutic options. Thus, among the most commonly used model organisms in biomedical and biological research are the Zebrafish, the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans . The main advantage of these non-mammalian animal models is their prolific reproduction along with their much shorter generation time. They can be easily grown in any laboratory setting, are less expensive than murine animal models and are somewhat more powerful than tissue and cell culture approaches [ 82 ].

Caenorhabditis elegans is a small nematode with a short life cycle that exists in large populations and is relatively inexpensive to cultivate. Scientists have gathered extensive knowledge of the genomics and genetics of Caenorhabditis elegans , but Caenorhabditis elegans models, while very useful in some respects, cannot represent all the signaling pathways found in humans. Furthermore, due to its short life cycle, scientists are unable to investigate the long-term effects of test compounds or to analyze primary versus secondary effects [ 6 ].

Similarly, the fruit fly Drosophila melanogaster has played a key role in numerous biomedical discoveries. It is small in size, has a short life cycle and large population size, is relatively inexpensive to breed, and extensive genomics and genetics information is available [ 6 ]. However, its respiratory, cardiovascular and nervous systems differ considerably from human beings. In addition, its immune system is less developed when compared to vertebrates, which is why effectiveness of a drug in Drosophila melanogaster may not be easily extrapolated to humans [ 83 ].

The Zebrafish ( Danio rerio ) is a small freshwater teleost with transparent embryos, providing easy access for the observation and manipulation of organogenesis. Zebrafish embryos are therefore considered good animal models for different human diseases like tuberculosis and fetal alcohol syndrome, and are useful as neurodevelopmental research models. However, the Zebrafish has very few mutant strains available, and its genome has numerous duplicate genes, making it impossible to create knockout strains, since disrupting one copy of a gene will not disrupt the second copy. This feature limits the use of Zebrafish as animal models to study human diseases. Additionally, they are rather expensive, have a long life cycle, and genomics and genetics studies are still in progress [ 82 , 84 ].

Thus, experimentation on these three animals might not be equivalent to experimentation on mammals. Mammalian animal models are the most similar to human beings, and targeted gene replacement is possible in them. Traditionally, mammals like monkeys and mice have been the preferred animal models for biomedical research because of their evolutionary closeness to humans. Rodents, particularly mice and rats, are the most frequently used animal models for scientific research. Rats are the most suitable animal model for the study of obesity, shock, peritonitis, sepsis, cancer, intestinal operations, the spleen, gastric ulcers, the mononuclear phagocytic system, organ transplantation and wound healing. Mice are more suitable for studying burns, megacolon, shock, cancer, obesity and sepsis, as mentioned previously [ 85 ].

Similarly, pigs are mostly used for stomach, liver and transplantation studies, while rabbits are suitable for the study of immunology, inflammation, vascular biology, shock, colitis and transplantations. Thus, the choice of experimental animal mainly depends upon the field of scientific research under consideration [ 1 ].

HOUSING AND ENVIRONMENTAL ENRICHMENT

Researchers should be aware of the environment and conditions in which laboratory animals are kept during research, and they also need to be familiar with the metabolism of the animals kept in vivarium, since their metabolism can easily be altered by different factors such as pain, stress, confinement, lack of sunlight, etc. Housing conditions alter animal behavior, and this can in turn affect experimental results. By contrast, handling procedures that feature environmental enrichment and enhancement help to decrease stress and positively affect the welfare of the animals and the reliability of research data [ 74 , 75 ].

Factors causing distress and agony in animals should be controlled or eliminated to avoid interference with data collection and with the interpretation of results, since impaired animal welfare leads to more animals being used per experiment, decreased reliability and increased discrepancies in results, along with the unnecessary consumption of animal lives [ 86 ].

To reduce the variation or discrepancies in experimental data caused by various environmental factors, experimental animals must be kept in an appropriate and safe place. In addition, it is necessary to keep all variables like humidity, airflow and temperature at levels suitable for those species, as any abrupt variation in these factors could cause stress, reduced resistance and increased susceptibility to infections [ 74 ].

The space allotted to experimental animals should permit them free movement, proper sleep and where feasible allow for interaction with other animals of the same species. Mice and rats are quite sociable animals and must, therefore, be housed in groups for the expression of their normal behavior. Usually, laboratory cages are not appropriate for the behavioral needs of the animals. Therefore, environmental enrichment is an important feature for the expression of their natural behavior that will subsequently affect their defense mechanisms and physiology [ 87 ].

The features of environmental enrichment must satisfy the animals’ sense of curiosity, offer them fun activities, and also permit them to fulfill their behavioral and physiological needs. These needs include exploring, hiding, building nests and gnawing. For this purpose, different things can be used in their environment, such as PVC tubes, cardboard, igloos, paper towel, cotton, disposable masks and paper strips [ 87 ].

The environment used for housing animals must be continuously controlled through appropriate disinfection, hygiene protocols, sterilization and sanitation processes. These steps reduce the occurrence of the various infectious agents often found in vivariums, such as Sendai virus, cestodes and Mycoplasma pulmonis [ 88 ].

EUTHANASIA

Euthanasia is a term derived from Greek that means death without suffering. According to the Brazilian Arouca Law (Article 14, Chapter IV, Paragraphs 1 and 2), an animal should undergo euthanasia, in strict compliance with the requirements of each species, when the experiment ends or during any phase of the experiment in which this procedure is recommended and/or whenever serious suffering occurs. If the animal does not undergo euthanasia after the intervention, it may leave the vivarium and be assigned to suitable people or to duly legalized animal protection bodies [ 1 ].

Euthanasia procedures must result in the instant loss of consciousness, leading to respiratory or cardiac arrest and complete impairment of brain function. Another important aspect of this procedure is the calm handling of the animal while taking it out of its enclosure, to reduce its distress, suffering, anxiety and fear. In every research project, the study design should detail the appropriate endpoints for the experimental animals and the methods that will be adopted. It is important to determine the appropriate method of euthanasia for the animal being used. Finally, after completing the euthanasia procedure, the animal’s death must be absolutely confirmed before the body is discarded [ 87 , 89 ].

Relevance of animal experimentations and possible alternatives

Relevance of animal experiments and their adverse effects on human health.

One important concern is whether human diseases, when inflicted on experimental animals, adequately mimic the progression of the disease and the treatment responses observed in humans. Several research articles have compared human and animal data and indicated that the results of animal research cannot always be reliably replicated in clinical research among humans. The latest systematic reviews on the treatment of different clinical conditions, including neurological and vascular diseases among others, have established that the results of animal studies cannot properly predict human outcomes [ 59 , 90 ].

At present, the reliability of animal experiments for extrapolation to human health is questionable. Harmful effects may occur in humans because of misleading results from research conducted on animals. For instance, during the late 1950s, the sedative drug thalidomide was prescribed to pregnant women, but some of the women using the drug gave birth to babies lacking limbs or with foreshortened limbs, a condition called phocomelia. Although thalidomide had been tested on almost all common animal models (rats, mice, rabbits, dogs, cats, hamsters, armadillos, ferrets, swine, guinea pigs, etc.), this teratogenic effect was observed only occasionally [ 91 ]. Similarly, in 2006, the compound TGN 1412 was designed as an immunomodulatory drug, but when it was injected into six human volunteers, serious adverse reactions were observed, resulting from a deadly cytokine storm that in turn led to disastrous systemic organ failure. TGN 1412 had been tested successfully in rats, mice, rabbits and non-human primates [ 92 ]. Moreover, Bailey (2008) reported 90 HIV vaccines that had successful trial results in animals but failed in human beings [ 93 ]. In Parkinson disease, many therapeutic options that showed promising results in rat and non-human primate models have proved harmful in humans. Hence, to analyze the relevance of animal research to human health, the efficacy of animal experimentation should be examined systematically [ 94 , 95 ]. At the same time, the development of hyperoxaluria and renal failure (up to dialysis) after ileal-jejunal bypass was unexpected, because this procedure had not been preliminarily evaluated in an animal model [ 96 ].

Several factors play a role in the extrapolation of animal-derived data to humans, such as environmental conditions and physiological parameters related to stress, age of the experimental animals, etc. These factors could switch on or off genes in the animal models that are specific to species and/or strains. All these observations challenge the reliability and suitability of animal experimentation as well as its objectives with respect to human health [ 76 , 92 ].

ALTERNATIVE TO ANIMAL EXPERIMENTATION/DEVELOPMENT OF NEW PRODUCTS AND TECHNIQUES TO AVOID ANIMAL SACRIFICE IN RESEARCH

Certainly, in vivo animal experimentation has contributed significantly to the development of biological and biomedical research. However, it is constrained by strict ethical issues and high production costs. Some scientists consider animal testing an ineffective and immoral practice and therefore prefer alternative techniques to animal experimentation. These alternative methods involve in vitro experiments and ex vivo models like cell and tissue cultures, the use of plants and vegetables, non-invasive human clinical studies, the use of cadavers for studies, the use of microorganisms or other simpler organisms like shrimps and water flea larvae, physicochemical techniques, educational software, computer simulations, mathematical models and nanotechnology [ 97 ]. These methods and techniques are cost-effective and could efficiently replace animal models. They could therefore contribute to animal welfare and to the development of new therapies that identify therapeutics and their related complications at an early stage [ 1 ].

The U.S. National Research Council suggested a shift from animal models toward computational models, as well as high-content and high-throughput in vitro methods. Its reports highlighted that these alternative methods could produce predictive data more affordably, accurately and quickly than traditional in vivo experimental animal methods [ 98 ].

Increasingly, scientists and review boards have to assess whether a research question could be addressed using the applied techniques of advanced genetics, molecular and cell biology, computational methods and biochemistry in place of animal experiments [ 59 ]. It must be remembered that each alternative method must first be validated and then registered in dedicated databases.

An additional relevant concern is how precisely animal data can mirror relevant epigenetic changes and human genetic variability. Langley and colleagues have highlighted examples of existing and emerging non-animal research methods in the advanced fields of neurology, orthodontics, infectious diseases, immunology, endocrinology, pulmonology, obstetrics, metabolism and cardiology [ 99 ].

IN SILICO SIMULATIONS AND INFORMATICS

Several computer models have been built to study cardiovascular risk and atherosclerotic plaque build-up, to model human metabolism, to evaluate drug toxicity and to address other questions that were previously approached by testing in animals [ 100 ].

Computer simulations can potentially decrease the number of experiments required for a research project; however, simulations cannot completely replace laboratory experiments. Unfortunately, not all the principles regulating biological systems are known, and computer simulations provide only an estimation of possible effects, due to the limitations of computer models in comparison with complex human tissues. Nevertheless, simulation and bioinformatics are now considered essential in all fields of science for their efficiency in using existing knowledge to inform further experimental designs [ 76 ].

At present, biological macromolecules are regularly simulated at various levels of detail to predict their response and behavior under certain physical conditions, chemical exposures and stimulations. Computational and bioinformatic simulations have significantly reduced the number of animals sacrificed during drug discovery by shortlisting potential candidate molecules for a drug. Likewise, computer simulations have decreased the number of animal experiments required in other areas of biological science by efficiently using existing knowledge. Moreover, the development of high-definition 3D anatomical computer models with enhanced levels of detail may make it possible to reduce or eliminate the need for animal dissection during teaching [ 101 , 102 ].

3D CELL-CULTURE MODELS AND ORGANS-ON-CHIPS

In the current scenario of rapid advancement in the life sciences, certain tissue models can be built using 3D cell culture technology, and organ-on-a-chip micro-scale models are used to mimic the human body environment. 3D models of multiple organ systems such as heart, liver, skin, muscle, testis, brain, gut, bone marrow, lungs and kidney, in addition to individual organs, have been created in microfluidic channels, re-creating the physiological chemical and physical microenvironments of the body [ 103 ]. Emerging techniques such as the biomedical/biological microelectromechanical system (Bio-MEMS) or lab-on-a-chip (LOC) and micro total analysis systems (µTAS) will, in the future, be a useful substitute for animal experimentation in commercial laboratories in the biotechnology, environmental safety, chemistry and pharmaceutical industries. For 3D cell culture modeling, cells are grown as 3D spheroids or aggregates with the help of a scaffold or matrix, or sometimes using a scaffold-free method. The 3D cell culture conditions can be altered to add proteins and other factors found, for example, in a tumor microenvironment or in particular tissues. These matrices contain extracellular matrix components such as proteins, glycoconjugates and glycosaminoglycans that allow for cell communication, cell-to-cell contact and the activation of signaling pathways in such a way that the morphological and functional differentiation of these cells can accurately mimic their environment in vivo . This methodology will, in time, bridge the gap between in vivo and in vitro drug screening, decreasing the utilization of animal models in research [ 104 ].

ALTERNATIVES TO MICROBIAL CULTURE MEDIA AND SERUM-FREE ANIMAL CELL CULTURES

There are moves to reduce the use of animal derived products in many areas of biotechnology. Microbial culture media peptones are mostly made by the proteolysis of farmed animal meat. However, nowadays, various suppliers provide peptones extracted from yeast and plants. Although the costs of these plant-extracted peptones are the same as those of animal peptones, plant peptones are more environmentally favorable since less plant material and water are required for them to grow, compared with the food grain and fodder needed for cattle that are slaughtered for animal peptone production [ 105 ].

Human cell culture is often carried out in a medium that contains fetal calf serum, the production of which involves animal (cow) sacrifice or suffering. In fact, living pregnant cows are used and their fetuses removed to harvest the serum from the fetal blood. Fetal calf serum is used because it is a natural medium rich in all the required nutrients and significantly increases the chances of successful cell growth in culture. Scientists are striving to identify the factors and nutrients required for the growth of various types of cells, with a view to eliminating the use of calf serum. At present, most cell lines can be cultured in a chemically synthesized medium without using animal products. Furthermore, data from experiments using chemically synthesized media may be more reproducible than those using animal serum media, since the composition of animal serum varies from batch to batch with the animals’ sex, age, health and genetic background [ 76 ].

ALTERNATIVES TO ANIMAL-DERIVED ANTIBODIES

Animal-friendly affinity reagents may act as an alternative to animal-derived antibodies, removing the need for animal immunization. Typically, these reagents are obtained in vitro by yeast, phage or ribosome display. In a recent review, a comparative analysis between animal-friendly affinity reagents and animal-derived antibodies showed that the affinity reagents have superior quality, take less time to produce, and are more reproducible, more reliable and more cost-effective [ 106 , 107 ].

Conclusions

Animal experimentation has led to great advances in the biological and biomedical sciences and contributed to the discovery of many drugs and treatment options. However, such experimentation may cause harm, pain and distress to the animals involved. Therefore, certain ethical rules and laws must be strictly followed, and there must be proper justification for using animals in research projects. Furthermore, researchers must follow the 4R principles of reduction, refinement, replacement and responsibility during animal experimentation. Moreover, before beginning a research project, experiments should be thoroughly planned and well designed, avoiding unnecessary use of animals. The reliability and reproducibility of animal experiments should also be considered. Whenever possible, alternative methods to animal experimentation should be adopted, such as in vitro experimentation, cadaveric studies, and computer simulations.

While much progress has been made in reducing animal experimentation, there is a need for greater awareness of alternatives to animal experiments among scientists and easier access to advanced modeling technologies. Further research is needed to define a roadmap that leads to the elimination of all unnecessary animal experimentation and provides a framework for the adoption of reliable alternative methodologies in biomedical research.

Acknowledgements

This research was funded by the Provincia Autonoma di Bolzano in the framework of LP 15/2020 (dgp 3174/2021).

Conflicts of interest statement

Authors declare no conflict of interest.

Author's contributions

MB: study conception, editing and critical revision of the manuscript; AKK, DP, GH, RB, Paul S, Peter S, RM, BF, NC, SM, LL, DD, GMT, MCE, MD, SM, Daniele M, GB, AD, KD, MCM, TB, MS, STC, Donald M, AM, AB, KLH, MK, LS, LL, GF: literature search, editing and critical revision of the manuscript. All authors have read and approved the final manuscript.

Contributor Information

INTERNATIONAL BIOETHICS STUDY GROUP: Derek Pheby, Gary Henehan, Richard Brown, Paul Sieving, Peter Sykora, Robert Marks, Benedetto Falsini, Natale Capodicasa, Stanislav Miertus, Lorenzo Lorusso, Gianluca Martino Tartaglia, Mahmut Cerkez Ergoren, Munis Dundar, Sandro Michelini, Daniele Malacarne, Tommaso Beccari, Michele Samaja, Matteo Bertelli, Donald Martin, Assunta Morresi, Ariola Bacu, Karen L. Herbst, Mykhaylo Kapustin, Liborio Stuppia, Ludovica Lumer, and Giampietro Farronato

  • Open access
  • Published: 02 September 2024

Saving lives with statistics

  • Jo Røislien 1,2

Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine, volume 32, Article number: 79 (2024)

Healthcare is awash with numbers, and figuring out what knowledge these numbers might hold is worthwhile in order to improve patient care. Numbers allow for objective mathematical analysis of the information at hand, but while mathematics is objective by design, our choice of mathematical approach in a given situation is not. In prehospital and critical care, numbers stem from a wide range of sources and situations, be it experimental setups, observational data or data registries, and what constitutes a “good” statistical analysis can be unclear. A well-crafted statistical analysis can help us see things our eyes cannot, and find patterns where our brains fall short, ultimately contributing to changing clinical practice and improving patient outcomes. With increasingly advanced research questions and research designs, traditional statistical approaches are often inadequate, and being able to properly merge statistical competence with clinical know-how is essential in order to arrive at not only correct, but also valuable and usable research results. By marrying clinical know-how with rigorous statistical analysis we can accelerate the field of prehospital and critical care.

Statistics deals with numbers, not people, and is more concerned with group averages than individual patients. Yet, the use of statistics in medicine has been hailed as one of the most important medical developments of the last 1000 years [ 1 ].

Healthcare is full of numbers, and figuring out what knowledge these numbers hold is worthwhile in order to improve patient care. However, the human brain and our sensory apparatus are not particularly well suited to dealing with abstract numbers – particularly percentages, probabilities and other ratio concepts [ 2 , 3 ] – and we need tools to help make sense of it all.

The invention of the microscope was a revolution: suddenly we could see things that had previously been invisible to us. With the recent increase of numerical data in society, we need new tools to see what our eyes cannot. “Mathematics is biology’s next microscope – only better,” Cohen wrote in 2004 [ 4 ].

In a world awash with numbers, mathematical competence is key. By marrying clinical know-how with rigorous statistical analysis we can accelerate the field of prehospital and critical care.

Numbers allow for objective mathematical analysis of the information at hand. However, while mathematics is objective by design – two plus two equals four regardless of where, when or by whom the calculation is performed – our choice of mathematical approach in a given situation is not. Applying the mathematics of straight lines is of limited value if your problem is one of circles and curves.

In prehospital and critical care, numbers stem from a wide range of sources and situations, and what constitutes a “good” statistical analysis can be unclear. A combination of mathematical and clinical know-how is needed.

Experiments

Experiments are part of the scientific bedrock. Randomized controlled trials can assess a causal association between two factors, as the rest of the world is zeroed out by design, and the accompanying numbers can often be analyzed using simple statistical tests. However, with increasingly complicated research questions and designs, even the analysis of experimental setups is not necessarily straightforward.
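As a minimal sketch of this idea, the analysis of a simple two-arm randomized trial can come down to a single standard test. The data below are simulated for illustration, not taken from any study discussed here:

```python
# Illustrative sketch: analyzing a simple two-arm randomized trial.
# Data are simulated; in a real trial these would be measured outcomes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated outcome (e.g., some physiological measurement) in two arms
control = rng.normal(loc=10.0, scale=2.0, size=50)
treated = rng.normal(loc=12.0, scale=2.0, size=50)  # simulated effect of +2.0

# Randomization "zeroes out the rest of the world" by design, so a
# simple independent-samples t-test suffices to assess the effect.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With a less traditional design, as in the cadaver experiments below, such off-the-shelf tests may no longer match the structure of the data.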

In a project comparing different methods for transporting trauma patients, Hyldmo et al. experimented with cadavers, meticulously measuring neck rotation and movement [ 5 , 6 ]. For ethical reasons the experiment called for a non-traditional setup, and analytical results using traditional statistical tests were inconclusive. But when the specific structure of the experiment was taken into account in statistical models, associations that had been hidden came to light, and the project could give concrete advice on the transportation of trauma patients.

Observational data

When applying statistical methods to analyze our data, we simultaneously impose strict assumptions on the numbers at hand – be it the assumption of symmetrically distributed data, linear associations, or others. If these assumptions are not sufficiently correct, we prevent the numbers from freely communicating the information they actually hold.

The protein fibrinogen is vital to the body’s built-in blood-stopping mechanism, and clinical guidelines state that when fibrinogen levels drop below a certain threshold mortality increases, and one should act upon it [ 7 ]. However, when looking at observational data on fibrinogen levels and mortality, the numbers seem to tell a slightly different story [ 8 ]. Standard regression models assume linearity in the data. While the association between two variables is provably linear on a small enough scale [ 9 ], and linear regression is thus often a suitable – and common – statistical analysis, this does not necessarily hold true on larger scales. By applying a more flexible statistical approach with fewer assumptions – Generalized Additive Models (GAM) rather than Generalized Linear Models (GLM) – to the fibrinogen data, the critical value for what should be considered too low a fibrinogen level is found to be substantially higher than indicated in the guidelines [ 8 ]. Applying more advanced statistical methodology directly impacts the analytical results, and the accompanying clinical conclusions.
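The difference a linearity assumption makes can be sketched with simulated data. The values below are hypothetical, not the fibrinogen data themselves, and the smoothing spline merely stands in for the flexible spline terms of a GAM:

```python
# Illustrative sketch (simulated data): a linear model can misrepresent a
# nonlinear dose-response relationship that a flexible fit captures.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Hypothetical predictor (e.g., a biomarker level) and a nonlinear response:
# risk is high below a threshold, then flattens out.
x = np.sort(rng.uniform(0.5, 4.5, size=200))
true_risk = 1.0 / (1.0 + np.exp(4.0 * (x - 2.0)))  # sigmoid-shaped decline
y = true_risk + rng.normal(scale=0.05, size=x.size)

# Linear fit: imposes a straight-line assumption on the data
slope, intercept = np.polyfit(x, y, deg=1)
linear_pred = slope * x + intercept

# Flexible fit: a smoothing spline, analogous in spirit to a GAM term
spline = UnivariateSpline(x, y, s=0.5)
spline_pred = spline(x)

sse_linear = np.sum((y - linear_pred) ** 2)
sse_spline = np.sum((y - spline_pred) ** 2)
print(f"SSE linear: {sse_linear:.3f}, SSE spline: {sse_spline:.3f}")
```

The straight line averages away exactly the threshold behavior a clinician would want to locate; the flexible fit leaves it visible.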

Data registries

The growing use of data registries provides large amounts of healthcare data without researchers having to set up experiments or collect observational data from scratch. However, data registries are not designed to answer specific research questions, and results must be evaluated accordingly.

The trauma registry at Oslo University Hospital in Norway holds thousands of individual events. Plotting them on a timeline reveals a seasonal pattern [ 10 ], with more trauma admissions in summer than in winter. However, there’s not much we can do about seasonal changes. Seasonality just is. But with changing seasons comes changing weather, and weather matters. Replacing the generic phenomenon “seasons” with daily factors like “hours of sunlight” and “amount of rain” [ 11 ] results in a statistical model that is not only significantly better [ 10 ], but also allows for action. Rather than planning work schedules at ERs weeks in advance, the statistical model implies that it would be more cost-efficient to ask meteorologists for estimates of sunlight and precipitation a few days ahead, calculate the expected number of trauma incidents, and staff up accordingly. More staff on sunny days, fewer when it rains.

Or – maybe not. While a statistical model with weather variables as predictors might be objectively better than mere seasonal effects, it would also result in markedly poorer quality of life for the healthcare personnel involved, having their work schedule decided by short-term weather forecasts. Statistician George Box has said that “All models are wrong, but some are useful” [ 12 ]. Even a “good” statistical analysis is not necessarily useful.
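The comparison between a generic seasonal model and one built on concrete weather predictors can be sketched with simulated data. All values below are hypothetical, invented for illustration rather than drawn from the Oslo registry:

```python
# Illustrative sketch (simulated data): replacing a generic seasonal
# effect with concrete weather predictors can improve model fit.
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(365)

# Hypothetical daily weather: sunlight follows the seasons, rain is noisy
sunlight = 8 + 4 * np.sin(2 * np.pi * (days - 80) / 365) + rng.normal(0, 1, 365)
rain = np.clip(rng.normal(3, 2, 365), 0, None)

# Simulated daily admissions: driven by the weather itself, plus noise
counts = 10 + 0.8 * sunlight - 0.5 * rain + rng.normal(0, 1.5, 365)

def r_squared(predictors, y):
    """Fit ordinary least squares with an intercept and return R^2."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Model 1: generic seasonal effect (a summer indicator only)
summer = ((days >= 172) & (days < 266)).astype(float)
r2_season = r_squared(summer[:, None], counts)

# Model 2: the daily weather variables that actually drive the counts
r2_weather = r_squared(np.column_stack([sunlight, rain]), counts)

print(f"R^2 season-only: {r2_season:.2f}, R^2 weather: {r2_weather:.2f}")
```

The weather model fits better here by construction; whether such a model is also *useful* is exactly the contextual question raised above.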

Statistical analysis has two ingredients: mathematics and context. Mathematics is often the easy part: it’s either right or wrong. The real world, however, is rarely black or white, but tends to be shades of grey, and the accompanying statistical analysis will be shades of right and shades of wrong. A well-crafted statistical analysis can help us see things our eyes cannot, and find patterns where our brains fall short, ultimately contributing to changing clinical practice and improving patient outcomes. Being able to properly merge statistical competence with clinical know-how is essential in order to arrive at not only correct, but also valuable and usable research results.

Can statistics save lives? Indeed. But only when the mathematical and contextual side of statistics work together.

Data availability

No datasets were generated or analysed during the current study.

References

1. Looking back on the millennium in medicine. N Engl J Med. 2000;342(1):42–9.

2. Røislien J, Johnsen AL. Those troublesome fractions. Tidsskr Nor Laegeforen. 2024;144(7).

3. Reyna VF, Nelson WL, Han PK, Dieckmann NF. How numeracy influences risk comprehension and medical decision making. Psychol Bull. 2009;135(6):943–73.

4. Cohen JE. Mathematics is biology’s next microscope, only better; biology is mathematics’ next physics, only better. PLoS Biol. 2004;2(12):e439.

5. Hyldmo PK, Horodyski MB, Conrad BP, Dubose DN, Røislien J, Prasarn M, et al. Safety of the lateral trauma position in cervical spine injuries: a cadaver model study. Acta Anaesthesiol Scand. 2016;60(7):1003–11.

6. Hyldmo PK, Horodyski M, Conrad BP, Aslaksen S, Røislien J, Prasarn M, et al. Does the novel lateral trauma position cause more motion in an unstable cervical spine injury than the logroll maneuver? Am J Emerg Med. 2017;35(11):1630–5.

7. Spahn DR, Bouillon B, Cerny V, Coats TJ, Duranteau J, Fernández-Mondéjar E, et al. Management of bleeding and coagulopathy following major trauma: an updated European guideline. Crit Care. 2013;17(2):R76.

8. Hagemo JS, Stanworth S, Juffermans NP, Brohi K, Cohen M, Johansson PI, et al. Prevalence, predictors and outcome of hypofibrinogenaemia in trauma: a multicentre observational study. Crit Care. 2014;18(2):R52.

9. Feigenbaum L. Brook Taylor and the method of increments. Arch Hist Exact Sci. 1985;34(1):1–140.

10. Røislien J, Søvik S, Eken T. Seasonality in trauma admissions – are daylight and weather variables better predictors than general cyclic effects? PLoS ONE. 2018;13(2):e0192568.

11. Bhattacharyya T, Millham FH. Relationship between weather and seasonal factors and trauma admission volume at a level I trauma center. J Trauma Acute Care Surg. 2001;51(1).

12. Box GEP. Robustness in the strategy of scientific model building. In: Launer RL, Wilkinson GN, editors. Robustness in statistics. Academic Press; 1979. pp. 201–36.


Author information

Authors and Affiliations

Department of Research, The Norwegian Air Ambulance Foundation, Oslo, Norway

Jo Røislien

Faculty of Health Sciences, University of Stavanger, Stavanger, Norway


Contributions

Jo Røislien had the idea for and wrote the manuscript.

Corresponding author

Correspondence to Jo Røislien .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Comment provides a summary of the Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine Honorary Lecture given at the Oslo HEMS Conference 2023.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Røislien, J. Saving lives with statistics. Scand J Trauma Resusc Emerg Med 32 , 79 (2024). https://doi.org/10.1186/s13049-024-01256-4


Received : 21 August 2024

Accepted : 23 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1186/s13049-024-01256-4


  • Statistical analysis
  • Prehospital care

Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine

ISSN: 1757-7241


New Northeastern lab plumbs the mysteries of the ticks and bacteria that cause Lyme

Constantin Takacs of Northeastern loves to study black-legged deer ticks and Borrelia burgdorferi, which is good news for everybody else


A sign at the entrance to the newly established laboratory of Northeastern University assistant biology professor Constantin Takacs warns visitors of the tiny menaces that dwell within.

“There are ticks present in the space,” it says. “Do not enter this room without the knowledge and permission of the Takacs lab.”

To make sure the poppyseed-sized black-legged ticks recently arrived from a breeding facility don’t escape, the doorway to the testing room is lined with a white double-sided sticky mat.

The precautions are meant to ensure researchers’ safety as they use new techniques to follow Borrelia burgdorferi, the bacterium that causes Lyme disease, inside the tick over the course of the tick’s two-year life cycle.

With cases of Lyme disease spreading and outpacing mosquito-borne illnesses in the U.S., it’s more important than ever to explore the cycle of disease and potential ways to interrupt it, Takacs says.

The little-understood tick

“We need to understand how the (Lyme disease bacteria) function and how they’re transmitted by ticks. And, therefore, we need to understand the tick itself,” he says.

Surprisingly little is known about black-legged or deer ticks, despite their role as a vector, or carrier, of Lyme, a disease for which 476,000 people are treated annually, according to the Centers for Disease Control and Prevention.

The vector biology field has been driven by mosquitoes. “In other parts of the world, the mosquito is far more important as a disease vector than ticks are,” Takacs says. “But here, right now, it’s ticks that transmit most of the vector-borne disease.”

Lyme disease is not only endemic in the heavily populated Northeast, mid-Atlantic and upper Midwest states, but the pathogen that causes it can stick around in the infected host for a long time.

“If you take a lab mouse and you infect it (with Borrelia burgdorferi) and you don’t treat it with an antibiotic, it’s infected for life,” Takacs says.

Deer ticks are also tough customers, able to live months in the lab without a meal and yet still transmit Lyme disease.

These are the sort of biological facts that intrigue Takacs but sound like the stuff of zombie horror movies to almost anyone who has ever taken a walk in the woods or fields where ticks dwell.

‘A lot of ticks will die for science’

A major challenge facing tick researchers is the small size of the creature, which, like a spider, has eight legs as an adult and so is considered an acarid rather than an insect like the mosquito.

The larvae of black-legged ticks, also known as deer ticks, resemble dust, Takacs says. The nymphs that they molt into are the size of a poppy seed, while adults are typically compared to sesame seeds.


But tools new to the scientific community such as dissection stereo microscopes are giving Takacs and his researchers highly amplified views of nymphal ticks that arrived this summer from Oklahoma State University.

Using the microscope, which has a camera attached, postdoctoral research associate Chris Zinck shows a visitor how to look beyond the glint of a live tick’s hard outer body to see everything from its midgut to the hooks that anchor into its host’s skin.

Scientists want to know “What’s happening to the tick during its life cycle?” he says.

 “A lot of that starts with what we can see,” says Zinck, who like Takacs wears a white lab coat and gloves instead of the traditional blue to better spot escaped ticks.

Zinck recently successfully dissected a nymphal tick after immobilizing it on a metal table chilled to 4 degrees Celsius. Ticks don’t like cold temperatures, so chilling them makes them easier to handle for dissection and other purposes.

 “A lot of ticks will die for science,” Takacs says.

Blood meals

But first, the ticks will be fed a blood meal at every stage of their life, starting with larvae feeding on lab mice infected with Borrelia burgdorferi, Takacs says.

“In order to follow the bacterium inside a tick, we have to let the tick grow as it naturally does, so we have to feed the tick. And the only way we can feed the tick is by putting it on a mouse,” Takacs says.

In a literal feedback loop, the tick acquires the bacterium by feeding on an infected mouse, he says.


Staining tick body parts with a special dye will allow researchers to see how the Lyme bacterium interacts with the tick’s anatomy as the pathogenic agent moves from gut to salivary gland and mouth and back again during and between feeding cycles, Takacs says.

Weird, and resilient, ticks

Once infected, the tick is able to transmit Lyme to mice, as well as other mammals including deer, dogs and humans.

One of the goals of the research is to disrupt tick anatomy or interactions to see whether that stops transmission of the bacteria, Takacs says.

It helps that scientists came up with a more complete iteration of the genome of the black-legged deer tick in early 2023, opening the way for the creation of genetic targets to reduce Lyme and other tick-borne diseases.

Takacs says his spirochete and vector biology lab at Northeastern will apply genetic modification techniques on ticks as well as on the bacteria that causes Lyme disease.

The twin-armed approach is aided by his training in both bacteriology and eukaryotic biology, which is the study of any cell with a clearly defined nucleus, including mice and ticks, Takacs says.

“I’m kind of straddling both worlds because I understand the bacterium, the microbe, but also the host. I like to study their interaction by looking at both angles,” he says.

Takacs, who came to Northeastern in January of 2023, says his fascination with Borrelia burgdorferi has only grown since his postdoctoral research days at Yale and Stanford.

Borrelia burgdorferi is a spiral-shaped bacterium called a spirochete. Another spirochete causes syphilis, but Borrelia is different enough to be labeled weird, and it is almost certainly unique in the world of bacteria.

In bad news for the body’s natural immune defenses, Borrelia burgdorferi has an “antigenic variation mechanism” that allows the bacteria to change a protein on their surface, Takacs says.

The surface protein is “akin to the armor of the organism,” he says. “It changes continuously. By the time the host immune system starts to attack the armor, the bacteria has changed the nature of the armor.”

This is different from the way most bacteria and pathogens enter the body and make a person feel ill. 

“After a week or two, your immune system is going to kill the pathogen. Think about the last cold you had. Well, Borrelia burgdorferi is going to stay inside you for many months.”

In the case of the lab mouse, which can live one to five years, that’s for life, Takacs says.

Another way that the Lyme spirochete stands out from the rest of the bacterial crowd is by having more than 15 linear and circular plasmids in addition to one large linear chromosome. 

Takacs says scientists theorize Borrelia burgdorferi evolved to have many genome segments so it could infect a variety of animal species.


Ticks on dinosaurs and Lyme in a mummy

Black-legged ticks are also resilient, almost shockingly so.

“You can take a tick and put it in a jar at room temperature on the shelf. And as long as it doesn’t get dry, a year later, it’s still alive,” Takacs says.

Not only that, “It still has the same number of spirochetes,” he says. “It’s still able to transmit Lyme disease.”

Perhaps it’s no surprise that ticks and the Lyme spirochete are so tough, seeing as they have been around for millennia.

Ticks have been discovered in the fossils of feathered dinosaurs, while evidence of Lyme disease was found in the 5,300-year-old mummified remains of Ötzi “the Iceman.”

Lyme can affect the joints, heart and brain

The most common symptoms of Lyme are fever, fatigue, joint pains and a rash, although not every sufferer gets the rash.

Untreated, the Lyme spirochete can get into skin, heart tissue, joints and the outermost layer of protective tissue surrounding the brain and spinal cord, Takacs says.

“If you think about the symptoms of Lyme disease, you get a match between where the bacteria go and where there’s pathology,” he says.

The CDC says people treated in the early stages of Lyme disease with appropriate antibiotics usually experience a full recovery, although it estimates that 5% to 10% of Lyme patients have persistent symptoms after early treatment.

In the meantime, the Global Lyme Alliance says that as many as 2 million Americans could suffer post-treatment disability. 

The search for better Lyme treatment has led Northeastern University Distinguished Professor of Biology Kim Lewis to develop an antibiotic treatment he says is more targeted to borrelia than broad spectrum antibiotics currently in use such as doxycycline.

Lewis says the antibiotic, Hygromycin A, could also mop up residual pathogens. It is currently in clinical trials in Australia.

Takacs believes potential solutions lie in understanding the complex interactions between Borrelia burgdorferi, black-legged ticks and host animals such as lab mice.

Black-legged deer ticks also transmit other, less common diseases, including the bacterial illness anaplasmosis; babesiosis, which is caused by a parasite; and Powassan, a viral disease.

Tick habitat is expanding. But this is still “sort of an area of mystery,” he says.

Cynthia McCormick Hibbert is a Northeastern Global News reporter. Email her at [email protected] or contact her on X/Twitter @HibbertCynthia .



When Vulnerable Narcissists Take the Lead: The Role of Internal Attribution of Failure and Shame for Abusive Supervision

  • Original Paper
  • Open access
  • Published: 02 September 2024


  • Susanne Braun   ORCID: orcid.org/0000-0002-8510-5914 1 ,
  • Birgit Schyns 2 ,
  • Yuyan Zheng 3 &
  • Robert G. Lord 1  

Research to date provides only limited insights into the processes of abusive supervision, a form of unethical leadership. Leaders’ vulnerable narcissism is important to consider, as, according to the trifurcated model of narcissism, it combines entitlement with antagonism, which likely triggers cognitive and affective processes that link leaders’ vulnerable narcissism and abusive supervision. Building on conceptualizations of aggression as a self-regulatory strategy, we investigated the role of internal attribution of failure and shame in the relationship between leaders’ vulnerable narcissism and abusive supervision. We found across three empirical studies with supervisory samples from Germany and the United Kingdom (UK) that vulnerable narcissism related positively to abusive supervision (intentions), and supplementary analyses illustrated that leaders’ vulnerable (rather than grandiose) narcissism was the main driver. Study 1 ( N  = 320) provided correlational evidence of the vulnerable narcissism-abusive supervision relationship and for the mediating role of the general proneness to make internal attributions of failure (i.e., attribution style). Two experimental studies ( N  = 326 and N  = 292) with a manipulation-of-mediator design and an event recall task supported the causality and momentary triggers of the internal attribution of failure. Only Study 2 pointed to shame as a serial mediator, and we address possible reasons for the differences between studies. We discuss implications for future studies of leaders’ vulnerable narcissism as well as ethical organizational practices.


Introduction

Abusive supervision remains a concern for scholars and practitioners in the field of business ethics (Mitchell et al., 2023 ) and has sometimes been considered the opposite of ethical leadership (e.g., Babalola et al., 2022 ). Indeed, we would argue that it is a form of unethical leadership as it refers to the “sustained display of hostile verbal and nonverbal behaviors, excluding physical contact” from leaders directed at their followers (Tepper, 2000 , p. 178). By engaging in acts of harm against their followers, abusive supervisors violate deontological (rights and justice) and virtue (moral character) norms in organizations (Ünal et al., 2012 ). Ample research supports the negative impact that abusive supervision has on followers and organizations (for overviews see Mackey et al., 2017 , 2021a , 2021b ; Martinko et al., 2013 ; Schyns & Schilling, 2013 ). It decreases satisfaction with supervisors (Pircher Verdorfer et al., 2023 ), fuels turnover intentions (Palanski et al., 2014 ), and predicts perceptions of workplace aggression more strongly than other unethical forms of leadership do (Cao et al., 2023 ). The effects of abusive supervision also spill over to others through peer harassment (Bai et al., 2022 ), gossiping (Decoster et al., 2013 ), and bullying (Mackey et al., 2018 ). Responding to calls in the business ethics literature to uncover why leaders engage in (un)ethical behaviors (Babalola et al., 2022 ) and emphasizing the value of psychological mechanisms (Islam, 2020 ), we focus on the cognitive and affective processes leading to abusive supervision. We posit that scholars of business ethics must, both from a normative and consequential point of view, continue to create an understanding of the underlying reasons for abusive supervision to contribute to more ethical workplaces.

The purpose of our study is to expand the current research with new insights that help us understand why leaders, especially those high in vulnerable narcissism, engage in abusive supervision (see Fischer et al., 2021 ; Mackey et al., 2017 ; Tepper et al., 2017 ; Zhang & Bednall, 2016 for reviews). Previous research illustrates that a multitude of leader characteristics such as traits (e.g., Machiavellianism; De Hoogh et al., 2021 ), attitudes (e.g., psychological entitlement; Eissa & Lester, 2022 ), negative emotions (e.g., anxiety; Xi et al., 2022 ), perfectionism (Guo et al., 2020 ), and emotional exhaustion (Fan et al., 2020 ) predict the extent to which leaders engage in abusive supervision. However, when it comes to leader narcissism as a predictor of abusive supervision, the results are mixed. Studies that conceptualized leader narcissism as a unidimensional construct did not show a relationship (Wisse & Sleebos, 2016 ) or only for some leaders (e.g., with low political skills; Waldman et al., 2018 ). Studies with multidimensional conceptualizations of narcissism found that leaders’ narcissistic rivalry, but not narcissistic admiration, related positively to abusive supervision (Gauglitz, 2022 ; Gauglitz & Schyns, 2024 ). Vulnerable narcissism, however, remains systematically understudied in the work domain. The dearth of research on leaders’ vulnerable narcissism is particularly worrying because findings in a general population provide clear evidence of a vulnerable narcissism–aggression link (Du et al., 2024 ; Kjærvik & Bushman, 2021 ).

Psychology offers fruitful avenues for a deeper study of the intra-psychic processes underlying ethical issues in organizations (Babalola et al., 2022 ; Islam, 2020 ), and thereby contributes to a broader intellectual base of business ethics research (Greenwood & Freeman, 2017 ). We build on current developments in the field of personality psychology, specifically the trifurcated model of narcissism, which suggests that narcissism should be studied hierarchically with two higher-order factors, grandiose and vulnerable narcissism. The two share a common antagonism component but are distinguished by extraversion and neuroticism, respectively (Crowe et al., 2019 ; Miller et al., 2021 ). Vulnerable narcissists are arguably the more problematic leaders due to their low and contingent self-esteem (Di Pierro et al., 2019 ; Dinić et al., 2022 ; Rogoza et al., 2018 ; Rohmann et al., 2019 , 2021 ). Vulnerable narcissism is related to aggression as strongly as grandiose narcissism is (Du et al., 2024 ; Kjærvik & Bushman, 2021 ), making it a likely antecedent of abusive supervision.

We study leaders’ vulnerable narcissism as a relatively stable personality trait characterized by “a defensive and insecure grandiosity that obscures feelings of inadequacy, incompetence, and negative affect” (Miller et al., 2011 , pp. 1013–1014). Vulnerable narcissism differs from “narcissistic leadership,” defined as “leaders’ actions [that] are principally motivated by their own egomaniacal needs and beliefs” (Rosenthal & Pittinsky, 2006 , p. 629). It is also different from Vulnerable Narcissistic Leader Behaviour (VNLB; Schyns et al., 2023a , 2023b ), that is, “the specific behavioural expression that vulnerable narcissistic leaders show in their daily work life” (p. 817). We disentangle the personality trait of vulnerable narcissism from the leadership behavior, allowing us to investigate leaders’ internal affective and cognitive processes that lead from the trait to its expression.

We base our assumptions on the dynamic self-regulatory processing model of narcissism (Morf & Rhodewalt, 2001 ) and conceptualizations of aggression as a self-regulatory strategy (Denissen et al., 2018 ; Kruglanski et al., 2023 ), in line with the argument that vulnerable narcissists show aggression due to frustration and shame (Morf et al., 2011 ). Shame is a self-conscious emotion that arises through the attribution of failure to stable characteristics of the self (e.g., Tracy & Robins, 2007 ). Self-conscious emotions have been called moral emotions as they serve moral functions (Tangney et al., 2007a , 2007b ). Schaumberg and Tracy ( 2020 ) argue that shame and guilt influence unethical behaviors in opposite ways. While guilt serves to improve ethical behavior (e.g., increasing employees’ next-day performance and decreasing enacted incivility; Kim et al., 2024 ), this may not be the case for shame. Indeed, shame has been linked to aggression: Tracy and Robins ( 2007 ) argue that shame is one of the sources of narcissistic rage.

Aggression linked to shame serves a purpose in that it can (re-)establish “one’s sense of significance and mattering” (Kruglanski et al., 2023 , p. 445) when the individual feels devalued, inferior or exposed, also described as humiliated fury (Lewis, 1971 ). Previous research illustrates that individuals high in vulnerable narcissism are prone to experience shame (e.g., Di Sarno et al., 2020 ; Freis et al., 2015 ), but less is known about why and with which consequences. We argue that leaders high in vulnerable narcissism turn their shame inside out: They direct this painful, self-conscious emotion at a convenient target by abusing their followers (Neves, 2014 ). We suggest that this process is set in motion by how these leaders attribute: They see the reasons for failure as something negative about themselves (i.e., attribute failure internally) which increases shame.

Attribution theory, originating in the works of Heider ( 1958 ), Kelley ( 1973 ), and Weiner ( 1985 ), has been fruitfully applied to the work context (Martinko et al., 2011 ). Attributions often occur in response to negative trigger events, and when these are attributed internally, the cause is regarded as something negative about the self (e.g., lack of ability or effort; Harvey et al., 2014 ). In addition, attribution styles are stable, trait-like tendencies toward certain types of attributions (Martinko et al., 2007 , 2011 ). We argue that individuals high in vulnerable narcissism attribute internally both chronically and in response to negative events. These processes might explain why vulnerable narcissism predicts reactive aggression (Vize et al., 2019 ). Reactive aggression requires a trigger, and for narcissists, the trigger is often a threat directed at the fragile self (Morf & Rhodewalt, 2001 ). We hence explain abusive supervision as a self-regulatory process: Leaders’ vulnerable narcissism makes them more likely to experience shame because they attribute failure internally, and they take their negative thoughts about the self out on others.

Our work makes several contributions to the extant business ethics and leadership literature. First, we contribute to a growing body of work that seeks to understand the antecedents of abusive supervision (Fischer et al., 2021 ; Mackey et al., 2017 ; Tepper et al., 2017 ; Zhang & Bednall, 2016 ) and unethical leadership (Hassan et al., 2023 ; Mackey et al., 2021a , 2021b ), particularly research that emphasizes the unethical implications of vulnerable narcissism in organizations (Gauglitz, 2022 ; Schyns et al., 2023a , 2023b ). Building on recent psychological theories for a better understanding of the underlying issues that spur on abusive supervision, our research disentangles leaders’ vulnerable narcissism as a stable personality trait from leader behavior (Tuncdogan et al., 2017 ), as traits are not necessarily expressed in behavior (Tett et al., 2021 ). We provide a new angle on why leaders aggress against their followers, one previously missed in narcissism, leadership, and ethics research.

Second, we contribute to research into moral emotions in the workplace. Specifically, our work provides further insights into the specific interplay between powerful cognitions and moral emotions in organizations, that is, internal attribution of failure (Martinko et al., 2007 ) and shame (Daniels & Robinson, 2019 ). We argue that leaders high in vulnerable narcissism use aggression as a self-regulatory strategy (Denissen et al., 2018 ; Kruglanski et al., 2023 ), thereby emphasizing the triggers of leaders’ internal, self-regulatory processes (Morf & Rhodewalt, 2001 ; Morf et al., 2011 ). Hence, rather than examining other-rated abusive supervision, we ask leaders to indicate to what extent they engaged in or would engage in abusive supervision (for a similar approach see Decoster et al., 2023 ; and Gauglitz, 2022 ). A focus on the internal cognitive and affective processes leading to abusive supervision (intentions) will help us to explain how abusive supervision emerges within the individual, thereby contributing to the literature which considers intra-psychic processes as essential to a psychology of ethics (Islam, 2020 ). Also, in contrast to previous research which focused on general negative affect (e.g., Eissa et al., 2020 ), we test two specific mechanisms to explain why leaders high in vulnerable narcissism abuse their followers. Understanding these mechanisms offers two potential points for intervention that can contribute to more ethical workplaces.

Finally, we provide supplementary analyses that speak to the uniqueness of our findings for vulnerable narcissism. This contributes to research into different forms of narcissism that aims to better understand the different outcomes related to vulnerable versus grandiose narcissism. We build on the trifurcated model of narcissism, and the distinction between grandiose and vulnerable narcissism (Crowe et al., 2019 ; Miller et al., 2011 , 2021 ). Our research allows us to shed light on what is shared in terms of vulnerable versus grandiose narcissism and what is unique to (leaders’) vulnerable narcissism. Specifically, in controlling for grandiose narcissism in our analyses, we can draw conclusions about the uniqueness of the effects of vulnerable narcissism. Furthermore, by repeating our analyses with grandiose narcissism, we can also rule out that grandiose narcissism by itself instills the same internal processes leading to abusive supervision as vulnerable narcissism does. These analyses inform business ethics research and practice in that they help disentangle which dimension of narcissism is more problematic for workplace ethics.

Leader Vulnerable Narcissism and Abusive Supervision

Previous studies of the correlates of vulnerable narcissism support the expectation that vulnerable narcissists are predisposed toward abusive supervision. Vulnerable narcissism is positively related to entitlement (Miller et al., 2011 , 2018 ), which predicts abusive supervision (Eissa & Lester, 2022 ). Vulnerable narcissists are also high in neuroticism (Miller et al., 2011 , 2018 ), and recent results support the positive relationship between angry hostility (an element of neuroticism) and abusive supervision (Fosse et al., 2024 ). Vulnerable narcissism relates negatively to agreeableness (Miller et al., 2011 , 2018 ), and it has been shown that frustrated leaders are more likely to abuse their subordinates when their agreeableness is low (Eissa & Lester, 2017 ). Vulnerable narcissists are less likely to experience positive affect and more likely to experience negative affect (Miller et al., 2011 ), the latter of which has been shown to positively predict abusive supervision (Pan & Lin, 2018 ).

Individuals high in vulnerable narcissism also show destructive interpersonal behaviors such as anger and rudeness towards others (Miller et al., 2011 ). Two recent meta-analyses support the relationship between vulnerable narcissism and aggression. A meta-analysis of 437 primary studies shows that vulnerable narcissism relates positively to reactive and proactive aggression (Kjærvik & Bushman, 2021 ). The authors conclude that “not just individuals who are high in entitlement and grandiose narcissism […] lash out aggressively against others; people who are high in vulnerable narcissism do that too” (Kjærvik & Bushman, 2021 , p. 490). In Du et al.’s ( 2022 ) meta-analysis of 112 primary studies, the antagonism subdimension (which vulnerable and grandiose narcissism share) related positively to all indices of aggression, and neuroticism (specific to vulnerable narcissism) related positively to reactive and general aggression.

In sum, vulnerable narcissism is positively associated with traits (i.e., high neuroticism and hostility, low agreeableness), emotions (i.e., high negative affect), and behaviors (i.e., high aggression) that predispose leaders high on vulnerable narcissism toward abusive supervision. We therefore expect that vulnerable narcissism relates positively to abusive supervision.

Hypothesis 1.

Leaders’ vulnerable narcissism relates positively to abusive supervision.

Shame, Internal Attribution of Failure, and Abusive Supervision

Self-regulatory theory helps to understand why leaders high in vulnerable narcissism engage in abusive supervision. The fragility of the narcissistic self-concept motivates the individual to strive for external affirmation and to protect themselves from situations that are self-threatening (Morf & Rhodewalt, 2001 ). However, these processes differ between individuals high in grandiose narcissism and vulnerable narcissism. Grandiose narcissists aggress against others to establish their superiority, whereas vulnerable narcissists are tormented by inferiority. The aggression of vulnerable narcissists occurs as “an attempt to turn the tables and to right the self, which has been impaired in the shame experience” (Tangney et al., 1992 , p. 670). As Morf and colleagues (2011) argue, internal attribution of failure and shame are key elements of vulnerable narcissists’ dynamic self-regulatory processes, and aggression is a way to act out their self-directed frustration.

Shame is a powerful self-conscious emotion because it implies that the whole self is wrong (i.e., “I failed,” with the emphasis on the self), as opposed to guilt, which centers more on behavior (i.e., “I failed,” with the emphasis on the act) (Daniels & Robinson, 2019 ). Evidence supports that vulnerable narcissism relates positively to shame ( r  = 0.46 and r  = 0.49 in studies by Morf et al., 2017 ), and that vulnerable and grandiose narcissism relate differentially to shame-proneness (Schoenleber et al., 2024 ). Krizan and Johar ( 2015 ) demonstrated that vulnerable (but not grandiose) narcissism positively predicted shame, aggressiveness, poor anger control, distrust of others, and angry rumination, which in turn related positively to reactive and displaced aggression in general and in response to provocation. In work contexts, Di Sarno et al. ( 2020 ) found that workplace events relating to social stress and workload were positively related to shame, particularly for individuals high in vulnerable narcissism. Freis et al. ( 2015 ) found that only individuals high (not low) in vulnerable narcissism responded with more shame when they received negative rather than satisfactory feedback for a task on which they believed they had performed well.

These findings illustrate the paradox of vulnerable narcissism: Vulnerable narcissistic individuals harbor thoughts of entitlement, while at the same time doubting their own entitled beliefs and relying on positive enforcement from others to self-regulate (i.e., maintain and restore their fragile self; Morf & Rhodewalt, 2001 ; Morf et al., 2011 ). We expand the picture on vulnerable narcissism and self-regulation by arguing that leaders high in vulnerable narcissism are looking for a release of their frustration, and therefore aim negative thoughts, emotions, and behaviors at their followers.

Shame and Abusive Supervision

Shame can manifest in unethical behaviors such as withdrawal, avoidance, and attacking others (Murphy & Kiffin-Petersen, 2017 ). Shame-prone (as opposed to guilt-prone) individuals display maladaptive responses such as anger, hostility, and blaming others for negative events (Tangney et al., 1992 ). In a correlational study with 240 employees in the United States, Bauer and Spector ( 2015 ) found that, contrary to their hypothesis, shame was more strongly related to active counterproductive work behaviors such as abuse against others ( r  = 0.46) and social undermining ( r  = 0.53) than to passive ones such as withdrawal ( r  = 0.36).

Shame is not easily remedied. Leith and Baumeister ( 1998 ) suggest that the “only responses that seem to minimize the subjective distress of shame are to ignore the problem, to deny one’s responsibility, to avoid other people, or perhaps to lash out at one’s accusers” (p. 4). They found no relationship between shame and perspective-taking and a negative relationship between shame and interpersonal outcomes.

In sum, there is compelling reason to suggest that leaders high in vulnerable narcissism act aggressively against others because they perceive their self to be under threat, and they turn the powerful, self-conscious emotion of shame ‘inside-out’ by lashing out against their followers (Neves, 2014 ). An open question is why leaders high in vulnerable narcissism experience shame. We argue that this is due to the way they attribute: They see the reasons for mistakes as something negative about themselves (i.e., attribute failure internally), which increases shame.

Internal Attribution of Failure and Shame

Failing in organizations is common, and the motivation to understand negative outcomes is high (Smollan & Singh, 2022 ). We argue that leaders high in vulnerable narcissism show different cognitive, affective, and behavioral responses to failure than their less narcissistic counterparts. Attribution theory distinguishes between attributions as “causal explanations for specific events” (Martinko et al., 2007 , p. 569), and attribution style, the “tendency to make attributions that are similar across situations” (i.e., a trait-like characteristic; Martinko et al., 2007 , p. 569). Locus of causality is the best understood attributional dimension and describes “whether the perceived cause of an outcome is internal or external” (Harvey et al., 2014 , p. 130). Internal attribution, in turn, is an antecedent of shame. As Daniels and Robinson ( 2019 ) argue, shame arises “when people fall short of important identity-based standards, and make self-threatening attributions for it” (p. 2450), which they call an attribution to “a faulty self” (p. 2453). We posit that leaders high in vulnerable narcissism attribute failure internally both because they have a chronic tendency to do so and because they experience failure-related events as self-threatening. In sum, we expect that internal attribution of failure and subsequent shame can in part explain why leaders high in vulnerable narcissism aggress against their followers in the form of abusive supervision.

Hypothesis 2.

Internal attribution of failure mediates the positive relationship between leaders’ vulnerable narcissism and abusive supervision.

Hypothesis 3.

The relationship between leaders’ vulnerable narcissism and abusive supervision is mediated by internal attribution of failure and shame in a serial fashion.

Figure  1 summarizes our research model.

figure 1

Research Model. Study 1 tests Hypotheses 1 and 2. Studies 2 and 3 test the full research model

We conducted three empirical studies with supervisor samples in Germany (Studies 1 and 2) and the UK (Study 3). The first study ( N  = 320) applied a correlational design with two points of measurement to test how leaders’ vulnerable narcissism relates to the general proneness to attribute failure internally (i.e., attribution style) and to self-rated abusive supervision. To understand the causality and momentary triggers of the processes involved, we designed two experimental studies ( N  = 326 and N  = 292). In both Studies 2 and 3, we assessed narcissism as our independent variable at Time 1 (T1) and, at Time 2 (T2), applied a manipulation-of-mediator design for internal attribution of failure. In Study 2, we linked the internal attribution of failure more clearly to a negative event at work and added shame as a serial mediator. In Study 3, we strengthened our manipulation of internal attribution of failure by using an event recall task. Study 3 also used an improved measurement of shame.

In sum, our study series progresses from initial cross-sectional evidence of the general relationships proposed in the first two hypotheses (Study 1) to a causal analysis of the full model with all three hypotheses and scenario-based testing of momentary triggers (Study 2) to a replication of the causal results with a more rigorous design including increased experimental realism and improved measurement of shame (Study 3). All studies were reviewed and approved by the institution’s ethical review board and participants gave informed consent before taking part in the research.

Sample and Procedure

We surveyed supervisors in organizations in Germany at two points in time, approximately ten days apart, recruited through Respondi. At T1, participants rated their own vulnerable and grandiose narcissism as well as internal attribution of failure. At T2, participants rated their own abusive supervision and indicated socio-demographic data. At T1, 390 supervisors completed the survey and passed the attention checks to ensure data quality, 320 of whom also completed the survey and passed attention checks at T2 (159 women, 161 men). Supervisors were between 26 and 64 years old ( M  = 46.06, SD  = 9.71) and had between 0.5 and 40 years of supervisory experience ( M  = 11.21, SD  = 8.12). They worked as team leaders (30.9%), heads of departments (38.1%) or business areas (16.6%), or members of the top management team (12.2%; 2.2% held other positions).

Measurement

Vulnerable narcissism.

We assessed vulnerable narcissism with a 16-item version (Schoenleber et al., 2015 ) of Pincus et al.’s ( 2009 ) Pathological Narcissism Inventory (PNI; α  = 0.93) using items from the four subdimensions of contingent self-esteem, hiding the self, devaluing others, and entitlement rage (validated translation by Morf et al., 2017 ). Participants responded on a 7-point Likert scale from fully disagree (1) to fully agree (7).

Internal Attribution of Failure

Participants rated five negative workplace situations (1 = Nothing to do with me, 7 = Totally due to me; α = 0.64 Footnote 1 ) from the Organizational Attributional Style Questionnaire (OASQ; Kent & Martinko, 2018 ), which were translated into German and back-translated following standard procedures.

Abusive Supervision

We adapted Tepper’s ( 2000 ) abusive supervision scale ( α  = 0.92; 15 items) using Schilling and May’s ( 2015 ) translation to capture self-rated abusive supervision. Participants rated the extent to which they engaged in abusive supervision on a scale from never (1) to very often (7). The adapted items are provided in OSM section IV.

Table 1 summarizes the means, standard deviations, and correlations for Study 1. We conducted confirmatory factor analysis (CFA) with ML estimation in Mplus Version 8.7 (Muthén & Muthén, 1998 – 2024 ). Our hypothesized model fitted the data best (see OSM Section I).

Hypothesis Testing

We ran a hierarchical linear regression in SPSS version 28 with 95% confidence intervals and vulnerable narcissism as the predictor. In line with Hypothesis 1 , the results indicated a significant positive relationship between vulnerable narcissism and abusive supervision ( B  = 0.258, p  < 0.001, 95%CI [0.196, 0.320]). Next, we estimated the mediation model in PROCESS (Hayes, 2018) using Model 4 and 95% confidence intervals. Results in Table  2 show that vulnerable narcissism related positively to internal attribution of failure ( B  = 0.161, SE  = 0.064, p  < 0.05, 95%CI [0.035, 0.287]). Internal attribution of failure related positively to abusive supervision ( B  = 0.064, SE  = 0.027, p  < 0.05, 95%CI [0.011, 0.118]). The indirect relationship was significant ( B  = 0.010, SE  = 0.007, 95%CI [0.0002, 0.0274]), supporting Hypothesis 2 .
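For readers who want to see the logic of this indirect-effect test outside of PROCESS, the sketch below reproduces the Model 4 idea, a percentile bootstrap of the a×b product, in plain NumPy. It runs on simulated stand-in data: the variable names, sample size, and coefficients are illustrative assumptions, not the study’s data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-in data (assumed, not the Study 1 data):
# X = vulnerable narcissism, M = internal attribution of failure,
# Y = abusive supervision, with small positive a- and b-paths.
n = 320
X = rng.normal(size=n)
M = 0.16 * X + rng.normal(size=n)              # a-path
Y = 0.26 * X + 0.06 * M + rng.normal(size=n)   # c'- and b-paths

def indirect(X, M, Y):
    """a*b indirect effect: a from M ~ X, b from Y ~ X + M."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(X), X]), M, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(X), X, M]), Y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap of the indirect effect (PROCESS-style criterion)
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)                # resample rows with replacement
    boot[i] = indirect(X[idx], M[idx], Y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The bootstrap percentile interval excluding zero is the usual criterion for a significant indirect effect, mirroring the CI reported for Hypothesis 2 above.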

Robustness Checks

Given the positive correlation between vulnerable narcissism and grandiose narcissism in our data ( r  = 0.69), we repeated all analyses with grandiose narcissism as a covariate and found that the relationships remained similar. The results did not replicate for grandiose narcissism as a predictor (for details see OSM Section I).

The first study showed that leaders’ vulnerable narcissism related positively to self-rated abusive supervision. The general tendency to attribute failure internally mediated this relationship. However, the study has several clear limitations. First, although not out of range for situation-based measurement, the internal attribution of failure measure was low in internal consistency. It also assessed attribution style rather than attributions related to a specific event. Second, we did not provide causal evidence for the predicted relationships. Finally, we did not test the role of shame as a sequential mediator. We next designed an experiment to examine the causal role of internal attribution of failure in relation to a specific trigger event, using an improved attribution-of-failure measure as a manipulation check and testing shame as a sequential mediator.

We surveyed supervisors from organizations in Germany at two points in time, approximately ten days apart, recruited through Respondi. Only individuals who had not taken part in the previous study were invited. At T1, participants rated their vulnerable and grandiose narcissism. At T2, we implemented the manipulation-of-mediator design. At T1, 460 supervisors responded to the survey, 431 of whom provided complete data and passed attention checks to ensure data quality (217 women, 214 men). At T2, 326 supervisors provided complete data and passed attention checks. Participants were between 21 and 72 years old ( M  = 44.50, SD  = 11.01) and had between one and 45 years of supervisory experience ( M  = 12.96, SD  = 9.07). Participants worked as team leaders (39.5%), heads of departments (42.6%) or business areas (26.1%), or members of the top management team (15.0%; 4.6% held other positions).

Manipulation-of-Mediator Design

The manipulation-of-mediator design tests the causal effect of the mediator on the dependent variable by comparing the relationship between independent and dependent variables under different conditions of the mediator (MacKinnon & Pirlott, 2015 ; Pirlott & MacKinnon, 2016 ). When constraining the variance of the mediator in the experimental condition relative to when the mediator varies freely (i.e., the control condition), the predicted relationships between independent variable and outcomes should not occur (or only to a lesser extent; for a succinct overview of the manipulation-of-mediator design see Highhouse and Brooks ( 2021 ), and for studies that employed it see Chen et al., 2021 ; Lamer et al., 2022 ; Schyns et al., 2023a , 2023b ).

The manipulation-of-mediator design was implemented in our second study as follows: Participants were randomly assigned to one of three conditions: internal attribution of failure, external attribution of failure, or no attribution of failure (control). In the internal attribution condition, we prevented the mediator from varying freely, thereby blocking the hypothesized mediating mechanism from operating. Participants envisioned a scenario in which they were the head of a sales and marketing department in a mid-sized company in the technology sector. The department consisted of five teams, each supervised by a team leader. The team leaders were the participant’s direct subordinates. We informed participants that a team in their department had failed to win a significant bid and that the failure was their fault (internal attribution) or the team leader’s (i.e., the participant’s subordinate’s) fault (external attribution). In the control condition, we allowed the mediator to vary freely by not providing any information about whose fault it was. Specifically, in the internal attribution condition, an email from a dissatisfied client stressed that the failure was the participant’s own fault. In the external attribution condition, an email from the dissatisfied client stressed that it was the team leader’s fault. In the control condition, participants did not receive information about whose fault it was that the team failed to win the bid.

Following the logic of the manipulation-of-mediator design, Hypothesis 2 is supported if there is a positive relationship between vulnerable narcissism and abusive supervision intentions in the control condition and no (or a weaker) positive relationship between vulnerable narcissism and abusive supervision intentions in the internal attribution condition. Hypothesis 3 is supported if there is a positive indirect relationship between vulnerable narcissism, shame, and abusive supervision intentions in the control condition and no (or a weaker) positive indirect relationship between vulnerable narcissism, shame, and abusive supervision intentions in the internal attribution condition.

The external attribution condition was a comparison condition, which also prevented the mediator from varying freely, thereby restricting the hypothesized mediating mechanism. We did not have specific predictions for this condition. Since we are comparing conditions (control, internal, and external attribution of failure) to test the mediation effect of internal attribution of failure, we conducted a moderated mediation model with experimental conditions serving as the categorical moderator (PROCESS version 4.0, Model 8). We measured shame and abusive supervision intentions in response to the experimental scenario and conducted a manipulation check.
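To make the logic of conditional indirect effects concrete, the following sketch estimates the a×b indirect effect separately within each condition on simulated stand-in data. This per-condition version is a simplified stand-in for PROCESS Model 8, which instead estimates the moderation through predictor × condition interaction terms in one joint model; all names and coefficients below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated stand-in data (assumed, not the Study 2 data):
# X = vulnerable narcissism, M = shame, Y = abusive supervision intentions.
# The assumed a-path is weaker when the manipulation pins the attribution down.
n_per = 110
a_true = {"control": 0.40, "internal": 0.20, "external": 0.25}

def ols(design, y):
    """Least-squares coefficients for y regressed on the design matrix."""
    return np.linalg.lstsq(design, y, rcond=None)[0]

cond_indirect = {}
for cond, a in a_true.items():
    X = rng.normal(size=n_per)
    M = a * X + rng.normal(size=n_per)                # a-path
    Y = 0.25 * X + 0.30 * M + rng.normal(size=n_per)  # c'- and b-paths
    a_hat = ols(np.column_stack([np.ones(n_per), X]), M)[1]
    b_hat = ols(np.column_stack([np.ones(n_per), X, M]), Y)[2]
    cond_indirect[cond] = a_hat * b_hat               # conditional indirect effect

for cond, ab in cond_indirect.items():
    print(f"{cond:>8}: indirect effect a*b = {ab:.3f}")
```

Comparing the conditional a×b estimates across conditions parallels the comparison of conditional indirect effects between the control and attribution conditions described above; the joint interaction model additionally supplies bootstrap confidence intervals per condition.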

For vulnerable narcissism ( α  = 0.94) and abusive supervision ( α  = 0.91) adapted to our scenario (e.g., “I would tell [the subordinate] that their thoughts or feelings are stupid”), we used the same measures as in Study 1.

We used two items to measure shame ( r  = 0.63) in response to the scenario (“In this situation I would think about quitting” and “In this situation I would feel incompetent”), adapted from Tangney et al. ( 2007a ). Footnote 2

Manipulation Check

We used the German translation (Grassinger & Dresel, 2017 ) of three items ( α  = 0.81) from the Revised Causal Dimension Scale (CDS II; McAuley et al., 1992 ) for the manipulation check (e.g., “Were the reasons why this mistake happened… something about you – something about others”, reverse coded).

Means, standard deviations, and correlations among the Study 2 variables are reported in the OSM (see Section II).

The manipulation check indicated significantly higher levels of internal (vs. external) attribution of failure in the internal attribution condition ( M  = 5.16, SD  = 1.26; 95%CI [4.932, 5.383]) than in the external attribution condition ( M  = 3.53, SD  = 1.24; 95%CI [3.302, 3.761]) and the control condition ( M  = 4.06, SD  = 1.09; 95%CI [3.838, 4.289]), F (2,323) = 51.682, p  < 0.001, η p 2  = 0.242. We concluded that the manipulation was successful.

We estimated a moderated mediation model in PROCESS version 4.0 (Hayes, 2017 ) using Model 8 with 5,000 bootstrap samples and 95% confidence intervals, with vulnerable narcissism as the predictor, the three conditions (i.e., internal attribution, external attribution, control) as the categorical moderator, shame as the mediator, and abusive supervision intentions as the outcome. Table 3 summarizes the means and standard deviations by condition. Table 4 summarizes the direct and indirect effects.

Supporting Hypothesis 1 , a positive relationship between vulnerable narcissism and abusive supervision intentions emerged ( B  = 0.290, SE  = 0.075, p  < 0.001, 95%CI [0.142, 0.438]) across the three experimental conditions, as shown in Table  4 . Hypothesis 2 (i.e., internal attribution of failure mediates the positive relationship between vulnerable narcissism and abusive supervision intentions) is supported if there is a positive relationship between vulnerable narcissism and abusive supervision intentions in the control condition and no (or a weaker) positive relationship between vulnerable narcissism and abusive supervision intentions emerges in the internal attribution condition.

As shown in Table  4 , the effect of vulnerable narcissism on abusive supervision intentions was stronger in the control condition ( B  = 0.424, SE  = 0.075, 95%CI [0.277, 0.571]) than in the internal attribution condition ( B  = 0.290, SE  = 0.075, 95%CI [0.142, 0.438]), although the difference was not statistically significant; this pattern is consistent with Hypothesis 2 . That is, when the mediator varied freely (i.e., control condition), the positive relationship between vulnerable narcissism and abusive supervision intentions was stronger than when the mediator was blocked (i.e., internal attribution condition). Of note, there was also a significant effect in the external attribution condition ( B  = 0.287, SE  = 0.076, 95%CI [0.139, 0.436]).

Hypothesis 3 (i.e., internal attribution of failure and shame serially mediate the positive relationship between vulnerable narcissism and abusive supervision intentions) is supported if there is a positive indirect relationship between vulnerable narcissism, shame, and abusive supervision intentions in the control condition and no (or a weaker) positive indirect relationship emerges in the internal attribution condition. The conditional indirect relationship between vulnerable narcissism, shame, and abusive supervision intentions was significant in all three conditions. As shown in Table  4 , the indirect effect was stronger in the control condition ( B  = 0.058, SE  = 0.023, 95%CI [0.015, 0.103]) than in the internal attribution condition ( B  = 0.029, SE  = 0.170, 95%CI [0.001, 0.068]), although the difference was not statistically significant; this pattern is consistent with Hypothesis 3 . That is, when the mediator varied freely (i.e., control condition), the indirect relationship between vulnerable narcissism, shame, and abusive supervision intentions was stronger than when the mediator was blocked (i.e., internal attribution condition). Of note, the external attribution condition also showed a small indirect effect ( B  = 0.064, SE  = 0.026, 95%CI [0.012, 0.108]).

We again repeated all analyses with grandiose narcissism as a covariate and found that the relationships remained comparable. The results again did not replicate for grandiose narcissism (for details see OSM Section II).

In the second study, we found evidence to confirm the positive relationship between vulnerable narcissism and abusive supervision intentions (Hypothesis 1). The findings suggested that internal attribution of failure may function as a mediator (Hypothesis 2) and that internal attribution of failure and shame may act as serial mediators (Hypothesis 3), although the evidence was not conclusive. There are several clear limitations of this study: The analysis following a manipulation-of-mediator design suggested that the mediation in the internal attribution condition was not blocked, just reduced. The manipulation of internal attribution of failure was based on a scenario that suggested to participants what the reason for the failure was (i.e., it did not capture their own attributions, but triggered an attribution based on external information). Although the manipulation check showed that participants’ subsequent attributions followed as intended, we did not capture an actual event that they had experienced or their own, independent attribution and behavior in response. Finally, the two-item shame measure is a concern, although we provided additional information about its convergent and divergent validity. We designed a final study to re-examine the causal role of internal attribution of failure and shame as mediating mechanisms of the relationship between vulnerable narcissism and abusive supervision, using an event recall task and an improved measure of shame.

We surveyed supervisors from different organizations in the UK at two points in time, approximately ten days apart, recruited through Prolific. At T1, participants rated their vulnerable and grandiose narcissism. At T2, we implemented an event recall task with a manipulation-of-mediator design. At T1, 362 supervisors responded to the survey, 357 of whom provided complete data and passed attention checks to ensure high data quality. At T2, 292 supervisors (146 women, 146 men) provided complete data and passed attention checks. Participants were between 19 and 66 years old (M = 34.62, SD = 9.18) and had between less than one year and 30 years of supervisory experience (M = 5.53, SD = 5.42). Participants were team leaders (61.6%), department managers (19.9%), area managers (3.4%), or top management team members (10.6%); 10.6% held other positions.

Manipulation-of-Mediator Design and Event Recall

We followed the same logic for the manipulation-of-mediator design as in Study 2. Different from Study 2, we employed an event recall task based on Grube et al.’s (2008) adaptation of the day reconstruction method (Kahneman et al., 2004). As in Study 2, participants were randomly assigned to one of three conditions: internal attribution of failure, external attribution of failure, or no attribution of failure (control condition). We asked participants to recall one specific mistake that had happened recently and involved one of their subordinates. In the internal attribution condition, participants recalled a serious mistake that involved a subordinate and that they were personally responsible for. In the external attribution condition, participants recalled a serious mistake that involved a subordinate and that their subordinate was responsible for. In the control condition, participants recalled a serious mistake that involved a subordinate, and no further instructions were given as to who was responsible for it. The manipulations for each condition are provided in OSM section V. Participants were then asked to recall and describe the event in as much detail as possible, followed by a series of questions to assess shame and abusive supervision toward the subordinate involved in the mistake.

The same logic for the support of hypotheses outlined in Study 2 applies to Study 3. Again, we did not have specific predictions for the external attribution condition. Since we are comparing conditions (control, internal, and external attribution of failure) to test the mediation effect of internal attribution of failure, we conducted a moderated mediation model with experimental conditions serving as the categorical moderator (PROCESS version 4.0, Model 8). We measured shame and abusive supervision in response to the experimental scenario and conducted a manipulation check.

For vulnerable narcissism ( α  = 0.89) and abusive supervision ( α  = 0.91), we used the same measures as in Study 2.

We adapted five items from Tangney et al. (2007a) to measure shame (r = 0.87) in response to the event recall task (i.e., To what extent did you experience the following emotions when this mistake happened? “I thought about quitting my job”, “I felt incompetent”, “I felt stupid”, “I felt as if I wanted to hide”, “I felt ashamed”).

We adapted six items from the Revised Causal Dimension Scale (CDS II; McAuley et al., 1992 ). Three items measured internal attribution ( α  = 0.85) (e.g., “Were the reasons why this mistake happened… something about you – something about others”, reverse coded) and three items measured external attribution ( α  = 0.79) (e.g., “Were the reasons why this mistake happened… something about your subordinate – something about others”, reverse coded).
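Internal consistency coefficients like the Cronbach’s α values reported for these scales can be computed directly from an item-by-respondent score matrix. A minimal sketch with synthetic data; the simulated values and variable names are illustrative assumptions, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 3-item scale driven by a shared latent factor (illustration only)
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(scale=0.8, size=(200, 3))
alpha = cronbach_alpha(scores)
```

The same function applies to any multi-item scale; for two-item measures, papers often report the inter-item correlation r instead.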

We reported the means, standard deviations, and correlations among variables for Study 3 in OSM section III.

The manipulation check indicated significantly higher levels of internal attribution in the internal attribution condition ( M  = 4.52, SD  = 1.38; 95%CI [4.243, 4.807]) than in the external attribution condition ( M  = 2.48, SD  = 1.15; 95%CI [2.200, 2.749]) and the control condition ( M  = 2.91, SD  = 1.60; 95%CI [2.631, 3.180]), F (2,289) = 58.037, p  < 0.001, η p 2  = 0.287. Similarly, the manipulation check indicated significantly higher levels of external attribution in the external attribution condition ( M  = 5.22, SD  = 1.35; 95%CI [4.944, 5.494]) than in the internal attribution condition ( M  = 3.55, SD  = 1.29; 95%CI [3.264, 3.828]) and the control condition ( M  = 4.61, SD  = 1.52; 95%CI [4.338, 4.888]), F (2,289) = 35.680, p  < 0.001, η p 2  = 0.198. We concluded that the manipulation was successful.
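A manipulation check of this kind amounts to a one-way ANOVA across the three conditions. The following sketch uses synthetic ratings with condition means loosely patterned on those reported above; the group sizes and simulated values are assumptions, not the study’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic internal-attribution ratings per condition (illustrative only)
internal = rng.normal(4.5, 1.4, 98)
external = rng.normal(2.5, 1.2, 97)
control = rng.normal(2.9, 1.6, 97)

f_stat, p_value = stats.f_oneway(internal, external, control)

# Effect size: SS_between / (SS_between + SS_within);
# in a one-way design this equals partial eta squared
groups = [internal, external, control]
grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
eta_p2 = ss_between / (ss_between + ss_within)
```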

We estimated a moderated mediation model in PROCESS version 4.0 (Hayes, 2018) using Model 8 with 5,000 bootstrap samples and 95% confidence intervals, with vulnerable narcissism as the predictor, the three groups (i.e., internal attribution, external attribution, control) as the categorical moderator, shame as the mediator, and abusive supervision as the outcome. Table 5 summarizes the means and standard deviations of the dependent variables by condition. Table 6 summarizes the direct and indirect effects.
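The percentile-bootstrap logic that underlies such indirect-effect tests can be sketched as follows. This is a simplified illustration for a single condition with synthetic data; the path values, sample size, and variable names are assumptions rather than the study’s estimates, and the full PROCESS model additionally handles the categorical moderator:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: path a from m ~ x, path b from y ~ x + m (OLS)."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        est[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(est, [2.5, 97.5])

# Synthetic data for one condition (illustrative paths, not study estimates)
rng = np.random.default_rng(42)
x = rng.normal(size=300)                      # e.g., vulnerable narcissism
m = 0.5 * x + rng.normal(size=300)            # e.g., shame (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(size=300)  # e.g., abusive supervision
lo, hi = bootstrap_ci(x, m, y, n_boot=2000)
```

An interval that excludes zero, as in the output here, is the criterion PROCESS-style analyses use to declare an indirect effect significant.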

Supporting Hypothesis 1 , a positive relationship between vulnerable narcissism and abusive supervision emerged (B = 0.149, SE = 0.059, p  < 0.05, 95%CI [0.033, 0.265]) across the three experimental conditions, as shown in Table  6 .

Hypothesis 2 (i.e., internal attribution of failure mediates the positive relationship between vulnerable narcissism and abusive supervision) is supported if there is a positive relationship between vulnerable narcissism and abusive supervision in the control condition and no (or a weaker) positive relationship between vulnerable narcissism and abusive supervision emerges in the internal attribution condition. As shown in Table 6, the effect of vulnerable narcissism on abusive supervision emerged in the control condition (B = 0.149, SE = 0.059, 95%CI [0.033, 0.265]), whereas this effect became non-significant in the internal attribution condition (B = 0.060, SE = 0.063, 95%CI [−0.065, 0.185]), supporting Hypothesis 2. That is, when the mediator varied freely (i.e., control condition), the positive relationship between vulnerable narcissism and abusive supervision occurred, but it did not occur when the mediator was blocked (i.e., internal attribution condition). Of note, there was no significant effect in the external attribution condition (B = 0.094, SE = 0.060, 95%CI [−0.023, 0.211]).

Hypothesis 3 (i.e., internal attribution of failure and shame serially mediate the positive relationship between vulnerable narcissism and abusive supervision) is supported if there is a positive indirect relationship between vulnerable narcissism, shame, and abusive supervision in the control condition and no (or a weaker) positive indirect relationship between vulnerable narcissism, shame, and abusive supervision in the internal attribution condition. However, contrary to Hypothesis 3, estimates of the conditional indirect relationship between vulnerable narcissism, shame, and abusive supervision suggested that this relationship was not significant in the control condition (B = 0.007, SE = 0.019, 95%CI [−0.029, 0.049]), whereas it was significant in the internal attribution condition (B = 0.053, SE = 0.030, 95%CI [0.005, 0.122]), as shown in Table 6. That is, when the mediator varied freely (i.e., control condition), the predicted positive relationship between vulnerable narcissism, shame, and abusive supervision did not occur. Of note, the external attribution condition resulted in a small effect (B = 0.038, SE = 0.017, 95%CI [0.009, 0.077]).

We again repeated all analyses with grandiose narcissism as a covariate and found that the relationships remained similar. The results again did not replicate for grandiose narcissism (for details see OSM Section III).

Our final study confirmed the positive relationship between vulnerable narcissism and abusive supervision with an event recall task that increased experimental realism. We found that leaders’ internal attribution of failure mediated this relationship. However, this study failed to support the sequential mediation through shame. Some limitations of this study require acknowledgment and can inform future research. Although our manipulation of the attribution of failure was successful, we cannot exclude that memory biases may have affected the event recall (see below for a more detailed discussion of mnemic neglect; Sedikides & Green, 2009 ). Therefore, the question of whether shame is the reason why vulnerable narcissists’ internal attribution of failure results in abusive supervision must be further examined and tested vis-à-vis alternative explanations.

We set out to make three contributions to the business ethics and leadership literature: The first aim of our research was to shed light on leaders’ vulnerable narcissism as an antecedent of abusive supervision, using current developments in psychology, specifically the trifurcated model of narcissism, to advance the understanding of predictors of this unethical form of leadership. Second, we sought to examine the interplay between internal cognitive and affective processes, specifically internal attribution of failure and the moral emotion of shame, as intra-psychic processes that may explain the relationship (Islam, 2020 ). Third, we tested the uniqueness of our findings for leaders’ vulnerable narcissism, ruling out that the same processes may hold for grandiose narcissism.

In relation to the first contribution, across three empirical studies with different designs (a survey study and two experimental manipulation-of-mediator studies, one using scenarios and one prompting an event recall), we found compelling evidence of the positive relationship between leaders’ vulnerable narcissism and their abusive supervision (intentions). We derived our assumptions from the dynamic self-regulatory processing model of narcissism (Morf & Rhodewalt, 2001) and the idea that vulnerable narcissists act out in frustration because they feel inferior and ashamed of themselves (Morf et al., 2011). We assessed abusive supervision from the leader’s point of view (see also Decoster et al., 2023; Gauglitz, 2022) in three different ways: as self-rating, as intentions, and as a recollection of past behavior. The relationship between vulnerable narcissism and abusive supervision (intentions) was stable across those different types of assessments. Our work therefore contributes to a new stream of research that examines the abuser’s experience (e.g., Priesemuth & Bigelow, 2020) with the potential to expand current views of unethical leadership in the business ethics literature (Babalola et al., 2022).

In relation to the second contribution, emphasizing the importance of intra-psychic processes for a psychology of ethics (Islam, 2020 ), our research sheds light on possible cognitive and affective mechanisms linking leaders’ vulnerable narcissism to abusive supervision (intentions). We found that internal attribution of failure, in the form of attribution styles and momentary attribution in response to events, in part explains the relationship between leaders’ vulnerable narcissism and abusive supervision (intentions). This finding emphasizes the paradoxical nature of vulnerable narcissism as set out in the dynamic self-regulatory processing model by Morf and Rhodewalt ( 2001 ): Even though vulnerable narcissists crave approval from others to stabilize their fragile self, they lack the ability to form positive relationships or to show constructive leader behavior (Schyns et al., 2023a , 2023b ) and instead abuse and alienate followers. Although beyond the remit of our study, it is not unthinkable that by abusing others these leaders go even deeper down the rabbit hole of negative self-views and social isolation. Future research could test whether vulnerable narcissism aggravates the relationship between abusive supervision and social worth (Priesemuth & Bigelow, 2020 ).

In relation to the third contribution, our results confirmed that vulnerable narcissism is the more problematic dimension of narcissism when it comes to unethical leadership, as a comparison of our results for vulnerable narcissism with those for grandiose narcissism showed. Indeed, while there is overlap in terms of the antagonistic facet common to both dimensions of narcissism, vulnerable narcissism remains the better predictor of abusive supervision (intentions). Thus, we add to the small yet growing body of studies that are concerned with leaders’ vulnerable narcissism (Schyns, Braun, et al., 2023; Schyns, Gauglitz, et al., 2023; Schyns et al., 2022). Our findings corroborate evidence of the risks that leaders with a fragile self can pose to others from an ethics perspective (Gauglitz, 2022; Neves, 2014; Schyns et al., 2023a, 2023b).

Notwithstanding the contributions of our work, questions remain regarding the role of shame in the relationship between leaders’ vulnerable narcissism and abusive supervision. While Study 2 provided initial support for shame as a sequential mediator, we could not replicate this result in Study 3. This might be due to the different methodological designs applied in these two studies. While in Study 2, participants reacted to a hypothetical scenario, the third study asked participants to report their memories of an event that they had experienced as a leader. Participants might have recalled events that were less shameful, thus making it less likely to find an indirect effect of shame in Study 3. Mnemic neglect refers to the tendency of individuals to recall negative self-threatening feedback less well than self-affirming feedback (Sedikides & Green, 2009). We recommend examining this memory bias in future research on vulnerable narcissism.

Another potentially relevant difference between our studies was that in Study 2, participants were told who was responsible for the event, while in Study 3, they were asked to select personally experienced events from memory. It is possible that changing the agent who makes the attribution (i.e., others in Study 2 versus the leader in Study 3) contributed to the different results. First, in actual events, not only one person but often several individuals contribute to an issue. In Study 2, participants were told that they were responsible for the failure, leaving less room for ambiguity. However, in Study 3, participants were asked to recall an event. Considering that vulnerable narcissists are prone to devaluing others, even leaders who were asked to recall an event that they were responsible for might conclude that blame for the failure is more ambiguous because others also contributed to it, partly to deflect strong, self-threatening feelings about the memory.

Relatedly, applying the displaced aggression framework of abusive supervision (Hoobler & Brass, 2006), Neves (2014) showed that abusive supervisors are more likely to aggress against subordinates with lower core self-evaluation and coworker support. Perhaps vulnerable narcissistic leaders take their shame out on convenient targets (rather than any of their followers). In the scenario experiment (Study 2), participants indicated abusive supervision intentions toward a fictitious target. However, when remembering actual abuse directed at one of their followers (Study 3), leaders might have internally justified their behavior and thus felt less ashamed by thinking that the respective follower deserved the abuse. Future research should investigate to what extent justification via follower characteristics might influence the internal cognitive and affective processes of vulnerable narcissistic leaders. It would also be interesting for future research to differentiate types of triggers. A recent meta-analysis suggests that narcissism relates more strongly to aggression with an affiliation-related provocation (e.g., disliking, social exclusion) rather than a status-related provocation (Kjærvik & Bushman, 2021). Perhaps the shame–abusive supervision link would be stronger if the failure was in an interpersonal rather than a task-related domain (as in Study 2).

Limitations and Future Research

Similar to other studies (Decoster et al., 2013; Gauglitz, 2022), we assessed abusive supervision from the leader’s point of view in three different ways (self-rating, intentions, recall). While this approach avoids issues related to follower ratings of abusive supervisory behavior (see Hansbrough et al., 2015 for a critical discussion of follower ratings) and is appropriate for studying internal processes, it is subject to ego-centric biases. We addressed this issue by using different designs to invoke leader responses, which makes us more confident that the identified relationships hold. However, future research could relate leader self-rated abusive supervision to follower-rated abusive supervision. It would, in this case, also be interesting to differentiate between more active (e.g., abusive supervision) or more passive (e.g., laissez-faire) forms of negative leadership (e.g., Klasmeier et al., 2022) to examine if leader self-rated abusive supervision really translates into active or indeed passive forms of abuse, in line with the idea that vulnerable narcissists might withdraw from social interactions. While individuals high in vulnerable narcissism are unforgiving of others’ mistakes (Lannin et al., 2014), they might spend more time ruminating (Rogoza et al., 2022) or ‘daydreaming’ (Ghinassi et al., 2023) about abusive behaviors than showing them, perhaps using less overt forms of abuse when they do (Mitchell & Ambrose, 2007). We thus encourage future studies to include both leader and follower perceptions of abusive supervision and to differentiate between active and passive forms of abusive supervision.

While in Study 1, we assessed attribution style, in Study 2, the internal attribution was triggered by a specific, experimentally manipulated event. In Study 3, participants recalled an event. While we asked participants to describe the event to make their memory more vivid and strengthen the manipulation, we did not analyze the recalled events. It is possible that different types of events triggered different responses. Future research could conduct qualitative studies to better understand what exactly triggers the internal attribution of failure and perhaps shame in vulnerable narcissistic leaders to further help organizations break the link between this personality trait and its expression.

In addition, experience sampling methodology (ESM) could be fruitfully applied to replicate and expand our results on the underlying processes linking leaders’ vulnerable narcissism and abusive supervision. ESM studies can provide a deeper understanding of momentary experiences in the workplace, and current results show that factors such as time pressure trigger day-to-day abusive supervisory behavior (Zhang & Jia, 2023). Real-time ESM assessment would enable future research to overcome some of the limitations of scenarios or event recall in experiments. It would, for example, be interesting to investigate to what extent working conditions such as time pressure interact with vulnerable narcissism to predict internal attribution of failure, shame, and unethical behavior. Indeed, time pressure could aggravate these issues as leaders might lack the time to reflect on possible longer-term consequences of their abusive behavior.

Furthermore, the aggression literature distinguishes between different forms and purposes of aggression, such as reactive aggression (i.e., hostile, impulsive, emotion-driven) and proactive aggression (i.e., instrumental, pre-meditated; Bushman & Anderson, 2001 ). Recent meta-analyses suggest that vulnerable narcissism relates to reactive aggression (Du et al., 2022 ). While we believe that reactive aggression may serve to stabilize these leaders’ self-esteem after self-threatening events by devaluing others (Morf et al., 2011 ), future research should test this assumption directly to understand vulnerable narcissists’ motivations when they act aggressively.

In addition, while we were interested in abusive supervision, there are many ways in which leaders engage in unethical leadership, and these should be considered as outcomes of leaders’ vulnerable narcissism. There are also possible boundary conditions, which go beyond the micro-level dynamics between leaders and followers that we have considered here (e.g., Hassan et al., 2023, for different levels of boundary conditions of unethical leadership). It would also be interesting to assess subsequent follower outcomes (e.g., counterproductivity; Braun et al., 2018). Followers may engage in retaliation to ‘get even’ with their abusive supervisors (e.g., restore justice perceptions; Liang et al., 2022), particularly when they see the leader as responsible for the event. Together, leader and follower behavior could create a vicious cycle that further exacerbates the threat to vulnerable narcissists’ self-esteem, making reciprocal relationships between abusive supervision and follower behavior an interesting avenue for future research.

Practical Implications

Our research suggests that it is crucial for organizations to address leaders’ vulnerable narcissism as a precursor to unethical behavior. First, organizations have a duty of care to their employees and should protect them from abusive supervision. Ideally, organizations should avoid hiring or promoting vulnerable narcissists into leadership positions to prevent unethical behavior and harm to followers (Mackey et al., 2018 ; Park et al., 2018 , 2019 ; Wang et al., 2015 ) as well as negative downstream consequences (e.g., turnover intentions; Palanski et al., 2014 ). However, implementing narcissism diagnostics in hiring processes might pose legal problems in many countries as it could be construed as discriminating based on a stable personality trait. It may be more suitable to address this issue in promotion decisions, when evidence of leaders’ unethical behavior (such as abusive supervision) should be prohibitive of further progression within the organization.

Second, knowing that vulnerable narcissists are prone to attribute failure internally and that this attribution triggers abusive supervision, organizations should make sure that failure is analyzed thoroughly and constructively to avoid (self-)blaming. Coaching approaches can help leaders negotiate the failure experience, including recognizing that failure has occurred, managing their emotions, and facilitating learning that increases their effectiveness in the future (Newton et al., 2008). Similarly, organizations can encourage a culture that moves away from blaming individuals and instead regards failure as a precursor to learning (e.g., mastery climates). Doing so might help vulnerable narcissistic leaders break the link between negative events and the cognitive response that leads to abusive supervision.

Third, it is important to keep in mind that leaders are often scapegoats for issues in organizations and that their influence on success and failure tends to be overestimated (Meindl, 1995). Organizations can, on the one hand, try to ensure that scapegoating is minimized and, on the other hand, increase these leaders’ self-awareness and meta-skills in dealing with failure. Such interventions could either prevent internal attribution processes in the first place or help leaders break the link between their initial reactions and abusive supervision by strengthening personal resources to deal with negative feedback in a more adaptive way.

While largely disregarded in organizational research to date, vulnerable narcissism represents a considerable risk factor in organizations because it facilitates unethical behavior. This research advances the understanding of leaders’ vulnerable narcissism as a predictor of abusive supervision and the role of leaders’ attributing failure internally. We hope to inspire future business ethics research for a more differentiated understanding of the intra-psychic processes linked to leaders’ vulnerable narcissism to protect followers and organizations from its unethical consequences.

Data availability

Data are available from the first author upon reasonable request.

The OASQ asks participants to indicate their answers in response to specific situations rather than rating general attitudes or behaviors as is common in survey measures. As a result, it has been argued that internal consistency indices may not fully reflect the OASQ’s reliability (Martinko et al., 2018). The OASQ’s internal consistency in our study, while below common reliability standards, aligns with previous results for a smaller subset of situations (α = .66 for internal/external attributions in three situations; Martinko et al., 2018).

We acknowledge the limitations of the two-item shame measure. In Study 2, we additionally included a measure of participants’ chronic shame. We asked participants to indicate two separate responses to five negative work-related situations from the Test of Self-Conscious Affect (TOSCA-3 short; Tangney et al., 2007a), with one response representing shame and one representing guilt, each rated on a 5-point Likert scale from 1 (very unlikely) to 5 (very likely). The two-item shame measure correlated r = .40 (p < .001) with chronic shame and r = −.01 (p = .92) with chronic guilt, supporting its convergent and divergent validity.
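A convergent/divergent validity check of this kind reduces to two Pearson correlations. A minimal sketch with synthetic scores; the simulated loadings and variable names are illustrative assumptions, not the study’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 200
# Synthetic scores: a brief shame measure constructed to track chronic shame
# but to be unrelated to chronic guilt (illustration only)
chronic_shame = rng.normal(size=n)
chronic_guilt = rng.normal(size=n)
brief_shame = 0.45 * chronic_shame + rng.normal(scale=0.9, size=n)

r_conv, p_conv = stats.pearsonr(brief_shame, chronic_shame)  # convergent
r_div, p_div = stats.pearsonr(brief_shame, chronic_guilt)    # divergent
```

A sizeable, significant r_conv alongside a near-zero r_div is the pattern that supports convergent and divergent validity.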

Babalola, M. T., Bal, M., Cho, C. H., Garcia-Lorenzo, L., Guedhami, O., Liang, H., Shailer, G., & van Gils, S. (2022). Bringing excitement to empirical business ethics research: Thoughts on the future of business ethics. Journal of Business Ethics, 180 (3), 903–916. https://doi.org/10.1007/s10551-022-05242-7


Bai, Y., Lu, L., & Lin-Schilstra, L. (2022). Auxiliaries to abusive supervisors: The spillover effects of peer mistreatment on employee performance. Journal of Business Ethics, 178 (1), 219–237. https://doi.org/10.1007/s10551-021-04768-6

Bauer, J. A., & Spector, P. E. (2015). Discrete negative emotions and counterproductive work behavior. Human Performance, 28 (4), 307–331. https://doi.org/10.1080/08959285.2015.1021040

Braun, S., Aydin, N., Frey, D., & Peus, C. (2018). Leader narcissism predicts malicious envy and supervisor-targeted counterproductive work behavior: Evidence from field and experimental research. Journal of Business Ethics, 151 (3), 725–741. https://doi.org/10.1007/s10551-016-3224-5

Bushman, B. J., & Anderson, C. A. (2001). Is it time to pull the plug on hostile versus instrumental aggression dichotomy? Psychological Review, 108 (1), 273–279. https://doi.org/10.1037/0033-295X.108.1.273

Cao, W., Li, P., van der Wal, R. C., & Taris, T. W. (2023). Leadership and workplace aggression: A meta-analysis. Journal of Business Ethics, 186 (2), 347–367. https://doi.org/10.1007/s10551-022-05184-0

Chen, S., Binning, K. R., Manke, K. J., Brady, S. T., McGreevy, E. M., Betancur, L., Limeri, L. B., & Kaufmann, N. (2021). Am I a science person? A strong science identity bolsters minority students’ sense of belonging and performance in college. Personality and Social Psychology Bulletin, 47 (4), 593–606. https://doi.org/10.1177/0146167220936480

Crowe, M. L., Lynam, D. R., Campbell, W. K., & Miller, J. D. (2019). Exploring the structure of narcissism: Toward an integrated solution. Journal of Personality, 87 (6), 1151–1169. https://doi.org/10.1111/jopy.12464

Daniels, M. A., & Robinson, S. L. (2019). The shame of it all: A review of shame in organizational life. Journal of Management, 45 (6), 2448–2473. https://doi.org/10.1177/0149206318817604

Decoster, S., Camps, J., Stouten, J., Vandevyvere, L., & Tripp, T. M. (2013). Standing by your organization: The impact of organizational identification and abusive supervision on followers’ perceived cohesion and tendency to gossip. Journal of Business Ethics, 118 (3), 623–634. https://doi.org/10.1007/s10551-012-1612-z

Decoster, S., De Schutter, L., Menges, J., De Cremer, D., & Stouten, J. (2023). Does change incite abusive supervision? The role of transformational change and hindrance stress. Human Resource Management Journal, 33 (4), 957–976. https://doi.org/10.1111/1748-8583.12494

Denissen, J. J., Thomaes, S., & Bushman, B. J. (2018). Self-regulation and aggression: Aggression-provoking cues, individual differences, and self-control strategies. In K. de Ridder, D. Adriaanse, & M. Fujita (Eds.), Routledge international handbook of self-control in health and well-being (pp. 330–339). Routledge.


De Hoogh, A. H. B., Den Hartog, D. N., & Belschak, F. D. (2021). Showing one’s true colors: Leader machiavellianism, rules and instrumental climate, and abusive supervision. Journal of Organizational Behavior, 42 (7), 851–866. https://doi.org/10.1002/job.2536

Dinić, B. M., Sokolovska, V., & Tomašević, A. (2022). The narcissism network and centrality of narcissism features. Current Psychology, 41 (11), 7990–8001. https://doi.org/10.1007/s12144-020-01250-w

Du, T. V., Lane, S. P., Miller, J. D., & Lynam, D. R. (2024). Momentary assessment of the relations between narcissistic traits, interpersonal behaviors, and aggression. Journal of Personality, 92 (2), 405–420. https://doi.org/10.1111/jopy.12831

Du, T. V., Miller, J. D., & Lynam, D. R. (2022). The relation between narcissism and aggression: A meta-analysis. Journal of Personality, 90 (4), 574–594. https://doi.org/10.1111/jopy.12684

Eissa, G., & Lester, S. W. (2017). Supervisor role overload and frustration as antecedents of abusive supervision: The moderating role of supervisor personality. Journal of Organizational Behavior, 38 (3), 307–326. https://doi.org/10.1002/job.2123

Eissa, G., & Lester, S. W. (2022). A moral disengagement investigation of how and when supervisor psychological entitlement instigates abusive supervision. Journal of Business Ethics, 180 (2), 675–694. https://doi.org/10.1007/s10551-021-04787-3

Eissa, G., Lester, S. W., & Gupta, R. (2020). Interpersonal deviance and abusive supervision: The mediating role of supervisor negative emotions and the moderating role of subordinate organizational citizenship behavior. Journal of Business Ethics, 166 (3), 577–594. https://doi.org/10.1007/s10551-019-04130-x

Fan, X. L., Wang, Q. Q., Liu, J., Liu, C., & Cai, T. (2020). Why do supervisors abuse subordinates? Effects of team performance, regulatory focus, and emotional exhaustion. Journal of Occupational and Organizational Psychology, 93 (3), 605–628. https://doi.org/10.1111/joop.12307

Fischer, T., Tian, A. W., Lee, A., & Hughes, D. J. (2021). Abusive supervision: A systematic review and fundamental rethink. The Leadership Quarterly, 32 (6), 101540. https://doi.org/10.1016/j.leaqua.2021.101540

Fosse, T. H., Martinussen, M., Sørlie, H. O., Skogstad, A., Martinsen, Ø. L., & Einarsen, S. V. (2024). Neuroticism as an antecedent of abusive supervision and laissez-faire leadership in emergent leaders: The role of facets and agreeableness as a moderator. Applied Psychology, 73 (2), 675–697. https://doi.org/10.1111/apps.12495

Freis, S. D., Brown, A. A., Carroll, P. J., & Arkin, R. M. (2015). Shame, rage, and unsuccessful motivated reasoning in vulnerable narcissism. Journal of Social and Clinical Psychology, 34 (10), 877–895. https://doi.org/10.1521/jscp.2015.34.10.877

Gauglitz, I. K. (2022). Different forms of narcissism and leadership. Zeitschrift Für Psychologie, 230 (4), 321–324. https://doi.org/10.1027/2151-2604/a000480

Gauglitz, I. K., & Schyns, B. (2024). Triggered abuse: How and why leaders with narcissistic rivalry react to follower deviance. Journal of Business Ethics . https://doi.org/10.1007/s10551-023-05579-7

Ghinassi, S., Fioravanti, G., & Casale, S. (2023). Is shame responsible for maladaptive daydreaming among grandiose and vulnerable narcissists? A general population study. Personality and Individual Differences, 206 , 112122. https://doi.org/10.1016/j.paid.2023.112122

Grassinger, R., & Dresel, M. (2017). Who learns from errors on a class test? Antecedents and profiles of adaptive reactions to errors in a failure situation. Learning and Individual Differences, 53 , 61–68. https://doi.org/10.1016/j.lindif.2016.11.009

Greenwood, M., & Freeman, R. E. (2017). Focusing on ethics and broadening our intellectual base. Journal of Business Ethics, 140 (1), 1–3. https://doi.org/10.1007/s10551-016-3414-1

Grube, A., Schroer, J., Hentzschel, C., & Hertel, G. (2008). The event reconstruction method: An efficient measure of experience-based job satisfaction. Journal of Occupational and Organizational Psychology, 81 (4), 669–689. https://doi.org/10.1348/096317907X251578

Guo, L., Chiang, J. T. J., Mao, J. Y., & Chien, C. J. (2020). Abuse as a reaction of perfectionistic leaders: A moderated mediation model of leader perfectionism, perceived control, and subordinate feedback seeking on abusive supervision. Journal of Occupational and Organizational Psychology, 93 (3), 790–810. https://doi.org/10.1111/joop.12308

Harvey, P., Madison, K., Martinko, M., Crook, T. R., & Crook, T. A. (2014). Attribution theory in the organizational sciences: The road traveled and the path ahead. Academy of Management Perspectives, 28 (2), 128–146. https://doi.org/10.5465/amp.2012.0175

Hassan, S., Kaur, P., Muchiri, M., Ogbonnaya, C., & Dhir, A. (2023). Unethical leadership: Review, synthesis and directions for future research. Journal of Business Ethics, 183 (2), 511–550. https://doi.org/10.1007/s10551-022-05081-6

Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach . The Guilford Press.

Heider, F. (1958). The psychology of interpersonal relations . Wiley.

Highhouse, S., & Brooks, M. E. (2021). A simple solution to a complex problem: Manipulate the mediator! Industrial and Organizational Psychology, 14 (4), 493–496. https://doi.org/10.1017/iop.2021.117

Hoobler, J. M., & Brass, D. J. (2006). Abusive supervision and family undermining as displaced aggression. Journal of Applied Psychology, 91 (5), 1125–1133. https://doi.org/10.1037/0021-9010.91.5.1125

Islam, G. (2020). Psychology and business ethics: A multi-level research agenda. Journal of Business Ethics, 165 (1), 1–13. https://doi.org/10.1007/s10551-019-04107-w

Kahneman, D., Krueger, A. B., Schkade, D. A., Schwarz, N., & Stone, A. A. (2004). A survey method for characterizing daily life experience: The day reconstruction method. Science, 306 (5702), 1776–1780. https://doi.org/10.1126/science.1103572

Kelley, H. H. (1973). The processes of causal attribution. American Psychologist, 28 (2), 107–128. https://doi.org/10.1037/h0034225

Kent, R. L., & Martinko, M. J. (2018). The development and evaluation of a scale to measure organizational attributional style. In M. J. Martinko (Ed.), Attribution theory (pp. 53–75). Routledge.

Kim, D., Lanaj, K., & Koopman, J. (2024). Incivility affects actors too: The complex effects of incivility on perpetrators work and home behaviors. Journal of Business Ethics . https://doi.org/10.1007/s10551-024-05714-y

Kjærvik, S. L., & Bushman, B. J. (2021). The link between narcissism and aggression: A meta-analytic review. Psychological Bulletin, 147 (5), 477–503. https://doi.org/10.1037/bul0000323

Klasmeier, K. N., Schleu, J. E., Millhoff, C., Poethke, U., & Bormann, K. C. (2022). On the destructiveness of laissez-faire versus abusive supervision: A comparative, multilevel investigation of destructive forms of leadership. European Journal of Work and Organizational Psychology, 31 (3), 406–420. https://doi.org/10.1080/1359432X.2021.1968375

Krizan, Z., & Johar, O. (2015). Narcissistic rage revisited. Journal of Personality and Social Psychology, 108 (5), 784–801. https://doi.org/10.1037/pspp0000013

Kruglanski, A. W., Ellenberg, M., Szumowska, E., Molinario, E., Speckhard, A., Leander, N. P., Pierro, A., Di Cicco, G., & Bushman, B. J. (2023). Frustration–aggression hypothesis reconsidered: The role of significance quest. Aggressive Behavior, 49 (5), 445–468. https://doi.org/10.1002/ab.22092

Lamer, S. A., Dvorak, P., Biddle, A. M., Pauker, K., & Weisbuch, M. (2022). The transmission of gender stereotypes through televised patterns of nonverbal bias. Journal of Personality and Social Psychology, 123 (6), 1315–1335. https://doi.org/10.1037/pspi0000390

Lannin, D. G., Guyll, M., Krizan, Z., Madon, S., & Cornish, M. (2014). When are grandiose and vulnerable narcissists least helpful? Personality and Individual Differences, 56 , 127–132. https://doi.org/10.1016/j.paid.2013.08.035

Leith, K. P., & Baumeister, R. F. (1998). Empathy, shame, guilt, and narratives of interpersonal conflicts: Guilt-prone people are better at perspective taking. Journal of Personality, 66 (1), 1–37. https://doi.org/10.1111/1467-6494.00001

Lewis, H. B. (1971). Shame and guilt in neurosis . International Universities Press.

Liang, L. H., Coulombe, C., Brown, D. J., Lian, H., Hanig, S., Ferris, D. L., & Keeping, L. M. (2022). Can two wrongs make a right? The buffering effect of retaliation on subordinate well-being following abusive supervision. Journal of Occupational Health Psychology, 27 (1), 37–52. https://doi.org/10.1037/ocp0000291

Mackey, J. D., Brees, J. R., McAllister, C. P., Zorn, M. L., Martinko, M. J., & Harvey, P. (2018). Victim and culprit? The effects of entitlement and felt accountability on perceptions of abusive supervision and perpetration of workplace bullying. Journal of Business Ethics, 153 (3), 659–673. https://doi.org/10.1007/s10551-016-3348-7

Mackey, J. D., Frieder, R. E., Brees, J. R., & Martinko, M. J. (2017). Abusive supervision: A meta-analysis and empirical review. Journal of Management, 43 (6), 1940–1965. https://doi.org/10.1177/0149206315573997

Mackey, J. D., McAllister, C. P., Ellen, B. P., III., & Carson, J. E. (2021a). A meta-analysis of interpersonal and organizational workplace deviance research. Journal of Management, 47 (3), 597–622. https://doi.org/10.1177/01492063198626

Mackey, J. D., Parker Ellen, B., McAllister, C. P., & Alexander, K. C. (2021b). The dark side of leadership: A systematic literature review and meta-analysis of destructive leadership research. Journal of Business Research, 132 , 705–718. https://doi.org/10.1016/j.jbusres.2020.10.037

MacKinnon, D. P., & Pirlott, A. G. (2015). Statistical approaches for enhancing causal interpretation of the M to Y relation in mediation analysis. Personality and Social Psychology Review, 19 (1), 30–43. https://doi.org/10.1177/1088868314542878

Martinko, M. J., Harvey, P., Brees, J. R., & Mackey, J. (2013). A review of abusive supervision research. Journal of Organizational Behavior . https://doi.org/10.1002/job.1888

Martinko, M. J., Harvey, P., & Dasborough, M. T. (2011). Attribution theory in the organizational sciences: A case of unrealized potential. Journal of Organizational Behavior, 32 (1), 144–149. https://doi.org/10.1002/job.690

Martinko, M. J., Harvey, P., & Douglas, S. C. (2007). The role, function, and contribution of attribution theory to leadership: A review. The Leadership Quarterly, 18 (6), 561–585. https://doi.org/10.1016/j.leaqua.2007.09.004

Martinko, M. J., Randolph-Seng, B., Shen, W., Brees, J. R., Mahoney, K. T., & Kessler, S. R. (2018). An examination of the influence of implicit theories, attribution styles, and performance cues on questionnaire measures of leadership. Journal of Leadership & Organizational Studies, 25 (1), 116–133. https://doi.org/10.1177/15480518177203

McAuley, E., Duncan, T. E., & Russell, D. W. (1992). Measuring causal attributions: The revised causal dimension scale (CDSII). Personality and Social Psychology Bulletin, 18 (5), 566–573. https://doi.org/10.1177/0146167292185006

Meindl, J. R. (1995). The romance of leadership as a follower-centric theory: A social constructionist approach. The Leadership Quarterly, 6 (3), 329–341. https://doi.org/10.1016/1048-9843(95)90012-8

Miller, J. D., Back, M. D., Lynam, D. R., & Wright, A. G. C. (2021). Narcissism today: What we know and what we need to learn. Current Directions in Psychological Science, 30 (6), 519–525. https://doi.org/10.1177/09637214211044109

Miller, J. D., Hoffman, B. J., Gaughan, E. T., Gentile, B., Maples, J., & Keith Campbell, W. (2011). Grandiose and vulnerable narcissism: A nomological network analysis: Variants of narcissism. Journal of Personality, 79 (5), 1013–1042. https://doi.org/10.1111/j.1467-6494.2010.00711.x

Miller, J. D., Lynam, D. R., Vize, C., Crowe, M., Sleep, C., Maples-Keller, J. L., Few, L. R., & Campbell, W. K. (2018). Vulnerable narcissism is (mostly) a disorder of neuroticism. Journal of Personality, 86 (2), 186–199. https://doi.org/10.1111/jopy.12303

Mitchell, M. S., & Ambrose, M. L. (2007). Abusive supervision and workplace deviance and the moderating effects of negative reciprocity beliefs. Journal of Applied Psychology, 92 (4), 1159–1168. https://doi.org/10.1037/0021-9010.92.4.1159

Mitchell, M. S., Rivera, G., & Treviño, L. K. (2023). Unethical leadership: A review, analysis, and research agenda. Personnel Psychology, 76 (2), 547–583. https://doi.org/10.1111/peps.12574

Morf, C. C., Horvath, S., & Torchetti, L. (2011). Narcissistic self-enhancement. In M. D. Alicke & C. Sedikides (Eds.), Handbook of self-enhancement and self-protection (pp. 399–424). Guilford Press.

Morf, C. C., & Rhodewalt, F. (2001). Unraveling the paradoxes of narcissism: A dynamic self-regulatory processing model. Psychological Inquiry, 12 (4), 177–196. https://doi.org/10.1207/S15327965PLI1204_1

Morf, C. C., Schürch, E., Küfner, A., Siegrist, P., Vater, A., Back, M., Mestel, R., & Schröder-Abé, M. (2017). Expanding the nomological net of the Pathological Narcissism Inventory: German validation and extension in a clinical inpatient sample. Assessment, 24 (4), 419–443. https://doi.org/10.1177/1073191115627010

Murphy, S. A., & Kiffin-Petersen, S. (2017). The exposed self: A multilevel model of shame and ethical behavior. Journal of Business Ethics, 141 (4), 657–675. https://doi.org/10.1007/s10551-016-3185-8

Muthén, L. K., & Muthén, B. O. (1998–2024). Mplus: Statistical analysis with latent variables: User’s guide . Muthén & Muthén.

Neves, P. (2014). Taking it out on survivors: Submissive employees, downsizing, and abusive supervision. Journal of Occupational and Organizational Psychology, 87 (3), 507–534. https://doi.org/10.1111/joop.12061

Newton, N. A., Khanna, C., & Thompson, J. (2008). Workplace failure: Mastering the last taboo. Consulting Psychology Journal: Practice and Research, 60 (3), 227–245. https://doi.org/10.1037/1065-9293.60.3.227

Palanski, M., Avey, J. B., & Jiraporn, N. (2014). The effects of ethical leadership and abusive supervision on job search behaviors in the turnover process. Journal of Business Ethics, 121 (1), 135–146. https://doi.org/10.1007/s10551-013-1690-6

Pan, S.-Y., & Lin, K. J. (2018). Who suffers when supervisors are unhappy? The roles of leader–member exchange and abusive supervision. Journal of Business Ethics, 151 (3), 799–811. https://doi.org/10.1007/s10551-016-3247-y

Park, H., Hoobler, J. M., Wu, J., Liden, R. C., Hu, J., & Wilson, M. S. (2019). Abusive supervision and employee deviance: A multifoci justice perspective. Journal of Business Ethics, 158 , 1113–1131. https://doi.org/10.1007/s10551-022-05208-9

Park, J. H., Carter, M. Z., DeFrank, R. S., & Deng, Q. (2018). Abusive supervision, psychological distress, and silence: The effects of gender dissimilarity between supervisors and subordinates. Journal of Business Ethics, 153 (3), 775–792. https://doi.org/10.1007/s10551-016-3384-3

Di Pierro, R., Costantini, G., Benzi, I. M. A., Madeddu, F., & Preti, E. (2019). Grandiose and entitled, but still fragile: A network analysis of pathological narcissistic traits. Personality and Individual Differences, 140 , 15–20. https://doi.org/10.1016/j.paid.2018.04.003

Pincus, A. L., Ansell, E. B., Pimentel, C. A., Cain, N. M., Wright, A. G. C., & Levy, K. N. (2009). Initial construction and validation of the pathological narcissism inventory. Psychological Assessment, 21 (3), 365–379. https://doi.org/10.1037/a0016530

Pircher Verdorfer, A., Belschak, F., & Bobbio, A. (2023). Felt or thought: Distinct mechanisms underlying exploitative leadership and abusive supervision. Journal of Business Ethics . https://doi.org/10.1007/s10551-023-05543-5

Pirlott, A. G., & MacKinnon, D. P. (2016). Design approaches to experimental mediation. Journal of Experimental Social Psychology, 66 , 29–38. https://doi.org/10.1016/j.jesp.2015.09.012

Priesemuth, M., & Bigelow, B. (2020). It hurts me too! (or not?): Exploring the negative implications for abusive bosses. Journal of Applied Psychology, 105 (4), 410–421. https://doi.org/10.1037/apl0000447

Rogoza, R., Cieciuch, J., & Strus, W. (2022). Vulnerable isolation and enmity concept: Disentangling the blue and dark face of vulnerable narcissism. Journal of Research in Personality, 96 , 104167. https://doi.org/10.1016/j.jrp.2021.104167

Rogoza, R., Żemojtel-Piotrowska, M., Kwiatkowska, M. M., & Kwiatkowska, K. (2018). The bright, the dark, and the blue face of narcissism: The spectrum of narcissism in its relations to the metatraits of personality, self-esteem, and the nomological network of shyness, loneliness, and empathy. Frontiers in Psychology, 9 , 343. https://doi.org/10.3389/fpsyg.2018.00343

Rohmann, E., Brailovskaia, J., & Bierhoff, H.-W. (2021). The framework of self-esteem: Narcissistic subtypes, positive/negative agency, and self-evaluation. Current Psychology, 40 (10), 4843–4850. https://doi.org/10.1007/s12144-019-00431-6

Rohmann, E., Hanke, S., & Bierhoff, H.-W. (2019). Grandiose and vulnerable narcissism in relation to life satisfaction, self-esteem, and self-construal. Journal of Individual Differences, 40 (4), 194–203. https://doi.org/10.1027/1614-0001/a000292

Rosenthal, S. A., & Pittinsky, T. L. (2006). Narcissistic leadership. The Leadership Quarterly, 17 (6), 617–633. https://doi.org/10.1016/j.leaqua.2006.10.005

Di Sarno, M., Zimmermann, J., Madeddu, F., Casini, E., & Di Pierro, R. (2020). Shame behind the corner? A daily diary investigation of pathological narcissism. Journal of Research in Personality, 85 , 103924. https://doi.org/10.1016/j.jrp.2020.103924

Schaumberg, R. L., & Tracy, J. L. (2020). From self-consciousness to success: When and why self-conscious emotions promote positive employee outcomes. In L.-Q. Yang, R. Cropanzano, C. S. Daus, & V. Martínez-Tur (Eds.), The Cambridge handbook of workplace affect (pp. 414–425). Cambridge University Press.

Schilling, J., & May, D. (2015). Negative und destruktive Führung [Negative and destructive leadership]. In J. Felfe (Ed.), Trends der Psychologischen Führungsforschung: Neue Konzepte, Methoden und Erkenntnisse (pp. 317–330). Hogrefe.

Schoenleber, M., Johnson, L. R., & Berenbaum, H. (2024). Self-conscious emotion traits & reactivity in narcissism. Current Psychology, 43 , 11546–11558. https://doi.org/10.1007/s12144-023-05256-y

Schoenleber, M., Roche, M. J., Wetzel, E., Pincus, A. L., & Roberts, B. W. (2015). Development of a brief version of the Pathological Narcissism Inventory. Psychological Assessment, 27 (4), 1520–1526. https://doi.org/10.1037/pas0000158

Schyns, B., Braun, S., & Xia, Y. E. (2023a). What motivates narcissistic individuals to lead? The role of identity across cultures. Personality and Individual Differences, 206 , 112107. https://doi.org/10.1016/j.paid.2023.112107

Schyns, B., Gauglitz, I. K., Gilmore, S., & Nieberle, K. (2023b). Vulnerable narcissistic leadership meets Covid-19: The relationship between vulnerable narcissistic leader behaviour and subsequent follower irritation. European Journal of Work and Organizational Psychology, 32 (6), 816–826. https://doi.org/10.1080/1359432X.2023.2252130

Schyns, B., Lagowska, U., & Braun, S. (2022). Me, me, me: Narcissism and motivation to lead. Zeitschrift Für Psychologie, 230 (4), 330–334. https://doi.org/10.1027/2151-2604/a000504

Schyns, B., & Schilling, J. (2013). How bad are the effects of bad leaders? A meta-analysis of destructive leadership and its outcomes. The Leadership Quarterly, 24 (1), 138–158. https://doi.org/10.1016/j.leaqua.2012.09.001

Sedikides, C., & Green, J. D. (2009). Memory as a self-protective mechanism. Social and Personality Psychology Compass, 3 (6), 1055–1068. https://doi.org/10.1111/j.1751-9004.2009.00220.x

Smollan, R. K., & Singh, S. (2022). The emotions of failure in organizational life. In R. H. Humphrey, N. M. Ashkanasy, & A. C. Troth (Eds.), Emotions and negativity (pp. 13–34). Emerald.

Tangney, J. P., Dearing, R. L., Brown, C. B., Stuewig, J., Wagner, P. E., & Gramzow, R. (2007a). The Test of Self-Conscious Affect-3, short client version (TOSCA-3SC) . George Mason University.

Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007b). What’s moral about the self-conscious emotions? In J. L. Tracy, R. W. Robins, & J. P. Tangney (Eds.), The self-conscious emotions: Theory and research (pp. 21–37). The Guilford Press.

Tangney, J. P., Wagner, P., Fletcher, C., & Gramzow, R. (1992). Shamed into anger? The relation of shame and guilt to anger and self-reported aggression. Journal of Personality and Social Psychology, 62 (4), 669–675. https://doi.org/10.1037/0022-3514.62.4.669

Tepper, B. J. (2000). Consequences of abusive supervision. Academy of Management Journal, 43 (2), 178–190. https://doi.org/10.2307/1556375

Tepper, B. J., Simon, L., & Park, H. M. (2017). Abusive supervision. Annual Review of Organizational Psychology and Organizational Behavior, 4 (1), 123–152. https://doi.org/10.1146/annurev-orgpsych-041015-062539

Tett, R. P., Toich, M. J., & Ozkum, S. B. (2021). Trait activation theory: A review of the literature and applications to five lines of personality dynamics research. Annual Review of Organizational Psychology and Organizational Behavior, 8 (1), 199–233. https://doi.org/10.1146/annurev-orgpsych-012420-062228

Tracy, J. L., & Robins, R. W. (2007). Self-conscious emotions: Where self and emotion meet. In C. Sedikides & S. J. Spencer (Eds.), The self (pp. 187–209). Psychology Press.

Tuncdogan, A., Acar, O. A., & Stam, D. (2017). Individual differences as antecedents of leader behavior: Towards an understanding of multi-level outcomes. The Leadership Quarterly, 28 (1), 40–64. https://doi.org/10.1016/j.leaqua.2016.10.011

Ünal, A. F., Warren, D. E., & Chen, C. C. (2012). The normative foundations of unethical supervision in organizations. Journal of Business Ethics, 107 (1), 5–19. https://doi.org/10.1007/s10551-012-1300-z

Vize, C. E., Collison, K. L., Crowe, M. L., Campbell, W. K., Miller, J. D., & Lynam, D. R. (2019). Using dominance analysis to decompose narcissism and its relation to aggression and externalizing outcomes. Assessment, 26 (2), 260–270. https://doi.org/10.1177/1073191116685811

Waldman, D. A., Wang, D., Hannah, S. T., Owens, B. P., & Balthazard, P. A. (2018). Psychological and neurological predictors of abusive supervision. Personnel Psychology, 71 (3), 399–421. https://doi.org/10.1111/peps.12262

Wang, G., Harms, P. D., & Mackey, J. D. (2015). Does it take two to tangle? Subordinates’ perceptions of and reactions to abusive supervision. Journal of Business Ethics, 131 (2), 487–503. https://doi.org/10.1007/s10551-014-2292-7

Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92 (4), 548–573. https://doi.org/10.1037/0033-295X.92.4.548

Wisse, B., & Sleebos, E. (2016). When the dark ones gain power: Perceived position power strengthens the effect of supervisor Machiavellianism on abusive supervision in work teams. Personality and Individual Differences, 99 , 122–126. https://doi.org/10.1016/j.paid.2016.05.019

Xi, M., He, W., Fehr, R., & Zhao, S. (2022). Feeling anxious and abusing low performers: A multilevel model of high performance work systems and abusive supervision. Journal of Organizational Behavior, 43 (1), 91–111. https://doi.org/10.1002/job.2558

Zhang, Y., & Bednall, T. C. (2016). Antecedents of abusive supervision: A meta-analytic review. Journal of Business Ethics, 139 (3), 455–471. https://doi.org/10.1007/s10551-015-2657-6

Zhang, Z., & Jia, X. (2023). No time for ethics: How and when time pressure leads to abusive supervisory behavior. Journal of Business Ethics, 188 (4), 807–825. https://doi.org/10.1007/s10551-023-05510-0

Author information

Authors and Affiliations

Durham University Business School, Durham University, Durham, UK

Susanne Braun & Robert G. Lord

NEOMA Business School, Reims, France

Birgit Schyns

Surrey Business School, University of Surrey, Guildford, UK

Yuyan Zheng

Corresponding author

Correspondence to Susanne Braun .

Ethics declarations

Competing Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Research Involving Human Participants

Approval for all studies included in this manuscript was obtained from the ethics committee of Durham University Business School, UK. The procedures used in these studies adhere to the tenets of the Declaration of Helsinki.

Informed Consent

Informed consent was obtained from all individual participants included in the three studies.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 41 KB)

Rights and Permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Braun, S., Schyns, B., Zheng, Y. et al. When Vulnerable Narcissists Take the Lead: The Role of Internal Attribution of Failure and Shame for Abusive Supervision. J Bus Ethics (2024). https://doi.org/10.1007/s10551-024-05805-w

Download citation

Received : 13 September 2023

Accepted : 14 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1007/s10551-024-05805-w

Keywords

  • Abusive supervision
  • Internal attribution
  • Vulnerable narcissism

IMAGES

  1. PPT

    experimental research ethics

  2. PPT

    experimental research ethics

  3. Research Ethics: Definition, Principles and Advantages

    experimental research ethics

  4. what is research ethics

    experimental research ethics

  5. FREE 10+ Research Ethics Samples & Templates in MS Word

    experimental research ethics

  6. Research Ethics

    experimental research ethics

VIDEO

  1. 9-Minutes Guide to Epidemiology Study Designs & Ethics

  2. Ethical Considerations in Research

  3. The Shocking Truth About The Tuskegee Syphilis Experimental Studies

  4. RESEARCH-ETHICS-25-07-2024-CHAPTER-COMPLETED

  5. Ethical review for research with human participants at TU Eindhoven

  6. Battlefield Vision AI Ethics

COMMENTS

  1. Ethics in field experimentation: A call to establish new standards to

    Keywords: ethics, field experiments, research There has been a rapid and dangerous decline in adherence to the core foundations of ethical research on human participants when it comes to field experiments in the social, behavioral, and psychological sciences ( 1 - 7 ).

  2. Ethical Considerations in Research

    Research ethics are a set of principles that guide your research designs and practices in both quantitative and qualitative research. In this article, you will learn about the types and examples of ethical considerations in research, such as informed consent, confidentiality, and avoiding plagiarism. You will also find out how to apply ethical principles to your own research projects with ...

  3. Ethics in scientific research: a lens into its importance, history, and

    Ethics are a guiding principle that shapes the conduct of researchers. It influences both the process of discovery and the implications and applications of scientific findings 1. Ethical considerations in research include, but are not limited to, the management of data, the responsible use of resources, respect for human rights, the treatment ...

  4. How Should the Three R's Be Revised and Why?

    Russell and Burch published The Principles of Humane Experimental Technique in 1959, which established the "3 R's" as key principles that govern use of nonhuman animals in a laboratory setting. 1 Today, the 3 R's is the most well-known ethical framework for conducting scientific research using nonhuman animals. The 3 R's—refinement, reduction, and replacement—are almost universally ...

  5. What Is Ethics in Research and Why Is It Important?

    Ethical norms also serve the aims or goals of research and apply to people who conduct scientific research or other scholarly or creative activities. There is even a specialized discipline, research ethics, which studies these norms. See Glossary of Commonly Used Terms in Research Ethics and Research Ethics Timeline.

  6. Ethical Issues in Research: Perceptions of Researchers, Research Ethics

    In the context of academic research, a diversity of ethical issues, conditioned by the different roles of members within these institutions, arise. Previous studies on this topic addressed mainly the perceptions of researchers. However, to our knowledge, ...

  7. Ethics of Experimental Research

    Ethics of Experimental Research. The ethical fundaments of research are objectivity and logic. The subjective and. objective elements in the procedure: formulation of a problem - hypothesis - ex- perimentation - selection of arguments - theory are discussed with examples from experimental biology, foremost plant physiology.

  8. PDF Ethics in Experimental Research

    Ethics in Experimental Research Dr. Felicia Pratto. Underlying Principles of Ethics With power and authority come responsibility Researchers must guard against conflicts of interests-especially that our getting research is our own agenda, and may benefit us whereas ethics requires that participants' well-being (of particular types) comes ...

  9. Exploring Experimental Psychology

    Experimental Psychology is intended to provide a fundamental understanding of the basics of experimental research in the psychological sciences. Experimental Psychology by Jackie Anson is modified version of Research Methods in Psychology which was adapted by Michael G. Dudley and is licensed under Creative Commons Attribution-NonCommercial.

  10. Ethics of psychological research

    The course emphasizes the protections offered by ethics codes for human participants, ensuring their safety and well-being throughout the research process. Ethical considerations in planning research studies using nonhuman animals are also thoroughly examined, with a focus on the importance of the three Rs (Replacement, Reduction, and ...

  11. Ethical Considerations in Psychology Research

    Ethics refers to the correct rules of conduct necessary when carrying out research. We have a moral responsibility to protect research participants from harm.

  12. Ethics Guidelines

    The Code of Ethics complements the NeurIPS Code of Conduct, which focuses on professional conduct and research integrity issues, including plagiarism, fraud and reproducibility concerns. The points described below also inform the NeurIPS Submission Checklist, which outlines more concrete communication requirements.

  13. Use of animals in experimental research: an ethical dilemma?

    The use of animals in experimental research parallels the development of medicine, which had its roots in ancient Greece (Aristotle, Hippocrate).

  14. Understand the principles of Human Research Ethics

    The module provides information to help you design a human research project and understand how ethics reviewers will consider your design against the guidance provided in the National Statement on Ethical Conduct in Human Research.. Based on, and directly cross referencing with the National Statement, the module outlines why research ethics review was introduced internationally.

  15. Five principles for research ethics

    Psychologists in academe are more likely to seek out the advice of their colleagues on issues ranging from supervising graduate students to how to handle sensitive research data.

  16. Comparison of Instructions to Authors and Reporting of Ethics

    This study investigated changes in editors' instructions to authors and authors' reporting of research ethics information in selected African biomedical journals between 2008 and 2017. Twelve selected journal websites and online articles were reviewed in Eastern, Southern, and Western African [ESWA] countries. A pre-tested schema and a ...

  17. How Ethical Behavior Is Considered in Different Contexts: A ...

    The Journal of Business Ethics leads the way in research into ethical behavior, followed by the Journal of Applied Psychology.

  18. PDF Ethics in Experimental Research

    Research by foreigners often gets special scrutiny. Scholars in many countries are simply ignoring those laws, flying in on tourist visas, running experiments, and heading home with data.

  19. Analyzing the ethical and societal impacts of proposed research

    Michael Bernstein, Margaret Levi, David Magnus, and Debra Satz discuss the question: what is the process for the ethics and society review that you propose? Bernstein: The engine that we usually associate with ethics review (the Institutional Review Board, or IRB) is explicitly excluded from considering long-range societal impact. So, for example, artificial intelligence projects can be pursued, published, and ...

  20. Human subject research

    Human subject research is systematic, scientific investigation that can be either interventional (a "trial") or observational (no "test article") and involves human beings as research subjects, commonly known as test subjects. Human subject research can be either medical (clinical) research or non-medical (e.g., social science) research. [1] Systematic investigation incorporates both the ...

  22. Research Ethics: Definition, Principles and Advantages

    Research ethics are the set of ethical guidelines that guide how scientific research should be conducted and disseminated. They govern the standards of conduct for scientific researchers and provide the guidelines for conducting research responsibly.

  23. The Nuremberg Code isn't just for prosecuting Nazis − its principles

    10 key values. The code consists of 10 principles that the judges ruled must be followed as both a matter of medical ethics and a matter of international human rights law. The first and most ...

  24. Raising AI Ethics Awareness through an AI Ethics Quiz for Software

    Today, ethical issues surrounding AI systems are increasingly prevalent, highlighting the critical need to integrate AI ethics into system design to prevent societal harm. Raising awareness and fostering a deep understanding of AI ethics among software practitioners is essential for achieving this goal. However, research indicates a significant gap in practitioners' awareness and knowledge of ...

  25. Ethical considerations regarding animal experimentation

    Research ethics ensure the proper treatment of experimental animals [58]. To avoid undue suffering of animals, it is important to follow ethical considerations during animal studies [1].

  26. Saving lives with statistics

    However, with increasingly more complicated research questions and designs, even the analysis of experimental setups is not necessarily straightforward. In a project comparing different methods for transporting trauma patients, Hyldmo et al. experimented with cadavers, meticulously measuring neck rotation and movement [5, 6]. For ethical ...

  29. Experimental Psychology: Chapter 2_Research Ethics

    Research ethics are a framework of values within which we conduct research. Ethics help researchers identify actions we consider good and bad, and explain the principles by which we make responsible decisions in actual situations.

  30. When Vulnerable Narcissists Take the Lead: The Role of Internal

    Journal of Business Ethics - Research to date provides only limited insights into the processes of abusive supervision, a form of unethical leadership. ... Two experimental studies (N = 326 and N = 292) with a manipulation-of-mediator design and an event recall task supported the causality and momentary triggers of the internal attribution of ...