
Moral Standard versus Non-Moral Standard

Why the need to distinguish moral standards from non-moral ones?

It is important to note that different societies have different moral beliefs and that our beliefs are deeply influenced by our own culture and context. For this reason, some values have moral implications, while others do not. Consider, for example, the wearing of the hijab. In traditional Muslim communities, wearing the hijab is regarded as the appropriate way for women to dress; indeed, for some Muslims, exposing parts of a woman's body, such as the face and legs, is despicable. However, in many parts of the world, especially in Western societies, most people don't mind if women barely cover their bodies. As a matter of fact, the Hollywood canon of beauty glorifies a slim, sexy body and the wearing of extremely daring dress. The point here is that people in the West may pity Muslim women who wear the hijab, while some Muslims may find women who dress daringly despicable.

Again, this clearly shows that different cultures have different moral standards. What is a matter of moral indifference, that is, a matter of taste (hence, a non-moral value) in one culture may be a matter of moral significance in another.

Now, the danger here is that one culture may impose its own cultural standard on others, which may result in a clash in cultural values and beliefs. When this happens, as we may already know, violence and crime may ensue, such as religious violence and ethnic cleansing.

How can we address this cultural conundrum?

This is where understanding the difference between moral standards (that is, what counts as a moral issue) and non-moral ones (that is, what counts as a non-moral issue and is thus a matter of taste) comes in. The issue may seem too obvious and insignificant to some people, but understanding the difference between the two has far-reaching implications. For one, once we have distinguished moral standards from non-moral ones, with the aid of the principles and theories in ethics, we will be able to identify fundamental ethical values that may guide our actions. Indeed, once we know that particular values and beliefs are non-moral, we can avoid falling into the pit of cultural reductionism (that is, taking complex cultural issues as simple and homogeneous ones) and the unnecessary imposition of one's own cultural standards on others. The point here is that if such standards are non-moral (that is, a matter of taste), then we don't have the right to impose them on others. But if such standards are moral ones, such as not killing or harming people, then we may have the right to require others to act accordingly. In this way, we may be able to find a common moral ground, such as agreeing not to steal, lie, cheat, kill, harm, or deceive our fellow human beings.

Now, what are moral standards, and how do they differ from non-moral ones?

Moral Standards and their Characteristics

Moral standards are norms that individuals or groups have about the kinds of actions believed to be morally right or wrong, as well as the values placed on what we believe to be morally good or morally bad. Moral standards normally promote "the good," that is, the welfare and well-being of humans as well as animals and the environment. Moral standards, therefore, prescribe what humans ought to do in terms of rights and obligations.

According to some scholars, moral standards are the sum of combined norms and values. In other words, norms plus values equal moral standards. On the one hand, norms are understood as general rules about our actions or behaviors. For example, we may say “We are always under the obligation to fulfill our promises” or “It is always believed that killing innocent people is absolutely wrong”. On the other hand, values are understood as enduring beliefs or statements about what is good and desirable or not. For example, we may say “Helping the poor is good” or “Cheating during exams is bad”.

According to many scholars, moral standards have the following characteristics, namely:

  • moral standards deal with matters we think can seriously injure or benefit humans, animals, and the environment, such as child abuse, rape, and murder;

  • moral standards are not established or changed by the decisions of authoritative individuals or bodies. Indeed, moral standards rest on the adequacy of the reasons that are taken to support and justify them. For sure, we don’t need a law to back up our moral conviction that killing innocent people is absolutely wrong;

  • moral standards are overriding, that is, they take precedence over other standards and considerations, especially of self-interest;

  • moral standards are based on impartial considerations. Hence, moral standards are fair and just; and

  • moral standards are associated with special emotions (such as guilt and shame) and vocabulary (such as right, wrong, good, and bad).

Non-moral Standards

Non-moral standards refer to standards by which we judge what is good or bad and right or wrong in a non-moral way. Examples of non-moral standards are standards of etiquette by which we judge manners as good or bad, standards we call the law by which we judge something as legal or illegal, and standards of aesthetics by which we judge art as good or rubbish. Hence, we should not confuse morality with etiquette, law, aesthetics or even with religion.

As we can see, non-moral standards are matters of taste or preference. Hence, a scrupulous observance of these types of standards does not make one a moral person. Violation of said standards also does not pose any threat to human well-being.

Finally, as a way of distinguishing moral standards from non-moral ones, if a moral standard says “Do not harm innocent people” or “Don’t steal”, a non-moral standard says “Don’t text while driving” or “Don’t talk while the mouth is full”.


Distinguishing Between Moral & Nonmoral Claims

Radford University, Radford University Core Handbook, https://lcubbison.pressbooks.com/ and Deborah Holt, BS, MA

Recall that an ethical dilemma is a situation in which a person faces an ethically problematic choice and is not sure what she ought to do. Those who experience ethical dilemmas feel themselves being pulled by competing ethical demands or values and perhaps feel that they will be blameworthy or experience guilt no matter what course of action they take.

What is the role of values in ethical dilemmas?

Frequently, ethical dilemmas are fundamentally a clash of values. We may experience a sense of frustration trying to figure out what the ‘right’ thing to do is because any available course of action violates some value that we are dedicated to. For example, let’s say you are taking a class with a good friend and, sitting next to him one day during a quiz, you discover him copying answers from a third student. Now you are forced into an ethical decision embodied by two important values common to your society: honesty and loyalty. Do you act dishonestly and keep your friend’s secret, or do you act disloyally and turn him in for academic fraud?

Awareness of the underlying values at play in an ethical conflict can act as a powerful method to clarify the issues involved. We should also be aware of the use of value as a verb in the ethical sense. Certainly what we choose to value more or less will play a very significant role in the process of differentiating between outcomes and actions thereby determining what exactly we should do.

Literature and film are full of ethical dilemmas, as they allow us to reflect on the human struggle as well as present tests of individual character. For example, in World War Z, Gerry Lane (played by Brad Pitt in the movie version) has to make a choice similar to that of Sartre’s Frenchman: between serving the world-community of humans in their just war against zombies and serving his own immediate family. It adds depth and substance to the character to see him struggling with this choice over the right thing to do.

What ethical dilemmas are more common in real life?

If you’ve ever felt yourself pulled between two moral choices, you’ve faced an ethical dilemma. Often we make our choice based on which value we prize more highly. Some examples:

You are offered a scholarship to attend a far-away college, but that would mean leaving your family, to whom you are very close. Values: success/future achievements/excitement vs. family/love/safety

You are friends with Jane, who is dating Bill. Jane confides in you that she’d been seeing Joe on the side but begs you not to tell Bill. Bill then asks you if Jane has ever cheated on him. Values: Friendship/loyalty vs. Truth

You are the official supervisor for Tywin. You find out that Tywin has been leaving work early and asking his co-workers to clock him out on time. You intend to fire Tywin, but then you find out that he’s been leaving early because he needs to pick up his child from daycare. Values: Justice vs. Mercy

You could probably make a compelling argument for either side for each of the above. That’s what makes ethical dilemmas so difficult (or interesting, if you’re not directly involved!)

What is an ethical violation?

Sometimes we are confronted with situations in which we are torn between a right and a wrong; we know what the right thing to do would be, but the wrong is personally beneficial, tempting, or much easier to do. In 2010, Ohio State University football coach Jim Tressel discovered that some of his players were violating NCAA rules. He did not report it to anyone, as it would lead to suspensions, hurting the football team’s chances of winning. He was not torn between two moral choices; he knew what he should do, but didn’t want to jeopardize his career. In 2011, Tressel’s unethical behavior became public, OSU had to void its wins for the year, and he resigned as coach.

Ethics experts tend to think that ethical considerations should always trump personal or self-interested ones and that to resist following one’s personal desires is a matter of having the right motivation and the strength of will to repel temptation. One way to strengthen your “ethics muscles” is to become familiar with the ways we try to excuse or dismiss unethical actions.

How does self-interest affect people’s ethical choices?

In a perfect world, morality and happiness would always align: living ethically and living well wouldn’t collide because living virtuously—being honest, trustworthy, caring, etc.—would provide the deepest human happiness and would best allow humans to flourish. Some would say, however, that we do not live in a perfect world, and that our society entices us to think of happiness in terms of status and material possessions at the cost of principles. Some even claim that all persons act exclusively out of self-interest—that is, out of psychological egoism—and that genuine concern for the well-being of others—altruism—is impossible. As you explore an ethical issue, consider whether people making choices within the context of the issue are acting altruistically or out of self-interest.

What is the difference between good ethical reasoning and mere rationalization?

When pressed to justify their choices, people may try to evade responsibility and to justify decisions that may be unethical but that serve their self-interest. People are amazingly good at passing the buck in this fashion, yet pretty poor at recognizing and admitting that they are doing so. When a person is said to be rationalizing his actions and choices, this doesn’t mean he is applying critical thinking, or what we have described as ethical analysis. Quite the opposite: it means that he is trying to convince others—or often just himself—using reasons that he should be able to recognize as faulty or poor reasons. Perhaps the most common rationalization of unethical action has come to be called the Nuremberg Defense: ‘I was just doing what I was told to do—following orders or the example of my superior. So blame them and exonerate me.’ This defense was used by Nazi officials during the Nuremberg trials after World War II in order to rationalize behavior such as participation in the administration of concentration camps. This rationalization didn’t work then, and it doesn’t work now.

What kinds of rationalizations do people make for their actions?*

Rationalization is a common human coping strategy. An intriguing finding in research on corruption is that people who behave unethically usually do not see themselves as unethical. Instead, they recast their actions using rationalization techniques to justify what they’ve done. Common rationalization strategies:

Denial of responsibility

The people engaged in bad behavior “had no choice” but to participate in such activities OR people turn a blind eye to ethical misbehavior.

“What can I do? My boss ordered me not to tell the police.”

“My neighbors’ children always seem to have bruises, but it’s none of my business.”

Denial of injury

No one is harmed by the action, or the harm could have been worse.

“All’s well that ends well.”

“Nobody died.”

Blaming the victim

Counter any blame for the actions by arguing that the violated party deserved what happened.

“She chose to go to that fraternity party; what did she think was going to happen?”

“If the professors don’t want students to say mean things in student evaluations, they should be more entertaining.”

Social weighting

Compared to what other people have done, this is nothing, OR everybody does it, so it’s okay.

“I sometimes come into work late, but compared to everybody who leaves early every Friday, it’s nothing to get worked up over.”

“Everyone around me was texting; it’s not fair that I should be the one in trouble.”

Appeal to higher values

It was done for a good, higher cause.

“You should let me copy your homework; if I fail this class, I’ll lose my scholarship.”

“I couldn’t tell anyone because I’m loyal to my boss.”

 Saint’s excuse

If someone has done good things in the past, they should get a “pass” for misbehavior.

“He’s done so many good things for the community, it would be a shame to punish him.”

“She’s so talented, why focus on the bad things she’s done?”

What fallacies are most prevalent in debates over ethical issues?

In addition to self-deception and rationalizations, we often find overtly fallacious reasoning that undermines open, constructive debate of ethical issues. Of the common fallacies we described, those most prevalent in debates over ethical issues include ad hominem (personal) attacks, appeals to false authority, appeals to fear, the slippery slope fallacy, false dilemmas, the two-wrongs-make-a-right fallacy, and the strawman fallacy. Fallacious reasoning, especially the attempt to sway sentiment through language manipulation, is ever-present in popular sources of information and opinion pieces, like blogs and special-interest-group sites. It may take practice to spot fallacious reasoning, but being able to give names to these strategies of trickery and manipulation provides the aspiring critical thinker with a solid start.

* Modified from Anand, V., Ashforth, B. E., & Joshi, M. (2004). Business as usual: The acceptance and perpetuation of corruption in organizations. Academy of Management Executive, 18(2). Retrieved from http://actoolkit.unprme.org/wp-content/resourcepdf/anand_et_al._ame_2004.pdf

How can I tell what is the “right” thing to do?

That’s the million dollar question. Ethical theories describe the rules or principles that guide people when the rightness or wrongness of an action becomes an issue. In this section, you will read about some of the most common and important ways of approaching ethics. They all ask the question, “how can I tell what the right thing to do is?” but differ as to where to start and what to consider:

  • Situation. Relativists say that rightness changes depending on the individuals and culture involved.
  • Results. Consequentialists believe that you should judge rightness based on the predicted outcome. Utilitarianism is a type of consequentialist perspective.
  • Actions. Deontologists judge the rightness purely on the action itself. Duty-based and rights-based perspectives fall into this category.
  • Actors. In actor-oriented perspectives, the person or entity making the decision (the ethical actor) must decide what a virtuous person or entity would do, and follow that path. The ethical actor may also be called the agent.

How do I use ethical reasoning to make decisions?

Making good ethical decisions takes practice. Our instinct or “gut” can draw us to selfish choices, so we need to step back and think critically about ethical dilemmas rather than just jumping to our first solution.

We need to consider all the elements involved:

  • Who is affected?
  • Who is making the decision?
  • What are the known facts and circumstances?
  • How ethical are the possible actions?

The framework below can help guide you through this process. It is not a checklist of steps; rather, decision making is an iterative process in which learning a new fact may cause you to revise earlier thoughts on the situation.

How do I recognize an ethical situation?

Identifying an ethical situation will require you to research the facts of a situation and to ask whether stakeholders must consider questions about the moral rightness or wrongness of public policy or personal behavior. To help you identify and describe the nature of the ethical issue, ask the following:

Does the situation require individuals to engage in ethical judgments? Do you find yourself thinking about whether an action is morally right or wrong or whether a person’s motives are morally good or bad? Could you debate what, morally, someone ‘should’ or ‘ought to’ do in the situation?

Does the situation seem to pose an ethical conflict for one or more stakeholders? That is, does there seem to be a clash between what a stakeholder ‘ought to do’ and what she ‘wants to do’?

Does the situation pose an ethical dilemma for one or more stakeholders? That is, does it seem as if someone is pulled between competing ethical demands, each calling for behavior that would be ethical but with one action making it impossible to perform the other, equally justifiable action? Are there values that are in conflict?

You also should consider whether any professional codes are relevant to the situation. Often professional codes spell out the ethical or moral obligations of members of a profession. Compare any relevant professional code with the behavior of participants in that situation who may be bound by that code. Was their behavior consistent with that code? Were there any competing norms or codes of behavior that put participants in the midst of an ethical dilemma?

In an ethical situation, a difficult decision, perhaps multiple difficult decisions, will need to be made.

How do I identify stakeholders?

Usually, any complex topic features multiple stakeholders: people who have an interest in or are affected by the outcome of decisions revolving around the situation. These different parties are not all affected in the same way, and therefore, their perspectives on the topic will differ.

How do I identify the different perspectives and positions held by stakeholders?

A stakeholder’s perspective or position is based upon the stakeholder’s relationship to the situation. That relationship can be captured by asking questions about power, support, influence, and need in the context of the situation that the stakeholder has an interest in.

  • Power—How much decision-making authority does the stakeholder have over the situation?
  • Support—How strongly is the stakeholder for or against the idea?
  • Influence—How much ability does the stakeholder have to affect the decisions made by other people?
  • Need—For the stakeholder to benefit, what does she need to have happen (or not happen) in the situation?

Be sure to look for interests and perspectives that may be shared by different stakeholders, and be certain that you do not automatically side with the stakeholders who have the most power and influence. If you gravitate toward the parties with the most power and influence, you may end up ignoring the individuals or groups with the most need, the ones who may be badly hurt by an unethical decision.

How can I research stakeholder positions?

When you research an issue, look beyond yes/no, pro/con arguments in order to see the people involved in the situation. Remember that often there are more than the oversimplified ‘two sides’, so be open to identifying more than two stakeholders.

Make a list of the individuals and groups who affect or are affected by the issue. Add to the list as your research uncovers additional aspects of the situation that bring in additional stakeholders.

Analyze the positions held by each stakeholder, looking in-depth at their involvement. Go to the Appendix for a list of possible questions to research.

How do I identify the ethical actor?

Within that set of stakeholders, identify which is the one (or ones) in a position to take action. It could be an individual, a group, or an institution. Those are the ethical actors, who will make the decision related to the ethical situation.

The ethical actor may be you, but it’s also probable in this class that you will research case studies of ethical situations in the wider world. In such assignments, focus your attention on the people and entities that can and need to take action in order for this situation to be resolved. Avoid ‘victim blaming’, that is, looking at stakeholders and condemning them for getting themselves into the current situation, or trying to rewrite history so that the situation wouldn’t exist. Concentrate on the facts of the case as they relate to the decision-making process.

How can I use critical thinking in this process?

How can a person decide whether a certain act is ethical without being influenced by his biases? The thoughtful development of criteria is one method to keep biases from having an excessive influence on the group’s decision-making process. Criteria are carefully considered, objective principles that can be applied to a situation in order to reach measured conclusions.

What are criteria?

Criteria are the standards you apply to develop and evaluate whether a solution to a problem is ‘good’ or ‘right’. People apply criteria to solve both ethical and non-ethical problems.

Criteria need to be specific and measurable in some fashion to allow them to be used to judge whether a solution is likely to successfully address a problem. See the Appendix for more information on criteria.

How do I identify possible actions?

When you have identified who can act and which criteria are essential, you can now brainstorm options for actions. You can use the major ethical perspectives to help you:

  • What action would result in the best results?
  • What action would respect stakeholders’ rights?
  • What action would respect the ethical actor’s obligations?
  • What action would lead the ethical actor to being a virtuous person or organization?
  • What action gives extra consideration to those who are vulnerable?

If this is a professional situation, you should also check to see if there are any codes of conduct to consult.

If you think of other actions, apply the different ethical perspectives to them to see if they are ethical.

How do I evaluate the possible options?

Sometimes all the theories point to the same action, but usually there are differences. At this point, you need to consider the specific situation and the context of the ethical actor. Which perspective is most appropriate given these circumstances?

For example, there is a limited amount of medication available for a very infectious disease. How do you decide who receives the medication?

  • If the ethical actor is a government official deciding on a policy, one would probably turn to utilitarianism: what would be the best result for the greatest number of people?
  • If the ethical actor is a physician, she may turn to deontology: what are her professional obligations?
  • If the ethical actor is the mother of a sick child, she may give up her dose to save the baby (virtue ethics would ask what a virtuous person would do).*

Deontology is a universal ethical theory that considers whether an action itself is right or wrong. Deontologists argue that you can never know what the results will be, so it doesn’t make sense to decide whether something is ethical based on outcomes.

Utilitarianism is a specific type of consequentialism that focuses on the greatest good for the greatest number. After you identify your options for action, you ask who will benefit and who will be harmed by each. The ethical action would be the one that caused the greatest good for the most people, or the least harm to the least number.

Thinkers who embrace virtue ethics emphasize that the sort of person we choose to be constitutes the heart of our ethical being. If you want to behave virtuously, become a virtuous person. Certain traits—for instance, honesty, compassion, generosity, courage—seem to be universally admired. These strengths of character are virtues. To acquire these virtues, follow the example of persons who possess them. Once acquired, these virtues may be trusted to guide our decisions about how to act, even in difficult situations.

What else should I consider before acting?

You should do a critical thinking check to make sure you are not falling into any fallacious thinking or rationalizations to justify an option that is selfish or otherwise unethical. Would you be okay with your decision being widely known and associated with you?

Am I done after acting?

No. It’s essential to examine how the decision turned out and consider what lessons you may have learned from it.

So, what is the difference between a moral and a nonmoral claim?

When thinking about a moral claim versus a nonmoral claim, it is important to recognize that the word “nonmoral” is not the same as “immoral.”

  • Immoral can be defined as something that does not conform to standards of morality.
  • Nonmoral can be defined as something that does not possess characteristics of or fall into the realm of morals and ethics.

For example, telling a lie can be considered immoral.  And “it is wrong to lie” can be considered a moral claim.

When asked to offer an example of a moral and a nonmoral claim, it is important to recognize that a claim is a statement where you are asserting something you believe is the case.

  • “Abortion is wrong because it involves the killing of a human being” would be a moral claim.
  • “Red lipstick is the right color for you” would be an example of a nonmoral claim.
  • WHY? There is nothing about the claim that “red lipstick is the right color for you” which relates to morals or ethics. The claim conveys a belief, but not an ethical or moral belief.
  • The claim “abortion is wrong because it involves the killing of a human being,” is an ethical or moral belief supported by the stance that abortion is killing/taking the life of a human being.

*A slight modification from the original text includes the addition of a thought-provoking question related to virtue ethics and the addition of “So, what is the difference between a moral and a nonmoral claim?”.

This work ( Distinguishing Between Moral & Nonmoral Claims by Radford University, Radford University Core Handbook, https://lcubbison.pressbooks.com/ and Deborah Holt, BS, MA) is free of known copyright restrictions.


Lesson 2: Moral & Non-Moral Standards

Intended learning outcome.

Distinguish between moral and non-moral standards

Preparation

Introduction:

We often hear the terms "moral standards" and "non-moral standards." What do these refer to? What about the word "immoral"? Is there such a thing as immoral standards? Is "immoral" synonymous with "non-moral"? Let's find this out in this lesson.

Classify the following into groups: moral standards and non-moral standards. 

No talking while your mouth is full.

Do not lie.

Wear black or white for mourning; never red.

Males should be the ones to propose marriage, not females.

Don't steal.

Observe correct grammar when writing and speaking English. 

Submit school requirements on time. 

If you are a male, stay by the danger side (roadside) when walking with a female. 

Go with the fashion or you are not in. 

Don't cheat others. 

Don't kill.

When you speak, pronounce words correctly.

Focus the microscope properly.

Maintain a 36-24-36 body figure. 

1. Analyze your groupings. Why did you classify one group as moral standards and another as non-moral standards?

2. What is common to those listed under moral standards?

3. What is common to the list of non-moral standards?

Presentation

Etymology and Meaning of Ethics

The term "ethics" comes from the Greek word "ethos," meaning "custom," as used in the works of Aristotle, while the term "moral" is its Latin equivalent. Based on the Greek and Latin etymology of the word "ethics," ethics deals with morality. When the Roman orator Cicero exclaimed "O tempora, o mores!" (Cicero, 1856), that is, "Oh, what times and what morals," he may have been expressing dismay at the morality of his time.

Ethics, or moral philosophy, is a branch of philosophy that deals with moral standards and inquires into the rightness or wrongness of human behavior and the goodness or badness of personality, trait, or character. It deals with ideas and topics such as moral standards or norms of morality, conscience, moral values, and virtues. Ethics is the study of the morality of human acts and moral agents: what makes an act obligatory and what makes a person accountable.

"Moral" is the adjective describing a human act as either  Right or Wrong. or qualifying a person, personality, character, as either ethically good or bad. 

Moral Standards or Moral Frameworks and Non-Moral Standards

Since ethics is the study of moral standards, the first question for the course is: what are moral standards? The following are supposed to be examples of moral standards: "Stealing is wrong." "Killing is wrong." "Telling lies is wrong." "Adultery is wrong." "Environmental preservation is the right thing to do." "Freedom with responsibility is the right way." "Giving what is due to others is justice." Hence, moral standards are norms or prescriptions that serve as frameworks for determining what ought to be done, what is right or wrong action, and what is good or bad character.

In the Activity phase of this Lesson, the following can be classified as moral standards:

Do not lie.

Don't steal.

Don't cheat others.

Don't kill.

Moral standards are either consequence standards (like John Stuart Mill's utilitarianism) or non-consequence standards (like Aristotle's virtue ethics, St. Thomas's natural law, or Immanuel Kant's good will or sense of duty). Consequence standards depend on results or outcomes: an act that results in the general welfare, in the greatest good of the greatest number, is moral. To take part in a project that results in the improvement of the lives of the majority of people is, therefore, moral.

Non-consequence standards are based on the natural law. Natural law is the law of God revealed through human reason; it is the "law of God written in the hearts of men." To preserve human life is in accordance with the natural law; therefore, it is moral. Likewise, a non-consequence standard may also be based on good will or intention and on a sense of duty. Respect for humanity, the treatment of the other as a human person, is a moral act; it springs from a sense of duty, a sense of duty that you wish would apply to all human persons.

On the other hand, non-moral standards are social rules and demands of etiquette and good manners. They are guides to action which should be followed as expected by society. Sometimes they may not be followed, or some people may not follow them. From time to time, changes are made regarding good manners or etiquette. In sociology, non-moral standards or rules are called folkways. In short, non-moral actions are those to which moral categories cannot be applied.

Examples of non-moral standards are rules of good manners and right conduct, etiquette, rules of behavior set by parents and teachers, standards of grammar or language, standards of art, and standards of sports set by other authorities. Examples are "Do not eat with your mouth open," "Observe the rules of grammar," and "Do not wear socks that don't match."

In the Activity phase of this Lesson, the following are non-moral standards:

No talking while your mouth is full. 

Wear black or white for mourning; never red.

Males should be the ones to propose marriage, not females.

Observe correct grammar when writing and speaking English.

If you are a male, stay by the danger side (roadside) when walking with a female.

Go with the fashion or you are not "in."

When you speak, pronounce words correctly.

Focus the microscope properly. 

Maintain a good body figure.

One indicator of whether a standard is moral or non-moral lies in what non-compliance with it brings about: non-compliance with a moral standard causes a sense of guilt, while non-compliance with a non-moral standard may only cause shame and embarrassment.

Classification of the Theories of Moral Standards

Garner and Rosen (1967) classified the various moral standards formulated by moral philosophers as follows: 1) The consequence standard (teleological, from the Greek telos, which means end, result, or consequence) states that an act is right or wrong depending on the consequences of the act, that is, the good that is produced in the world. Will it do you good if you go to school? If the answer is yes, because you learn how to read and write, then going to school is right. The consequence standard can also be a basis for determining whether or not a rule is a right rule. So the consequence standard states that the rightness or wrongness of a rule depends on the consequences or the good that is produced by following the rule. For instance, if everyone follows the rules of a game, everyone will enjoy playing the game. This good consequence shows that the rule must be a correct rule. 2) The not-only-consequence standard (deontological) holds that the rightness or wrongness of an action or rule depends on a sense of duty, natural law, virtue, and the demands of the situation or circumstances. The rightness or wrongness of an action does not depend only on the consequences of that action or of following that rule.

Natural law and virtue ethics are deontological moral standards because their basis for determining what is right or wrong does not depend on consequences but on the natural law and virtue. Situation ethics, too, is deontological because the rightness or wrongness of an act depends on the situation and circumstances requiring or demanding an exception to a rule.

Rosen and Garner are inclined to consider deontology, be it rule or act deontology, as the better moral standard because it synthesizes and includes all the other theories of norms. Under this theory, the rightness or wrongness of an action depends on (or is a function of) all of the following: a) the consequences of an action or rule, that is, what promotes one's greatest good or the greatest good of the greatest number; and b) considerations other than consequences, such as the obligatoriness of the act based on natural law, its being one's duty, or its promoting an ideal or virtue. Deontology also considers the object, purpose, and circumstances or situation of the moral issue or dilemma.

All these moral standards or ethical frameworks will be dealt with more in detail in Chapter IV of this book.

What Makes Standards Moral? 

The question means: what obliges us to follow a moral standard? For theists, believers in God's existence, moral standards are the commandments revealed by God to Moses. One who believes in God vows to Him and obliges himself or herself to follow His Ten Commandments. For theists, God is the ultimate source of what is moral, revealed to human persons.

How about non-theists? For non-theists, God is not the source of morality. Moral standards are based on the wisdom of sages like Confucius or philosophers like Immanuel Kant.

In China, Confucius taught the moral standard "Do unto others what you would like others to do unto you" and persuaded people to follow this rule because it is the right way, the gentleman's way. Later, Immanuel Kant, the German philosopher, formulated a criterion for determining what makes a moral standard moral. It is stated as follows: "Act only according to that maxim whereby you can at the same time will that it should become a universal law" (1993). In other words, if a maxim or standard cannot pass this test, it cannot be a moral standard. For instance, does the maxim "Stealing is wrong" pass this test? Can one will that this maxim be a universal maxim? The answer is in the affirmative; the opposite of the maxim would not be acceptable. Moral standards are standards that we want to be followed by all; otherwise, one would be wishing one's own ill fortune. Can you wish "Do not kill" to be a universal maxim? The answer has to be yes, because if you say "no," then you are not objecting to someone killing you. Thus, the universal necessity of the maxim, what makes it a categorical imperative, is what makes it obligatory. "Stealing is wrong" means "one ought not to steal," and "Do not kill" means "one ought not to kill." It is one's obligation not to steal or kill. Ultimately, the obligation arises from the need for self-preservation.

The Origin of Moral Standards: Theist and Non-Theist 

Related to the question of what makes moral standards moral is the question of how moral standards arise or come into existence. Many attempts have been made to explain the origin of morality or of moral standards.

The theistic line of thought states that moral standards are of divine origin, while 20th-century thinkers claim that they simply evolved. The issue is: Are moral standards derived from God, communicated to man through signs of revelation, or did they arise in the course of man's evolution?

With the Divine source concept, moral standards are derived from natural law, man's "participation" in the Divine law. The moral principle "Do good and avoid evil" is an expression of natural law. Man's obliging himself to respect the life, liberty, and property of his fellowman arises from the God-given sacredness, spirituality, and dignity of his fellow man. It arises from his faith, hope, and love of God and man.

With the evolutionary concept, the basics of moral standards (do good and avoid evil) have been observed among primates and must have evolved as the process of evolution followed its course.

Are these theist and non-theist (evolutionary) accounts of the origin of moral standards reconcilable? The evolutionist claims that altruism, a sense of morality, can be observed in man's fellow primates, the apes and monkeys, and that, therefore, the altruism of human persons evolved from the primates. However, the evolutionist cannot satisfactorily argue, with factual evidence, that the rudiments of moral standards can be observed in the primates. Neither can the theist view be scientifically established, that man's obliging himself to avoid evil and refrain from inflicting harm on his fellowman is a moral principle implanted by God in the hearts of men. But the concepts of creation and evolution are not necessarily contradictory. The revelation of norms of Divine origin need not have been instant, a happening "in one fell swoop." It could have happened gradually as man evolved to differ from the other primates. As the evolutionists claim, creation may be conceived as a process of evolution: the story of creation could have unfolded over billions of years instead of six days.

1. Here are the two questions:

a) Can one eat while praying?

b) Can one pray while eating?

Which is a moral question? Which is a non-moral question?

2. I did not dress appropriately (formally) for a formal party. Which did I fail to observe: a moral or a non-moral standard?

3. Lady B dressed indecently to expose her body. Which did she violate: a moral or a non-moral standard?

4. In Fyodor Dostoevsky's The Brothers Karamazov, Ivan Karamazov asserted the famous line, "If God does not exist, everything is permitted."

a) How does this relate to our lesson on the source of moral standards? Based on this line, what is the source of moral standards?

b) The deeper and stronger one's faith in God is, the deeper and stronger is his or her morality. Is this an implication of the quoted line?

c) Using your knowledge of logic, what would be the continuation of Karamazov's syllogism?

Performance

CHECK FOR UNDERSTANDING

Distinguish moral standards and non-moral standards.

Does belief in God strengthen a person to be moral? Explain your answer.

It is more difficult to do only that which is moral than to do anything you want to do. But you keep on striving to do only that which is moral, anyway. What makes you strive to do only that which is moral even if difficult? Write your reflections.


Non-moral standards originate from social rules, demands of etiquette and good manners. They are guides of action which should be followed as expected by society.

Moral standards are based on the natural law, the consequences of one's actions, and a sense of duty.

Moral standards are based on natural law, the law of God revealed through human reason or the "law of God written in the hearts of men."

Moral standards are based on consequence standards. That which leads to a good consequence or result, like the greatest good of the greatest number, is what is moral.

Moral standards are also based on non-consequence standards or a sense of duty that you wish would be followed by all. Respect for humanity, the treatment of the other as a human person, is a moral act; it springs from a sense of duty, a sense of duty that you wish is shared by all and applies to all human persons.

For theists, the origin of moral standards is God who "wrote his law in the heart of every person", the natural law. For non-theists, the origin of moral standards is the moral frameworks formulated by philosophers like Confucius, Immanuel Kant, Stuart Mill, et al.

The evolutionist claims that the sense of moral standards must have evolved with man, not been implanted in every human person instantly at the moment of creation. Creation as a process may have taken place not in six days, as the creationist claims, but over billions of years, as the evolutionist asserts.

For the theists, belief in God strengthens them to be moral. 



Moral Standards and Non Moral Standards (Difference and Characteristics)

Let us differentiate moral standards and non-moral standards.

What are non-moral standards? And what is the difference between moral standards and non-moral standards? To begin with, what is morality? And what are the features of moral rules or standards?

Let us study!


Moral standards  pertain to the rules people have about the kinds of actions they believe are morally right and wrong, as well as the values they place on the kinds of objects they believe are morally good and morally bad.

Non-moral standards, on the other hand, are rules that are unrelated to moral or ethical considerations. These standards either are not necessarily linked to morality or by their nature lack ethical sense.

Moral standards are also referred to as moral values  and  moral principles . On the other hand, usual examples of non-moral standards include rules of etiquette, fashion standards, rules in games, and various house rules.

Ethicists believe that technically speaking, religious rules, some traditions, and legal statutes (e.g. laws and ordinances) are non-moral principles.

Nonetheless, they can be considered as moral standards too as they can be ethically relevant, depending on some factors and contexts.

Filipino Philosophy professor and textbook author  Jensen DG. Mañebog mentions six (6) characteristics of moral standards that further differentiate them from non-moral standards.

So moral standards can be distinguished from non-moral standards using the following characteristics he discusses in his lectures:

Moral standards are not (merely) established by authority figures.

Moral standards are not mere inventions, even of authoritative bodies or persons such as nations’ legislative bodies. Rather, moral standards or values ought to be considered in the process of making laws.

Hence, in principle, moral standards cannot be changed or nullified by the decisions of a particular authoritative body.

One thing about these standards, nonetheless, is that their validity lies in the soundness or adequacy of the reasons that are considered to support and justify them.

Moral standards have the trait of universalizability.

Simply put, this means that everyone should live up to moral standards. To be more accurate, however, it entails that moral principles must apply to all who are in relevantly similar situations.

If one judges that act A is morally right for a certain person P, then it is morally right for anybody relevantly similar to P.

This characteristic is exemplified in the Golden Rule, “Do unto others what you would want them to do unto you (if you were in their shoes),” and in the formal Principle of Justice, which states that:

“It cannot be right for A to treat B in a manner in which it would be wrong for B to treat A, merely on the ground that they are two different individuals, and without there being any difference  between the natures or circumstances of the two which can be stated as a reasonable ground for difference of treatment.”

Universalizability is an extension of the principle of consistency, that is, one ought to be consistent about one’s value judgments.

Moral standards are based on impartial considerations.

A moral standard does not evaluate actions on the basis of the interests of a certain person or group; rather, it goes beyond personal interests to a universal standpoint in which each person’s interests are impartially counted as equal.

Impartiality is usually depicted as being free of bias or prejudice. Impartiality in morality requires that we give equal and/or adequate consideration to the interests of all concerned parties.

Moral standards are associated with special emotions and vocabulary.

Prescriptivity indicates the practical or action-guiding nature of moral standards. Moral standards are generally put forth as injunctions or imperatives (such as ‘Do not kill,’ ‘Do no unnecessary harm,’ and ‘Love your neighbor’).

Moral principles are proposed for use, to advise, and to influence one’s actions. Retroactively, this feature of moral principles is used to evaluate one’s behavior, to assign praise and blame, and to produce feelings of satisfaction or of guilt.

If a person violates a moral standard by telling a lie, even to fulfill a special purpose, it is not surprising if he or she starts feeling guilty or ashamed of the behavior afterwards.

Notice that, on the contrary, not much guilt is felt if one goes against the current fashion trend (e.g., refusing to wear tattered jeans). Indeed, moral standards are associated with special emotions.


Copyright © 2014-present by MyInfoBasket.com and Prof.  Jensen DG. Mañebog




The Importance of Moral Construal: Moral versus Non-Moral Construal Elicits Faster, More Extreme, Universal Evaluations of the Same Actions


Affiliation New York University, New York, New York, United States of America

Affiliation Lehigh University, Bethlehem, Pennsylvania, United States of America

Affiliation Department of Political Science, University of Nebraska-Lincoln, Lincoln, Nebraska, United States of America

Affiliation The University of Toronto, The Ohio State University, Toronto, Ontario, Canada

  • Jay J. Van Bavel, 
  • Dominic J. Packer, 
  • Ingrid Johnsen Haas, 
  • William A. Cunningham

  • Published: November 28, 2012
  • https://doi.org/10.1371/journal.pone.0048693


Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments. We present three experiments showing that evaluations are also susceptible to the influence of moral versus non-moral construal. We had participants make moral evaluations (rating whether actions were morally good or bad) or non-moral evaluations (rating whether actions were pragmatically or hedonically good or bad) of a wide variety of actions. As predicted, moral evaluations were faster, more extreme, and more strongly associated with universal prescriptions—the belief that absolutely nobody or everybody should engage in an action—than non-moral (pragmatic or hedonic) evaluations of the same actions. Further, we show that people are capable of flexibly shifting from moral to non-moral evaluations on a trial-by-trial basis. Taken together, these experiments provide evidence that moral versus non-moral construal has an important influence on evaluation and suggests that effects of construal are highly flexible. We discuss the implications of these experiments for models of moral judgment and decision-making.

Citation: Van Bavel JJ, Packer DJ, Haas IJ, Cunningham WA (2012) The Importance of Moral Construal: Moral versus Non-Moral Construal Elicits Faster, More Extreme, Universal Evaluations of the Same Actions. PLoS ONE 7(11): e48693. https://doi.org/10.1371/journal.pone.0048693

Editor: Sam Gilbert, University College London, United Kingdom

Received: July 18, 2012; Accepted: October 1, 2012; Published: November 28, 2012

Copyright: © 2012 Van Bavel et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This research was supported by grants from the Social Sciences and Humanities Research Council of Canada to JV and DP and the National Science Foundation (BCS-0819250) to WC. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments [1] . According to these models, moral judgments are very often produced by reflexive mental computations that are unconscious, fast, and automatic [2] . From this perspective, affective responses are automatically triggered by certain moral issues and provide a strong bottom-up influence on judgments and decision-making. As such, the role of moral reasoning is relegated to the role of post hoc justification [1] or corrective control following the initial intuition [3] , but is not the causal impetus for a moral judgment. In the current paper, we present three experiments showing that moral evaluations are also susceptible to construal. Specifically, we show that people can deliberately construe a wide variety of actions through either a moral or a non-moral lens with different consequences for their evaluations.

The Origins of Moral Intuitions

Dating back to Darwin [4] , several theorists have proposed that evolution may have provided humans with a built-in set of moral rules, heuristics or intuitions [5] , [6] , [7] , [8] , [9] . In addition, moral beliefs and values can develop through social learning, via which children learn specific cultural practices [1] , [10] and ultimately acquire a set of knowledge structures about moral standards that guide their social interactions and provide the foundation for morality in adulthood [11] . The conversion of preferences into values—termed moralization—often occurs in cultures and individuals on the scale of years and involves an increased overlap between values and a personally or socially important issue or action [12] , [13] .

The work on the biological and cultural basis of morality has inspired a highly influential approach to moral psychology—the intuitionist model. The intuitionist model of moral judgment focuses on evaluations “that are made with respect to a set of virtues held to be obligatory by a culture or subculture” [1]. This definition is broad enough to allow “marginally moral judgments” that may have escaped the attention of moral philosophers but are nevertheless moralized in the local cultural milieu (e.g., eating a low fat diet). Whereas rationalist approaches hold that moral judgments are reached through a process of reasoning and reflection [14], [15], [16], the intuitionist approach argues that eliciting situations automatically trigger affective moral intuitions, which guide moral judgments [1]. According to the intuitionist model, conscious reasoning frequently follows an initial judgment, providing a post hoc justification but not the causal impetus.

For instance, Haidt and colleagues [17] have created scenarios to which people typically have strong moral reactions but fail to articulate any rational principles to justify their responses—a response termed ‘moral dumbfounding’. Likewise, there is now extensive experimental evidence that disgust and other emotional responses influence moral judgments [18] , [19] . From the intuitionist perspective, unconscious, affective responses guide reactions to these morally charged scenarios and people often engage in deliberate reasoning only after they have already made an initial moral judgment.

Despite the popularity of the bottom-up approach to morality posited by the intuitionist model, several theorists have argued that an appraisal process [20] is a prerequisite for generating specific emotional intuitions [9], [21]. The fact that cultural shifts and individual differences in moralization occur suggests that morality is not always intrinsic to stimuli, but may be the result of construing those actions as morally-relevant [22]. Moreover, there is evidence that different people construe different issues in moral (or non-moral) terms—termed moral mandates [23]. It is unclear, however, whether individual differences in the moralization of specific issues are based on intuitions built on biological and cultural differences or on construal processes. This raises a critical question: can people quickly change between a moral versus non-moral construal of the same action or issue?

A Dynamic Model of Evaluation

Although it may seem sensible that individuals can appraise or construe the same action in different ways, there is a surprising lack of empirical evidence on this issue in the domain of morality. In a recent paper on the dynamic nature of evaluation, we hypothesized that evaluation should indeed be sensitive to moral versus non-moral construal processes [24] . For example, people may be able to construe a situation or stimulus in moral or non-moral terms depending on their goals and beliefs, which will direct attention, modulate perception and guide consequent emotional intuitions. For the purposes of the present research, we use the terms moral and non-moral to describe different evaluative modes. This over-simplified distinction reflects the fact that participants are explicitly told to make moral evaluations (how right or wrong is an action) in each study. To provide a contrast with moral evaluations, participants also make pragmatic (how personally good or bad is an action) or hedonic (how personally pleasant or unpleasant is an action) evaluations. These latter conditions are termed “non-moral” simply because participants are not explicitly asked to make moral evaluations. We are aware that certain participants may consider hedonic maximization or self-interest moral imperatives [25] , [26] . Indeed, Kohlberg [27] considered self-interest the second stage of moral reasoning.

To test our hypothesis, in the current research we directly manipulated the way people evaluated a wide variety of actions to determine whether construal has an influence on evaluations of the exact same actions.

Our predictions are grounded in a dynamical model of evaluation—termed the Iterative Reprocessing (IR) Model [24], [28], [29]. Whereas many dual-process models characterize human evaluation as a function of automatically activated associations and subsequent, corrective control processes [30], the IR Model highlights the dynamic interactions between multiple component processes in the evaluative system. A key assumption underlying our model is that brain systems are organized hierarchically, such that lower-order automatic processes influence and are influenced by higher-order processes [31]. As such, reflective processes do not merely override or control automatic ones—these processes work in a dynamic, interactive fashion to construct evaluations. In this way, object construal plays an important role in determining evaluations, including shaping the initial response to a stimulus.

The IR Model makes a distinction between the contents (e.g., attitudes and representations), processes (e.g., mental operations and computations) and outcomes of evaluation [28] . Thus, while people may develop relatively stable moral content (e.g., standards and values), whether these contents influence an evaluation at any given moment likely depends on whether an action or issue is processed in moral or non-moral terms (in this paper, we use the term “processed” in the broad sense to include stimulus construal). Although it is likely the case that highly moralized actions (like murder) are chronically and reflexively processed as moral (i.e., the representations rapidly stabilize in a way that reflects the moral construal) and are therefore commonly evaluated in moral terms, we propose that many actions can be evaluated according to moral considerations [28] . A helpful analogy is available in the social psychology literature. When perceivers categorize targets as in-group members it has important implications for their perceptions, evaluations and behavior [32] , [33] , and can even override ostensibly automatic biases to visually salient categories like race [32] , [34] , [35] . Thus, construal or categorization can even shape automatic evaluations of stimuli with strong affective associations [36] .

In the current research, we test the prediction that a wide variety of actions can be evaluated using both moral and non-moral considerations, and that this construal process can lead to different evaluative outcomes for the same actions. In three experiments, we instructed people to evaluate the same stimuli in moral versus non-moral (i.e., pragmatic or hedonic) terms. By holding the influence of the stimuli constant while varying the construal, we were able to investigate the influence of moral versus non-moral construal. This ensures that differences observed in the nature of evaluative outcomes are due to differences in the construal (or an interaction between construal and stimuli) rather than the mere influence of the stimuli. If our assumptions are accurate, evaluating actions on the basis of moral versus non-moral considerations should lead to different evaluative outcomes. As long as there is some moral content that participants can bring to bear on their evaluation, moral construal may alter the evaluation of actions that have not been typically seen as moral. For example, one can bring to mind the moral aspects of recycling (e.g., saving the environment) even if more pragmatic aspects normally predominate (e.g., the time and effort involved). Of course, there may be some actions that have virtually no moral content to draw upon when generating an evaluation. For those stimuli, moral and non-moral evaluative outcomes may be similar.

The Flexibility of Moral Construal

We are not the first to suggest that people can flexibly construe and evaluate the same actions as moral or not [37] . For instance, models of ethical decision-making distinguish between moral awareness , in which a person recognizes that a situation may have moral relevance, and moral judgment , in which the moral value of a course of action is determined. These models predict that only if a person is morally aware will they apply processes to render a moral judgment [38] , [39] , [40] . The division between these stages is important because it can account for particular types of moral failure in which people make immoral decisions not because they intended to do so or because they mistakenly evaluate an immoral act as moral, but rather because they fail to consider the action on the basis of moral considerations in the first place. Although moral awareness and moral judgment are conceptually distinct, the majority of psychological research on morality has focused on the latter, investigating how characteristics of the perceiver, the stimulus and/or the social context affect judgments of right or wrong regarding issues that are ostensibly morally-relevant [18] , [19] , [41] , [42] . However, before making a moral judgment, the evaluative system must be ready to evaluate the action in moral terms—people have to construe the stimulus as potentially morally relevant.

In related work, Tetlock and colleagues [43] have proposed that people are multifunctional entities who shift between different decision-making frameworks depending on the context and their current goals. Thus, the same person may alternate between acting as an “intuitive economist” animated by utilitarian goals, and a “principled theologian” animated by the need to protect sacred values from secular encroachments. To investigate the tension between these forms of evaluation, they forced participants to consider tradeoffs between moral and pragmatic values—termed a taboo trade-off. As it turns out, people often react with moral outrage when a material valuation is placed on sacred items or events [44] . This research highlights that sometimes an opposition exists between moral and pragmatic evaluative processes, and that when pitted against one another, moral considerations typically dominate judgments. Again, however, this opposition may be unique to specific types of highly moralized stimuli; further, these studies require participants to engage in pragmatic and moral forms of evaluation simultaneously. To better understand the differences between moral and non-moral evaluation, the current research separates and compares moral and non-moral construals of the same actions.

Some recent research suggests that moral judgments are not intrinsic to issues, but are the result of construing actions as morally-relevant (i.e., moral awareness). In one paper, directing participants' attention to an action that violates moral rules elicited deontological preferences, whereas directing their attention to the outcomes that favored the violation of a moral rule elicited utilitarian preferences [21] . In a different paper, participants were randomly assigned to rate 70 stimuli—including a subset of 20 mundane objects (e.g., refrigerator, desk)—on how morally good or bad they thought the stimuli were or how much they liked or disliked the stimuli [45] . Although the mean ratings were not directly compared across these two conditions, the mundane objects were judged positively (relative to the mid-point of the scale) in the moral condition. These studies not only suggest that people can view relatively mundane stimuli as having moral value, but that construal can change the evaluation [46] .

The Current Research

We present three experiments in which participants were instructed to evaluate the same stimuli—a wide range of positive and negative actions—in moral and/or non-moral terms. In each of the experiments, we asked participants to make moral and non-moral evaluations of the same actions to determine whether moral (rating whether actions were morally good or bad) relative to non-moral (rating whether actions were pragmatically or hedonically good or bad) construal would lead to different evaluative outcomes. We assumed that potential courses of action could be construed in multiple ways, and that how they were construed would influence the nature of the evaluations. By holding stimuli constant, we could examine whether moral versus non-moral construal can influence evaluations. If we observe differences in the nature of resulting evaluations and associated judgments it would provide evidence that the distinction between moral construal (which is triggered by a situational cue in this case) and evaluation stages is an important one, and that moral and non-moral modes of evaluation can be flexibly applied to the same stimuli.

To examine the influence of moral construal, we employed a task facilitation paradigm [47] . This paradigm allowed us to determine whether making a moral versus non-moral evaluation about one's actions was associated with universality judgments about the behavior of others. The paradigm was based on the following logic: if the process of performing the first task (i.e., generating an evaluation) or the information activated during the first task was relevant to the second task (i.e., making a universality judgment), then the time needed to perform the second task should be reduced [48] . Therefore, to assess the extent to which two or more tasks rely on similar processes/information, one can analyze the degree to which performing the first task diminishes the time needed to complete the second task. The task facilitation effect will be greatest when the processes or information are highly similar in both tasks. Similarly, any differences in task facilitation between conditions will reflect the differential relevance of processes or activated information rather than differences in stimuli (which were held constant). It is also possible that differences between conditions might reflect aspects of task interference rather than facilitation.
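For illustration only, the following Python sketch shows how the facilitation logic could be quantified from trial-level data; the DataFrame and its column names (eval_type, universality_rt) are hypothetical and are not part of the original E-Prime/SAS pipeline.

```python
import pandas as pd

# Hypothetical per-trial data: which evaluation preceded the universality
# judgment ("moral" vs. "pragmatic") and the universality-judgment RT in ms.
trials = pd.DataFrame({
    "eval_type": ["moral", "pragmatic", "moral", "pragmatic"],
    "universality_rt": [1210.0, 1490.0, 1302.0, 1405.0],
})

# Mean second-task RT as a function of the preceding (first-task) evaluation.
mean_rt = trials.groupby("eval_type")["universality_rt"].mean()

# Facilitation: how much faster universality judgments are after a moral
# evaluation than after a non-moral one (positive values = more facilitation
# by moral evaluation).
facilitation = mean_rt["pragmatic"] - mean_rt["moral"]
print(mean_rt)
print(f"Facilitation effect: {facilitation:.0f} ms")
```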

In the current research, the first task involved a moral, pragmatic or hedonic evaluation and the second task was a universality judgment. We compared the average reaction time for universality judgments between conditions to determine whether these judgments were more strongly associated with moral, pragmatic or hedonic evaluative modes. We chose to examine universality judgments because universality is widely considered to be a hallmark of moral cognition. Moral philosophers and psychologists have long posited that moral evaluations are (or should be) associated with universal prescriptions—the belief that absolutely everybody should act in the same way [49] , [50] , [51] , [52] , [53] . Other psychologists have argued that moral attitudes are experienced as matters of fact that others could or should be persuaded to share, rather than as matters of preference, taste or convention [16] , [23] , [54] . Further, compared to conventional transgressions, for example, moral transgressions are consistently rated as more wrong, punishable, independent of authority, and universally applicable. These differences emerge early in life and appear to hold across societies [55] , [56] , [57] . We predicted that if moral evaluation is more strongly linked to universality than other forms of evaluation, the time required to make a universality judgment should be shorter following a moral evaluation than a non-moral evaluation. We also assessed whether moral evaluations were more highly correlated with the subsequent universality judgments, and predicted that this correlation should be stronger than the correlation between pragmatic evaluations and the subsequent universality judgments.

In addition, we hypothesized that construing actions in different ways would give rise to observable differences in evaluation, despite holding the stimuli constant. Empirical work suggests that moral evaluation entails black-and-white thinking and moral absolutes. For example, moral attitudes are more durable and resistant to temptation [13] , and are associated with stronger reactions to dissimilar others [23] , both of which are indicators of attitude strength [58] . As noted, other research suggests that moral judgments are often based on moral intuitions or heuristics [1] , leading to quick and simple judgments. We therefore predicted that moral evaluations would be more extreme and rendered faster than non-moral evaluations of the same actions. However, research on moral reasoning raised the alternative prediction that moral evaluations might be more deliberate and, therefore, take longer than non-moral evaluations [59] . Our paradigm allowed us to directly test these competing hypotheses by comparing participants' reaction times to moral and non-moral evaluations of the same stimuli.

We also sought to examine whether people could shift back-and-forth between moral and non-moral evaluations of the same objects. Although studies have recently suggested that moral awareness may be relatively flexible [45] , none have directly examined whether or not people are able to shift back-and-forth between moral and non-moral construals of the same stimuli within the same session. Our multi-level model of the human evaluative system assumes that top-down influences on evaluation are highly flexible and update rapidly [24] , [28] , [29] . We therefore anticipated that people could evaluate actions in moral or non-moral terms in a flexible fashion. To examine this possibility, we had participants switch back-and-forth between moral and non-moral evaluation. As elaborated above, we predicted that evaluations would be faster, more extreme, and more strongly associated with universally prescriptive judgments following moral as compared with pragmatic or hedonic evaluations—and that these effects would shift to reflect the current moral versus non-moral evaluative mode, even if these shifts were separated by mere seconds.

Experiment 1

Overview and predictions.

In the first experiment, participants made moral and pragmatic evaluations of a wide variety of actions—including actions typically construed in moral terms (e.g., murder, honesty), and actions that are not (e.g., riding a bike, eating). In order to test whether moral evaluations were more universally prescriptive than pragmatic evaluations, after rating each action in moral or pragmatic terms, participants then rated how many other people should/should not engage in the action (universality judgment). Each trial consisted of an evaluation (moral or pragmatic) followed by a universality judgment of the same action. We measured the ratings and reaction time for the evaluation and the universality judgment. In addition to exploring the relationship between different evaluations and universality, we used this information to test whether moral (relative to pragmatic) evaluations were associated with faster and more extreme evaluations.

We predicted that if moral evaluations are more strongly linked to universality than pragmatic evaluations, two things should occur. First, the time required to make a universality judgment should be shorter following a moral evaluation than a pragmatic evaluation. Second, moral evaluations should be highly correlated with the subsequent universality judgments, and this correlation should be stronger than the correlation between pragmatic evaluations and the subsequent universality judgments.

Materials and Methods

Participants.

Forty-five undergraduate students (26 females; mean age = 20 years) participated for partial course credit in an Introduction to Psychology course. One participant was removed from the analysis for failing to follow instructions.

Participants arrived at the lab in small groups and completed all tasks on individual computers. Participants read that they would be presented with a number of different behaviors (e.g., getting a flu shot) and would be asked to evaluate them. They were also told that there were at least two ways of evaluating an action: “One way of evaluating an action is by thinking about whether it would be good or bad for you personally. These pragmatic judgments focus on pros and cons, and take into account the benefit or the harm you may experience if you do something. A second way of evaluating an action is by thinking about how moral or ethical it is. Rather than thinking about what would benefit you personally, these moral judgments focus on whether or not you ought to do something because it is the right or the wrong thing to do.” Participants were also told that after evaluating each action, they would be asked to rate how many other people should engage in that behavior.

Participants were presented with 104 actions (e.g., recycle, shop-lift, study; see Appendix S1A for complete list of stimuli) one at a time on a desktop computer using E-Prime (see Figure 1). Participants made moral evaluations for 52 actions using the keyboard, rating “how morally wrong/right it would be for you to [action]” (1 = very wrong to 7 = very right), and pragmatic evaluations for the other 52 actions, rating “how personally bad/good you think it would be for you to [action]” (1 = very bad to 7 = very good). Actions remained on screen until participants made a response (M = 3,683 ms). Following each moral and pragmatic judgment, participants made universality judgments for the same action, rating “how many other people should [action]” (1 = nobody to 7 = everybody).

Figure 1. On each trial, a fixation cross appeared for 1,000 ms before participants made a moral or pragmatic evaluation followed by a universality judgment. We recorded reaction times for the moral/pragmatic evaluation and the universality judgment.

https://doi.org/10.1371/journal.pone.0048693.g001

The actions were presented in four blocks. In each block, participants made moral and universality evaluations for 13 actions before switching to pragmatic and universality evaluations for 13 different actions. The order of moral and pragmatic judgments was counterbalanced such that half of the participants made moral judgments first within each block, and half made pragmatic judgments first. Actions were randomly assigned within participants to be evaluated morally versus pragmatically. Participants never made a moral and pragmatic evaluation of the same action; however, across participants, each action was equally likely to be evaluated according to moral or pragmatic standards. This ensured that any differences between moral and pragmatic evaluations were not due to the specific actions but to differences in moral versus pragmatic evaluation.
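As a rough sketch of this assignment scheme (the study itself was programmed in E-Prime; the function name, seed, and placeholder action labels below are ours), the random split of the 104 actions into moral and pragmatic sets and their arrangement into four blocks of 13 + 13 trials could look as follows in Python:

```python
import random

def assign_conditions(actions, n_blocks=4, items_per_half_block=13, seed=None):
    """Randomly split actions into moral vs. pragmatic evaluation within a
    participant and arrange them into blocks of 13 moral + 13 pragmatic trials."""
    rng = random.Random(seed)
    shuffled = actions[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    moral, pragmatic = shuffled[:half], shuffled[half:]

    blocks = []
    for b in range(n_blocks):
        m = moral[b * items_per_half_block:(b + 1) * items_per_half_block]
        p = pragmatic[b * items_per_half_block:(b + 1) * items_per_half_block]
        # Whether the moral or pragmatic half comes first in a block is
        # counterbalanced across participants (not shown here).
        blocks.append({"moral": m, "pragmatic": p})
    return blocks

# 104 placeholder labels; the real stimuli are listed in Appendix S1A.
actions = [f"action_{i}" for i in range(104)]
blocks = assign_conditions(actions, seed=1)
print(len(blocks), len(blocks[0]["moral"]), len(blocks[0]["pragmatic"]))  # 4 13 13
```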

To assess differences between moral versus pragmatic evaluation, we conducted 2 (evaluation type: moral, pragmatic)×4 (block: 1, 2, 3, 4) analyses of variance (evaluation type and block were repeated-measures factors) on the speed with which participants made evaluations, the overall valence and extremity of their evaluations, and their reaction times to subsequent universality ratings. To analyze reaction times, we removed trials with extremely slow (>10,000 ms) reaction times and log-transformed all remaining reaction times to minimize the influence of outliers and skewness [60]. To ease interpretation, all reported means are based on raw reaction times. Analyses with raw and log-transformed reaction times were nearly identical.
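A minimal sketch of this reaction-time preprocessing step, assuming a hypothetical pandas DataFrame with an rt column in milliseconds (the original analyses were conducted in SAS):

```python
import numpy as np
import pandas as pd

# Hypothetical trial-level data with reaction times in milliseconds.
trials = pd.DataFrame({"rt": [3200, 4100, 12500, 2950, 9800]})

# Remove extremely slow trials (> 10,000 ms) ...
trimmed = trials[trials["rt"] <= 10_000].copy()

# ... and log-transform the remaining RTs to reduce the influence of outliers
# and positive skew. Raw RTs are retained for reporting condition means.
trimmed["log_rt"] = np.log(trimmed["rt"])
print(trimmed)
```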

Traditional analyses of repeated measures have tended to focus on mean-level differences in reaction time or accuracy. However, this approach has the consequence of reducing hundreds of trials to a single score for each participant, diminishing power and meaningful variance. To measure moral and pragmatic judgments more accurately, we used multi-level modeling [61]. Multi-level modeling allows for the direct analysis of responses on individual trials and helps overcome violations of independence that occur as a result of correlated trials within participants. When the assumption of independence is not satisfied, ignoring dependency among trials can lead to invalid statistical conclusions; namely, the underestimation of standard errors and the overestimation of the significance of predictors [62]. We therefore created multi-level models with trials nested within participants to provide more appropriate estimates of regression parameters. Multi-level models were implemented in the SAS PROC MIXED procedure [63].
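The models reported here were fit in SAS PROC MIXED; as a rough, non-authoritative analogue, the sketch below fits a random-intercept model with trials nested within participants using Python's statsmodels, on simulated data with hypothetical column names (participant, eval_type, log_rt) and made-up effect sizes:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated trial-level data: 20 participants x 40 trials, with a small
# (invented) effect of evaluation type on log reaction time.
n_subj, n_trials = 20, 40
data = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_trials),
    "eval_type": np.tile(["moral", "pragmatic"], n_subj * n_trials // 2),
})
subj_intercepts = rng.normal(0, 0.2, n_subj)[data["participant"]]
data["log_rt"] = (8.0
                  - 0.08 * (data["eval_type"] == "moral")
                  + subj_intercepts
                  + rng.normal(0, 0.3, len(data)))

# Random intercept for participant; fixed effect of evaluation type.
model = smf.mixedlm("log_rt ~ eval_type", data, groups=data["participant"])
result = model.fit()
print(result.summary())
```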

Moral evaluations are associated with universality.

Our primary prediction was that moral evaluations would be more strongly associated with universality judgments than pragmatic evaluations. To test this hypothesis, we compared the reaction times of universality judgments following moral versus pragmatic evaluations. As predicted, participants were faster to make universality judgments following moral (M = 1,254 ms) compared to pragmatic (M = 1,443 ms) evaluations, F (1, 43) = 9.17, p <.01. As shown in Table 1 , participants were also faster to make universality judgments during later blocks (a practice effect), F (3, 129) = 72.46, p <.01; however, the effect of condition was not moderated by block ( p  = .94). Moreover, an item-by-item analysis indicated that evaluating an action in moral terms facilitated subsequent universality judgments regardless of the moral rating it received—even actions that were rated as morally neutral led to faster universality judgments. These results demonstrate that moral evaluations facilitated universality judgments more than pragmatic evaluations throughout the study, suggesting that participants were able to switch between moral and pragmatic evaluative modes.

Table 1. https://doi.org/10.1371/journal.pone.0048693.t001

To examine whether this facilitation effect held across the full range of actions or was specific to actions with certain moral ratings, we conducted an item-level analysis. We calculated means across participants for each action: its mean moral rating, and separate mean reaction times for universality judgments following moral and pragmatic evaluations. Using a hierarchical regression analysis, we then regressed mean reaction times for universality judgments on preceding evaluation type (moral vs. pragmatic), the mean moral rating of each action and their interaction term. Consistent with the primary analysis, there was a significant main effect of preceding evaluation type, such that participants were faster to make universality judgments following a moral than a pragmatic evaluation ( p <.01). There were also linear and curvilinear effects of moral rating: actions with higher mean moral ratings (i.e., actions rated as more moral) were associated with slower universality judgments ( p  = .05) and actions with extreme moral ratings (i.e., highly immoral and moral actions) were associated with faster universality judgments ( p <.01). Critically, the main effect of preceding evaluation type was not moderated by the linear ( p >.60) or curvilinear ( p >.90) moral rating terms.
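A hedged Python sketch of this kind of item-level regression is shown below; the simulated data, column names, and effect sizes are illustrative only and are not taken from the reported analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical item-level summary: one row per action x preceding evaluation
# type, with the action's mean moral rating (1-7) and the mean RT of the
# subsequent universality judgment.
n_actions = 104
ratings = rng.uniform(1, 7, n_actions)
items = pd.DataFrame({
    "action": np.repeat(np.arange(n_actions), 2),
    "eval_type": np.tile(["moral", "pragmatic"], n_actions),
    "moral_rating": np.repeat(ratings, 2),
})
items["rating_c"] = items["moral_rating"] - items["moral_rating"].mean()
items["univ_rt"] = (1450
                    - 150 * (items["eval_type"] == "moral")   # facilitation
                    - 30 * items["rating_c"] ** 2             # curvilinear term
                    + rng.normal(0, 60, len(items)))

# Regress universality RT on preceding evaluation type, the linear and
# curvilinear (squared) moral-rating terms, and their interactions.
model = smf.ols("univ_rt ~ eval_type * (rating_c + I(rating_c**2))",
                data=items).fit()
print(model.summary())
```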

We also examined whether universality judgments were more highly correlated with preceding moral than pragmatic evaluations. As predicted, a two-way interaction between evaluation type and the preceding moral/pragmatic rating, F (1, 43) = 11.18, p <.01, indicated that participants' universality ratings were more strongly associated with preceding moral (β = .93) than pragmatic (β = .83) ratings. As such, participants were more likely to indicate that nobody should engage in actions evaluated as immoral relative to actions evaluated as personally negative; conversely, participants were more likely to indicate that everybody should engage in actions evaluated as moral relative to actions evaluated as personally positive. Once again, this interaction was not moderated by block ( p  = .58). These results demonstrate that universality judgments (nobody/everybody) were more highly correlated with moral (wrong/right) than pragmatic (bad/good) ratings. In sum, these results are consistent with the general hypothesis that moral evaluations are associated with universality judgments to a greater degree than pragmatic evaluations.

Moral evaluations are extreme.

We predicted that moral evaluations would be more positive and/or more extreme than pragmatic evaluations. Whereas previous research has suggested that people who rate objects on whether they are morally good or bad may come to rate them more positively [45], we found no difference in the overall ratings of actions when participants made moral (M = 4.07) or pragmatic (M = 4.04) evaluations, F (1, 43) = .22, p  = .64, and there was no interaction with block ( p  = .22).

We predicted that moral evaluations would be more extreme than pragmatic evaluations of the same actions. To test this hypothesis, we computed and compared the extremity of moral versus pragmatic ratings. Since all moral/pragmatic ratings ranged in valence from one to seven, we created curvilinear extremity scores by mean-centering and squaring each rating. For example, the extremity score for a rating of 5 (out of 7) would be computed by subtracting the overall mean (4.05) and squaring the difference, (.95)×(.95), yielding an extremity score of approximately .90. As shown in Table 1 , participants made marginally more extreme moral (M = 5.85) than pragmatic (M = 5.59) ratings of the same actions, F (1, 43) = 3.54, p  = .067. Consistent with the prediction that participants would be able to switch between moral and pragmatic evaluative modes, the effect of evaluation type was not moderated by block ( p  = .39). These results indicate that moral evaluations were more extreme than pragmatic evaluations of the same actions (see Figure 2 ).
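The extremity transformation described above amounts to a single line of arithmetic; a minimal sketch (using the reported overall mean of 4.05 for the worked example, whereas in practice the mean would be computed from the full set of ratings):

```python
import numpy as np

# Hypothetical moral/pragmatic ratings on the 1-7 scale.
ratings = np.array([5, 7, 1, 4, 2])

grand_mean = 4.05                         # overall mean reported in the text
extremity = (ratings - grand_mean) ** 2   # mean-center, then square

# Worked example from the text: a rating of 5 yields (5 - 4.05)^2 = 0.95^2 ≈ .90.
print(extremity)
```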

Figure 2. The actions have been rank ordered on the X-axis from the highest (left) to lowest (right) mean rating. The Y-axis reflects the rating scale (range 1–7). Pragmatic ratings are relatively linear whereas moral ratings are curvilinear, reflecting differences in extremity.

https://doi.org/10.1371/journal.pone.0048693.g002

Moral evaluations are fast.

We predicted that moral judgments would be faster than pragmatic evaluations of the same actions. To test this hypothesis, we compared the reaction times of moral versus pragmatic evaluations. As predicted, participants were faster to provide moral (M = 3,595 ms) than pragmatic (M = 3,877 ms) evaluations of the same actions, F (1, 43) = 11.29, p <.01. As shown in Table 1 , participants were faster to respond in later blocks, F (3, 129) = 99.86, p <.01, indicating a task-learning effect; however, the effect of condition was not moderated by block ( p  = .97). These results demonstrate that moral evaluations were faster than pragmatic evaluations throughout the study, suggesting that these were distinct modes of evaluation and participants were able to switch back and forth between moral and pragmatic evaluative modes.

Consistent with our predictions, moral and pragmatic construals of the same actions were associated with distinct evaluative outcomes. Moral evaluations made on the same set of actions were faster, more extreme and more universally prescriptive than pragmatic evaluations. Further, these distinct consequences were maintained as participants switched back-and-forth between moral and pragmatic evaluations, indicating that these evaluations are not only distinct, but are also highly sensitive to current top-down construal.

These findings are consistent with what is known about the flexibility of the human evaluative system [24] , [28] and suggest that many issues may not necessitate automatic and inflexible construals. Although many issues, such as incest or pushing someone off of a footbridge, may evoke moral considerations, Experiment 1 suggests that people can deliberately construe and evaluate a host of issues in reference to moral considerations. Thus, while chronic moralization about many issues may elicit strong attitudes [1] , [12] , [23] , construal can shape the evaluation of many of these same issues and lead to several different evaluative outcomes.

The results from Experiment 1 provide evidence that thinking morally is associated with universality. Specifically, participants were not only faster to make a universality judgment following a moral than a pragmatic evaluation, but mean moral judgments were more highly correlated with mean universality judgments than pragmatic judgments. However, the wording of the universality item was general enough that it could imply normativity or desirability. Classic research on morality has shown that it is important to distinguish moral norms from mere social conventions or personal preferences. Further, by asking “how many other people should” engage in a given action, we may have left open the definition of “other people”. Participants may interpret “other people” to mean group members at almost any level of social categorization (e.g., university students, Americans, humans). Consequently, narrow interpretations of “other people” allow for relativism, as any moral norm may only be applied to a narrow subset of humanity.

We were, however, interested in assessing the relationship between moral evaluation and universal moral duty—what Kant termed the “categorical imperative” [50] . Categorical imperatives are moral principles that are intrinsically valid and must be obeyed by all people in all situations and circumstances. According to Kant, people should “Act only according to that maxim whereby you can at the same time will that it should become a universal law” [50] . To better approximate this construct, participants in Experiment 2 were asked “whether each action should be universally prohibited or required, where universal means that something applies to all people, without limit or exception”.

The results from Experiment 1 indicated that moral evaluations were faster and more extreme than pragmatic evaluations. These results are consistent with scientific and lay understandings of morality. However, they may also be due to the difference in the scales used for moral versus pragmatic evaluations: pragmatic evaluations were made on a scale from very bad to very good whereas moral evaluations were made on a scale from very wrong to very right . Although both evaluations were made on 7-point scales, the different labels that anchored each scale may have led to different interpretations. One possibility is that the right/wrong anchors may have implied more extreme judgments during moral evaluation [64] . Further, the more extreme labels could have primed a specific mindset that facilitated subsequent universality judgments. Alternatively, the right/wrong anchors may have been interpreted to mean the normativity or correctness of an action (regardless of moral content). For example, participants may have made extreme judgments because some actions are simply correct (e.g., using keys to start a car) and others are incorrect (e.g., using keys to start a refrigerator). We addressed these concerns in the following experiments by holding the scales for moral and pragmatic types of evaluation constant—participants evaluated every action on a scale from very bad to very good .

Experiment 2

In the second experiment, participants made moral and pragmatic evaluations of a wide variety of actions to determine whether moral evaluations were more strongly associated with universal prescriptions than pragmatic evaluations. In order to test this hypothesis, participants rated each action in moral or pragmatic terms and then rated whether the action should be universally prohibited/required. We also attempted to replicate the results from Experiment 1 showing that moral evaluations are associated with faster and more extreme evaluative outcomes than pragmatic evaluations, while holding the scale labels constant for both types of evaluation.

Seventy undergraduate psychology students (50 females; mean age = 19) participated for partial course credit. Four participants were removed from analysis for failing to follow instructions.

The procedure was similar to Experiment 1, with three important differences. The first difference was the inclusion of a different universality question. After evaluating each action, participants were asked whether the action should be universally prohibited or required, where universal means that something applies to all people, without limit or exception. Participants were told “For something to be universally prohibited it means that nobody should be permitted to do this action, without exception. For something to be universally required it means that everybody should be required to do this action, without exception.” Participants made these ratings on a 7-point scale (1 = universally prohibited to 7 = universally required). The second difference was holding the scale labels constant for moral and pragmatic evaluations. Specifically, participants made moral evaluations for 60 actions, rating “how morally bad/good it would be for you to [action]” (1 = very bad to 7 = very good), and pragmatic evaluations for the other 60 actions, rating “how personally bad/good you think it would be for you to [action]” (1 = very bad to 7 = very good). Following each moral and pragmatic evaluation, participants made universality judgments for the same action. The third difference was the inclusion of 16 additional actions during evaluation (see Appendix S1B; for a total of 120 actions).

Actions were presented in four blocks. In each block, participants made moral and universality evaluations for 15 actions before switching to pragmatic and universality evaluations for 15 different actions. The order of moral and pragmatic judgments was counterbalanced such that half of the participants made moral judgments first within each block, and half made pragmatic judgments first. Actions were randomly assigned within participants to be evaluated morally versus pragmatically. Participants never made a moral and pragmatic evaluation of the same action; however, across participants, each action was equally likely to be evaluated according to moral or pragmatic standards. This ensured that any differences between moral and pragmatic evaluations were not due to the specific actions.

To assess differences between moral versus pragmatic evaluation, we conducted 2 (evaluation type: moral, pragmatic)×4 (block: 1, 2, 3, 4) analyses of variance (where evaluation type and block were repeated-measures factors) on the speed with which participants made evaluations, the overall valence and extremity of their evaluations, and their reaction times to subsequent universality ratings. To analyze reaction times, we removed trials with extremely long (>10,000 ms) reaction times and log-transformed all remaining reaction times. To ease interpretation, all reported means are based on raw reaction times.

Our primary prediction in Experiment 2 was that moral evaluations would be more strongly associated with universality judgments than pragmatic evaluations. To test this hypothesis, we compared the reaction times of universality judgments following moral versus pragmatic evaluations. As predicted, participants were faster to make universality judgments following moral (M = 1,438 ms) compared to pragmatic (M = 1,542 ms) ratings, F (1, 65) = 8.66, p <.01. A main effect of block indicated that participants were faster to make universality judgments during later blocks, F (3, 195) = 172.29, p <.01; however, the effect of evaluation type on the speed of universality judgments was not moderated by block ( p  = .50). Moreover, an item-by-item analysis indicated that evaluating an action in moral terms facilitated subsequent universality judgments regardless of the moral rating it received—even actions that were rated as morally neutral. Replicating the results from Experiment 1 , moral evaluations facilitated universality judgments more than pragmatic evaluations throughout the study, suggesting that participants were able to switch between moral and pragmatic evaluative modes.

As in Experiment 1 , we conducted an item-level analysis to examine whether this facilitation effect held across the full range of actions or was specific to actions with certain moral ratings. Consistent with the primary analysis, there was a significant main effect of preceding evaluation type, such that participants were faster to make universality judgments following a moral than a pragmatic evaluation ( p <.03). There were also linear and curvilinear effects of moral rating: actions with higher mean moral ratings (i.e., actions rated as more moral) were associated with slower universality judgments ( p <.02) and actions with extreme moral ratings (i.e., highly immoral and moral actions) were associated with faster universality judgments ( p <.01). Critically, the main effect of preceding evaluation type was not moderated by the linear ( p >.15) or curvilinear ( p >.30) moral rating terms.

Following the results of Experiment 1 , we predicted that universality judgments would be more highly correlated with preceding moral than pragmatic evaluations. As predicted, a two-way interaction between evaluation type and the preceding moral/pragmatic rating, F (1, 65) = 20.83, p <.01, indicated that participants' universality ratings were more strongly associated with preceding moral (β = .75) than pragmatic (β = .70) ratings. As such, participants were more likely to indicate that nobody should engage in actions evaluated as immoral relative to actions evaluated as personally negative; conversely, participants were more likely to indicate that everybody should engage in actions evaluated as moral relative to actions evaluated as personally positive. We also found an unexpected three-way interaction with block, F (3, 195) = 2.66, p  = .05, indicating that this interaction was strongest during the first two blocks. However, this effect was not replicated in the other experiments. In sum, these results are consistent with the general hypothesis that moral evaluations are associated with universality judgments to a greater degree than pragmatic evaluations.

Following the results of Experiment 1 , we predicted that moral evaluations would be more extreme than pragmatic evaluations of the same actions, but not more positive or negative. Consistent with Experiment 1 , we found no difference in the overall rated valence of actions when participants made moral (M = 4.21) or pragmatic (M = 4.27) evaluations ( p  = .64). As predicted, participants made more extreme moral (M = 5.53) than pragmatic (M = 5.32) ratings of the same actions, F (1, 65) = 3.93, p  = .05 (see Table 2 ). Consistent with the prediction that participants would be able to switch between moral and pragmatic evaluative modes, the effect of evaluation type was not moderated by block ( p  = .35). These results indicate that moral evaluations were more extreme than pragmatic evaluations of the same actions.

Table 2. https://doi.org/10.1371/journal.pone.0048693.t002

Following the results of Experiment 1 , we predicted that moral judgments would be faster than pragmatic evaluations of the same actions. To test this hypothesis, we compared the reaction times of moral versus pragmatic evaluations. As predicted, participants were faster to provide moral (M = 3,188 ms) than pragmatic (M = 3,386 ms) evaluations of the same actions, F (1, 65) = 14.18, p <.01. Participants were also faster to respond in later blocks, F (3, 195) = 85.85, p <.01, indicating a task-learning effect; however, the effect of condition was not moderated by block ( p  = .94). Replicating the results from Experiment 1 , moral evaluations were faster than pragmatic evaluations across the blocks, suggesting that participants were able to switch back and forth between moral and pragmatic evaluative modes.

The results of the first two experiments provide convergent evidence that thinking morally is associated with universality. Specifically, participants were not only faster to make a universality judgment following a moral than a pragmatic evaluation, but mean moral judgments were more highly correlated with mean universality judgments than were pragmatic judgments. Comparing the effects of moral and pragmatic evaluation is important because it illustrates how easily people can depart from rational, pragmatic decision-making and shows that this departure has important implications for evaluative outcomes. However, since moral judgments were only compared to pragmatic evaluations, any inferences about the nature of moral evaluation from the first two experiments must rely on both the nature of pragmatic judgment and the nature of the psychological contrast between moral and pragmatic construal (e.g., moral judgments may be less complex).

To address these concerns, we compared moral judgment to an alternative type of judgment in Experiment 3—a simple judgment about whether each action is pleasant or unpleasant [65]. We also reasoned that the hedonic evaluations were likely to be highly subjective, which might lead to relatively weak associations with universality, even relative to pragmatic evaluations. The outcomes of moral evaluation were thus compared with the outcomes of a simple hedonic evaluation. We also compared differences between moral and hedonic evaluation with differences between moral and pragmatic evaluation to see if the non-moral condition (pragmatic versus hedonic) had any major implications for interpreting the results from the first two experiments.

The results from the first two experiments indicated that participants were able to shift back-and-forth between moral and pragmatic evaluations with distinct consequences, indicating that the evaluations are sensitive to construal. Participants were able to evaluate a series of actions using moral considerations and then quickly shift to evaluate a separate series of actions using pragmatic considerations. Although this level of flexibility is impressive, no single participant was forced to provide moral and pragmatic evaluations of the same object(s). If moral evaluation is truly flexible, participants may be able to evaluate the exact same action in very different ways depending on their current evaluative mode. Moreover, this flexibility should lead to different evaluative outcomes for the same stimuli even when the different evaluations take place only moments apart. For example, a person who is considering the pragmatic costs of recycling but is suddenly reminded to consider its moral implications may have a sudden change of heart about discarding an empty bottle in the trash. Although this example seems intuitively plausible, human concerns about being and appearing consistent [66] , [67] , along with psychological anchoring processes [68] render this a conservative test of the flexibility hypothesis. In Experiment 3 , each participant made both moral and non-moral evaluations of the same set of actions during the same experimental session, and type of evaluation switched semi-randomly on a trial-by-trial basis (see below). We predicted that moral evaluations would be associated with different evaluative outcomes (e.g., universality) relative to non-moral evaluations, even when participants made both forms of evaluation toward the same objects during the same session.

Experiment 3

We have presented evidence that participants were able to shift back-and-forth between moral and non-moral evaluative modes in a relatively flexible fashion. The results from the first two experiments suggest that people can shift between moral and non-moral evaluative modes in a tonic fashion—shifting modes for a series of trials (blocks) at a time. A stronger form of our dynamical systems approach would predict that people can shift between moral and non-moral evaluative modes in a phasic fashion. If so, it would suggest that the construal process can influence evaluation on a moment-to-moment basis. In Experiment 3 , participants switched semi-randomly between moral and non-moral (pragmatic or hedonic) evaluations on a trial-by-trial basis.

As in the first two experiments, participants evaluated each action in moral or non-moral (pragmatic or hedonic) terms and then rated whether the action should be universally prohibited/required. However, in this experiment we compared differences between moral and pragmatic evaluations with differences between moral and hedonic evaluations to see if the non-moral condition (pragmatic versus hedonic) had any major implications for interpreting the results from the first two experiments. This allowed us to determine whether moral evaluations were more strongly associated with universal prescriptions than two forms of non-moral evaluation. We predicted that moral evaluations would be associated with different evaluative outcomes (e.g., universality) relative to non-moral evaluations, even when participants were forced to shift back and forth between moral and non-moral evaluations every few seconds.

One hundred and forty-eight undergraduate psychology students (84 females; mean age = 20) participated for partial course credit. Three participants did not complete the experiment and were not included in the analysis.

The procedure was similar to the previous experiment, with three important differences. First, participants were randomly assigned to make non-moral evaluations on the basis of pragmatic or hedonic concerns. Thus, half the participants made moral and pragmatic evaluations (as in the previous two experiments), and half the participants made moral and hedonic evaluations. Second, participants shifted between moral and non-moral evaluations on a trial-by-trial basis. However, the order of every pair of moral and non-moral trials was randomized to ensure that participants could not anticipate that every even (or odd) numbered trial was always moral (or non-moral). For example, if participants made a moral and then a non-moral evaluation on the first two trials, the order of the moral and non-moral evaluations in the subsequent two trials was randomly determined. This design allowed us to test whether the previous effects of moral versus non-moral evaluations were based on some kind of tonic moral versus non-moral mindset, or whether participants were capable of shifting from moral to non-moral evaluations in a rapid and flexible (phasic) fashion. Third, each participant made both moral and non-moral evaluations of the same set of actions during the same experimental session (in the previous experiments, participants never evaluated the same action twice).
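One simple way to generate this pairwise, semi-random trial order (a sketch under our own assumptions, not the study's actual E-Prime script) is shown below:

```python
import random

def make_trial_order(n_pairs, seed=None):
    """For each consecutive pair of trials, randomly decide whether the moral
    or the non-moral evaluation comes first, so participants cannot predict
    the evaluation type of the upcoming trial."""
    rng = random.Random(seed)
    order = []
    for _ in range(n_pairs):
        pair = ["moral", "non-moral"]
        rng.shuffle(pair)
        order.extend(pair)
    return order

print(make_trial_order(n_pairs=4, seed=2))
# e.g., ['non-moral', 'moral', 'moral', 'non-moral', ...]
```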

To assess differences between moral versus non-moral evaluation, we conducted 2 (evaluation type: moral, non-moral; within participants)×2 (non-moral condition: pragmatic, hedonic; between participants) analyses on the speed with which participants made evaluations, the overall valence and extremity of their evaluations, as well as their reaction times to subsequent universality ratings. To analyze reaction times, we removed trials with extremely long (>10,000 ms) reaction times and log-transformed all remaining reaction times. To ease interpretation, all reported means are based on raw reaction times.

Our primary prediction in Experiment 3 was that moral evaluations would be more strongly associated with universality judgments than non-moral evaluations, regardless of whether moral evaluations were contrasted with pragmatic or hedonic evaluations. To test this hypothesis, we compared the reaction times of universality judgments following moral versus non-moral (pragmatic and hedonic) evaluations. As predicted, participants were faster to make universality judgments following moral (M = 1,651 ms) compared to non-moral (M = 1,701 ms) ratings, F (1, 143) = 10.81, p <.01. There was no main effect of the non-moral control condition, F (1, 16848) = 1.47, p  = .23, and the effect of evaluation type did not differ when compared with pragmatic versus hedonic evaluations ( p  = .91). The estimated G matrix was not positive definite during the analysis of cross-level interactions when the between-subjects variables were modeled as random effects; therefore, between-subjects main effects and interactions were modeled as fixed effects. The degrees of freedom reflect the difference between random and fixed effects parameters. Replicating and extending the results from the first two experiments, moral evaluations facilitated universality judgments more than pragmatic or hedonic evaluations throughout the session, suggesting that participants were able to switch between moral and non-moral modes of evaluation. More importantly, Experiment 3 provided evidence that construal affected universality judgments for the same actions within subjects: the same participants responded differently when evaluating the same action morally versus non-morally.

Following the results from the first two experiments, we predicted that universality judgments would be more highly correlated with preceding moral than non-moral (pragmatic or hedonic) evaluations. As predicted, a two-way interaction between evaluation type and the preceding moral/non-moral rating, F (1, 143) = 202.39, p <.01, indicated that participants' universality ratings were more strongly associated with preceding moral (β = .72) than non-moral (β = .58) ratings. As such, participants were more likely to indicate that nobody should engage in actions evaluated as immoral relative to actions evaluated as pragmatically or hedonically negative; conversely, participants were more likely to indicate that everybody should engage in actions evaluated as moral relative to actions evaluated as pragmatically or hedonically positive.

These effects were qualified by a three-way interaction between evaluation type, the preceding rating, and whether the non-moral condition was pragmatic or hedonic, F (1, 16,844) = 19.15, p <.01. When the control condition involved pragmatic evaluation, there was a two-way interaction between evaluation type and the preceding moral/pragmatic rating, F (1, 73) = 53.13, p <.01, indicating that participants' universality ratings were more strongly associated with preceding moral (β = .72) than non-moral (β = .61) ratings. However, when the control condition involved hedonic evaluation, the two-way interaction between evaluation type and the preceding moral/hedonic rating was stronger, F (1, 70) = 159.98, p <.01, indicating that participants' universality ratings were more strongly associated with preceding moral (β = .72) than non-moral (β = .56) ratings, and that this difference was greater than in the moral/pragmatic condition. Although moral evaluations were strongly linked to universality judgments of the same action in both conditions, these results suggest that participants may have been more willing to generalize their pragmatic evaluations to others than their hedonic evaluations. However, the most robust effect remains that universality judgments (universally prohibited/required) were more highly correlated with moral than non-moral ratings—whether they were pragmatic or hedonic in nature. These results are consistent with the general hypothesis that moral evaluations are associated with universality judgments to a greater degree than other forms of evaluation.

Following the results of the first two experiments, we predicted that moral evaluations would be more extreme than non-moral evaluations of the same actions, but not more positive or negative. Consistent with the previous experiments, we found no difference in the overall valence ratings of actions when participants made moral (M = 4.19) or non-moral (M = 4.17) evaluations, F (1, 143) = .21, p  = .65. As predicted, participants made more extreme moral (M = 5.10) than non-moral (M = 4.64) ratings of the same actions, F (1, 143) = 26.67, p <.01 (see Table 3 ). There was no effect of non-moral (pragmatic versus hedonic) evaluation ( p  = .60), and the effect of evaluation type was not moderated by the non-moral evaluation ( p  = .66). In other words, the nature of the non-moral condition did not make a difference: people's moral evaluations were more extreme than their pragmatic or hedonic evaluations of the same actions.

Table 3. https://doi.org/10.1371/journal.pone.0048693.t003

Following the results of the first two experiments, we predicted that moral judgments would be faster than non-moral evaluations of the same actions. To test this hypothesis, we compared the reaction times of moral versus pragmatic and hedonic evaluations. As predicted, participants were faster to provide moral (M = 3,661 ms) than non-moral (M = 3,822 ms) evaluations of the same actions, F (1, 143) = 28.86, p <.01 (see Table 3 ). This increase in overall reaction time relative to the previous two experiments is likely due to the fact that participants were forced to switch evaluations on a trial-by-trial basis in Experiment 3 , inducing a task-switching cost [69]. There was no effect of the non-moral (pragmatic versus hedonic) evaluation ( p  = .72), and the effect of evaluation type was not moderated by the non-moral evaluation ( p  = .15). In other words, the nature of the non-moral condition did not make a difference: people were faster to make moral evaluations than pragmatic or hedonic evaluations. Replicating and extending the results from the first two experiments, moral evaluations were faster than non-moral evaluations throughout the study, suggesting that participants were able to switch back and forth between moral and non-moral evaluative modes on a trial-by-trial basis.

The results from Experiment 3 replicate and extend the results from the first two experiments. The three experiments provide convergent evidence that thinking morally is associated with universality. Specifically, participants were not only faster to make a universality judgment following a moral than a non-moral evaluation, but mean moral judgments were more highly correlated with mean universality judgments than non-moral judgments. By replicating this pattern of effects when comparing moral with both pragmatic and hedonic modes of evaluation, we have increased confidence that the effects of moral evaluation are not merely a consequence of the psychological contrast between moral and pragmatic modes. However, the results of Experiment 3 indicated that different non-moral modes of evaluation are not equivalent: people seemed more willing to universalize their pragmatic than their hedonic evaluations.

Experiment 3 also provided the first evidence that people are able to shift back and forth between moral and non-moral evaluative modes in a highly flexible fashion, shifting construal on a trial-by-trial basis. Whereas the results from the first two experiments provided evidence that people could shift between moral and non-moral evaluative modes in a tonic fashion (shifting modes for a series of trials at a time), Experiment 3 indicated that people can shift on-line between moral and non-moral evaluative modes in a phasic fashion. This suggests that the effects of construal are not limited to relatively stable evaluative modes or mindsets. Further, showing these differences within participants indicates that construal can override consistency motives [66], [67] and psychological anchoring [68].

General Discussion

We present three experiments showing that moral evaluations are susceptible to top-down influences. Specifically, we show that people can deliberately construe a wide variety of actions through either a moral or a non-moral lens, with different consequences for their evaluations. Thus, moral evaluation is not strictly a bottom-up process. The current research provides evidence that moral and non-moral construals of the same actions lead to distinct evaluative outcomes. Specifically, the moral evaluative mode elicited faster, more extreme, and more universally prescriptive evaluations than non-moral evaluative modes, consistent with longstanding assumptions about morality. In short, evaluating an action in moral terms increased people's inclination to render judgments in absolutes: simpler, more extreme, black-and-white evaluations. These differences in evaluative outcomes are consistent with the contention that moral and non-moral construals triggered different evaluations. In addition, our experiments suggest that people can shift back and forth between moral and non-moral evaluations of the same actions very quickly, consistent with dynamical models of evaluation [24], [28], [29].

Much of the previous research on morality has made an implicit assumption that moralization leads people to reflexively construe certain actions or dilemmas as moral. Although this may certainly be the case for many issues, such as murder and incest, the current research suggests that people can construe and evaluate a host of issues according to moral standards [45] . Thus, while moralization involves the development of relatively stable moral contents (e.g., standards and values) and may instigate the construal of certain acts in moral terms, whether these contents influence an evaluation at any given moment likely depends on whether an action or issue is construed in moral or non-moral terms. As such, it seems likely that issues that have not necessarily been extensively moralized (e.g., recycling) may allow for the most flexible evaluations and lead to the largest differences between moral and non-moral evaluative modes [70] . In contrast, actions that are highly moralized (smothering a baby) or mundane (wearing a sweater vest) may allow for less flexibility [23] .

To investigate the influence of construal, we instructed participants to evaluate the same stimuli in moral versus non-moral (e.g., pragmatic) terms. Our experimental paradigm, which holds stimuli constant while varying the mode of evaluation, allowed us to investigate how flexibly moral versus non-moral evaluative modes can be applied to judgments of the same stimuli, and ensured that differences observed in the nature of evaluative outcomes were due to differences in the nature of evaluative construal rather than the stimuli. As we predicted, evaluating actions on the basis of moral versus non-moral considerations led to different evaluative outcomes. Specifically, the present data suggest that moral evaluations are more likely to be applied universally to others. In all three experiments, we found that moral evaluations were more strongly associated with universal prescriptions than non-moral evaluations. Future research should explore the relationship between moral evaluation and universality, including whether the effects of universality extend across time as well as people, and the implications of these associations for human judgment and decision-making.

Building on our dynamical model of the evaluative system, we distinguish between the contents (e.g., attitudes and standards) and processes (e.g., mental operations and computations) of evaluation [24] , [28] , [29] . Accordingly, as long as there are some moral contents that participants can bring to bear on their evaluation, moral evaluations may be applied to actions that have not been typically seen as moral. For example, one can bring to mind the moral aspects of recycling (e.g., saving the environment) even if more pragmatic aspects normally predominate (e.g., the pain of driving to the local recycling depot). Thus, the current work extends the research by Skitka and colleagues [23] by showing that evaluating an issue as moral (or not) varies not only across individuals, but within individuals within seconds as a function of the construal the person is applying. Indeed, Experiment 3 provided evidence that construal influenced universality judgments for the same actions within the same people.

Moral Construal

The current research manipulates the construal people use to evaluate different stimuli. Many others have proposed that moral cognition can be understood in terms of the processes involved in moral reasoning rather than final judgments [14], [15]. Kohlberg [27] had participants respond to moral dilemmas and identified their stage of moral development on the basis of their reasoning. Rationalist approaches in moral psychology stress that moral judgments are reached through a process of reasoning and reflection [15], [16], [71]. More recently, researchers have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made, arguing that certain situations automatically elicit moral intuitions, which guide moral judgments [1]. According to the intuitionist model, moral reasoning frequently follows an initial judgment, providing a post hoc justification but not the causal impetus for a moral judgment. From the intuitionist perspective, unconscious, affective responses guide reactions to such morally charged scenarios, and people often engage in deliberate reasoning only after they have already made an initial moral judgment.

Two-stage models of ethical decision-making argue that the "eliciting situation" (e.g., a stimulus, situation, or course of action) is only likely to be judged as morally right or wrong when prior processes first determine that the situation is to be evaluated in moral terms. Given the variety of actions that elicited differences between moral and non-moral evaluations in the current experiments, we contend that moral evaluation can extend beyond the actions and dilemmas that are typically examined in studies on moral cognition. Thus, while certain eliciting situations, such as smothering a baby [72], may serve to directly trigger moral awareness in addition to providing a basis for the resultant moral judgment, many situations are highly sensitive to framing and construal [73]. For instance, research suggests that people can make decisions using different perspectives, from the legal viewpoint of a judge to the moral viewpoint of a citizen, and these different perspectives can shape the processes underlying legal and moral decisions [46].

Although the cognitive reasoning and intuitionist models of moral evaluation are not necessarily inconsistent with a dissociation between awareness and judgment stages in moral evaluation, by using highly moralized stimuli and/or by asking people to form moral judgments (cuing moral awareness), these research traditions may over-estimate the extent to which moral evaluation is automatically triggered by stimulus features. Paradigms designed to examine moral judgment in both the moral reasoning and intuitionist traditions are predominantly stimulus-driven, confronting participants with situations or dilemmas that are assumed a priori to be morally relevant (or not). Many of these studies cannot easily discriminate effects due to differences between moral and non-moral forms of evaluation from effects due to stimulus differences; even in studies that contrast judgments and decisions made in response to ostensibly moral and non-moral situations, these conditions differ both in type of evaluation and type of stimuli. We therefore suggest that extant research on the psychological underpinnings of moral evaluation does not provide much direct evidence that moral awareness (choosing to evaluate stimuli in moral terms) is independent of moral judgment.

Experimental approaches like the one employed here are important for several reasons. First, they allow for a test of the contention that many of the same actions can be evaluated in moral and non-moral ways and that these different types of evaluation have distinct evaluative outcomes. Second, this approach helps disentangle the awareness and judgment stages of moral evaluation. By having participants evaluate the same actions in moral and/or non-moral terms, we directly tested whether evaluating stimuli in moral terms gave rise to distinct outcomes. Research in the cognitive reasoning tradition that explicitly directs participants to evaluate situations such as the Heinz Dilemma in moral terms lacks the non-moral control conditions necessary to dissociate these processes. In contrast, research in the intuitionist tradition, in which participants evaluate stimuli that are presumably moral (or not), cannot distinguish effects due to moral evaluative processes from effects due to stimulus characteristics. However, if different evaluative outcomes are observed when participants evaluate the same actions in moral versus non-moral terms, this supports the notion that moral processes themselves have evaluative consequences beyond those associated with specific stimulus characteristics. By directly comparing the evaluative outcomes of moral versus non-moral modes of evaluation, we found that moral evaluation elicited faster, more extreme, and more universally prescriptive evaluations than non-moral evaluation.

We are not suggesting, however, that moral and non-moral (i.e., pragmatic or hedonic) modes of evaluation are completely independent: differences observed between moral and non-moral evaluation do not imply that the two forms of evaluation do not share many of the same underlying processes. Many neural component processes, especially those involved in representing value, are likely common to both forms of evaluation [36]. Further, moral and pragmatic evaluations of the same action may often lead to the same behavioral outcomes. Indeed, religious and secular institutions impose punishments on many forms of self-interested behavior to help ensure that pragmatic and moral concerns are closely aligned to the benefit of the collective. For example, the decision to commit a crime is often not only immoral but also likely to incur severe legal punishments. In this way, legal and social sanctions act as deterrents for otherwise "immoral" behavior. Humans have spent centuries creating legal systems and social institutions (including religions) that align pragmatic rewards and punishments with moral concerns. This normally strong relationship between moral and non-moral evaluations mitigates potential differences, and makes our experimental tests of differences between these evaluative modes conservative.

Lay Definitions of Morality

One of the major questions facing moral psychology is how one knows whether something is in fact a moral issue [74] , [75] , [76] . For the most part, researchers have used theoretical rationale or face validity as the primary criterion for morality, assuming that acts such as incest and murder are likely chronically construed as moral and that attributions of blameworthiness reflect moral evaluations. In the current research, we relied on participants' lay understanding of moral and non-moral evaluation. In some regards, this is a strength of the current research as it bypasses assumptions on the part of the researchers about the nature of moral versus non-moral modes of evaluation. It does, however, raise the possibility that the differences observed between moral and non-moral evaluation may have stemmed, at least in part, from participants' lay theories about the difference between these two dimensions of evaluation because our paradigm made participants aware that they were providing both moral and non-moral evaluations. However, a similar pattern of results holds for several non-moral evaluations (pragmatic and hedonic), suggesting that our effects are not specific to lay theories about the distinction between moral and pragmatic evaluations. In any event, future research should examine whether making this contrast salient enhances the reported differences.

Future Research

In each of the experiments reported above, we instructed people to evaluate the same stimuli in moral versus non-moral terms. This experimental approach, which holds stimuli constant while varying the mode of evaluation, is important because it allows us to investigate how construal processes can be applied to judgment of the same stimuli, and because it ensures that differences observed in the nature of evaluative outcomes are due to differences in the nature of evaluative processing rather than the stimuli. As we noted above, we intentionally used the term "processed" in the broad sense to include stimulus construal. Although our experimental design ensured that participants evaluated the exact same stimuli in moral and non-moral evaluative modes, this does not preclude the possibility that different underlying representations (i.e., contents) were activated and applied to the evaluations in both modes. We hold open the possibility that any differences in evaluative outcomes may reflect different underlying representations. For example, evaluating the moral implications of recycling may activate a different set of contents (e.g., representations based on beliefs and attitudes about global warming, social responsibility, etc.) than evaluating the pragmatic implications of recycling (e.g., the costs and benefits in terms of the time and money involved in recycling). Future research should use a combination of behavioral and physiological measures to assess underlying differences in process versus content [77].

Similarly, neuroimaging could be used to help understand the hierarchical relationship between the brain systems implicated in moral construal and evaluation, since these systems are frequently confounded in extant research. We expect that the region of ventral medial prefrontal cortex frequently implicated in moral decision-making studies [78] , [79] may be sensitive to top-down construals instigated by higher-order control processes implemented by the fronto-parietal network [28] , [80] , [81] . This work may also elucidate the mental computations that underlie moral and non-moral evaluation.

Evidence that moral versus non-moral evaluations can be moderated by construal, applied to a wide range of actions, and associated with distinct evaluative outcomes has a number of important implications. First, the processes associated with morality may be sensitive to motivation and social context. Moral framing has been shown to increase generosity in economic games [82]. Likewise, people primed with religious constructs may be more likely to see the moral implications of their actions, leading to more generous behavior [83]. Framing issues in terms of their moral implications may also reduce selfish behavior in a variety of contexts, such as cheating or paying taxes. Second, our data suggest that morality is not always associated with specific issues, but stems from the construal of those issues. Third, this raises the possibility that moral construal may lead to systematic biases in decision-making and behavior. For example, considering the pragmatic versus moral implications of voting might have a profound effect on voting behavior. If people focus on the time and energy involved, they may be unlikely to vote; alternatively, if the same people focus on their moral duty as voters to preserve a healthy democracy, they may be willing to vote despite the personal costs. As such, construing the same action in moral versus pragmatic terms may ultimately lead to different evaluations and behavior [84], [85].

People engage in countless actions on a daily basis and these actions can be based on a number of considerations, from gut instinct to a rational cost-benefit analysis. The current research suggests that people can also base their actions on their moral standards, and using these standards alters the mental operations used to evaluate those actions. As a consequence, ostensibly moral acts may be construed and processed according to other standards, and vice-versa. The effects of construal highlighted in the current research suggest that generating an appropriate construal (moral or otherwise) may be one of the most important aspects of moral or ethical decision-making [86] . The failure, for example, to consider the pragmatic implications of certain decisions could lead to unnecessarily swift or extreme decisions. Conversely, the failure to consider the moral implications of one's actions may ultimately lead people to act immorally in pursuit of pragmatic ends [87] . Future research should continue to investigate why people evaluate certain actions in moral terms as opposed to analyzing their pros and cons or considering their hedonic value.

Supporting Information

Appendix S1.

(A) Participants were presented with 104 actions one at a time on a desktop computer using E-Prime (Experiment 1). (B) Participants were presented with 16 additional actions, for a total of 120 actions (Experiments 2 and 3).

https://doi.org/10.1371/journal.pone.0048693.s001

Acknowledgments

The authors would like to thank members of NYU Social Perception and Evaluation Lab and The Ohio State Social Cognitive Science lab for their thoughtful comments on various stages of this research.

Author Contributions

Conceived and designed the experiments: JV DP IH WC. Performed the experiments: JV DP. Analyzed the data: JV DP. Wrote the paper: JV DP IH WC.

  • 4. Darwin C (1874) The descent of man and selection in relation to sex. New York: Rand, McNally & Company.
  • 7. Sober E, Wilson DS (1998) Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: Harvard University Press.
  • 8. Hauser M (2006) Moral minds: How nature designed our universal sense of right and wrong. New York: HarperCollins Publishers.
  • 10. Bandura A (1991) Social cognitive theory of moral thought and action. In: Kurtines WM, Gewirtz JL, editors. Handbook of moral behavior and development. Hillsdale, NJ: Lawrence Erlbaum. pp. 45–103.
  • 14. Kohlberg L (1984) Essays on moral development: Vol. 2. The psychology of moral development. New York: Harper.
  • 15. Piaget J (1932/1965) The moral judgment of the child. New York: Free Press.
  • 16. Turiel E (1983) The development of social knowledge: Morality and convention. Cambridge, England: Cambridge University Press.
  • 25. Rand A (1964) The virtue of selfishness. New York, NY: Signet.
  • 26. Greenspan A (2007) The age of turbulence: Adventures in a new world. New York, NY: Penguin Press.
  • 27. Kohlberg L (1958) The development of modes of thinking and choices in years 10 to 16. Chicago: University of Chicago.
  • 30. Chaiken S, Trope Y (1999) Dual-process theories in social psychology. New York: Guilford Press.
  • 39. Rest JR (1986) Moral development: Advances in research and theory. New York: Praeger Publishers.
  • 49. Hare RM (1963) Freedom and Reason. Oxford, England: Clarendon Press.
  • 50. Kant I (1785/1993) Grounding for the metaphysics of morals (Ellington JW, translator). Hackett.
  • 51. Sidgwick H (1907/1981) The methods of ethics. Hackett.
  • 52. Hare RM (1955) Universalizability. Proceedings of the Aristotelian Society.
  • 53. Singer MG (1961) Generalization in ethics. New York: Knopf.
  • 58. Petty RE, Krosnick JA, editors (1995) Attitude strength: Antecedents and consequences. Mahwah, N.J.: Lawrence Erlbaum Associates.
  • 61. Goldstein H (1995) Multilevel Statistical Models. London: Arnold.
  • 62. Cohen J, Cohen P, West SG, Aiken LS (2003) Applied regression/correlation analysis for the behavioral sciences. Mahwah, NJ: Erlbaum.
  • 66. Festinger L (1957) A theory of cognitive dissonance. Palo Alto, CA: Stanford University Press. 291 p.
  • 71. Kohlberg L (1969) Stage and sequence: The cognitive developmental approach to socialization. In: Goslin DA, editor. Handbook of socialization theory and research. Chicago: Rand.
  • 73. Kappes A, Van Bavel JJ (2012) Subtle framing shapes moral judgments. New York: New York University.
  • 75. Haidt J (2012) The righteous mind: Why good people are divided by politics and religion. Pantheon.
  • 77. Cunningham WA, Packer DJ, Kesek A, Van Bavel JJ (2009) Implicit measures of attitudes: A physiological approach. In: Petty RE, Fazio RH, Brinol P, editors. Attitudes: Insights from the new implicit measures. New York: Psychology Press. pp. 485–512.
  • 85. Packer DJ, Van Bavel JJ, Haas IJ, Cunningham WA (2011) Shifting the calculus: The differential influence of moral versus pragmatic evaluative modes on voting intentions. Bethlehem, PA: Lehigh University.
  • 87. Arendt H (1963) Eichmann in Jerusalem: A report on the banality of evil. London: Faber & Faber.


What are the differences between moral standards and non-moral standards?  


Moral standards and non-moral standards differ in their normativity and functions. Moral standards, such as those governing ethical conduct, are demanding in nature and impose obligations on individuals. They provide a framework for evaluating actions as right or wrong and guide individuals in making moral decisions. Non-moral standards, such as standards of rationality, are recommending in nature and suggest actions that are beneficial or optimal. They serve as guidelines for practical decision-making but do not carry the same moral weight as moral standards. While moral standards are based on principles that assign moral status to actions, non-moral standards focus on achieving desired outcomes. Understanding the differences between these two types of standards is important for comprehending the complexities of moral decision-making and ethical behavior in various contexts.





Non-moral standards versus moral standards.

Morality refers to the standards that a group or an individual holds about what is good or evil and what is right or wrong. Moral standards concern the behavior of individuals, in particular how they distinguish right from wrong and good from bad conduct.

Moral standards include the rules people hold about the kinds of actions they believe to be morally right or wrong, as well as the values they place on the kinds of objects or states of affairs they believe to be morally good or bad. Some ethicists equate moral standards with moral values and moral principles.

Non-Moral Standards

Non-moral standards, by contrast, are rules that are unrelated to moral or ethical considerations: they are either not linked to morality at all or lack ethical significance by their very nature. Familiar examples include house rules, the rules of games, standards of fashion, and rules of etiquette. Non-moral standards also include legal statutes, such as laws and ordinances, as well as some traditions and religious rules.

Several characteristics help distinguish moral standards from non-moral ones. Moral standards deal with matters of serious wrong or significant benefit; that is, they concern conduct that can seriously injure or benefit other people. Non-moral standards generally do not. For example, whether an individual follows or violates the rules of basketball may matter within the game, but it does not affect anyone's life or well-being.

Moral standards also have an overriding, or hegemonic, character. When a moral standard imposes an obligation on a person, that obligation may conflict with the person's self-interest or with non-moral standards, and the moral obligation is expected to take precedence. In other words, moral standards are not simply one set of rules among the many found in society; they take precedence over other kinds of considerations, whether legal, prudential, or aesthetic. A choice may be defensible on aesthetic or prudential grounds, such as leaving one's family behind to pursue some personal ideal, and still remain open to moral criticism.




Business Ethics and Corporate Governance, Second Edition


HOW ARE MORAL STANDARDS FORMED?

There are some moral standards that many of us share in our conduct in society. These moral standards are influenced by a variety of factors such as the moral principles we accept as part of our upbringing, values passed on to us through heritage and legacy, the religious values that we have imbibed from childhood, the values that were showcased during the period of our education, the behaviour pattern of those who are around us, the explicit and implicit standards of our culture, our life experiences and more importantly, our critical reflections on these experiences. Moral standards concern behaviour which is very closely linked to human well-being. These standards also take priority over non-moral standards, ...




Related Articles and Excerpts

  1. Moral Standard versus Non-Moral Standard

    Violation of said standards also does not pose any threat to human well-being. Finally, as a way of distinguishing moral standards from non-moral ones, if a moral standard says "Do not harm innocent people" or "Don't steal", a non-moral standard says "Don't text while driving" or "Don't talk while the mouth is full".

  2. Difference between Moral and Non-Moral Standards

    Finally, as a way of distinguishing moral standards from non-moral ones, if a moral standard says "Do not harm innocent people" or "Don't steal", a non-moral standard says "Don't text while driving" or "Don't talk while the mouth is full". Quoted from: Jefjust24.(2018).Moral versus Non-moral Standards. Retrieved from ...

  3. Distinguishing Between Moral & Nonmoral Claims

    Immoral can be defined as something that does not conform to standards of morality. Nonmoral can be defined as something that does not possess characteristics of or fall into the realm of morals and ethics. For example, telling a lie can be considered immoral. And "it is wrong to lie" can be considered a moral claim.

  4. GEC-Ethics

    Ethics is a study of the morality of human acts and moral agents: what makes an act obligatory and what makes a person accountable. "Moral" is the adjective describing a human act as either right or wrong, or qualifying a person, personality, or character as either ethically good or bad. Moral Standards or Moral Frameworks and Non-Moral Standards.

  5. The Psychology of Morality: A Review and Analysis of Empirical Studies

    With this procedure, we found 1,278 papers published from 1940 through 2017 that report research addressing morality. Notwithstanding the enormous research interest visible in empirical publications on morality, a comprehensive overview of this literature is lacking. ... Shared moral standards go beyond other behavioral norms in that they are ...

  6. What is the difference between a moral and a nonmoral issue?

    Brian W. Bearden <[email protected]>. It appears you put a lot of thought into your paper; however, it seems that the true difference between a moral and nonmoral issue is just a matter of opinion because everyone views things differently as a result of their own beliefs. Jamie Meadows <[email protected]>.

  7. Moral Standards and Non Moral Standards (Difference and Characteristics)

    Non-moral standards, on the other hand, are the rules that are unrelated to moral or ethical considerations. Either these standards are not necessarily linked to morality or by nature lack ethical sense. Moral standards are also referred to as moral values and moral principles. On the other hand, usual examples of non-moral standards include ...

  8. Norms, Values and Human Conditions: An Introduction, 2019

    The first essay by S. Swaminathan engages in a philosophically fertile debate between cognitivist and non-cognitivist interpretations of the bindingness of law. The cognitivist picture of morality suggests that normativity or bindingness of law is a function of the objective moral standards.

  9. Moral Standards vs. Non Moral Standards

    The document discusses distinguishing between moral and non-moral standards. Moral standards promote human welfare and well-being by prescribing rights and obligations. They deal with issues that can seriously harm people, are not dependent on laws or authorities, take precedence over self-interest, and are associated with emotions like guilt. Non-moral standards are matters of taste or ...

  10. Week 3 Moral and Non Moral Standards

    1. The document discusses the difference between moral and non-moral standards. Moral standards involve rules about right and wrong behavior that can seriously impact human well-being, while non-moral standards are unrelated to morality. 2. Moral standards should be preferred over other values, are not determined by authority, can be universally applied, and are based on impartial ...

  11. Topic 2- Moral versus Non-Moral Standards

    Topic 2: Moral versus Non-Moral Standards. Nominal Duration: 1.5 hours. Learning Outcomes: Upon completion of this topic, the student must be able to: 1. differentiate moral from non-moral standards; 2. cite the metaphors for moral standards; and 3. explain the characteristics of moral standards. Introduction.

  12. The Importance of Moral Construal: Moral versus Non-Moral ...

    Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments. We present three experiments showing that evaluations are also susceptible to the influence of moral ...

  13. Moral Standards vs non Moral Standards

    The following six (6) characteristics of moral standards further differentiate them from non-moral standards: a. Moral standards involve serious wrongs or significant benefits. Moral standards deal with matters which can seriously impact, that is, injure or benefit human beings. This is not the case with many non-moral standards.

  14. What are the differences between moral standards and non-moral

    Moral standards and non-moral standards differ in their normativity and functions. Moral standards, such as those related to ethical conduct, are demanding in nature and impose obligations on individuals. They provide a framework for evaluating actions as right or wrong, and guide individuals in making moral decisions. On the other hand, non-moral standards, such as rationality, are ...

  15. PDF REFLECTION AND MORALITY

    focus on what reasons there may be to be moral, what acting morally entails, or in what sense, if any, moral judgments count as true or false. All of these are important issues. But often the taken-for-granted deserves the greatest scrutiny. That we should be able at all to view the world impersonally, recognizing the independent and equal ...

  16. Frontlearners

    Today, we will talk about the first part of our discussion about the basic concepts of ethics. In this module, we will talk about the difference between moral and non-moral standards. Part 1: Difference between Moral and Non-moral.

  17. MORAL STANDARDS VS. NONMORAL STANDARDS SUMMARY (PHILO-notes, 2017

    NONMORAL STANDARDS SUMMARY (PHILO-notes, 2017) - Free download as PDF File (.pdf), Text File (.txt) or read online for free. An overall summary of MORAL STANDARDS VS. NONMORAL STANDARDS SUMMARY from (PHILO-notes, 2017) and other references.

  18. ETHICS chapter 1: Moral and Non-moral standards Flashcards

    moral standards. the norms about the kinds of actions believed to be morally right and wrong, as well as the values placed on what we believe to be morally good and morally bad. Non-moral. actions or events where moral categories cannot be applied like table manners, classroom procedures and routines etc. Immoral.

  19. Reflection Paper On Moral Standards

    A standard is defined as a "model or rule to be followed". In short, moral standard means "the principles of right and wrong that should be followed". Religion is one of the sources of principles. The characteristics of moral standards: first, they deal with matters that can seriously injure or benefit human beings; second, they ...

  20. Non-Moral Standards versus Moral Standards

    Moral standards may also include the values of other individuals and carry an overriding, or hegemonic, character. When a moral standard imposes an obligation on an individual, that obligation may conflict with the non-moral standards of other individuals or with the individual's own self-interest.

  21. Philo 101 1PRELIM Lesson 2 Moral and Non-Moral Standards

    Philo 101 1PRELIM Lesson 2 Moral and Non-Moral Standards - Free download as Word Doc (.doc / .docx), PDF File (.pdf), Text File (.txt) or read online for free. This document distinguishes between moral and non-moral standards. Moral standards are norms that determine right and wrong actions based on frameworks like ethics. Examples include "stealing is wrong" and "killing is wrong."

  22. How Are Moral Standards Formed?

    There are some moral standards that many of us share in our conduct in society. These moral standards are influenced by a variety of factors such as the moral principles we accept as part of our upbringing, values passed on to us through heritage and legacy, the religious values that we have imbibed from childhood, the values that were ...