
NIH Clinical Research Trials and You: Guiding Principles for Ethical Research

Pursuing Potential Research Participants Protections


“When people are invited to participate in research, there is a strong belief that it should be their choice based on their understanding of what the study is about, and what the risks and benefits of the study are,” said Dr. Christine Grady, chief of the NIH Clinical Center Department of Bioethics, to Clinical Center Radio in a podcast.

Clinical research advances the understanding of science and promotes human health. However, it is important to remember the individuals who volunteer to participate in research. There are precautions researchers can take – in the planning, implementation and follow-up of studies – to protect these participants in research. Ethical guidelines are established for clinical research to protect patient volunteers and to preserve the integrity of the science.

NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research:

  • Social and clinical value
  • Scientific validity
  • Fair subject selection
  • Favorable risk-benefit ratio
  • Independent review
  • Informed consent
  • Respect for potential and enrolled subjects

Social and clinical value

Every research study is designed to answer a specific question. The answer should be important enough to justify asking people to accept some risk or inconvenience for others. In other words, answers to the research question should contribute to scientific understanding of health or improve our ways of preventing, treating, or caring for people with a given disease to justify exposing participants to the risk and burden of research.

Scientific validity

A study should be designed in a way that will get an understandable answer to the important research question. This includes considering whether the question asked is answerable, whether the research methods are valid and feasible, and whether the study is designed with accepted principles, clear methods, and reliable practices. Invalid research is unethical because it is a waste of resources and exposes people to risk for no purpose.

Fair subject selection

The primary basis for recruiting participants should be the scientific goals of the study — not vulnerability, privilege, or other unrelated factors. Participants who accept the risks of research should be in a position to enjoy its benefits. Specific groups of participants (for example, women or children) should not be excluded from research opportunities without a good scientific reason or a particular susceptibility to risk.

Favorable risk-benefit ratio

Uncertainty about the degree of risks and benefits associated with a clinical research study is inherent. Research risks may be trivial or serious, transient or long-term. Risks can be physical, psychological, economic, or social. Everything should be done to minimize the risks and inconvenience to research participants, to maximize the potential benefits, and to determine that the potential benefits are proportionate to, or outweigh, the risks.

Independent review

To minimize potential conflicts of interest and make sure a study is ethically acceptable before it starts, an independent review panel should review the proposal and ask important questions, including: Are those conducting the trial sufficiently free of bias? Is the study doing all it can to protect research participants? Has the trial been ethically designed and is the risk–benefit ratio favorable? The panel also monitors a study while it is ongoing.

Informed consent

Potential participants should make their own decision about whether they want to participate or continue participating in research. This is done through a process of informed consent in which individuals (1) are accurately informed of the purpose, methods, risks, benefits, and alternatives to the research, (2) understand this information and how it relates to their own clinical situation or interests, and (3) make a voluntary decision about whether to participate.

Respect for potential and enrolled participants

Individuals should be treated with respect from the time they are approached for possible participation — even if they refuse enrollment in a study — throughout their participation and after their participation ends. This includes:

  • respecting their privacy and keeping their private information confidential
  • respecting their right to change their mind, to decide that the research does not match their interests, and to withdraw without a penalty
  • informing them of new information that might emerge in the course of research, which might change their assessment of the risks and benefits of participating
  • monitoring their welfare and, if they experience adverse reactions, unexpected effects, or changes in clinical status, ensuring appropriate treatment and, when necessary, removal from the study
  • informing them about what was learned from the research

More information on these seven guiding principles and on bioethics in general is available from the NIH Clinical Center Department of Bioethics.


Understanding Research Ethics


As a researcher, whatever your career stage, you need to understand and practice good research ethics. Moral and ethical principles are requisite in research to ensure that no deception or harm comes to participants, the scientific community, or society. Failure to follow such principles constitutes research misconduct, in which case the researcher faces repercussions ranging from withdrawal of an article from publication to potential job loss. This chapter describes the various types of research misconduct that you should be aware of, including data fabrication and falsification, plagiarism, research bias, breaches of data integrity, and researcher and funder conflicts of interest. A sound comprehension of research ethics will take you a long way in your career.


Author information

Sarah Cuschieri, Department of Anatomy, Faculty of Medicine and Surgery, University of Malta, Msida, Malta


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cuschieri, S. (2022). Understanding Research Ethics. In: A Roadmap to Successful Scientific Publishing. Springer, Cham. https://doi.org/10.1007/978-3-030-99295-8_2

Published: 22 April 2022 | Print ISBN: 978-3-030-99294-1 | Online ISBN: 978-3-030-99295-8



Chapter 2: Principles of Research

2.1 Basic Concepts

Before we address where research questions in psychology come from—and what makes them more or less interesting—it is important to understand the kinds of questions that researchers in psychology typically ask. This requires a quick introduction to several basic concepts, many of which we will return to in more detail later in the book.

Research questions in psychology are about variables. A variable is a quantity or quality that varies across people or situations. For example, the height of the students in a psychology class is a variable because it varies from student to student. The sex of the students is also a variable as long as there are both male and female students in the class. A quantitative variable is a quantity, such as height, that is typically measured by assigning a number to each individual. Other examples of quantitative variables include people’s level of talkativeness, how depressed they are, and the number of siblings they have. A categorical variable is a quality, such as sex, and is typically measured by assigning a category label to each individual. Other examples include people’s nationality, their occupation, and whether they are receiving psychotherapy.

“Lots of Candy Could Lead to Violence”

Although researchers in psychology know that correlation does not imply causation, many journalists do not. Many headlines suggest that a causal relationship has been demonstrated, when a careful reading of the articles shows that it has not because of the directionality and third-variable problems.

One article is about a study showing that children who ate candy every day were more likely than other children to be arrested for a violent offense later in life. But could candy really “lead to” violence, as the headline suggests? What alternative explanations can you think of for this statistical relationship? How could the headline be rewritten so that it is not misleading?

As we will see later in the book, there are various ways that researchers address the directionality and third-variable problems. The most effective, however, is to conduct an experiment. An experiment is a study in which the researcher manipulates the independent variable. For example, instead of simply measuring how much people exercise, a researcher could bring people into a laboratory and randomly assign half of them to run on a treadmill for 15 minutes and the rest to sit on a couch for 15 minutes. Although this seems like a minor addition to the research design, it is extremely important. Now if the exercisers end up in more positive moods than those who did not exercise, it cannot be because their moods affected how much they exercised (because it was the researcher who determined how much they exercised). Likewise, it cannot be because some third variable (e.g., physical health) affected both how much they exercised and what mood they were in (because, again, it was the researcher who determined how much they exercised). Thus experiments eliminate the directionality and third-variable problems and allow researchers to draw firm conclusions about causal relationships.
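
To make the logic of random assignment concrete, here is a minimal sketch in Python; the participant pool, group sizes, and mood ratings are invented for illustration and are not data from any actual study.

    import random
    import statistics

    # Hypothetical pool of 20 participants, identified by ID number.
    participants = list(range(20))
    random.shuffle(participants)

    # Random assignment: half run on a treadmill, half sit on a couch.
    treadmill_group = participants[:10]
    couch_group = participants[10:]

    # Invented mood ratings (1-10 scale) collected after the 15-minute session.
    mood_after = {p: round(random.uniform(6, 9), 1) for p in treadmill_group}
    mood_after.update({p: round(random.uniform(4, 7), 1) for p in couch_group})

    # Because assignment was random, a difference in group means cannot be explained by
    # pre-existing differences in mood, physical health, or exercise habits.
    print("Mean mood, treadmill:", round(statistics.mean(mood_after[p] for p in treadmill_group), 2))
    print("Mean mood, couch:    ", round(statistics.mean(mood_after[p] for p in couch_group), 2))

In a real study the researcher would also run an appropriate significance test on the two groups, but the key point is that the manipulation, not the participants' pre-existing characteristics, determines who exercises.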

2.2 Generating Good Research Questions

Good research must begin with a good research question. Yet coming up with good research questions is something that novice researchers often find difficult and stressful. One reason is that this is a creative process that can appear mysterious—even magical—with experienced researchers seeming to pull interesting research questions out of thin air. However, psychological research on creativity has shown that it is neither as mysterious nor as magical as it appears. It is largely the product of ordinary thinking strategies and persistence (Weisberg, 1993). This section covers some fairly simple strategies for finding general research ideas, turning those ideas into empirically testable research questions, and finally evaluating those questions in terms of how interesting they are and how feasible they would be to answer.

Finding Inspiration

Research questions often begin as more general research ideas—usually focusing on some behaviour or psychological characteristic: talkativeness, memory for touches, depression, bungee jumping, and so on. Before looking at how to turn such ideas into empirically testable research questions, it is worth looking at where such ideas come from in the first place. Three of the most common sources of inspiration are informal observations, practical problems, and previous research.

Informal observations include direct observations of our own and others’ behaviour as well as secondhand observations from nonscientific sources such as newspapers, books, and so on. For example, you might notice that you always seem to be in the slowest moving line at the grocery store. Could it be that most people think the same thing? Or you might read in the local newspaper about people donating money and food to a local family whose house has burned down and begin to wonder about who makes such donations and why. Some of the most famous research in psychology has been inspired by informal observations. Stanley Milgram’s famous research on obedience, for example, was inspired in part by journalistic reports of the trials of accused Nazi war criminals—many of whom claimed that they were only obeying orders. This led him to wonder about the extent to which ordinary people will commit immoral acts simply because they are ordered to do so by an authority figure (Milgram, 1963).

Practical problems can also inspire research ideas, leading directly to applied research in such domains as law, health, education, and sports. Can human figure drawings help children remember details about being physically or sexually abused? How effective is psychotherapy for depression compared to drug therapy? To what extent do cell phones impair people’s driving ability? How can we teach children to read more efficiently? What is the best mental preparation for running a marathon?

Probably the most common inspiration for new research ideas, however, is previous research. Recall that science is a kind of large-scale collaboration in which many different researchers read and evaluate each other’s work and conduct new studies to build on it. Of course, experienced researchers are familiar with previous research in their area of expertise and probably have a long list of ideas. This suggests that novice researchers can find inspiration by consulting with a more experienced researcher (e.g., students can consult a faculty member). But they can also find inspiration by picking up a copy of almost any professional journal and reading the titles and abstracts. In one typical issue of Psychological Science, for example, you can find articles on the perception of shapes, anti-Semitism, police lineups, the meaning of death, second-language learning, people who seek negative emotional experiences, and many other topics. If you can narrow your interests down to a particular topic (e.g., memory) or domain (e.g., health care), you can also look through more specific journals, such as Memory & Cognition or Health Psychology.

Generating Empirically Testable Research Questions

Once you have a research idea, you need to use it to generate one or more empirically testable research questions, that is, questions expressed in terms of a single variable or relationship between variables. One way to do this is to look closely at the discussion section in a recent research article on the topic. This is the last major section of the article, in which the researchers summarize their results, interpret them in the context of past research, and suggest directions for future research. These suggestions often take the form of specific research questions, which you can then try to answer with additional research. This can be a good strategy because it is likely that the suggested questions have already been identified as interesting and important by experienced researchers.

But you may also want to generate your own research questions. How can you do this? First, if you have a particular behaviour or psychological characteristic in mind, you can simply conceptualize it as a variable and ask how frequent or intense it is. How many words on average do people speak per day? How accurate are children’s memories of being touched? What percentage of people have sought professional help for depression? If the question has never been studied scientifically—which is something that you will learn in your literature review—then it might be interesting and worth pursuing.

If scientific research has already answered the question of how frequent or intense the behaviour or characteristic is, then you should consider turning it into a question about a statistical relationship between that behaviour or characteristic and some other variable. One way to do this is to ask yourself the following series of more general questions and write down all the answers you can think of.

  • What are some possible causes of the behaviour or characteristic?
  • What are some possible effects of the behaviour or characteristic?
  • What types of people might exhibit more or less of the behaviour or characteristic?
  • What types of situations might elicit more or less of the behaviour or characteristic?

In general, each answer you write down can be conceptualized as a second variable, suggesting a question about a statistical relationship. If you were interested in talkativeness, for example, it might occur to you that a possible cause of this psychological characteristic is family size. Is there a statistical relationship between family size and talkativeness? Or it might occur to you that people seem to be more talkative in same-sex groups than mixed-sex groups. Is there a difference in the average level of talkativeness of people in same-sex groups and people in mixed-sex groups? This approach should allow you to generate many different empirically testable questions about almost any behaviour or psychological characteristic.

If through this process you generate a question that has never been studied scientifically—which again is something that you will learn in your literature review—then it might be interesting and worth pursuing. But what if you find that it has been studied scientifically? Although novice researchers often want to give up and move on to a new question at this point, this is not necessarily a good strategy. For one thing, the fact that the question has been studied scientifically and the research published suggests that it is of interest to the scientific community. For another, the question can almost certainly be refined so that its answer will still contribute something new to the research literature. Again, asking yourself a series of more general questions about the statistical relationship is a good strategy.

  • Are there other ways to operationally define the variables?
  • Are there types of people for whom the statistical relationship might be stronger or weaker?
  • Are there situations in which the statistical relationship might be stronger or weaker—including situations with practical importance?

For example, research has shown that women and men speak about the same number of words per day—but this was when talkativeness was measured in terms of the number of words spoken per day among college students in the United States and Mexico. We can still ask whether other ways of measuring talkativeness—perhaps the number of different people spoken to each day—produce the same result. Or we can ask whether studying elderly people or people from other cultures produces the same result. Again, this approach should help you generate many different research questions about almost any statistical relationship.

2.3 Evaluating Research Questions

Researchers usually generate many more research questions than they ever attempt to answer. This means they must have some way of evaluating the research questions they generate so that they can choose which ones to pursue. In this section, we consider two criteria for evaluating research questions: the interestingness of the question and the feasibility of answering it.

Interestingness

How often do people tie their shoes? Do people feel pain when you punch them in the jaw? Are women more likely to wear makeup than men? Do people prefer vanilla or chocolate ice cream? Although it would be a fairly simple matter to design a study and collect data to answer these questions, you probably would not want to because they are not interesting. We are not talking here about whether a research question is interesting to us personally but whether it is interesting to people more generally and, especially, to the scientific community. But what makes a research question interesting in this sense? Here we look at three factors that affect the interestingness of a research question: the answer is in doubt, the answer fills a gap in the research literature, and the answer has important practical implications.

First, a research question is interesting to the extent that its answer is in doubt. Obviously, questions that have been answered by scientific research are no longer interesting as the subject of new empirical research. But the fact that a question has not been answered by scientific research does not necessarily make it interesting. There has to be some reasonable chance that the answer to the question will be something that we did not already know. But how can you assess this before actually collecting data? One approach is to try to think of reasons to expect different answers to the question—especially ones that seem to conflict with common sense. If you can think of reasons to expect at least two different answers, then the question might be interesting. If you can think of reasons to expect only one answer, then it probably is not. The question of whether women are more talkative than men is interesting because there are reasons to expect both answers. The existence of the stereotype itself suggests the answer could be yes, but the fact that women’s and men’s verbal abilities are fairly similar suggests the answer could be no. The question of whether people feel pain when you punch them in the jaw is not interesting because there is absolutely no reason to think that the answer could be anything other than a resounding yes.

A second important factor to consider when deciding if a research question is interesting is whether answering it will fill a gap in the research literature. Again, this means in part that the question has not already been answered by scientific research. But it also means that the question is in some sense a natural one for people who are familiar with the research literature. For example, the question of whether human figure drawings can help children recall touch information would be likely to occur to anyone who was familiar with research on the unreliability of eyewitness memory (especially in children) and the ineffectiveness of some alternative interviewing techniques.

A final factor to consider when deciding whether a research question is interesting is whether its answer has important practical implications. Again, the question of whether human figure drawings help children recall information about being touched has important implications for how children are interviewed in physical and sexual abuse cases. The question of whether cell phone use impairs driving is interesting because it is relevant to the personal safety of everyone who travels by car and to the debate over whether cell phone use should be restricted by law.

Feasibility

A second important criterion for evaluating research questions is the feasibility of successfully answering them. There are many factors that affect feasibility, including time, money, equipment and materials, technical knowledge and skill, and access to research participants. Clearly, researchers need to take these factors into account so that they do not waste time and effort pursuing research that they cannot complete successfully.

Looking through a sample of professional journals in psychology will reveal many studies that are complicated and difficult to carry out. These include longitudinal designs in which participants are tracked over many years, neuroimaging studies in which participants’ brain activity is measured while they carry out various mental tasks, and complex non-experimental studies involving several variables and complicated statistical analyses. Keep in mind, though, that such research tends to be carried out by teams of highly trained researchers whose work is often supported in part by government and private grants. Keep in mind also that research does not have to be complicated or difficult to produce interesting and important results. Looking through a sample of professional journals will also reveal studies that are relatively simple and easy to carry out—perhaps involving a convenience sample of college students and a paper-and-pencil task.

A final point here is that it is generally good practice to use methods that have already been used successfully by other researchers. For example, if you want to manipulate people’s moods to make some of them happy, it would be a good idea to use one of the many approaches that have been used successfully by other researchers (e.g., paying them a compliment). This is good not only for the sake of feasibility—the approach is “tried and true”—but also because it provides greater continuity with previous research. This makes it easier to compare your results with those of other researchers and to understand the implications of their research for yours, and vice versa.

Key Takeaways

  • Research ideas can come from a variety of sources, including informal observations, practical problems, and previous research.
  • Research questions expressed in terms of variables and relationships between variables can be suggested by other researchers or generated by asking a series of more general questions about the behaviour or psychological characteristic of interest.
  • It is important to evaluate how interesting a research question is before designing a study and collecting data to answer it. Factors that affect interestingness are the extent to which the answer is in doubt, whether it fills a gap in the research literature, and whether it has important practical implications.
  • It is also important to evaluate how feasible a research question will be to answer. Factors that affect feasibility include time, money, technical knowledge and skill, and access to special equipment and research participants.

References from Chapter 2

Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371–378.

Stanovich, K. E. (2010). How to think straight about psychology (9th ed.). Boston, MA: Allyn & Bacon.

Weisberg, R. W. (1993). Creativity: Beyond the myth of genius. New York, NY: Freeman.

Research Methods in Psychology & Neuroscience Copyright © by Dalhousie University Introduction to Psychology and Neuroscience Team. All Rights Reserved.



Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make.

First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:

  • Qualitative vs. quantitative: Will your data take the form of words or numbers?
  • Primary vs. secondary: Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental: Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data.

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analyzing data
  • Examples of data analysis methods
  • Other interesting articles
  • Frequently asked questions about research methods

Methods for collecting data

Data is the information that you collect for the purposes of answering your research question. The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data.

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing, collect quantitative data.


You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.


Descriptive vs. experimental data

In descriptive research, you collect data about your study subject without intervening. The validity of your research will depend on your sampling method.

In experimental research, you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design.

To conduct an experiment, you need to be able to vary your independent variable, precisely measure your dependent variable, and control for confounding variables. If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.



Research methods for collecting data

Research method | Primary or secondary? | Qualitative or quantitative? | When to use
Experiment | Primary | Quantitative | To test cause-and-effect relationships.
Survey | Primary | Quantitative | To understand general characteristics of a population.
Interview/focus group | Primary | Qualitative | To gain more in-depth understanding of a topic.
Observation | Primary | Either | To understand how something occurs in its natural setting.
Literature review | Secondary | Either | To situate your research in an existing body of work, or to evaluate trends within a research topic.
Case study | Either | Either | To gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.

Methods for analyzing data

Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
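
As a small illustration of the quantitative route just described, the sketch below counts the frequencies of responses to a single survey question; the question and the responses are invented for the example.

    from collections import Counter

    # Invented answers to the survey question "How do you usually commute?"
    responses = ["bus", "car", "bike", "car", "walk", "car", "bus", "bike", "car", "walk"]

    # Quantitative analysis: study the frequencies of responses.
    frequencies = Counter(responses)
    for answer, count in frequencies.most_common():
        print(f"{answer}: {count} ({count / len(responses):.0%})")

    # A qualitative analysis of the same survey would instead examine the meaning of
    # open-ended answers, for example by grouping them into themes.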

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews, literature reviews, case studies, ethnographies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods.

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment.
  • Using probability sampling methods.

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.
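
As an example of the kind of quantitative analysis described here, the sketch below computes averages and a correlation for two invented variables; it assumes SciPy is installed, and the variable names and values are placeholders rather than real data.

    from statistics import mean
    from scipy.stats import pearsonr  # assumes SciPy is available

    # Invented measurements for ten participants.
    hours_of_sleep = [5, 6, 6, 7, 7, 7, 8, 8, 9, 9]
    mood_score     = [4, 5, 6, 6, 7, 8, 7, 9, 8, 9]

    # Averages (descriptive statistics).
    print("Mean sleep:", mean(hours_of_sleep), "Mean mood:", mean(mood_score))

    # Correlation between the two variables, with a p-value for the null
    # hypothesis of no linear association.
    r, p = pearsonr(hours_of_sleep, mood_score)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")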

Research methods for analyzing data

Research method | Qualitative or quantitative? | When to use
Statistical analysis | Quantitative | To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
Meta-analysis | Quantitative | To statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
Thematic analysis | Qualitative | To analyze data collected from interviews, focus groups, or textual sources. To understand general themes in the data and how they are communicated.
Content analysis | Either | To analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Frequently asked questions about research methods

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
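
For instance, here is a minimal sketch in Python of drawing a simple random sample and using it to estimate a population characteristic; the population values are invented, and in practice you would sample from a real sampling frame such as a student register.

    import random
    import statistics

    # Invented population: opinion scores (1-5) for 10,000 students.
    random.seed(42)
    population = [random.randint(1, 5) for _ in range(10_000)]

    # Draw a simple random sample of 100 students.
    sample = random.sample(population, k=100)

    # The sample mean serves as an estimate of the population mean.
    print("Sample mean:    ", statistics.mean(sample))
    print("Population mean:", statistics.mean(population))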

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts and meanings, use qualitative methods.
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.



Types of Research – Explained with Examples


  • By DiscoverPhDs
  • October 2, 2020


Types of Research

Research is about using established methods to investigate a problem or question in detail with the aim of generating new knowledge about it.

It is a vital tool for scientific advancement because it allows researchers to prove or refute hypotheses based on clearly defined parameters, environments and assumptions. Due to this, it enables us to confidently contribute to knowledge as it allows research to be verified and replicated.

Knowing the types of research and what each of them focuses on will allow you to better plan your project, utilise the most appropriate methodologies and techniques, and better communicate your findings to other researchers and supervisors.

Classification of Types of Research

There are various types of research that are classified according to their objective, depth of study, analysed data, time required to study the phenomenon and other factors. It’s important to note that a research project will not be limited to one type of research, but will likely use several.

According to its Purpose

Theoretical Research

Theoretical research, also referred to as pure or basic research, focuses on generating knowledge, regardless of its practical application. Here, data collection is used to generate new general concepts for a better understanding of a particular field or to answer a theoretical research question.

Results of this kind are usually oriented towards the formulation of theories and are usually based on documentary analysis, the development of mathematical formulas and the reflection of high-level researchers.

Applied Research

Here, the goal is to find strategies that can be used to address a specific research problem. Applied research draws on theory to generate practical scientific knowledge, and its use is very common in STEM fields such as engineering, computer science and medicine.

This type of research is subdivided into two types:

  • Technological applied research : looks towards improving efficiency in a particular productive sector through the improvement of processes or machinery related to said productive processes.
  • Scientific applied research : has predictive purposes. Through this type of research design, we can measure certain variables to predict behaviours useful to the goods and services sector, such as consumption patterns and viability of commercial projects.


According to its Depth of Scope

Exploratory Research

Exploratory research is used for the preliminary investigation of a subject that is not yet well understood or sufficiently researched. It serves to establish a frame of reference and a hypothesis from which an in-depth study can be developed that will enable conclusive results to be generated.

Because exploratory research is based on the study of little-studied phenomena, it relies less on theory and more on the collection of data to identify patterns that explain these phenomena.

Descriptive Research

The primary objective of descriptive research is to define the characteristics of a particular phenomenon without necessarily investigating the causes that produce it.

In this type of research, the researcher must take particular care not to intervene in the observed object or phenomenon, as its behaviour may change if an external factor is involved.

Explanatory Research

Explanatory research is the most common type of research method and is responsible for establishing cause-and-effect relationships that allow generalisations to be extended to similar realities. It is closely related to descriptive research, although it provides additional information about the observed object and its interactions with the environment.

Correlational Research

The purpose of this type of scientific research is to identify the relationship between two or more variables. A correlational study aims to determine how much the other elements of the observed system change when one variable changes.

According to the Type of Data Used

Qualitative Research

Qualitative research is often used in the social sciences to collect, compare and interpret information. It has a linguistic-semiotic basis and is applied through techniques such as discourse analysis, interviews, surveys, records and participant observation.

In order to use statistical methods to validate their results, the observations collected must be evaluated numerically. Qualitative research, however, tends to be subjective, since not all data can be fully controlled. Therefore, this type of research design is better suited to extracting meaning from an event or phenomenon (the ‘why’) than its cause (the ‘how’).

Quantitative Research

Quantitative research delves into a phenomenon through quantitative data collection, using mathematical, statistical and computer-aided tools to measure it. This allows generalised conclusions to be projected over time.


According to the Degree of Manipulation of Variables

Experimental Research

Experimental research involves designing or replicating a phenomenon whose variables are manipulated under strictly controlled conditions in order to identify or discover their effect on a dependent variable or object. The phenomenon to be studied is measured through study and control groups, and according to the guidelines of the scientific method.

Non-Experimental Research

Also known as an observational study, it focuses on the analysis of a phenomenon in its natural context. As such, the researcher does not intervene directly, but limits their involvement to measuring the variables required for the study. Due to its observational nature, it is often used in descriptive research.

Quasi-Experimental Research

It controls only some variables of the phenomenon under investigation and is therefore not entirely experimental. In this case, the study and control groups cannot be randomly selected, but are chosen from existing groups or populations. This is to ensure the collected data is relevant and that the knowledge, perspectives and opinions of the population can be incorporated into the study.

According to the Type of Inference

Deductive Investigation

In this type of research, reality is explained by general laws that point to certain conclusions; the conclusions are expected to be contained in the premise of the research problem and are considered correct if the premise is valid and the deductive method is applied correctly.

Inductive Research

In this type of research, knowledge is generated from an observation to achieve a generalisation. It is based on the collection of specific data to develop new theories.

Hypothetical-Deductive Investigation

Hypothetical-deductive investigation is based on observing reality to formulate a hypothesis, then using deduction to obtain a conclusion, and finally verifying or rejecting that conclusion through experience.


According to the Time in Which it is Carried Out

Longitudinal Study (also referred to as Diachronic Research)

It is the monitoring of the same event, individual or group over a defined period of time. It aims to track changes in a number of variables and see how they evolve over time. It is often used in medical, psychological and social areas.

Cross-Sectional Study (also referred to as Synchronous Research)

Cross-sectional research design is used to observe phenomena, an individual or a group of research subjects at a given time.

According to the Sources of Information

Primary Research

This fundamental research type is defined by the fact that the data is collected directly from the source, that is, it consists of primary, first-hand information.

Secondary Research

Unlike primary research, secondary research is developed with information from secondary sources, which are generally based on scientific literature and other documents compiled by another researcher.


According to How the Data is Obtained

Documentary (Cabinet) Research

Documentary research is based on a systematic review of existing, secondary sources of information on a particular subject. This type of scientific research is commonly used when undertaking literature reviews or producing a case study.

Field Research

Field research involves the direct collection of information at the location where the observed phenomenon occurs.

From Laboratory

Laboratory research is carried out in a controlled environment in order to isolate a dependent variable and establish its relationship with other variables through scientific methods.

Mixed-Method: Documentary, Field and/or Laboratory

Mixed research methodologies combine results from both secondary (documentary) sources and primary sources through field or laboratory research.



What is Research? Definition, Types, Methods, and Examples

Academic research is a methodical way of exploring new ideas or understanding things we already know. It involves gathering and studying information to answer questions or test ideas and requires careful thinking and persistence to reach meaningful conclusions. Let’s try to understand what research is.   

Table of Contents

  • Why is research important?
  • What is the purpose of research?
  • What are the characteristics of research?
  • Types of research
  • Types of research methods
  • Basic steps involved in the research process
  • How to ensure research accuracy?

Why is research important?    

Whether it’s doing experiments, analyzing data, or studying old documents, research helps us learn more about the world. Without it, we rely on guesswork and hearsay, often leading to mistakes and misconceptions. By using systematic methods, research helps us see things clearly, free from biases. (1)   

What is the purpose of research?  

In the real world, academic research is also a key driver of innovation. It brings many benefits, such as creating valuable opportunities and fostering partnerships between academia and industry. By turning research into products and services, science makes meaningful improvements to people’s lives and boosts the economy. (2)(3)  

What are the characteristics of research?    

The research process collects accurate information systematically and applies logic to analyze it and draw insights. The collected data is checked thoroughly to ensure accuracy, and research often generates new questions by building on existing data.

Accuracy is key in research, which requires precise data collection and analysis. In scientific research, laboratories ensure accuracy by carefully calibrating instruments and controlling experiments. Every step is checked to maintain integrity, from instruments to final results. Accuracy gives reliable insights, which in turn help advance knowledge.   

Types of research    

The different forms of research serve distinct purposes in expanding knowledge and understanding:    

  • Exploratory research ventures into uncharted territories, exploring new questions or problem areas without aiming for conclusive answers. For instance, a study may delve into unexplored market segments to better understand consumer behaviour patterns.   
  • Descriptive research delves into current issues by collecting and analyzing data to describe the behaviour of a sample population. For instance, a survey may investigate millennials’ spending habits to gain insights into their purchasing behaviours.   
  • Explanatory research, also known as causal research, seeks to understand the impact of specific changes in existing procedures. An example might be a study examining how changes in drug dosage over time improve patients’ health.
  • Correlational research examines connections between two sets of data to uncover meaningful relationships. For instance, a study may analyze the relationship between advertising spending and sales revenue.   
  • Theoretical research deepens existing knowledge without attempting to solve specific problems. For example, a study may explore theoretical frameworks to understand the underlying principles of human behaviour.   
  • Applied research focuses on real-world issues and aims to provide practical solutions. An example could be a study investigating the effectiveness of a new teaching method in improving student performance in schools.  (4)

Types of research methods

  • Qualitative Method: Qualitative research gathers non-numerical data through interactions with participants. Methods include one-to-one interviews, focus groups, ethnographic studies, text analysis, and case studies. For example, a researcher interviews cancer patients to understand how different treatments impact their lives emotionally.    
  • Quantitative Method: Quantitative methods deal with numbers and measurable data to understand relationships between variables. They use systematic approaches to investigate events and aim to explain or predict outcomes. For example, researchers might study how exercise affects heart health by measuring variables like heart rate and blood pressure in a large group before and after an exercise program. (5)
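
To make the quantitative example above concrete, here is a minimal sketch (in Python) of how such before-and-after measurements could be compared with a paired t-test. The participant numbers are invented purely for illustration.

```python
# A minimal sketch of a paired comparison for hypothetical before/after data.
# Assumes SciPy is installed; the numbers below are invented for illustration.
from scipy import stats

resting_hr_before = [78, 82, 75, 90, 85, 72, 80, 88]   # hypothetical beats per minute
resting_hr_after  = [74, 79, 74, 84, 80, 70, 78, 83]   # same participants after the programme

t_stat, p_value = stats.ttest_rel(resting_hr_before, resting_hr_after)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```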

Basic steps involved in the research process    

Here are the basic steps to help you understand the research process:   

  • Choose your topic: Decide the specific subject or area that you want to study and investigate. This decision is the foundation of your research journey.   
  • Find information: Look for information related to your research topic. You can search in journals, books, online, or ask experts for help.   
  • Assess your sources: Make sure the information you find is reliable and trustworthy. Check the author’s credentials and the publication date.   
  • Take notes: Write down important information from your sources that you can use in your research.   
  • Write your paper: Use your notes to write your research paper. Broadly, start with an introduction, then write the body of your paper, and finish with a conclusion.   
  • Cite your sources: Give credit to the sources you used by including citations in your paper.   
  • Proofread: Check your paper thoroughly for any errors in spelling, grammar, or punctuation before you submit it. (6)

How to ensure research accuracy?  

Ensuring accuracy in research is a mix of several essential steps:    

  • Clarify goals: Start by defining clear objectives for your research. Identify your research question, hypothesis, and variables of interest. This clarity will help guide your data collection and analysis methods, ensuring that your research stays focused and purposeful.   
  • Use reliable data: Select trustworthy sources for your information, whether they are primary data collected by you or secondary data obtained from other sources. For example, if you’re studying climate change, use data from reputable scientific organizations with transparent methodologies.   
  • Validate data: Check that your data meet the standards of your research project by looking for errors, outliers, and inconsistencies at different stages, such as during data collection, entry, cleaning, or analysis (a minimal sketch of such checks follows this list).
  • Document processes: Documenting your data collection and analysis processes is essential for transparency and reproducibility. Record details such as data collection methods, cleaning procedures, and analysis techniques used. This documentation not only helps you keep track of your research but also enables others to understand and replicate your work.   
  • Review results: Finally, review and verify your research findings to confirm their accuracy and reliability. Double-check your analyses, cross-reference your data, and seek feedback from peers or supervisors. (7) 
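
As a small illustration of the validation step above, the sketch below shows a few basic checks one might run on a tabular dataset with pandas. The file name, column names, and acceptable ranges are hypothetical; real checks should reflect your own codebook.

```python
# A minimal sketch of basic data validation checks with pandas.
# The file name, column names, and acceptable ranges are hypothetical.
import pandas as pd

df = pd.read_csv("survey_responses.csv")          # hypothetical data file

missing_per_column = df.isna().sum()              # count missing values per column
duplicate_rows = df.duplicated().sum()            # count exact duplicate records
out_of_range_age = df[(df["age"] < 18) | (df["age"] > 99)]  # flag implausible ages

print(missing_per_column)
print(f"{duplicate_rows} duplicate rows")
print(f"{len(out_of_range_age)} rows with out-of-range ages")
```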

Research is crucial for better understanding our world and for social and economic growth. By following ethical guidelines and ensuring accuracy, researchers play a critical role in driving this progress, whether through exploring new topics or deepening existing knowledge.   

References:  

  1. Why is Research Important – Introductory Psychology – Washington State University
  2. The Role Of Scientific Research In Driving Business Innovation – Forbes
  3. Innovation – Royal Society
  4. Types of Research – Definition & Methods – Bachelor Print
  5. What Is Qualitative vs. Quantitative Study? – National University
  6. Basic Steps in the Research Process – North Hennepin Community College
  7. Best Practices for Ensuring Data Accuracy in Research – LinkedIn



A tutorial on methodological studies: the what, when, how and why

Lawrence Mbuagbaw

1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada

2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada

3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Daeria O. Lawson

Livia Puljak

4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia

David B. Allison

5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA

Lehana Thabane

6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada

7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada

8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada

Associated Data

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig. 1.
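
For readers who want to reproduce this kind of count, the sketch below shows one possible way to tally PubMed records per year through the public NCBI E-utilities esearch endpoint. The exact query string, field tags, and year range are illustrative assumptions, not the search strategy used for Fig. 1.

```python
# A rough sketch of counting PubMed records per year for a title/abstract keyword,
# using the public NCBI E-utilities esearch endpoint. The query string and years
# are illustrative; in practice the search strategy should be documented exactly.
import requests
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
term = '"methodological review"[Title/Abstract]'

for year in range(2010, 2020):
    params = {
        "db": "pubmed",
        "term": term,
        "mindate": str(year),
        "maxdate": str(year),
        "datetype": "pdat",   # restrict by publication date
        "retmax": 0,          # we only need the count, not the record IDs
    }
    xml = requests.get(ESEARCH, params=params, timeout=30).text
    count = ET.fromstring(xml).findtext("Count")
    print(year, count)
```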

Fig. 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research as a potentially useful resource for further reading on these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling, for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables of randomized trials published in high-impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
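
As an illustration of these sampling options, the sketch below draws a simple random sample and a stratified sample from a hypothetical spreadsheet of eligible reports; the file and column names are assumptions made for the example.

```python
# A minimal sketch of drawing a simple random sample and a stratified sample
# from a screening spreadsheet of eligible reports. File and column names are hypothetical.
import pandas as pd

reports = pd.read_csv("eligible_reports.csv")     # one row per eligible research report

# Simple random sample of 100 reports (fixed seed so the draw is reproducible)
random_sample = reports.sample(n=100, random_state=2024)

# Stratified sample: e.g. 50 Cochrane and 50 non-Cochrane reviews
stratified_sample = (
    reports.groupby("review_type", group_keys=False)
           .apply(lambda g: g.sample(n=50, random_state=2024))
)
print(len(random_sample), len(stratified_sample))
```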

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help to avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These include selection bias, poor comparability of groups, and errors in the ascertainment of exposures or outcomes. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
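
As a rough illustration of a confidence-interval based justification, the sketch below uses the standard formula for estimating a single proportion with a chosen margin of error. This is a generic textbook calculation, not necessarily the exact approach taken by El Dib et al.; the anticipated proportion and margin are assumed values.

```python
# A minimal sketch of a confidence-interval based sample size calculation for
# estimating a proportion (e.g. the proportion of trials reporting an item).
# Generic formula, not a reproduction of any cited study; p and d are assumed.
from math import ceil
from scipy.stats import norm

p = 0.50              # anticipated proportion (0.5 is the most conservative choice)
d = 0.05              # desired half-width (margin of error) of the 95% CI
z = norm.ppf(0.975)   # ~1.96 for a 95% confidence level

n = ceil(z**2 * p * (1 - p) / d**2)
print(f"Approximately {n} research reports are needed")  # ~385
```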

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimating equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [ 45 ].
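
The sketch below illustrates one way such a model could be specified with the statsmodels GEE implementation, using an exchangeable correlation structure for articles clustered within journals. The data file and variable names (adequate, year, funded, journal) are hypothetical.

```python
# A minimal sketch of accounting for clustering of articles within journals
# using generalized estimating equations (GEE) in statsmodels.
# The data frame and variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

articles = pd.read_csv("extraction_sheet.csv")    # one row per article

model = smf.gee(
    "adequate ~ year + funded",        # binary outcome: adequate reporting (1/0)
    groups="journal",                  # cluster variable
    data=articles,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```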

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. That said, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

  • Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
  • Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
  • Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others have found no such association [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were reported better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]
  • Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 – 67 ].
  • Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
  • Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
  • Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
  • Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
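
As a minimal illustration of statistical adjustment, the sketch below fits a logistic regression of a reporting outcome on funding source while adjusting for journal endorsement of a guideline. The variables are hypothetical and the model is only a sketch of the general idea, not a reproduction of any cited analysis.

```python
# A minimal sketch of statistical adjustment for a potential confounder:
# logistic regression of complete reporting on funding source, adjusted for
# whether the journal endorses the relevant guideline. Variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

articles = pd.read_csv("extraction_sheet.csv")    # hypothetical extraction sheet

adjusted = smf.logit(
    "complete_reporting ~ industry_funded + journal_endorses_guideline",
    data=articles,
).fit()
print(adjusted.summary())
```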

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target sample either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine (n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM (n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM (n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance on what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

  • 1. What is the aim?

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

  • 2. What is the design?

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
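
As a minimal sketch of such a hypothesis test, the example below compares the proportions of “positive” reviews in two groups with a two-proportion z-test; the counts are invented for illustration and do not come from the cited study.

```python
# A minimal sketch of testing whether two groups of reviews report positive
# findings in equal proportions, using a two-proportion z-test.
# The counts below are invented purely for illustration.
from statsmodels.stats.proportion import proportions_ztest

positive = [42, 120]      # reviews with positive conclusions: [Cochrane, non-Cochrane]
totals   = [100, 200]     # total reviews examined in each group

z_stat, p_value = proportions_ztest(count=positive, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```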

  • 3. What is the sampling strategy?

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies (n = 103) [ 30 ].

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.

  • 4. What is the unit of analysis?

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Fig. 2. A proposed framework for methodological studies.

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials
EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe
GRADE: Grading of Recommendations, Assessment, Development and Evaluations
PICOT: Participants, Intervention, Comparison, Outcome, Timeframe
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
SWAR: Studies Within a Review
SWAT: Studies Within a Trial

Authors’ contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Funding

This work did not receive any dedicated funding.


Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.


Research Methods In Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements that predict the results of a study and that can be verified or disproved by investigation.

There are four types of hypotheses :
  • Null Hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative Hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the psychologist would accept the alternative hypothesis and reject the null. The opposite applies if no difference is found.

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which their findings can be applied to the larger population of which their sample was a part.

  • Volunteer sample: participants put themselves forward, for example through newspaper adverts, noticeboards or online.
  • Opportunity sampling: also known as convenience sampling, uses people who are available and willing to take part at the time the study is carried out.
  • Random sampling: every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling: a system is used to select participants, such as picking every Nth person from a list of all possible participants, where N = the number of people in the research population / the number of people needed for the sample (see the sketch after this list).
  • Stratified sampling: you identify the subgroups in the population and select participants in proportion to their occurrence.
  • Snowball sampling: researchers find a few participants and then ask them to recruit further participants, and so on.
  • Quota sampling: researchers are told to ensure the sample fits certain quotas; for example, they might be told to find 90 participants, with 30 of them being unemployed.
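
Here is a minimal sketch of the systematic sampling rule described above, with a hypothetical population list and sample size; a random starting point within the first interval is commonly used.

```python
# A minimal sketch of systematic sampling: picking every Nth person from a
# numbered list of the research population. The population list is hypothetical.
import random

population = [f"person_{i}" for i in range(1, 1001)]   # hypothetical population of 1,000
sample_size = 50

interval = len(population) // sample_size              # N = 1000 / 50 = 20
start = random.randrange(interval)                     # random start within the first interval
systematic_sample = population[start::interval][:sample_size]
print(f"every {interval}th person, starting at index {start}: {len(systematic_sample)} selected")
```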

Experiments always have an independent and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables which are not the independent variable but could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in the way they think is expected.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design (within groups): each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.
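
As a small illustration of counterbalancing, the sketch below alternates the order of two conditions (A and B) across a hypothetical list of participants so that each order is used equally often.

```python
# A minimal sketch of counterbalancing in a repeated measures design:
# half of the participants complete condition A first, the other half B first.
participants = [f"P{i}" for i in range(1, 21)]          # 20 hypothetical participants

orders = {}
for index, participant in enumerate(participants):
    orders[participant] = ["A", "B"] if index % 2 == 0 else ["B", "A"]

ab_first = sum(1 for order in orders.values() if order == ["A", "B"])
print(f"{ab_first} participants do A first, {len(participants) - ab_first} do B first")
```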

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

In a laboratory experiment, the researcher decides where the experiment will take place, at what time, with which participants, and in what circumstances, using a standardized procedure.

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as from the person concerned and also from their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology, and among the best known are those carried out by Sigmund Freud. He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

[Figure: scatter plots illustrating positive, negative, and zero correlations]

  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score called a correlation coefficient. This is a value between -1 and +1, and the closer it is to +1 or -1, the stronger the relationship between the variables. The coefficient can be positive (e.g., +0.63) or negative (e.g., -0.63).
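As a minimal sketch of how such a coefficient is computed in practice, the example below uses SciPy's Spearman's rho; the revision-hours and exam-score values are invented purely for illustration.

```python
from scipy.stats import spearmanr

# Invented data: hours spent revising (predictor) and exam score (outcome).
hours_revised = [2, 5, 1, 8, 4, 7, 3, 6]
exam_score = [45, 60, 40, 85, 58, 72, 50, 65]

rho, p_value = spearmanr(hours_revised, exam_score)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")

# rho near +1 suggests a strong positive correlation, near -1 a strong
# negative correlation, and near 0 little or no correlation.
```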

[Figure: scatter plots illustrating strong, weak, and perfect positive and negative correlations, and no correlation]

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

Structured Interview

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

Unstructured Interview

In an unstructured interview, there are no set questions; the participant can raise whatever topics he or she feels are relevant, and follow-up questions are posed in response to the participant’s answers on the subject.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Other practical advantages of questionnaires are that they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher does not tell the participants they are being observed until after the study is complete. This method can raise ethical problems around deception and consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed; their behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as being reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is agreement between two or more observers (a minimal sketch of both measures follows this list).
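A minimal sketch of both reliability checks, assuming SciPy and scikit-learn are installed; the scores and category codes are invented, and Cohen's kappa is used here as one common index of inter-observer agreement (the text above does not prescribe a particular statistic).

```python
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Test-retest reliability: the same people scored on two occasions (invented data).
first_testing = [12, 18, 9, 22, 15, 17]
second_testing = [13, 17, 10, 21, 16, 18]
r, _ = pearsonr(first_testing, second_testing)
print(f"Test-retest correlation: r = {r:.2f}")

# Inter-observer reliability: two observers coding the same behavior (invented data).
observer_a = ["aggressive", "passive", "aggressive", "neutral", "passive"]
observer_b = ["aggressive", "passive", "neutral", "neutral", "passive"]
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa: {kappa:.2f}")
```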

Meta-Analysis

Meta-analysis is a statistical procedure used to combine and synthesize findings from multiple independent studies to estimate the average effect size for a particular research question.

Meta-analysis goes beyond traditional narrative reviews by using statistical methods to integrate the results of several studies, leading to a more objective appraisal of the evidence.

Relevant studies are identified by searching various databases, and decisions are then made about which studies to include or exclude (a sketch of the pooling step follows the list below).

  • Strengths: Increases the validity of the conclusions, as they are based on a wider range of studies and participants.
  • Weaknesses : Research designs in studies can vary, so they are not truly comparable.
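As a rough sketch of the pooling step, the example below computes a fixed-effect (inverse-variance weighted) average effect size; the effect sizes and standard errors are invented for illustration, and real meta-analyses often use random-effects models and dedicated software instead.

```python
import math

# Invented effect sizes (e.g., standardized mean differences) and standard errors.
studies = [
    {"effect": 0.41, "se": 0.12},
    {"effect": 0.25, "se": 0.20},
    {"effect": 0.55, "se": 0.15},
]

weights = [1 / s["se"] ** 2 for s in studies]  # weight each study by its precision
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect size: {pooled:.2f} (SE = {pooled_se:.2f})")
```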

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewers determines whether the article is accepted. The article may be: accepted as it is, accepted with revisions, sent back to the author to revise and resubmit, or rejected without the possibility of resubmission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers’ comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer reviews may be an ideal, whereas in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online that give everyone a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g., reaction time or number of mistakes. It represents how much, how long, or how many of something there are. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity: whether the test appears to measure what it is supposed to measure ‘on the face of it’. This is assessed by ‘eyeballing’ the measuring instrument or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimized so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we can accept our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we conventionally use p < 0.05 (as it strikes a balance between making a Type I and a Type II error), but p < 0.01 is used in research where errors could cause harm, such as trials of a new drug.

A Type I error is when the null hypothesis is rejected when it should have been accepted (more likely when a lenient significance level is used; an error of optimism).

A Type II error is when the null hypothesis is accepted when it should have been rejected (more likely when a stringent significance level is used; an error of pessimism).
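A minimal sketch of a significance decision at p < 0.05, assuming SciPy is available and using an independent-samples t-test on invented scores (the text above does not prescribe a particular test).

```python
from scipy.stats import ttest_ind

# Invented scores for a control and an experimental group.
control = [52, 48, 55, 50, 47, 53, 49, 51]
experimental = [58, 61, 55, 63, 57, 60, 59, 56]

t_stat, p_value = ttest_ind(experimental, control)
alpha = 0.05  # conventional significance level in psychology

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: retain the null hypothesis")
```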

Ethical Issues

  • Informed consent means participants are able to make an informed judgment about whether to take part. However, giving full information may cause them to guess the aims of the study and change their behavior.
  • To deal with this, researchers can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may undermine the purpose of the study, and it is not guaranteed that the participants would fully understand.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Withdrawal can bias the sample, as those who stay may be more obedient, and some may not withdraw because they have been given incentives or feel they would be spoiling the study. Researchers can offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full anonymity may not always be possible, as it is sometimes possible to work out who the participants were.


Everything you need to know about primary research

Last updated 28 February 2023. Reviewed by Miroslav Damyanov.

Researchers might search existing research to find the data they need—a technique known as secondary research.

Alternatively, they might prefer to seek out the data they need independently. This is known as primary research.


What is primary research?

During primary research, the researcher collects the information and data for a specific sample directly.

Types of primary research

Primary research can take several forms, depending on the type of information studied. Here are the four main types of primary research:

Surveys

Interviews

Observations

Focus groups

When conducting primary research, you can collect qualitative or quantitative data (or both).

Qualitative primary data collection provides a vast array of feedback or information about products and services. However, it may need to be interpreted before it is used to make important business decisions.

Quantitative primary data collection , on the other hand, involves looking at the numbers related to a specific product or service.

What types of projects can benefit from primary research?

Data obtained from primary research may be more accurate than if it were obtained from previous data samples.

Primary research may be used for:

  • Salary guides
  • Industry benchmarks
  • Government reports
  • Any information based on the current state of the target market, including up-to-date statistics
  • Scientific studies
  • Current market research
  • Crafting user-friendly products

Primary research can also be used to capture any type of sentiment that cannot be represented statistically, verbally, or through transcription. This may include tone of voice, for example. The researcher might want to find out if the subject sounds hesitant, uncertain, or unhappy.

Methods for conducting primary research

Your methods for conducting primary research may vary based on the information you’re looking for and how you prefer to interact with your target market.

Surveys

Surveys are a method to obtain direct information and feedback from the target audience. Depending on the target market’s specific needs, they can be conducted over the phone, online, or face-to-face.

Observation

In some cases, primary research will involve watching the behaviors of consumers or members of the target audience.

Interviews

Interviews involve communication with members of the target audience who can share direct information and feedback about products and services.

Test marketing

Explore customer response to a product or marketing campaign before a wider release.

Competitor visits

Competitor visits allow you to check out what competitors have to offer to get a better feel for how they interact with their target markets. This approach can help you better understand what the market might be looking for.

Focus groups

Focus groups involve bringing a group of people together to discuss a specific product or need within the industry. This approach could help provide essential insights into the needs of that market.

Usability testing

Usability testing allows you to evaluate a product’s usability when you launch a live prototype. You might recruit representative users to perform tasks while you observe, ask questions, and take notes on how they use your product.

When to conduct primary research

Primary research is needed when you want first-hand information about your product, service, or target market. There are several circumstances where primary research may be the best strategy for getting the information you need.

You might use it to:

  • Understand pricing information, including what price points customers are likely to purchase at.
  • Get insight into your sales process. For example, you might look at screenshots of a sales demo, listen to audio recordings of the sales process, or evaluate key details and descriptions.
  • Learn about problems your consumers might be having and how your business can solve them.
  • Gauge how a company feels about its competitors. For example, you might want to ask an e-tailer if they plan to offer free shipping to compete with Amazon, Walmart, and other major retailers.

How to get started with primary research

Step one: Define the problem you’re trying to answer. Clearly identify what you want to know and why it’s important.

Step two: Determine the best method for getting those answers. Do you need quantitative data , which can be measured in multiple-choice surveys? Or do you need more detailed qualitative data , which may require focus groups or interviews?

Step three: Select your target. Where will you conduct your primary research? You may already have a focus group available; for example, a social media group where people already gather to discuss your brand.

Step four: Compile your questions or define your method. Clearly set out what information you need and how you plan to gather it.

Step five: Research!

Advantages of primary research

Primary research offers a number of potential advantages. Most importantly, it offers you information that you can’t get elsewhere.

It provides you with direct information from consumers who are already members of your target market or using your products.

You are able to get feedback directly from your target audience, which can allow you to immediately improve products or services and provide better support to your target market.

Primary data is current. Secondary sources may contain outdated data.

Primary data is reliable. You will know what methods you used and how the data relates to your research because you collected it yourself.

Disadvantages of primary research

You might decide primary research isn’t the best option for your research project when you consider the disadvantages.

Primary research can be time-consuming. You will have to put in the time to collect data yourself, meaning the research may take longer to complete.

Primary research may be more expensive to conduct if it involves face-to-face interactions with your target audience, subscriptions for insight platforms, or participant remuneration.

The people you engage with for your research may feel disrupted by information-gathering methods, so you may not be able to use the same focus group every time you conduct that research.

It can be difficult to gather accurate information from a small group of people, especially if you deliberately select a focus group made up of existing customers. 

You may have a hard time accessing people who are not already members of your customer base.

Biased surveys can be a challenge. Researchers may, for example, inadvertently structure questions to encourage participants to respond in a particular way. Questions may also be too confusing or complex for participants to answer accurately.

Despite the researcher’s best efforts, participants don’t always take studies seriously. They may provide inaccurate or irrelevant answers to survey questions, significantly impacting any conclusions you reach. Therefore, researchers must take extra caution when examining results.

Conducting primary research can help you get a closer look at what is really going on with your target market and how they are using your product. That research can then inform your efforts to improve your services and products.

What is primary research, and why is it important?

Primary research is a research method that allows researchers to directly collect information for their use. It can provide more accurate insights into the target audience and market information companies really need.

What are primary research sources?

Primary research sources may include surveys, interviews, visits to competitors, or focus groups.

What is the best method of primary research?

The best method of primary research depends on the type of information you are gathering. If you need qualitative information, you may want to hold focus groups or interviews. On the other hand, if you need quantitative data, you may benefit from conducting surveys with your target audience.





  • University of California College of the Law, San Francisco

International Law Research Guide

General Principles of Law


Find Evidence of General Principles of Law

Definitions

"[S]hould neither treaty or custom prove adequate to resolve a contentious question, resort may be had to “general principles” as a subsidiary source. The general principles are commonly recognized as the norms existing in the municipal law of the majority of nations. When such a norm (i.e. the rule against judicial bias) has achieved the requisite degree of usage, it may thus be recognized as a subsidiary source of the substantive content of international law." - Oxford Reference


Scholarly Articles


Book Chapters


To find books in the UC Law SF library on a foreign law topic, try searching for “Law” and the country name as a subject heading, together with a keyword related to the legal issue.

Some books provide comparisons of law from different jurisdictions. This series may provide you with a collection of articles about the principles of law across multiple jurisdictions.

HeinOnline provides an excellent finding tool for foreign law articles.


To quickly find relevant primary and secondary sources on a specific legal topic in another country, use the Foreign Law Guide (Brill).

If the Foreign Law Guide provides the name of a law or legal resource, but does not provide a direct link, you can try searching for the publication in the UC Law SF catalog.

Primary Law

To quickly compare law on a particular legal issue across multiple countries, use Lexis.



Different Types of Love Activate the Brain Differently

by Denis Storey August 27, 2024 at 11:51 AM UTC


Clinical relevance: New research maps the neural mechanisms behind different forms of love, revealing that each type activates distinct areas of the brain.

  • The study used fMRI to analyze brain responses to love for romantic partners, children, friends, pets, strangers, and nature.
  • Interpersonal love primarily activates brain regions linked to social cognition and the brain’s reward system.
  • Love for nature and pets involves unique neural patterns, influenced by both biological and cultural factors.

Love might be a “many-splendored thing,” but it also appears to take different forms. New research from Aalto University – and published in the Oxford journal Cerebral Cortex –   plots a roadmap of the neural mechanisms that make up the various forms of love. That map reveals that the different types of love engage disparate – and distinct – areas of the brain.

The study shows that love remains integral to forging and sustaining connections with other people, parents, and even things.

The Finnish researchers hoped to explain the neural basis of love beyond the well-documented romantic and maternal types. The study also sought to bridge that gap by examining the brain’s response to six different objects of love.

“We now provide a more comprehensive picture of the brain activity associated with different types of love than previous research,” Pärttyli Rinne, the philosopher who organized the research project, explained in a press release . “The activation pattern of love is generated in social situations in the basal ganglia, the midline of the forehead, the precuneus, and the temporoparietal junction at the sides of the back of the head.”

Methodology of Love

Leveraging functional magnetic resonance imaging (fMRI), the team investigated brain activity related to love for romantic partners, children, friends, strangers, pets, and nature.

To stir those feelings, the researchers asked the study participants to listen to short stories written to target specific love types. The researchers then used fMRI to track the resulting brain activity. The findings suggest that the brain’s response relies heavily on the object of affection, with different forms of love triggering separate neural networks.

The researchers discovered that interpersonal love, such as that for romantic partners, children, and friends, primarily triggers brain regions related to social cognition. These areas include the temporoparietal junction and midline structures, which are more active during love for people.

Notably, pet owners showed more obvious activity in these regions when thinking about their pets compared to participants without pets, suggesting a deeper emotional connection between pet owners and their animals.

The researchers also found that love for romantic partners, children, and friends elicited stronger and more widespread activation in the brain’s reward system, which includes the striatum, ventral tegmental area, and orbitofrontal cortex.

On the other hand, love for strangers, pets, and nature triggered less of a response in these regions, reflecting the weaker affiliative bonds normally linked to these forms of love.

Incidentally, the study found that love for nature showed up in brain regions different from those affiliated with interpersonal love. Specifically, affection for nature engaged the fusiform gyrus, parahippocampal gyrus, and superior parietal lobes. This implies that it could be tied to aesthetic appreciation and a sense of connection to the environment.

Biology, Culture Each Play a Part

The researchers suggest that the diverse neural patterns involved stem from biological and cultural sources. The study supports the concept that love, while rooted in fundamental neurobiological mechanisms, can also be influenced by outside factors.

For example, the stronger brain activity the researchers witnessed in pet owners underscores the role of cultural factors in forming emotional bonds.

The study results open up potential new ways to think about how different types of love might change based on neurological conditions or mental health issues. And by charting the brain’s reactions to different forms of affection, the study sheds light on how it operates on both biological and cultural levels.


National Academies Press: OpenBook

Responsible Science: Ensuring the Integrity of the Research Process: Volume I (1992)

Chapter 2: Scientific Principles and Research Practices

Until the past decade, scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process. Among the very basic principles that guide scientists, as well as many other scholars, are those expressed as respect for the integrity of knowledge, collegiality, honesty, objectivity, and openness. These principles are at work in the fundamental elements of the scientific method, such as formulating a hypothesis, designing an experiment to test the hypothesis, and collecting and interpreting data. In addition, more particular principles characteristic of specific scientific disciplines influence the methods of observation; the acquisition, storage, management, and sharing of data; the communication of scientific knowledge and information; and the training of younger scientists. 1 How these principles are applied varies considerably among the several scientific disciplines, different research organizations, and individual investigators.

The basic and particular principles that guide scientific research practices exist primarily in an unwritten code of ethics. Although some have proposed that these principles should be written down and formalized, 2 the principles and traditions of science are, for the most part, conveyed to successive generations of scientists through example, discussion, and informal education. As was pointed out in an early Academy report on responsible conduct of research in the health sciences, “a variety of informal and formal practices and procedures currently exist in the academic research environment to assure and maintain the high quality of research conduct” (IOM, 1989a, p. 18).

Physicist Richard Feynman invoked the informal approach to communicating the basic principles of science in his 1974 commencement address at the California Institute of Technology (Feynman, 1985):

[There is an] idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. In summary, the idea is to try to give all the information to help others to judge the value of your contribution, not just the information that leads to judgment in one particular direction or another. (pp. 311-312)

Many scholars have noted the implicit nature and informal character of the processes that often guide scientific research practices and inference. 3 Research in well-established fields of scientific knowledge, guided by commonly accepted theoretical paradigms and experimental methods, involves few disagreements about what is recognized as sound scientific evidence. Even in a revolutionary scientific field like molecular biology, students and trainees have learned the basic principles governing judgments made in such standardized procedures as cloning a new gene and determining its sequence.

In evaluating practices that guide research endeavors, it is important to consider the individual character of scientific fields. Research fields that yield highly replicable results, such as ordinary organic chemical structures, are quite different from fields such as cellular immunology, which are in a much earlier stage of development and accumulate much erroneous or uninterpretable material before the pieces fit together coherently. When a research field is too new or too fragmented to support consensual paradigms or established methods, different scientific practices can emerge.

In broadest terms, scientists seek a systematic organization of knowledge about the universe and its parts. This knowledge is based on explanatory principles whose verifiable consequences can be tested by independent observers. Science encompasses a large body of evidence collected by repeated observations and experiments. Although its goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths. Science changes. It evolves. Verifiable facts always take precedence.

Scientists operate within a system designed for continuous testing, where corrections and new findings are announced in refereed scientific publications. The task of systematizing and extending the understanding of the universe is advanced by eliminating disproved ideas and by formulating new tests of others until one emerges as the most probable explanation for any given observed phenomenon. This is called the scientific method.

An idea that has not yet been sufficiently tested is called a hypothesis. Different hypotheses are sometimes advanced to explain the same factual evidence. Rigor in the testing of hypotheses is the heart of science. If no verifiable tests can be formulated, the idea is not fruitful as a hypothesis; such hypotheses fail to stimulate research and are unlikely to advance scientific knowledge.

A fruitful hypothesis may develop into a theory after substantial observational or experimental support has accumulated. When a hypothesis has survived repeated opportunities for disproof and when competing hypotheses have been eliminated as a result of failure to produce the predicted consequences, that hypothesis may become the accepted theory explaining the original facts.

Scientific theories are also predictive. They allow us to anticipate yet unknown phenomena and thus to focus research on more narrowly defined areas. If the results of testing agree with predictions from a theory, the theory is provisionally corroborated. If not, it is proved false and must be either abandoned or modified to account for the inconsistency.

Scientific theories, therefore, are accepted only provisionally. It is always possible that a theory that has withstood previous testing may eventually be disproved. But as theories survive more tests, they are regarded with higher levels of confidence.

In science, then, facts are determined by observation or measurement of natural or experimental phenomena. A hypothesis is a proposed explanation of those facts. A theory is a hypothesis that has gained wide acceptance because it has survived rigorous investigation of its predictions.

… science accommodates, indeed welcomes, new discoveries: its theories change and its activities broaden as new facts come to light or new potentials are recognized. Examples of events changing scientific thought are legion. Truly scientific understanding cannot be attained or even pursued effectively when explanations not derived from or tested by the scientific method are accepted.

SOURCE: National Academy of Sciences and National Research Council (1984), pp. 8-11.

A well-established discipline can also experience profound changes during periods of new conceptual insights. In these moments, when scientists must cope with shifting concepts, the matter of what counts as scientific evidence can be subject to dispute. Historian Jan Sapp has described the complex interplay between theory and observation that characterizes the operation of scientific judgment in the selection of research data during revolutionary periods of paradigmatic shift (Sapp, 1990, p. 113):

What “liberties” scientists are allowed in selecting positive data and omitting conflicting or “messy” data from their reports is not defined by any timeless method. It is a matter of negotiation. It is learned, acquired socially; scientists make judgments about what fellow scientists might expect in order to be convincing. What counts as good evidence may be more or less well-defined after a new discipline or specialty is formed; however, at revolutionary stages in science, when new theories and techniques are being put forward, when standards have yet to be negotiated, scientists are less certain as to what others may require of them to be deemed competent and convincing.

Explicit statements of the values and traditions that guide research practice have evolved through the disciplines and have been given in textbooks on scientific methodologies. 4 In the past few decades, many scientific and engineering societies representing individual disciplines have also adopted codes of ethics (see Volume II of this report for examples), 5 and more recently, a few research institutions have developed guidelines for the conduct of research (see Chapter 6 ).

But the responsibilities of the research community and research institutions in assuring individual compliance with scientific principles, traditions, and codes of ethics are not well defined. In recent years, the absence of formal statements by research institutions of the principles that should guide research conducted by their members has prompted criticism that scientists and their institutions lack a clearly identifiable means to ensure the integrity of the research process.

FACTORS AFFECTING THE DEVELOPMENT OF RESEARCH PRACTICES

In all of science, but with unequal emphasis in the several disciplines, inquiry proceeds based on observation and experimentation, the exercising of informed judgment, and the development of theory. Research practices are influenced by a variety of factors, including:

  • The general norms of science;
  • The nature of particular scientific disciplines and the traditions of organizing a specific body of scientific knowledge;
  • The example of individual scientists, particularly those who hold positions of authority or respect based on scientific achievements;
  • The policies and procedures of research institutions and funding agencies; and
  • Socially determined expectations.

The first three factors have been important in the evolution of modern science. The latter two have acquired more importance in recent times.

Norms of Science

As members of a professional group, scientists share a set of common values, aspirations, training, and work experiences. 6 Scientists are distinguished from other groups by their beliefs about the kinds of relationships that should exist among them, about the obligations incurred by members of their profession, and about their role in society. A set of general norms are imbedded in the methods and the disciplines of science that guide individual scientists in the organization and performance of their research efforts and that also provide a basis for nonscientists to understand and evaluate the performance of scientists.

But there is uncertainty about the extent to which individual scientists adhere to such norms. Most social scientists conclude that all behavior is influenced to some degree by norms that reflect socially or morally supported patterns of preference when alternative courses of action are possible. However, perfect conformity with any relevant set of norms is always lacking for a variety of reasons: the existence of competing norms, constraints, and obstacles in organizational or group settings, and personality factors. The strength of these influences, and the circumstances that may affect them, are not well understood.

In a classic statement of the importance of scientific norms, Robert Merton specified four norms as essential for the effective functioning of science: communism (by which Merton meant the communal sharing of ideas and findings), universalism, disinterestedness, and organized skepticism (Merton, 1973). Neither Merton nor other sociologists of science have provided solid empirical evidence for the degree of influence of these norms in a representative sample of scientists. In opposition to Merton, a British sociologist of science, Michael Mulkay, has argued that these norms are “ideological” covers for self-interested behavior that reflects status and politics (Mulkay, 1975). And the British physicist and sociologist of science John Ziman, in an article synthesizing critiques of Merton's formulation, has specified a set of structural factors in the bureaucratic and corporate research environment that impede the realization of that particular set of norms: the proprietary nature of research, the local importance and funding of research, the authoritarian role of the research manager, commissioned research, and the required expertise in understanding how to use modern instruments (Ziman, 1990).

It is clear that the specific influence of norms on the development of scientific research practices is simply not known and that further study of key determinants is required, both theoretically and empirically. Commonsense views, ideologies, and anecdotes will not support a conclusive appraisal.

Individual Scientific Disciplines

Science comprises individual disciplines that reflect historical developments and the organization of natural and social phenomena for study. Social scientists may have methods for recording research data that differ from the methods of biologists, and scientists who depend on complex instrumentation may have authorship practices different from those of scientists who work in small groups or carry out field studies. Even within a discipline, experimentalists engage in research practices that differ from the procedures followed by theorists.

Disciplines are the “building blocks of science,” and they “designate the theories, problems, procedures, and solutions that are prescribed, proscribed, permitted, and preferred” (Zuckerman, 1988a, p. 520). The disciplines have traditionally provided the vital connections between scientific knowledge and its social organization. Scientific societies and scientific journals, some of which have tens of thousands of members and readers, and the peer review processes used by journals and research sponsors are visible forms of the social organization of the disciplines.

The power of the disciplines to shape research practices and standards is derived from their ability to provide a common frame of reference in evaluating the significance of new discoveries and theories in science. It is the members of a discipline, for example, who determine what is “good biology” or “good physics” by examining the implications of new research results. The disciplines' abilities to influence research standards are affected by the subjective quality of peer review and the extent to which factors other than disciplinary quality may affect judgments about scientific achievements. Disciplinary departments rely primarily on informal social and professional controls to promote responsible behavior and to penalize deviant behavior. These controls, such as social ostracism, the denial of letters of support for future employment, and the withholding of research resources, can deter and penalize unprofessional behavior within research institutions. 7

Many scientific societies representing individual disciplines have adopted explicit standards in the form of codes of ethics or guidelines governing, for example, the editorial practices of their journals and other publications. 8 Many societies have also established procedures for enforcing their standards. In the past decade, the societies' codes of ethics—which historically have been exhortations to uphold high standards of professional behavior —have incorporated specific guidelines relevant to authorship practices, data management, training and mentoring, conflict of interest, reporting research findings, treatment of confidential or proprietary information, and addressing error or misconduct.

The Role of Individual Scientists and Research Teams

The methods by which individual scientists and students are socialized in the principles and traditions of science are poorly understood. The principles of science and the practices of the disciplines are transmitted by scientists in classroom settings and, perhaps more importantly, in research groups and teams. The social setting of the research group is a strong and valuable characteristic of American science and education. The dynamics of research groups can foster —or inhibit—innovation, creativity, education, and collaboration.

One author of a historical study of research groups in the chemical and biochemical sciences has observed that the laboratory director or group leader is the primary determinant of a group's practices (Fruton, 1990). Individuals in positions of authority are visible and are also influential in determining funding and other support for the career paths of their associates and students. Research directors and department chairs, by virtue of personal example, thus can reinforce, or weaken, the power of disciplinary standards and scientific norms to affect research practices.

To the extent that the behavior of senior scientists conforms with general expectations for appropriate scientific and disciplinary practice, the research system is coherent and mutually reinforcing. When the behavior of research directors or department chairs diverges from expectations for good practice, however, the expected norms of science become ambiguous, and their effects are thus weakened. Thus personal example and the perceived behavior of role models and leaders in the research community can be powerful stimuli in shaping the research practices of colleagues, associates, and students.

The role of individuals in influencing research practices can vary by research field, institution, or time. The standards and expectations for behavior exemplified by scientists who are highly regarded for their technical competence or creative insight may have greater influence than the standards of others. Individual and group behaviors may also be more influential in times of uncertainty and change in science, especially when new scientific theories, paradigms, or institutional relationships are being established.

Institutional Policies

Universities, independent institutes, and government and industrial research organizations create the environment in which research is done. As the recipients of federal funds and the institutional sponsors of research activities, administrative officers must comply with regulatory and legal requirements that accompany public support. They are required, for example, “to foster a research environment that discourages misconduct in all research and that deals forthrightly with possible misconduct” (DHHS, 1989a, p. 32451).

Academic institutions traditionally have relied on their faculty to ensure that appropriate scientific and disciplinary standards are maintained. A few universities and other research institutions have also adopted policies or guidelines to clarify the principles that their members are expected to observe in the conduct of scientific research. 9 In addition, as a result of several highly publicized incidents of misconduct in science and the subsequent enactment of governmental regulations, most major research institutions have now adopted policies and procedures for handling allegations of misconduct in science.

Institutional policies governing research practices can have a powerful effect on research practices if they are commensurate with the norms that apply to a wide spectrum of research investigators. In particular, the process of adopting and implementing strong institutional policies can sensitize the members of those institutions to the potential for ethical problems in their work. Institutional policies can establish explicit standards that institutional officers then have the power to enforce with sanctions and penalties.

Institutional policies are limited, however, in their ability to specify the details of every problematic situation, and they can weaken or displace individual professional judgment in such situations. Currently, academic institutions have very few formal policies and programs in specific areas such as authorship, communication and publication, and training and supervision.

Government Regulations and Policies

Government agencies have developed specific rules and procedures that directly affect research practices in areas such as laboratory safety, the treatment of human and animal research subjects, and the use of toxic or potentially hazardous substances in research.

But policies and procedures adopted by some government research agencies to address misconduct in science (see Chapter 5 ) represent a significant new regulatory development in the relationships between research institutions and government sponsors. The standards and criteria used to monitor institutional compliance with an increasing number of government regulations and policies affecting research practices have been a source of significant disagreement and tension within the research community.

In recent years, some government research agencies have also adopted policies and procedures for the treatment of research data and materials in their extramural research programs. For example, the National Science Foundation (NSF) has implemented a data-sharing policy through program management actions, including proposal review and award negotiations and conditions. The NSF policy acknowledges that grantee institutions will “keep principal rights to intellectual property conceived under NSF sponsorship” to encourage appropriate commercialization of the results of research (NSF, 1989b, p. 1). However, the NSF policy emphasizes “that retention of such rights does not reduce the responsibility of researchers and institutions to make results and supporting materials openly accessible” (p. 1).

In seeking to foster data sharing under federal grant awards, the government relies extensively on the scientific traditions of openness and sharing. Research agency officials have observed candidly that if the vast majority of scientists were not so committed to openness and dissemination, government policy might require more aggressive action. But the principles that have traditionally characterized scientific inquiry can be difficult to maintain. For example, NSF staff have commented, “Unless we can arrange real returns or incentives for the original investigator, either in financial support or in professional recognition, another researcher's request for sharing is likely to present itself as ‘hassle'—an unwelcome nuisance and diversion. Therefore, we should hardly be surprised if researchers display some reluctance to share in practice, however much they may declare and genuinely feel devotion to the ideal of open scientific communication ” (NSF, 1989a, p. 4).

Social Attitudes and Expectations

Research scientists are part of a larger human society that has recently experienced profound changes in attitudes about ethics, morality, and accountability in business, the professions, and government. These attitudes have included greater skepticism of the authority of experts and broader expectations about the need for visible mechanisms to assure proper research practices, especially in areas that affect the public welfare. Social attitudes are also having a more direct influence on research practices as science achieves a more prominent and public role in society. In particular, concern about waste, fraud, and abuse involving government funds has emerged as a factor that now directly influences the practices of the research community.

Varying historical and conceptual perspectives also can affect expectations about standards of research practice. For example, some journalists have criticized several prominent scientists, such as Mendel, Newton, and Millikan, because they “cut corners in order to make their theories prevail” (Broad and Wade, 1982, p. 35). The criticism suggests that all scientists at all times, in all phases of their work, should be bound by identical standards.

Yet historical studies of the social context in which scientific knowledge has been attained suggest that modern criticism of early scientific work often imposes contemporary standards of objectivity and empiricism that have in fact been developed in an evolutionary manner. 10 Holton has argued, for example, that in selecting data for publication, Millikan exercised creative insight in excluding unreliable data resulting from experimental error. But such practices, by today's standards, would not be acceptable without reporting the justification for omission of recorded data.

In the early stages of pioneering studies, particularly when fundamental hypotheses are subject to change, scientists must be free to use creative judgment in deciding which data are truly significant. In such moments, the standards of proof may be quite different from those that apply at stages when confirmation and consensus are sought from peers. Scientists must consistently guard against self-deception, however, particularly when theoretical prejudices tend to overwhelm the skepticism and objectivity basic to experimental practices.

In discussing “the theory-ladenness of observations,” Sapp (1990) observed the fundamental paradox that can exist in determining the “appropriateness” of data selection in certain experiments done in the past: scientists often craft their experiments so that the scientific problems and research subjects conform closely with the theory that they expect to verify or refute. Thus, in some cases, their observations may come closer to theoretical expectations than what might be statistically proper.

This source of bias may be acceptable when it is influenced by scientific insight and judgment. But political, financial, or other sources of bias can corrupt the process of data selection. In situations where both kinds of influence exist, it is particularly important for scientists to be forthcoming about possible sources of bias in the interpretation of research results. The coupling of science to other social purposes in fostering economic growth and commercial technology requires renewed vigilance to maintain acceptable standards for disclosure and control of financial or competitive conflicts of interest and bias in the research environment. The failure to distinguish between appropriate and inappropriate sources of bias in research practices can lead to erosion of public trust in the autonomy of the research enterprise.

RESEARCH PRACTICES

In reviewing modern research practices for a range of disciplines, and analyzing factors that could affect the integrity of the research process, the panel focused on the following four areas:

Data handling—acquisition, management, and storage;

Communication and publication;

Correction of errors; and

Research training and mentorship.

Commonly understood practices operate in each area to promote responsible research conduct; nevertheless, some questionable research practices also occur. Some research institutions, scientific societies, and journals have established policies to discourage questionable practices, but there is not yet a consensus on how to treat violations of these policies. 11 Furthermore, there is concern that some questionable practices may be encouraged or stimulated by other institutional factors. For example, promotion or appointment policies that stress quantity rather than the quality of publications as a measure of productivity could contribute to questionable practices.

Data Handling

Acquisition and management.

Scientific experiments and measurements are transformed into research data. The term “research data” applies to many different forms of scientific information, including raw numbers and field notes, machine tapes and notebooks, edited and categorized observations, interpretations and analyses, derived reagents and vectors, and tables, charts, slides, and photographs.

Research data are the basis for reporting discoveries and experimental results. Scientists traditionally describe the methods used for an experiment, along with appropriate calibrations, instrument types, the number of repeated measurements, and particular conditions that may have led to the omission of some data in the reported version. Standard procedures, innovations for particular purposes, and judgments concerning the data are also reported. The general standard of practice is to provide information that is sufficiently complete so that another scientist can repeat or extend the experiment.

When a scientist communicates a set of results and a related piece of theory or interpretation in any form (at a meeting, in a journal article, or in a book), it is assumed that the research has been conducted as reported. It is a violation of the most fundamental aspect of the scientific research process to set forth measurements that have not, in fact, been performed (fabrication) or to ignore or change relevant data that contradict the reported findings (falsification).

On occasion what is actually proper research practice may be confused with misconduct in science. Thus, for example, applying scientific judgment to refine data and to remove spurious results places special responsibility on the researcher to avoid misrepresentation of findings. Responsible practice requires that scientists disclose the basis for omitting or modifying data in their analyses of research results, especially when such omissions or modifications could alter the interpretation or significance of their work.
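
To make the reporting standard described above concrete, the following minimal sketch (in Python) shows one way that methods, calibrations, repeated measurements, and the justification for any omitted data points might be recorded together. The ExperimentRecord and Omission names, their fields, and the example values are hypothetical illustrations chosen for this sketch, not part of the report.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Omission:
        """One excluded measurement and the stated reason for excluding it."""
        value: float
        justification: str

    @dataclass
    class ExperimentRecord:
        """Hypothetical record pairing reported results with the details
        another scientist would need to repeat or extend the experiment."""
        method: str
        instrument: str
        calibration: str
        measurements: List[float]
        omissions: List[Omission] = field(default_factory=list)

        def reported_values(self) -> List[float]:
            # Values used in the analysis; omissions are disclosed, not silently dropped.
            return list(self.measurements)

    record = ExperimentRecord(
        method="oil-drop charge measurement (illustrative)",
        instrument="electrometer, serial no. X-01",
        calibration="checked against reference voltage before each session",
        measurements=[1.59, 1.61, 1.60, 1.62],
        omissions=[Omission(2.10, "air current disturbed the chamber; noted in logbook")],
    )
    print(len(record.reported_values()), "values reported,",
          len(record.omissions), "omission(s) disclosed")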

In the last decade, the methods by which research scientists handle, store, and provide access to research data have received increased scrutiny, owing to conflicts over ownership, such as those described by Nelkin (1984); advances in the methods and technologies that are used to collect, retain, and share data; and the costs of data storage. More specific concerns have involved the profitability associated with the patenting of science-based results in some fields and the need to verify independently the accuracy of research results used in public or private decision making. In resolving competing claims, the interests of individual scientists and research institutions may not always coincide: researchers may be willing to exchange scientific data of possible economic significance without regard for financial or institutional implications, whereas their institutions may wish to establish intellectual property rights and obligations prior to any disclosure.

The general norms of science emphasize the principle of openness. Scientists are generally expected to exchange research data as well as unique research materials that are essential to the replication or extension of reported findings. The 1985 report Sharing Research Data concluded that the general principle of data sharing is widely accepted, especially in the behavioral and social sciences (NRC, 1985). The report catalogued the benefits of data sharing, including maintaining the integrity of the research process by providing independent opportunities for verification, refutation, or refinement of original results and data; promoting new research and the development and testing of new theories; and encouraging appropriate use of empirical data in policy formulation and evaluation. The same report examined obstacles to data sharing, which include the criticism or competition that might be stimulated by data sharing; technical barriers that may impede the exchange of computer-readable data; lack of documentation of data sets; and the considerable costs of documentation, duplication, and transfer of data.

The exchange of research data and reagents is ideally governed by principles of collegiality and reciprocity: scientists often distribute reagents with the hope that the recipient will reciprocate in the future, and some give materials out freely with no stipulations attached. 12 Scientists who repeatedly or flagrantly deviate from the tradition of sharing become known to their peers and may suffer subtle forms of professional isolation. Such cases may be well known to senior research investigators, but they are not well documented.

Some scientists may share materials as part of a collaborative agreement in exchange for co-authorship on resulting publications. Some donors stipulate that the shared materials are not to be used for applications already being pursued by the donor's laboratory. Other stipulations include that the material not be passed on to third parties without prior authorization, that the material not be used for proprietary research, or that the donor receive prepublication copies of research publications derived from the material. In some instances, so-called materials transfer agreements are executed to specify the responsibilities of donor and recipient. As more academic research is being supported under proprietary agreements, researchers and institutions are experiencing the effects of these arrangements on research practices.

Governmental support for research studies may raise fundamental questions of ownership and rights of control, particularly when data are subsequently used in proprietary efforts, public policy decisions, or litigation. Some federal research agencies have adopted policies for data sharing to mitigate conflicts over issues of ownership and access (NIH, 1987; NSF, 1989b).

Many research investigators store primary data in the laboratories in which the data were initially derived, generally as electronic records or data sheets in laboratory notebooks. For most academic laboratories, local customary practice governs the storage (or discarding) of research data. Formal rules or guidelines concerning their disposition are rare.

Many laboratories customarily store primary data for a set period (often 3 to 5 years) after they are initially collected. Data that support publications are usually retained for a longer period than are those tangential to reported results. Some research laboratories serve as the proprietor of data and data books that are under the stewardship of the principal investigator. Others maintain that it is the responsibility of the individuals who collected the data to retain proprietorship, even if they leave the laboratory.

Concerns about misconduct in science have raised questions about the roles of research investigators and of institutions in maintaining and providing access to primary data. In some cases of alleged misconduct, the inability or unwillingness of an investigator to provide primary data or witnesses to support published reports sometimes has constituted a presumption that the experiments were not conducted as reported. 13 Furthermore, there is disagreement about the responsibilities of investigators to provide access to raw data, particularly when the reported results have been challenged by others. Many scientists believe that access should be restricted to peers and colleagues, usually following publication of research results, to reduce external demands on the time of the investigator. Others have suggested that raw data supporting research reports should be accessible to any critic or competitor, at any time, especially if the research is conducted with public funds. This topic, in particular, could benefit from further research and systematic discussion to clarify the rights and responsibilities of research investigators, institutions, and sponsors.

Institutional policies have been developed to guide data storage practices in some fields, often stimulated by desires to support the patenting of scientific results and to provide documentation for resolving disputes over patent claims. Laboratories concerned with patents usually have very strict rules concerning data storage and note keeping, often requiring that notes be recorded in an indelible form and be countersigned by an authorized person each day. A few universities have also considered the creation of central storage repositories for all primary data collected by their research investigators. Some government research institutions and industrial research centers maintain such repositories to safeguard the record of research developments for scientific, historical, proprietary, and national security interests.

In the academic environment, however, centralized research records raise complex problems of ownership, control, and access. Centralized data storage is costly in terms of money and space, and it presents logistical problems of cataloguing and retrieving data. There have been suggestions that some types of scientific data should be incorporated into centralized computerized data banks, a portion of which could be subject to periodic auditing or certification. 14 But much investigator-initiated research is not suitable for random data audits because of the exploratory nature of basic or discovery research. 15
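
The notion of subjecting a portion of a centralized data bank to periodic auditing can be sketched briefly. The fragment below is only a hypothetical illustration: the dataset identifiers and the audit fraction are invented, and the report does not prescribe any particular selection procedure.

    import random

    def select_for_audit(dataset_ids, fraction=0.05, seed=None):
        """Randomly choose a fraction of deposited data sets for periodic audit.
        Returns a sorted list so the selection itself can be documented."""
        rng = random.Random(seed)
        k = max(1, round(len(dataset_ids) * fraction))
        return sorted(rng.sample(list(dataset_ids), k))

    # Hypothetical identifiers for data sets held in a central repository.
    deposited = [f"DS-{n:04d}" for n in range(1, 201)]
    print(select_for_audit(deposited, fraction=0.05, seed=42))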

Some scientific journals now require that full data for research papers be deposited in a centralized data bank before final publication. Policies and practices differ, but in some fields support is growing for compulsory deposit to enhance researchers' access to supporting data.

Issues Related to Advances in Information Technology

Advances in electronic and other information technologies have raised new questions about the customs and practices that influence the storage, ownership, and exchange of electronic data and software. A number of special issues, not addressed by the panel, are associated with computer modeling, simulation, and other approaches that are becoming more prevalent in the research environment. Computer technology can enhance research collaboration; it can also create new impediments to data sharing resulting from increased costs, the need for specialized equipment, or liabilities or uncertainties about responsibilities for faulty data, software, or computer-generated models.

Advances in computer technology may assist in maintaining and preserving accurate records of research data. Such records could help resolve questions about the timing or accuracy of specific research findings, especially when a principal investigator is not available or is uncooperative in responding to such questions. In principle, properly managed information technologies, utilizing advances in nonerasable optical disk systems, might reinforce openness in scientific research and make primary data more transparent to collaborators and research managers. For example, the so-called WORM (write once, read many) systems provide a high-density digital storage medium that supplies an ineradicable audit trail and historical record for all entered information (Haas, 1991).
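
One way to approximate an ineradicable audit trail of the kind described above in ordinary software is an append-only log in which each entry is chained to the previous one by a cryptographic hash, so that any later alteration is detectable. The sketch below is a minimal illustration of that general idea; it is not a description of the optical disk systems cited above, and the entry format is invented for this example.

    import hashlib
    import json
    import time

    class AppendOnlyLog:
        """Toy append-only log: each entry carries the hash of the previous
        entry, so retroactive edits break the chain and can be detected."""
        def __init__(self):
            self.entries = []

        def append(self, record: dict) -> str:
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            body = json.dumps({"record": record, "prev": prev_hash,
                               "time": time.time()}, sort_keys=True)
            entry_hash = hashlib.sha256(body.encode()).hexdigest()
            self.entries.append({"body": body, "hash": entry_hash})
            return entry_hash

        def verify(self) -> bool:
            # Recompute every hash; any altered or reordered entry fails.
            prev = "0" * 64
            for e in self.entries:
                if json.loads(e["body"])["prev"] != prev:
                    return False
                if hashlib.sha256(e["body"].encode()).hexdigest() != e["hash"]:
                    return False
                prev = e["hash"]
            return True

    log = AppendOnlyLog()
    log.append({"experiment": "run 17", "observation": 0.42})
    log.append({"experiment": "run 18", "observation": 0.44})
    print(log.verify())  # True unless an earlier entry has been changed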

Advances in information technologies could thus provide an important benefit to research institutions that wish to emphasize greater access to and storage of primary research data. But the development of centralized information systems in the academic research environment raises difficult issues of ownership, control, and principle that reflect the decentralized character of university governance. Such systems are also a source of additional research expense, often borne by individual investigators. Moreover, if centralized systems are perceived by scientists as an inappropriate or ineffective form of management or oversight of individual research groups, they simply may not work in an academic environment.

Communication and Publication

Scientists communicate research results by a variety of formal and informal means. In earlier times, new findings and interpretations were communicated by letter, personal meeting, and publication. Today, computer networks and facsimile machines have supplemented letters and telephones in facilitating rapid exchange of results. Scientific meetings routinely include poster sessions and press conferences as well as formal presentations. Although research publications continue to document research findings, the appearance of electronic publications and other information technologies heralds change. In addition, incidents of plagiarism, the increasing number of authors per article in selected fields, and the methods by which publications are assessed in determining appointments and promotions have all increased concerns about the traditions and practices that have guided communication and publication.

Journal publication, traditionally an important means of sharing information and perspectives among scientists, is also a principal means of establishing a record of achievement in science. Evaluation of the accomplishments of individual scientists often involves not only the numbers of articles that have resulted from a selected research effort, but also the particular journals in which the articles have appeared. Journal submission dates are often important in establishing priority and intellectual property claims.

Authorship of original research reports is an important indicator of accomplishment, priority, and prestige within the scientific community. Questions of authorship in science are intimately connected with issues of credit and responsibility. Authorship practices are guided by disciplinary traditions, customary practices within research groups, and professional and journal standards and policies. 16 There is general acceptance of the principle that each named author has made a significant intellectual contribution to the paper, even though there remains substantial disagreement over the types of contributions that are judged to be significant.

A general rule is that an author must have participated sufficiently in the work to take responsibility for its content and vouch for its validity. Some journals have adopted more specific guidelines, suggesting that credit for authorship be contingent on substantial participation in one or more of the following categories: (1) conception and design of the experiment, (2) execution of the experiment and collection and storage of the supporting data, (3) analysis and interpretation of the primary data, and (4) preparation and revision of the manuscript. The extent of participation in these four activities required for authorship varies across journals, disciplines, and research groups. 17

“Honorary,” “gift,” or other forms of noncontributing authorship are problems with several dimensions. 18 Honorary authors reap an inflated list of publications incommensurate with their scientific contributions (Zen, 1988). Some scientists have requested or been given authorship as a form of recognition of their status or influence rather than their intellectual contribution. Some research leaders have a custom of including their own names in any paper issuing from their laboratory, although this practice is increasingly discouraged. Some students or junior staff encourage such “gift authorship” because they feel that the inclusion of prestigious names on their papers increases the chance of publication in well-known journals. In some cases, noncontributing authors have been listed without their consent, or even without their being told. In response to these practices, some journals now require all named authors to sign the letter that accompanies submission of the original article, to ensure that no author is named without consent.

“Specialized” authorship is another issue that has received increasing attention. In these cases, a co-author may claim responsibility for a specialized portion of the paper and may not even see or be able to defend the paper as a whole. 19 “Specialized” authorship may also result from demands that co-authorship be given as a condition of sharing a unique research reagent or selected data that do not constitute a major contribution—demands that many scientists believe are inappropriate. “Specialized” authorship may be appropriate in cross-disciplinary collaborations, in which each participant has made an important contribution that deserves recognition. However, the risks associated with the inabilities of co-authors to vouch for the integrity of an entire paper are great; scientists may unwittingly become associated with a discredited publication.

Another problem of lesser importance, except to the scientists involved, is the order of authors listed on a paper. The meaning of author order varies among and within disciplines. For example, in physics the ordering of authors is frequently alphabetical, whereas in the social sciences and other fields, the ordering reflects a descending order of contribution to the described research. Another practice, common in biology, is to list the senior author last.

Appropriate recognition for the contributions of junior investigators, postdoctoral fellows, and graduate students is sometimes a source of discontent and unease in the contemporary research environment. Junior researchers have raised concerns about treatment of their contributions when research papers are prepared and submitted, particularly if they are attempting to secure promotions or independent research funding or if they have left the original project. In some cases, well-meaning senior scientists may grant junior colleagues undeserved authorship or placement as a means of enhancing the junior colleague's reputation. In others, significant contributions may not receive appropriate recognition.

Authorship practices are further complicated by large-scale projects, especially those that involve specialized contributions. Mission teams for space probes, oceanographic expeditions, and projects in high-energy physics, for example, all involve large numbers of senior scientists who depend on the long-term functioning of complex equipment. Some questions about communication and publication that arise from large science projects such as the Superconducting Super Collider include: Who decides when an experiment is ready to be published? How is the spokesperson for the experiment determined? Who determines who can give talks on the experiment? How should credit for technical or hardware contributions be acknowledged?

Apart from plagiarism, problems of authorship and credit allocation usually do not involve misconduct in science. Although some forms of “gift authorship,” in which a designated author made no identifiable contribution to a paper, may be viewed as instances of falsification, authorship disputes more commonly involve unresolved differences of judgment and style. Many research groups have found that the best method of resolving authorship questions is to agree on a designation of authors at the outset of the project. The negotiation and decision process provides initial recognition of each member's effort, and it may prevent misunderstandings that can arise during the course of the project when individuals may be in transition to new efforts or may become preoccupied with other matters.

Plagiarism. Plagiarism is using the ideas or words of another person without giving appropriate credit. Plagiarism includes the unacknowledged use of text and ideas from published work, as well as the misuse of privileged information obtained through confidential review of research proposals and manuscripts.

As described in Honor in Science, plagiarism can take many forms: at one extreme is the exact replication of another's writing without appropriate attribution (Sigma Xi, 1986). At the other is the more subtle “borrowing” of ideas, terms, or paraphrases, as described by Martin et al., “so that the result is a mosaic of other people's ideas and words, the writer's sole contribution being the cement to hold the pieces together.” 20 The importance of recognition for one's intellectual abilities in science demands high standards of accuracy and diligence in ensuring appropriate recognition for the work of others.

The misuse of privileged information may be less clear-cut because it does not involve published work. But the general principle of giving credit for the accomplishments of others is the same. The use of ideas or information obtained from peer review is not acceptable because the reviewer is in a privileged position. Some organizations, such as the American Chemical Society, have adopted policies to address these concerns (ACS, 1986).

Additional Concerns. Other problems related to authorship include overspecialization, overemphasis on short-term projects, and the organization of research communication around the “least publishable unit.” In a research system that rewards quantity at the expense of quality and favors speed over attention to detail (the effects of “publish or perish”), scientists who wait until their research data are complete before releasing them for publication may be at a disadvantage. Some institutions, such as Harvard Medical School, have responded to these problems by limiting the number of publications reviewed for promotion. Others have placed greater emphasis on major contributions as the basis for evaluating research productivity.

As gatekeepers of scientific journals, editors are expected to use good judgment and fairness in selecting papers for publication. Although editors cannot be held responsible for the errors or inaccuracies of papers that may appear in their journals, editors have obligations to consider criticism and evidence that might contradict the claims of an author and to facilitate publication of critical letters, errata, or retractions. 21 Some institutions, including the National Library of Medicine and professional societies that represent editors of scientific journals, are exploring the development of standards relevant to these obligations (Bailar et al., 1990).

Should questions be raised about the integrity of a published work, the editor may request an author's institution to address the matter. Editors often request written assurances that research reported conforms to all appropriate guidelines involving human or animal subjects, materials of human origin, or recombinant DNA.

In theory, editors set standards of authorship for their journals. In practice, scientists in the specialty do. Editors may specify the terms of acknowledgment of contributors who fall short of authorship status, and make decisions regarding appropriate forms of disclosure of sources of bias or other potential conflicts of interest related to published articles. For example, the New England Journal of Medicine has established a category of prohibited contributions from authors engaged in for-profit ventures: the journal will not allow such persons to prepare review articles or editorial commentaries for publication. Editors can clarify and insist on the confidentiality of review and take appropriate actions against reviewers who violate it. Journals also may require or encourage their authors to deposit reagents and sequence and crystallographic data into appropriate databases or storage facilities. 22

Peer Review

Peer review is the process by which editors and journals seek to be advised by knowledgeable colleagues about the quality and suitability of a manuscript for publication in a journal. Peer review is also used by funding agencies to seek advice concerning the quality and promise of proposals for research support. The proliferation of research journals and the rewards associated with publication and with obtaining research grants have put substantial stress on the peer review system. Reviewers for journals or research agencies receive privileged information and must exert great care to avoid sharing such information with colleagues or allowing it to enter their own work prematurely.

Although the system of peer review is generally effective, it has been suggested that the quality of refereeing has declined, that self-interest has crept into the review process, and that some journal editors and reviewers exert inappropriate influence on the type of work they deem publishable. 23

Correction of Errors

At some level, all scientific reports, even those that mark profound advances, contain errors of fact or interpretation. In part, such errors reflect uncertainties intrinsic to the research process itself—a hypothesis is formulated, an experimental test is devised, and based on the interpretation of the results, the hypothesis is refined, revised, or discarded. Each step in this cycle is subject to error. For any given report, “correctness” is limited by the following:

The precision and accuracy of the measurements. These in turn depend on available technology, the use of proper statistical and analytical methods, and the skills of the investigator (a brief numerical sketch of this factor follows the list).

Generality of the experimental system and approach. Studies must often be carried out using “model systems.” In biology, for example, a given phenomenon is examined in only one or a few among millions of organismal species.

Experimental design—a product of the background and expertise of the investigator.

Interpretation and speculation regarding the significance of the findings—judgments that depend on expert knowledge, experience, and the insightfulness and boldness of the investigator.
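
With respect to the first factor listed above, measurement precision is commonly summarized with elementary statistics. The brief sketch below, using invented numbers, shows how repeated measurements yield a mean and a standard error that bound what a reported value can claim; it is an illustration only, not a procedure drawn from the report.

    import statistics

    # Hypothetical repeated measurements of the same quantity.
    measurements = [9.78, 9.82, 9.80, 9.79, 9.83, 9.81]

    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)       # sample standard deviation
    sem = sd / (len(measurements) ** 0.5)     # standard error of the mean

    print(f"mean = {mean:.3f}, s = {sd:.3f}, standard error = {sem:.3f}")
    # A report would state the mean together with its uncertainty,
    # e.g. 9.805 +/- 0.008, alongside instrument and calibration details.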

Viewed in this context, errors are an integral aspect of progress in attaining scientific knowledge. They are consequences of the fact that scientists seek fundamental truths about natural processes of vast complexity. In the best experimental systems, it is common that relatively few variables have been identified and that even fewer can be controlled experimentally. Even when important variables are accounted for, the interpretation of the experimental results may be incorrect and may lead to an erroneous conclusion. Such conclusions are sometimes overturned by the original investigator or by others when new insights from another study prompt a reexamination of older reported data. In addition, however, erroneous information can also reach the scientific literature as a consequence of misconduct in science.

What becomes of these errors or incorrect interpretations? Much has been made of the concept that science is “self-correcting”—that errors, whether honest or products of misconduct, will be exposed in future experiments because scientific truth is founded on the principle that results must be verifiable and reproducible. This implies that errors will generally not long confound the direction of thinking or experimentation in actively pursued areas of research. Clearly, published experiments are not routinely replicated precisely by independent investigators. However, each experiment is based on conclusions from prior studies; repeated failure of the experiment eventually calls into question those conclusions and leads to reevaluation of the measurements, generality, design, and interpretation of the earlier work.

Thus publication of a scientific report provides an opportunity for the community at large to critique and build on the substance of the report, and serves as one stage at which errors and misinterpretations can be detected and corrected. Each new finding is considered by the community in light of what is already known about the system investigated, and disagreements with established measurements and interpretations must be justified. For example, a particular interpretation of an electrical measurement of a material may implicitly predict the results of an optical experiment. If the reported optical results are in disagreement with the electrical interpretation, then the latter is unlikely to be correct—even though the measurements themselves were carefully and correctly performed. It is also possible, however, that the contradictory results are themselves incorrect, and this possibility will also be evaluated by the scientists working in the field. It is by this process of examination and reexamination that science advances.

The research endeavor can therefore be viewed as a two-tiered process: first, hypotheses are formulated, tested, and modified; second, results and conclusions are reevaluated in the course of additional study. In fact, the two tiers are interrelated, and the goals and traditions of science mandate major responsibilities in both areas for individual investigators. Importantly, the principle of self-correction does not diminish the responsibilities of the investigator in either area. The investigator has a fundamental responsibility to ensure that the reported results can be replicated in his or her laboratory. The scientific community in general adheres strongly to this principle, but practical constraints exist as a result of the availability of specialized instrumentation, research materials, and expert personnel. Other forces, such as competition, commercial interest, funding trends and availability, or pressure to publish may also erode the role of replication as a mechanism for fostering integrity in the research process. The panel is unaware of any quantitative studies of this issue.

The process of reevaluating prior findings is closely related to the formulation and testing of hypotheses. 24 Indeed, within an individual laboratory, the formulation/testing phase and the reevaluation phase are ideally ongoing interactive processes. In that setting, the precise replication of a prior result commonly serves as a crucial control in attempts to extend the original findings. It is not unusual that experimental flaws or errors of interpretation are revealed as the scope of an investigation deepens and broadens.

If new findings or significant questions emerge in the course of a reevaluation that affect the claims of a published report, the investigator is obliged to make public a correction of the erroneous result or to indicate the nature of the questions. Occasionally, this takes the form of a formal published retraction, especially in situations in which a central claim is found to be fundamentally incorrect or irreproducible. More commonly, a somewhat different version of the original experiment, or a revised interpretation of the original result, is published as part of a subsequent report that extends in other ways the initial work. Some concerns have been raised that such “revisions” can sometimes be so subtle and obscure as to be unrecognizable. Such behavior is, at best, a questionable research practice. Clearly, each scientist has a responsibility to foster an environment that encourages and demands rigorous evaluation and reevaluation of every key finding.

Much greater complexity is encountered when an investigator in one research group is unable to confirm the published findings of another. In such situations, precise replication of the original result is commonly not attempted because of the lack of identical reagents, differences in experimental protocols, diverse experimental goals, or differences in personnel. Under these circumstances, attempts to obtain the published result may simply be dropped if the central claim of the original study is not the major focus of the new study. Alternatively, the inability to obtain the original finding may be documented in a paper by the second investigator as part of a challenge to the original claim. In any case, such questions about a published finding usually provoke the initial investigator to attempt to reconfirm the original result, or to pursue additional studies that support and extend the original findings.

In accordance with established principles of science, scientists have the responsibility to replicate and reconfirm their results as a normal part of the research process. The cycles of theoretical and methodological formulation, testing, and reevaluation, both within and between laboratories, produce an ongoing process of revision and refinement that corrects errors and strengthens the fabric of research.

Research Training and Mentorship

The panel defined a mentor as that person directly responsible for the professional development of a research trainee. 25 Professional development includes both technical training, such as instruction in the methods of scientific research (e.g., research design, instrument use, and selection of research questions and data), and socialization in basic research practices (e.g., authorship practices and sharing of research data).

Positive Aspects of Mentorship

The relationship of the mentor and research trainee is usually characterized by extraordinary mutual commitment and personal involvement. A mentor, as a research advisor, is generally expected to supervise the work of the trainee and ensure that the trainee's research is completed in a sound, honest, and timely manner. The ideal mentor challenges the trainee, spurs the trainee to higher scientific achievement, and helps socialize the trainee into the community of scientists by demonstrating and discussing methods and practices that are not well understood.

Research mentors thus have complex and diverse roles. Many individuals excel in providing guidance and instruction as well as personal support, and some mentors are resourceful in providing funds and securing professional opportunities for their trainees. The mentoring relationship may also combine elements of other relationships, such as parenting, coaching, and guildmastering. One mentor has written that his “research group is like an extended family or small tribe, dependent on one another, but led by the mentor, who acts as their consultant, critic, judge, advisor, and scientific father” (Cram, 1989, p. 1). Another mentor described trainees who had lost their mentors to death, job changes, or other circumstances as “orphaned graduate students” (Sindermann, 1987). Many students come to respect and admire their mentors, who act as role models for their younger colleagues.

Difficulties Associated with Mentorship

However, the mentoring relationship does not always function properly or even satisfactorily. Almost no literature exists that evaluates which problems are idiosyncratic and which are systemic. However, it is clear that traditional practices in the area of mentorship and training are under stress. In some research fields, for example, concerns are being raised about how the increasing size and diverse composition of research groups affect the quality of the relationship between trainee and mentor. As the size of research laboratories expands, the quality of the training environment is at risk (CGS, 1990a).

Large laboratories may provide valuable instrumentation and access to unique research skills and resources as well as an opportunity to work in pioneering fields of science. But as only one contribution to the efforts of a large research team, a graduate student's work may become highly specialized, leading to a narrowing of experience and greater dependency on senior personnel; in a period when the availability of funding may limit research opportunities, laboratory heads may find it necessary to balance research decisions for the good of the team against the individual educational interests of each trainee. Moreover, the demands of obtaining sufficient resources to maintain a laboratory in the contemporary research environment often separate faculty from their trainees. When laboratory heads fail to participate in the everyday workings of the laboratory—even for the most beneficent of reasons, such as finding funds to support young investigators—their inattention may harm their trainees' education.

Although the size of a research group can influence the quality of mentorship, the more important issues are the level of supervision received by trainees, the degree of independence that is appropriate for the trainees' experience and interests, and the allocation of credit for achievements that are accomplished by groups composed of individuals with different status. Certain studies involving large groups of 40 to 100 or more are commonly carried out by collaborative or hierarchical arrangements under a single investigator. These factors may affect the ability of research mentors to transmit the methods and ethical principles according to which research should be conducted.

Problems also arise when faculty members are not directly rewarded for their graduate teaching or training skills. Although faculty may receive indirect rewards from the contributions of well-trained graduate students to their own research as well as the satisfaction of seeing their students excelling elsewhere, these rewards may not be sufficiently significant in tenure or promotion decisions. When institutional policies fail to recognize and reward the value of good teaching and mentorship, the pressures to maintain stable funding for research teams in a competitive environment can overwhelm the time allocated to teaching and mentorship by a single investigator.

The increasing duration of the training period in many research fields is another source of concern, particularly when it prolongs the dependent status of the junior investigator. The formal period of graduate and postdoctoral training varies considerably among fields of study. In 1988, the median time to the doctorate from the baccalaureate degree was 6.5 years (NRC, 1989). The disciplinary median varied: 5.5 years in chemistry; 5.9 years in engineering; 7.1 years in health sciences and in earth, atmospheric, and marine sciences; and 9.0 years in anthropology and sociology. 26

Students, research associates, and faculty are currently raising various questions about the rights and obligations of trainees. Sexist behavior by some research directors and other senior scientists is a particular source of concern. Another significant concern is that research trainees may be subject to exploitation because of their subordinate status in the research laboratory, particularly when their income, access to research resources, and future recommendations are dependent on the goodwill of the mentor. Foreign students and postdoctoral fellows may be especially vulnerable, since their immigration status often depends on continuation of a research relationship with the selected mentor.

Inequalities between mentor and trainee can exacerbate ordinary conflicts such as the distribution of credit or blame for research error (NAS, 1989). When conflicts arise, the expectations and assumptions that govern authorship practices, ownership of intellectual property, and the giving of references and recommendations are exposed for professional—and even legal—scrutiny (Nelkin, 1984; Weil and Snapper, 1989).

Making Mentorship Better

Ideally, mentors and trainees should select each other with an eye toward scientific merit, intellectual and personal compatibility, and other relevant factors. But this situation operates only under conditions of freely available information and unconstrained choice—conditions that usually do not exist in academic research groups. The trainee may choose to work with a faculty member based solely on criteria of patronage, perceived influence, or ability to provide financial support.

Good mentors may be well known and highly regarded within their research communities and institutions. Unfortunately, individuals who exploit the mentorship relationship may be less visible. Poor mentorship practices may be self-correcting over time, if students can detect and avoid research groups characterized by disturbing practices. However, individual trainees who experience abusive relationships with a mentor may discover only too late that the practices that constitute the abuse were well known but were not disclosed to new initiates.

It is common practice for a graduate student to be supervised not only by an individual mentor but also by a committee that represents the graduate department or research field of the student. However, departmental oversight is rare for the postdoctoral research fellow. In order to foster good mentorship practices for all research trainees, many groups and institutions have taken steps to clarify the nature of individual and institutional responsibilities in the mentor–trainee relationship. 27

FINDINGS AND CONCLUSIONS

The self-regulatory system that characterizes the research process has evolved from a diverse set of principles, traditions, standards, and customs transmitted from senior scientists, research directors, and department chairs to younger scientists by example, discussion, and informal education. The principles of honesty, collegiality, respect for others, and commitment to dissemination, critical evaluation, and rigorous training are characteristic of all the sciences. Methods and techniques of experimentation, styles of communicating findings, the relationship between theory and experimentation, and laboratory groupings for research and for training vary with the particular scientific disciplines. Within those disciplines, practices combine the general with the specific. Ideally, research practices reflect the values of the wider research community and also embody the practical skills needed to conduct scientific research.

Practicing scientists are guided by the principles of science and the standard practices of their particular scientific discipline as well as their personal moral principles. But conflicts are inherent among these principles. For example, loyalty to one's group of colleagues can be in conflict with the need to correct or report an abuse of scientific practice on the part of a member of that group.

Because scientists and the achievements of science have earned the respect of society at large, the behavior of scientists must accord not only with the expectations of scientific colleagues, but also with those of a larger community. As science becomes more closely linked to economic and political objectives, the processes by which scientists formulate and adhere to responsible research practices will be subject to increasing public scrutiny. This is one reason for scientists and research institutions to clarify and strengthen the methods by which they foster responsible research practices.

Accordingly, the panel emphasizes the following conclusions:

The panel believes that the existing self-regulatory system in science is sound. But modifications are necessary to foster integrity in a changing research environment, to handle cases of misconduct in science, and to discourage questionable research practices.

Individual scientists have a fundamental responsibility to ensure that their results are reproducible, that their research is reported thoroughly enough for others to reproduce the results, and that significant errors are corrected when they are recognized. Editors of scientific journals share these last two responsibilities.

Research mentors, laboratory directors, department heads, and senior faculty are responsible for defining, explaining, exemplifying, and requiring adherence to the value systems of their institutions. The neglect of sound training in a mentor's laboratory will over time compromise the integrity of the research process.

Administrative officials within the research institution also bear responsibility for ensuring that good scientific practices are observed in units of appropriate jurisdiction and that balanced reward systems appropriately recognize research quality, integrity, teaching, and mentorship. Adherence to scientific principles and disciplinary standards is at the root of a vital and productive research environment.

At present, scientific principles are passed on to trainees primarily by example and discussion, including training in customary practices. Most research institutions do not have explicit programs of instruction and discussion to foster responsible research practices, but the communication of values and traditions is critical to fostering responsible research practices and deterring misconduct in science.

Efforts to foster responsible research practices in areas such as data handling, communication and publication, and research training and mentorship deserve encouragement by the entire research community. Problems have also developed in these areas that require explicit attention and correction by scientists and their institutions. If not properly resolved, these problems may weaken the integrity of the research process.

1. See, for example, Kuyper (1991).

2. See, for example, the proposal by Pigman and Carmichael (1950).

3. See, for example, Holton (1988) and Ravetz (1971).

4. Several excellent books on experimental design and statistical methods are available. See, for example, Wilson (1952) and Beveridge (1957).

5. For a somewhat dated review of codes of ethics adopted by the scientific and engineering societies, see Chalk et al. (1981).

6. The discussion in this section is derived from Mark Frankel's background paper, “Professional Societies and Responsible Research Conduct,” included in Volume II of this report.

7. For a broader discussion on this point, see Zuckerman (1977).

8. For a full discussion of the roles of scientific societies in fostering responsible research practices, see the background paper prepared by Mark Frankel, “Professional Societies and Responsible Research Conduct,” in Volume II of this report.

9. Selected examples of academic research conduct policies and guidelines are included in Volume II of this report.

10. See, for example, Holton's response to the criticisms of Millikan in Chapter 12 of Thematic Origins of Scientific Thought (Holton, 1988). See also Holton (1978).

11. See, for example, responses to the Proceedings of the National Academy of Sciences action against Friedman: Hamilton (1990) and Abelson et al. (1990). See also the discussion in Bailar et al. (1990).

12. Much of the discussion in this section is derived from a background paper, “Reflections on the Current State of Data and Reagent Exchange Among Biomedical Researchers,” prepared by Robert Weinberg and included in Volume II of this report.

13. See, for example, Culliton (1990) and Bradshaw et al. (1990). For the impact of the inability to provide corroborating data or witnesses, also see Ross et al. (1989).

14. See, for example, Rennie (1989) and Cassidy and Shamoo (1989).

15. See, for example, the discussion on random data audits in Institute of Medicine (1989a), pp. 26-27.

16. For a full discussion of the practices and policies that govern authorship in the biological sciences, see Bailar et al. (1990).

17. Note that these general guidelines exclude the provision of reagents or facilities or the supervision of research as criteria for authorship.

18. A full discussion of problematic practices in authorship is included in Bailar et al. (1990). A controversial review of the responsibilities of co-authors is presented by Stewart and Feder (1987).

19. In the past, scientific papers often included a special note by a named researcher, not a co-author of the paper, who described, for example, a particular substance or procedure in a footnote or appendix. This practice seems to have been abandoned for reasons that are not well understood.

20. Martin et al. (1969), as cited in Sigma Xi (1986), p. 41.

21. Huth (1988) suggests a “notice of fraud or notice of suspected fraud” issued by the journal editor to call attention to the controversy (p. 38). Angell (1983) advocates closer coordination between institutions and editors when institutions have ascertained misconduct.

22. Such facilities include Cambridge Crystallographic Data Base, GenBank at Los Alamos National Laboratory, the American Type Culture Collection, and the Protein Data Bank at Brookhaven National Laboratory. Deposition is important for data that cannot be directly printed because of large volume.

23. For more complete discussions of peer review in the wider context, see, for example, Cole et al. (1977) and Chubin and Hackett (1990).

24. The strength of theories as sources of the formulation of scientific laws and predictive power varies among different fields of science. For example, theories derived from observations in the field of evolutionary biology lack a great deal of predictive power. The role of chance in mutation and natural selection is great, and the future directions that evolution may take are essentially impossible to predict. Theory has enormous power for clarifying understanding of how evolution has occurred and for making sense of detailed data, but its predictive power in this field is very limited. See, for example, Mayr (1982, 1988).

25. Much of the discussion on mentorship is derived from a background paper prepared for the panel by David Guston. A copy of the full paper, “Mentorship and the Research Training Experience,” is included in Volume II of this report.

26. Although the time to the doctorate is increasing, there is some evidence that the magnitude of the increase may be affected by the organization of the cohort chosen for study. In the humanities, the increased time to the doctorate is not as large if one chooses as an organizational base the year in which the baccalaureate was received by Ph.D. recipients, rather than the year in which the Ph.D. was completed; see Bowen et al. (1991).

27. Some universities have written guidelines for the supervision or mentorship of trainees as part of their institutional research policy guidelines (see, for example, the guidelines adopted by Harvard University and the University of Michigan that are included in Volume II of this report). Other groups or institutions have written “guidelines” (IOM, 1989a; NIH, 1990), “checklists” (CGS, 1990a), and statements of “areas of concern” and suggested “devices” (CGS, 1990c).

The guidelines often affirm the need for regular, personal interaction between the mentor and the trainee. They indicate that mentors may need to limit the size of their laboratories so that they are able to interact directly and frequently with all of their trainees. Although there are many ways to ensure responsible mentorship, methods that provide continuous feedback, whether through formal or informal mechanisms, are apt to be the most successful (CGS, 1990a). Departmental mentorship awards (comparable to teaching or research prizes) can recognize, encourage, and enhance the mentoring relationship. For other discussions on mentorship, see the paper by David Guston in Volume II of this report.

One group convened by the Institute of Medicine has suggested “that the university has a responsibility to ensure that the size of a research unit does not outstrip the mentor's ability to maintain adequate supervision” (IOM, 1989a, p. 85). Others have noted that although it may be desirable to limit the number of trainees assigned to a senior investigator, there is insufficient information at this time to suggest that numbers alone significantly affect the quality of research supervision (IOM, 1989a, p. 33).

Responsible Science is a comprehensive review of factors that influence the integrity of the research process. Volume I examines reports on the incidence of misconduct in science and reviews institutional and governmental efforts to handle cases of misconduct.

The result of a two-year study by a panel of experts convened by the National Academy of Sciences, this book critically analyzes the impact of today's research environment on the traditional checks and balances that foster integrity in science.

Responsible Science is a provocative examination of the role of educational efforts; research guidelines; and the contributions of individual scientists, mentors, and institutional officials in encouraging responsible research practices.


Research Administrator II

Job no: 532931
Work type: Staff Full-Time
Location: Main Campus (Gainesville, FL)
Categories: Libraries/Museums
Department: 56010200 - NH-BUDGET / HUMAN RESOURCES

Classification Title:

Research Administrator II

Job Description:

As a Research Administrator II, you will be responsible for a wide range of tasks spanning the entire lifecycle of research projects, from grant opportunity identification to proposal submission, award management, and compliance oversight. Your primary responsibilities will include:

Grant Preparation/Submission:
- Develop and implement procedures for efficient grant proposal preparation and submission.
- Coordinate with faculty to compile proposal documents, ensuring adherence to administrative, budgetary, and compliance requirements.
- Conduct comprehensive reviews of proposal content, advising on formatting, language, and alignment with sponsor guidelines.
- Prepare and submit proposals through electronic systems, facilitating a timely and effective submission process.

Grant Opportunity Management:
- Proactively identify potential funding sources aligned with faculty research interests.
- Provide identified opportunities to appropriate faculty members.
- Prepare budgets and justifications for applications and RFP responses.

Grant Management:
- Coordinate reports for internal and external requirements.
- Serve as a liaison with regulatory bodies, such as the IRB and Animal Care.
- Manage subawards and modifications, ensuring compliance and timely execution.
- Assist in the transfer of project awards to and from the university.
- Maintain up-to-date records and manage documentation for audits and reviews.

Principal Investigator (PI) Training:
- Develop and deliver training sessions for PIs on research administration policies, procedures, and resources.
- Create and update training materials to reflect changes in regulations and best practices.

 

Expected Salary: $62,000.00 - $72,000.00, commensurate with education and experience.
Minimum Requirements:
Preferred Qualifications:

Familiarity with myUFL, myinvestiGator, Microsoft Office (especially Excel), and UFIRST.

Ability to understand and apply applicable rules, regulations, policies and procedures, especially as they relate to grant proposals and appropriate activities.

Strong customer service skills, especially when faced with an unfamiliar question or problem.

Excellent reading comprehension and communication skills, both oral and written.

Ability to work independently and utilize problem-solving techniques.

Detail-oriented, with strong familiarity with UF accounting principles and cost accounting standards.

Ability to plan, organize, coordinate work assignments and multi-task when needed.

- Transparency through open and honest communication.
- Continuous improvement by communicating struggles and streamlining processes.
- Teamwork by supporting the knowledge and ideas of others.

 

Special Instructions to Applicants:

In order to be considered, you must upload your cover letter and resume.

Application must be submitted by 11:55 p.m. (ET) of the posting end date.

Health Assessment Required: No

Advertised: 27 Aug 2024, Eastern Daylight Time
Applications close: 09 Sep 2024, Eastern Daylight Time

The Florida Museum of Natural History seeks a Research Administrator II to provide professional oversight, guidance and coordination of intricate, large-scale proposals. With exceptional communication skills, this individual should excel at organizing and facilitating collaboration among diverse teams to produce high-quality proposals within specified timelines. The Research Administrator will act as a consultant during the proposal development phase, providing expertise in structural design, budgetary considerations, cost-share strategies, and overall strategic planning. This position will additionally play a crucial role in identifying key opportunities for research funding and ensuring faculty are pursuing these opportunities. This position will also be responsible for the professional oversight, guidance and coordination of various aspects of pre-award and post-award grant administration and fiscal operations. The ideal candidate will have strong analytical skills, experience in research administration, and a commitment to providing exceptional customer service.


Equal Opportunity Employer

The University is committed to non-discrimination with respect to race, creed, color, religion, age, disability, sex, sexual orientation, gender identity and expression, marital status, national origin, political opinions or affiliations, genetic information and veteran status in all aspects of employment including recruitment, hiring, promotions, transfers, discipline, terminations, wage and salary administration, benefits, and training.


Review Article
Published: 27 August 2024

Tumour mutational burden: clinical utility, challenges and emerging improvements

  • Jan Budczies (ORCID: orcid.org/0000-0002-6668-5327)
  • Daniel Kazdal (ORCID: orcid.org/0000-0001-8187-3281)
  • Michael Menzel (ORCID: orcid.org/0000-0002-4129-4741)
  • Susanne Beck
  • Klaus Kluck (ORCID: orcid.org/0009-0001-1000-8052)
  • Christian Altbürger (ORCID: orcid.org/0000-0002-4545-2719)
  • Constantin Schwab
  • Michael Allgäuer
  • Aysel Ahadova
  • Matthias Kloor
  • Peter Schirmacher
  • Solange Peters
  • Alwin Krämer
  • Petros Christopoulos
  • Albrecht Stenzinger

Nature Reviews Clinical Oncology (2024)


  • Cancer genetics
  • Cancer immunotherapy
  • Predictive markers
  • Tumour biomarkers

Tumour mutational burden (TMB), defined as the total number of somatic non-synonymous mutations present within the cancer genome, varies across and within cancer types. A first wave of retrospective and prospective research identified TMB as a predictive biomarker of response to immune-checkpoint inhibitors and culminated in the disease-agnostic approval of pembrolizumab for patients with TMB-high tumours based on data from the Keynote-158 trial. Although the applicability of outcomes from this trial to all cancer types and the optimal thresholds for TMB are yet to be ascertained, research into TMB is advancing along three principal avenues: enhancement of TMB assessments through rigorous quality control measures within the laboratory process, including the mitigation of confounding factors such as limited panel scope and low tumour purity; refinement of the traditional TMB framework through the incorporation of innovative concepts such as clonal, persistent or HLA-corrected TMB, tumour neoantigen load and mutational signatures; and integration of TMB with established and emerging biomarkers such as PD-L1 expression, microsatellite instability, immune gene expression profiles and the tumour immune contexture. Given its pivotal functions in both the pathogenesis of cancer and the ability of the immune system to recognize tumours, a profound comprehension of the foundational principles and the continued evolution of TMB are of paramount relevance for the field of oncology.

Tumour mutational burden (TMB) is a predictive biomarker for benefit from immune-checkpoint inhibition across cancer types and within some cancer types.

Together with the cut-off of 10 mut/Mb, TMB is included in the entity-agnostic FDA approval of pembrolizumab for patients with advanced-stage solid tumours following disease progression on standard-of-care therapy who lack an alternative treatment option.
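
To make the arithmetic behind this cut-off concrete, the sketch below computes a panel-based TMB as the number of somatic non-synonymous mutations divided by the panel's coding footprint in megabases, and flags a sample as TMB-high at the 10 mut/Mb threshold cited above. This is a minimal illustration: the data structure, the function name and the synonymous-only filter are simplifying assumptions, not the variant-filtering rules of any approved assay.

```python
from dataclasses import dataclass

@dataclass
class SomaticVariant:
    """Illustrative record for one somatic call from a targeted panel."""
    gene: str
    effect: str            # e.g. "missense", "frameshift", "synonymous"
    allele_fraction: float

def panel_tmb(variants, panel_size_mb, tmb_high_cutoff=10.0):
    """Return (TMB in mut/Mb, TMB-high flag) for a list of somatic calls.

    TMB is counted here as non-synonymous somatic mutations per megabase of
    panel territory; real assays apply further filters (germline removal,
    hotspot exclusion, allele-fraction thresholds) that are omitted.
    """
    nonsynonymous = [v for v in variants if v.effect != "synonymous"]
    tmb = len(nonsynonymous) / panel_size_mb
    return tmb, tmb >= tmb_high_cutoff

# Hypothetical example: 17 non-synonymous calls on a 1.3 Mb panel.
calls = [SomaticVariant("TP53", "missense", 0.42)] * 17
tmb, is_high = panel_tmb(calls, panel_size_mb=1.3)
print(f"TMB = {tmb:.1f} mut/Mb, TMB-high: {is_high}")  # ~13.1 mut/Mb -> True
```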

The use of large sequencing panels and of tissue samples with sufficient tumour purity is key to ensuring accurate TMB quantification and precise patient stratification.
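
One reason panel size matters is counting noise: a small panel samples only a handful of mutations, so the per-megabase estimate fluctuates widely around the classification cut-off. The Monte Carlo sketch below, which uses NumPy's Poisson sampler, illustrates this under the simplifying assumption that mutation counts are Poisson-distributed; the panel sizes and the "true" TMB of 14 mut/Mb are invented for illustration.

```python
import numpy as np

def tmb_estimate_spread(true_tmb=14.0, panel_sizes_mb=(0.2, 1.0, 2.0),
                        n_tumours=100_000, cutoff=10.0, seed=0):
    """Monte Carlo sketch of how panel size affects TMB estimation noise.

    Mutation counts on a panel of a given size are drawn as Poisson with mean
    true_tmb * panel_mb; all numbers here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    for mb in panel_sizes_mb:
        counts = rng.poisson(true_tmb * mb, size=n_tumours)
        estimates = counts / mb
        # Share of truly TMB-high tumours whose estimate falls below the cut-off.
        missed = (estimates < cutoff).mean()
        print(f"{mb:.1f} Mb panel: sd = {estimates.std():.1f} mut/Mb, "
              f"called TMB-low = {missed:.0%}")

tmb_estimate_spread()
```

Larger panels shrink both the spread of the estimate and the fraction of genuinely TMB-high tumours that would be misclassified as TMB-low, which is the intuition behind this key point.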

The predictive value of TMB for response to immune-checkpoint inhibitors has been demonstrated only for certain histologies, and even for these, its sensitivity and specificity for predicting benefit are limited.
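
For readers who want to see how that limitation is expressed numerically, the sketch below computes sensitivity and specificity for a simple "TMB ≥ 10 mut/Mb predicts benefit" rule against observed outcomes. The small cohort is entirely invented; only the definitions of sensitivity (true positives among responders) and specificity (true negatives among non-responders) carry over to real data.

```python
def sensitivity_specificity(tmb_values, responded, cutoff=10.0):
    """Sensitivity/specificity of a 'TMB >= cutoff predicts benefit' rule."""
    tp = sum(t >= cutoff and r for t, r in zip(tmb_values, responded))
    fn = sum(t < cutoff and r for t, r in zip(tmb_values, responded))
    tn = sum(t < cutoff and not r for t, r in zip(tmb_values, responded))
    fp = sum(t >= cutoff and not r for t, r in zip(tmb_values, responded))
    return tp / (tp + fn), tn / (tn + fp)

# Invented TMB estimates (mut/Mb) and observed benefit for eight patients.
tmb = [3.1, 22.4, 11.8, 6.0, 15.2, 2.4, 9.9, 31.0]
benefit = [False, True, False, True, True, False, False, True]
sens, spec = sensitivity_specificity(tmb, benefit)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.75 and 0.75
```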

TMB is the first step in a long series of immunological processes including antigen presentation, T cell priming and antigen recognition, all of which are necessary for an antitumour immune response.

Research efforts to improve the predictive value of TMB include refinement by selection or weighting of the mutations included and/or combination with assessments of other genetic and/or immunological tumour-related variables.

This is a preview of subscription content, access via your institution




Author information

These authors contributed equally: Jan Budczies, Daniel Kazdal.

Authors and Affiliations

Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany

Jan Budczies, Daniel Kazdal, Michael Menzel, Susanne Beck, Klaus Kluck, Christian Altbürger, Constantin Schwab, Michael Allgäuer, Peter Schirmacher & Albrecht Stenzinger

Translational Lung Research Center (TLRC) Heidelberg, Member of the German Center for Lung Research (DZL), Heidelberg, Germany

Jan Budczies, Daniel Kazdal, Petros Christopoulos & Albrecht Stenzinger

Center for Personalized Medicine (ZPM), Heidelberg, Germany

Department of Applied Tumour Biology, Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany

Aysel Ahadova & Matthias Kloor

Clinical Cooperation Unit Applied Tumour Biology, German Cancer Research Center (DKFZ), Heidelberg, Germany

Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne University, Lausanne, Switzerland

Solange Peters

Clinical Cooperation Unit Molecular Hematology/Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany

Alwin Krämer

Department of Internal Medicine V, University of Heidelberg, Heidelberg, Germany

Department of Thoracic Oncology, Thoraxklinik and National Center for Tumour Diseases at Heidelberg University Hospital, Heidelberg, Germany

Petros Christopoulos


Contributions

J.B., M.M., S.B., K.K., C.A., C.S., A.A., M.K. and A.K. researched data for the manuscript; J.B., D.K., M.M., S.B., K.K., P.S., S.P. and A.S. made a substantial contribution to discussions of content; J.B., D.K., C.A., C.S., A.A., M.K., A.K., P.C. and A.S. wrote the manuscript; and all authors reviewed and/or edited before submission.

Corresponding authors

Correspondence to Jan Budczies or Albrecht Stenzinger.

Ethics declarations

Competing interests.

J.B. has acted as a consultant of MSD. D.K. has acted as a consultant and/or adviser of Agilent, AstraZeneca, BMS, Illumina, Incyte, Eli Lilly, Pfizer and Takeda. C.S. has acted as a consultant of MSD and has received speaker’s fees from Bayer Vital and Boehringer Ingelheim. P.S. has acted as a consultant of BMS, Eisai, Incyte, Janssen, MSD and Roche and has received research funding from BMS, Chugai and Incyte. S.P. has acted as a consultant and/or adviser of AbbVie, Amgen, Arcus, AstraZeneca, Bayer, Beigene, BerGenBio, Biocartis, BioInvent, Blueprint Medicines, Boehringer Ingelheim, Bristol-Myers Squibb, Clovis, Daiichi Sankyo, Debiopharm, Eli Lilly, F-Star, Fishawack, Foundation Medicine, Genzyme, Gilead, GlaxoSmithKline, Hutchmed, Illumina, Incyte, Ipsen, iTeos, Janssen, Merck Sharp & Dohme, Merck Serono, Merrimack, Mirati, Nykode Therapeutics, Novartis, Novocure, Pharma Mar, Promontory Therapeutics, Pfizer, Regeneron, Roche/Genentech, Sanofi, Seattle Genetics and Takeda; holds a board of director position for Galenica; has acted as a speaker for AstraZeneca, Boehringer Ingelheim, BMS, Eli Lilly, Foundation Medicine, GlaxoSmithKline, Illumina, Ipsen, Merck Sharp & Dohme, Mirati, Novartis, Pfizer, Roche/Genentech, Sanofi and Takeda; and has acted as a principal investigator in trials sponsored by Amgen, Arcus, AstraZeneca, Beigene, Bristol-Myers Squibb, GlaxoSmithKline, iTeos, Merck Sharp & Dohme, Mirati, Pharma Mar, Promontory Therapeutics, Roche/Genentech and Seattle Genetics. A.K. has acted as a consultant and/or adviser, received support for attending meetings and/or travel, participated in a data safety monitoring board and received research funding from F. Hoffmann-La Roche and has received research funding from BMS and Molecular Health. P.C. has acted as a consultant and/or adviser of AstraZeneca, Boehringer Ingelheim, Chugai, Pfizer, Novartis, MSD, Takeda and Roche; has acted as a speaker for AstraZeneca, Janssen, Novartis, Roche, Pfizer, Thermo Fisher and Takeda; has received research funding from AstraZeneca, Amgen, Boehringer Ingelheim, Novartis, Roche and Takeda; and has received travel support from AstraZeneca, Eli Lilly, Daiichi Sankyo, Gilead, Novartis, Pfizer and Takeda. A.S. has acted as an adviser and/or speaker for Agilent, Aignostics, Amgen, Astellas, AstraZeneca, Bayer, BMS, Eli Lilly, Illumina, Incyte, Janssen, MSD, Novartis, Pfizer, Qlucore, QuIP, Roche, Sanofi, Seagen, Servier, Takeda and Thermo Fisher Scientific and has received research funding from Bayer, BMS, Chugai, Incyte and MSD.

Peer review

Peer review information.

Nature Reviews Clinical Oncology thanks A. Schoenfeld, R. Thummalapalli and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article.

Budczies, J., Kazdal, D., Menzel, M. et al. Tumour mutational burden: clinical utility, challenges and emerging improvements. Nat Rev Clin Oncol (2024). https://doi.org/10.1038/s41571-024-00932-9


Accepted : 23 July 2024

Published : 27 August 2024

DOI : https://doi.org/10.1038/s41571-024-00932-9


Physical activity and Parkinson’s: what do we know?

Research has shown that physical activity comes with many benefits for people with Parkinson’s, from reducing movement symptoms, to improving overall mental and physical wellbeing. But why is this and what are the best ways to stay active?

Keeping active is good for everyone. A regular exercise routine can help to maintain and build strength and power in your muscles, improve flexibility in your joints, and keep you generally fit and mobile. It can also keep your mind healthy, improve your mood and help you sleep. All of these benefits can help to better cope with the challenges that living with Parkinson’s may bring.

The impact of physical activity on Parkinson’s has been studied for many years and there is plenty of research evidence to support the importance of keeping active. Let’s take a look at the highlights.

What’s the evidence that physical activity is beneficial for Parkinson’s?

In 2023, a review was published that analysed over 150 research studies that investigated the impact of different types of physical activity on people with Parkinson’s. The review aimed to understand how different types of physical activity can be used to manage Parkinson’s symptoms. Overall, the researchers found that taking part in physical activity, which included dance, aqua-training and weight training, had benefits for people with Parkinson’s in terms of movement or improved quality of life, when compared with people who had not been active. Read a summary of the 2023 physical activity review on the Parkinson’s UK website.

Another study, published in 2024, explored the long-term effects of tai chi, a martial art that involves gentle movements, on Parkinson’s. The study followed 330 people with Parkinson’s who had and hadn’t engaged in regular tai chi training over 3 and a half years. The results showed that tai chi training had a long-term beneficial effect on Parkinson’s, improving both movement and non-movement symptoms. Symptoms also appeared to deteriorate more slowly in people who practised tai chi, suggesting that physical activity could slow the progression of Parkinson’s. Read the published paper exploring Parkinson’s and tai chi in the Journal of Neurology, Neurosurgery and Psychiatry.

Dance has also been investigated for Parkinson’s. The PD Ballet study explored how engaging in a ballet class once a week for 12 weeks could impact people with Parkinson’s. 53 people took part in the classes, which were led by English National Ballet dancers. The results showed improvements to movement symptoms, pain, and other non-movement symptoms. Read the published PD Ballet paper in the journal Neurology.

"The benefits of physical activity extend beyond the physical benefits that many people are familiar with such as improved fitness or increased muscle strength. Participation in physical activity also has many social benefits including developing a sense of community and providing opportunity for shared experience or shared learning. Above all, physical activity provides people with Parkinson’s to do something positive to help themselves."

Dr Julie Jones, Physiotherapist and researcher at Robert Gordon University, Aberdeen

In 2021, Laurel took part in a study investigating the benefits of mini trampolining exercises for people with neurological conditions. After the study, participants reported improvements in confidence, muscle strength, balance and coordination.

Laurel shared: “Taking part was most enjoyable, fun even. A weekly, free, one-to-one exercise class with a physiotherapist who is engaged in research to try to help people with Parkinson’s. It challenged me and helped me to work harder than I'd thought I'd be capable of. I felt it definitely improved my strength, balance and confidence. What's not to like? I'd do it again in a shot!”

How are physical activity and exercise changing what’s happening in the brain in Parkinson’s?

Whilst studies show that keeping active is good for people with Parkinson’s, our understanding of why is quite limited.

We know that aerobic exercise, which is high-impact and more vigorous, makes the heart work harder than normal to deliver oxygen to working muscles. This increases both the heart rate and breathing rate and means the brain receives a greater supply of blood and therefore more oxygen and nutrients to keep the tissue healthy and functioning well.

There is also growing evidence to show that types of exercise that raise the heart rate, like swimming or brisk walking, can stimulate the body to produce growth factors. Growth factors are sometimes described as fertilisers for the brain because they encourage new growth and help to keep brain cells healthy. One growth factor thought to increase after exercise is called brain derived neurotrophic factor, BDNF, which helps improve memory and thinking, among other roles.

Research, published in 2016, analysed a number of different studies involving a total of 1,111 participants, and found that exercise caused large amounts of BDNF to be delivered to the brain, and that regular exercise increased this effect. Read the 2016 published paper about BDNF in the Journal of Psychiatric Research. Similarly, a review, published in 2024, analysed 16 different studies and concluded that physical activity improved levels of BDNF. Levels of BDNF increased as the intensity and amount of exercise increased, but didn’t differ between different types of physical activity. Read the 2024 review of exercise and BDNF in the journal Frontiers in Physiology.

Researchers have also suggested that regular exercise may improve symptoms of Parkinson’s by creating more connections between areas of the brain affected by the condition. One research study, published in 2021, compared cycling on a stationary bike (the exercise group) to stretching (the less active group) 3 times a week over a 6-month period. 130 people with Parkinson’s took part in the study. After 6 months, the researchers reported that people in the exercise group had developed more connections between important brain areas and performed better on thinking and memory tests. Read a summary of the 2021 cycling study in a news article on the Parkinson’s UK website.

"Depending on what type of exercise you do and how long you do it for, the brain can be pushed to levels where new pathways and connections to different areas of the brain are created. New blood vessels form, bringing in a fresh supply of nutrition whilst clearing away unneeded waste, keeping the brain clean and efficient. Hormones that help with healing and learning can also be produced by more vigorous types of exercise."

Dr Bhanu Ramaswamy OBE, Independent Physiotherapy Consultant

Which type of physical activity is best for Parkinson’s?

Research has explored many different types of physical activity, from dance and swimming, to yoga and boxing. But, we don’t yet know which types of physical activity may help with specific symptoms, such as tremor. Future research is needed to understand this.

Dr Julie Jones shared: “Research into physical activity is really important as Parkinson’s affects people differently. Therefore, research is needed to determine or inform the optimum physical activity prescription for each person with Parkinson’s.”

Different activities will naturally work better for different people depending on their symptoms and experience of the condition. A good tip is to do something that you enjoy. If you want to get active but you’re not sure where to start, Parkinson’s UK can point you in the right direction. Visit the physical activity resources page on our website.

Dr Bhanu Ramaswamy OBE shared: “I would say there is no ‘best’. Most people with Parkinson’s need to do a combination of types of physical activity, and most will benefit from exercising in company. It is about doing something regularly, even if your choice of activity alters over time because of differing fashions in sport or because you need to improve a specific fitness component of your body.”

What research is ongoing in this area?

More research is needed to better understand whether keeping active can slow the progression of the condition, whether different types of physical activity can help to manage different symptoms and how people with Parkinson’s can be better supported to keep active.

Professor Bastiaan Bloem is currently leading a research study called SLOW-SPEED. The study is exploring if it’s possible to use physical activity to slow the development of Parkinson’s in people who have early symptoms or a high risk of developing the condition. Participants are asked to take part in an exercise programme, which is delivered remotely to a smartphone, for a period of 3 years. The study hopes to provide insights into the success of remote exercise programmes and whether physical activity could become a preventative measure to slow the development of the condition. This research is part funded by Parkinson’s UK and is due to conclude in June 2027.

A number of studies are also exploring different programmes to help people stay active. Dr Gill Barry is leading a study at Northumbria University exploring whether an NHS approved digital health programme can help people with Parkinson’s stay active. The programme, called Keep On and Keep Up, is designed to engage older people in safe and effective balance, strength and fall prevention exercises, but has not yet been tested on people with Parkinson’s. The study is due to finish in June 2025.

Research continues to tell us more about physical activity and Parkinson’s. We know that there are benefits to staying active, and Parkinson’s UK can support you on your journey. Find out more by requesting a free copy of our Being Active with Parkinson’s guide or contact the Physical Activity team at [email protected].



Basic Research – Types, Methods and Examples

Basic Research

Definition:

Basic Research, also known as Fundamental or Pure Research , is scientific research that aims to increase knowledge and understanding about the natural world without necessarily having any practical or immediate applications. It is driven by curiosity and the desire to explore new frontiers of knowledge rather than by the need to solve a specific problem or to develop a new product.

Types of Basic Research

Types of Basic Research are as follows:

Experimental Research

This type of research involves manipulating one or more variables to observe their effect on a particular phenomenon. It aims to test hypotheses and establish cause-and-effect relationships.

Observational Research

This type of research involves observing and documenting natural phenomena without manipulating any variables. It aims to describe and understand the behavior of the observed system.

Theoretical Research

This type of research involves developing and testing theories and models to explain natural phenomena. It aims to provide a framework for understanding and predicting observations and experiments.

Descriptive Research

This type of research involves describing and cataloging natural phenomena without attempting to explain or understand them. It aims to provide a comprehensive and accurate picture of the observed system.

Comparative Research

This type of research involves comparing different systems or phenomena to identify similarities and differences. It aims to understand the underlying principles that govern different natural phenomena.

Historical Research

This type of research involves studying past events, developments, and discoveries to understand how science has evolved over time. It aims to provide insights into the factors that have influenced scientific progress and the role of basic research in shaping our understanding of the world.

Data Collection Methods

Some common data collection methods used in basic research include:

  • Observation: This involves watching and recording natural phenomena in a systematic and structured way. Observations can be made in a laboratory setting or in the field and can be qualitative or quantitative.
  • Surveys and questionnaires: These are tools for collecting data from a large number of individuals about their attitudes, beliefs, behaviors, and experiences. Surveys and questionnaires can be administered in person, by mail, or online.
  • Interviews: Interviews involve asking a person or a group of people questions to gather information about their experiences, opinions, and perspectives. Interviews can be structured, semi-structured, or unstructured.
  • Experiments: Experiments involve manipulating one or more variables and observing their effect on a particular phenomenon. Experiments can be conducted in a laboratory or in the field and can be controlled or naturalistic.
  • Case studies: Case studies involve in-depth analysis of a particular individual, group, or phenomenon. Case studies can provide rich and detailed information about complex phenomena.
  • Archival research: Archival research involves analyzing existing data, documents, and records to answer research questions. Archival research can be used to study historical events, trends, and developments.
  • Simulation: Simulation involves creating a computer model of a particular phenomenon to study its behavior and predict its future outcomes. Simulation can be used to study complex systems that are difficult to study in the real world. A minimal sketch follows this list.
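
As a rough illustration of the simulation point, the short Python sketch below generates observations from a hypothetical logistic population-growth model with a little measurement noise. The model, parameter values and noise level are invented purely for illustration and are not drawn from any particular study.

```python
# Minimal sketch of simulation as a way of generating data: a hypothetical
# logistic population-growth model observed with measurement noise.
# Model, parameters and noise level are invented for illustration only.
import random

def simulate_population(r=0.3, capacity=1000.0, n0=10.0, steps=50, noise=0.05):
    """Return a list of simulated population sizes over discrete time steps."""
    sizes = [n0]
    for _ in range(steps):
        n = sizes[-1]
        growth = r * n * (1 - n / capacity)                      # logistic growth term
        observed = (n + growth) * (1 + random.gauss(0, noise))   # noisy "measurement"
        sizes.append(max(observed, 0.0))
    return sizes

if __name__ == "__main__":
    trajectory = simulate_population()
    print(f"Simulated {len(trajectory)} observations; final value: {trajectory[-1]:.1f}")
```

The resulting series can then be analysed with the same descriptive and inferential tools that would be applied to data collected in the field.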

Data Analysis Methods

Some common data analysis methods used in basic research include:

  • Descriptive statistics: This involves summarizing and describing data using measures such as mean, median, mode, and standard deviation. Descriptive statistics provide a simple and easy way to understand the basic properties of the data.
  • Inferential statistics: This involves making inferences about a population based on data collected from a sample. Inferential statistics can be used to test hypotheses, estimate parameters, and quantify uncertainty. A short sketch combining descriptive and inferential statistics follows this list.
  • Qualitative analysis: This involves analyzing data that are not numerical in nature, such as text, images, or audio recordings. Qualitative analysis can involve coding, categorizing, and interpreting data to identify themes, patterns, and relationships.
  • Content analysis: This involves analyzing the content of text, images, or audio recordings to identify specific words, phrases, or themes. Content analysis can be used to study communication, media, and discourse.
  • Multivariate analysis: This involves analyzing data that have multiple variables or factors. Multivariate analysis can be used to identify patterns and relationships among variables, cluster similar observations, and reduce the dimensionality of the data.
  • Network analysis: This involves analyzing the structure and dynamics of networks, such as social networks, communication networks, or ecological networks. Network analysis can be used to study the relationships and interactions among individuals, groups, or entities.
  • Machine learning: This involves using algorithms and models to analyze and make predictions based on data. Machine learning can be used to identify patterns, classify observations, and make predictions based on complex data sets.
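
To make the first two bullets concrete, here is a minimal Python sketch that computes descriptive statistics for two invented samples and then runs an independent-samples t-test (via SciPy) as a simple inferential check. The data are made up purely for illustration.

```python
# Minimal sketch: descriptive statistics for two invented samples, followed
# by an independent-samples t-test (SciPy) as a simple inferential check.
import statistics
from scipy import stats

group_a = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6]
group_b = [5.8, 6.1, 5.5, 6.0, 5.9, 6.3, 5.7]

# Descriptive statistics: summarise each sample.
for name, sample in (("A", group_a), ("B", group_b)):
    print(f"Group {name}: mean={statistics.mean(sample):.2f}, "
          f"median={statistics.median(sample):.2f}, "
          f"sd={statistics.stdev(sample):.2f}")

# Inferential statistics: could a difference in means this large arise by chance?
result = stats.ttest_ind(group_a, group_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

The descriptive step summarises what was observed; the inferential step asks how confidently that observation can be generalised beyond the sample.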

Basic Research Methodology

Basic research methodology refers to the approach, techniques, and procedures used to conduct basic research. The following are some common steps involved in basic research methodology, with a minimal end-to-end sketch after the list:

  • Formulating research questions or hypotheses: This involves identifying the research problem and formulating specific questions or hypotheses that can guide the research.
  • Reviewing the literature: This involves reviewing and synthesizing existing research on the topic of interest to identify gaps, controversies, and areas for further investigation.
  • Designing the study: This involves designing a study that is appropriate for the research question or hypothesis. The study design can involve experiments, observations, surveys, case studies, or other methods.
  • Collecting data: This involves collecting data using appropriate methods and instruments, such as observation, surveys, experiments, or interviews.
  • Analyzing data: This involves analyzing the collected data using appropriate methods, such as descriptive or inferential statistics, qualitative analysis, or content analysis.
  • Interpreting results: This involves interpreting the results of the data analysis in light of the research question or hypothesis and the existing literature.
  • Drawing conclusions: This involves drawing conclusions based on the interpretation of the results and assessing their implications for the research question or hypothesis.
  • Communicating findings: This involves communicating the research findings in the form of research reports, journal articles, conference presentations, or other forms of dissemination.
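
The sketch below strings several of these steps together for a made-up question ("is x associated with y?"). The "collected" data are simulated, the analysis is a Pearson correlation from SciPy, and the interpretation step is reduced to a simple significance check, so it illustrates the shape of the workflow rather than serving as a template for a real study.

```python
# Minimal end-to-end sketch of the methodology steps for a made-up question:
# "is x associated with y?" Data are simulated; analysis is a Pearson correlation.
import random
from scipy import stats

# 1. Formulate the hypothesis: x and y are positively correlated.
# 2. "Collect" the data (simulated here in place of real measurements).
random.seed(42)
x = [random.uniform(0, 10) for _ in range(100)]
y = [2.0 * xi + random.gauss(0, 3.0) for xi in x]

# 3. Analyse the data with an appropriate method.
r, p_value = stats.pearsonr(x, y)

# 4. Interpret and report the result.
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("Data are consistent with the hypothesised association.")
else:
    print("No evidence of an association at the 0.05 level.")
```

In a real project the remaining steps (literature review, study design, and formal write-up) would sit around this core of data collection, analysis and interpretation.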

Applications of Basic Research

Some applications of basic research include:

  • Medical breakthroughs: Basic research in fields such as biology, chemistry, and physics has led to important medical breakthroughs, including the discovery of antibiotics, vaccines, and new drugs.
  • Technology advancements: Basic research in fields such as computer science, physics, and engineering has led to advancements in technology, such as the development of the internet, smartphones, and other electronic devices.
  • Environmental solutions: Basic research in fields such as ecology, geology, and meteorology has led to the development of solutions to environmental problems, such as climate change, air pollution, and water contamination.
  • Economic growth: Basic research can stimulate economic growth by creating new industries and markets based on scientific discoveries and technological advancements.
  • National security: Basic research in fields such as physics, chemistry, and biology has led to the development of new technologies for national security, including encryption, radar, and stealth technology.

Examples of Basic Research

Here are some examples of basic research:

  • Astronomy: Astronomers conduct basic research to understand the fundamental principles that govern the universe, such as the laws of gravity, the behavior of stars and galaxies, and the origins of the universe.
  • Genetics: Geneticists conduct basic research to understand the genetic basis of various traits, diseases, and disorders. This research can lead to the development of new treatments and therapies for genetic diseases.
  • Physics: Physicists conduct basic research to understand the fundamental principles of matter and energy, such as quantum mechanics, particle physics, and cosmology. This research can lead to new technologies and advancements in fields such as medicine and engineering.
  • Neuroscience: Neuroscientists conduct basic research to understand the structure and function of the brain, including how it processes information and controls behavior. This research can lead to new treatments and therapies for neurological disorders and brain injuries.
  • Mathematics: Mathematicians conduct basic research to develop and explore new mathematical theories, such as number theory, topology, and geometry. This research can lead to new applications in fields such as computer science, physics, and engineering.
  • Chemistry: Chemists conduct basic research to understand the fundamental properties of matter and how it interacts with other substances. This research can lead to the development of new materials, drugs, and technologies.

Purpose of Basic Research

The purpose of basic research, also known as fundamental or pure research, is to expand knowledge in a particular field or discipline without any specific practical application in mind. The primary goal of basic research is to advance our understanding of the natural world and to uncover fundamental principles and relationships that underlie complex phenomena.

Basic research is often exploratory in nature, with researchers seeking to answer fundamental questions about how the world works. The research may involve conducting experiments, collecting and analyzing data, or developing new theories and hypotheses. Basic research often requires a high degree of creativity, innovation, and intellectual curiosity, as well as a willingness to take risks and pursue unconventional lines of inquiry.

Although basic research is not conducted with a specific practical outcome in mind, it can lead to significant practical applications in various fields. Many of the major scientific discoveries and technological advancements of the past century have been rooted in basic research, from the discovery of antibiotics to the development of the internet.

In summary, the purpose of basic research is to expand knowledge and understanding in a particular field or discipline, with the goal of uncovering fundamental principles and relationships that can help us better understand the natural world. While the practical applications of basic research may not always be immediately apparent, it has led to significant scientific and technological advancements that have benefited society in numerous ways.

When to use Basic Research

Basic research is generally conducted when scientists and researchers are seeking to expand knowledge and understanding in a particular field or discipline. It is particularly useful when there are gaps in our understanding of fundamental principles and relationships that underlie complex phenomena. Here are some situations where basic research might be particularly useful:

  • Exploring new fields: Basic research can be particularly valuable when researchers are exploring new fields or areas of inquiry where little is known. By conducting basic research, scientists can establish a foundation of knowledge that can be built upon in future studies.
  • Testing new theories: Basic research can be useful when researchers are testing new theories or hypotheses that have not been tested before. This can help scientists to gain a better understanding of how the world works and to identify areas where further research is needed.
  • Developing new technologies: Basic research can be important for developing new technologies and innovations. By conducting basic research, scientists can uncover new materials, properties, and relationships that can be used to develop new products or technologies.
  • Investigating complex phenomena: Basic research can be particularly valuable when investigating complex phenomena that are not yet well understood. By conducting basic research, scientists can gain a better understanding of the underlying principles and relationships that govern complex systems.
  • Advancing scientific knowledge: Basic research is important for advancing scientific knowledge in general. By conducting basic research, scientists can uncover new principles and relationships that can be applied across multiple fields of study.

Characteristics of Basic Research

Here are some of the main characteristics of basic research:

  • Focus on fundamental knowledge: Basic research is focused on expanding our understanding of the natural world and uncovering fundamental principles and relationships that underlie complex phenomena. The primary goal of basic research is to advance knowledge without any specific practical application in mind.
  • Exploratory in nature: Basic research is often exploratory in nature, with researchers seeking to answer fundamental questions about how the world works. The research may involve conducting experiments, collecting and analyzing data, or developing new theories and hypotheses.
  • Long-term focus: Basic research is often focused on long-term outcomes rather than immediate practical applications. The insights and discoveries generated by basic research may take years or even decades to translate into practical applications.
  • High degree of creativity and innovation: Basic research often requires a high degree of creativity, innovation, and intellectual curiosity. Researchers must be willing to take risks and pursue unconventional lines of inquiry.
  • Emphasis on scientific rigor: Basic research is conducted using the scientific method, which emphasizes the importance of rigorous experimental design, data collection and analysis, and peer review.
  • Interdisciplinary: Basic research is often interdisciplinary, drawing on multiple fields of study to address complex research questions. Basic research can be conducted in fields ranging from physics and chemistry to biology and psychology.
  • Open-ended: Basic research is open-ended, meaning that it does not have a specific end goal in mind. Researchers may follow unexpected paths or uncover new lines of inquiry that they had not anticipated.

Advantages of Basic Research

Here are some of the main advantages of basic research:

  • Advancing scientific knowledge: Basic research is essential for expanding our understanding of the natural world and uncovering fundamental principles and relationships that underlie complex phenomena. This knowledge can be applied across multiple fields of study and can lead to significant scientific and technological advancements.
  • Fostering innovation: Basic research often requires a high degree of creativity, innovation, and intellectual curiosity. By encouraging scientists to pursue unconventional lines of inquiry and take risks, basic research can lead to breakthrough discoveries and innovations.
  • Stimulating economic growth: Basic research can lead to the development of new technologies and products that can stimulate economic growth and create new industries. Many of the major scientific and technological advancements of the past century have been rooted in basic research.
  • Improving health and well-being: Basic research can lead to the development of new drugs, therapies, and medical treatments that can improve health and well-being. For example, many of the major advances in medical science, such as the development of antibiotics and vaccines, were rooted in basic research.
  • Training the next generation of scientists: Basic research is essential for training the next generation of scientists and researchers. By providing opportunities for young scientists to engage in research and gain hands-on experience, basic research helps to develop the skills and expertise needed to advance scientific knowledge in the future.
  • Encouraging interdisciplinary collaboration: Basic research often requires collaboration between scientists from different fields of study. By fostering interdisciplinary collaboration, basic research can lead to new insights and discoveries that would not be possible through single-discipline research alone.

Limitations of Basic Research

Here are some of the main limitations of basic research:

  • Lack of immediate practical applications: Basic research is often focused on long-term outcomes rather than immediate practical applications. The insights and discoveries generated by basic research may take years or even decades to translate into practical applications.
  • High cost and time requirements: Basic research can be expensive and time-consuming, as it often requires sophisticated equipment, specialized facilities, and large research teams. Funding for basic research can be limited, making it difficult to sustain long-term projects.
  • Ethical concerns: Basic research may involve working with animal models or human subjects, raising ethical concerns around the use of animals or the safety and well-being of human participants.
  • Uncertainty around outcomes: Basic research is often open-ended, meaning that it does not have a specific end goal in mind. This uncertainty can make it difficult to justify funding for basic research, as it is difficult to predict what outcomes the research will produce.
  • Difficulty in communicating results: Basic research can produce complex and technical findings that may be difficult to communicate to the general public or policymakers. This can make it challenging to generate public support for basic research or to translate basic research findings into policy or practical applications.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer

August 27, 2024

Risk for dementia found to be similar with SGLT2 inhibitors, dulaglutide in type 2 diabetes

by Elana Gotkine

For older adults with type 2 diabetes, the risk for dementia seems similar with sodium-glucose cotransporter 2 (SGLT2) inhibitors and the glucagon-like peptide 1 receptor agonist (GLP-1 RA) dulaglutide, according to a study published online Aug. 27 in the Annals of Internal Medicine.

Bin Hong, from the School of Pharmacy at Sungkyunkwan University in Suwon, South Korea, and colleagues compared the risk for dementia between SGLT2 inhibitors and dulaglutide in a target trial emulation study using nationwide health care data for South Korea obtained between 2010 and 2022.

Participants were aged 60 years or older with type 2 diabetes and were initiating treatment with SGLT2 inhibitors (12,489 patients; 51.9 percent dapagliflozin and 48.1 percent empagliflozin) or dulaglutide (1,075 patients).

The researchers found that during a median follow-up of 4.4 years, the primary outcome event of presumed clinical onset of dementia occurred in 69 and 43 participants in the SGLT2 inhibitor and dulaglutide groups, respectively, with an estimated risk difference of −0.91 percentage points (95 percent confidence interval, −2.45 to 0.63) and estimated risk ratio of 0.81 (95 percent confidence interval, 0.56 to 1.16).
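
For readers unfamiliar with these measures, the Python sketch below shows the generic textbook formulas for a risk difference and a risk ratio with Wald-type 95 percent confidence intervals. It is illustrative only: the study's published estimates come from an adjusted target trial emulation that accounts for follow-up time and confounding, so plugging the crude event counts above into these formulas will not, and should not be expected to, reproduce the −0.91 percentage-point and 0.81 figures.

```python
# Illustrative only: generic formulas for a risk difference (RD) and risk
# ratio (RR) with Wald-type 95% confidence intervals from simple 2x2 counts.
# A crude calculation like this ignores person-time and adjustment, so it
# will not match the study's published (adjusted) estimates.
import math

def risk_measures(events_1, n_1, events_0, n_0, z=1.96):
    """Return (RD, RD 95% CI, RR, RR 95% CI) from crude 2x2 counts."""
    r1, r0 = events_1 / n_1, events_0 / n_0
    rd = r1 - r0
    se_rd = math.sqrt(r1 * (1 - r1) / n_1 + r0 * (1 - r0) / n_0)
    rr = r1 / r0
    se_log_rr = math.sqrt((1 - r1) / events_1 + (1 - r0) / events_0)
    return (rd,
            (rd - z * se_rd, rd + z * se_rd),
            rr,
            (math.exp(math.log(rr) - z * se_log_rr),
             math.exp(math.log(rr) + z * se_log_rr)))

# Crude counts reported above (no adjustment, no person-time):
rd, rd_ci, rr, rr_ci = risk_measures(events_1=69, n_1=12489, events_0=43, n_0=1075)
print(f"Crude RD = {rd:.4f} ({rd_ci[0]:.4f} to {rd_ci[1]:.4f}); "
      f"crude RR = {rr:.2f} ({rr_ci[0]:.2f} to {rr_ci[1]:.2f})")
```

The gap between a crude calculation like this and the published adjusted estimates reflects the confounding and differences in follow-up that the target trial emulation design is intended to address.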

"In conclusion, we found little difference in the risk for dementia for SGLT2 inhibitors compared with dulaglutide in our data," the authors write. "However, whether these findings generalize to newer GLP-1 RAs is uncertain."

Copyright © 2024 HealthDay. All rights reserved.
