Study Site Homepage

  • Request new password
  • Create a new account

Doing Research in the Real World

Student resources, multiple choice quiz.

Take the quiz to test your understanding of the key concepts covered in the chapter. Try testing yourself before you read the chapter to see where your strengths and weaknesses are, then test yourself again once you’ve read the chapter to see how well you’ve understood.

Tip: Click on each link to expand and view the content. Click again to collapse.

PART A: PRINCIPLES AND PLANNING FOR RESEARCH

1. Which of the following should not be a criterion for a good research project?

  • Demonstrates the abilities of the researcher
  • Is dependent on the completion of other projects
  • Demonstrates the integration of different fields of knowledge
  • Develops the skills of the researcher

b.  Is dependent on the completion of other projects

2. Which form of reasoning is the process of drawing a specific conclusion from a set of premises?

  • Objective reasoning
  • Positivistic reasoning
  • Inductive reasoning
  • Deductive reasoning

d:  Deductive reasoning

3. Research that seeks to examine the findings of a study by using the same design but a different sample is which of the following?

  • An exploratory study
  • A replication study
  • An empirical study
  • Hypothesis testing

b:  A replication study

4. A researcher designs an experiment to test how variables interact to influence job-seeking behaviours. The main purpose of the study was:

  • Description
  • Exploration
  • Explanation

d:  Explanation

5. Cyber bullying at work is a growing threat to employee job satisfaction. Researchers want to find out why people do this and how they feel about it. The primary purpose of the study is:

c:  Exploration

6. A theory: 

  • Is an accumulated body of knowledge
  • Includes inconsequential ideas
  • Is independent of research methodology
  • Should be viewed uncritically

a:  Is an accumulated body of knowledge

7. Which research method is a bottom-up approach to research?

  • Deductive method
  • Explanatory method
  • Inductive method
  • Exploratory method

c:  Inductive method

8. How much confidence should you place in a single research study?

  • You should trust research findings after different researchers have replicated the findings
  • You should completely trust a single research study
  • Neither a nor b
  • Both a and b 

a:  You should trust research findings after different researchers have replicated the findings

9. A qualitative research problem statement:

  • Specifies the research methods to be utilized
  • Specifies a research hypothesis
  • Expresses a relationship between variables
  • Conveys a sense of emerging design

d:  Conveys a sense of emerging design

10. Which of the following is a good research question?

  • To produce a report on student job searching behaviours
  • To identify the relationship between self-efficacy and student job searching behaviours
  • Students with higher levels of self-efficacy will demonstrate more active job searching behaviours
  • Do students with high levels of self-efficacy demonstrate more active job searching behaviours?

d:  Do students with high levels of self-efficacy demonstrate more active job searching behaviours?

11. A review of the literature prior to formulating research questions allows the researcher to :

  • Provide an up-to-date understanding of the subject, its significance, and structure
  • Guide the development of research questions
  • Present the kinds of research methodologies used in previous studies
  • All of the above

d:  All of the above

12. Sometimes a comprehensive review of the literature prior to data collection is not recommended by:

  • Ethnomethodology
  • Grounded theory
  • Symbolic interactionism
  • Feminist theory

b:  Grounded theory

13. The feasibility of a research study should be considered in light of: 

  • Cost and time required to conduct the study
  • Access to gatekeepers and respondents
  • Potential ethical concerns

14. Research that uses qualitative methods for one phase and quantitative methods for the next phase is known as:

  • Action research
  • Mixed-method research
  • Quantitative research
  • Pragmatic research

b:  Mixed-method research

15. Research hypotheses are:

  • Formulated prior to a review of the literature
  • Statements of predicted relationships between variables
  • B but not A
  • Both A and B

c:  B but not A

16. Which research approach is based on the epistemological viewpoint of pragmatism? 

  • Qualitative research
  • Mixed-methods research

c:  Mixed-methods research

17. Adopting ethical principles in research means: 

  • Avoiding harm to participants
  • The researcher is anonymous
  • Deception is only used when necessary
  • Selected informants give their consent

a:  Avoiding harm to participants

18. A radical perspective on ethics suggests that: 

  • Researchers can do anything they want
  • The use of checklists of ethical actions is essential
  • The powers of Institutional Review Boards should be strengthened
  • Ethics should be based on self-reflexivity

d:  Ethics should be based on self-reflexivity

19. Ethical problems can arise when researching the Internet because:

  • Everyone has access to digital media
  • Respondents may fake their identities
  • Researchers may fake their identities
  • Internet research has to be covert

b:  Respondents may fake their identities

20. The Kappa statistic: 

  • Is a measure of inter-judge validity
  • Compares the level of agreement between two judges against what might have been predicted by chance
  • Ranges from 0 to +1
  • Is acceptable above a score of 0.5

b:  Compares the level of agreement between two judges against what might have been predicted by chance

PART B: RESEARCH METHODOLOGY  

1. Which research paradigm is most concerned about generalizing its findings? 

a:  Quantitative research

2. A variable that is presumed to cause a change in another variable is called:

  • An intervening variable
  • A dependent variable
  • An independent variable
  • A numerical variable

c:  An independent variable

3. A study of teaching professionals posits that their performance-related pay increases their motivation which in turn leads to an increase in their job satisfaction. What kind of variable is ‘motivation”’ in this study? 

  • Extraneous 
  • Confounding
  • Intervening
  • Manipulated

c:  Intervening

4. Which correlation is the strongest? 

5. When interpreting a correlation coefficient expressing the relationship between two variables, it is important not to:

  • Assume causality
  • Measure the values for X and Y independently
  • Choose X and Y values that are normally distributed
  • Check the direction of the relationship

a:  Assume causality

6. Which of the following can be described as a nominal variable? 

  • Annual income
  • Annual sales
  • Geographical location of a firm

d:  Geographical location of a firm

7. A positive correlation occurs when:

  • Two variables remain constant
  • Two variables move in the same direction
  • One variable goes up and the other goes down
  • Two variables move in opposite directions

b:  Two variables move in the same direction

8. The key defining characteristic of experimental research is that:

  • The independent variable is manipulated
  • Hypotheses are proved
  • A positive correlation exists
  • Samples are large

a:  The independent variable is manipulated

9. Qualitative research is used in all the following circumstances, EXCEPT:

  • It is based on a collection of non-numerical data such as words and pictures
  • It often uses small samples
  • It uses the inductive method
  • It is typically used when a great deal is already known about the topic of interest

d:  It is typically used when a great deal is already known about the topic of interest

10. In an experiment, the group that does not receive the intervention is called:

  • The experimental group
  • The participant group
  • The control group
  • The treatment group

c:  The control group

11. Which generally cannot be guaranteed in conducting qualitative studies in the field? 

  • Keeping participants from physical and emotional harm
  • Gaining informed consent
  • Assuring anonymity rather than just confidentiality
  • Maintaining consent forms

c:  Assuring anonymity rather than just confidentiality

12. Which of the following is not ethical practice in research with humans? 

  • Maintaining participants’ anonymity
  • Informing participants that they are free to withdraw at any time
  • Requiring participants to continue until the study has been completed

d:  Requiring participants to continue until the study has been completed

13. What do we call data that are used for a new study but which were collected by an earlier researcher for a different set of research questions?

  • Secondary data
  • Field notes
  • Qualitative data
  • Primary data

a:  Secondary data

14. When each member of a population has an equal chance of being selected, this is called:

  • A snowball sample
  • A stratified sample
  • A random probability sample
  • A non-random sample

c:  A random probability sample

15. Which of the following techniques yields a simple random sample of hospitals?

  • Randomly selecting a district and then sampling all hospitals within the district
  • Numbering all the elements of a hospital sampling frame and then using a random number generator to pick hospitals from the table
  • Listing hospitals by sector and choosing a proportion from within each sector at random
  • Choosing volunteer hospitals to participate

b:  Numbering all the elements of a hospital sampling frame and then using a random number generator to pick hospitals from the table

16. Which of the following statements are true?

  • The larger the sample size, the larger the confidence interval
  • The smaller the sample size, the greater the sampling error
  • The more categories being measured, the smaller the sample size
  • A confidence level of 95 percent is always sufficient

b:  The smaller the sample size, the greater the sampling error

17. Which of the following will produce the least sampling error?

  • A large sample based on convenience sampling 
  • A small sample based on random sampling
  • A large snowball sample
  • A large sample based on random sampling

d:  A large sample based on random sampling

18. When people are readily available, volunteer, or are easily recruited to the sample, this is called:

  • Snowball sampling
  • Convenience sampling
  • Stratified sampling
  • Random sampling

b:  Convenience sampling

19. In qualitative research, sampling that involves selecting diverse cases is referred to as:

  • Typical-case sampling
  • Critical-case sampling
  • Intensity sampling
  • Maximum variation sampling

d:  Maximum variation sampling

20. A test accurately indicates an employee’s scores on a future criterion (e.g., conscientiousness).  What kind of validity is this?

a:  Predictive

PART C: DATA COLLECTION METHODS  

1. When designing a questionnaire it is important to do each of the following EXCEPT

  • Pilot the questionnaire
  • Avoid jargon
  • Avoid double questions
  • Use leading questions

d:  Use leading questions

2. One advantage of using a questionnaire is that:

  • Probe questions can be asked
  • Respondents can be put at ease
  • Interview bias can be avoided
  • Response rates are always high

c:  Interview bias can be avoided

3. Which of the following is true of observations?

  • It takes less time than interviews
  • It is often not possible to determine exactly why people behave as they do
  • Covert observation raises fewer ethical concerns than overt

b:  It is often not possible to determine exactly why people behave as they do

4. A researcher secretly becomes an active member of a group in order to observe their behaviour. This researcher is acting as:

  • An overt participant observer
  • A covert non-participant observer
  • A covert participant observer
  • None of the above

c:  A covert participant observer

5. All of the following are advantages of structured observation, EXCEPT:

  • Results can be replicated at a different time
  • The coding schedule might impose a framework on what is being observed
  • Data can be collected that participants may not realize is important
  • Data do not have to rely on the recall of participants

b:  The coding schedule might impose a framework on what is being observed

6. When conducting an interview, asking questions such as: "What else? or ‘Could you expand on that?’ are all forms of:

  • Structured responses
  • Category questions

7. Secondary data can include which of the following? 

  • Government statistics
  • Personal diaries
  • Organizational records

8. An ordinal scale is:

  • The simplest form of measurement
  • A scale with an absolute zero point
  • A rank-order scale of measurement
  • A scale with equal intervals between ranks

c:  A rank-order scale of measurement

9. Which term measures the extent to which scores from a test can be used to infer or predict performance in some activity? 

  • Face validity
  • Content reliability
  • Criterion-related validity
  • Construct validity

c:  Criterion-related validity

10. The ‘reliability’of a measure refers to the researcher asking:

  • Does it give consistent results?
  • Does it measure what it is supposed to measure?
  • Can the results be generalized?
  • Does it have face reliability?

a:  Does it give consistent results?

11. Interviewing is the favoured approach EXCEPT when:

  • There is a need for highly personalized data
  • It is important to ask supplementary questions
  • High numbers of respondents are needed
  • Respondents have difficulty with written language

c:  High numbers of respondents are needed

12. Validity in interviews is strengthened by the following EXCEPT:

  • Building rapport with interviewees
  • Multiple questions cover the same theme
  • Constructing interview schedules that contain themes drawn from the literature
  • Prompting respondents to expand on initial responses

b:  Multiple questions cover the same theme

13. Interview questions should:

  • Lead the respondent
  • Probe sensitive issues
  • Be delivered in a neutral tone
  • Test the respondents’ powers of memory

c:  Be delivered in a neutral tone

14. Active listening skills means:

  • Asking as many questions as possible
  • Avoiding silences
  • Keeping to time
  • Attentive listening

d:  Attentive listening

15. All the following are strengths of focus groups EXCEPT:

  • They allow access to a wide range of participants
  • Discussion allows for the validation of ideas and views
  • They can generate a collective perspective
  • They help maintain confidentiality

d:  They help maintain confidentiality

16. Which of the following is not always true about focus groups?

  • The ideal size is normally between 6 and 12 participants
  • Moderators should introduce themselves to the group
  • Participants should come from diverse backgrounds
  • The moderator poses preplanned questions

c:  Participants should come from diverse backgrounds

17. A disadvantage of using secondary data is that:

  • The data may have been collected with reference to research questions that are not those of the researcher
  • The researcher may bring more detachment in viewing the data than original researchers could muster
  • Data have often been collected by teams of experienced researchers
  • Secondary data sets are often available and accessible

a:  The data may have been collected with reference to research questions that are not those of the researcher

18. All of the following are sources of secondary data EXCEPT:

  • Official statistics
  • A television documentary
  • The researcher’s research diary
  • A company’s annual report

c:  The researcher’s research diary

19. Which of the following is not true about visual methods?

  • They are not reliant on respondent recall
  • The have low resource requirements
  • They do not rely on words to capture what is happening
  • They can capture what is happening in real time

b:  The have low resource requirements

20. Avoiding naïve empiricism in the interpretation of visual data means:

  • Understanding the context in which they were produced
  • Ensuring that visual images such as photographs are accurately taken
  • Only using visual images with other data gathering sources
  • Planning the capture of visual data carefully

a:  Understanding the context in which they were produced

PART D: ANALYSIS AND REPORT WRITING  

1. Which of the following is incorrect when naming a variable in SPSS?

  • Must begin with a letter and not a number
  • Must end in a full stop
  • Cannot exceed 64 characters
  • Cannot include symbols such as ?, & and %

b:  Must end in a full stop

2. Which of the following is not an SPSS Type variable?

3. A graph that uses vertical bars to represent data is called:

  • A bar chart
  • A pie chart
  • A line graph
  • A vertical graph

a:  A bar chart

4. The purpose of descriptive statistics is to:

  • Summarize the characteristics of a data set
  • Draw conclusions from the data

a:  Summarize the characteristics of a data set

5. The measure of the extent to which responses vary from the mean is called:

  • The normal distribution
  • The standard deviation
  • The variance

c:  The standard deviation

6. To compare the performance of a group at time T1 and then at T2, we would use:

  • A chi-squared test
  • One-way analysis of variance
  • Analysis of variance
  • A paired t-test

d:  A paired t-test

7. A Type 1 error occurs in a situation where:

  • The null hypothesis is accepted when it is in fact true
  • The null hypothesis is rejected when it is in fact false
  • The null hypothesis is rejected when it is in fact true
  • The null hypothesis is accepted when it is in fact false

c:  The null hypothesis is rejected when it is in fact true

8. The significance level

  • Is set after a statistical test is conducted
  • Is always set at 0.05
  • Results in a p -value
  • Measures the probability of rejecting a true null hypothesis

d:  Measures the probability of rejecting a true null hypothesis

9. To predict the value of the dependent variable for a new case based on the knowledge of one or more independent variables, we would use

  • Regression analysis
  • Correlation analysis
  • Kolmogorov-Smirnov test

a:  Regression analysis

10. In conducting secondary data analysis, researchers should ask themselves all of the following EXCEPT:

  • Who produced the document?
  • Is the material genuine?
  • How can respondents be re-interviewed?
  • Why was the document produced?

c:  How can respondents be re-interviewed?

11. Which of the following are not true of reflexivity?

  • It recognizes that the researcher is not a neutral observer
  • It has mainly been applied to the analysis of qualitative data
  • It is part of a post-positivist tradition
  • A danger of adopting a reflexive stance is the researcher can become the focus of the study

c:  It is part of a post-positivist tradition

12. Validity in qualitative research can be strengthened by all of the following EXCEPT:

  • Member checking for accuracy and interpretation
  • Transcribing interviews to improve accuracy of data
  • Exploring rival explanations
  • Analysing negative cases

b:  Transcribing interviews to improve accuracy of data

13. Qualitative data analysis programs are useful for each of the following EXCEPT: 

  • Manipulation of large amounts of data
  • Exploring of the data against new dimensions
  • Querying of data
  • Generating codes

d:  Generating codes

14. Which part of a research report contains details of how the research was planned and conducted?

  • Introduction

b:  Design 

15. Which of the following is a form of research typically conducted by managers and other professionals to address issues in their organizations and/or professional practice?

  • Basic research
  • Professional research
  • Predictive research

a:  Action research

16. Plagiarism can be avoided by:

  • Copying the work of others accurately
  • Paraphrasing the author’s text in your own words
  • Cut and pasting from the Internet
  • Quoting directly without revealing the source

b:  Paraphrasing the author’s text in your own words

17. In preparing for a presentation, you should do all of the following EXCEPT:

  • Practice the presentation
  • Ignore your nerves
  • Get to know more about your audience
  • Take an advanced look, if possible, at the facilities

b:  Ignore your nerves

18. You can create interest in your presentation by:

  • Using bullet points
  • Reading from notes
  • Maximizing the use of animation effects
  • Using metaphors

d:  Using metaphors

19. In preparing for a viva or similar oral examination, it is best if you have:

  • Avoided citing the examiner in your thesis
  • Made exaggerated claims on the basis of your data
  • Published and referenced your own article(s)
  • Tried to memorize your work

c:  Published and referenced your own article(s)

20. Grounded theory coding:

  • Makes use of a priori concepts from the literature
  • Uses open coding, selective coding, then axial coding
  • Adopts a deductive stance
  • Stops when theoretical saturation has been reached

d:  Stops when theoretical saturation has been reached

University of Northern Iowa Home

  • Chapter Four: Quantitative Methods (Part 1)

Once you have chosen a topic to investigate, you need to decide which type of method is best to study it. This is one of the most important choices you will make on your research journey. Understanding the value of each of the methods described in this textbook to answer different questions allows you to be able to plan your own studies with more confidence, critique the studies others have done, and provide advice to your colleagues and friends on what type of research they should do to answer questions they have. After briefly reviewing quantitative research assumptions, this chapter is organized in three parts or sections. These parts can also be used as a checklist when working through the steps of your study. Specifically, part 1 focuses on planning a quantitative study (collecting data), part two explains the steps involved in doing a quantitative study, and part three discusses how to make sense of your results (organizing and analyzing data).

  • Chapter One: Introduction
  • Chapter Two: Understanding the distinctions among research methods
  • Chapter Three: Ethical research, writing, and creative work
  • Chapter Four: Quantitative Methods (Part 2 - Doing Your Study)
  • Chapter Four: Quantitative Methods (Part 3 - Making Sense of Your Study)
  • Chapter Five: Qualitative Methods (Part 1)
  • Chapter Five: Qualitative Data (Part 2)
  • Chapter Six: Critical / Rhetorical Methods (Part 1)
  • Chapter Six: Critical / Rhetorical Methods (Part 2)
  • Chapter Seven: Presenting Your Results

Quantitative Worldview Assumptions: A Review

In chapter 2, you were introduced to the unique assumptions quantitative research holds about knowledge and how it is created, or what the authors referred to in chapter one as "epistemology." Understanding these assumptions can help you better determine whether you need to use quantitative methods for a particular research study in which you are interested.

Quantitative researchers believe there is an objective reality, which can be measured. "Objective" here means that the researcher is not relying on their own perceptions of an event. S/he is attempting to gather "facts" which may be separate from people's feeling or perceptions about the facts. These facts are often conceptualized as "causes" and "effects." When you ask research questions or pose hypotheses with words in them such as "cause," "effect," "difference between," and "predicts," you are operating under assumptions consistent with quantitative methods. The overall goal of quantitative research is to develop generalizations that enable the researcher to better predict, explain, and understand some phenomenon.

Because of trying to prove cause-effect relationships that can be generalized to the population at large, the research process and related procedures are very important for quantitative methods. Research should be consistently and objectively conducted, without bias or error, in order to be considered to be valid (accurate) and reliable (consistent). Perhaps this emphasis on accurate and standardized methods is because the roots of quantitative research are in the natural and physical sciences, both of which have at their base the need to prove hypotheses and theories in order to better understand the world in which we live. When a person goes to a doctor and is prescribed some medicine to treat an illness, that person is glad such research has been done to know what the effects of taking this medicine is on others' bodies, so s/he can trust the doctor's judgment and take the medicines.

As covered in chapters 1 and 2, the questions you are asking should lead you to a certain research method choice. Students sometimes want to avoid doing quantitative research because of fear of math/statistics, but if their questions call for that type of research, they should forge ahead and use it anyway. If a student really wants to understand what the causes or effects are for a particular phenomenon, they need to do quantitative research. If a student is interested in what sorts of things might predict a person's behavior, they need to do quantitative research. If they want to confirm the finding of another researcher, most likely they will need to do quantitative research. If a student wishes to generalize beyond their participant sample to a larger population, they need to be conducting quantitative research.

So, ultimately, your choice of methods really depends on what your research goal is. What do you really want to find out? Do you want to compare two or more groups, look for relationships between certain variables, predict how someone will act or react, or confirm some findings from another study? If so, you want to use quantitative methods.

A topic such as self-esteem can be studied in many ways. Listed below are some example RQs about self-esteem. Which of the following research questions should be answered with quantitative methods?

  • Is there a difference between men's and women's level of self- esteem?
  • How do college-aged women describe their ups and downs with self-esteem?
  • How has "self-esteem" been constructed in popular self-help books over time?
  • Is there a relationship between self-esteem levels and communication apprehension?

What are the advantages of approaching a topic like self-esteem using quantitative methods? What are the disadvantages?

For more information, see the following website: Analyse This!!! Learning to analyse quantitative data

Answers:  1 & 4

Quantitative Methods Part One: Planning Your Study

Planning your study is one of the most important steps in the research process when doing quantitative research. As seen in the diagram below, it involves choosing a topic, writing research questions/hypotheses, and designing your study. Each of these topics will be covered in detail in this section of the chapter.

Image removed.

Topic Choice

Decide on topic.

How do you go about choosing a topic for a research project? One of the best ways to do this is to research something about which you would like to know more. Your communication professors will probably also want you to select something that is related to communication and things you are learning about in other communication classes.

When the authors of this textbook select research topics to study, they choose things that pique their interest for a variety of reasons, sometimes personal and sometimes because they see a need for more research in a particular area. For example, April Chatham-Carpenter studies adoption return trips to China because she has two adopted daughters from China and because there is very little research on this topic for Chinese adoptees and their families; she studied home vs. public schooling because her sister home schools, and at the time she started the study very few researchers had considered the social network implications for home schoolers (cf.  http://www.uni.edu/chatham/homeschool.html ).

When you are asked in this class and other classes to select a topic to research, think about topics that you have wondered about, that affect you personally, or that know have gaps in the research. Then start writing down questions you would like to know about this topic. These questions will help you decide whether the goal of your study is to understand something better, explain causes and effects of something, gather the perspectives of others on a topic, or look at how language constructs a certain view of reality.

Review Previous Research

In quantitative research, you do not rely on your conclusions to emerge from the data you collect. Rather, you start out looking for certain things based on what the past research has found. This is consistent with what was called in chapter 2 as a deductive approach (Keyton, 2011), which also leads a quantitative researcher to develop a research question or research problem from reviewing a body of literature, with the previous research framing the study that is being done. So, reviewing previous research done on your topic is an important part of the planning of your study. As seen in chapter 3 and the Appendix, to do an adequate literature review, you need to identify portions of your topic that could have been researched in the past. To do that, you select key terms of concepts related to your topic.

Some people use concept maps to help them identify useful search terms for a literature review. For example, see the following website: Concept Mapping: How to Start Your Term Paper Research .

Narrow Topic to Researchable Area

Once you have selected your topic area and reviewed relevant literature related to your topic, you need to narrow your topic to something that can be researched practically and that will take the research on this topic further. You don't want your research topic to be so broad or large that you are unable to research it. Plus, you want to explain some phenomenon better than has been done before, adding to the literature and theory on a topic. You may want to test out what someone else has found, replicating their study, and therefore building to the body of knowledge already created.

To see how a literature review can be helpful in narrowing your topic, see the following sources.  Narrowing or Broadening Your Research Topic  and  How to Conduct a Literature Review in Social Science

Research Questions & Hypotheses

Write Your Research Questions (RQs) and/or Hypotheses (Hs)

Once you have narrowed your topic based on what you learned from doing your review of literature, you need to formalize your topic area into one or more research questions or hypotheses. If the area you are researching is a relatively new area, and no existing literature or theory can lead you to predict what you might find, then you should write a research question. Take a topic related to social media, for example, which is a relatively new area of study. You might write a research question that asks:

"Is there a difference between how 1st year and 4th year college students use Facebook to communicate with their friends?"

If, however, you are testing out something you think you might find based on the findings of a large amount of previous literature or a well-developed theory, you can write a hypothesis. Researchers often distinguish between  null  and  alternative  hypotheses. The alternative hypothesis is what you are trying to test or prove is true, while the null hypothesis assumes that the alternative hypothesis is not true. For example, if the use of Facebook had been studied a great deal, and there were theories that had been developed on the use of it, then you might develop an alternative hypothesis, such as: "First-year students spend more time on using Facebook to communicate with their friends than fourth-year students do." Your null hypothesis, on the other hand, would be: "First-year students do  not  spend any more time using Facebook to communication with their friends than fourth-year students do." Researchers, however, only state the alternative hypothesis in their studies, and actually call it "hypothesis" rather than "alternative hypothesis."

Process of Writing a Research Question/Hypothesis.

Once you have decided to write a research question (RQ) or hypothesis (H) for your topic, you should go through the following steps to create your RQ or H.

Name the concepts from your overall research topic that you are interested in studying.

RQs and Hs have variables, or concepts that you are interested in studying. Variables can take on different values. For example, in the RQ above, there are at least two variables – year in college and use of Facebook (FB) to communicate. Both of them have a variety of levels within them.

When you look at the concepts you identified, are there any concepts which seem to be related to each other? For example, in our RQ, we are interested in knowing if there is a difference between first-year students and fourth-year students in their use of FB, meaning that we believe there is some connection between our two variables.

  • Decide what type of a relationship you would like to study between the variables. Do you think one causes the other? Does a difference in one create a difference in the other? As the value of one changes, does the value of the other change?

Identify which one of these concepts is the independent (or predictor) variable, or the concept that is perceived to be the cause of change in the other variable? Which one is the dependent (criterion) variable, or the one that is affected by changes in the independent variable? In the above example RQ, year in school is the independent variable, and amount of time spent on Facebook communicating with friends is the dependent variable. The amount of time spent on Facebook depends on a person's year in school.

If you're still confused about independent and dependent variables, check out the following site: Independent & Dependent Variables .

Express the relationship between the concepts as a single sentence – in either a hypothesis or a research question.

For example, "is there a difference between international and American students on their perceptions of the basic communication course," where cultural background and perceptions of the course are your two variables. Cultural background would be the independent variable, and perceptions of the course would be your dependent variable. More examples of RQs and Hs are provided in the next section.

APPLICATION: Try the above steps with your topic now. Check with your instructor to see if s/he would like you to send your topic and RQ/H to him/her via e-mail.

Types of Research Questions/Hypotheses

Once you have written your RQ/H, you need to determine what type of research question or hypothesis it is. This will help you later decide what types of statistics you will need to run to answer your question or test your hypothesis. There are three possible types of questions you might ask, and two possible types of hypotheses. The first type of question cannot be written as a hypothesis, but the second and third types can.

Descriptive Question.

The first type of question is a descriptive question. If you have only one variable or concept you are studying, OR if you are not interested in how the variables you are studying are connected or related to each other, then your question is most likely a descriptive question.

This type of question is the closest to looking like a qualitative question, and often starts with a "what" or "how" or "why" or "to what extent" type of wording. What makes it different from a qualitative research question is that the question will be answered using numbers rather than qualitative analysis. Some examples of a descriptive question, using the topic of social media, include the following.

"To what extent are college-aged students using Facebook to communicate with their friends?"
"Why do college-aged students use Facebook to communicate with their friends?"

Notice that neither of these questions has a clear independent or dependent variable, as there is no clear cause or effect being assumed by the question. The question is merely descriptive in nature. It can be answered by summarizing the numbers obtained for each category, such as by providing percentages, averages, or just the raw totals for each type of strategy or organization. This is true also of the following research questions found in a study of online public relations strategies:

"What online public relations strategies are organizations implementing to combat phishing" (Baker, Baker, & Tedesco, 2007, p. 330), and
"Which organizations are doing most and least, according to recommendations from anti- phishing advocacy recommendations, to combat phishing" (Baker, Baker, & Tedesco, 2007, p. 330)

The researchers in this study reported statistics in their results or findings section, making it clearly a quantitative study, but without an independent or dependent variable; therefore, these research questions illustrate the first type of RQ, the descriptive question.

Difference Question/Hypothesis.

The second type of question is a question/hypothesis of difference, and will often have the word "difference" as part of the question. The very first research question in this section, asking if there is a difference between 1st year and 4th year college students' use of Facebook, is an example of this type of question. In this type of question, the independent variable is some type of grouping or categories, such as age. Another example of a question of difference is one April asked in her research on home schooling: "Is there a difference between home vs. public schoolers on the size of their social networks?" In this example, the independent variable is home vs. public schooling (a group being compared), and the dependent variable is size of social networks. Hypotheses can also be difference hypotheses, as the following example on the same topic illustrates: "Public schoolers have a larger social network than home schoolers do."

Relationship/Association Question/Hypothesis.

The third type of question is a relationship/association question or hypothesis, and will often have the word "relate" or "relationship" in it, as the following example does: "There is a relationship between number of television ads for a political candidate and how successful that political candidate is in getting elected." Here the independent (or predictor) variable is number of TV ads, and the dependent (or criterion) variable is the success at getting elected. In this type of question, there is no grouping being compared, but rather the independent variable is continuous (ranges from zero to a certain number) in nature. This type of question can be worded as either a hypothesis or as a research question, as stated earlier.

Test out your knowledge of the above information, by answering the following questions about the RQ/H listed below. (Remember, for a descriptive question there are no clear independent & dependent variables.)

  • What is the independent variable (IV)?
  • What is the dependent variable (DV)?
  • What type of research question/hypothesis is it? (descriptive, difference, relationship/association)
  • "Is there a difference on relational satisfaction between those who met their current partner through online dating and those who met their current partner face-to-face?"
  • "How do Fortune 500 firms use focus groups to market new products?"
  • "There is a relationship between age and amount of time spent online using social media."

Answers: RQ1  is a difference question, with type of dating being the IV and relational satisfaction being the DV. RQ2  is a descriptive question with no IV or DV. RQ3  is a relationship hypothesis with age as the IV and amount of time spent online as the DV.

Design Your Study

The third step in planning your research project, after you have decided on your topic/goal and written your research questions/hypotheses, is to design your study which means to decide how to proceed in gathering data to answer your research question or to test your hypothesis. This step includes six things to do. [NOTE: The terms used in this section will be defined as they are used.]

  • Decide type of study design: Experimental, quasi-experimental, non-experimental.
  • Decide kind of data to collect: Survey/interview, observation, already existing data.
  • Operationalize variables into measurable concepts.
  • Determine type of sample: Probability or non-probability.
  • Decide how you will collect your data: face-to-face, via e-mail, an online survey, library research, etc.
  • Pilot test your methods.

Types of Study Designs

With quantitative research being rooted in the scientific method, traditional research is structured in an experimental fashion. This is especially true in the natural sciences, where they try to prove causes and effects on topics such as successful treatments for cancer. For example, the University of Iowa Hospitals and Clinics regularly conduct clinical trials to test for the effectiveness of certain treatments for medical conditions ( University of Iowa Hospitals & Clinics: Clinical Trials ). They use human participants to conduct such research, regularly recruiting volunteers. However, in communication, true experiments with treatments the researcher controls are less necessary and thus less common. It is important for the researcher to understand which type of study s/he wishes to do, in order to accurately communicate his/her methods to the public when describing the study.

There are three possible types of studies you may choose to do, when embarking on quantitative research: (a) True experiments, (b) quasi-experiments, and (c) non-experiments.

For more information to read on these types of designs, take a look at the following website and related links in it: Types of Designs .

The following flowchart should help you distinguish between the three types of study designs described below.

Image removed.

True Experiments.

The first two types of study designs use difference questions/hypotheses, as the independent variable for true and quasi-experiments is  nominal  or categorical (based on categories or groupings), as you have groups that are being compared. As seen in the flowchart above, what distinguishes a true experiment from the other two designs is a concept called "random assignment." Random assignment means that the researcher controls to which group the participants are assigned. April's study of home vs. public schooling was NOT a true experiment, because she could not control which participants were home schooled and which ones were public schooled, and instead relied on already existing groups.

An example of a true experiment reported in a communication journal is a study investigating the effects of using interest-based contemporary examples in a lecture on the history of public relations, in which the researchers had the following two hypotheses: "Lectures utilizing interest- based examples should result in more interested participants" and "Lectures utilizing interest- based examples should result in participants with higher scores on subsequent tests of cognitive recall" (Weber, Corrigan, Fornash, & Neupauer, 2003, p. 118). In this study, the 122 college student participants were randomly assigned by the researchers to one of two lecture video viewing groups: a video lecture with traditional examples and a video with contemporary examples. (To see the results of the study, look it up using your school's library databases).

A second example of a true experiment in communication is a study of the effects of viewing either a dramatic narrative television show vs. a nonnarrative television show about the consequences of an unexpected teen pregnancy. The researchers randomly assigned their 367 undergraduate participants to view one of the two types of shows.

Moyer-Gusé, E., & Nabi, R. L. (2010). Explaining the effects of narrative in an entertainment television program: Overcoming resistance to persuasion.  Human Communication Research, 36 , 26-52.

A third example of a true experiment done in the field of communication can be found in the following study.

Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists' and journalists' credibility.  Human Communication Research, 34,  347-369.

In this study, Jakob Jensen had three independent variables. He randomly assigned his 601 participants to 1 of 20 possible conditions, between his three independent variables, which were (a) a hedged vs. not hedged message, (b) the source of the hedging message (research attributed to primary vs. unaffiliated scientists), and (c) specific news story employed (of which he had five randomly selected news stories about cancer research to choose from). Although this study was pretty complex, it does illustrate the true experiment in our field since the participants were randomly assigned to read a particular news story, with certain characteristics.

Quasi-Experiments.

If the researcher is not able to randomly assign participants to one of the treatment groups (or independent variable), but the participants already belong to one of them (e.g., age; home vs. public schooling), then the design is called a quasi-experiment. Here you still have an independent variable with groups, but the participants already belong to a group before the study starts, and the researcher has no control over which group they belong to.

An example of a hypothesis found in a communication study is the following: "Individuals high in trait aggression will enjoy violent content more than nonviolent content, whereas those low in trait aggression will enjoy violent content less than nonviolent content" (Weaver & Wilson, 2009, p. 448). In this study, the researchers could not assign the participants to a high or low trait aggression group since this is a personality characteristic, so this is a quasi-experiment. It does not have any random assignment of participants to the independent variable groups. Read their study, if you would like to, at the following location.

Weaver, A. J., & Wilson, B. J. (2009). The role of graphic and sanitized violence in the enjoyment of television dramas.  Human Communication Research, 35  (3), 442-463.

Benoit and Hansen (2004) did not choose to randomly assign participants to groups either, in their study of a national presidential election survey, in which they were looking at differences between debate and non-debate viewers, in terms of several dependent variables, such as which candidate viewers supported. If you are interested in discovering the results of this study, take a look at the following article.

Benoit, W. L., & Hansen, G. J. (2004). Presidential debate watching, issue knowledge, character evaluation, and vote choice.  Human Communication Research, 30  (1), 121-144.

Non-Experiments.

The third type of design is the non-experiment. Non-experiments are sometimes called survey designs, because their primary way of collecting data is through surveys. This is not enough to distinguish them from true experiments and quasi-experiments, however, as both of those types of designs may use surveys as well.

What makes a study a non-experiment is that the independent variable is not a grouping or categorical variable. Researchers observe or survey participants in order to describe them as they naturally exist without any experimental intervention. Researchers do not give treatments or observe the effects of a potential natural grouping variable such as age. Descriptive and relationship/association questions are most often used in non-experiments.

Some examples of this type of commonly used design for communication researchers include the following studies.

  • Serota, Levine, and Boster (2010) used a national survey of 1,000 adults to determine the prevalence of lying in America (see  Human Communication Research, 36 , pp. 2-25).
  • Nabi (2009) surveyed 170 young adults on their perceptions of reality television on cosmetic surgery effects, looking at several things: for example, does viewing cosmetic surgery makeover programs relate to body satisfaction (p. 6), finding no significant relationship between those two variables (see  Human Communication Research, 35 , pp. 1-27).
  • Derlega, Winstead, Mathews, and Braitman (2008) collected stories from 238 college students on reasons why they would disclose or not disclose personal information within close relationships (see  Communication Research Reports, 25 , pp. 115-130). They coded the participants' answers into categories so they could count how often specific reasons were mentioned, using a method called  content analysis , to answer the following research questions:

RQ1: What are research participants' attributions for the disclosure and nondisclosure of highly personal information?

RQ2: Do attributions reflect concerns about rewards and costs of disclosure or the tension between openness with another and privacy?

RQ3: How often are particular attributions for disclosure/nondisclosure used in various types of relationships? (p. 117)

All of these non-experimental studies have in common no researcher manipulation of an independent variable or even having an independent variable that has natural groups that are being compared.

Identify which design discussed above should be used for each of the following research questions.

  • Is there a difference between generations on how much they use MySpace?
  • Is there a relationship between age when a person first started using Facebook and the amount of time they currently spend on Facebook daily?
  • Is there a difference between potential customers' perceptions of an organization who are shown an organization's Facebook page and those who are not shown an organization's Facebook page?

[HINT: Try to identify the independent and dependent variable in each question above first, before determining what type of design you would use. Also, try to determine what type of question it is – descriptive, difference, or relationship/association.]

Answers: 1. Quasi-experiment 2. Non-experiment 3. True Experiment

Data Collection Methods

Once you decide the type of quantitative research design you will be using, you will need to determine which of the following types of data you will collect: (a) survey data, (b) observational data, and/or (c) already existing data, as in library research.

Using the survey data collection method means you will talk to people or survey them about their behaviors, attitudes, perceptions, and demographic characteristics (e.g., biological sex, socio-economic status, race). This type of data usually consists of a series of questions related to the concepts you want to study (i.e., your independent and dependent variables). Both of April's studies on home schooling and on taking adopted children on a return trip back to China used survey data.

On a survey, you can have both closed-ended and open-ended questions. Closed-ended questions, can be written in a variety of forms. Some of the most common response options include the following.

Likert responses – for example: for the following statement, ______ do you strongly agree agree neutral disagree strongly disagree

Semantic differential – for example: does the following ______ make you Happy ..................................... Sad

Yes-no answers for example: I use social media daily. Yes / No.

One site to check out for possible response options is  http://www.360degreefeedback.net/media/ResponseScales.pdf .

Researchers often follow up some of their closed-ended questions with an "other" category, in which they ask their participants to "please specify," their response if none of the ones provided are applicable. They may also ask open-ended questions on "why" a participant chose a particular answer or ask participants for more information about a particular topic. If the researcher wants to use the open-ended question responses as part of his/her quantitative study, the answers are usually coded into categories and counted, in terms of the frequency of a certain answer, using a method called  content analysis , which will be discussed when we talk about already-existing artifacts as a source of data.

Surveys can be done face-to-face, by telephone, mail, or online. Each of these methods has its own advantages and disadvantages, primarily in the form of the cost in time and money to do the survey. For example, if you want to survey many people, then online survey tools such as surveygizmo.com and surveymonkey.com are very efficient, but not everyone has access to taking a survey on the computer, so you may not get an adequate sample of the population by doing so. Plus you have to decide how you will recruit people to take your online survey, which can be challenging. There are trade-offs with every method.

For more information on things to consider when selecting your survey method, check out the following website:

Selecting the Survey Method .

There are also many good sources for developing a good survey, such as the following websites. Constructing the Survey Survey Methods Designing Surveys

Observation.

A second type of data collection method is  observation . In this data collection method, you make observations of the phenomenon you are studying and then code your observations, so that you can count what you are studying. This type of data collection method is often called interaction analysis, if you collect data by observing people's behavior. For example, if you want to study the phenomenon of mall-walking, you could go to a mall and count characteristics of mall-walkers. A researcher in the area of health communication could study the occurrence of humor in an operating room, for example, by coding and counting the use of humor in such a setting.

One extended research study using observational data collection methods, which is cited often in interpersonal communication classes, is John Gottman's research, which started out in what is now called "The Love Lab." In this lab, researchers observe interactions between couples, including physiological symptoms, using coders who look for certain items found to predict relationship problems and success.

Take a look at the YouTube video about "The Love Lab" at the following site to learn more about the potential of using observation in collecting data for a research study:  The "Love" Lab .

Already-Existing Artifacts.

The third method of quantitative data collection is the use of  already-existing artifacts . With this method, you choose certain artifacts (e.g., newspaper or magazine articles; television programs; webpages) and code their content, resulting in a count of whatever you are studying. With this data collection method, researchers most often use what is called quantitative  content analysis . Basically, the researcher counts frequencies of something that occurs in an artifact of study, such as the frequency of times something is mentioned on a webpage. Content analysis can also be used in qualitative research, where a researcher identifies and creates text-based themes but does not do a count of the occurrences of these themes. Content analysis can also be used to take open-ended questions from a survey method, and identify countable themes within the questions.

Content analysis is a very common method used in media studies, given researchers are interested in studying already-existing media artifacts. There are many good sources to illustrate how to do content analysis such as are seen in the box below.

See the following sources for more information on content analysis. Writing Guide: Content Analysis A Flowchart for the Typical Process of Content Analysis Research What is Content Analysis?

With content analysis and any method that you use to code something into categories, one key concept you need to remember is  inter-coder or inter-rater reliability , in which there are multiple coders (at least two) trained to code the observations into categories. This check on coding is important because you need to check to make sure that the way you are coding your observations on the open-ended answers is the same way that others would code a particular item. To establish this kind of inter-coder or inter-rater reliability, researchers prepare codebooks (to train their coders on how to code the materials) and coding forms for their coders to use.

To see some examples of actual codebooks used in research, see the following website:  Human Coding--Sample Materials .

There are also online inter-coder reliability calculators some researchers use, such as the following:  ReCal: reliability calculation for the masses .

Regardless of which method of data collection you choose, you need to decide even more specifically how you will measure the variables in your study, which leads us to the next planning step in the design of a study.

Operationalization of Variables into Measurable Concepts

When you look at your research question/s and/or hypotheses, you should know already what your independent and dependent variables are. Both of these need to be measured in some way. We call that way of measuring  operationalizing  a variable. One way to think of it is writing a step by step recipe for how you plan to obtain data on this topic. How you choose to operationalize your variable (or write the recipe) is one all-important decision you have to make, which will make or break your study. In quantitative research, you have to measure your variables in a valid (accurate) and reliable (consistent) manner, which we discuss in this section. You also need to determine the level of measurement you will use for your variables, which will help you later decide what statistical tests you need to run to answer your research question/s or test your hypotheses. We will start with the last topic first.

Level of Measurement

Level of measurement has to do with whether you measure your variables using categories or groupings OR whether you measure your variables using a continuous level of measurement (range of numbers). The level of measurement that is considered to be categorical in nature is called nominal, while the levels of measurement considered to be continuous in nature are ordinal, interval, and ratio. The only ones you really need to know are nominal, ordinal, and interval/ratio.

Image removed.

Nominal  variables are categories that do not have meaningful numbers attached to them but are broader categories, such as male and female, home schooled and public schooled, Caucasian and African-American.  Ordinal  variables do have numbers attached to them, in that the numbers are in a certain order, but there are not equal intervals between the numbers (e.g., such as when you rank a group of 5 items from most to least preferred, where 3 might be highly preferred, and 2 hated).  Interval/ratio  variables have equal intervals between the numbers (e.g., weight, age).

For more information about these levels of measurement, check out one of the following websites. Levels of Measurement Measurement Scales in Social Science Research What is the difference between ordinal, interval and ratio variables? Why should I care?

Validity and Reliability

When developing a scale/measure or survey, you need to be concerned about validity and reliability. Readers of quantitative research expect to see researchers justify their research measures using these two terms in the methods section of an article or paper.

Validity.   Validity  is the extent to which your scale/measure or survey adequately reflects the full meaning of the concept you are measuring. Does it measure what you say it measures? For example, if researchers wanted to develop a scale to measure "servant leadership," the researchers would have to determine what dimensions of servant leadership they wanted to measure, and then create items which would be valid or accurate measures of these dimensions. If they included items related to a different type of leadership, those items would not be a valid measure of servant leadership. When doing so, the researchers are trying to prove their measure has internal validity. Researchers may also be interested in external validity, but that has to do with how generalizable their study is to a larger population (a topic related to sampling, which we will consider in the next section), and has less to do with the validity of the instrument itself.

There are several types of validity you may read about, including face validity, content validity, criterion-related validity, and construct validity. To learn more about these types of validity, read the information at the following link: Validity .

To improve the validity of an instrument, researchers need to fully understand the concept they are trying to measure. This means they know the academic literature surrounding that concept well and write several survey questions on each dimension measured, to make sure the full idea of the concept is being measured. For example, Page and Wong (n.d.) identified four dimensions of servant leadership: character, people-orientation, task-orientation, and process-orientation ( A Conceptual Framework for Measuring Servant-Leadership ). All of these dimensions (and any others identified by other researchers) would need multiple survey items developed if a researcher wanted to create a new scale on servant leadership.

Before you create a new survey, it can be useful to see if one already exists with established validity and reliability. Such measures can be found by seeing what other respected studies have used to measure a concept and then doing a library search to find the scale/measure itself (sometimes found in the reference area of a library in books like those listed below).

Reliability .  Reliability  is the second criterion you will need to address if you choose to develop your own scale or measure. Reliability is concerned with whether a measurement is consistent and reproducible. If you have ever wondered why, when taking a survey, that a question is asked more than once or very similar questions are asked multiple times, it is because the researchers one concerned with proving their study has reliability. Are you, for example, answering all of the similar questions similarly? If so, the measure/scale may have good reliability or consistency over time.

Researchers can use a variety of ways to show their measure/scale is reliable. See the following websites for explanations of some of these ways, which include methods such as the test-retest method, the split-half method, and inter-coder/rater reliability. Types of Reliability Reliability

To understand the relationship between validity and reliability, a nice visual provided below is explained at the following website (Trochim, 2006, para. 2). Reliability & Validity

Self-Quiz/Discussion:

Take a look at one of the surveys found at the following poll reporting sites on a topic which interests you. Critique one of these surveys, using what you have learned about creating surveys so far.

http://www.pewinternet.org/ http://pewresearch.org/ http://www.gallup.com/Home.aspx http://www.kff.org/

One of the things you might have critiqued in the previous self-quiz/discussion may have had less to do with the actual survey itself, but rather with how the researchers got their participants or sample. How participants are recruited is just as important to doing a good study as how valid and reliable a survey is.

Imagine that in the article you chose for the last "self-quiz/discussion" you read the following quote from the Pew Research Center's Internet and American Life Project: "One in three teens sends more than 100 text messages a day, or 3000 texts a month" (Lenhart, 2010, para.5). How would you know whether you could trust this finding to be true? Would you compare it to what you know about texting from your own and your friends' experiences? Would you want to know what types of questions people were asked to determine this statistic, or whether the survey the statistic is based on is valid and reliable? Would you want to know what type of people were surveyed for the study? As a critical consumer of research, you should ask all of these types of questions, rather than just accepting such a statement as undisputable fact. For example, if only people shopping at an Apple Store were surveyed, the results might be skewed high.

In particular, related to the topic of this section, you should ask about the sampling method the researchers did. Often, the researchers will provide information related to the sample, stating how many participants were surveyed (in this case 800 teens, aged 12-17, who were a nationally representative sample of the population) and how much the "margin of error" is (in this case +/- 3.8%). Why do they state such things? It is because they know the importance of a sample in making the case for their findings being legitimate and credible.  Margin of error  is how much we are confident that our findings represent the population at large. The larger the margin of error, the less likely it is that the poll or survey is accurate. Margin of error assumes a 95% confidence level that what we found from our study represents the population at large.

For more information on margin of error, see one of the following websites. Answers.com Margin of Error Stats.org Margin of Error Americanresearchgroup.com Margin of Error [this last site is a margin of error calculator, which shows that margin of error is directly tied to the size of your sample, in relationship to the size of the population, two concepts we will talk about in the next few paragraphs]

In particular, this section focused on sampling will talk about the following topics: (a) the difference between a population vs. a sample; (b) concepts of error and bias, or "it's all about significance"; (c) probability vs. non-probability sampling; and (d) sample size issues.

Population vs. Sample

When doing quantitative studies, such as the study of cell phone usage among teens, you are never able to survey the entire population of teenagers, so you survey a portion of the population. If you study every member of a population, then you are conducting a census such as the United States Government does every 10 years. When, however, this is not possible (because you do not have the money the U.S. government has!), you attempt to get as good a sample as possible.

Characteristics of a population are summarized in numerical form, and technically these numbers are called  parameters . However, numbers which summarize the characteristics of a sample are called  statistics .

Error and Bias

If a sample is not done well, then you may not have confidence in how the study's results can be generalized to the population from which the sample was taken. Your confidence level is often stated as the  margin of error  of the survey. As noted earlier, a study's margin of error refers to the degree to which a sample differs from the total population you are studying. In the Pew survey, they had a margin of error of +/- 3.8%. So, for example, when the Pew survey said 33% of teens send more than 100 texts a day, the margin of error means they were 95% sure that 29.2% - 36.8% of teens send this many texts a day.

Margin of error is tied to  sampling error , which is how much difference there is between your sample's results and what would have been obtained if you had surveyed the whole population. Sample error is linked to a very important concept for quantitative researchers, which is the notion of  significance . Here, significance does not refer to whether some finding is morally or practically significant, it refers to whether a finding is statistically significant, meaning the findings are not due to chance but actually represent something that is found in the population.  Statistical significance  is about how much you, as the researcher, are willing to risk saying you found something important and be wrong.

For the difference between statistical significance and practical significance, see the following YouTube video:  Statistical and Practical Significance .

Scientists set certain arbitrary standards based on the probability they could be wrong in reporting their findings. These are called  significance levels  and are commonly reported in the literature as  p <.05  or  p <.01  or some other probability (or  p ) level.

If an article says a statistical test reported that  p < .05 , it simply means that they are most likely correct in what they are saying, but there is a 5% chance they could be wrong and not find the same results in the population. If p < .01, then there would be only a 1% chance they were wrong and would not find the same results in the population. The lower the probability level, the more certain the results.

When researchers are wrong, or make that kind of decision error, it often implies that either (a) their sample was biased and was not representative of the true population in some way, or (b) that something they did in collecting the data biased the results. There are actually two kinds of sampling error talked about in quantitative research: Type I and Type II error.  Type 1 error  is what happens when you think you found something statistically significant and claim there is a significant difference or relationship, when there really is not in the actual population. So there is something about your sample that made you find something that is not in the actual population. (Type I error is the same as the probability level, or .05, if using the traditional p-level accepted by most researchers.)  Type II error  happens when you don't find a statistically significant difference or relationship, yet there actually is one in the population at large, so once again, your sample is not representative of the population.

For more information on these two types of error, check out the following websites. Hypothesis Testing: Type I Error, Type II Error Type I and Type II Errors - Making Mistakes in the Justice System

Researchers want to select a sample that is representative of the population in order to reduce the likelihood of having a sample that is biased. There are two types of bias particularly troublesome for researchers, in terms of sampling error. The first type is  selection bias , in which each person in the population does not have an equal chance to be chosen for the sample, which happens frequently in communication studies, because we often rely on convenience samples (whoever we can get to complete our surveys). The second type of bias is  response bias , in which those who volunteer for a study have different characteristics than those who did not volunteer for the study, another common challenge for communication researchers. Volunteers for a study may very well be different from persons who choose not to volunteer for a study, so that you have a biased sample by relying just on volunteers, which is not representative of the population from which you are trying to sample.

Probability vs. Non-Probability Sampling

One of the best ways to lower your sampling error and reduce the possibility of bias is to do probability or random sampling. This means that every person in the population has an equal chance of being selected to be in your sample. Another way of looking at this is to attempt to get a  representative  sample, so that the characteristics of your sample closely approximate those of the population. A sample needs to contain essentially the same variations that exist in the population, if possible, especially on the variables or elements that are most important to you (e.g., age, biological sex, race, level of education, socio-economic class).

There are many different ways to draw a probability/random sample from the population. Some of the most common are a  simple random sample , where you use a random numbers table or random number generator to select your sample from the population.

There are several examples of random number generators available online. See the following example of an online random number generator:  http://www.randomizer.org/ .

A  systematic random sample  takes every n-th number from the population, depending on how many people you would like to have in your sample. A  stratified random sample  does random sampling within groups, and a  multi-stage  or  cluster sample  is used when there are multiple groups within a large area and a large population, and the researcher does random sampling in stages.

If you are interested in understanding more about these types of probability/random samples, take a look at the following website: Probability Sampling .

However, many times communication researchers use whoever they can find to participate in their study, such as college students in their classes since these people are easily accessible. Many of the studies in interpersonal communication and relationship development, for example, used this type of sample. This is called a convenience sample. In doing so, they are using a non- probability or non-random sample. In these types of samples, each member of the population does not have an equal opportunity to be selected. For example, if you decide to ask your facebook friends to participate in an online survey you created about how college students in the U.S. use cell phones to text, you are using a non-random type of sample. You are unable to randomly sample the whole population in the U.S. of college students who text, so you attempt to find participants more conveniently. Some common non-random or non-probability samples are:

  • accidental/convenience samples, such as the facebook example illustrates
  • quota samples, in which you do convenience samples within subgroups of the population, such as biological sex, looking for a certain number of participants in each group being compared
  • snowball or network sampling, where you ask current participants to send your survey onto their friends.

For more information on non-probability sampling, see the following website: Nonprobability Sampling .

Researchers, such as communication scholars, often use these types of samples because of the nature of their research. Most research designs used in communication are not true experiments, such as would be required in the medical field where they are trying to prove some cause-effect relationship to cure or alleviate symptoms of a disease. Most communication scholars recognize that human behavior in communication situations is much less predictable, so they do not adhere to the strictest possible worldview related to quantitative methods and are less concerned with having to use probability sampling.

They do recognize, however, that with either probability or non-probability sampling, there is still the possibility of bias and error, although much less with probability sampling. That is why all quantitative researchers, regardless of field, will report statistical significance levels if they are interested in generalizing from their sample to the population at large, to let the readers of their work know how confident they are in their results.

Size of Sample

The larger the sample, the more likely the sample is going to be representative of the population. If there is a lot of variability in the population (e.g., lots of different ethnic groups in the population), a researcher will need a larger sample. If you are interested in detecting small possible differences (e.g., in a close political race), you need a larger sample. However, the bigger your population, the less you have to increase the size of your sample in order to have an adequate sample, as is illustrated by an example sample size calculator such as can be found at  http://www.raosoft.com/samplesize.html .

Using the example sample size calculator, see how you might determine how large of a sample you might need in order to study how college students in the U.S. use texting on their cell phones. You would have to first determine approximately how many college students are in the U.S. According to ANEKI, there are a little over 14,000,000 college students in the U.S. ( Countries with the Most University Students ). When inputting that figure into the sample size calculator below (using no commas for the population size), you would need a sample size of approximately 385 students. If the population size was 20,000, you would need a sample of 377 students. If the population was only 2,000, you would need a sample of 323. For a population of 500, you would need a sample of 218.

It is not enough, however, to just have an adequate or large sample. If there is bias in the sampling, you can have a very bad large sample, one that also does not represent the population at large. So, having an unbiased sample is even more important than having a large sample.

So, what do you do, if you cannot reasonably conduct a probability or random sample? You run statistics which report significance levels, and you report the limitations of your sample in the discussion section of your paper/article.

Pilot Testing Methods

Now that we have talked about the different elements of your study design, you should try out your methods by doing a pilot test of some kind. This means that you try out your procedures with someone to try to catch any mistakes in your design before you start collecting data from actual participants in your study. This will save you time and money in the long run, along with unneeded angst over mistakes you made in your design during data collection. There are several ways you might do this.

You might ask an expert who knows about this topic (such as a faculty member) to try out your experiment or survey and provide feedback on what they think of your design. You might ask some participants who are like your potential sample to take your survey or be a part of your pilot test; then you could ask them which parts were confusing or needed revising. You might have potential participants explain to you what they think your questions mean, to see if they are interpreting them like you intended, or if you need to make some questions clearer.

The main thing is that you do not just assume your methods will work or are the best type of methods to use until you try them out with someone. As you write up your study, in your methods section of your paper, you can then talk about what you did to change your study based on the pilot study you did.

Institutional Review Board (IRB) Approval

The last step of your planning takes place when you take the necessary steps to get your study approved by your institution's review board. As you read in chapter 3, this step is important if you are planning on using the data or results from your study beyond just the requirements for your class project. See chapter 3 for more information on the procedures involved in this step.

Conclusion: Study Design Planning

Once you have decided what topic you want to study, you plan your study. Part 1 of this chapter has covered the following steps you need to follow in this planning process:

  • decide what type of study you will do (i.e., experimental, quasi-experimental, non- experimental);
  • decide on what data collection method you will use (i.e., survey, observation, or already existing data);
  • operationalize your variables into measureable concepts;
  • determine what type of sample you will use (probability or non-probability);
  • pilot test your methods; and
  • get IRB approval.

At that point, you are ready to commence collecting your data, which is the topic of the next section in this chapter.

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • What Is Quantitative Research? | Definition, Uses & Methods

What Is Quantitative Research? | Definition, Uses & Methods

Published on June 12, 2020 by Pritha Bhandari . Revised on June 22, 2023.

Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations.

Quantitative research is the opposite of qualitative research , which involves collecting and analyzing non-numerical data (e.g., text, video, or audio).

Quantitative research is widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, etc.

  • What is the demographic makeup of Singapore in 2020?
  • How has the average temperature changed globally over the last century?
  • Does environmental pollution affect the prevalence of honey bees?
  • Does working from home increase productivity for people with long commutes?

Table of contents

Quantitative research methods, quantitative data analysis, advantages of quantitative research, disadvantages of quantitative research, other interesting articles, frequently asked questions about quantitative research.

You can use quantitative research methods for descriptive, correlational or experimental research.

  • In descriptive research , you simply seek an overall summary of your study variables.
  • In correlational research , you investigate relationships between your study variables.
  • In experimental research , you systematically examine whether there is a cause-and-effect relationship between variables.

Correlational and experimental research can both be used to formally test hypotheses , or predictions, using statistics. The results may be generalized to broader populations based on the sampling method used.

To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).

Quantitative research methods
Research method How to use Example
Control or manipulate an to measure its effect on a dependent variable. To test whether an intervention can reduce procrastination in college students, you give equal-sized groups either a procrastination intervention or a comparable task. You compare self-ratings of procrastination behaviors between the groups after the intervention.
Ask questions of a group of people in-person, over-the-phone or online. You distribute with rating scales to first-year international college students to investigate their experiences of culture shock.
(Systematic) observation Identify a behavior or occurrence of interest and monitor it in its natural setting. To study college classroom participation, you sit in on classes to observe them, counting and recording the prevalence of active and passive behaviors by students from different backgrounds.
Secondary research Collect data that has been gathered for other purposes e.g., national surveys or historical records. To assess whether attitudes towards climate change have changed since the 1980s, you collect relevant questionnaire data from widely available .

Note that quantitative research is at risk for certain research biases , including information bias , omitted variable bias , sampling bias , or selection bias . Be sure that you’re aware of potential biases as you collect and analyze your data to prevent them from impacting your work too much.

Here's why students love Scribbr's proofreading services

Discover proofreading & editing

Once data is collected, you may need to process it before it can be analyzed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions .

Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualize your data and check for any trends or outliers.

Using inferential statistics , you can make predictions or generalizations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter .

First, you use descriptive statistics to get a summary of the data. You find the mean (average) and the mode (most frequent rating) of procrastination of the two groups, and plot the data to see if there are any outliers.

You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.

Quantitative research is often used to standardize data collection and generalize findings . Strengths of this approach include:

  • Replication

Repeating the study is possible because of standardized data collection protocols and tangible definitions of abstract concepts.

  • Direct comparisons of results

The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.

  • Large samples

Data from large samples can be processed and analyzed using reliable and consistent procedures through quantitative data analysis.

  • Hypothesis testing

Using formalized and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.

Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:

  • Superficiality

Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.

  • Narrow focus

Predetermined variables and measurement procedures can mean that you ignore other relevant observations.

  • Structural bias

Despite standardized procedures, structural biases can still affect quantitative research. Missing data , imprecise measurements or inappropriate sampling methods are biases that can lead to the wrong conclusions.

  • Lack of context

Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Bhandari, P. (2023, June 22). What Is Quantitative Research? | Definition, Uses & Methods. Scribbr. Retrieved June 18, 2024, from https://www.scribbr.com/methodology/quantitative-research/

Is this article helpful?

Pritha Bhandari

Pritha Bhandari

Other students also liked, descriptive statistics | definitions, types, examples, inferential statistics | an easy introduction & examples, get unlimited documents corrected.

✔ Free APA citation check included ✔ Unlimited document corrections ✔ Specialized in correcting academic texts

Educational resources and simple solutions for your research journey

What is quantitative research? Definition, methods, types, and examples

What is Quantitative Research? Definition, Methods, Types, and Examples

quantitative empirical research methods include the following except

If you’re wondering what is quantitative research and whether this methodology works for your research study, you’re not alone. If you want a simple quantitative research definition , then it’s enough to say that this is a method undertaken by researchers based on their study requirements. However, to select the most appropriate research for their study type, researchers should know all the methods available. 

Selecting the right research method depends on a few important criteria, such as the research question, study type, time, costs, data availability, and availability of respondents. There are two main types of research methods— quantitative research  and qualitative research. The purpose of quantitative research is to validate or test a theory or hypothesis and that of qualitative research is to understand a subject or event or identify reasons for observed patterns.   

Quantitative research methods  are used to observe events that affect a particular group of individuals, which is the sample population. In this type of research, diverse numerical data are collected through various methods and then statistically analyzed to aggregate the data, compare them, or show relationships among the data. Quantitative research methods broadly include questionnaires, structured observations, and experiments.  

Here are two quantitative research examples:  

  • Satisfaction surveys sent out by a company regarding their revamped customer service initiatives. Customers are asked to rate their experience on a rating scale of 1 (poor) to 5 (excellent).  
  • A school has introduced a new after-school program for children, and a few months after commencement, the school sends out feedback questionnaires to the parents of the enrolled children. Such questionnaires usually include close-ended questions that require either definite answers or a Yes/No option. This helps in a quick, overall assessment of the program’s outreach and success.  

quantitative empirical research methods include the following except

Table of Contents

What is quantitative research ? 1,2

quantitative empirical research methods include the following except

The steps shown in the figure can be grouped into the following broad steps:  

  • Theory : Define the problem area or area of interest and create a research question.  
  • Hypothesis : Develop a hypothesis based on the research question. This hypothesis will be tested in the remaining steps.  
  • Research design : In this step, the most appropriate quantitative research design will be selected, including deciding on the sample size, selecting respondents, identifying research sites, if any, etc.
  • Data collection : This process could be extensive based on your research objective and sample size.  
  • Data analysis : Statistical analysis is used to analyze the data collected. The results from the analysis help in either supporting or rejecting your hypothesis.  
  • Present results : Based on the data analysis, conclusions are drawn, and results are presented as accurately as possible.  

Quantitative research characteristics 4

  • Large sample size : This ensures reliability because this sample represents the target population or market. Due to the large sample size, the outcomes can be generalized to the entire population as well, making this one of the important characteristics of quantitative research .  
  • Structured data and measurable variables: The data are numeric and can be analyzed easily. Quantitative research involves the use of measurable variables such as age, salary range, highest education, etc.  
  • Easy-to-use data collection methods : The methods include experiments, controlled observations, and questionnaires and surveys with a rating scale or close-ended questions, which require simple and to-the-point answers; are not bound by geographical regions; and are easy to administer.  
  • Data analysis : Structured and accurate statistical analysis methods using software applications such as Excel, SPSS, R. The analysis is fast, accurate, and less effort intensive.  
  • Reliable : The respondents answer close-ended questions, their responses are direct without ambiguity and yield numeric outcomes, which are therefore highly reliable.  
  • Reusable outcomes : This is one of the key characteristics – outcomes of one research can be used and replicated in other research as well and is not exclusive to only one study.  

Quantitative research methods 5

Quantitative research methods are classified into two types—primary and secondary.  

Primary quantitative research method:

In this type of quantitative research , data are directly collected by the researchers using the following methods.

– Survey research : Surveys are the easiest and most commonly used quantitative research method . They are of two types— cross-sectional and longitudinal.   

->Cross-sectional surveys are specifically conducted on a target population for a specified period, that is, these surveys have a specific starting and ending time and researchers study the events during this period to arrive at conclusions. The main purpose of these surveys is to describe and assess the characteristics of a population. There is one independent variable in this study, which is a common factor applicable to all participants in the population, for example, living in a specific city, diagnosed with a specific disease, of a certain age group, etc. An example of a cross-sectional survey is a study to understand why individuals residing in houses built before 1979 in the US are more susceptible to lead contamination.  

->Longitudinal surveys are conducted at different time durations. These surveys involve observing the interactions among different variables in the target population, exposing them to various causal factors, and understanding their effects across a longer period. These studies are helpful to analyze a problem in the long term. An example of a longitudinal study is the study of the relationship between smoking and lung cancer over a long period.  

– Descriptive research : Explains the current status of an identified and measurable variable. Unlike other types of quantitative research , a hypothesis is not needed at the beginning of the study and can be developed even after data collection. This type of quantitative research describes the characteristics of a problem and answers the what, when, where of a problem. However, it doesn’t answer the why of the problem and doesn’t explore cause-and-effect relationships between variables. Data from this research could be used as preliminary data for another study. Example: A researcher undertakes a study to examine the growth strategy of a company. This sample data can be used by other companies to determine their own growth strategy.  

quantitative empirical research methods include the following except

– Correlational research : This quantitative research method is used to establish a relationship between two variables using statistical analysis and analyze how one affects the other. The research is non-experimental because the researcher doesn’t control or manipulate any of the variables. At least two separate sample groups are needed for this research. Example: Researchers studying a correlation between regular exercise and diabetes.  

– Causal-comparative research : This type of quantitative research examines the cause-effect relationships in retrospect between a dependent and independent variable and determines the causes of the already existing differences between groups of people. This is not a true experiment because it doesn’t assign participants to groups randomly. Example: To study the wage differences between men and women in the same role. For this, already existing wage information is analyzed to understand the relationship.  

– Experimental research : This quantitative research method uses true experiments or scientific methods for determining a cause-effect relation between variables. It involves testing a hypothesis through experiments, in which one or more independent variables are manipulated and then their effect on dependent variables are studied. Example: A researcher studies the importance of a drug in treating a disease by administering the drug in few patients and not administering in a few.  

The following data collection methods are commonly used in primary quantitative research :  

  • Sampling : The most common type is probability sampling, in which a sample is chosen from a larger population using some form of random selection, that is, every member of the population has an equal chance of being selected. The different types of probability sampling are—simple random, systematic, stratified, and cluster sampling.  
  • Interviews : These are commonly telephonic or face-to-face.  
  • Observations : Structured observations are most commonly used in quantitative research . In this method, researchers make observations about specific behaviors of individuals in a structured setting.  
  • Document review : Reviewing existing research or documents to collect evidence for supporting the quantitative research .  
  • Surveys and questionnaires : Surveys can be administered both online and offline depending on the requirement and sample size.

The data collected can be analyzed in several ways in quantitative research , as listed below:  

  • Cross-tabulation —Uses a tabular format to draw inferences among collected data  
  • MaxDiff analysis —Gauges the preferences of the respondents  
  • TURF analysis —Total Unduplicated Reach and Frequency Analysis; helps in determining the market strategy for a business  
  • Gap analysis —Identify gaps in attaining the desired results  
  • SWOT analysis —Helps identify strengths, weaknesses, opportunities, and threats of a product, service, or organization  
  • Text analysis —Used for interpreting unstructured data  

Secondary quantitative research methods :

This method involves conducting research using already existing or secondary data. This method is less effort intensive and requires lesser time. However, researchers should verify the authenticity and recency of the sources being used and ensure their accuracy.  

The main sources of secondary data are: 

  • The Internet  
  • Government and non-government sources  
  • Public libraries  
  • Educational institutions  
  • Commercial information sources such as newspapers, journals, radio, TV  

What is quantitative research? Definition, methods, types, and examples

When to use quantitative research 6  

Here are some simple ways to decide when to use quantitative research . Use quantitative research to:  

  • recommend a final course of action  
  • find whether a consensus exists regarding a particular subject  
  • generalize results to a larger population  
  • determine a cause-and-effect relationship between variables  
  • describe characteristics of specific groups of people  
  • test hypotheses and examine specific relationships  
  • identify and establish size of market segments  

A research case study to understand when to use quantitative research 7  

Context: A study was undertaken to evaluate a major innovation in a hospital’s design, in terms of workforce implications and impact on patient and staff experiences of all single-room hospital accommodations. The researchers undertook a mixed methods approach to answer their research questions. Here, we focus on the quantitative research aspect.  

Research questions : What are the advantages and disadvantages for the staff as a result of the hospital’s move to the new design with all single-room accommodations? Did the move affect staff experience and well-being and improve their ability to deliver high-quality care?  

Method: The researchers obtained quantitative data from three sources:  

  • Staff activity (task time distribution): Each staff member was shadowed by a researcher who observed each task undertaken by the staff, and logged the time spent on each activity.  
  • Staff travel distances : The staff were requested to wear pedometers, which recorded the distances covered.  
  • Staff experience surveys : Staff were surveyed before and after the move to the new hospital design.  

Results of quantitative research : The following observations were made based on quantitative data analysis:  

  • The move to the new design did not result in a significant change in the proportion of time spent on different activities.  
  • Staff activity events observed per session were higher after the move, and direct care and professional communication events per hour decreased significantly, suggesting fewer interruptions and less fragmented care.  
  • A significant increase in medication tasks among the recorded events suggests that medication administration was integrated into patient care activities.  
  • Travel distances increased for all staff, with highest increases for staff in the older people’s ward and surgical wards.  
  • Ratings for staff toilet facilities, locker facilities, and space at staff bases were higher but those for social interaction and natural light were lower.  

Advantages of quantitative research 1,2

When choosing the right research methodology, also consider the advantages of quantitative research and how it can impact your study.  

  • Quantitative research methods are more scientific and rational. They use quantifiable data leading to objectivity in the results and avoid any chances of ambiguity.  
  • This type of research uses numeric data so analysis is relatively easier .  
  • In most cases, a hypothesis is already developed and quantitative research helps in testing and validatin g these constructed theories based on which researchers can make an informed decision about accepting or rejecting their theory.  
  • The use of statistical analysis software ensures quick analysis of large volumes of data and is less effort intensive.  
  • Higher levels of control can be applied to the research so the chances of bias can be reduced.  
  • Quantitative research is based on measured value s, facts, and verifiable information so it can be easily checked or replicated by other researchers leading to continuity in scientific research.  

Disadvantages of quantitative research 1,2

Quantitative research may also be limiting; take a look at the disadvantages of quantitative research. 

  • Experiments are conducted in controlled settings instead of natural settings and it is possible for researchers to either intentionally or unintentionally manipulate the experiment settings to suit the results they desire.  
  • Participants must necessarily give objective answers (either one- or two-word, or yes or no answers) and the reasons for their selection or the context are not considered.   
  • Inadequate knowledge of statistical analysis methods may affect the results and their interpretation.  
  • Although statistical analysis indicates the trends or patterns among variables, the reasons for these observed patterns cannot be interpreted and the research may not give a complete picture.  
  • Large sample sizes are needed for more accurate and generalizable analysis .  
  • Quantitative research cannot be used to address complex issues.  

What is quantitative research? Definition, methods, types, and examples

Frequently asked questions on  quantitative research    

Q:  What is the difference between quantitative research and qualitative research? 1  

A:  The following table lists the key differences between quantitative research and qualitative research, some of which may have been mentioned earlier in the article.  

     
Purpose and design                   
Research question         
Sample size  Large  Small 
Data             
Data collection method  Experiments, controlled observations, questionnaires and surveys with a rating scale or close-ended questions. The methods can be experimental, quasi-experimental, descriptive, or correlational.  Semi-structured interviews/surveys with open-ended questions, document study/literature reviews, focus groups, case study research, ethnography 
Data analysis             

Q:  What is the difference between reliability and validity? 8,9    

A:  The term reliability refers to the consistency of a research study. For instance, if a food-measuring weighing scale gives different readings every time the same quantity of food is measured then that weighing scale is not reliable. If the findings in a research study are consistent every time a measurement is made, then the study is considered reliable. However, it is usually unlikely to obtain the exact same results every time because some contributing variables may change. In such cases, a correlation coefficient is used to assess the degree of reliability. A strong positive correlation between the results indicates reliability.  

Validity can be defined as the degree to which a tool actually measures what it claims to measure. It helps confirm the credibility of your research and suggests that the results may be generalizable. In other words, it measures the accuracy of the research.  

The following table gives the key differences between reliability and validity.  

     
Importance  Refers to the consistency of a measure  Refers to the accuracy of a measure 
Ease of achieving  Easier, yields results faster  Involves more analysis, more difficult to achieve 
Assessment method  By examining the consistency of outcomes over time, between various observers, and within the test  By comparing the accuracy of the results with accepted theories and other measurements of the same idea 
Relationship  Unreliable measurements typically cannot be valid  Valid measurements are also reliable 
Types  Test-retest reliability, internal consistency, inter-rater reliability  Content validity, criterion validity, face validity, construct validity 

Q:  What is mixed methods research? 10

quantitative empirical research methods include the following except

A:  A mixed methods approach combines the characteristics of both quantitative research and qualitative research in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method. A mixed methods research design is useful in case of research questions that cannot be answered by either quantitative research or qualitative research alone. However, this method could be more effort- and cost-intensive because of the requirement of more resources. The figure 3 shows some basic mixed methods research designs that could be used.  

Thus, quantitative research is the appropriate method for testing your hypotheses and can be used either alone or in combination with qualitative research per your study requirements. We hope this article has provided an insight into the various facets of quantitative research , including its different characteristics, advantages, and disadvantages, and a few tips to quickly understand when to use this research method.  

References  

  • Qualitative vs quantitative research: Differences, examples, & methods. Simply Psychology. Accessed Feb 28, 2023. https://simplypsychology.org/qualitative-quantitative.html#Quantitative-Research  
  • Your ultimate guide to quantitative research. Qualtrics. Accessed February 28, 2023. https://www.qualtrics.com/uk/experience-management/research/quantitative-research/  
  • The steps of quantitative research. Revise Sociology. Accessed March 1, 2023. https://revisesociology.com/2017/11/26/the-steps-of-quantitative-research/  
  • What are the characteristics of quantitative research? Marketing91. Accessed March 1, 2023. https://www.marketing91.com/characteristics-of-quantitative-research/  
  • Quantitative research: Types, characteristics, methods, & examples. ProProfs Survey Maker. Accessed February 28, 2023. https://www.proprofssurvey.com/blog/quantitative-research/#Characteristics_of_Quantitative_Research  
  • Qualitative research isn’t as scientific as quantitative methods. Kmusial blog. Accessed March 5, 2023. https://kmusial.wordpress.com/2011/11/25/qualitative-research-isnt-as-scientific-as-quantitative-methods/  
  • Maben J, Griffiths P, Penfold C, et al. Evaluating a major innovation in hospital design: workforce implications and impact on patient and staff experiences of all single room hospital accommodation. Southampton (UK): NIHR Journals Library; 2015 Feb. (Health Services and Delivery Research, No. 3.3.) Chapter 5, Case study quantitative data findings. Accessed March 6, 2023. https://www.ncbi.nlm.nih.gov/books/NBK274429/  
  • McLeod, S. A. (2007).  What is reliability?  Simply Psychology. www.simplypsychology.org/reliability.html  
  • Reliability vs validity: Differences & examples. Accessed March 5, 2023. https://statisticsbyjim.com/basics/reliability-vs-validity/  
  • Mixed methods research. Community Engagement Program. Harvard Catalyst. Accessed February 28, 2023. https://catalyst.harvard.edu/community-engagement/mmr  

Researcher.Life is a subscription-based platform that unifies the best AI tools and services designed to speed up, simplify, and streamline every step of a researcher’s journey. The Researcher.Life All Access Pack is a one-of-a-kind subscription that unlocks full access to an AI writing assistant, literature recommender, journal finder, scientific illustration tool, and exclusive discounts on professional publication services from Editage.  

Based on 21+ years of experience in academia, Researcher.Life All Access empowers researchers to put their best research forward and move closer to success. Explore our top AI Tools pack, AI Tools + Publication Services pack, or Build Your Own Plan. Find everything a researcher needs to succeed, all in one place –  Get All Access now starting at just $17 a month !    

Related Posts

research

What is Research? Definition, Types, Methods, and Examples

Language barrier

Language and Cultural Barriers in Research: How to Bridge the Gap

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • What Is Quantitative Research? | Definition & Methods

What Is Quantitative Research? | Definition & Methods

Published on 4 April 2022 by Pritha Bhandari . Revised on 10 October 2022.

Quantitative research is the process of collecting and analysing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalise results to wider populations.

Quantitative research is the opposite of qualitative research , which involves collecting and analysing non-numerical data (e.g. text, video, or audio).

Quantitative research is widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, etc.

  • What is the demographic makeup of Singapore in 2020?
  • How has the average temperature changed globally over the last century?
  • Does environmental pollution affect the prevalence of honey bees?
  • Does working from home increase productivity for people with long commutes?

Table of contents

Quantitative research methods, quantitative data analysis, advantages of quantitative research, disadvantages of quantitative research, frequently asked questions about quantitative research.

You can use quantitative research methods for descriptive, correlational or experimental research.

  • In descriptive research , you simply seek an overall summary of your study variables.
  • In correlational research , you investigate relationships between your study variables.
  • In experimental research , you systematically examine whether there is a cause-and-effect relationship between variables.

Correlational and experimental research can both be used to formally test hypotheses , or predictions, using statistics. The results may be generalised to broader populations based on the sampling method used.

To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).

Quantitative research methods
Research method How to use Example
Control or manipulate an to measure its effect on a dependent variable. To test whether an intervention can reduce procrastination in college students, you give equal-sized groups either a procrastination intervention or a comparable task. You compare self-ratings of procrastination behaviors between the groups after the intervention.
Ask questions of a group of people in-person, over-the-phone or online. You distribute with rating scales to first-year international college students to investigate their experiences of culture shock.
(Systematic) observation Identify a behavior or occurrence of interest and monitor it in its natural setting. To study college classroom participation, you sit in on classes to observe them, counting and recording the prevalence of active and passive behaviors by students from different backgrounds.
Secondary research Collect data that has been gathered for other purposes e.g., national surveys or historical records. To assess whether attitudes towards climate change have changed since the 1980s, you collect relevant questionnaire data from widely available .

Prevent plagiarism, run a free check.

Once data is collected, you may need to process it before it can be analysed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions .

Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualise your data and check for any trends or outliers.

Using inferential statistics , you can make predictions or generalisations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter .

You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.

Quantitative research is often used to standardise data collection and generalise findings . Strengths of this approach include:

  • Replication

Repeating the study is possible because of standardised data collection protocols and tangible definitions of abstract concepts.

  • Direct comparisons of results

The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.

  • Large samples

Data from large samples can be processed and analysed using reliable and consistent procedures through quantitative data analysis.

  • Hypothesis testing

Using formalised and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.

Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:

  • Superficiality

Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.

  • Narrow focus

Predetermined variables and measurement procedures can mean that you ignore other relevant observations.

  • Structural bias

Despite standardised procedures, structural biases can still affect quantitative research. Missing data , imprecise measurements or inappropriate sampling methods are biases that can lead to the wrong conclusions.

  • Lack of context

Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research , you also have to consider the internal and external validity of your experiment.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Bhandari, P. (2022, October 10). What Is Quantitative Research? | Definition & Methods. Scribbr. Retrieved 18 June 2024, from https://www.scribbr.co.uk/research-methods/introduction-to-quantitative-research/

Is this article helpful?

Pritha Bhandari

Pritha Bhandari

News alert: UC Berkeley has announced its next university librarian

Secondary menu

  • Log in to your Library account
  • Hours and Maps
  • Connect from Off Campus
  • UC Berkeley Home

Search form

Research methods--quantitative, qualitative, and more: overview.

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. 

As Patten and Newhart note in the book Understanding Research Methods , "Research methods are the building blocks of the scientific enterprise. They are the "how" for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge...Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more.  This guide is an introduction, but if you don't see what you need here, always contact your subject librarian, and/or take a look to see if there's a library research guide that will answer your question. 

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods.  HERE IS THE ONLINE GUIDE  to this one-stop shopping collection, and some helpful links are below:

  • SAGE Research Methods
  • Little Green Books  (Quantitative Methods)
  • Little Blue Books  (Qualitative Methods)
  • Dictionaries and Encyclopedias  
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace- -a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools !  From this link, check out pages for each of the following topics:  discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

D-Lab Supports Berkeley faculty, staff, and graduate students with research in data intensive social science, including a wide range of training and workshop offerings Dryad Dryad is a simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data. Geospatial Innovation Facility (GIF) Provides leadership and training across a broad array of integrated mapping technologies on campu Research Data Management A UC Berkeley guide and consulting service for research data management issues

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data , and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA) .  Program for easy web-based analysis of survey data.

Consultants

  • D-Lab/Data Science Discovery Consultants Request help with your research project from peer consultants.
  • Research data (RDM) consulting Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services A service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resourcex

  • IRB / CPHS Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships) OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops
  • Sponsored Projects Sponsored projects works with researchers applying for major external grants.
  • Next: Quantitative Research >>
  • Last Updated: Apr 25, 2024 11:09 AM
  • URL: https://guides.lib.berkeley.edu/researchmethods
  • Skip to main content
  • Skip to primary sidebar
  • Skip to footer
  • QuestionPro

survey software icon

  • Solutions Industries Gaming Automotive Sports and events Education Government Travel & Hospitality Financial Services Healthcare Cannabis Technology Use Case NPS+ Communities Audience Contactless surveys Mobile LivePolls Member Experience GDPR Positive People Science 360 Feedback Surveys
  • Resources Blog eBooks Survey Templates Case Studies Training Help center

quantitative empirical research methods include the following except

Home Market Research

Empirical Research: Definition, Methods, Types and Examples

What is Empirical Research

Content Index

Empirical research: Definition

Empirical research: origin, quantitative research methods, qualitative research methods, steps for conducting empirical research, empirical research methodology cycle, advantages of empirical research, disadvantages of empirical research, why is there a need for empirical research.

Empirical research is defined as any research where conclusions of the study is strictly drawn from concretely empirical evidence, and therefore “verifiable” evidence.

This empirical evidence can be gathered using quantitative market research and  qualitative market research  methods.

For example: A research is being conducted to find out if listening to happy music in the workplace while working may promote creativity? An experiment is conducted by using a music website survey on a set of audience who are exposed to happy music and another set who are not listening to music at all, and the subjects are then observed. The results derived from such a research will give empirical evidence if it does promote creativity or not.

LEARN ABOUT: Behavioral Research

You must have heard the quote” I will not believe it unless I see it”. This came from the ancient empiricists, a fundamental understanding that powered the emergence of medieval science during the renaissance period and laid the foundation of modern science, as we know it today. The word itself has its roots in greek. It is derived from the greek word empeirikos which means “experienced”.

In today’s world, the word empirical refers to collection of data using evidence that is collected through observation or experience or by using calibrated scientific instruments. All of the above origins have one thing in common which is dependence of observation and experiments to collect data and test them to come up with conclusions.

LEARN ABOUT: Causal Research

Types and methodologies of empirical research

Empirical research can be conducted and analysed using qualitative or quantitative methods.

  • Quantitative research : Quantitative research methods are used to gather information through numerical data. It is used to quantify opinions, behaviors or other defined variables . These are predetermined and are in a more structured format. Some of the commonly used methods are survey, longitudinal studies, polls, etc
  • Qualitative research:   Qualitative research methods are used to gather non numerical data.  It is used to find meanings, opinions, or the underlying reasons from its subjects. These methods are unstructured or semi structured. The sample size for such a research is usually small and it is a conversational type of method to provide more insight or in-depth information about the problem Some of the most popular forms of methods are focus groups, experiments, interviews, etc.

Data collected from these will need to be analysed. Empirical evidence can also be analysed either quantitatively and qualitatively. Using this, the researcher can answer empirical questions which have to be clearly defined and answerable with the findings he has got. The type of research design used will vary depending on the field in which it is going to be used. Many of them might choose to do a collective research involving quantitative and qualitative method to better answer questions which cannot be studied in a laboratory setting.

LEARN ABOUT: Qualitative Research Questions and Questionnaires

Quantitative research methods aid in analyzing the empirical evidence gathered. By using these a researcher can find out if his hypothesis is supported or not.

  • Survey research: Survey research generally involves a large audience to collect a large amount of data. This is a quantitative method having a predetermined set of closed questions which are pretty easy to answer. Because of the simplicity of such a method, high responses are achieved. It is one of the most commonly used methods for all kinds of research in today’s world.

Previously, surveys were taken face to face only with maybe a recorder. However, with advancement in technology and for ease, new mediums such as emails , or social media have emerged.

For example: Depletion of energy resources is a growing concern and hence there is a need for awareness about renewable energy. According to recent studies, fossil fuels still account for around 80% of energy consumption in the United States. Even though there is a rise in the use of green energy every year, there are certain parameters because of which the general population is still not opting for green energy. In order to understand why, a survey can be conducted to gather opinions of the general population about green energy and the factors that influence their choice of switching to renewable energy. Such a survey can help institutions or governing bodies to promote appropriate awareness and incentive schemes to push the use of greener energy.

Learn more: Renewable Energy Survey Template Descriptive Research vs Correlational Research

  • Experimental research: In experimental research , an experiment is set up and a hypothesis is tested by creating a situation in which one of the variable is manipulated. This is also used to check cause and effect. It is tested to see what happens to the independent variable if the other one is removed or altered. The process for such a method is usually proposing a hypothesis, experimenting on it, analyzing the findings and reporting the findings to understand if it supports the theory or not.

For example: A particular product company is trying to find what is the reason for them to not be able to capture the market. So the organisation makes changes in each one of the processes like manufacturing, marketing, sales and operations. Through the experiment they understand that sales training directly impacts the market coverage for their product. If the person is trained well, then the product will have better coverage.

  • Correlational research: Correlational research is used to find relation between two set of variables . Regression analysis is generally used to predict outcomes of such a method. It can be positive, negative or neutral correlation.

LEARN ABOUT: Level of Analysis

For example: Higher educated individuals will get higher paying jobs. This means higher education enables the individual to high paying job and less education will lead to lower paying jobs.

  • Longitudinal study: Longitudinal study is used to understand the traits or behavior of a subject under observation after repeatedly testing the subject over a period of time. Data collected from such a method can be qualitative or quantitative in nature.

For example: A research to find out benefits of exercise. The target is asked to exercise everyday for a particular period of time and the results show higher endurance, stamina, and muscle growth. This supports the fact that exercise benefits an individual body.

  • Cross sectional: Cross sectional study is an observational type of method, in which a set of audience is observed at a given point in time. In this type, the set of people are chosen in a fashion which depicts similarity in all the variables except the one which is being researched. This type does not enable the researcher to establish a cause and effect relationship as it is not observed for a continuous time period. It is majorly used by healthcare sector or the retail industry.

For example: A medical study to find the prevalence of under-nutrition disorders in kids of a given population. This will involve looking at a wide range of parameters like age, ethnicity, location, incomes  and social backgrounds. If a significant number of kids coming from poor families show under-nutrition disorders, the researcher can further investigate into it. Usually a cross sectional study is followed by a longitudinal study to find out the exact reason.

  • Causal-Comparative research : This method is based on comparison. It is mainly used to find out cause-effect relationship between two variables or even multiple variables.

For example: A researcher measured the productivity of employees in a company which gave breaks to the employees during work and compared that to the employees of the company which did not give breaks at all.

LEARN ABOUT: Action Research

Some research questions need to be analysed qualitatively, as quantitative methods are not applicable there. In many cases, in-depth information is needed or a researcher may need to observe a target audience behavior, hence the results needed are in a descriptive analysis form. Qualitative research results will be descriptive rather than predictive. It enables the researcher to build or support theories for future potential quantitative research. In such a situation qualitative research methods are used to derive a conclusion to support the theory or hypothesis being studied.

LEARN ABOUT: Qualitative Interview

  • Case study: Case study method is used to find more information through carefully analyzing existing cases. It is very often used for business research or to gather empirical evidence for investigation purpose. It is a method to investigate a problem within its real life context through existing cases. The researcher has to carefully analyse making sure the parameter and variables in the existing case are the same as to the case that is being investigated. Using the findings from the case study, conclusions can be drawn regarding the topic that is being studied.

For example: A report mentioning the solution provided by a company to its client. The challenges they faced during initiation and deployment, the findings of the case and solutions they offered for the problems. Such case studies are used by most companies as it forms an empirical evidence for the company to promote in order to get more business.

  • Observational method:   Observational method is a process to observe and gather data from its target. Since it is a qualitative method it is time consuming and very personal. It can be said that observational research method is a part of ethnographic research which is also used to gather empirical evidence. This is usually a qualitative form of research, however in some cases it can be quantitative as well depending on what is being studied.

For example: setting up a research to observe a particular animal in the rain-forests of amazon. Such a research usually take a lot of time as observation has to be done for a set amount of time to study patterns or behavior of the subject. Another example used widely nowadays is to observe people shopping in a mall to figure out buying behavior of consumers.

  • One-on-one interview: Such a method is purely qualitative and one of the most widely used. The reason being it enables a researcher get precise meaningful data if the right questions are asked. It is a conversational method where in-depth data can be gathered depending on where the conversation leads.

For example: A one-on-one interview with the finance minister to gather data on financial policies of the country and its implications on the public.

  • Focus groups: Focus groups are used when a researcher wants to find answers to why, what and how questions. A small group is generally chosen for such a method and it is not necessary to interact with the group in person. A moderator is generally needed in case the group is being addressed in person. This is widely used by product companies to collect data about their brands and the product.

For example: A mobile phone manufacturer wanting to have a feedback on the dimensions of one of their models which is yet to be launched. Such studies help the company meet the demand of the customer and position their model appropriately in the market.

  • Text analysis: Text analysis method is a little new compared to the other types. Such a method is used to analyse social life by going through images or words used by the individual. In today’s world, with social media playing a major part of everyone’s life, such a method enables the research to follow the pattern that relates to his study.

For example: A lot of companies ask for feedback from the customer in detail mentioning how satisfied are they with their customer support team. Such data enables the researcher to take appropriate decisions to make their support team better.

Sometimes a combination of the methods is also needed for some questions that cannot be answered using only one type of method especially when a researcher needs to gain a complete understanding of complex subject matter.

We recently published a blog that talks about examples of qualitative data in education ; why don’t you check it out for more ideas?

Since empirical research is based on observation and capturing experiences, it is important to plan the steps to conduct the experiment and how to analyse it. This will enable the researcher to resolve problems or obstacles which can occur during the experiment.

Step #1: Define the purpose of the research

This is the step where the researcher has to answer questions like what exactly do I want to find out? What is the problem statement? Are there any issues in terms of the availability of knowledge, data, time or resources. Will this research be more beneficial than what it will cost.

Before going ahead, a researcher has to clearly define his purpose for the research and set up a plan to carry out further tasks.

Step #2 : Supporting theories and relevant literature

The researcher needs to find out if there are theories which can be linked to his research problem . He has to figure out if any theory can help him support his findings. All kind of relevant literature will help the researcher to find if there are others who have researched this before, or what are the problems faced during this research. The researcher will also have to set up assumptions and also find out if there is any history regarding his research problem

Step #3: Creation of Hypothesis and measurement

Before beginning the actual research he needs to provide himself a working hypothesis or guess what will be the probable result. Researcher has to set up variables, decide the environment for the research and find out how can he relate between the variables.

Researcher will also need to define the units of measurements, tolerable degree for errors, and find out if the measurement chosen will be acceptable by others.

Step #4: Methodology, research design and data collection

In this step, the researcher has to define a strategy for conducting his research. He has to set up experiments to collect data which will enable him to propose the hypothesis. The researcher will decide whether he will need experimental or non experimental method for conducting the research. The type of research design will vary depending on the field in which the research is being conducted. Last but not the least, the researcher will have to find out parameters that will affect the validity of the research design. Data collection will need to be done by choosing appropriate samples depending on the research question. To carry out the research, he can use one of the many sampling techniques. Once data collection is complete, researcher will have empirical data which needs to be analysed.

LEARN ABOUT: Best Data Collection Tools

Step #5: Data Analysis and result

Data analysis can be done in two ways, qualitatively and quantitatively. Researcher will need to find out what qualitative method or quantitative method will be needed or will he need a combination of both. Depending on the unit of analysis of his data, he will know if his hypothesis is supported or rejected. Analyzing this data is the most important part to support his hypothesis.

Step #6: Conclusion

A report will need to be made with the findings of the research. The researcher can give the theories and literature that support his research. He can make suggestions or recommendations for further research on his topic.

Empirical research methodology cycle

A.D. de Groot, a famous dutch psychologist and a chess expert conducted some of the most notable experiments using chess in the 1940’s. During his study, he came up with a cycle which is consistent and now widely used to conduct empirical research. It consists of 5 phases with each phase being as important as the next one. The empirical cycle captures the process of coming up with hypothesis about how certain subjects work or behave and then testing these hypothesis against empirical data in a systematic and rigorous approach. It can be said that it characterizes the deductive approach to science. Following is the empirical cycle.

  • Observation: At this phase an idea is sparked for proposing a hypothesis. During this phase empirical data is gathered using observation. For example: a particular species of flower bloom in a different color only during a specific season.
  • Induction: Inductive reasoning is then carried out to form a general conclusion from the data gathered through observation. For example: As stated above it is observed that the species of flower blooms in a different color during a specific season. A researcher may ask a question “does the temperature in the season cause the color change in the flower?” He can assume that is the case, however it is a mere conjecture and hence an experiment needs to be set up to support this hypothesis. So he tags a few set of flowers kept at a different temperature and observes if they still change the color?
  • Deduction: This phase helps the researcher to deduce a conclusion out of his experiment. This has to be based on logic and rationality to come up with specific unbiased results.For example: In the experiment, if the tagged flowers in a different temperature environment do not change the color then it can be concluded that temperature plays a role in changing the color of the bloom.
  • Testing: This phase involves the researcher to return to empirical methods to put his hypothesis to the test. The researcher now needs to make sense of his data and hence needs to use statistical analysis plans to determine the temperature and bloom color relationship. If the researcher finds out that most flowers bloom a different color when exposed to the certain temperature and the others do not when the temperature is different, he has found support to his hypothesis. Please note this not proof but just a support to his hypothesis.
  • Evaluation: This phase is generally forgotten by most but is an important one to keep gaining knowledge. During this phase the researcher puts forth the data he has collected, the support argument and his conclusion. The researcher also states the limitations for the experiment and his hypothesis and suggests tips for others to pick it up and continue a more in-depth research for others in the future. LEARN MORE: Population vs Sample

LEARN MORE: Population vs Sample

There is a reason why empirical research is one of the most widely used method. There are a few advantages associated with it. Following are a few of them.

  • It is used to authenticate traditional research through various experiments and observations.
  • This research methodology makes the research being conducted more competent and authentic.
  • It enables a researcher understand the dynamic changes that can happen and change his strategy accordingly.
  • The level of control in such a research is high so the researcher can control multiple variables.
  • It plays a vital role in increasing internal validity .

Even though empirical research makes the research more competent and authentic, it does have a few disadvantages. Following are a few of them.

  • Such a research needs patience as it can be very time consuming. The researcher has to collect data from multiple sources and the parameters involved are quite a few, which will lead to a time consuming research.
  • Most of the time, a researcher will need to conduct research at different locations or in different environments, this can lead to an expensive affair.
  • There are a few rules in which experiments can be performed and hence permissions are needed. Many a times, it is very difficult to get certain permissions to carry out different methods of this research.
  • Collection of data can be a problem sometimes, as it has to be collected from a variety of sources through different methods.

LEARN ABOUT:  Social Communication Questionnaire

Empirical research is important in today’s world because most people believe in something only that they can see, hear or experience. It is used to validate multiple hypothesis and increase human knowledge and continue doing it to keep advancing in various fields.

For example: Pharmaceutical companies use empirical research to try out a specific drug on controlled groups or random groups to study the effect and cause. This way, they prove certain theories they had proposed for the specific drug. Such research is very important as sometimes it can lead to finding a cure for a disease that has existed for many years. It is useful in science and many other fields like history, social sciences, business, etc.

LEARN ABOUT: 12 Best Tools for Researchers

With the advancement in today’s world, empirical research has become critical and a norm in many fields to support their hypothesis and gain more knowledge. The methods mentioned above are very useful for carrying out such research. However, a number of new methods will keep coming up as the nature of new investigative questions keeps getting unique or changing.

Create a single source of real data with a built-for-insights platform. Store past data, add nuggets of insights, and import research data from various sources into a CRM for insights. Build on ever-growing research with a real-time dashboard in a unified research management platform to turn insights into knowledge.

LEARN MORE         FREE TRIAL

MORE LIKE THIS

quantitative empirical research methods include the following except

QuestionPro Thrive: A Space to Visualize & Share the Future of Technology

Jun 18, 2024

quantitative empirical research methods include the following except

Relationship NPS Fails to Understand Customer Experiences — Tuesday CX

CX Platforms

CX Platform: Top 13 CX Platforms to Drive Customer Success

Jun 17, 2024

quantitative empirical research methods include the following except

How to Know Whether Your Employee Initiatives are Working

Other categories.

  • Academic Research
  • Artificial Intelligence
  • Assessments
  • Brand Awareness
  • Case Studies
  • Communities
  • Consumer Insights
  • Customer effort score
  • Customer Engagement
  • Customer Experience
  • Customer Loyalty
  • Customer Research
  • Customer Satisfaction
  • Employee Benefits
  • Employee Engagement
  • Employee Retention
  • Friday Five
  • General Data Protection Regulation
  • Insights Hub
  • Life@QuestionPro
  • Market Research
  • Mobile diaries
  • Mobile Surveys
  • New Features
  • Online Communities
  • Question Types
  • Questionnaire
  • QuestionPro Products
  • Release Notes
  • Research Tools and Apps
  • Revenue at Risk
  • Survey Templates
  • Training Tips
  • Tuesday CX Thoughts (TCXT)
  • Uncategorized
  • Video Learning Series
  • What’s Coming Up
  • Workforce Intelligence

Canvas | University | Ask a Librarian

  • Library Homepage
  • Arrendale Library

Empirical & Non-Empirical Research

  • Empirical Research

Introduction: What is Empirical Research?

Quantitative methods, qualitative methods.

  • Quantitative vs. Qualitative
  • Reference Works for Social Sciences Research
  • What is Non-Empirical Research?
  • Contact Us!

 Call us at 706-776-0111

  Chat with a Librarian

  Send Us Email

  Library Hours

Empirical research  is based on phenomena that can be observed and measured. Empirical research derives knowledge from actual experience rather than from theory or belief. 

Key characteristics of empirical research include:

  • Specific research questions to be answered;
  • Definitions of the population, behavior, or phenomena being studied;
  • Description of the methodology or research design used to study this population or phenomena, including selection criteria, controls, and testing instruments (such as surveys);
  • Two basic research processes or methods in empirical research: quantitative methods and qualitative methods (see the rest of the guide for more about these methods).

(based on the original from the Connelly LIbrary of LaSalle University)

quantitative empirical research methods include the following except

Empirical Research: Qualitative vs. Quantitative

Learn about common types of journal articles that use APA Style, including empirical studies; meta-analyses; literature reviews; and replication, theoretical, and methodological articles.

Academic Writer

© 2024 American Psychological Association.

  • More about Academic Writer ...

Quantitative Research

A quantitative research project is characterized by having a population about which the researcher wants to draw conclusions, but it is not possible to collect data on the entire population.

  • For an observational study, it is necessary to select a proper, statistical random sample and to use methods of statistical inference to draw conclusions about the population. 
  • For an experimental study, it is necessary to have a random assignment of subjects to experimental and control groups in order to use methods of statistical inference.

Statistical methods are used in all three stages of a quantitative research project.

For observational studies, the data are collected using statistical sampling theory. Then, the sample data are analyzed using descriptive statistical analysis. Finally, generalizations are made from the sample data to the entire population using statistical inference.

For experimental studies, the subjects are allocated to experimental and control group using randomizing methods. Then, the experimental data are analyzed using descriptive statistical analysis. Finally, just as for observational data, generalizations are made to a larger population.

Iversen, G. (2004). Quantitative research . In M. Lewis-Beck, A. Bryman, & T. Liao (Eds.), Encyclopedia of social science research methods . (pp. 897-898). Thousand Oaks, CA: SAGE Publications, Inc.

Qualitative Research

What makes a work deserving of the label qualitative research is the demonstrable effort to produce richly and relevantly detailed descriptions and particularized interpretations of people and the social, linguistic, material, and other practices and events that shape and are shaped by them.

Qualitative research typically includes, but is not limited to, discerning the perspectives of these people, or what is often referred to as the actor’s point of view. Although both philosophically and methodologically a highly diverse entity, qualitative research is marked by certain defining imperatives that include its case (as opposed to its variable) orientation, sensitivity to cultural and historical context, and reflexivity. 

In its many guises, qualitative research is a form of empirical inquiry that typically entails some form of purposive sampling for information-rich cases; in-depth interviews and open-ended interviews, lengthy participant/field observations, and/or document or artifact study; and techniques for analysis and interpretation of data that move beyond the data generated and their surface appearances. 

Sandelowski, M. (2004).  Qualitative research . In M. Lewis-Beck, A. Bryman, & T. Liao (Eds.),  Encyclopedia of social science research methods . (pp. 893-894). Thousand Oaks, CA: SAGE Publications, Inc.

  • Next: Quantitative vs. Qualitative >>
  • Last Updated: Jun 18, 2024 4:12 PM
  • URL: https://library.piedmont.edu/empirical-research
  • Ebooks & Online Video
  • New Materials
  • Renew Checkouts
  • Faculty Resources
  • Library Friends
  • Library Services
  • Our Mission
  • Library History
  • Ask a Librarian!
  • Making Citations
  • Working Online

Friend us on Facebook!

Arrendale Library Piedmont University 706-776-0111

  • Connelly Library

Qualitative and Quantitative Research

What is "empirical research".

  • empirical research
  • Locating Articles in Cinahl and PsycInfo
  • Locating Articles in PubMed
  • Getting the Articles

Empirical research  is based on observed and measured phenomena and derives knowledge from actual experience rather than from theory or belief. 

How do you know if a study is empirical? Read the subheadings within the article, book, or report and look for a description of the research "methodology."  Ask yourself: Could I recreate this study and test these results?

Key characteristics to look for:

  • Specific research questions  to be answered
  • Definition of the  population, behavior, or   phenomena  being studied
  • Description of the  process  used to study this population or phenomena, including selection criteria, controls, and testing instruments (such as surveys)

Another hint: some scholarly journals use a specific layout, called the "IMRaD" format, to communicate empirical research findings. Such articles typically have 4 components:

  • Introduction : sometimes called "literature review" -- what is currently known about the topic -- usually includes a theoretical framework and/or discussion of previous studies
  • Methodology:  sometimes called "research design" --  how to recreate the study -- usually describes the population, research process, and analytical tools
  • Results : sometimes called "findings"  --  what was learned through the study -- usually appears as statistical data or as substantial quotations from research participants
  • Discussion : sometimes called "conclusion" or "implications" -- why the study is important -- usually describes how the research results influence professional practices or future studies
  • << Previous: Home
  • Next: Locating Articles in Cinahl and PsycInfo >>

La Salle University

© Copyright La Salle University. All rights reserved.

quantitative empirical research methods include the following except

DCJ Program: What is Empirical Research?

  • Online Databases/Articles
  • Electronic Resources
  • Selected Websites
  • Writing Your Dissertation
  • The Literature Review

What is Empirical Research?

  • Library Homepage This link opens in a new window
  • Statistical Software & Tools

What is an empirical article? An empirical article reports on research conducted by the author or authors. The research can be based on observations or experiments.   

What types of research make an article empirical? An empirical article may report a study that used  quantitative research methods , which generate numerical data and seek to establish causal relationships between two or more variables. They may also report on a study that uses  qualitative research methods , which objectively and critically analyze behaviors, beliefs, feelings, or values with few or no numerical data available for analysis

How can I tell if an article is empirical?

  • Check the publication in which the article appears. Is it scholarly? The vast majority of empirical articles will be in scholarly journals.
  • Read the article's abstract. Does it include details of a study, observation, or analysis of a number of participants or subjects?
  • Look at the article itself. Is it more than three pages long? Most empirical articles will be fairly lengthy.
  • Look at the article itself. If it contains a subsection marked "Methodology" and another called "Results," "discussion", and "conclusions (or recommendations)," it is probably empirical.
  • If you're still unsure, consult with your professor, or contact the library (800) 359-5945 or [email protected].

How can I search for these articles? There is no quick way to limit your searches only to articles that review empirical studies (or to empirical studies themselves). You will have to do keyword searches, then review article abstracts in order to determine the nature of each.

quantitative empirical research methods include the following except

Empirical research reports the results of a study that uses data derived from actual observation or experimentation.  Empirical research articles are primary sources.

An empirical research article typically includes the following sections:

The following  methodologies  are examples of empirical research and therefore,  primary sources:

  • Empirical research
  • Quantitative study
  • Qualitative study
  • Longitudinal study

E mpirical research articles can be accessed by searching the library databases , such as EBSCOhost, and ProQuest.  

  • << Previous: The Literature Review
  • Next: Library Homepage >>
  • Last Updated: May 7, 2024 1:21 PM
  • URL: https://slulibrary.saintleo.edu/DCJ_Program

Banner

Research Basics - All Subjects

  • Getting Started
  • Evaluating Resources
  • Generating Search Terms
  • Citations, Copyright, Plagiarism
  • What is AI Plagiarism?
  • Physical Library Resources
  • Digital Library Resources
  • Videos / Tutorials on Search Skills
  • Primary Sources
  • Subject Guides: Find your Major This link opens in a new window
  • Open Educational Resources (OER) This link opens in a new window
  • Qualitative, Quantitative & Empirical Research

Quantitative Research

Purpose           Supports a hypothesis through a review of the literature
Aim Provides a statistical model of what the literature presents
Previous Knowledge Researcher already knows what has been discovered
Phase in Process Generally occurs later in the research process
Research Design Designed before research begins
Data-Gathering Data is gathered using tools like surveys or computer programs
Form of Data Data is numerical
Objectivity of Research More objective; researcher measures and analyzes data
Keywords Quantitative, survey, literature review

Qualitative Research

Purpose           Used for exploration, generates a hypothesis
Aim Provides an in-depth description of the research methods to be used
Previous Knowledge Researcher has a general idea of what will be discovered
Phase in Process Usually occurs early in the research process
Research Design Design is developed during research
Data-Gathering Researcher gathers data from interviews, etc.
Form of Data Data takes the form of interviews, videos, artifacts
Objectivity of Research More subjective; researcher interprets events
Keywords Qualitative, methods, results, interviews

Empirical Studies

  • An empirical study is research derived from actual observation or experimentation.
  • The written articles resulting from empirical studies undergo a rigorous review by experts in the field of study prior to being published in journals.
  • After passing this review the articles are published in a scholarly, peer-reviewed, or academic journal.
  • Empirical study articles will generally contain the following features: Abstract - This is a summary of the article. Introduction - This is often identified as the hypothesis of the study and describes the researcher's intent.            Method - A description of how the research was conducted. Results - A description of the findings obtained as a result of the research. Most often answers the hypothesis. Conclusion - A description of how/if the findings were successful and the impact made as a result. References - A detailed listing of all resources cited in the article that support the written work.              
  empirical, experiment, methodology, observation, outcomes, sample size, statistical analysis, study
         
  generally employ .

         

Mixed Methods Research

Mixed Methods Research uses strategies from both qualitative and quantitative research processes to provide a greater understanding of the subject matter.

  • << Previous: Open Educational Resources (OER)
  • Next: Help >>
  • Last Updated: Jun 18, 2024 4:39 PM
  • URL: https://campbellsville.libguides.com/researchbasics

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts
  • Review Article
  • Published: 01 June 2023

Data, measurement and empirical methods in the science of science

  • Lu Liu 1 , 2 , 3 , 4 ,
  • Benjamin F. Jones   ORCID: orcid.org/0000-0001-9697-9388 1 , 2 , 3 , 5 , 6 ,
  • Brian Uzzi   ORCID: orcid.org/0000-0001-6855-2854 1 , 2 , 3 &
  • Dashun Wang   ORCID: orcid.org/0000-0002-7054-2206 1 , 2 , 3 , 7  

Nature Human Behaviour volume  7 ,  pages 1046–1058 ( 2023 ) Cite this article

18k Accesses

14 Citations

117 Altmetric

Metrics details

  • Scientific community

The advent of large-scale datasets that trace the workings of science has encouraged researchers from many different disciplinary backgrounds to turn scientific methods into science itself, cultivating a rapidly expanding ‘science of science’. This Review considers this growing, multidisciplinary literature through the lens of data, measurement and empirical methods. We discuss the purposes, strengths and limitations of major empirical approaches, seeking to increase understanding of the field’s diverse methodologies and expand researchers’ toolkits. Overall, new empirical developments provide enormous capacity to test traditional beliefs and conceptual frameworks about science, discover factors associated with scientific productivity, predict scientific outcomes and design policies that facilitate scientific progress.

Similar content being viewed by others

quantitative empirical research methods include the following except

SciSciNet: A large-scale open data lake for the science of science research

quantitative empirical research methods include the following except

A dataset for measuring the impact of research data and their curation

quantitative empirical research methods include the following except

Envisioning a “science diplomacy 2.0”: on data, global challenges, and multi-layered networks

Scientific advances are a key input to rising standards of living, health and the capacity of society to confront grand challenges, from climate change to the COVID-19 pandemic 1 , 2 , 3 . A deeper understanding of how science works and where innovation occurs can help us to more effectively design science policy and science institutions, better inform scientists’ own research choices, and create and capture enormous value for science and humanity. Building on these key premises, recent years have witnessed substantial development in the ‘science of science’ 4 , 5 , 6 , 7 , 8 , 9 , which uses large-scale datasets and diverse computational toolkits to unearth fundamental patterns behind scientific production and use.

The idea of turning scientific methods into science itself is long-standing. Since the mid-20th century, researchers from different disciplines have asked central questions about the nature of scientific progress and the practice, organization and impact of scientific research. Building on these rich historical roots, the field of the science of science draws upon many disciplines, ranging from information science to the social, physical and biological sciences to computer science, engineering and design. The science of science closely relates to several strands and communities of research, including metascience, scientometrics, the economics of science, research on research, science and technology studies, the sociology of science, metaknowledge and quantitative science studies 5 . There are noticeable differences between some of these communities, mostly around their historical origins and the initial disciplinary composition of researchers forming these communities. For example, metascience has its origins in the clinical sciences and psychology, and focuses on rigour, transparency, reproducibility and other open science-related practices and topics. The scientometrics community, born in library and information sciences, places a particular emphasis on developing robust and responsible measures and indicators for science. Science and technology studies engage the history of science and technology, the philosophy of science, and the interplay between science, technology and society. The science of science, which has its origins in physics, computer science and sociology, takes a data-driven approach and emphasizes questions on how science works. Each of these communities has made fundamental contributions to understanding science. While they differ in their origins, these differences pale in comparison to the overarching, common interest in understanding the practice of science and its societal impact.

Three major developments have encouraged rapid advances in the science of science. The first is in data 9 : modern databases include millions of research articles, grant proposals, patents and more. This windfall of data traces scientific activity in remarkable detail and at scale. The second development is in measurement: scholars have used data to develop many new measures of scientific activities and examine theories that have long been viewed as important but difficult to quantify. The third development is in empirical methods: thanks to parallel advances in data science, network science, artificial intelligence and econometrics, researchers can study relationships, make predictions and assess science policy in powerful new ways. Together, new data, measurements and methods have revealed fundamental new insights about the inner workings of science and scientific progress itself.

With multiple approaches, however, comes a key challenge. As researchers adhere to norms respected within their disciplines, their methods vary, with results often published in venues with non-overlapping readership, fragmenting research along disciplinary boundaries. This fragmentation challenges researchers’ ability to appreciate and understand the value of work outside of their own discipline, much less to build directly on it for further investigations.

Recognizing these challenges and the rapidly developing nature of the field, this paper reviews the empirical approaches that are prevalent in this literature. We aim to provide readers with an up-to-date understanding of the available datasets, measurement constructs and empirical methodologies, as well as the value and limitations of each. Owing to space constraints, this Review does not cover the full technical details of each method, referring readers to related guides to learn more. Instead, we will emphasize why a researcher might favour one method over another, depending on the research question.

Beyond a positive understanding of science, a key goal of the science of science is to inform science policy. While this Review mainly focuses on empirical approaches, with its core audience being researchers in the field, the studies reviewed are also germane to key policy questions. For example, what is the appropriate scale of scientific investment, in what directions and through what institutions 10 , 11 ? Are public investments in science aligned with public interests 12 ? What conditions produce novel or high-impact science 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 ? How do the reward systems of science influence the rate and direction of progress 13 , 21 , 22 , 23 , 24 , and what governs scientific reproducibility 25 , 26 , 27 ? How do contributions evolve over a scientific career 28 , 29 , 30 , 31 , 32 , and how may diversity among scientists advance scientific progress 33 , 34 , 35 , among other questions relevant to science policy 36 , 37 .

Overall, this review aims to facilitate entry to science of science research, expand researcher toolkits and illustrate how diverse research approaches contribute to our collective understanding of science. Section 2 reviews datasets and data linkages. Section 3 reviews major measurement constructs in the science of science. Section 4 considers a range of empirical methods, focusing on one study to illustrate each method and briefly summarizing related examples and applications. Section 5 concludes with an outlook for the science of science.

Historically, data on scientific activities were difficult to collect and were available in limited quantities. Gathering data could involve manually tallying statistics from publications 38 , 39 , interviewing scientists 16 , 40 , or assembling historical anecdotes and biographies 13 , 41 . Analyses were typically limited to a specific domain or group of scientists. Today, massive datasets on scientific production and use are at researchers’ fingertips 42 , 43 , 44 . Armed with big data and advanced algorithms, researchers can now probe questions previously not amenable to quantification and with enormous increases in scope and scale, as detailed below.

Publication datasets cover papers from nearly all scientific disciplines, enabling analyses of both general and domain-specific patterns. Commonly used datasets include the Web of Science (WoS), PubMed, CrossRef, ORCID, OpenCitations, Dimensions and OpenAlex. Datasets incorporating papers’ text (CORE) 45 , 46 , 47 , data entities (DataCite) 48 , 49 and peer review reports (Publons) 33 , 50 , 51 have also become available. These datasets further enable novel measurement, for example, representations of a paper’s content 52 , 53 , novelty 15 , 54 and interdisciplinarity 55 .

Notably, databases today capture more diverse aspects of science beyond publications, offering a richer and more encompassing view of research contexts and of researchers themselves (Fig. 1 ). For example, some datasets trace research funding to the specific publications these investments support 56 , 57 , allowing high-scale studies of the impact of funding on productivity and the return on public investment. Datasets incorporating job placements 58 , 59 , curriculum vitae 21 , 59 and scientific prizes 23 offer rich quantitative evidence on the social structure of science. Combining publication profiles with mentorship genealogies 60 , 61 , dissertations 34 and course syllabi 62 , 63 provides insights on mentoring and cultivating talent.

figure 1

This figure presents commonly used data types in science of science research, information contained in each data type and examples of data sources. Datasets in the science of science research have not only grown in scale but have also expanded beyond publications to integrate upstream funding investments and downstream applications that extend beyond science itself.

Finally, today’s scope of data extends beyond science to broader aspects of society. Altmetrics 64 captures news media and social media mentions of scientific articles. Other databases incorporate marketplace uses of science, including through patents 10 , pharmaceutical clinical trials and drug approvals 65 , 66 . Policy documents 67 , 68 help us to understand the role of science in the halls of government 69 and policy making 12 , 68 .

While datasets of the modern scientific enterprise have grown exponentially, they are not without limitations. As is often the case for data-driven research, drawing conclusions from specific data sources requires scrutiny and care. Datasets are typically based on published work, which may favour easy-to-publish topics over important ones (the streetlight effect) 70 , 71 . The publication of negative results is also rare (the file drawer problem) 72 , 73 . Meanwhile, English language publications account for over 90% of articles in major data sources, with limited coverage of non-English journals 74 . Publication datasets may also reflect biases in data collection across research institutions or demographic groups. Despite the open science movement, many datasets require paid subscriptions, which can create inequality in data access. Creating more open datasets for the science of science, such as OpenAlex, may not only improve the robustness and replicability of empirical claims but also increase entry to the field.

As today’s datasets become larger in scale and continue to integrate new dimensions, they offer opportunities to unveil the inner workings and external impacts of science in new ways. They can enable researchers to reach beyond previous limitations while conducting original studies of new and long-standing questions about the sciences.

Measurement

Here we discuss prominent measurement approaches in the science of science, including their purposes and limitations.

Modern publication databases typically include data on which articles and authors cite other papers and scientists. These citation linkages have been used to engage core conceptual ideas in scientific research. Here we consider two common measures based on citation information: citation counts and knowledge flows.

First, citation counts are commonly used indicators of impact. The term ‘indicator’ implies that it only approximates the concept of interest. A citation count is defined as how many times a document is cited by subsequent documents and can proxy for the importance of research papers 75 , 76 as well as patented inventions 77 , 78 , 79 . Rather than treating each citation equally, measures may further weight the importance of each citation, for example by using the citation network structure to produce centrality 80 , PageRank 81 , 82 or Eigenfactor indicators 83 , 84 .

Citation-based indicators have also faced criticism 84 , 85 . Citation indicators necessarily oversimplify the construct of impact, often ignoring heterogeneity in the meaning and use of a particular reference, the variations in citation practices across fields and institutional contexts, and the potential for reputation and power structures in science to influence citation behaviour 86 , 87 . Researchers have started to understand more nuanced citation behaviours ranging from negative citations 86 to citation context 47 , 88 , 89 . Understanding what a citation actually measures matters in interpreting and applying many research findings in the science of science. Evaluations relying on citation-based indicators rather than expert judgements raise questions regarding misuse 90 , 91 , 92 . Given the importance of developing indicators that can reliably quantify and evaluate science, the scientometrics community has been working to provide guidance for responsible citation practices and assessment 85 .

Second, scientists use citations to trace knowledge flows. Each citation in a paper is a link to specific previous work from which we can proxy how new discoveries draw upon existing ideas 76 , 93 and how knowledge flows between fields of science 94 , 95 , research institutions 96 , regions and nations 97 , 98 , 99 , and individuals 81 . Combinations of citation linkages can also approximate novelty 15 , disruptiveness 17 , 100 and interdisciplinarity 55 , 95 , 101 , 102 . A rapidly expanding body of work further examines citations to scientific articles from other domains (for example, patents, clinical drug trials and policy documents) to understand the applied value of science 10 , 12 , 65 , 66 , 103 , 104 , 105 .

Individuals

Analysing individual careers allows researchers to answer questions such as: How do we quantify individual scientific productivity? What is a typical career lifecycle? How are resources and credits allocated across individuals and careers? A scholar’s career can be examined through the papers they publish 30 , 31 , 106 , 107 , 108 , with attention to career progression and mobility, publication counts and citation impact, as well as grant funding 24 , 109 , 110 and prizes 111 , 112 , 113 ,

Studies of individual impact focus on output, typically approximated by the number of papers a researcher publishes and citation indicators. A popular measure for individual impact is the h -index 114 , which takes both volume and per-paper impact into consideration. Specifically, a scientist is assigned the largest value h such that they have h papers that were each cited at least h times. Later studies build on the idea of the h -index and propose variants to address limitations 115 , these variants ranging from emphasizing highly cited papers in a career 116 , to field differences 117 and normalizations 118 , to the relative contribution of an individual in collaborative works 119 .

To study dynamics in output over the lifecycle, individuals can be studied according to age, career age or the sequence of publications. A long-standing literature has investigated the relationship between age and the likelihood of outstanding achievement 28 , 106 , 111 , 120 , 121 . Recent studies further decouple the relationship between age, publication volume and per-paper citation, and measure the likelihood of producing highly cited papers in the sequence of works one produces 30 , 31 .

As simple as it sounds, representing careers using publication records is difficult. Collecting the full publication list of a researcher is the foundation to study individuals yet remains a key challenge, requiring name disambiguation techniques to match specific works to specific researchers. Although algorithms are increasingly capable at identifying millions of career profiles 122 , they vary in accuracy and robustness. ORCID can help to alleviate the problem by offering researchers the opportunity to create, maintain and update individual profiles themselves, and it goes beyond publications to collect broader outputs and activities 123 . A second challenge is survivorship bias. Empirical studies tend to focus on careers that are long enough to afford statistical analyses, which limits the applicability of the findings to scientific careers as a whole. A third challenge is the breadth of scientists’ activities, where focusing on publications ignores other important contributions such as mentorship and teaching, service (for example, refereeing papers, reviewing grant proposals and editing journals) or leadership within their organizations. Although researchers have begun exploring these dimensions by linking individual publication profiles with genealogical databases 61 , 124 , dissertations 34 , grants 109 , curriculum vitae 21 and acknowledgements 125 , scientific careers beyond publication records remain under-studied 126 , 127 . Lastly, citation-based indicators only serve as an approximation of individual performance with similar limitations as discussed above. The scientific community has called for more appropriate practices 85 , 128 , ranging from incorporating expert assessment of research contributions to broadening the measures of impact beyond publications.

Over many decades, science has exhibited a substantial and steady shift away from solo authorship towards coauthorship, especially among highly cited works 18 , 129 , 130 . In light of this shift, a research field, the science of team science 131 , 132 , has emerged to study the mechanisms that facilitate or hinder the effectiveness of teams. Team size can be proxied by the number of coauthors on a paper, which has been shown to predict distinctive types of advance: whereas larger teams tend to develop ideas, smaller teams tend to disrupt current ways of thinking 17 . Team characteristics can be inferred from coauthors’ backgrounds 133 , 134 , 135 , allowing quantification of a team’s diversity in terms of field, age, gender or ethnicity. Collaboration networks based on coauthorship 130 , 136 , 137 , 138 , 139 offer nuanced network-based indicators to understand individual and institutional collaborations.

However, there are limitations to using coauthorship alone to study teams 132 . First, coauthorship can obscure individual roles 140 , 141 , 142 , which has prompted institutional responses to help to allocate credit, including authorship order and individual contribution statements 56 , 143 . Second, coauthorship does not reflect the complex dynamics and interactions between team members that are often instrumental for team success 53 , 144 . Third, collaborative contributions can extend beyond coauthorship in publications to include members of a research laboratory 145 or co-principal investigators (co-PIs) on a grant 146 . Initiatives such as CRediT may help to address some of these issues by recording detailed roles for each contributor 147 .

Institutions

Research institutions, such as departments, universities, national laboratories and firms, encompass wider groups of researchers and their corresponding outputs. Institutional membership can be inferred from affiliations listed on publications or patents 148 , 149 , and the output of an institution can be aggregated over all its affiliated researchers 150 . Institutional research information systems (CRIS) contain more comprehensive research outputs and activities from employees.

Some research questions consider the institution as a whole, investigating the returns to research and development investment 104 , inequality of resource allocation 22 and the flow of scientists 21 , 148 , 149 . Other questions focus on institutional structures as sources of research productivity by looking into the role of peer effects 125 , 151 , 152 , 153 , how institutional policies impact research outcomes 154 , 155 and whether interdisciplinary efforts foster innovation 55 . Institution-oriented measurement faces similar limitations as with analyses of individuals and teams, including name disambiguation for a given institution and the limited capacity of formal publication records to characterize the full range of relevant institutional outcomes. It is also unclear how to allocate credit among multiple institutions associated with a paper. Moreover, relevant institutional employees extend beyond publishing researchers: interns, technicians and administrators all contribute to research endeavours 130 .

In sum, measurements allow researchers to quantify scientific production and use across numerous dimensions, but they also raise questions of construct validity: Does the proposed metric really reflect what we want to measure? Testing the construct’s validity is important, as is understanding a construct’s limits. Where possible, using alternative measurement approaches, or qualitative methods such as interviews and surveys, can improve measurement accuracy and the robustness of findings.

Empirical methods

In this section, we review two broad categories of empirical approaches (Table 1 ), each with distinctive goals: (1) to discover, estimate and predict empirical regularities; and (2) to identify causal mechanisms. For each method, we give a concrete example to help to explain how the method works, summarize related work for interested readers, and discuss contributions and limitations.

Descriptive and predictive approaches

Empirical regularities and generalizable facts.

The discovery of empirical regularities in science has had a key role in driving conceptual developments and the directions of future research. By observing empirical patterns at scale, researchers unveil central facts that shape science and present core features that theories of scientific progress and practice must explain. For example, consider citation distributions. de Solla Price first proposed that citation distributions are fat-tailed 39 , indicating that a few papers have extremely high citations while most papers have relatively few or even no citations at all. de Solla Price proposed that citation distribution was a power law, while researchers have since refined this view to show that the distribution appears log-normal, a nearly universal regularity across time and fields 156 , 157 . The fat-tailed nature of citation distributions and its universality across the sciences has in turn sparked substantial theoretical work that seeks to explain this key empirical regularity 20 , 156 , 158 , 159 .

Empirical regularities are often surprising and can contest previous beliefs of how science works. For example, it has been shown that the age distribution of great achievements peaks in middle age across a wide range of fields 107 , 121 , 160 , rejecting the common belief that young scientists typically drive breakthroughs in science. A closer look at the individual careers also indicates that productivity patterns vary widely across individuals 29 . Further, a scholar’s highest-impact papers come at a remarkably constant rate across the sequence of their work 30 , 31 .

The discovery of empirical regularities has had important roles in shaping beliefs about the nature of science 10 , 45 , 161 , 162 , sources of breakthrough ideas 15 , 163 , 164 , 165 , scientific careers 21 , 29 , 126 , 127 , the network structure of ideas and scientists 23 , 98 , 136 , 137 , 138 , 139 , 166 , gender inequality 57 , 108 , 126 , 135 , 143 , 167 , 168 , and many other areas of interest to scientists and science institutions 22 , 47 , 86 , 97 , 102 , 105 , 134 , 169 , 170 , 171 . At the same time, care must be taken to ensure that findings are not merely artefacts due to data selection or inherent bias. To differentiate meaningful patterns from spurious ones, it is important to stress test the findings through different selection criteria or across non-overlapping data sources.

Regression analysis

When investigating correlations among variables, a classic method is regression, which estimates how one set of variables explains variation in an outcome of interest. Regression can be used to test explicit hypotheses or predict outcomes. For example, researchers have investigated whether a paper’s novelty predicts its citation impact 172 . Adding additional control variables to the regression, one can further examine the robustness of the focal relationship.

Although regression analysis is useful for hypothesis testing, it bears substantial limitations. If the question one wishes to ask concerns a ‘causal’ rather than a correlational relationship, regression is poorly suited to the task as it is impossible to control for all the confounding factors. Failing to account for such ‘omitted variables’ can bias the regression coefficient estimates and lead to spurious interpretations. Further, regression models often have low goodness of fit (small R 2 ), indicating that the variables considered explain little of the outcome variation. As regressions typically focus on a specific relationship in simple functional forms, regressions tend to emphasize interpretability rather than overall predictability. The advent of predictive approaches powered by large-scale datasets and novel computational techniques offers new opportunities for modelling complex relationships with stronger predictive power.

Mechanistic models

Mechanistic modelling is an important approach to explaining empirical regularities, drawing from methods primarily used in physics. Such models predict macro-level regularities of a system by modelling micro-level interactions among basic elements with interpretable and modifiable formulars. While theoretical by nature, mechanistic models in the science of science are often empirically grounded, and this approach has developed together with the advent of large-scale, high-resolution data.

Simplicity is the core value of a mechanistic model. Consider for example, why citations follow a fat-tailed distribution. de Solla Price modelled the citing behaviour as a cumulative advantage process on a growing citation network 159 and found that if the probability a paper is cited grows linearly with its existing citations, the resulting distribution would follow a power law, broadly aligned with empirical observations. The model is intentionally simplified, ignoring myriad factors. Yet the simple cumulative advantage process is by itself sufficient in explaining a power law distribution of citations. In this way, mechanistic models can help to reveal key mechanisms that can explain observed patterns.

Moreover, mechanistic models can be refined as empirical evidence evolves. For example, later investigations showed that citation distributions are better characterized as log-normal 156 , 173 , prompting researchers to introduce a fitness parameter to encapsulate the inherent differences in papers’ ability to attract citations 174 , 175 . Further, older papers are less likely to be cited than expected 176 , 177 , 178 , motivating more recent models 20 to introduce an additional aging effect 179 . By combining the cumulative advantage, fitness and aging effects, one can already achieve substantial predictive power not just for the overall properties of the system but also the citation dynamics of individual papers 20 .

In addition to citations, mechanistic models have been developed to understand the formation of collaborations 136 , 180 , 181 , 182 , 183 , knowledge discovery and diffusion 184 , 185 , topic selection 186 , 187 , career dynamics 30 , 31 , 188 , 189 , the growth of scientific fields 190 and the dynamics of failure in science and other domains 178 .

At the same time, some observers have argued that mechanistic models are too simplistic to capture the essence of complex real-world problems 191 . While it has been a cornerstone for the natural sciences, representing social phenomena in a limited set of mathematical equations may miss complexities and heterogeneities that make social phenomena interesting in the first place. Such concerns are not unique to the science of science, as they represent a broader theme in computational social sciences 192 , 193 , ranging from social networks 194 , 195 to human mobility 196 , 197 to epidemics 198 , 199 . Other observers have questioned the practical utility of mechanistic models and whether they can be used to guide decisions and devise actionable policies. Nevertheless, despite these limitations, several complex phenomena in the science of science are well captured by simple mechanistic models, showing a high degree of regularity beneath complex interacting systems and providing powerful insights about the nature of science. Mixing such modelling with other methods could be particularly fruitful in future investigations.

Machine learning

The science of science seeks in part to forecast promising directions for scientific research 7 , 44 . In recent years, machine learning methods have substantially advanced predictive capabilities 200 , 201 and are playing increasingly important parts in the science of science. In contrast to the previous methods, machine learning does not emphasize hypotheses or theories. Rather, it leverages complex relationships in data and optimizes goodness of fit to make predictions and categorizations.

Traditional machine learning models include supervised, semi-supervised and unsupervised learning. The model choice depends on data availability and the research question, ranging from supervised models for citation prediction 202 , 203 to unsupervised models for community detection 204 . Take for example mappings of scientific knowledge 94 , 205 , 206 . The unsupervised method applies network clustering algorithms to map the structures of science. Related visualization tools make sense of clusters from the underlying network, allowing observers to see the organization, interactions and evolution of scientific knowledge. More recently, supervised learning, and deep neural networks in particular, have witnessed especially rapid developments 207 . Neural networks can generate high-dimensional representations of unstructured data such as images and texts, which encode complex properties difficult for human experts to perceive.

Take text analysis as an example. A recent study 52 utilizes 3.3 million paper abstracts in materials science to predict the thermoelectric properties of materials. The intuition is that the words currently used to describe a material may predict its hitherto undiscovered properties (Fig. 2 ). Compared with a random material, the materials predicted by the model are eight times more likely to be reported as thermoelectric in the next 5 years, suggesting that machine learning has the potential to substantially speed up knowledge discovery, especially as data continue to grow in scale and scope. Indeed, predicting the direction of new discoveries represents one of the most promising avenues for machine learning models, with neural networks being applied widely to biology 208 , physics 209 , 210 , mathematics 211 , chemistry 212 , medicine 213 and clinical applications 214 . Neural networks also offer a quantitative framework to probe the characteristics of creative products ranging from scientific papers 53 , journals 215 , organizations 148 , to paintings and movies 32 . Neural networks can also help to predict the reproducibility of papers from a variety of disciplines at scale 53 , 216 .

figure 2

This figure illustrates the word2vec skip-gram methods 52 , where the goal is to predict useful properties of materials using previous scientific literature. a , The architecture and training process of the word2vec skip-gram model, where the 3-layer, fully connected neural network learns the 200-dimensional representation (hidden layer) from the sparse vector for each word and its context in the literature (input layer). b , The top two principal components of the word embedding. Materials with similar features are close in the 2D space, allowing prediction of a material’s properties. Different targeted words are shown in different colours. Reproduced with permission from ref. 52 , Springer Nature Ltd.

While machine learning can offer high predictive accuracy, successful applications to the science of science face challenges, particularly regarding interpretability. Researchers may value transparent and interpretable findings for how a given feature influences an outcome, rather than a black-box model. The lack of interpretability also raises concerns about bias and fairness. In predicting reproducible patterns from data, machine learning models inevitably include and reproduce biases embedded in these data, often in non-transparent ways. The fairness of machine learning 217 is heavily debated in applications ranging from the criminal justice system to hiring processes. Effective and responsible use of machine learning in the science of science therefore requires thoughtful partnership between humans and machines 53 to build a reliable system accessible to scrutiny and modification.

Causal approaches

The preceding methods can reveal core facts about the workings of science and develop predictive capacity. Yet, they fail to capture causal relationships, which are particularly useful in assessing policy interventions. For example, how can we test whether a science policy boosts or hinders the performance of individuals, teams or institutions? The overarching idea of causal approaches is to construct some counterfactual world where two groups are identical to each other except that one group experiences a treatment that the other group does not.

Towards causation

Before engaging in causal approaches, it is useful to first consider the interpretative challenges of observational data. As observational data emerge from mechanisms that are not fully known or measured, an observed correlation may be driven by underlying forces that were not accounted for in the analysis. This challenge makes causal inference fundamentally difficult in observational data. An awareness of this issue is the first step in confronting it. It further motivates intermediate empirical approaches, including the use of matching strategies and fixed effects, that can help to confront (although not fully eliminate) the inference challenge. We first consider these approaches before turning to more fully causal methods.

Matching. Matching utilizes rich information to construct a control group that is similar to the treatment group on as many observable characteristics as possible before the treatment group is exposed to the treatment. Inferences can then be made by comparing the treatment and the matched control groups. Exact matching applies to categorical values, such as country, gender, discipline or affiliation 35 , 218 . Coarsened exact matching considers percentile bins of continuous variables and matches observations in the same bin 133 . Propensity score matching estimates the probability of receiving the ‘treatment’ on the basis of the controlled variables and uses the estimates to match treatment and control groups, which reduces the matching task from comparing the values of multiple covariates to comparing a single value 24 , 219 . Dynamic matching is useful for longitudinally matching variables that change over time 220 , 221 .

Fixed effects. Fixed effects are a powerful and now standard tool in controlling for confounders. A key requirement for using fixed effects is that there are multiple observations on the same subject or entity (person, field, institution and so on) 222 , 223 , 224 . The fixed effect works as a dummy variable that accounts for the role of any fixed characteristic of that entity. Consider the finding where gender-diverse teams produce higher-impact papers than same-gender teams do 225 . A confounder may be that individuals who tend to write high-impact papers may also be more likely to work in gender-diverse teams. By including individual fixed effects, one accounts for any fixed characteristics of individuals (such as IQ, cultural background or previous education) that might drive the relationship of interest.

In sum, matching and fixed effects methods reduce potential sources of bias in interpreting relationships between variables. Yet, confounders may persist in these studies. For instance, fixed effects do not control for unobserved factors that change with time within the given entity (for example, access to funding or new skills). Identifying casual effects convincingly will then typically require distinct research methods that we turn to next.

Quasi-experiments

Researchers in economics and other fields have developed a range of quasi-experimental methods to construct treatment and control groups. The key idea here is exploiting randomness from external events that differentially expose subjects to a particular treatment. Here we review three quasi-experimental methods: difference-in-differences, instrumental variables and regression discontinuity (Fig. 3 ).

figure 3

a – c , This figure presents illustrations of ( a ) differences-in-differences, ( b ) instrumental variables and ( c ) regression discontinuity methods. The solid line in b represents causal links and the dashed line represents the relationships that are not allowed, if the IV method is to produce causal inference.

Difference-in-differences. Difference-in-difference regression (DiD) investigates the effect of an unexpected event, comparing the affected group (the treated group) with an unaffected group (the control group). The control group is intended to provide the counterfactual path—what would have happened were it not for the unexpected event. Ideally, the treated and control groups are on virtually identical paths before the treatment event, but DiD can also work if the groups are on parallel paths (Fig. 3a ). For example, one study 226 examines how the premature death of superstar scientists affects the productivity of their previous collaborators. The control group are collaborators of superstars who did not die in the time frame. The two groups do not show significant differences in publications before a death event, yet upon the death of a star scientist, the treated collaborators on average experience a 5–8% decline in their quality-adjusted publication rates compared with the control group. DiD has wide applicability in the science of science, having been used to analyse the causal effects of grant design 24 , access costs to previous research 155 , 227 , university technology transfer policies 154 , intellectual property 228 , citation practices 229 , evolution of fields 221 and the impacts of paper retractions 230 , 231 , 232 . The DiD literature has grown especially rapidly in the field of economics, with substantial recent refinements 233 , 234 .

Instrumental variables. Another quasi-experimental approach utilizes ‘instrumental variables’ (IV). The goal is to determine the causal influence of some feature X on some outcome Y by using a third, instrumental variable. This instrumental variable is a quasi-random event that induces variation in X and, except for its impact through X , has no other effect on the outcome Y (Fig. 3b ). For example, consider a study of astronomy that seeks to understand how telescope time affects career advancement 235 . Here, one cannot simply look at the correlation between telescope time and career outcomes because many confounds (such as talent or grit) may influence both telescope time and career opportunities. Now consider the weather as an instrumental variable. Cloudy weather will, at random, reduce an astronomer’s observational time. Yet, the weather on particular nights is unlikely to correlate with a scientist’s innate qualities. The weather can then provide an instrumental variable to reveal a causal relationship between telescope time and career outcomes. Instrumental variables have been used to study local peer effects in research 151 , the impact of gender composition in scientific committees 236 , patents on future innovation 237 and taxes on inventor mobility 238 .

Regression discontinuity. In regression discontinuity, policies with an arbitrary threshold for receiving some benefit can be used to construct treatment and control groups (Fig. 3c ). Take the funding paylines for grant proposals as an example. Proposals with scores increasingly close to the payline are increasingly similar in their both observable and unobservable characteristics, yet only those projects with scores above the payline receive the funding. For example, a study 110 examines the effect of winning an early-career grant on the probability of winning a later, mid-career grant. The probability has a discontinuous jump across the initial grant’s payline, providing the treatment and control groups needed to estimate the causal effect of receiving a grant. This example utilizes the ‘sharp’ regression discontinuity that assumes treatment status to be fully determined by the cut-off. If we assume treatment status is only partly determined by the cut-off, we can use ‘fuzzy’ regression discontinuity designs. Here the probability of receiving a grant is used to estimate the future outcome 11 , 110 , 239 , 240 , 241 .

Although quasi-experiments are powerful tools, they face their own limitations. First, these approaches identify causal effects within a specific context and often engage small numbers of observations. How representative the samples are for broader populations or contexts is typically left as an open question. Second, the validity of the causal design is typically not ironclad. Researchers usually conduct different robustness checks to verify whether observable confounders have significant differences between the treated and control groups, before treatment. However, unobservable features may still differ between treatment and control groups. The quality of instrumental variables and the specific claim that they have no effect on the outcome except through the variable of interest, is also difficult to assess. Ultimately, researchers must rely partly on judgement to tell whether appropriate conditions are met for causal inference.

This section emphasized popular econometric approaches to causal inference. Other empirical approaches, such as graphical causal modelling 242 , 243 , also represent an important stream of work on assessing causal relationships. Such approaches usually represent causation as a directed acyclic graph, with nodes as variables and arrows between them as suspected causal relationships. In the science of science, the directed acyclic graph approach has been applied to quantify the causal effect of journal impact factor 244 and gender or racial bias 245 on citations. Graphical causal modelling has also triggered discussions on strengths and weaknesses compared to the econometrics methods 246 , 247 .

Experiments

In contrast to quasi-experimental approaches, laboratory and field experiments conduct direct randomization in assigning treatment and control groups. These methods engage explicitly in the data generation process, manipulating interventions to observe counterfactuals. These experiments are crafted to study mechanisms of specific interest and, by designing the experiment and formally randomizing, can produce especially rigorous causal inference.

Laboratory experiments. Laboratory experiments build counterfactual worlds in well-controlled laboratory environments. Researchers randomly assign participants to the treatment or control group and then manipulate the laboratory conditions to observe different outcomes in the two groups. For example, consider laboratory experiments on team performance and gender composition 144 , 248 . The researchers randomly assign participants into groups to perform tasks such as solving puzzles or brainstorming. Teams with a higher proportion of women are found to perform better on average, offering evidence that gender diversity is causally linked to team performance. Laboratory experiments can allow researchers to test forces that are otherwise hard to observe, such as how competition influences creativity 249 . Laboratory experiments have also been used to evaluate how journal impact factors shape scientists’ perceptions of rewards 250 and gender bias in hiring 251 .

Laboratory experiments allow for precise control of settings and procedures to isolate causal effects of interest. However, participants may behave differently in synthetic environments than in real-world settings, raising questions about the generalizability and replicability of the results 252 , 253 , 254 . To assess causal effects in real-world settings, researcher use randomized controlled trials.

Randomized controlled trials. A randomized controlled trial (RCT), or field experiment, is a staple for causal inference across a wide range of disciplines. RCTs randomly assign participants into the treatment and control conditions 255 and can be used not only to assess mechanisms but also to test real-world interventions such as policy change. The science of science has witnessed growing use of RCTs. For instance, a field experiment 146 investigated whether lower search costs for collaborators increased collaboration in grant applications. The authors randomly allocated principal investigators to face-to-face sessions in a medical school, and then measured participants’ chance of writing a grant proposal together. RCTs have also offered rich causal insights on peer review 256 , 257 , 258 , 259 , 260 and gender bias in science 261 , 262 , 263 .

While powerful, RCTs are difficult to conduct in the science of science, mainly for two reasons. The first concerns potential risks in a policy intervention. For instance, while randomizing funding across individuals could generate crucial causal insights for funders, it may also inadvertently harm participants’ careers 264 . Second, key questions in the science of science often require a long-time horizon to trace outcomes, which makes RCTs costly. It also raises the difficulty of replicating findings. A relative advantage of the quasi-experimental methods discussed earlier is that one can identify causal effects over potentially long periods of time in the historical record. On the other hand, quasi-experiments must be found as opposed to designed, and they often are not available for many questions of interest. While the best approaches are context dependent, a growing community of researchers is building platforms to facilitate RCTs for the science of science, aiming to lower their costs and increase their scale. Performing RCTs in partnership with science institutions can also contribute to timely, policy-relevant research that may substantially improve science decision-making and investments.

Research in the science of science has been empowered by the growth of high-scale data, new measurement approaches and an expanding range of empirical methods. These tools provide enormous capacity to test conceptual frameworks about science, discover factors impacting scientific productivity, predict key scientific outcomes and design policies that better facilitate future scientific progress. A careful appreciation of empirical techniques can help researchers to choose effective tools for questions of interest and propel the field. A better and broader understanding of these methodologies may also build bridges across diverse research communities, facilitating communication and collaboration, and better leveraging the value of diverse perspectives. The science of science is about turning scientific methods on the nature of science itself. The fruits of this work, with time, can guide researchers and research institutions to greater progress in discovery and understanding across the landscape of scientific inquiry.

Bush, V . S cience–the Endless Frontier: A Report to the President on a Program for Postwar Scientific Research (National Science Foundation, 1990).

Mokyr, J. The Gifts of Athena (Princeton Univ. Press, 2011).

Jones, B. F. in Rebuilding the Post-Pandemic Economy (eds Kearney, M. S. & Ganz, A.) 272–310 (Aspen Institute Press, 2021).

Wang, D. & Barabási, A.-L. The Science of Science (Cambridge Univ. Press, 2021).

Fortunato, S. et al. Science of science. Science 359 , eaao0185 (2018).

Article   PubMed   PubMed Central   Google Scholar  

Azoulay, P. et al. Toward a more scientific science. Science 361 , 1194–1197 (2018).

Article   PubMed   Google Scholar  

Clauset, A., Larremore, D. B. & Sinatra, R. Data-driven predictions in the science of science. Science 355 , 477–480 (2017).

Article   CAS   PubMed   Google Scholar  

Zeng, A. et al. The science of science: from the perspective of complex systems. Phys. Rep. 714 , 1–73 (2017).

Article   Google Scholar  

Lin, Z., Yin. Y., Liu, L. & Wang, D. SciSciNet: a large-scale open data lake for the science of science research. Sci. Data, https://doi.org/10.1038/s41597-023-02198-9 (2023).

Ahmadpoor, M. & Jones, B. F. The dual frontier: patented inventions and prior scientific advance. Science 357 , 583–587 (2017).

Azoulay, P., Graff Zivin, J. S., Li, D. & Sampat, B. N. Public R&D investments and private-sector patenting: evidence from NIH funding rules. Rev. Econ. Stud. 86 , 117–152 (2019).

Yin, Y., Dong, Y., Wang, K., Wang, D. & Jones, B. F. Public use and public funding of science. Nat. Hum. Behav. 6 , 1344–1350 (2022).

Merton, R. K. The Sociology of Science: Theoretical and Empirical Investigations (Univ. Chicago Press, 1973).

Kuhn, T. The Structure of Scientific Revolutions (Princeton Univ. Press, 2021).

Uzzi, B., Mukherjee, S., Stringer, M. & Jones, B. Atypical combinations and scientific impact. Science 342 , 468–472 (2013).

Zuckerman, H. Scientific Elite: Nobel Laureates in the United States (Transaction Publishers, 1977).

Wu, L., Wang, D. & Evans, J. A. Large teams develop and small teams disrupt science and technology. Nature 566 , 378–382 (2019).

Wuchty, S., Jones, B. F. & Uzzi, B. The increasing dominance of teams in production of knowledge. Science 316 , 1036–1039 (2007).

Foster, J. G., Rzhetsky, A. & Evans, J. A. Tradition and innovation in scientists’ research strategies. Am. Sociol. Rev. 80 , 875–908 (2015).

Wang, D., Song, C. & Barabási, A.-L. Quantifying long-term scientific impact. Science 342 , 127–132 (2013).

Clauset, A., Arbesman, S. & Larremore, D. B. Systematic inequality and hierarchy in faculty hiring networks. Sci. Adv. 1 , e1400005 (2015).

Ma, A., Mondragón, R. J. & Latora, V. Anatomy of funded research in science. Proc. Natl Acad. Sci. USA 112 , 14760–14765 (2015).

Article   CAS   PubMed   PubMed Central   Google Scholar  

Ma, Y. & Uzzi, B. Scientific prize network predicts who pushes the boundaries of science. Proc. Natl Acad. Sci. USA 115 , 12608–12615 (2018).

Azoulay, P., Graff Zivin, J. S. & Manso, G. Incentives and creativity: evidence from the academic life sciences. RAND J. Econ. 42 , 527–554 (2011).

Schor, S. & Karten, I. Statistical evaluation of medical journal manuscripts. JAMA 195 , 1123–1128 (1966).

Platt, J. R. Strong inference: certain systematic methods of scientific thinking may produce much more rapid progress than others. Science 146 , 347–353 (1964).

Ioannidis, J. P. Why most published research findings are false. PLoS Med. 2 , e124 (2005).

Simonton, D. K. Career landmarks in science: individual differences and interdisciplinary contrasts. Dev. Psychol. 27 , 119 (1991).

Way, S. F., Morgan, A. C., Clauset, A. & Larremore, D. B. The misleading narrative of the canonical faculty productivity trajectory. Proc. Natl Acad. Sci. USA 114 , E9216–E9223 (2017).

Sinatra, R., Wang, D., Deville, P., Song, C. & Barabási, A.-L. Quantifying the evolution of individual scientific impact. Science 354 , aaf5239 (2016).

Liu, L. et al. Hot streaks in artistic, cultural, and scientific careers. Nature 559 , 396–399 (2018).

Liu, L., Dehmamy, N., Chown, J., Giles, C. L. & Wang, D. Understanding the onset of hot streaks across artistic, cultural, and scientific careers. Nat. Commun. 12 , 5392 (2021).

Squazzoni, F. et al. Peer review and gender bias: a study on 145 scholarly journals. Sci. Adv. 7 , eabd0299 (2021).

Hofstra, B. et al. The diversity–innovation paradox in science. Proc. Natl Acad. Sci. USA 117 , 9284–9291 (2020).

Huang, J., Gates, A. J., Sinatra, R. & Barabási, A.-L. Historical comparison of gender inequality in scientific careers across countries and disciplines. Proc. Natl Acad. Sci. USA 117 , 4609–4616 (2020).

Gläser, J. & Laudel, G. Governing science: how science policy shapes research content. Eur. J. Sociol. 57 , 117–168 (2016).

Stephan, P. E. How Economics Shapes Science (Harvard Univ. Press, 2012).

Garfield, E. & Sher, I. H. New factors in the evaluation of scientific literature through citation indexing. Am. Doc. 14 , 195–201 (1963).

Article   CAS   Google Scholar  

de Solla Price, D. J. Networks of scientific papers. Science 149 , 510–515 (1965).

Etzkowitz, H., Kemelgor, C. & Uzzi, B. Athena Unbound: The Advancement of Women in Science and Technology (Cambridge Univ. Press, 2000).

Simonton, D. K. Scientific Genius: A Psychology of Science (Cambridge Univ. Press, 1988).

Khabsa, M. & Giles, C. L. The number of scholarly documents on the public web. PLoS ONE 9 , e93949 (2014).

Xia, F., Wang, W., Bekele, T. M. & Liu, H. Big scholarly data: a survey. IEEE Trans. Big Data 3 , 18–35 (2017).

Evans, J. A. & Foster, J. G. Metaknowledge. Science 331 , 721–725 (2011).

Milojević, S. Quantifying the cognitive extent of science. J. Informetr. 9 , 962–973 (2015).

Rzhetsky, A., Foster, J. G., Foster, I. T. & Evans, J. A. Choosing experiments to accelerate collective discovery. Proc. Natl Acad. Sci. USA 112 , 14569–14574 (2015).

Poncela-Casasnovas, J., Gerlach, M., Aguirre, N. & Amaral, L. A. Large-scale analysis of micro-level citation patterns reveals nuanced selection criteria. Nat. Hum. Behav. 3 , 568–575 (2019).

Hardwicke, T. E. et al. Data availability, reusability, and analytic reproducibility: evaluating the impact of a mandatory open data policy at the journal Cognition. R. Soc. Open Sci. 5 , 180448 (2018).

Nagaraj, A., Shears, E. & de Vaan, M. Improving data access democratizes and diversifies science. Proc. Natl Acad. Sci. USA 117 , 23490–23498 (2020).

Bravo, G., Grimaldo, F., López-Iñesta, E., Mehmani, B. & Squazzoni, F. The effect of publishing peer review reports on referee behavior in five scholarly journals. Nat. Commun. 10 , 322 (2019).

Tran, D. et al. An open review of open review: a critical analysis of the machine learning conference review process. Preprint at https://doi.org/10.48550/arXiv.2010.05137 (2020).

Tshitoyan, V. et al. Unsupervised word embeddings capture latent knowledge from materials science literature. Nature 571 , 95–98 (2019).

Yang, Y., Wu, Y. & Uzzi, B. Estimating the deep replicability of scientific findings using human and artificial intelligence. Proc. Natl Acad. Sci. USA 117 , 10762–10768 (2020).

Mukherjee, S., Uzzi, B., Jones, B. & Stringer, M. A new method for identifying recombinations of existing knowledge associated with high‐impact innovation. J. Prod. Innov. Manage. 33 , 224–236 (2016).

Leahey, E., Beckman, C. M. & Stanko, T. L. Prominent but less productive: the impact of interdisciplinarity on scientists’ research. Adm. Sci. Q. 62 , 105–139 (2017).

Sauermann, H. & Haeussler, C. Authorship and contribution disclosures. Sci. Adv. 3 , e1700404 (2017).

Oliveira, D. F. M., Ma, Y., Woodruff, T. K. & Uzzi, B. Comparison of National Institutes of Health grant amounts to first-time male and female principal investigators. JAMA 321 , 898–900 (2019).

Yang, Y., Chawla, N. V. & Uzzi, B. A network’s gender composition and communication pattern predict women’s leadership success. Proc. Natl Acad. Sci. USA 116 , 2033–2038 (2019).

Way, S. F., Larremore, D. B. & Clauset, A. Gender, productivity, and prestige in computer science faculty hiring networks. In Proc. 25th International Conference on World Wide Web 1169–1179. (ACM 2016)

Malmgren, R. D., Ottino, J. M. & Amaral, L. A. N. The role of mentorship in protege performance. Nature 465 , 622–626 (2010).

Ma, Y., Mukherjee, S. & Uzzi, B. Mentorship and protégé success in STEM fields. Proc. Natl Acad. Sci. USA 117 , 14077–14083 (2020).

Börner, K. et al. Skill discrepancies between research, education, and jobs reveal the critical need to supply soft skills for the data economy. Proc. Natl Acad. Sci. USA 115 , 12630–12637 (2018).

Biasi, B. & Ma, S. The Education-Innovation Gap (National Bureau of Economic Research Working papers, 2020).

Bornmann, L. Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics. J. Informetr. 8 , 895–903 (2014).

Cleary, E. G., Beierlein, J. M., Khanuja, N. S., McNamee, L. M. & Ledley, F. D. Contribution of NIH funding to new drug approvals 2010–2016. Proc. Natl Acad. Sci. USA 115 , 2329–2334 (2018).

Spector, J. M., Harrison, R. S. & Fishman, M. C. Fundamental science behind today’s important medicines. Sci. Transl. Med. 10 , eaaq1787 (2018).

Haunschild, R. & Bornmann, L. How many scientific papers are mentioned in policy-related documents? An empirical investigation using Web of Science and Altmetric data. Scientometrics 110 , 1209–1216 (2017).

Yin, Y., Gao, J., Jones, B. F. & Wang, D. Coevolution of policy and science during the pandemic. Science 371 , 128–130 (2021).

Sugimoto, C. R., Work, S., Larivière, V. & Haustein, S. Scholarly use of social media and altmetrics: a review of the literature. J. Assoc. Inf. Sci. Technol. 68 , 2037–2062 (2017).

Dunham, I. Human genes: time to follow the roads less traveled? PLoS Biol. 16 , e3000034 (2018).

Kustatscher, G. et al. Understudied proteins: opportunities and challenges for functional proteomics. Nat. Methods 19 , 774–779 (2022).

Rosenthal, R. The file drawer problem and tolerance for null results. Psychol. Bull. 86 , 638 (1979).

Franco, A., Malhotra, N. & Simonovits, G. Publication bias in the social sciences: unlocking the file drawer. Science 345 , 1502–1505 (2014).

Vera-Baceta, M.-A., Thelwall, M. & Kousha, K. Web of Science and Scopus language coverage. Scientometrics 121 , 1803–1813 (2019).

Waltman, L. A review of the literature on citation impact indicators. J. Informetr. 10 , 365–391 (2016).

Garfield, E. & Merton, R. K. Citation Indexing: Its Theory and Application in Science, Technology, and Humanities (Wiley, 1979).

Kelly, B., Papanikolaou, D., Seru, A. & Taddy, M. Measuring Technological Innovation Over the Long Run Report No. 0898-2937 (National Bureau of Economic Research, 2018).

Kogan, L., Papanikolaou, D., Seru, A. & Stoffman, N. Technological innovation, resource allocation, and growth. Q. J. Econ. 132 , 665–712 (2017).

Hall, B. H., Jaffe, A. & Trajtenberg, M. Market value and patent citations. RAND J. Econ. 36 , 16–38 (2005).

Google Scholar  

Yan, E. & Ding, Y. Applying centrality measures to impact analysis: a coauthorship network analysis. J. Am. Soc. Inf. Sci. Technol. 60 , 2107–2118 (2009).

Radicchi, F., Fortunato, S., Markines, B. & Vespignani, A. Diffusion of scientific credits and the ranking of scientists. Phys. Rev. E 80 , 056103 (2009).

Bollen, J., Rodriquez, M. A. & Van de Sompel, H. Journal status. Scientometrics 69 , 669–687 (2006).

Bergstrom, C. T., West, J. D. & Wiseman, M. A. The eigenfactor™ metrics. J. Neurosci. 28 , 11433–11434 (2008).

Cronin, B. & Sugimoto, C. R. Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact (MIT Press, 2014).

Hicks, D., Wouters, P., Waltman, L., De Rijcke, S. & Rafols, I. Bibliometrics: the Leiden Manifesto for research metrics. Nature 520 , 429–431 (2015).

Catalini, C., Lacetera, N. & Oettl, A. The incidence and role of negative citations in science. Proc. Natl Acad. Sci. USA 112 , 13823–13826 (2015).

Alcacer, J. & Gittelman, M. Patent citations as a measure of knowledge flows: the influence of examiner citations. Rev. Econ. Stat. 88 , 774–779 (2006).

Ding, Y. et al. Content‐based citation analysis: the next generation of citation analysis. J. Assoc. Inf. Sci. Technol. 65 , 1820–1833 (2014).

Teufel, S., Siddharthan, A. & Tidhar, D. Automatic classification of citation function. In Proc. 2006 Conference on Empirical Methods in Natural Language Processing, 103–110 (Association for Computational Linguistics 2006)

Seeber, M., Cattaneo, M., Meoli, M. & Malighetti, P. Self-citations as strategic response to the use of metrics for career decisions. Res. Policy 48 , 478–491 (2019).

Pendlebury, D. A. The use and misuse of journal metrics and other citation indicators. Arch. Immunol. Ther. Exp. 57 , 1–11 (2009).

Biagioli, M. Watch out for cheats in citation game. Nature 535 , 201 (2016).

Jo, W. S., Liu, L. & Wang, D. See further upon the giants: quantifying intellectual lineage in science. Quant. Sci. Stud. 3 , 319–330 (2022).

Boyack, K. W., Klavans, R. & Börner, K. Mapping the backbone of science. Scientometrics 64 , 351–374 (2005).

Gates, A. J., Ke, Q., Varol, O. & Barabási, A.-L. Nature’s reach: narrow work has broad impact. Nature 575 , 32–34 (2019).

Börner, K., Penumarthy, S., Meiss, M. & Ke, W. Mapping the diffusion of scholarly knowledge among major US research institutions. Scientometrics 68 , 415–426 (2006).

King, D. A. The scientific impact of nations. Nature 430 , 311–316 (2004).

Pan, R. K., Kaski, K. & Fortunato, S. World citation and collaboration networks: uncovering the role of geography in science. Sci. Rep. 2 , 902 (2012).

Jaffe, A. B., Trajtenberg, M. & Henderson, R. Geographic localization of knowledge spillovers as evidenced by patent citations. Q. J. Econ. 108 , 577–598 (1993).

Funk, R. J. & Owen-Smith, J. A dynamic network measure of technological change. Manage. Sci. 63 , 791–817 (2017).

Yegros-Yegros, A., Rafols, I. & D’este, P. Does interdisciplinary research lead to higher citation impact? The different effect of proximal and distal interdisciplinarity. PLoS ONE 10 , e0135095 (2015).

Larivière, V., Haustein, S. & Börner, K. Long-distance interdisciplinarity leads to higher scientific impact. PLoS ONE 10 , e0122565 (2015).

Fleming, L., Greene, H., Li, G., Marx, M. & Yao, D. Government-funded research increasingly fuels innovation. Science 364 , 1139–1141 (2019).

Bowen, A. & Casadevall, A. Increasing disparities between resource inputs and outcomes, as measured by certain health deliverables, in biomedical research. Proc. Natl Acad. Sci. USA 112 , 11335–11340 (2015).

Li, D., Azoulay, P. & Sampat, B. N. The applied value of public investments in biomedical research. Science 356 , 78–81 (2017).

Lehman, H. C. Age and Achievement (Princeton Univ. Press, 2017).

Simonton, D. K. Creative productivity: a predictive and explanatory model of career trajectories and landmarks. Psychol. Rev. 104 , 66 (1997).

Duch, J. et al. The possible role of resource requirements and academic career-choice risk on gender differences in publication rate and impact. PLoS ONE 7 , e51332 (2012).

Wang, Y., Jones, B. F. & Wang, D. Early-career setback and future career impact. Nat. Commun. 10 , 4331 (2019).

Bol, T., de Vaan, M. & van de Rijt, A. The Matthew effect in science funding. Proc. Natl Acad. Sci. USA 115 , 4887–4890 (2018).

Jones, B. F. Age and great invention. Rev. Econ. Stat. 92 , 1–14 (2010).

Newman, M. Networks (Oxford Univ. Press, 2018).

Mazloumian, A., Eom, Y.-H., Helbing, D., Lozano, S. & Fortunato, S. How citation boosts promote scientific paradigm shifts and nobel prizes. PLoS ONE 6 , e18975 (2011).

Hirsch, J. E. An index to quantify an individual’s scientific research output. Proc. Natl Acad. Sci. USA 102 , 16569–16572 (2005).

Alonso, S., Cabrerizo, F. J., Herrera-Viedma, E. & Herrera, F. h-index: a review focused in its variants, computation and standardization for different scientific fields. J. Informetr. 3 , 273–289 (2009).

Egghe, L. An improvement of the h-index: the g-index. ISSI Newsl. 2 , 8–9 (2006).

Kaur, J., Radicchi, F. & Menczer, F. Universality of scholarly impact metrics. J. Informetr. 7 , 924–932 (2013).

Majeti, D. et al. Scholar plot: design and evaluation of an information interface for faculty research performance. Front. Res. Metr. Anal. 4 , 6 (2020).

Sidiropoulos, A., Katsaros, D. & Manolopoulos, Y. Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 72 , 253–280 (2007).

Jones, B. F. & Weinberg, B. A. Age dynamics in scientific creativity. Proc. Natl Acad. Sci. USA 108 , 18910–18914 (2011).

Dennis, W. Age and productivity among scientists. Science 123 , 724–725 (1956).

Sanyal, D. K., Bhowmick, P. K. & Das, P. P. A review of author name disambiguation techniques for the PubMed bibliographic database. J. Inf. Sci. 47 , 227–254 (2021).

Haak, L. L., Fenner, M., Paglione, L., Pentz, E. & Ratner, H. ORCID: a system to uniquely identify researchers. Learn. Publ. 25 , 259–264 (2012).

Malmgren, R. D., Ottino, J. M. & Amaral, L. A. N. The role of mentorship in protégé performance. Nature 465 , 662–667 (2010).

Oettl, A. Reconceptualizing stars: scientist helpfulness and peer performance. Manage. Sci. 58 , 1122–1140 (2012).

Morgan, A. C. et al. The unequal impact of parenthood in academia. Sci. Adv. 7 , eabd1996 (2021).

Morgan, A. C. et al. Socioeconomic roots of academic faculty. Nat. Hum. Behav. 6 , 1625–1633 (2022).

San Francisco Declaration on Research Assessment (DORA) (American Society for Cell Biology, 2012).

Falk‐Krzesinski, H. J. et al. Advancing the science of team science. Clin. Transl. Sci. 3 , 263–266 (2010).

Cooke, N. J. et al. Enhancing the Effectiveness of Team Science (National Academies Press, 2015).

Börner, K. et al. A multi-level systems perspective for the science of team science. Sci. Transl. Med. 2 , 49cm24 (2010).

Leahey, E. From sole investigator to team scientist: trends in the practice and study of research collaboration. Annu. Rev. Sociol. 42 , 81–100 (2016).

AlShebli, B. K., Rahwan, T. & Woon, W. L. The preeminence of ethnic diversity in scientific collaboration. Nat. Commun. 9 , 5163 (2018).

Hsiehchen, D., Espinoza, M. & Hsieh, A. Multinational teams and diseconomies of scale in collaborative research. Sci. Adv. 1 , e1500211 (2015).

Koning, R., Samila, S. & Ferguson, J.-P. Who do we invent for? Patents by women focus more on women’s health, but few women get to invent. Science 372 , 1345–1348 (2021).

Barabâsi, A.-L. et al. Evolution of the social network of scientific collaborations. Physica A 311 , 590–614 (2002).

Newman, M. E. Scientific collaboration networks. I. Network construction and fundamental results. Phys. Rev. E 64 , 016131 (2001).

Newman, M. E. Scientific collaboration networks. II. Shortest paths, weighted networks, and centrality. Phys. Rev. E 64 , 016132 (2001).

Palla, G., Barabási, A.-L. & Vicsek, T. Quantifying social group evolution. Nature 446 , 664–667 (2007).

Ross, M. B. et al. Women are credited less in science than men. Nature 608 , 135–145 (2022).

Shen, H.-W. & Barabási, A.-L. Collective credit allocation in science. Proc. Natl Acad. Sci. USA 111 , 12325–12330 (2014).

Merton, R. K. Matthew effect in science. Science 159 , 56–63 (1968).

Ni, C., Smith, E., Yuan, H., Larivière, V. & Sugimoto, C. R. The gendered nature of authorship. Sci. Adv. 7 , eabe4639 (2021).

Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N. & Malone, T. W. Evidence for a collective intelligence factor in the performance of human groups. Science 330 , 686–688 (2010).

Feldon, D. F. et al. Postdocs’ lab engagement predicts trajectories of PhD students’ skill development. Proc. Natl Acad. Sci. USA 116 , 20910–20916 (2019).

Boudreau, K. J. et al. A field experiment on search costs and the formation of scientific collaborations. Rev. Econ. Stat. 99 , 565–576 (2017).

Holcombe, A. O. Contributorship, not authorship: use CRediT to indicate who did what. Publications 7 , 48 (2019).

Murray, D. et al. Unsupervised embedding of trajectories captures the latent structure of mobility. Preprint at https://doi.org/10.48550/arXiv.2012.02785 (2020).

Deville, P. et al. Career on the move: geography, stratification, and scientific impact. Sci. Rep. 4 , 4770 (2014).

Edmunds, L. D. et al. Why do women choose or reject careers in academic medicine? A narrative review of empirical evidence. Lancet 388 , 2948–2958 (2016).

Waldinger, F. Peer effects in science: evidence from the dismissal of scientists in Nazi Germany. Rev. Econ. Stud. 79 , 838–861 (2012).

Agrawal, A., McHale, J. & Oettl, A. How stars matter: recruiting and peer effects in evolutionary biology. Res. Policy 46 , 853–867 (2017).

Fiore, S. M. Interdisciplinarity as teamwork: how the science of teams can inform team science. Small Group Res. 39 , 251–277 (2008).

Hvide, H. K. & Jones, B. F. University innovation and the professor’s privilege. Am. Econ. Rev. 108 , 1860–1898 (2018).

Murray, F., Aghion, P., Dewatripont, M., Kolev, J. & Stern, S. Of mice and academics: examining the effect of openness on innovation. Am. Econ. J. Econ. Policy 8 , 212–252 (2016).

Radicchi, F., Fortunato, S. & Castellano, C. Universality of citation distributions: toward an objective measure of scientific impact. Proc. Natl Acad. Sci. USA 105 , 17268–17272 (2008).

Waltman, L., van Eck, N. J. & van Raan, A. F. Universality of citation distributions revisited. J. Am. Soc. Inf. Sci. Technol. 63 , 72–77 (2012).

Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286 , 509–512 (1999).

de Solla Price, D. A general theory of bibliometric and other cumulative advantage processes. J. Am. Soc. Inf. Sci. 27 , 292–306 (1976).

Cole, S. Age and scientific performance. Am. J. Sociol. 84 , 958–977 (1979).

Ke, Q., Ferrara, E., Radicchi, F. & Flammini, A. Defining and identifying sleeping beauties in science. Proc. Natl Acad. Sci. USA 112 , 7426–7431 (2015).

Bornmann, L., de Moya Anegón, F. & Leydesdorff, L. Do scientific advancements lean on the shoulders of giants? A bibliometric investigation of the Ortega hypothesis. PLoS ONE 5 , e13327 (2010).

Mukherjee, S., Romero, D. M., Jones, B. & Uzzi, B. The nearly universal link between the age of past knowledge and tomorrow’s breakthroughs in science and technology: the hotspot. Sci. Adv. 3 , e1601315 (2017).

Packalen, M. & Bhattacharya, J. NIH funding and the pursuit of edge science. Proc. Natl Acad. Sci. USA 117 , 12011–12016 (2020).

Zeng, A., Fan, Y., Di, Z., Wang, Y. & Havlin, S. Fresh teams are associated with original and multidisciplinary research. Nat. Hum. Behav. 5 , 1314–1322 (2021).

Newman, M. E. The structure of scientific collaboration networks. Proc. Natl Acad. Sci. USA 98 , 404–409 (2001).

Larivière, V., Ni, C., Gingras, Y., Cronin, B. & Sugimoto, C. R. Bibliometrics: global gender disparities in science. Nature 504 , 211–213 (2013).

West, J. D., Jacquet, J., King, M. M., Correll, S. J. & Bergstrom, C. T. The role of gender in scholarly authorship. PLoS ONE 8 , e66212 (2013).

Gao, J., Yin, Y., Myers, K. R., Lakhani, K. R. & Wang, D. Potentially long-lasting effects of the pandemic on scientists. Nat. Commun. 12 , 6188 (2021).

Jones, B. F., Wuchty, S. & Uzzi, B. Multi-university research teams: shifting impact, geography, and stratification in science. Science 322 , 1259–1262 (2008).

Chu, J. S. & Evans, J. A. Slowed canonical progress in large fields of science. Proc. Natl Acad. Sci. USA 118 , e2021636118 (2021).

Wang, J., Veugelers, R. & Stephan, P. Bias against novelty in science: a cautionary tale for users of bibliometric indicators. Res. Policy 46 , 1416–1436 (2017).

Stringer, M. J., Sales-Pardo, M. & Amaral, L. A. Statistical validation of a global model for the distribution of the ultimate number of citations accrued by papers published in a scientific journal. J. Assoc. Inf. Sci. Technol. 61 , 1377–1385 (2010).

Bianconi, G. & Barabási, A.-L. Bose-Einstein condensation in complex networks. Phys. Rev. Lett. 86 , 5632 (2001).

Bianconi, G. & Barabási, A.-L. Competition and multiscaling in evolving networks. Europhys. Lett. 54 , 436 (2001).

Yin, Y. & Wang, D. The time dimension of science: connecting the past to the future. J. Informetr. 11 , 608–621 (2017).

Pan, R. K., Petersen, A. M., Pammolli, F. & Fortunato, S. The memory of science: Inflation, myopia, and the knowledge network. J. Informetr. 12 , 656–678 (2018).

Yin, Y., Wang, Y., Evans, J. A. & Wang, D. Quantifying the dynamics of failure across science, startups and security. Nature 575 , 190–194 (2019).

Candia, C. & Uzzi, B. Quantifying the selective forgetting and integration of ideas in science and technology. Am. Psychol. 76 , 1067 (2021).

Milojević, S. Principles of scientific research team formation and evolution. Proc. Natl Acad. Sci. USA 111 , 3984–3989 (2014).

Guimera, R., Uzzi, B., Spiro, J. & Amaral, L. A. N. Team assembly mechanisms determine collaboration network structure and team performance. Science 308 , 697–702 (2005).

Newman, M. E. Coauthorship networks and patterns of scientific collaboration. Proc. Natl Acad. Sci. USA 101 , 5200–5205 (2004).

Newman, M. E. Clustering and preferential attachment in growing networks. Phys. Rev. E 64 , 025102 (2001).

Iacopini, I., Milojević, S. & Latora, V. Network dynamics of innovation processes. Phys. Rev. Lett. 120 , 048301 (2018).

Kuhn, T., Perc, M. & Helbing, D. Inheritance patterns in citation networks reveal scientific memes. Phys. Rev. 4 , 041036 (2014).

Jia, T., Wang, D. & Szymanski, B. K. Quantifying patterns of research-interest evolution. Nat. Hum. Behav. 1 , 0078 (2017).

Zeng, A. et al. Increasing trend of scientists to switch between topics. Nat. Commun. https://doi.org/10.1038/s41467-019-11401-8 (2019).

Siudem, G., Żogała-Siudem, B., Cena, A. & Gagolewski, M. Three dimensions of scientific impact. Proc. Natl Acad. Sci. USA 117 , 13896–13900 (2020).

Petersen, A. M. et al. Reputation and impact in academic careers. Proc. Natl Acad. Sci. USA 111 , 15316–15321 (2014).

Jin, C., Song, C., Bjelland, J., Canright, G. & Wang, D. Emergence of scaling in complex substitutive systems. Nat. Hum. Behav. 3 , 837–846 (2019).

Hofman, J. M. et al. Integrating explanation and prediction in computational social science. Nature 595 , 181–188 (2021).

Lazer, D. et al. Computational social science. Science 323 , 721–723 (2009).

Lazer, D. M. et al. Computational social science: obstacles and opportunities. Science 369 , 1060–1062 (2020).

Albert, R. & Barabási, A.-L. Statistical mechanics of complex networks. Rev. Mod. Phys. 74 , 47 (2002).

Newman, M. E. The structure and function of complex networks. SIAM Rev. 45 , 167–256 (2003).

Song, C., Qu, Z., Blumm, N. & Barabási, A.-L. Limits of predictability in human mobility. Science 327 , 1018–1021 (2010).

Alessandretti, L., Aslak, U. & Lehmann, S. The scales of human mobility. Nature 587 , 402–407 (2020).

Pastor-Satorras, R. & Vespignani, A. Epidemic spreading in scale-free networks. Phys. Rev. Lett. 86 , 3200 (2001).

Pastor-Satorras, R., Castellano, C., Van Mieghem, P. & Vespignani, A. Epidemic processes in complex networks. Rev. Mod. Phys. 87 , 925 (2015).

Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).

Bishop, C. M. Pattern Recognition and Machine Learning (Springer, 2006).

Dong, Y., Johnson, R. A. & Chawla, N. V. Will this paper increase your h-index? Scientific impact prediction. In Proc. 8th ACM International Conference on Web Search and Data Mining, 149–158 (ACM 2015)

Xiao, S. et al. On modeling and predicting individual paper citation count over time. In IJCAI, 2676–2682 (IJCAI, 2016)

Fortunato, S. Community detection in graphs. Phys. Rep. 486 , 75–174 (2010).

Chen, C. Science mapping: a systematic review of the literature. J. Data Inf. Sci. 2 , 1–40 (2017).

CAS   Google Scholar  

Van Eck, N. J. & Waltman, L. Citation-based clustering of publications using CitNetExplorer and VOSviewer. Scientometrics 111 , 1053–1070 (2017).

LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521 , 436–444 (2015).

Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577 , 706–710 (2020).

Krenn, M. & Zeilinger, A. Predicting research trends with semantic and neural networks with an application in quantum physics. Proc. Natl Acad. Sci. USA 117 , 1910–1916 (2020).

Iten, R., Metger, T., Wilming, H., Del Rio, L. & Renner, R. Discovering physical concepts with neural networks. Phys. Rev. Lett. 124 , 010508 (2020).

Guimerà, R. et al. A Bayesian machine scientist to aid in the solution of challenging scientific problems. Sci. Adv. 6 , eaav6971 (2020).

Segler, M. H., Preuss, M. & Waller, M. P. Planning chemical syntheses with deep neural networks and symbolic AI. Nature 555 , 604–610 (2018).

Ryu, J. Y., Kim, H. U. & Lee, S. Y. Deep learning improves prediction of drug–drug and drug–food interactions. Proc. Natl Acad. Sci. USA 115 , E4304–E4311 (2018).

Kermany, D. S. et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172 , 1122–1131.e9 (2018).

Peng, H., Ke, Q., Budak, C., Romero, D. M. & Ahn, Y.-Y. Neural embeddings of scholarly periodicals reveal complex disciplinary organizations. Sci. Adv. 7 , eabb9004 (2021).

Youyou, W., Yang, Y. & Uzzi, B. A discipline-wide investigation of the replicability of psychology papers over the past two decades. Proc. Natl Acad. Sci. USA 120 , e2208863120 (2023).

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. & Galstyan, A. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54 , 1–35 (2021).

Way, S. F., Morgan, A. C., Larremore, D. B. & Clauset, A. Productivity, prominence, and the effects of academic environment. Proc. Natl Acad. Sci. USA 116 , 10729–10733 (2019).

Li, W., Aste, T., Caccioli, F. & Livan, G. Early coauthorship with top scientists predicts success in academic careers. Nat. Commun. 10 , 5170 (2019).

Hendry, D. F., Pagan, A. R. & Sargan, J. D. Dynamic specification. Handb. Econ. 2 , 1023–1100 (1984).

Jin, C., Ma, Y. & Uzzi, B. Scientific prizes and the extraordinary growth of scientific topics. Nat. Commun. 12 , 5619 (2021).

Azoulay, P., Ganguli, I. & Zivin, J. G. The mobility of elite life scientists: professional and personal determinants. Res. Policy 46 , 573–590 (2017).

Slavova, K., Fosfuri, A. & De Castro, J. O. Learning by hiring: the effects of scientists’ inbound mobility on research performance in academia. Organ. Sci. 27 , 72–89 (2016).

Sarsons, H. Recognition for group work: gender differences in academia. Am. Econ. Rev. 107 , 141–145 (2017).

Campbell, L. G., Mehtani, S., Dozier, M. E. & Rinehart, J. Gender-heterogeneous working groups produce higher quality science. PLoS ONE 8 , e79147 (2013).

Azoulay, P., Graff Zivin, J. S. & Wang, J. Superstar extinction. Q. J. Econ. 125 , 549–589 (2010).

Furman, J. L. & Stern, S. Climbing atop the shoulders of giants: the impact of institutions on cumulative research. Am. Econ. Rev. 101 , 1933–1963 (2011).

Williams, H. L. Intellectual property rights and innovation: evidence from the human genome. J. Polit. Econ. 121 , 1–27 (2013).

Rubin, A. & Rubin, E. Systematic Bias in the Progress of Research. J. Polit. Econ. 129 , 2666–2719 (2021).

Lu, S. F., Jin, G. Z., Uzzi, B. & Jones, B. The retraction penalty: evidence from the Web of Science. Sci. Rep. 3 , 3146 (2013).

Jin, G. Z., Jones, B., Lu, S. F. & Uzzi, B. The reverse Matthew effect: consequences of retraction in scientific teams. Rev. Econ. Stat. 101 , 492–506 (2019).

Azoulay, P., Bonatti, A. & Krieger, J. L. The career effects of scandal: evidence from scientific retractions. Res. Policy 46 , 1552–1569 (2017).

Goodman-Bacon, A. Difference-in-differences with variation in treatment timing. J. Econ. 225 , 254–277 (2021).

Callaway, B. & Sant’Anna, P. H. Difference-in-differences with multiple time periods. J. Econ. 225 , 200–230 (2021).

Hill, R. Searching for Superstars: Research Risk and Talent Discovery in Astronomy Working Paper (Massachusetts Institute of Technology, 2019).

Bagues, M., Sylos-Labini, M. & Zinovyeva, N. Does the gender composition of scientific committees matter? Am. Econ. Rev. 107 , 1207–1238 (2017).

Sampat, B. & Williams, H. L. How do patents affect follow-on innovation? Evidence from the human genome. Am. Econ. Rev. 109 , 203–236 (2019).

Moretti, E. & Wilson, D. J. The effect of state taxes on the geographical location of top earners: evidence from star scientists. Am. Econ. Rev. 107 , 1858–1903 (2017).

Jacob, B. A. & Lefgren, L. The impact of research grant funding on scientific productivity. J. Public Econ. 95 , 1168–1177 (2011).

Li, D. Expertise versus bias in evaluation: evidence from the NIH. Am. Econ. J. Appl. Econ. 9 , 60–92 (2017).

Pearl, J. Causal diagrams for empirical research. Biometrika 82 , 669–688 (1995).

Pearl, J. & Mackenzie, D. The Book of Why: The New Science of Cause and Effect (Basic Books, 2018).

Traag, V. A. Inferring the causal effect of journals on citations. Quant. Sci. Stud. 2 , 496–504 (2021).

Traag, V. & Waltman, L. Causal foundations of bias, disparity and fairness. Preprint at https://doi.org/10.48550/arXiv.2207.13665 (2022).

Imbens, G. W. Potential outcome and directed acyclic graph approaches to causality: relevance for empirical practice in economics. J. Econ. Lit. 58 , 1129–1179 (2020).

Heckman, J. J. & Pinto, R. Causality and Econometrics (National Bureau of Economic Research, 2022).

Aggarwal, I., Woolley, A. W., Chabris, C. F. & Malone, T. W. The impact of cognitive style diversity on implicit learning in teams. Front. Psychol. 10 , 112 (2019).

Balietti, S., Goldstone, R. L. & Helbing, D. Peer review and competition in the Art Exhibition Game. Proc. Natl Acad. Sci. USA 113 , 8414–8419 (2016).

Paulus, F. M., Rademacher, L., Schäfer, T. A. J., Müller-Pinzler, L. & Krach, S. Journal impact factor shapes scientists’ reward signal in the prospect of publication. PLoS ONE 10 , e0142537 (2015).

Williams, W. M. & Ceci, S. J. National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track. Proc. Natl Acad. Sci. USA 112 , 5360–5365 (2015).

Collaboration, O. S. Estimating the reproducibility of psychological science. Science 349 , aac4716 (2015).

Camerer, C. F. et al. Evaluating replicability of laboratory experiments in economics. Science 351 , 1433–1436 (2016).

Camerer, C. F. et al. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nat. Hum. Behav. 2 , 637–644 (2018).

Duflo, E. & Banerjee, A. Handbook of Field Experiments (Elsevier, 2017).

Tomkins, A., Zhang, M. & Heavlin, W. D. Reviewer bias in single versus double-blind peer review. Proc. Natl Acad. Sci. USA 114 , 12708–12713 (2017).

Blank, R. M. The effects of double-blind versus single-blind reviewing: experimental evidence from the American Economic Review. Am. Econ. Rev. 81 , 1041–1067 (1991).

Boudreau, K. J., Guinan, E. C., Lakhani, K. R. & Riedl, C. Looking across and looking beyond the knowledge frontier: intellectual distance, novelty, and resource allocation in science. Manage. Sci. 62 , 2765–2783 (2016).

Lane, J. et al. When Do Experts Listen to Other Experts? The Role of Negative Information in Expert Evaluations for Novel Projects Working Paper #21-007 (Harvard Business School, 2020).

Teplitskiy, M. et al. Do Experts Listen to Other Experts? Field Experimental Evidence from Scientific Peer Review (Harvard Business School, 2019).

Moss-Racusin, C. A., Dovidio, J. F., Brescoll, V. L., Graham, M. J. & Handelsman, J. Science faculty’s subtle gender biases favor male students. Proc. Natl Acad. Sci. USA 109 , 16474–16479 (2012).

Forscher, P. S., Cox, W. T., Brauer, M. & Devine, P. G. Little race or gender bias in an experiment of initial review of NIH R01 grant proposals. Nat. Hum. Behav. 3 , 257–264 (2019).

Dennehy, T. C. & Dasgupta, N. Female peer mentors early in college increase women’s positive academic experiences and retention in engineering. Proc. Natl Acad. Sci. USA 114 , 5964–5969 (2017).

Azoulay, P. Turn the scientific method on ourselves. Nature 484 , 31–32 (2012).

Download references

Acknowledgements

The authors thank all members of the Center for Science of Science and Innovation (CSSI) for invaluable comments. This work was supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0354, National Science Foundation grant SBE 1829344, and the Alfred P. Sloan Foundation G-2019-12485.

Author information

Authors and affiliations.

Center for Science of Science and Innovation, Northwestern University, Evanston, IL, USA

Lu Liu, Benjamin F. Jones, Brian Uzzi & Dashun Wang

Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL, USA

Kellogg School of Management, Northwestern University, Evanston, IL, USA

College of Information Sciences and Technology, Pennsylvania State University, University Park, PA, USA

National Bureau of Economic Research, Cambridge, MA, USA

Benjamin F. Jones

Brookings Institution, Washington, DC, USA

McCormick School of Engineering, Northwestern University, Evanston, IL, USA

  • Dashun Wang

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Dashun Wang .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature Human Behaviour thanks Ludo Waltman, Erin Leahey and Sarah Bratt for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Cite this article.

Liu, L., Jones, B.F., Uzzi, B. et al. Data, measurement and empirical methods in the science of science. Nat Hum Behav 7 , 1046–1058 (2023). https://doi.org/10.1038/s41562-023-01562-4

Download citation

Received : 30 June 2022

Accepted : 17 February 2023

Published : 01 June 2023

Issue Date : July 2023

DOI : https://doi.org/10.1038/s41562-023-01562-4

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

This article is cited by

Publication, funding, and experimental data in support of human reference atlas construction and usage.

  • Yongxin Kong
  • Katy Börner

Scientific Data (2024)

Rescaling the disruption index reveals the universality of disruption distributions in science

  • Alex J. Yang
  • Hongcun Gong
  • Sanhong Deng

Scientometrics (2024)

Scientific Data (2023)

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.

quantitative empirical research methods include the following except

The Empirical Research: Context, Data, and Methods

  • First Online: 01 September 2023

Cite this chapter

quantitative empirical research methods include the following except

  • Eleonora Rossero 2  

59 Accesses

This chapter constitutes a reflexive account, necessary to clarify the theoretical assumptions, the researcher’s characteristics, and the methods employed. First of all, I will begin by introducing the seminal work of Erving Goffman on asylums, as well as more recent ethnographic contributions on acute mental healthcare. Then, I provide a description of the context in which the empirical study took place, the research design, and the techniques employed. Lastly, I will reflect on the personal implications of being on the field.

This is a preview of subscription content, log in via an institution to check access.

Access this chapter

  • Available as PDF
  • Read on any device
  • Instant download
  • Own it forever
  • Available as EPUB and PDF
  • Durable hardcover edition
  • Dispatched in 3 to 5 business days
  • Free shipping worldwide - see info

Tax calculation will be finalised at checkout

Purchases are for personal use only

Institutional subscriptions

His name was Andrea Soldi, he was 45 years old, and he suffered from schizophrenia. He died on August 5, 2015, in Turin, after the violent intervention of municipal police officers who were there to execute a mandatory medical treatment (“ Trattamento Sanitario Obbligatorio ”, TSO). The police officers and the psychiatrist have been convicted for his death in October 2020, with a sentence ( Corte d’Appello ) of one year and six months.

The not-so-frequently deep and prolonged crisis this boy was undergoing put the Violet Centre’s professionals under much pressure. On one hand, this allowed the raising of many crucial issues I had the opportunity to discuss with them as they unfolded in front of my eyes. On the other hand, the time they dedicated to me and my questions despite the challenging situation counts double. For this, I feel deeply grateful to them.

I also had the impression that for some nurses the constant presence of strangers was disturbing. In particular, I remember hearing the following exclamation as I was introduced to a nurse I met for the first time: “oh, so we also have sociologists now?”. I understand the problem here was not about me personally, but nevertheless I did not perceive it as a pleasant welcoming.

Åkerström, M. (2002). Slaps, punches, pinches—But not violence: Boundary-work in nursing homes for the elderly. Symbolic Interaction, 25 (4), 515–536.

Article   Google Scholar  

Allen, D. (2001). Narrating nursing jurisdiction: “Atrocity stories” and “boundary-work”. Symbolic Interaction, 24 (1), 75–103.

Babini, V. (2009). Liberi tutti: manicomi e psichiatri in Italia: una storia del Novecento . il Mulino.

Google Scholar  

Bonner, G., Lowe, T., Rawcliffe, D., & Wellman, N. (2002). Trauma for all: A pilot study of the subjective experience of physical restraint for mental health inpatients and staff in the UK. Journal of Psychiatric and Mental Health Nursing, 9 (4), 465–473.

Article   PubMed   Google Scholar  

Brown, B., Crawford, P., Gilbert, P., Gilbert, J., & Gale, C. (2014). Practical compassions: Repertoires of practice and compassion talk in acute mental healthcare. Sociology of Health & Illness, 36 (3), 383–399.

Buus, N. (2008). Negotiating clinical knowledge: A field study of psychiatric nurses’ everyday communication. Nursing Inquiry, 15 (3), 189–198.

Cardano, M. (2011). La ricerca qualitativa . il Mulino.

Cardano, M. (2020). Defending qualitative research: Design, analysis and textualization . Routledge.

Book   Google Scholar  

Cetina, K. K., Schatzki, T. R., & Von Savigny, E. (Eds.). (2005). The practice turn in contemporary theory . Routledge.

Cussins, C. M. (1996). Ontological choreography: Agency through objectification in infertility clinics. Social Studies of Science, 26 (3), 575–610.

Dell’Acqua, G., Norcio, B., de Girolamo, G., Barbato, A., Bracco, R., Gaddini, A., Miglio, R., Morosini, P., Picardi, A., Rossi, E., Rucci, P., & Santone, G. (2007). Caratteristiche e attività delle strutture di ricovero per pazienti psichiatrici acuti: i risultati dell’indagine nazionale “Progress Acuti”. Giornale Italiano di Psicopatologia, 13 , 26–39.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (2017). The Sage handbook of qualitative research . Sage.

Di Lorenzo, R., Baraldi, S., Ferrara, M., Mimmi, S., & Rigatelli, M. (2012). Physical restraints in an Italian psychiatric ward: Clinical reasons and staff organization problems. Perspectives in Psychiatric Care, 48 (2), 95–107.

Di Napoli, W., & Andreatta, O. (2014). A “no-restraint” psychiatric department: Operative protocols and outcome data from the “opened-doors experience” in Trento. Psychiatria Danubina, 26 (1), 138–141.

PubMed   Google Scholar  

Ferioli, V. (2013). Contenzione: aspetti clinici, giuridici e psico-dinamici. Psichiatria e Psicoterapia, 32 (1), 29–44.

Foucault, M. (2006a). History of madness . Routledge.

Foucault, M. (2006b). Psychiatric power: Lectures at the Collège de France, 1973–1974 . Palgrave Macmillan.

Gieryn, T. F. (1983). Boundary-work and the demarcation of science from non-science: Strains and interests in professional ideologies of scientists. American Sociological Review, 48 , 781–795.

Goffman, E. (1961). Asylums: Essays on the social situations of mental patients and other inmates . Anchor Books.

Gomm, R., Hammersley, M., & Foster, P. (Eds.). (2000). Case study method: Key issues, key texts . Sage.

Goodwin, C. (1994). Professional vision. American Anthropologist, 96 (3), 606–633.

Griffin, C., & Bengry-Howell, A. (2017). Ethnography. In C. Willig & W. Sainton Rogers (Eds.), The SAGE handbook of qualitative research in psychology (pp. 38–54). SAGE Publications Ltd.

Chapter   Google Scholar  

Hamilton, B. E., & Manias, E. (2007). Rethinking nurses’ observations: Psychiatric nursing skills and invisibility in an acute inpatient setting. Social Science & Medicine, 65 (2), 331–343.

Jacob, J. D., Holmes, D., Rioux, D., & Corneau, P. (2018). Patients’ perspective on mechanical restraints in acute and emergency psychiatric settings: A poststructural feminist analysis. In J. M. Kilty & E. Dej (Eds.), Containing madness. Gender and ‘psy’ in institutional contexts (pp. 93–117). Palgrave Macmillan.

Johansson, I. M., Skärsäter, I., & Danielson, E. (2006). The health-care environment on a locked psychiatric ward: An ethnographic study. International Journal of Mental Health Nursing, 15 (4), 242–250.

Johnston, M. S., & Kilty, J. M. (2014). Power, control and coercion: Exploring hyper-masculine performativity by private guards in a psychiatric ward setting. In D. Holmes, A. Perron, & J. D. Jacob (Eds.), Power and the psychiatric apparatus: Repression, transformation and assistance (pp. 61–90). Ashgate Publishing, Ltd.

Katz, P., & Kirkland, F. R. (1990). Violence and social structure on mental hospital wards. Psychiatry, 53 (3), 262–277.

Kersting, X. A. K., Hirsch, S., & Steinert, T. (2019). Physical harm and death in the context of coercive measures in psychiatric patients: A systematic review. Frontiers in Psychiatry, 10 , 400.

Article   PubMed   PubMed Central   Google Scholar  

Lamont, M., & Molnár, V. (2002). The study of boundaries in the social sciences. Annual Review of Sociology, 28 (1), 167–195.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation . Cambridge University Press.

Law, J. (2010). Care and killing: Tensions in veterinary practice. In A. Mol, I. Moser, & J. Pols (Eds.), Care in practice: On tinkering in clinics, homes and farms (Vol. 8, pp. 57–72). transcript Verlag.

Mason, J. (2002). Qualitative Researching , 2nd Edition. London: Sage Publications.

Mauceri, S. (Ed.). (2017). “Contenere” la contenzione meccanica in Italia. Primo rapporto sui diritti negati dalla pratica di legare coercitivamente i pazienti psichiatrici nei SPDC. A Buon Diritto—Quaderni .

Mezzina, R. (2014). Community mental health care in Trieste and beyond: An “Open Door–No Restraint” system of care for recovery and citizenship. The Journal of Nervous and Mental Disease, 202 (6), 440–445.

Morrison, E. F. (1990). The tradition of toughness: A study of nonprofessional nursing care in psychiatric settings. Image: The Journal of Nursing Scholarship, 22 (1), 32–38.

Okin, R. (2020). The Trieste model. In T. Burns & J. Foot (Eds.), Basaglia’s international legacy: From asylum to community (pp. 317–331). Oxford University Press.

Oliveira, T. T. S. D. S., Fabrici, E. P., & Santos, M. A. D. (2018). Structure and functioning of a mental health team of Trieste in its members’ perspective: A qualitative study. Psicologia em Pesquisa, 12 (2), 24–35.

Pilgrim, D. (2002). The biopsychosocial model in Anglo-American psychiatry: Past, present and future? Journal of Mental Health, 11 (6), 585–594.

Pols, J. (2003). Enforcing patient rights or improving care? The interference of two modes of doing good in mental health care. Sociology of Health & Illness, 25 (4), 320–347.

Prior, P. M. (1995). Surviving psychiatric institutionalisation: A case study. Sociology of Health & Illness, 17 (5), 651–667.

Quirk, A., & Lelliott, P. (2001). What do we know about life on acute psychiatric wards in the UK? A review of the research evidence. Social Science & Medicine, 53 (12), 1565–1574.

Quirk, A., & Lelliott, P. (2004). Users’ experiences of inpatient services. In P. Campling, S. Davies, & G. Farquharson (Eds.), From toxic institutions to therapeutic environments . Gaskell.

Quirk, A., Lelliott, P., & Seale, C. (2006). The permeable institution: An ethnographic study of three acute psychiatric wards in London. Social Science & Medicine, 63 (8), 2105–2117.

Ragin, C. C., & Becker, H. S. (Eds.). (1992). What is a case?: Exploring the foundations of social inquiry . Cambridge University Press.

Rhodes, L. A. (1991). Emptying beds: The work of an emergency psychiatric unit . University of California Press.

Rogers, A., & Pilgrim, D. (2014). A sociology of mental health and illness . McGraw-Hill Education UK.

Toresini, L. (2007). SPDC no restraint. La sfida della cura. In AA.VV., I Servizi Psichiatrici di Diagnosi e Cura. L’utopia della cura in ospedale . Edizioni Co.Pro.S, Caltagirone.

Working Party on Psychiatry and Human Rights. (2000). White Paper on the protection of the human rights and dignity of people suffering from mental disorder, especially those placed as involuntary patients in a psychiatric establishment . Strasbourg.

Download references

Author information

Authors and affiliations.

University of Turin, Turin, Italy

Eleonora Rossero

You can also search for this author in PubMed   Google Scholar

Rights and permissions

Reprints and permissions

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Rossero, E. (2023). The Empirical Research: Context, Data, and Methods. In: Care in a Time of Crisis. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-34418-3_4

Download citation

DOI : https://doi.org/10.1007/978-3-031-34418-3_4

Published : 01 September 2023

Publisher Name : Palgrave Macmillan, Cham

Print ISBN : 978-3-031-34417-6

Online ISBN : 978-3-031-34418-3

eBook Packages : Behavioral Science and Psychology Behavioral Science and Psychology (R0)

Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research

Information

  • Author Services

Initiatives

You are accessing a machine-readable page. In order to be human-readable, please install an RSS reader.

All articles published by MDPI are made immediately available worldwide under an open access license. No special permission is required to reuse all or part of the article published by MDPI, including figures and tables. For articles published under an open access Creative Common CC BY license, any part of the article may be reused without permission provided that the original article is clearly cited. For more information, please refer to https://www.mdpi.com/openaccess .

Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications.

Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the reviewers.

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Original Submission Date Received: .

  • Active Journals
  • Find a Journal
  • Proceedings Series
  • For Authors
  • For Reviewers
  • For Editors
  • For Librarians
  • For Publishers
  • For Societies
  • For Conference Organizers
  • Open Access Policy
  • Institutional Open Access Program
  • Special Issues Guidelines
  • Editorial Process
  • Research and Publication Ethics
  • Article Processing Charges
  • Testimonials
  • Preprints.org
  • SciProfiles
  • Encyclopedia

sustainability-logo

Article Menu

quantitative empirical research methods include the following except

  • Subscribe SciFeed
  • Recommended Articles
  • Google Scholar
  • on Google Scholar
  • Table of Contents

Find support for a specific problem in the support section of our website.

Please let us know what you think of our products and services.

Visit our dedicated information section to learn more about MDPI.

JSmol Viewer

Quantitative evaluation of china’s biogenetic resources conservation policies based on the policy modeling consistency index model, 1. introduction, 2. literature review, 3. research design, 3.1. research methodology, 3.2. sample sources, 4. policy text analysis, 5. pmc index model construction, 5.1. variable identification and indicator selection, 5.2. construction of multiple input–output tables, 5.3. calculation method of pmc index model, 5.4. the method of pmc index model surface drawing, 6. empirical measurements, 6.1. selection of the sample of policies for empirical evaluation, 6.2. grade evaluation, 6.3. pmc index calculation, 6.4. presentation of pmc surface plot, 7. results and discussion, 7.1. general evaluation of the empirical results, 7.2. evaluation by grade, 8. conclusions, author contributions, institutional review board statement, informed consent statement, data availability statement, conflicts of interest.

  • Boursot, P.; Desmarais, E. Genetic evaluation of biodiversity. Biofuture 1997 , 17 , 29–33. [ Google Scholar ] [ CrossRef ]
  • Fransen, A.; Bulkeley, H. Transnational Governing at the Climate-Biodiversity Frontier: Employing a Governmentality Perspective. Glob. Environ. Politics 2024 , 24 , 76–99. [ Google Scholar ] [ CrossRef ]
  • Salgotra, R.K.; Chauhan, B.S. Genetic diversity, conservation, and utilization of plant genetic resources. Genes 2023 , 14 , 18–25. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Wang, T.; Li, M.; Rasheed, M.F. The nexus between resource depletion, price fluctuations, and sustainable development in expenditure on resources. Resour. Policy 2024 , 89 , 117–125. [ Google Scholar ] [ CrossRef ]
  • Francolini, E.M.; Mann-Lang, J.B.; McKinley, E.; Mann, B.Q.; Abrahams, M.I. Stakeholder perspectives on socio-economic challenges and recommendations for better management of the Aliwal Shoal Marine protected area in South Africa. Mar. Policy 2023 , 148 , 102–121. [ Google Scholar ] [ CrossRef ]
  • Sherman, B.; Henry, R.J. The Nagoya Protocol and historical collections of plants. Nat. Plants 2020 , 6 , 430–432. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Nam, M. Analysis of genetic resources protection policy under the perspective of intellectual property rights. J. Cent. Univ. Natl. (Sci. Ed.) 2019 , 28 , 53–56. [ Google Scholar ]
  • Dai, S.; Zhang, W.; Zong, J.; Wang, Y.; Wang, G. How effective is the green development policy of China’s Yangtze River economic belt? A quantitative evaluation based on the PMC-index model. Int. J. Environ. Res. Public Health 2021 , 18 , 7676. [ Google Scholar ] [ CrossRef ]
  • Hoban, S.; da Silva, J.M.; Hughes, A.; Hunter, M.E.; Kalamujić Stroil, B.; Laikre, L.; Mastretta-Yanes, A.; Millette, K.; Paz-Vinas, I.; Bustos, L.R.; et al. Too simple, too complex, or just right? Advantages, challenges, and guidance for indicators of genetic diversity. Bioscience 2024 , 11 , 49–53. [ Google Scholar ] [ CrossRef ]
  • Hoban, S.; Arntzen, J.W.; Bertorelle, G.; Bryja, J.; Fernandes, M.; Frith, K.; Gaggiotti, O.; Galbusera, P.; Godoy, J.A.; Hauffe, H.C.; et al. Conservation genetic resources for effective species survival (congress): Bridging the divide between conservation research and practice. J. Nat. Conserv. 2013 , 21 , 433–437. [ Google Scholar ] [ CrossRef ]
  • Legese, K.; Bekele, A. Assessment of challenges and opportunities for wildlife conservation in Wenchi highlands, central Ethiopia. Trop. Conserv. Sci. 2023 , 16 , 533–541. [ Google Scholar ] [ CrossRef ]
  • Walls, S.C. Coping with constraints: Achieving effective conservation with limited resources. Front. Ecol. Evol. 2018 , 6 , 26–35. [ Google Scholar ] [ CrossRef ]
  • Grajal, A. Biodiversity and the nation state: Regulating access to genetic resources limits biodiversity research in developing countries. Conserv. Biol. 1999 , 13 , 6–10. [ Google Scholar ] [ CrossRef ]
  • Trommetter, M. Biodiversity and international stakes: A question of access. Ecol. Econ. 2005 , 53 , 573–583. [ Google Scholar ] [ CrossRef ]
  • Andrade, R.; van Riper, C.J.; Goodson, D.J.; Johnson, D.N.; Stewart, W.; López-Rodríguez, M.D.; Cebrián-Piqueras, M.A.; Horcea-Milcu, A.I.; Lo, V.; Raymond, C.M. Values shift in response to social learning through deliberation about protected areas. Glob. Environ. Chang.-Hum. Policy Dimens. 2023 , 78 , 96–101. [ Google Scholar ] [ CrossRef ]
  • Locatelli, B.; Laurenceau, M.; Chumpisuca, Y.R.; Pramova, E.; Vallet, A.; Conde, Y.Q.; Zavala, R.C.; Djoudi, H.; Lavorel, S.; Colloff, M.J. In people’s minds and on the ground: Values and power in climate change adaptation. Environ. Sci. Policy 2022 , 137 , 75–86. [ Google Scholar ] [ CrossRef ]
  • Gollin, D.; Evenson, R. Valuing animal genetic resources: Lessons from plant genetic resources. Ecol. Econ. 2003 , 45 , 353–363. [ Google Scholar ] [ CrossRef ]
  • Winge, T. Linking access and benefit-sharing for crop genetic resources to climate change adaptation. Plant Genet. Resour.-Charact. Util. 2016 , 14 , 11–27. [ Google Scholar ] [ CrossRef ]
  • Cowell, C.; Paton, A.; Borrell, J.S.; Williams, C.; Wilkin, P.; Antonelli, A.; Baker, W.J.; Buggs, R.; Fay, M.F.; Gargiulo, R.; et al. Uses and benefits of digital sequence information from plant genetic resources: Lessons learnt from botanical collections. Plants People Planet 2022 , 4 , 33–43. [ Google Scholar ] [ CrossRef ]
  • Geary, J.; Bubela, T. Governance of a global genetic resource commons for non-commercial research: A case-study of the DNA barcode commons. Int. J. Commons 2019 , 13 , 205–243. [ Google Scholar ] [ CrossRef ]
  • Lawson, C.; Rourke, M.; Humphries, F. Information as the latest site of conflict in the ongoing contests about access to and sharing the benefits from exploiting genetic resources. Queen Mary J. Intellect. Prop. 2020 , 10 , 7–33. [ Google Scholar ] [ CrossRef ]
  • Putterman, D. Trade and the biodiversity convention. Nature 1994 , 371 , 553–554. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Anwar, M.; Khattak, M.S.; Popp, J.; Meyer, D.F.; Máté, D. The nexus of government incentives and sustainable development goals: Is the management of resources the solution to non-profit organisations? Technol. Econ. Dev. Econ. 2020 , 26 , 1284–1310. [ Google Scholar ] [ CrossRef ]
  • Xu, S.; He, X.; Xu, L. Market or government: Who plays a decisive role in R&D resource allocation? China Financ. Rev. Int. 2019 , 9 , 110–136. [ Google Scholar ]
  • Marjanović, N.; Jovanović, V.; Ratknić, T.; Paunković, D. The role of leadership in natural resource conservation and sustainable development—A case study of local self-government of eastern serbia. Ekon. Poljopr.-Econ. Agric. 2019 , 66 , 889–903. [ Google Scholar ] [ CrossRef ]
  • Baker, T.; Nelson, R.E. Creating something from nothing: Resource construction through entrepreneurial bricolage. Adm. Sci. Q. 2005 , 50 , 329–366. [ Google Scholar ] [ CrossRef ]
  • Cusack, C.; Cohen, B.; Mignone, J.; Chartier, M.J.; Lutfiyya, Z. Participatory action as a research method with public health nurses. J. Adv. Nurs. 2018 , 74 , 1544–1553. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ylönen, M.; Salmivaara, A. Policy coherence across agenda 2030 and the sustainable development goals: Lessons from Finland. Dev. Policy Rev. 2021 , 39 , 829–847. [ Google Scholar ] [ CrossRef ]
  • Davies, J.K.; Sherriff, N.S. Assessing public health policy approaches to level-up the gradient in health inequalities: The Gradient Evaluation Framework. Public Health 2014 , 128 , 246–253. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kuhlmann, S. Evaluation of research and innovation policies: A discussion of trends with examples from Germany. Int. J. Technol. Manag. 2003 , 26 , 131–149. [ Google Scholar ] [ CrossRef ]
  • Wang, G.; Yang, Y. Quantitative Evaluation of Digital Economy Policy in Heilongjiang Province of China Based on the PMC-AE Index Model. Sage Open 2024 , 14 , 13–19. [ Google Scholar ] [ CrossRef ]
  • Brandt, L.; Biesebroeck, J.V.; Zhang, Y. Creative accounting or creative destruction? firm-level productivity growth in Chinese manufacturing. J. Dev. Econ. 2012 , 97 , 339–351. [ Google Scholar ] [ CrossRef ]
  • Qin, Q.; Sun, Y. Assessing the Intention to Provide Human Genetic Resources: An Explanatory Model. Public Health Genom. 2020 , 23 , 133–148. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Attridge, J. Innovation models in the biopharmaceutical sector. Int. J. Innov. Manag. 2007 , 11 , 215–243. [ Google Scholar ] [ CrossRef ]
  • Wang, N.; Wang, W.; Song, T.; Wang, H.; Cheng, Z. A quantitative evaluation of water resource management policies in China based on the PMC index model. Water Policy 2022 , 24 , 1859–1875. [ Google Scholar ] [ CrossRef ]
  • Yang, Y.; Tang, J.; Li, Z.; Wen, J. How effective is the health promotion policy in Sichuan, China: Based on the PMC-Index model and field evaluation. BMC Public Health 2022 , 22 , 53–57. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Zhang, Y.; Wang, T.; Wang, C.; Cheng, C. Quantitative Evaluation of China’s CSR Policies Based on the PMC-Index Model. Sustainability 2023 , 15 , 7194. [ Google Scholar ] [ CrossRef ]
  • Dai, S.; Zhang, W.; Lan, L. Quantitative Evaluation of China’s Ecological Protection Compensation Policy Based on PMC Index Model. Int. J. Environ. Res. Public Health 2022 , 19 , 10227. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Estrada, M. Policy modeling: Definition, classification and evaluation. J. Policy Model. 2011 , 33 , 523–536. [ Google Scholar ] [ CrossRef ]
  • Huang, G.; Shen, X.; Zhang, X.; Gu, W. Quantitative evaluation of China’s central-level land consolidation policies in the past forty years based on the text analysis and PMC-index model. Land 2023 , 12 , 223–231. [ Google Scholar ] [ CrossRef ]
  • Ruiz, E.; Yap, S.; Nagaraj, S. Beyond the ceteris paribus assumption: Modeling demand and supply assuming omnia mobilis. Soc. Sci. Electron. Publ. 2010 , 17 , 1522–1531. [ Google Scholar ]
  • Zhang, Y.; Qie, H. Research on quantitative evaluation of popular entrepreneurship and innovation policies--taking the intelligence of 10 shuangchuang policies in 2017 as an example. J. Intell. 2018 , 37 , 158–164+186. [ Google Scholar ]
  • Ding, X.; Fang, Y. Research on mining and quantitative evaluation of support policies for “chinese core”. Soft Sci. 2019 , 33 , 34–39. [ Google Scholar ]
  • Wu, W.; Sheng, L.; Tang, F.; Zhang, A. Quantitative evaluation of manufacturing innovation policies based on feature analysis. Sci. Res. 2020 , 38 , 2246–2257. [ Google Scholar ]
  • Liu, S.; Pang, Y.; Zhang, H.; Wang, B.; Ye, B. Comprehensive evaluation index system and assessment method for natural forest resources protection project in China. J. Ecol. 2021 , 41 , 5067–5079. [ Google Scholar ]
  • Ma, J.; Du, G.; Xia, C. CO 2 emission changes of China’s power generation system: Input-output subsystem analysis. Energy Policy 2019 , 12 , 1–12. [ Google Scholar ] [ CrossRef ]
  • Cai, D.; Chai, Y.; Tian, Z. Quantitative evaluation of digital economy policy texts in Jilin Province based on PMC index model. Intell. Sci. 2021 , 39 , 139–145. [ Google Scholar ]
  • Shi, L.; Huang, X.; Huang, J. Content analysis and quantitative evaluation of national fitness public service policy based on TM-PMC index model. China Sports Sci. Technol. 2023 , 59 , 13–22. [ Google Scholar ]
  • Gu, Y.; He, D.; Huang, J.; Sun, H.; Wang, H. Research on the policy environment of China’s healthcare big data development based on PMC index model. China Health Policy Res. 2022 , 15 , 45–51. [ Google Scholar ]
  • Rao, M.; Johnson, A.; Spence, K.; Sypasong, A.; Bynum, N.; Sterling, E.; Phimminith, T.; Praxaysombath, B. Building Capacity for Protected Area Management in Lao PDR. Environ. Manag. 2014 , 53 , 715–727. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Qin, T. The legislative positioning of the Biosafety Law and its unfolding. Soc. Sci. J. 2020 , 248 , 134–147+209. [ Google Scholar ]
  • Liu, D.; Zhang, F.; Wu, X.; Li, J. The progress and application of genetic resources value assessment. Environ. Sustain. Dev. 2015 , 40 , 19–22. [ Google Scholar ]
  • Poudel, D.; Johnsen, F.H. Valuation of crop genetic resources in Kaski, Nepal: Farmers’ willingness to pay for rice landraces conservation. J. Environ. Manag. 2009 , 90 , 483–491. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Zhang, B.; Cao, C. Policy: Four gaps in China’s new environmental law. Nature 2015 , 517 , 433–434. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Cao, C. China’s evolving biosafety/biosecurity legislations. J. Law Biosci. 2021 , 8 , 20. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Qin, M.; Yue, C.; Du, Y. Evolution of China’s marine ranching policy based on the perspective of policy tools. Mar. Policy 2020 , 117 , 103941. [ Google Scholar ] [ CrossRef ]

Click here to enlarge figure

WordFrequencyWordFrequencyWordFrequencyWordFrequency
Creatures4141Conservation820Survey401Aquatic255
Resources2833Utilization782Livestock399Genebank231
Genetics2337Species780Establishment390Seed Farms229
Government1930Environment753Maintenance378Protected Zones220
Agriculture1667Ecology692Benefits377Research193
Diversity1433Protection559Encouragement371Animal188
Rural1220Development503Human360Laboratory179
Fisheries1009Prevention499Technology347Knowledge177
Collection997Security441Production323Innovation169
Development985Local440Revision283Microbiology167
Safeguard967gene429Construction280Mentoring166
Data906Promote416Organization277Enabling162
Management821Oversight403Forestry263Information160
First-Level VariableNo.Second-Level VariableNo.MetricsRationale for the Establishment
Policy NatureX SupervisionX Involved or not, 1 for yes, 0 for noReferring to the research setup of Y.A. Zhang et al. [ ].
GuidanceX Involved or not, 1 for yes, 0 for no
RecommendationX Involved or not, 1 for yes, 0 for no
ForecastingX Involved or not, 1 for yes, 0 for no
PlanningX Involved or not, 1 for yes, 0 for no
Policy TimelinessX Long-termX Yes or no, 1 for yes, 0 for noReferring to the research setup of X.J. Ding et al. [ ].
Mid-termX Yes or no, 1 for yes, 0 for no
Short-termX Yes or no, 1 for yes, 0 for no
Policy LevelX NationalX Yes or no, 1 for yes, 0 for noReferring to the research setup of W.H. Wu et al. [ ].
ProvincialX Yes or no, 1 for yes, 0 for no
MunicipalX Yes or no, 1 for yes, 0 for no
Policy SubjectX NPC Standing CommitteeX Yes or no, 1 for yes, 0 for noReferring to the research setup of Y.A. Zhang et al. [ ] and based on the policy sample’s issuing subject setting
General Office of the State CouncilX Yes or no, 1 for yes, 0 for no
State Ministries and CommissionsX Yes or no, 1 for yes, 0 for no
Provincial and Municipal OfficesX Yes or no, 1 for yes, 0 for no
Policy AreaX PoliticsX Involved or not, 1 for yes, 0 for noReferring to Ruiz Estrada’s research setup
EconomicsX Involved or not, 1 for yes, 0 for no
CulturalX Involved or not, 1 for yes, 0 for no
SocialX Involved or not, 1 for yes, 0 for no
Science and TechnologyX Involved or not, 1 for yes, 0 for no
Policy ContentX BiosecurityX Involved or not, 1 for yes, 0 for noReferring to S. Liu et al. [ ]’s study and setting up the content analysis based on the policy sample
Resource ConservationX Involved or not, 1 for yes, 0 for no
Seedstock ConservationX Involved or not, 1 for yes, 0 for no
BiodiversityX Involved or not, 1 for yes, 0 for no
Science and Technology InnovationX Involved or not, 1 for yes, 0 for no
ResearchX Involved or not, 1 for yes, 0 for no
Genetic DataX Involved or not, 1 for yes, 0 for no
Economic DevelopmentX Involved or not, 1 for yes, 0 for no
ExploitationX Involved or not, 1 for yes, 0 for no
ImmigrationX Involved or not, 1 for yes, 0 for no
Policy FunctionX Organizational LeadershipX Involved or not, 1 for yes, 0 for noBased on the policy objectives of the sample and the analytical settings of the text keywords
Publicity and PopularizationX Involved or not, 1 for yes, 0 for no
Financial SupportX Involved or not, 1 for yes, 0 for no
Technical TestingX Involved or not, 1 for yes, 0 for no
Education and TrainingX Involved or not, 1 for yes, 0 for no
Supervision MechanismX Involved or not, 1 for yes, 0 for no
Policy EvaluationX Scientific ProgrammeX Yes or no, 1 for yes, 0 for noReferring to the research setup of Y.A. Zhang et al. [ ].
Clear GoalsX Yes or no, 1 for yes, 0 for no
Soundly BasedX Yes or no, 1 for yes, 0 for no
Encouragement of InnovationX Yes or no, 1 for yes, 0 for no
Policy ReceptorsX ProvincialX Targeted or not, 1 for yes, 0 for noSet up for textual analysis of policy samples
Autonomous regions and municipalitiesX Targeted or not, 1 for yes, 0 for no
MunicipalitiesX Targeted or not, 1 for yes, 0 for no
OtherX Targeted or not, 1 for yes, 0 for no
First-Level VariableSecond-Level Variable
X X  X  X  X  X
X X  X  X
X X  X  X
X X  X  X  X
X X  X  X  X  X
X X  X  X  X  X  X  X  X  X  X
X X  X  X  X  X  X
X X  X  X  X
X X  X  X  X
No.Name of the PolicyPublishersRelease Date
P1Law on BiosafetyStanding Committee of the National People’s Congress (NPC)17 October 2020
P2Regulations on the Management of Human Genetic ResourcesState Council (PRC)28 May 2019
P3Opinions of the General Office of the State Council on Strengthening the Protection and Utilization of Agricultural Germplasm ResourcesState Council Office of the People’s Republic of China30 December 2019
P4Measures for the Management of Livestock and Poultry Genetic Resources Breeding Reserve Sanctuaries and Gene Banks(former) Ministry of Agriculture5 June 2006
P5Regulations on Biodiversity Protection in Yunnan ProvinceYunnan Provincial People’s Congress (including Standing Committee)21 September 2018
P6Measures for the Management of Access and Benefit-Sharing of Biogenetic Resources and Associated Traditional Knowledge of the Guangxi Zhuang Autonomous Region (Trial Implementation)Department of Ecology and Environment of Guangxi Zhuang Autonomous Region24 September 2021
P7Notice of the General Office of the People’s Government of Shanxi Province on Strengthening the Conservation of Aquatic Bio-resources and Promoting the Sustainable Development of FisheriesPeople’s Government of Shanxi Province22 September 2006
P8Implementation Opinions of the People’s Government of Shanghai on Further Strengthening Biodiversity ConservationPeople’s Government of Shanghai Municipality18 November 2022
P9Regulations on the Protection of Biodiversity in Xiangxi Tujia and Miao Autonomous PrefectureStanding Committee of Xiangxi Tujia and Miao Autonomous Prefecture People’s Congress30 July 2020
P10Opinions on Strengthening the Management of Compensation for Losses of Marine Biological Resources Issued by the Office of the Lianyungang Municipal GovernmentPeople’s Government of Lianyungang City7 November 2017
PMC Index8~96~7.994~5.990~3.99
Grade Excellent Good Acceptable Failing
First-Level VariableP1P2P3P4P5P6P7P8P9P10Mean Value
X 0.8000.8000.8000.8000.8000.8000.8000.8000.8000.8000.800
X 0.3330.3330.3330.3330.3330.3330.3330.3330.3330.3330.333
X 1.0001.0001.0001.0000.6670.6670.6670.6670.3330.3330.733
X 0.2500.2500.2500.2500.2500.2500.2500.2500.2500.2500.250
X 0.8000.8001.0000.8001.0000.8000.6001.0000.8001.0000.860
X 0.9000.8001.0000.7000.8000.9000.8000.9000.8000.9000.840
X 1.0001.0001.0001.0001.0001.0000.8331.0001.0001.0000.983
X 1.0000.5001.0000.5000.7500.7500.5001.0001.0001.0000.800
X 1.0001.0001.0000.7500.7500.7500.7500.5000.5000.5000.750
PMC Index7.0836.4837.3835.8136.3506.2505.1336.4505.8166.1166.349
Policy Rank24195610387/
Policy GradeGoodGoodGoodAcceptableGoodGoodAcceptableGoodAcceptableGood/
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Qi, L.; Chen, W.; Li, C.; Song, X.; Ge, L. Quantitative Evaluation of China’s Biogenetic Resources Conservation Policies Based on the Policy Modeling Consistency Index Model. Sustainability 2024 , 16 , 5158. https://doi.org/10.3390/su16125158

Qi L, Chen W, Li C, Song X, Ge L. Quantitative Evaluation of China’s Biogenetic Resources Conservation Policies Based on the Policy Modeling Consistency Index Model. Sustainability . 2024; 16(12):5158. https://doi.org/10.3390/su16125158

Qi, Liwen, Wenjing Chen, Chunyan Li, Xiaoting Song, and Lanqing Ge. 2024. "Quantitative Evaluation of China’s Biogenetic Resources Conservation Policies Based on the Policy Modeling Consistency Index Model" Sustainability 16, no. 12: 5158. https://doi.org/10.3390/su16125158

Article Metrics

Article access statistics, further information, mdpi initiatives, follow mdpi.

MDPI

Subscribe to receive issue release notifications and newsletters from MDPI journals

Ch.1 Mastering Biology

Profile Picture

Students also viewed

Profile Picture

IMAGES

  1. Empirical Research: Definition, Methods, Types and Examples

    quantitative empirical research methods include the following except

  2. What Is Empirical Research?

    quantitative empirical research methods include the following except

  3. Empirical Research Methodology

    quantitative empirical research methods include the following except

  4. 15 Research Methodology Examples (2024)

    quantitative empirical research methods include the following except

  5. Quantitative Research

    quantitative empirical research methods include the following except

  6. QUANTITATIVE RESEARCH METHODS Quantitative Research Methods Include a

    quantitative empirical research methods include the following except

VIDEO

  1. Exploring Qualitative and Quantitative Research Methods and why you should use them

  2. Difference In Difference

  3. QUANTITATIVE RESEARCH

  4. Empirical research methods

  5. Empirical Probability

  6. EDFN3520_WK02B_Lecture

COMMENTS

  1. Ch 1 Qs Flashcards

    1 This is correct. Objectivity and the use of empirical data are unique to the scientific method and not associated with other ways of knowing. Study with Quizlet and memorize flashcards containing terms like 1. Quantitative research uses the following methods of data collection except: 1. Surveys 2.

  2. Research Quiz 1 Flashcards

    Research Quiz 1. Quantitative research uses the following methods of data collection except: Click the card to flip 👆. Quantitative nursing research uses approaches that can be quantified. Participant observation is used in qualitative research. Click the card to flip 👆.

  3. Sociology: Chapter 2 (Discover Sociological Research)

    Study with Quizlet and memorize flashcards containing terms like The process of gathering empirical (scientific and specific) data, creating theories, and rigorously testing theories is known as ______. a. the sociological method b. the scientific method c. data collection d. theoretical reasoning, Which of the following is an example of quantitative research? a. forty in-depth interviews with ...

  4. Multiple Choice Quiz

    9. Qualitative research is used in all the following circumstances, EXCEPT: It is based on a collection of non-numerical data such as words and pictures; It often uses small samples; It uses the inductive method; It is typically used when a great deal is already known about the topic of interest

  5. Chapter Four: Quantitative Methods (Part 1)

    These parts can also be used as a checklist when working through the steps of your study. Specifically, part 1 focuses on planning a quantitative study (collecting data), part two explains the steps involved in doing a quantitative study, and part three discusses how to make sense of your results (organizing and analyzing data). Research Methods.

  6. What Is Quantitative Research?

    Revised on June 22, 2023. Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations. Quantitative research is the opposite of qualitative research, which involves collecting and analyzing ...

  7. What is Quantitative Research? Definition, Methods, Types, and Examples

    Quantitative research is the process of collecting and analyzing numerical data to describe, predict, or control variables of interest. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations. The purpose of quantitative research is to test a predefined ...

  8. What Is Quantitative Research?

    Revised on 10 October 2022. Quantitative research is the process of collecting and analysing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalise results to wider populations. Quantitative research is the opposite of qualitative research, which involves collecting and ...

  9. Quantitative Methods

    Definition. Quantitative method is the collection and analysis of numerical data to answer scientific research questions. Quantitative method is used to summarize, average, find patterns, make predictions, and test causal associations as well as generalizing results to wider populations.

  10. Research Methods--Quantitative, Qualitative, and More: Overview

    About Research Methods. This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the "how" for building systematic knowledge.

  11. Empirical Research: Quantitative & Qualitative

    This guide provides an overview of empirical research and quantitative and qualitative social science research methods. ... Generates a report of findings that includes expressive language and a personal voice.

  12. Empirical Research: Definition, Methods, Types and Examples

    Types and methodologies of empirical research. Empirical research can be conducted and analysed using qualitative or quantitative methods. Quantitative research: Quantitative research methods are used to gather information through numerical data. It is used to quantify opinions, behaviors or other defined variables.

  13. Empirical Research: Quantitative & Qualitative

    Quantitative Research. A quantitative research project is characterized by having a population about which the researcher wants to draw conclusions, but it is not possible to collect data on the entire population.. For an observational study, it is necessary to select a proper, statistical random sample and to use methods of statistical inference to draw conclusions about the population.

  14. Qualitative, Quantitative, & Empirical Studies

    QUANTITATIVE Studies · Generate numerical data or data that can be converted into numbers. · Can sort out HOW MUCH or WHEN · Can be an empirical article when its methodology describes original research (i.e., not a review article). Terms for Searching/Search Strategies: Examples of quantitative studies: • Case report - report on a single patient; • Case series - report on a series of ...

  15. What is "Empirical Research"?

    Another hint: some scholarly journals use a specific layout, called the "IMRaD" format, to communicate empirical research findings. Such articles typically have 4 components: Introduction : sometimes called "literature review" -- what is currently known about the topic -- usually includes a theoretical framework and/or discussion of previous ...

  16. DCJ Program: What is Empirical Research?

    An empirical article may report a study that used quantitative research methods, which generate numerical data and seek to establish causal relationships between two or more variables. They may also report on a study that uses qualitative research methods, which objectively and critically analyze behaviors, beliefs, feelings, or values with few ...

  17. Chapter 5

    Terms in this set (25) In comparison to qualitative research, quantitative research involves all of the following EXCEPT: Results that apply to other situations. Lucy has collected Twitter posts and compiled these posts in a large spreadsheet. She will analyze the posts as part of her research project to determine if men and women react ...

  18. Qualitative, Quantitative & Empirical Research

    Provides an in-depth description of the research methods to be used: Previous Knowledge: Researcher has a general idea of what will be discovered: Phase in Process: Usually occurs early in the research process: Research Design: Design is developed during research: Data-Gathering: Researcher gathers data from interviews, etc. Form of Data

  19. Data, measurement and empirical methods in the science of science

    The first is in data 9: modern databases include millions of research articles, grant proposals, patents and more. This windfall of data traces scientific activity in remarkable detail and at scale.

  20. Introduction to Empirical Data Analysis

    1.1.1 Empirical Studies and Quantitative Data Analysis. Empirical research involves the collection of data and their evaluation using qualitative or quantitative methods. ... The following methods are covered: Chapter 2 ... An alternative option is to use the IBM SPSS Statistics Premium package which includes all the procedures of the Basic and ...

  21. 3165 final Flashcards

    Study with Quizlet and memorize flashcards containing terms like Quantitative research uses the following methods of data collection except: 1. surveys. 2. questionnaires. 3. participant observation. 4. psychosocial instruments., Knowledge is information acquired in a variety of different ways. Methods used to acquire this knowledge are referred to as: 1. scientific integrity. 2. scientific ...

  22. The Empirical Research: Context, Data, and Methods

    The concrete empirical contexts that hosted the research, identified as information-rich cases that could function as best examples of the studied phenomenon, are the following. The first case, representing the "restraint" model, comprises an acute psychiatric ward, which I will call the Pine Ward, and a Mental Health Centre I call the ...

  23. Sustainability

    Biogenetic resources are the foundation of biodiversity and are of great significance to the sustainability of human society. The effective promotion of biogenetic resource conservation depends on the scientific formulation and implementation of relevant policies, so the quantitative evaluation of biogenetic resource conservation policies can provide decision support for the next step of ...

  24. Ch.1 Mastering Biology Flashcards

    Study with Quizlet and memorize flashcards containing terms like The scientific method includes all of the following EXCEPT:, A carefully formulated scientific explanation that is based on extensive observations and is in accord with scientific principles is called, The smallest units that still retain the characteristics of an element are called and more.