Random vs. Systematic Error | Definition & Examples

Published on May 7, 2021 by Pritha Bhandari . Revised on June 22, 2023.

In scientific research, measurement error is the difference between an observed value and the true value of something. It’s also called observation error or experimental error.

There are two main types of measurement error:

  • Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

  • Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently registers weights as higher than they actually are).

By recognizing the sources of error, you can reduce their impact and record accurate and precise measurements. If they go unnoticed, these errors can lead to research biases like omitted variable bias or information bias.

Table of contents

  • Are random or systematic errors worse?
  • Random error
  • Reducing random error
  • Systematic error
  • Reducing systematic error
  • Other interesting articles
  • Frequently asked questions about random and systematic error

In research, systematic errors are generally a bigger problem than random errors.

Random error isn’t necessarily a mistake, but rather a natural part of measurement. There is always some variability in measurements, even when you measure the same thing repeatedly, because of fluctuations in the environment, the instrument, or your own interpretations.

But variability can be a problem when it affects your ability to draw valid conclusions about relationships between variables. This is more likely to occur as a result of systematic error.

Precision vs accuracy

Random error mainly affects precision , which is how reproducible the same measurement is under equivalent circumstances. In contrast, systematic error affects the accuracy of a measurement, or how close the observed value is to the true value.

Taking measurements is similar to hitting a central target on a dartboard. For accurate measurements, you aim to get your dart (your observations) as close to the target (the true values) as you possibly can. For precise measurements, you aim to get repeated observations as close to each other as possible.

Random error introduces variability between different measurements of the same thing, while systematic error skews your measurement away from the true value in a specific direction.

Precision vs accuracy

When you only have random error, if you measure the same thing multiple times, your measurements will tend to cluster or vary around the true value. Some values will be higher than the true score, while others will be lower. When you average out these measurements, you’ll get very close to the true score.

For this reason, random error isn’t considered a big problem when you’re collecting data from a large sample—the errors in different directions will cancel each other out when you calculate descriptive statistics. But it could affect the precision of your dataset when you have a small sample.
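This cancellation is easy to see in a quick simulation. The numbers below are made up for illustration: a hypothetical true value of 50 and zero-mean Gaussian noise standing in for random error.

```python
import random

random.seed(42)  # make the sketch reproducible
true_value = 50.0  # hypothetical true value of the thing being measured

# Each measurement is the true value plus zero-mean random error.
def measure(n):
    return [true_value + random.gauss(0, 2.0) for _ in range(n)]

small = measure(5)
large = measure(10_000)

# Errors in different directions cancel out, so the mean of the large
# sample lands much closer to the true value than a small sample usually does.
print(abs(sum(small) / len(small) - true_value))
print(abs(sum(large) / len(large) - true_value))
```

With only five readings, the average can still miss the true value by a sizeable fraction of the noise level; with ten thousand, it typically lands within a few hundredths of a unit.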

Systematic errors are much more problematic than random errors because they can skew your data to lead you to false conclusions. If you have systematic error, your measurements will be biased away from the true values. Ultimately, you might make a false positive or a false negative conclusion (a Type I or II error) about the relationship between the variables you’re studying.


Random error affects your measurements in unpredictable ways: your measurements are equally likely to be higher or lower than the true values.

In the graph below, the black line represents a perfect match between the true scores and observed scores of a scale. In an ideal world, all of your data would fall on exactly that line. The green dots represent the actual observed scores for each measurement with random error added.

Random error

Random error is often referred to as “noise”, because it blurs the true value (or the “signal”) of what’s being measured. Keeping random error low helps you collect precise data.

Sources of random errors

Some common sources of random error include:

  • natural variations in real world or experimental contexts.
  • imprecise or unreliable measurement instruments.
  • individual differences between participants or units.
  • poorly controlled experimental procedures.
  • Natural variations in context: In a study of memory capacity, your participants are scheduled for memory tests at different times of day. However, some participants tend to perform better in the morning while others perform better later in the day, so your measurements do not reflect the true extent of memory capacity for each individual.
  • Imprecise instrument: You measure wrist circumference using a tape measure. But your tape measure is only accurate to the nearest half-centimeter, so you round each measurement up or down when you record data.
  • Individual differences: You ask participants to administer a safe electric shock to themselves and rate their pain level on a 7-point rating scale. Because pain is subjective, it’s hard to reliably measure. Some participants overstate their levels of pain, while others understate them.

Random error is almost always present in research, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error using the following methods.

Take repeated measurements

A simple way to increase precision is by taking repeated measurements and using their average. For example, you might measure the wrist circumference of a participant three times and get slightly different lengths each time. Taking the mean of the three measurements, instead of using just one, brings you much closer to the true value.
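As a minimal sketch of this step (the readings below are hypothetical), averaging repeated measurements looks like this:

```python
# Hypothetical wrist-circumference readings (cm) for one participant.
readings = [16.2, 16.4, 16.3]

# The mean of the repeated measurements smooths out the random error
# present in any single reading.
mean_reading = sum(readings) / len(readings)
print(round(mean_reading, 2))  # 16.3
```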

Increase your sample size

Large samples have less random error than small samples. That’s because the errors in different directions cancel each other out more efficiently when you have more data points. Collecting data from a large sample increases precision and statistical power.

Control variables

In controlled experiments , you should carefully control any extraneous variables that could impact your measurements. These should be controlled for all participants so that you remove key sources of random error across the board.

Systematic error means that your measurements of the same thing will vary in predictable ways: every measurement will differ from the true measurement in the same direction, and even by the same amount in some cases.

Systematic error is also referred to as bias because your data is skewed in standardized ways that hide the true values. This may lead to inaccurate conclusions.

Types of systematic errors

Offset errors and scale factor errors are two quantifiable types of systematic error.

An offset error occurs when a scale isn’t calibrated to a correct zero point. It’s also called an additive error or a zero-setting error.

A scale factor error is when measurements consistently differ from the true value proportionally (e.g., by 10%). It’s also referred to as a correlational systematic error or a multiplier error.

You can plot offset errors and scale factor errors in graphs to identify their differences. In the graphs below, the black line shows when your observed value is the exact true value, and there is no random error.

The blue line is an offset error: it shifts all of your observed values upwards or downwards by a fixed amount (here, it’s one additional unit).

The purple line is a scale factor error: all of your observed values are multiplied by a factor—all values are shifted in the same direction by the same proportion, but by different absolute amounts.

Systematic error
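The difference between the two error types can be sketched in a few lines of code. The true values, offset, and scale factor below are illustrative, matching the graphs’ one-unit shift and a 10% multiplier:

```python
true_values = [10.0, 20.0, 30.0, 40.0]

# Offset (additive) error: every observation is shifted by a fixed amount.
offset = 1.0
offset_readings = [v + offset for v in true_values]

# Scale factor (multiplicative) error: every observation is off by the
# same proportion, so the absolute deviation grows with the true value.
factor = 1.10  # readings run 10% high
scaled_readings = [round(v * factor, 2) for v in true_values]

print(offset_readings)   # [11.0, 21.0, 31.0, 41.0]
print(scaled_readings)   # [11.0, 22.0, 33.0, 44.0]
```

Note how the two kinds of error coincide at 10 but diverge as the true value grows: the offset stays at one unit everywhere, while the scale factor error reaches four units at a true value of 40.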

Sources of systematic errors

The sources of systematic error can range from your research materials to your data collection procedures and to your analysis techniques. This isn’t an exhaustive list of systematic error sources, because they can come from all aspects of research.

Response bias occurs when your research materials (e.g., questionnaires) prompt participants to answer or act in inauthentic ways through leading questions. For example, social desirability bias can lead participants to try to conform to societal norms, even if that’s not how they truly feel.

Example of a leading question: “Experts believe that only systematic actions can reduce the effects of climate change. Do you agree that individual actions are pointless?”

Experimenter drift occurs when observers become fatigued, bored, or less motivated after long periods of data collection or coding, and they slowly depart from using standardized procedures in identifiable ways.

Example: Initially, you code all subtle and obvious behaviors that fit your criteria as cooperative. But after spending days on this task, you only code the most obviously helpful actions as cooperative.

Sampling bias occurs when some members of a population are more likely to be included in your study than others. It reduces the generalizability of your findings, because your sample isn’t representative of the whole population.


You can reduce systematic errors by implementing these methods in your study.

Triangulation

Triangulation means using multiple techniques to record observations so that you’re not relying on only one instrument or method.

For example, if you’re measuring stress levels, you can use survey responses, physiological recordings, and reaction times as indicators. You can check whether all three of these measurements converge or overlap to make sure that your results don’t depend on the exact instrument used.

Regular calibration

Calibrating an instrument means comparing what the instrument records with the true value of a known, standard quantity. Regularly calibrating your instrument with an accurate reference helps reduce the likelihood of systematic errors affecting your study.

You can also calibrate observers or researchers in terms of how they code or record data. Use standard protocols and routine checks to avoid experimenter drift.

Randomization

Probability sampling methods help ensure that your sample doesn’t systematically differ from the population.

In addition, if you’re doing an experiment, use random assignment to place participants into different treatment conditions. This helps counter bias by balancing participant characteristics across groups.

Wherever possible, you should hide the condition assignment from participants and researchers through masking (blinding).

Participants’ behaviors or responses can be influenced by experimenter expectancies and demand characteristics in the environment, so controlling these will help you reduce systematic bias.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Random and systematic error are two types of measurement error.

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample, the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions (Type I and II errors) about the relationship between the variables you’re studying.

Random error is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables.

You can avoid systematic error through careful design of your sampling, data collection, and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment; and apply masking (blinding) where possible.

Cite this Scribbr article


Bhandari, P. (2023, June 22). Random vs. Systematic Error | Definition & Examples. Scribbr. Retrieved August 8, 2024, from https://www.scribbr.com/methodology/random-vs-systematic-error/


Understanding Experimental Errors: Types, Causes, and Solutions

Types of Experimental Errors

In scientific experiments, errors can occur that affect the accuracy and reliability of the results. These errors are often classified into three main categories: systematic errors, random errors, and human errors. Here are some common types of experimental errors:

1. Systematic Errors

Systematic errors are consistent and predictable errors that occur throughout an experiment. They can arise from flaws in equipment, calibration issues, or flawed experimental design. Some examples of systematic errors include:

– Instrumental Errors: These errors occur due to inaccuracies or limitations of the measuring instruments used in the experiment. For example, a thermometer may consistently read temperatures slightly higher or lower than the actual value.

– Environmental Errors: Changes in environmental conditions, such as temperature or humidity, can introduce systematic errors. For instance, if an experiment requires precise temperature control, fluctuations in the room temperature can impact the results.

– Procedural Errors: Errors in following the experimental procedure can lead to systematic errors. This can include improper mixing of reagents, incorrect timing, or using the wrong formula or equation.

2. Random Errors

Random errors are unpredictable variations that occur during an experiment. They can arise from factors such as inherent limitations of measurement tools, natural fluctuations in data, or human variability. Random errors can occur independently in each measurement and can cause data points to scatter around the true value. Some examples of random errors include:

– Instrument Noise: Instruments may introduce random noise into the measurements, resulting in small variations in the recorded data.

– Biological Variability: In experiments involving living organisms, natural biological variability can contribute to random errors. For example, in studies involving human subjects, individual differences in response to a treatment can introduce variability.

– Reading Errors: When taking measurements, human observers can introduce random errors due to imprecise readings or misinterpretation of data.

3. Human Errors

Human errors are mistakes or inaccuracies that occur due to human factors, such as lack of attention, improper technique, or inadequate training. These errors can significantly impact the experimental results. Some examples of human errors include:

– Data Entry Errors: Mistakes made when recording data or entering data into a computer can introduce errors. These errors can occur due to typographical mistakes, transposition errors, or misinterpretation of results.

– Calculation Errors: Errors in mathematical calculations can occur during data analysis or when performing calculations required for the experiment. These errors can result from mathematical mistakes, incorrect formulas, or rounding errors.

– Experimental Bias: Personal biases or preconceived notions held by the experimenter can introduce bias into the experiment, leading to inaccurate results.

It is crucial for scientists to be aware of these types of errors and take measures to minimize their impact on experimental outcomes. This includes careful experimental design, proper calibration of instruments, multiple repetitions of measurements, and thorough documentation of procedures and observations.


Absolute and Relative Error and How to Calculate Them

Absolute, Relative, and Percent Error

Absolute, relative, and percent error are the most common experimental error calculations in science. Grouped together, they are types of approximation error. Basically, the premise is that no matter how carefully you measure something, you’ll always be off a bit due to the limitations of the measuring instrument. For example, you may be only able to measure to the nearest millimeter on a ruler or the nearest milliliter on a graduated cylinder. Here are the definitions, equations, and examples of how to use these types of error calculations.

Absolute Error

Absolute error is the magnitude (size) of the difference between a measured value and a true or exact value.

Absolute Error = |True Value – Measured Value|

Absolute Error Example: A measurement is 25.54 mm and the true or known value is 26.00 mm. Find the absolute error. Absolute Error = |26.00 mm – 25.54 mm| = 0.46 mm. Note that absolute error retains its units of measurement.

The vertical bars indicate absolute value. In other words, you drop any negative sign you may get. For this reason, it doesn’t actually matter whether you subtract the measured value from the true value or the other way around. You’ll see the formula written both ways in textbooks and both forms are correct.

What matters is that you interpret the error correctly. If you graph error bars, half of the error is higher than the measured value and half is lower. For example, if your error is 0.2 cm, it is the same as saying ±0.1 cm.

The absolute error tells you how big a difference there is between the measured and true values, but this information isn’t very helpful when you want to know if the measured value is close to the real value or not. For example, an absolute error of 0.1 grams is more significant if the true value is 1.4 grams than if the true value is 114 kilograms! This is where relative error and percent error help.

Relative Error

Relative error puts absolute error into perspective because it compares the size of absolute error to the size of the true value. Note that the units drop off in this calculation, so relative error is dimensionless (unitless).

Relative Error = |True Value – Measured Value| / True Value Relative Error = Absolute Error / True Value

Relative Error Example: A measurement is 53 and the true or known value is 55. Find the relative error. Relative Error = |55 – 53| / 55 = 0.036. Note this value maintains two significant digits.

Note: Relative error is undefined when the true value is zero. Also, relative error only makes sense when a measurement scale starts at a true zero. So, it makes sense for the Kelvin temperature scale, but not for Fahrenheit or Celsius!

Percent Error

Percent error is just relative error multiplied by 100%. It tells what percent of a measurement is questionable.

Percent Error = |True Value – Measured Value| / True Value x 100% Percent Error = Absolute Error / True Value x 100% Percent Error = Relative Error x 100%

Percent Error Example: A speedometer says a car is going 70 mph but its real speed is 72 mph. Find the percent error. Percent Error = |72 – 70| / 72 x 100% = 2.8%
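The three formulas above translate directly into code. Here is a small sketch that reuses the worked examples from this section (the function names are just illustrative):

```python
def absolute_error(true_value, measured):
    """Magnitude of the difference; keeps the measurement's units."""
    return abs(true_value - measured)

def relative_error(true_value, measured):
    """Dimensionless ratio of absolute error to the true value."""
    if true_value == 0:
        raise ValueError("relative error is undefined when the true value is zero")
    return absolute_error(true_value, measured) / abs(true_value)

def percent_error(true_value, measured):
    """Relative error expressed as a percentage."""
    return relative_error(true_value, measured) * 100

print(round(absolute_error(26.00, 25.54), 2))  # 0.46 (mm)
print(round(relative_error(55, 53), 3))        # 0.036
print(round(percent_error(72, 70), 1))         # 2.8
```

Raising an error for a true value of zero mirrors the note above: the ratio simply isn’t defined in that case.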

Mean Absolute Error

Absolute error is fine if you’re only taking one measurement, but what about when you collect more data? Then, mean absolute error is useful. Mean absolute error or MAE is the sum of all the absolute errors divided by the number of errors (data points). In other words, it’s the average of the errors. Mean absolute error, like absolute error, retains its units.

Mean Absolute Error Example: You weigh yourself three times and get values of 126 lbs, 129 lbs, and 127 lbs. Your true weight is 127 lbs. What is the mean absolute error of the measurements? Mean Absolute Error = [|126 – 127| + |129 – 127| + |127 – 127|] / 3 = 1 lb
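The same calculation as a short sketch, using the weigh-in example above:

```python
def mean_absolute_error(true_value, measurements):
    """Average of the absolute errors; keeps the measurement's units."""
    errors = [abs(m - true_value) for m in measurements]
    return sum(errors) / len(errors)

# Three weigh-ins (lbs) against a known true weight of 127 lbs:
# errors are 1, 2, and 0, so the mean absolute error is 1 lb.
print(mean_absolute_error(127, [126, 129, 127]))  # 1.0
```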

