N | Mean | SE Mean | 99% Upper Bound | Z | P
227 | 19.600 | 0.485 | 20.727 | -7.02 | 0.000
Excel does not offer 1-sample hypothesis testing.
Frequently, the population standard deviation (σ) is not known. We can estimate the population standard deviation (σ) with the sample standard deviation (s). However, the test statistic will no longer follow the standard normal distribution. We must rely on the student’s t-distribution with n-1 degrees of freedom. Because we use the sample standard deviation (s), the test statistic will change from a Z-score to a t-score.
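The t-score described above, t = (x̄ − μ₀)/(s/√n), can be sketched directly in code. The numbers here are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def one_sample_t(xbar, mu0, s, n):
    """t statistic for a one-sample test of the mean when sigma is unknown."""
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical values: sample mean 52.1, claimed mean 50, s = 4.8, n = 36
t = one_sample_t(52.1, 50, 4.8, 36)   # (2.1)/(4.8/6) = 2.625
```

The degrees of freedom for this statistic are n − 1, which is what you would use when consulting the t-table.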
The steps for a hypothesis test are the same as those covered in Section 2.
Just as with the hypothesis test from the previous section, the data for this test must be from a random sample and requires either that the population from which the sample was drawn be normal or that the sample size is sufficiently large (n≥30). A t-test is robust, so small departures from normality will not adversely affect the results of the test. That being said, if the sample size is smaller than 30, it is always good to verify the assumption of normality through a normal probability plot.
We will still have the same three pairs of null and alternative hypotheses and we can still use either the classical approach or the p-value approach.
Selecting the correct critical value from the student’s t-distribution table depends on three factors: the type of test (one-sided or two-sided alternative hypothesis), the sample size, and the level of significance.
For a two-sided test (“not equal” alternative hypothesis), the critical value (tα/2) is found by dividing alpha (α), the level of significance, by two, to account for the possibility that the result could be less than OR greater than the known value.
For a one-sided test (“less than” or “greater than” alternative hypothesis), the critical value (tα) is found by placing all of alpha (α), the level of significance, in the one tail.
Find the critical value you would use to test the claim that μ ≠ 112 with a sample size of 18 and a 5% level of significance.
In this case, the critical value (tα/2) would be 2.110. This is a two-sided question (≠), so you would divide alpha by 2 (0.05/2 = 0.025) and go down the 0.025 column to 17 degrees of freedom.
What would the critical value be if you wanted to test that μ < 112 for the same data?
In this case, the table value would be 1.740; because the rejection zone for a “less than” alternative is in the left tail, the critical value is –1.740. This is a one-sided question (<), so all of alpha stays in one tail (0.05). Go down the 0.05 column to 17 degrees of freedom to find the correct value.
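Assuming SciPy is available, both table lookups can be reproduced with the t-distribution's inverse CDF (a sketch, not part of the original text):

```python
from scipy import stats

# Two-sided test, alpha = 0.05, n = 18 -> df = 17: split alpha between the tails
t_two_sided = stats.t.ppf(1 - 0.05 / 2, df=17)   # ~2.110

# One-sided test, alpha = 0.05, df = 17: all of alpha in one tail
t_one_sided = stats.t.ppf(1 - 0.05, df=17)       # ~1.740 (negate for a left-tailed test)
```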
In 2005, the mean pH level of rain in a county in northern New York was 5.41. A biologist believes that the rain acidity has changed. He takes a random sample of 11 rain dates in 2010 and obtains the following data. Use a 1% level of significance to test his claim.
4.70, 5.63, 5.02, 5.78, 4.99, 5.91, 5.76, 5.54, 5.25, 5.18, 5.01
The sample size is small and we don’t know anything about the distribution of the population, so we examine a normal probability plot. The distribution looks normal so we will continue with our test.
The sample mean is 5.343 with a sample standard deviation of 0.399.
We will fail to reject the null hypothesis. We do not have enough evidence to support the claim that the mean rain pH has changed.
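The rain pH example can be checked in a few lines (assuming SciPy); `ttest_1samp` computes the t-score and two-sided p-value directly from the data:

```python
from scipy import stats

ph = [4.70, 5.63, 5.02, 5.78, 4.99, 5.91, 5.76, 5.54, 5.25, 5.18, 5.01]

# Two-sided test of H0: mu = 5.41 vs. H1: mu != 5.41
t_stat, p_value = stats.ttest_1samp(ph, popmean=5.41)
# t is about -0.56 and the p-value is well above alpha = 0.01,
# so we fail to reject the null hypothesis, as in the text.
```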
Cadmium, a heavy metal, is toxic to animals. Mushrooms, however, are able to absorb and accumulate cadmium at high concentrations. The government has set safety limits for cadmium in dry vegetables at 0.5 ppm. Biologists believe that the mean level of cadmium in mushrooms growing near strip mines is greater than the recommended limit of 0.5 ppm, negatively impacting the animals that live in this ecosystem. A random sample of 51 mushrooms gave a sample mean of 0.59 ppm with a sample standard deviation of 0.29 ppm. Use a 5% level of significance to test the claim that the mean cadmium level is greater than the acceptable limit of 0.5 ppm.
The sample size is greater than 30, so the sampling distribution of the mean is approximately normal by the central limit theorem.
Step 4) State a Conclusion.
The test statistic falls in the rejection zone. We will reject the null hypothesis. We have enough evidence to support the claim that the mean cadmium level is greater than the acceptable safe limit.
BUT, what happens if the significance level changes to 1%?
The critical value is now found by going down the 0.01 column with 50 degrees of freedom. The critical value is 2.403. The test statistic is now LESS THAN the critical value. The test statistic does not fall in the rejection zone. The conclusion will change. We do NOT have enough evidence to support the claim that the mean cadmium level is greater than the acceptable safe limit of 0.5 ppm.
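The cadmium example shows how the conclusion depends on the significance level. Assuming SciPy, the test statistic and both critical values can be verified (a sketch from the summary statistics in the text):

```python
import math
from scipy import stats

n, xbar, s, mu0 = 51, 0.59, 0.29, 0.5
t_stat = (xbar - mu0) / (s / math.sqrt(n))   # ~2.22

crit_05 = stats.t.ppf(1 - 0.05, df=n - 1)    # ~1.676: t_stat exceeds this, reject at 5%
crit_01 = stats.t.ppf(1 - 0.01, df=n - 1)    # ~2.403: t_stat is below this, fail to reject at 1%
```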
The level of significance is the probability that you, as the researcher, set to decide if there is enough statistical evidence to support the alternative claim. It should be set before the experiment begins.
We can also use the p-value approach for a hypothesis test about the mean when the population standard deviation ( σ ) is unknown. However, when using a student’s t-table, we can only estimate the range of the p-value, not a specific value as when using the standard normal table. The student’s t-table has area (probability) across the top row in the table, with t-scores in the body of the table.
Estimating P-value from a Student’s T-table
If your test statistic is 3.789 with 3 degrees of freedom, you would go across the 3 df row. The value 3.789 falls between the values 3.482 and 4.541 in that row. Going up those two columns, the corresponding one-tail areas are 0.02 and 0.01, so the p-value is greater than 0.01 but less than 0.02 (0.01 < p < 0.02).
If your level of significance is 5%, you would reject the null hypothesis because the p-value (between 0.01 and 0.02) is less than alpha (α) of 0.05.
If your level of significance is 1%, you would fail to reject the null hypothesis because the p-value (between 0.01 and 0.02) is greater than alpha (α) of 0.01.
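Software gives the exact one-tail area rather than a bracket. Assuming SciPy, the exact p-value for this example falls inside the range read from the table:

```python
from scipy import stats

# Exact one-tail area to the right of t = 3.789 with 3 degrees of freedom.
# The t-table only brackets this between its 0.02 and 0.01 columns.
p_exact = stats.t.sf(3.789, df=3)
```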
Software packages typically output p-values. It is easy to use the Decision Rule to answer your research question by the p-value method.
(referring to Ex. 12)
Test of mu = 0.5 vs. > 0.5
N | Mean | StDev | SE Mean | 95% Lower Bound | T | P
51 | 0.5900 | 0.2900 | 0.0406 | 0.5219 | 2.22 | 0.016
Additional example: www.youtube.com/watch?v=WwdSjO4VUsg .
Frequently, the parameter we are testing is the population proportion.
Recall that the best point estimate of p, the population proportion, is given by p̂ = x/n, where x is the number of individuals in the sample with the characteristic of interest and n is the sample size. The sampling distribution of p̂ is approximately normal when np(1 – p) ≥ 10. We can use both the classical approach and the p-value approach for testing.
The steps for a hypothesis test are the same as those covered in Section 2.
The test statistic is z = (p̂ – p)/√(p(1 – p)/n), which follows the standard normal distribution. Notice that the standard error (the denominator) uses p, rather than the p̂ that was used when constructing a confidence interval about the population proportion. In a hypothesis test, the null hypothesis is assumed to be true, so the known proportion is used.
A botanist has produced a new variety of hybrid soy plant that is better able to withstand drought than other varieties. The botanist knows the seed germination for the parent plants is 75%, but does not know the seed germination for the new hybrid. He tests the claim that it is different from the parent plants. To test this claim, 450 seeds from the hybrid plant are tested and 321 have germinated. Use a 5% level of significance to test this claim that the germination rate is different from 75%.
This is a two-sided question so alpha is divided by 2.
The test statistic does not fall in the rejection zone. We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the germination rate of the hybrid plant is different from the parent plants.
Let’s answer this question using the p-value approach. Remember, for a two-sided alternative hypothesis (“not equal”), the p-value is two times the area of the test statistic. The test statistic is -1.81 and we want to find the area to the left of -1.81 from the standard normal table.
Now compare the p-value to alpha. The Decision Rule states that if the p-value is less than alpha, reject the H 0 . In this case, the p-value (0.0702) is greater than alpha (0.05) so we will fail to reject H 0 . We do not have enough evidence to support the claim that the germination rate of the hybrid plant is different from the parent plants.
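The germination example can be verified in code (assuming SciPy). Note a small rounding difference: computing z from the unrounded p̂ gives about –1.80, while the text's –1.81 reflects intermediate rounding, so the exact p-value is close to, but not identical to, 0.0702:

```python
import math
from scipy import stats

x, n, p0 = 321, 450, 0.75
p_hat = x / n                                     # ~0.713
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)   # ~-1.80
p_value = 2 * stats.norm.cdf(-abs(z))             # two-sided p-value, ~0.07
# p_value > 0.05, so we fail to reject H0, as in the text.
```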
You are a biologist studying the wildlife habitat in the Monongahela National Forest. Cavities in older trees provide excellent habitat for a variety of birds and small mammals. A study five years ago stated that 32% of the trees in this forest had suitable cavities for this type of wildlife. You believe that the proportion of cavity trees has increased. You sample 196 trees and find that 79 trees have cavities. Does this evidence support your claim that there has been an increase in the proportion of cavity trees?
Use a 10% level of significance to test this claim.
This is a one-sided question, so all of alpha goes in one tail.
The test statistic is larger than the critical value (it falls in the rejection zone). We will reject the null hypothesis. We have enough evidence to support the claim that there has been an increase in the proportion of cavity trees.
Now use the p-value approach to answer the question. This is a right-sided question (“greater than”), so the p-value is equal to the area to the right of the test statistic. Go to the positive side of the standard normal table and find the area associated with the Z-score of 2.49. The area is 0.9936. Remember that this table is cumulative from the left. To find the area to the right of 2.49, we subtract from one.
p-value = (1 – 0.9936) = 0.0064
The p-value is less than the level of significance (0.10), so we reject the null hypothesis. We have enough evidence to support the claim that the proportion of cavity trees has increased.
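The cavity-tree calculation follows the same pattern, with the p-value taken from the right tail (a sketch assuming SciPy):

```python
import math
from scipy import stats

x, n, p0 = 79, 196, 0.32
p_hat = x / n                                     # ~0.403
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)   # ~2.49
p_value = stats.norm.sf(z)                        # right-tail area, ~0.006
# p_value < 0.10, so we reject H0, matching the conclusion in the text.
```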
(referring to Ex. 15)
Test of p = 0.32 vs. p > 0.32
Sample | X | N | Sample p | 90% Lower Bound | Z-Value | p-Value
1 | 79 | 196 | 0.403061 | 0.358160 | 2.49 | 0.006
Using the normal approximation.
When people think of statistical inference, they usually think of inferences involving population means or proportions. However, the particular population parameter needed to answer an experimenter’s practical questions varies from one situation to another, and sometimes a population’s variability is more important than its mean. For example, product quality is often defined in terms of low variability.
Sample variance S² can be used for inferences concerning a population variance σ². For a random sample of n measurements drawn from a normal population with mean μ and variance σ², the value S² provides a point estimate for σ². In addition, the quantity (n – 1)S²/σ² follows a Chi-square (χ²) distribution with df = n – 1.
The properties of the Chi-square (χ²) distribution are:
- The values are non-negative (χ² ≥ 0).
- The distribution is not symmetric; it is skewed to the right.
- Its shape depends on the degrees of freedom, and it approaches the normal distribution as the degrees of freedom become large.
Alternative hypothesis: H1: σ² > σ0² (right-tailed), H1: σ² < σ0² (left-tailed), or H1: σ² ≠ σ0² (two-tailed),
where the χ 2 critical value in the rejection region is based on degrees of freedom df = n – 1 and a specified significance level of α .
As with previous sections, if the test statistic falls in the rejection zone set by the critical value, you will reject the null hypothesis.
A forester wants to control a dense understory of striped maple that is interfering with desirable hardwood regeneration by using a mist blower to apply an herbicide treatment. She wants to make sure the treatment has a consistent application rate, in other words, low variability not exceeding 0.25 gal./acre (0.06 gal.²). She collects sample data (n = 11) on this type of mist blower and gets a sample variance of 0.064 gal.². Using a 5% level of significance, test the claim that the variance is significantly greater than 0.06 gal.².
H0: σ² = 0.06
H1: σ² > 0.06
The critical value is 18.307. Any test statistic greater than this value will cause you to reject the null hypothesis.
The test statistic is χ² = (n – 1)S²/σ0² = (11 – 1)(0.064)/0.06 = 10.67.
We fail to reject the null hypothesis. The forester does NOT have enough evidence to support the claim that the variance is greater than 0.06 gal.². You can also estimate the p-value using the same method as for the student’s t-table. Go across the row for 10 degrees of freedom until you find the two values that the test statistic falls between: in this case, 4.865 and 15.987. Going up those two columns to the top row gives areas of 0.9 and 0.1, so the p-value is between 0.1 and 0.9. Either way, it is greater than the level of significance (0.05), causing us to fail to reject the null hypothesis.
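The mist blower test can be reproduced in code (assuming SciPy), including the exact p-value that the table could only bracket between 0.1 and 0.9:

```python
from scipy import stats

n, s2, sigma2_0 = 11, 0.064, 0.06
chi2_stat = (n - 1) * s2 / sigma2_0           # ~10.67

crit = stats.chi2.ppf(1 - 0.05, df=n - 1)     # ~18.307; chi2_stat is below it
p_value = stats.chi2.sf(chi2_stat, df=n - 1)  # exact right-tail area, ~0.38
# The statistic does not reach the rejection zone, so we fail to reject H0.
```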
(referring to Ex. 16)
Test and CI for One Variance
Method
Null hypothesis: Sigma-squared = 0.06
Alternative hypothesis: Sigma-squared > 0.06
The chi-square method is only for the normal distribution.
Test
Method | Statistic | DF | P-Value
Chi-Square | 10.67 | 10 | 0.384
Excel does not offer 1-sample χ² testing.
Natural Resources Biometrics Copyright © 2014 by Diane Kiernan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Statistics By Jim
Making statistics intuitive
By Jim Frost
A confidence interval (CI) is a range of values that is likely to contain the value of an unknown population parameter . These intervals represent a plausible domain for the parameter given the characteristics of your sample data. Confidence intervals are derived from sample statistics and are calculated using a specified confidence level.
Population parameters are typically unknown because it is usually impossible to measure entire populations. By using a sample, you can estimate these parameters. However, the estimates rarely equal the parameter precisely thanks to random sampling error . Fortunately, inferential statistics procedures can evaluate a sample and incorporate the uncertainty inherent when using samples. Confidence intervals place a margin of error around the point estimate to help us understand how wrong the estimate might be.
You’ll frequently use confidence intervals to bound the sample mean and standard deviation parameters. But you can also create them for regression coefficients , proportions, rates of occurrence (Poisson), and the differences between populations.
Related post : Populations, Parameters, and Samples in Inferential Statistics
The confidence level is the long-run probability that a series of confidence intervals will contain the true value of the population parameter.
Different random samples drawn from the same population are likely to produce slightly different intervals. If you draw many random samples and calculate a confidence interval for each sample, a percentage of them will contain the parameter.
The confidence level is the percentage of the intervals that contain the parameter. For 95% confidence intervals, an average of 19 out of 20 include the population parameter, as shown below.
The image above shows a hypothetical series of 20 confidence intervals from a study that draws multiple random samples from the same population. The horizontal red dashed line is the population parameter, which is usually unknown. Each blue dot is the sample’s point estimate for the population parameter. Green lines represent CIs that contain the parameter, while the red line is a CI that does not contain it. The graph illustrates how confidence intervals are not perfect but usually correct.
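The repeated-sampling idea can be simulated. This sketch uses a hypothetical normal population with a known standard deviation (so the z-interval is exact) and counts how often the interval captures the true mean:

```python
import random
import statistics

random.seed(42)
Z95 = 1.96                       # critical z-value for 95% confidence
MU, SIGMA, N = 100.0, 15.0, 40   # hypothetical population mean, sd, and sample size

reps = 2000
hits = 0
for _ in range(reps):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    xbar = statistics.fmean(sample)
    moe = Z95 * SIGMA / N ** 0.5           # margin of error with sigma known
    if xbar - moe <= MU <= xbar + moe:
        hits += 1

coverage = hits / reps   # the long-run fraction of intervals containing MU, near 0.95
```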
The CI procedure provides meaningful estimates because it produces ranges that usually contain the parameter. Hence, they present plausible values for the parameter.
Technically, you can create CIs using any confidence level between 0 and 100%. However, the most common confidence level is 95%. Analysts occasionally use 99% and 90%.
Related posts: Populations and Samples and Parameters vs. Statistics
A confidence interval indicates where the population parameter is likely to reside. For example, a 95% confidence interval of the mean [9 11] suggests you can be 95% confident that the population mean is between 9 and 11.
Confidence intervals also help you navigate the uncertainty of how well a sample estimates a value for an entire population.
These intervals start with the point estimate for the sample and add a margin of error around it. The point estimate is the best guess for the parameter value. The margin of error accounts for the uncertainty involved when using a sample to estimate an entire population.
The width of the confidence interval around the point estimate reveals the precision. If the range is narrow, the margin of error is small, and there is only a tiny range of plausible values. That’s a precise estimate. However, if the interval is wide, the margin of error is large, and the actual parameter value is likely to fall somewhere within that more extensive range . That’s an imprecise estimate.
Ideally, you’d like a narrow confidence interval because you’ll have a much better idea of the actual population value!
For example, imagine we have two different samples with a sample mean of 10. It appears both estimates are the same. Now let’s assess the 95% confidence intervals. One interval is [5 15] while the other is [9 11]. The latter range is narrower, suggesting a more precise estimate.
That’s how CIs provide more information than the point estimate (e.g., sample mean) alone.
Related post : Precision vs. Accuracy
Confidence intervals are similarly helpful for understanding an effect size. For example, if you assess a treatment and control group, the mean difference between these groups is the estimated effect size. A 2-sample t-test can construct a confidence interval for the mean difference.
In this scenario, consider both the size and precision of the estimated effect. Ideally, an estimated effect is both large enough to be meaningful and sufficiently precise for you to trust. CIs allow you to assess both of these considerations! Learn more about this distinction in my post about Practical vs. Statistical Significance .
Learn more about how confidence intervals and hypothesis tests are similar .
Related post : Effect Sizes in Statistics
A frequent misuse is applying confidence intervals to the distribution of sample values. Remember that these ranges apply only to population parameters, not the data values.
For example, a 95% confidence interval [10 15] indicates that we can be 95% confident that the parameter is within that range.
However, it does NOT indicate that 95% of the sample values occur in that range.
If you need to use your sample to find the proportion of data values likely to fall within a range, use a tolerance interval instead.
Related post : See how confidence intervals compare to prediction intervals and tolerance intervals .
Ok, so you want narrower CIs for their greater precision. What conditions produce tighter ranges?
Sample size, variability, and the confidence level affect the widths of confidence intervals. The first two are characteristics of your sample, which I’ll cover first.
Variability present in your data affects the precision of the estimate. Your confidence intervals will be broader when your sample standard deviation is high.
It makes sense when you think about it. When there is a lot of variability present in your sample, you’re going to be less sure about the estimates it produces. After all, a high standard deviation means your sample data are really bouncing around! That’s not conducive for finding precise estimates.
Unfortunately, you often don’t have much control over data variability. You can institute measurement and data collection procedures that reduce outside sources of variability, but after that, you’re at the mercy of the variability inherent in your subject area. But, if you can reduce external sources of variation, that’ll help you reduce the width of your confidence intervals.
Increasing your sample size is the primary way to reduce the widths of confidence intervals because, in most cases, you can control it more than the variability. If you don’t change anything else and only increase the sample size, the ranges tend to narrow. Need even tighter CIs? Just increase the sample size some more!
Theoretically, there is no limit, and you can dramatically increase the sample size to produce remarkably narrow ranges. However, logistics, time, and cost issues will constrain your maximum sample size in the real world.
In summary, larger sample sizes and lower variability reduce the margin of error around the point estimate and create narrower confidence intervals. I’ll point out these factors again when we get to the formula later in this post.
Related post : Sample Statistics Are Always Wrong (to Some Extent)!
The confidence level also affects the confidence interval width. However, this factor is a methodology choice separate from your sample’s characteristics.
If you increase the confidence level (e.g., 95% to 99%) while holding the sample size and variability constant, the confidence interval widens. Conversely, decreasing the confidence level (e.g., 95% to 90%) narrows the range.
I’ve found that many students find the effect of changing the confidence level on the width of the range to be counterintuitive.
Imagine you take your knowledge of a subject area and indicate you’re 95% confident that the correct answer lies between 15 and 20. Then I ask you to give me your confidence for it falling between 17 and 18. The correct answer is less likely to fall within the narrower interval, so your confidence naturally decreases.
Conversely, I ask you about your confidence that it’s between 10 and 30. That’s a much wider range, and the correct value is more likely to be in it. Consequently, your confidence grows.
Confidence levels involve a tradeoff between confidence and the interval’s spread. To have more confidence that the parameter falls within the interval, you must widen the interval. Conversely, your confidence necessarily decreases if you use a narrower range.
Confidence intervals account for sampling uncertainty by using critical values, sampling distributions, and standard errors. The precise formula depends on the type of parameter you’re evaluating. The most common type is for the mean, so I’ll stick with that.
You’ll use critical Z-values or t-values to calculate your confidence interval of the mean. T-values produce more accurate confidence intervals when you do not know the population standard deviation. That’s particularly true for sample sizes smaller than 30. For larger samples, the two methods produce similar results. In practice, you’d usually use a t-value.
Below are the confidence interval formulas for both Z and t. However, you’d only use one of them.
The only difference between the two formulas is the critical value. If you’re using the critical z-value, you’ll always use 1.96 for 95% confidence intervals. However, for the t-value, you’ll need to know the degrees of freedom and then look up the critical value in a t-table or online calculator.
To calculate a confidence interval, take the critical value (Z or t) and multiply it by the standard error of the mean (SEM). This value is known as the margin of error (MOE) . Then add and subtract the MOE from the sample mean (x̄) to produce the upper and lower limits of the range.
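The t-based formula can be sketched as a small function (assuming SciPy for the critical value). The numbers in the usage line are hypothetical:

```python
import math
from scipy import stats

def t_confidence_interval(xbar, s, n, confidence=0.95):
    """CI for the mean using a critical t-value (population sigma unknown)."""
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    moe = t_crit * s / math.sqrt(n)   # margin of error = critical value * SEM
    return xbar - moe, xbar + moe

# Hypothetical summary statistics: mean 10, s = 2, n = 25
lower, upper = t_confidence_interval(10, 2, 25)
```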
Related posts : Critical Values , Standard Error of the Mean , and Sampling Distributions
Think back to the discussion about the factors affecting the confidence interval widths. The formula helps you understand how that works. Recall that the critical value * SEM = MOE.
Smaller margins of error produce narrower confidence intervals. By looking at this equation, you can see that the following conditions create a smaller MOE:
Let’s move on to using these formulas to find a confidence interval! For this example, I’ll use a fuel cost dataset that I’ve used in other posts: FuelCosts . The dataset contains a random sample of 25 fuel costs. We want to calculate the 95% confidence interval of the mean.
However, imagine we have only the following summary information instead of the dataset.
Fortunately, that’s all we need to calculate our 95% confidence interval of the mean.
We need to decide on using the critical Z or t-value. I’ll use a critical t-value because the sample size (25) is less than 30. However, if the summary didn’t provide the sample size, we could use the Z-value method for an approximation.
My next step is to look up the critical t-value using my t-table. In the table, I’ll choose the alpha that equals 1 – the confidence level (1 – 0.95 = 0.05) for a two-sided test. Below is a truncated version of the t-table. Click for the full t-distribution table .
In the table, I see that for a two-sided interval with 25 – 1 = 24 degrees of freedom and an alpha of 0.05, the critical value is 2.064.
Let’s enter all of this information into the formula.
First, I’ll calculate the margin of error:
Next, I’ll take the sample mean and add and subtract the margin of error from it:
The 95% confidence interval of the mean for fuel costs is 267.0 – 394.2. We can be 95% confident that the population mean falls within this range.
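The fuel cost calculation can be retraced in code. The summary table did not survive extraction here, so a sample mean near 330.6 and standard deviation near 154.1 are back-calculated from the reported interval; treat those two values as assumptions:

```python
import math

# Assumed summary statistics, back-calculated from the interval 267.0 - 394.2:
xbar, s, n = 330.6, 154.1, 25
t_crit = 2.064                       # two-sided critical t, df = 24, alpha = 0.05

moe = t_crit * s / math.sqrt(n)      # margin of error, ~63.6
lower, upper = xbar - moe, xbar + moe   # ~(267.0, 394.2)
```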
If you had used the critical z-value (1.96), you would enter that into the formula instead of the t-value (2.064) and obtain a slightly different confidence interval. However, t-values produce more accurate results, particularly for smaller samples like this one.
As an aside, the Z-value method always produces narrower confidence intervals than t-values when your sample size is less than infinity. So, basically always! However, that’s not good because Z-values underestimate the uncertainty when you’re using a sample estimate of the standard deviation rather than the actual population value. And you practically never know the population standard deviation.
February 14, 2024 at 1:56 pm
If I take a sample and create a confidence interval for the mean, can I say that 95% of the mean of the other samples I will take can be found in this range?
February 23, 2024 at 8:40 pm
Unfortunately, that would be an invalid statement. The CI formula uses your sample to estimate the properties of the population to construct the CI. Your estimates are bound to be off by at least a little bit. If you knew the precise properties of the population, you could determine the range in which 95% of random samples from that population would fall. However, again, you don’t know the precise properties of the population. You just have estimates based on your sample.
September 29, 2023 at 6:55 pm
Hi Jim, My confusion is similar to one comment. What I cannot seem to understand is the concept of individual and many CIs and therefore statements such as X% of the CIs.
For a sampling distribution, which itself requires many samples to produce, we try to find a confidence interval. Then how come there are multiple CIs. More specifically “Different random samples drawn from the same population are likely to produce slightly different intervals. If you draw many random samples and calculate a confidence interval for each sample, a percentage of them will contain the parameter.” this is what confuses me. Is interval here represents the range of the samples drawn? If that is true, why is the term CI or interval used for sample range? If not, could you please explain what is mean by an individual CI or how are we calculating confidence interval for each sample? In the image depicting 19 out of 20 will have population parameter, is the green line the range of individual samples or the confidence interval?
Please try to sort this confusion out for me. I find your website really helpful for clearing my statistical concepts. Thank you in advance for helping out. Regards.
September 30, 2023 at 1:52 am
A key point to remember is that inferential statistics occur in the context of drawing many random samples from the same population. Of course, a single study typically draws a single sample. However, if that study were to draw another random sample, it would be somewhat different than the first sample. A third sample would be somewhat different as well. That produces the sampling distribution, which helps you calculate p-values and construct CIs. Inferential statistics procedures use the idea of many samples to incorporate random sampling error into the results.
For CIs, if you were to collect many random samples, a certain percentage of them will contain the population parameter. That percentage is the confidence level. Again, a single study will only collect a single sample. However, picturing many CIs helps you understand the concept of the confidence level. In practice, a study generates one CI per parameter estimate. But the graph with multiple CIs is just to help you understand the concept of confidence level.
Alternatively, you can think of CIs as an object class. Suppose 100 disparate studies produce 95% CIs. You can assume that about 95 of those CIs actually contain the population parameter. Using statistical procedures, you can estimate the sampling distribution using the sample itself without collecting many samples.
I don’t know what you mean by “Interval here represents the range of samples drawn.” As I write in this article, the CI is an interval of values that likely contain the population parameter. Reread the section titled How to Interpret Confidence Intervals to understand what each one means.
Each CI is estimated from a single sample and a study generates one CI per parameter estimate. However, again, understanding the concept of the confidence level is easier when you picture multiple CIs. But if a single study were to collect multiple samples and produces multiple CIs, that graph is what you’d expect to see. Although, in the real world, you never know for sure whether a CI actually contains the parameter or not.
The green lines represent CIs that contain the population parameter. Red lines represent CIs that do not contain the population parameter. The graph illustrates how CIs are not perfect but they are usually correct. I’ve added text to the article to clarify that image.
I also show you how to calculate the CI for a mean in this article. I’m not sure what more you need to understand there? I’m happy to clarify any part of that.
I hope that helps!
July 6, 2023 at 10:14 am
Hi Jim, This was an excellent article, thank you! I have a question: when computing a CI in its single-sample t-test module, SPSS appears to use the difference between population and sample means as a starting point (so the formula would be (X-bar-mu) +/- tcv(SEM)). I’ve consulted multiple stats books, but none of them compute a CI that way for a single-sample t-test. Maybe I’m just missing something and this is a perfectly acceptable way of doing things (I mean, SPSS does it :-)), but it yields substantially different lower and upper bounds from a CI that uses the traditional X-bar as a starting point. Do you have any insights? Many thanks in advance! Stephen
July 7, 2023 at 2:56 am
Hi Stephen,
I’m not an SPSS user but that formula is confusing. They presented this formula as being for the CI of a sample mean?
I’m not sure why they’re subtracting Mu. For one thing, you almost never know what Mu is because you’d have to measure the entire population. And, if you knew Mu, you wouldn’t need to perform a t-test! Why would you use a sample mean (X-bar) if you knew the population mean? None of it makes sense to me. It must be an error of some kind even if just of documentation.
October 13, 2022 at 8:33 am
Are there strict distinctions between the terms “confident”, “likely”, and “probability”? I’ve seen a number of other sources exclaim that for a given calculated confidence interval, the frequentist interpretation of that is the parameter is either in or not in that interval. They say another frequent misinterpretation is that the parameter lies within a calculated interval with a 95% probability.
It’s very confusing to balance that notion with practical casual communication of data in non-research settings.
October 13, 2022 at 5:43 pm
It is a confusing issue.
In the strictest technical sense, the confidence level is a probability that applies to the process but NOT to an individual confidence interval. There are several reasons for that.
In the frequentist framework, the probability that an individual CI contains the parameter is either 100% or 0%. It’s either in it or out. The parameter is not a random variable. However, because you don’t know the parameter value, you don’t know which of those two conditions is correct. That’s the conceptual approach. And the mathematics behind the scenes are complementary to that. There’s just no way to calculate the probability that an individual CI contains the parameter.
On the other hand, the process behind creating the intervals will cause X% of the CIs at the Xth confidence level to include that parameter. So, for all 95% CIs, you’d expect 95% of them to contain the parameter value. The confidence level applies to the process, not the individual CIs. Statisticians intentionally used the term “confidence” to describe that as opposed to “probability” hoping to make that distinction.
So, the 95% confidence applies to the process but not to individual CIs.
However, you might think that if 95% of many CIs contain the parameter, then surely a single CI has a 95% probability of containing it. From a technical standpoint, that is NOT true, even though it sure sounds logical. Most statistics make intuitive sense to me, but I struggle with that one myself. I’ve asked other statisticians to get their take on it. The basic gist of their answers is that there might be other information available that can alter the actual probability. Not all CIs produced by the process have the same probability. For example, if an individual CI is a bit higher or lower than most other CIs for the same thing, the CIs with the unusual values will have lower probabilities of containing the parameters.
I think that makes sense. The only problem is that you often don’t know where your individual CI fits in. That means you don’t know the probability for it specifically. But you do know the overall probability for the process.
The answer for this question is never totally satisfying. Just remember that there is no mathematical way in the frequentist framework to calculate the probability that an individual CI contains the parameter. However, the overall process is designed such that all CIs using a particular confidence level will have the specified proportion containing the parameter. However, you can’t apply that overall proportion to your individual CI because on the technical side there’s no mathematical way to do that and conceptually, you don’t know where your individual CI fits in the entire distribution of CIs.
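If it helps to see the process interpretation concretely, here's a small simulation sketch (the population mean and standard deviation are made up, and I'm assuming numpy): it builds many 95% CIs from repeated samples and counts how often they contain the true mean.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 100.0, 15.0, 25       # made-up "true" population and sample size
z = 1.96                             # 95% critical value (sigma treated as known)
half_width = z * sigma / np.sqrt(n)  # CI half-width is the same for every sample

trials = 10_000
contains = 0
for _ in range(trials):
    xbar = rng.normal(mu, sigma, n).mean()
    if xbar - half_width <= mu <= xbar + half_width:
        contains += 1

coverage = contains / trials  # should land near 0.95
print(coverage)
```

About 95% of the simulated intervals contain mu, but for any single interval the simulation can only tell you "in" or "out," which is exactly the frequentist point above.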
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax.
9.2: Null and Alternative Hypotheses
Some of the following statements refer to the null hypothesis, some to the alternate hypothesis.
State the null hypothesis, \(H_{0}\), and the alternative hypothesis, \(H_{a}\), in terms of the appropriate parameter \((\mu \text{ or } p)\).
Over the past few decades, public health officials have examined the link between weight concerns and teen girls' smoking. Researchers surveyed a group of 273 randomly selected teen girls living in Massachusetts (between 12 and 15 years old). After four years the girls were surveyed again. Sixty-three said they smoked to stay thin. Is there good evidence that more than thirty percent of the teen girls smoke to stay thin? The alternative hypothesis is:
A statistics instructor believes that fewer than 20% of Evergreen Valley College (EVC) students attended the opening night midnight showing of the latest Harry Potter movie. She surveys 84 of her students and finds that 11 attended the midnight showing. An appropriate alternative hypothesis is:
Previously, an organization reported that teenagers spent 4.5 hours per week, on average, on the phone. The organization thinks that, currently, the mean is higher. Fifteen randomly chosen teenagers were asked how many hours per week they spend on the phone. The sample mean was 4.75 hours with a sample standard deviation of 2.0. Conduct a hypothesis test. The null and alternative hypotheses are:
State the Type I and Type II errors in complete sentences given the following statements.
For statements a-j in Exercise 9.109, answer the following in complete sentences.
When a new drug is created, the pharmaceutical company must subject it to testing before receiving the necessary permission from the Food and Drug Administration (FDA) to market the drug. Suppose the null hypothesis is “the drug is unsafe.” What is the Type II Error?
A statistics instructor believes that fewer than 20% of Evergreen Valley College (EVC) students attended the opening midnight showing of the latest Harry Potter movie. She surveys 84 of her students and finds that 11 of them attended the midnight showing. The Type I error is to conclude that the percent of EVC students who attended is ________.
It is believed that Lake Tahoe Community College (LTCC) Intermediate Algebra students get less than seven hours of sleep per night, on average. A survey of 22 LTCC Intermediate Algebra students generated a mean of 7.24 hours with a standard deviation of 1.93 hours. At a level of significance of 5%, do LTCC Intermediate Algebra students get less than seven hours of sleep per night, on average?
The Type II error is not to reject that the mean number of hours of sleep LTCC students get per night is at least seven when, in fact, the mean number of hours is less than seven.
Previously, an organization reported that teenagers spent 4.5 hours per week, on average, on the phone. The organization thinks that, currently, the mean is higher. Fifteen randomly chosen teenagers were asked how many hours per week they spend on the phone. The sample mean was 4.75 hours with a sample standard deviation of 2.0. Conduct a hypothesis test. The Type I error is:
It is believed that Lake Tahoe Community College (LTCC) Intermediate Algebra students get less than seven hours of sleep per night, on average. A survey of 22 LTCC Intermediate Algebra students generated a mean of 7.24 hours with a standard deviation of 1.93 hours. At a level of significance of 5%, do LTCC Intermediate Algebra students get less than seven hours of sleep per night, on average? The distribution to be used for this test is \(\bar{X} \sim\) ________________
The National Institute of Mental Health published an article stating that in any one-year period, approximately 9.5 percent of American adults suffer from depression or a depressive illness. Suppose that in a survey of 100 people in a certain town, seven of them suffered from depression or a depressive illness. Conduct a hypothesis test to determine if the true proportion of people in that town suffering from depression or a depressive illness is lower than the percent in the general adult American population.
For each of the word problems, use a solution sheet to do the hypothesis test. The solution sheet is found in [link]. Please feel free to make copies of the solution sheets. For the online version of the book, it is suggested that you copy the .doc or the .pdf files.
If you are using a Student's \(t\) - distribution for one of the following homework problems, you may assume that the underlying population is normally distributed. (In general, you must first prove that assumption, however.)
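As a worked illustration of the mechanics these exercises use (with made-up data rather than numbers from any exercise here, and assuming scipy is available), a one-sample t-test with a left-tailed alternative can be run like this:

```python
import numpy as np
from scipy import stats

# Made-up sample of nightly sleep hours; H0: mu >= 7 vs Ha: mu < 7
sample = np.array([6.8, 7.3, 5.9, 7.1, 6.4, 7.8, 6.2, 7.0, 6.6, 7.5])
mu0 = 7.0
alpha = 0.05

# Left-tailed one-sample t-test with n - 1 degrees of freedom
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0, alternative='less')

# Here the p-value is well above 0.05, so we would fail to reject H0
print(t_stat, p_value, p_value < alpha)
```

The same call with `alternative='greater'` or the default two-sided alternative covers the other hypothesis pairs used in these problems.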
A particular brand of tires claims that its deluxe tire averages at least 50,000 miles before it needs to be replaced. From past studies of this tire, the standard deviation is known to be 8,000. A survey of owners of that tire design is conducted. From the 28 tires surveyed, the mean lifespan was 46,500 miles with a standard deviation of 9,800 miles. Using \(\alpha = 0.05\), is the data highly inconsistent with the claim?
From generation to generation, the mean age when smokers first start to smoke varies. However, the standard deviation of that age remains constant at around 2.1 years. A survey of 40 smokers of this generation was done to see if the mean starting age is at least 19. The sample mean was 18.1 with a sample standard deviation of 1.3. Do the data support the claim at the 5% level?
The cost of a daily newspaper varies from city to city. However, the variation among prices remains steady with a standard deviation of 20¢. A study was done to test the claim that the mean cost of a daily newspaper is $1.00. Twelve costs yield a mean cost of 95¢ with a standard deviation of 18¢. Do the data support the claim at the 1% level?
An article in the San Jose Mercury News stated that students in the California state university system take 4.5 years, on average, to finish their undergraduate degrees. Suppose you believe that the mean time is longer. You conduct a survey of 49 students and obtain a sample mean of 5.1 with a sample standard deviation of 1.2. Do the data support your claim at the 1% level?
The mean number of sick days an employee takes per year is believed to be about ten. Members of a personnel department do not believe this figure. They randomly survey eight employees. The number of sick days they took for the past year are as follows: 12; 4; 15; 3; 11; 8; 6; 8. Let \(x =\) the number of sick days they took for the past year. Should the personnel team believe that the mean number is ten?
In 1955, Life Magazine reported that the 25 year-old mother of three worked, on average, an 80 hour week. Recently, many groups have been studying whether or not the women's movement has, in fact, resulted in an increase in the average work week for women (combining employment and at-home work). Suppose a study was done to determine if the mean work week has increased. 81 women were surveyed with the following results. The sample mean was 83; the sample standard deviation was ten. Does it appear that the mean work week has increased for women at the 5% level?
Your statistics instructor claims that 60 percent of the students who take her Elementary Statistics class go through life feeling more enriched. For some reason that she can't quite figure out, most people don't believe her. You decide to check this out on your own. You randomly survey 64 of her past Elementary Statistics students and find that 34 feel more enriched as a result of her class. Now, what do you think?
The “plus-4s” confidence interval is \((0.411, 0.648)\)
A Nissan Motor Corporation advertisement read, “The average man’s I.Q. is 107. The average brown trout’s I.Q. is 4. So why can’t man catch brown trout?” Suppose you believe that the brown trout’s mean I.Q. is greater than four. You catch 12 brown trout. A fish psychologist determines the I.Q.s as follows: 5; 4; 7; 3; 6; 4; 5; 3; 6; 3; 8; 5. Conduct a hypothesis test of your belief.
Refer to Exercise 9.119. Conduct a hypothesis test to see if your decision and conclusion would change if your belief were that the brown trout’s mean I.Q. is not four.
According to an article in Newsweek, the natural ratio of girls to boys is 100:105. In China, the birth ratio is 100:114 (46.7% girls). Suppose you don’t believe the reported figures of the percent of girls born in China. You conduct a study. In this study, you count the number of girls and boys born in 150 randomly chosen recent births. There are 60 girls and 90 boys born of the 150. Based on your study, do you believe that the percent of girls born in China is 46.7%?
A poll done for Newsweek found that 13% of Americans have seen or sensed the presence of an angel. A contingent doubts that the percent is really that high. It conducts its own survey. Out of 76 Americans surveyed, only two had seen or sensed the presence of an angel. As a result of the contingent’s survey, would you agree with the Newsweek poll? In complete sentences, also give three reasons why the two polls might give different results.
The “plus-4s” confidence interval is \((0.0022, 0.0978)\).
The mean work week for engineers in a start-up company is believed to be about 60 hours. A newly hired engineer hopes that it’s shorter. She asks ten engineering friends in start-ups for the lengths of their mean work weeks. Based on the results that follow, should she count on the mean work week to be shorter than 60 hours?
Data (length of mean work week): 70; 45; 55; 60; 65; 55; 55; 60; 50; 55.
Use the “Lap time” data for Lap 4 (see [link]) to test the claim that Terri finishes Lap 4, on average, in less than 129 seconds. Use all twenty races given.
Use the “Initial Public Offering” data (see [link]) to test the claim that the mean offer price was $18 per share. Do not use all the data. Use your random number generator to randomly survey 15 prices.
The following questions were written by past students. They are excellent problems!
"Asian Family Reunion," by Chau Nguyen
Every two years it comes around.
We all get together from different towns.
In my honest opinion,
It's not a typical family reunion.
Not forty, or fifty, or sixty,
But how about seventy companions!
The kids would play, scream, and shout
One minute they're happy, another they'll pout.
The teenagers would look, stare, and compare
From how they look to what they wear.
The men would chat about their business
That they make more, but never less.
Money is always their subject
And there's always talk of more new projects.
The women get tired from all of the chats
They head to the kitchen to set out the mats.
Some would sit and some would stand
Eating and talking with plates in their hands.
Then come the games and the songs
And suddenly, everyone gets along!
With all that laughter, it's sad to say
That it always ends in the same old way.
They hug and kiss and say "good-bye"
And then they all begin to cry!
I say that 60 percent shed their tears
But my mom counted 35 people this year.
She said that boys and men will always have their pride,
So we won't ever see them cry.
I myself don't think she's correct,
So could you please try this problem to see if you object?
Note that here the “large-sample” \(1 - \text{PropZTest}\) provides the approximate \(p\text{-value}\) of 0.0438. Whenever a \(p\text{-value}\) based on a normal approximation is close to the level of significance, the exact \(p\text{-value}\) based on binomial probabilities should be calculated whenever possible. This is beyond the scope of this course.
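For readers curious about that remark, here is a sketch of both computations, assuming my reading of the poem's counts (35 criers out of 70 relatives) and testing \(H_{0}: p = 0.60\) against \(H_{a}: p < 0.60\), with scipy assumed available:

```python
import math
from scipy import stats

# Counts as I read the poem: 35 criers out of 70 relatives
# H0: p = 0.60 vs Ha: p < 0.60
n, x, p0 = 70, 35, 0.60
phat = x / n

# Normal-approximation (1-PropZTest) p-value
z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)
p_approx = stats.norm.cdf(z)   # approximately 0.0438

# Exact p-value from binomial probabilities
p_exact = stats.binomtest(x, n, p0, alternative='less').pvalue

print(round(p_approx, 4), round(p_exact, 4))
```

The exact binomial p-value comes out somewhat larger than the normal approximation, which is why the note recommends the exact calculation when the approximate p-value is near the level of significance.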
"The Problem with Angels," by Cyndy Dowling
Although this problem is wholly mine,
The catalyst came from the magazine, Time.
On the magazine cover I did find
The realm of angels tickling my mind.
Inside, 69% I found to be
In angels, Americans do believe.
Then, it was time to rise to the task,
Ninety-five high school and college students I did ask.
Viewing all as one group,
Random sampling to get the scoop.
So, I asked each to be true,
"Do you believe in angels?" Tell me, do!
Hypothesizing at the start,
Totally believing in my heart
That the proportion who said yes
Would be equal on this test.
Lo and behold, seventy-three did arrive,
Out of the sample of ninety-five.
Now your job has just begun,
Solve this problem and have some fun.
"Blowing Bubbles," by Sondra Prull
Studying stats just made me tense,
I had to find some sane defense.
Some light and lifting simple play
To float my math anxiety away.
Blowing bubbles lifts me high
Takes my troubles to the sky.
POIK! They're gone, with all my stress
Bubble therapy is the best.
The label said each time I blew
The average number of bubbles would be at least 22.
I blew and blew and this I found
From 64 blows, they all are round!
But the number of bubbles in 64 blows
Varied widely, this I know.
20 per blow became the mean
They deviated by 6, and not 16.
From counting bubbles, I sure did relax
But now I give to you your task.
Was 22 a reasonable guess?
Find the answer and pass this test!
"Dalmatian Darnation," by Kathy Sparling
A greedy dog breeder named Spreckles
Bred puppies with numerous freckles
The Dalmatians he sought
Possessed spot upon spot
The more spots, he thought, the more shekels.
His competitors did not agree
That freckles would increase the fee.
They said, “Spots are quite nice
But they don't affect price;
One should breed for improved pedigree.”
The breeders decided to prove
This strategy was a wrong move.
Breeding only for spots
Would wreak havoc, they thought.
His theory they want to disprove.
They proposed a contest to Spreckles
Comparing dog prices to freckles.
In records they looked up
One hundred one pups:
Dalmatians that fetched the most shekels.
They asked Mr. Spreckles to name
An average spot count he'd claim
To bring in big bucks.
Said Spreckles, “Well, shucks,
It's for one hundred one that I aim.”
Said an amateur statistician
Who wanted to help with this mission.
“Twenty-one for the sample
Standard deviation's ample:
They examined one hundred and one
Dalmatians that fetched a good sum.
They counted each spot,
Mark, freckle and dot
And tallied up every one.
Instead of one hundred one spots
They averaged ninety six dots
Can they muzzle Spreckles’
Obsession with freckles
Based on all the dog data they've got?
"Macaroni and Cheese, please!!" by Nedda Misherghi and Rachelle Hall
As a poor starving student I don't have much money to spend for even the bare necessities. So my favorite and main staple food is macaroni and cheese. It's high in taste and low in cost and nutritional value.
One day, as I sat down to determine the meaning of life, I got a serious craving for this, oh, so important, food of my life. So I went down the street to Greatway to get a box of macaroni and cheese, but it was SO expensive! $2.02 !!! Can you believe it? It made me stop and think. The world is changing fast. I had thought that the mean cost of a box (the normal size, not some super-gigantic-family-value-pack) was at most $1, but now I wasn't so sure. However, I was determined to find out. I went to 53 of the closest grocery stores and surveyed the prices of macaroni and cheese. Here are the data I wrote in my notebook:
Price per box of Mac and Cheese:
I could see that the cost varied but I had to sit down to figure out whether or not I was right. If it does turn out that this mouth-watering dish is at most $1, then I'll throw a big cheesy party in our next statistics lab, with enough macaroni and cheese for just me. (After all, as a poor starving student I can't be expected to feed our class of animals!)
"William Shakespeare: The Tragedy of Hamlet, Prince of Denmark," by Jacqueline Ghodsi
THE CHARACTERS (in order of appearance):
Scene: The great library of the castle, in which Hamlet does his lessons
(The day is fair, but the face of Hamlet is clouded. He paces the large room. His tutor, Polonius, is reprimanding Hamlet regarding the latter’s recent experience. Horatio is seated at the large table at right stage.)
POLONIUS: My Lord, how cans’t thou admit that thou hast seen a ghost! It is but a figment of your imagination!
HAMLET: I beg to differ; I know of a certainty that five-and-seventy in one hundred of us, condemned to the whips and scorns of time as we are, have gazed upon a spirit of health, or goblin damn’d, be their intents wicked or charitable.
POLONIUS: If thou doest insist upon thy wretched vision then let me invest your time; be true to thy work and speak to me through the reason of the null and alternate hypotheses. (He turns to Horatio.) Did not Hamlet himself say, “What piece of work is man, how noble in reason, how infinite in faculties?” Then let not this foolishness persist. Go, Horatio, make a survey of three-and-sixty and discover what the true proportion be. For my part, I will never succumb to this fantasy, but deem man to be devoid of all reason should thy proposal of at least five-and-seventy in one hundred hold true.
HORATIO (to Hamlet): What should we do, my Lord?
HAMLET: Go to thy purpose, Horatio.
HORATIO: To what end, my Lord?
HAMLET: That you must teach me. But let me conjure you by the rights of our fellowship, by the consonance of our youth, but the obligation of our ever-preserved love, be even and direct with me, whether I am right or no.
(Horatio exits, followed by Polonius, leaving Hamlet to ponder alone.)
(The next day, Hamlet awaits anxiously the presence of his friend, Horatio. Polonius enters and places some books upon the table just a moment before Horatio enters.)
POLONIUS: So, Horatio, what is it thou didst reveal through thy deliberations?
HORATIO: In a random survey, for which purpose thou thyself sent me forth, I did discover that one-and-forty believe fervently that the spirits of the dead walk with us. Before my God, I might not this believe, without the sensible and true avouch of mine own eyes.
POLONIUS: Give thine own thoughts no tongue, Horatio. (Polonius turns to Hamlet.) But look to’t I charge you, my Lord. Come Horatio, let us go together, for this is not our test. (Horatio and Polonius leave together.)
HAMLET: To reject, or not reject, that is the question: whether ‘tis nobler in the mind to suffer the slings and arrows of outrageous statistics, or to take arms against a sea of data, and, by opposing, end them. (Hamlet resignedly attends to his task.)
(Curtain falls)
"Untitled," by Stephen Chen
I've often wondered how software is released and sold to the public. Ironically, I work for a company that sells products with known problems. Unfortunately, most of the problems are difficult to create, which makes them difficult to fix. I usually use the test program X, which tests the product, to try to create a specific problem. When the test program is run to make an error occur, the likelihood of generating an error is 1%.
So, armed with this knowledge, I wrote a new test program Y that will generate the same error that test program X creates, but more often. To find out if my test program is better than the original, so that I can convince the management that I'm right, I ran my test program to find out how often I can generate the same error. When I ran my test program 50 times, I generated the error twice. While this may not seem much better, I think that I can convince the management to use my test program instead of the original test program. Am I right?
The “plus-4s” confidence interval is \((0.004, 0.144)\).
"Japanese Girls’ Names"
by Kumi Furuichi
It used to be very typical for Japanese girls’ names to end with “ko.” (The trend might have started around my grandmothers’ generation and its peak might have been around my mother’s generation.) “Ko” means “child” in Chinese characters. Parents would name their daughters with “ko” attaching to other Chinese characters which have meanings that they want their daughters to become, such as Sachiko—happy child, Yoshiko—a good child, Yasuko—a healthy child, and so on.
However, I noticed recently that only two out of nine of my Japanese girlfriends at this school have names which end with “ko.” More and more, parents seem to have become creative, modernized, and, sometimes, westernized in naming their children.
I have a feeling that, while 70 percent or more of my mother’s generation would have names with “ko” at the end, the proportion has dropped among my peers. I wrote down all my Japanese friends’, ex-classmates’, co-workers’, and acquaintances’ names that I could remember. Following are the names. (Some are repeats.) Test to see if the proportion has dropped for this generation.
Ai, Akemi, Akiko, Ayumi, Chiaki, Chie, Eiko, Eri, Eriko, Fumiko, Harumi, Hitomi, Hiroko, Hiroko, Hidemi, Hisako, Hinako, Izumi, Izumi, Junko, Junko, Kana, Kanako, Kanayo, Kayo, Kayoko, Kazumi, Keiko, Keiko, Kei, Kumi, Kumiko, Kyoko, Kyoko, Madoka, Maho, Mai, Maiko, Maki, Miki, Miki, Mikiko, Mina, Minako, Miyako, Momoko, Nana, Naoko, Naoko, Naoko, Noriko, Rieko, Rika, Rika, Rumiko, Rei, Reiko, Reiko, Sachiko, Sachiko, Sachiyo, Saki, Sayaka, Sayoko, Sayuri, Seiko, Shiho, Shizuka, Sumiko, Takako, Takako, Tomoe, Tomoe, Tomoko, Touko, Yasuko, Yasuko, Yasuyo, Yoko, Yoko, Yoko, Yoshiko, Yoshiko, Yoshiko, Yuka, Yuki, Yuki, Yukiko, Yuko, Yuko.
"Phillip’s Wish," by Suzanne Osorio
My nephew likes to play
Chasing the girls makes his day.
He asked his mother
If it is okay
To get his ear pierced.
She said, “No way!”
To poke a hole through your ear,
Is not what I want for you, dear.
He argued his point quite well,
Says even my macho pal, Mel,
Has gotten this done.
It’s all just for fun.
C’mon please, mom, please, what the hell.
Again Phillip complained to his mother,
Saying half his friends (including their brothers)
Are piercing their ears
And they have no fears
He wants to be like the others.
She said, “I think it’s much less.
We must do a hypothesis test.
And if you are right,
I won’t put up a fight.
But, if not, then my case will rest.”
We proceeded to call fifty guys
To see whose prediction would fly.
Nineteen of the fifty
Said piercing was nifty
And earrings they’d occasionally buy.
Then there’s the other thirty-one,
Who said they’d never have this done.
So now this poem’s finished.
Will his hopes be diminished,
Or will my nephew have his fun?
"The Craven," by Mark Salangsang
Once upon a morning dreary
In stats class I was weak and weary.
Pondering over last night’s homework
Whose answers were now on the board
This I did and nothing more.
While I nodded nearly napping
Suddenly, there came a tapping.
As someone gently rapping,
Rapping my head as I snore.
Quoth the teacher, “Sleep no more.”
“In every class you fall asleep,”
The teacher said, his voice was deep.
“So a tally I’ve begun to keep
Of every class you nap and snore.
The percentage being forty-four.”
“My dear teacher I must confess,
While sleeping is what I do best.
The percentage, I think, must be less,
A percentage less than forty-four.”
This I said and nothing more.
“We’ll see,” he said and walked away,
And fifty classes from that day
He counted till the month of May
The classes in which I napped and snored.
The number he found was twenty-four.
At a significance level of 0.05,
Please tell me am I still alive?
Or did my grade just take a dive
Plunging down beneath the floor?
Upon thee I hereby implore.
Toastmasters International cites a Gallup poll finding that 40% of Americans fear public speaking. A student believes that less than 40% of students at her school fear public speaking. She randomly surveys 361 schoolmates and finds that 135 report they fear public speaking. Conduct a hypothesis test to determine if the percent at her school is less than 40%.
Sixty-eight percent of online courses taught at community colleges nationwide were taught by full-time faculty. To test if 68% also represents California’s percent for full-time faculty teaching the online classes, Long Beach City College (LBCC) in California, was randomly selected for comparison. In the same year, 34 of the 44 online courses LBCC offered were taught by full-time faculty. Conduct a hypothesis test to determine if 68% represents California. NOTE: For more accurate results, use more California community colleges and this past year's data.
According to an article in Bloomberg Businessweek , New York City's most recent adult smoking rate is 14%. Suppose that a survey is conducted to determine this year’s rate. Nine out of 70 randomly chosen N.Y. City residents reply that they smoke. Conduct a hypothesis test to determine if the rate is still 14% or if it has decreased.
The mean age of De Anza College students in a previous term was 26.6 years old. An instructor thinks the mean age for online students is older than 26.6. She randomly surveys 56 online students and finds that the sample mean is 29.4 with a standard deviation of 2.1. Conduct a hypothesis test.
Registered nurses earned an average annual salary of $69,110. For that same year, a survey was conducted of 41 California registered nurses to determine if the annual salary is higher than $69,110 for California nurses. The sample average was $71,121 with a sample standard deviation of $7,489. Conduct a hypothesis test.
La Leche League International reports that the mean age of weaning a child from breastfeeding is age four to five worldwide. In America, most nursing mothers wean their children much earlier. Suppose a random survey is conducted of 21 U.S. mothers who recently weaned their children. The mean weaning age was nine months (3/4 year) with a standard deviation of 4 months. Conduct a hypothesis test to determine if the mean weaning age in the U.S. is less than four years old.
Over the past few decades, public health officials have examined the link between weight concerns and teen girls' smoking. Researchers surveyed a group of 273 randomly selected teen girls living in Massachusetts (between 12 and 15 years old). After four years the girls were surveyed again. Sixty-three said they smoked to stay thin. Is there good evidence that more than thirty percent of the teen girls smoke to stay thin?
After conducting the test, your decision and conclusion are
A statistics instructor believes that fewer than 20% of Evergreen Valley College (EVC) students attended the opening night midnight showing of the latest Harry Potter movie. She surveys 84 of her students and finds that 11 of them attended the midnight showing.
At a 1% level of significance, an appropriate conclusion is:
Previously, an organization reported that teenagers spent 4.5 hours per week, on average, on the phone. The organization thinks that, currently, the mean is higher. Fifteen randomly chosen teenagers were asked how many hours per week they spend on the phone. The sample mean was 4.75 hours with a sample standard deviation of 2.0. Conduct a hypothesis test.
At a significance level of \(\alpha = 0.05\), what is the correct conclusion?
Hypothesis testing: For the following ten exercises, answer each question.
State the null and alternate hypothesis.
State the \(p\text{-value}\).
State \(\alpha\).
What is your decision?
Write a conclusion.
Answer any other questions asked in the problem.
According to the Centers for Disease Control website, in 2011 at least 18% of high school students had smoked a cigarette. An Introduction to Statistics class in Davies County, KY conducted a hypothesis test at the local high school (a medium-sized school of approximately 1,200 students in a small-city demographic) to determine if the local high school’s percentage was lower. One hundred fifty students were chosen at random and surveyed; of these, 82 had smoked. Using a significance level of 0.05 and appropriate statistical evidence, conduct a hypothesis test and state the conclusions.
A recent survey in the N.Y. Times Almanac indicated that 48.8% of families own stock. A broker wanted to determine if this survey could be valid. He surveyed a random sample of 250 families and found that 142 owned some type of stock. At the 0.05 significance level, can the survey be considered to be accurate?
Driver error can be listed as the cause of approximately 54% of all fatal auto accidents, according to the American Automobile Association. Thirty randomly selected fatal accidents are examined, and it is determined that 14 were caused by driver error. Using \(\alpha = 0.05\), is the AAA proportion accurate?
The US Department of Energy reported that 51.7% of homes were heated by natural gas. A random sample of 221 homes in Kentucky found that 115 were heated by natural gas. Does the evidence support the claim for Kentucky at the \(\alpha = 0.05\) level? Are the results applicable across the country? Why?
For Americans using library services, the American Library Association claims that at most 67% of patrons borrow books. The library director in Owensboro, Kentucky feels this is not true, so she asked a local college statistics class to conduct a survey. The class randomly selected 100 patrons and found that 82 borrowed books. Did the class demonstrate that the percentage was higher in Owensboro, KY? Use the \(\alpha = 0.01\) level of significance. What is the possible proportion of patrons that do borrow books from the Owensboro Library?
The Weather Underground reported that the mean amount of summer rainfall for the northeastern US is at least 11.52 inches. Ten cities in the northeast are randomly selected and the mean rainfall amount is calculated to be 7.42 inches with a standard deviation of 1.3 inches. At the \(\alpha = 0.05\) level, can it be concluded that the mean rainfall was below the reported average? What if \(\alpha = 0.01\)? Assume the amount of summer rainfall follows a normal distribution.
A survey in the N.Y. Times Almanac finds the mean commute time (one way) is 25.4 minutes for the 15 largest US cities. The Austin, TX chamber of commerce feels that Austin’s commute time is less and wants to publicize this fact. The mean for 25 randomly selected commuters is 22.1 minutes with a standard deviation of 5.3 minutes. At the \(\alpha = 0.10\) level, is the Austin, TX commute significantly less than the mean commute time for the 15 largest US cities?
A report by the Gallup Poll found that a woman visits her doctor, on average, at most 5.8 times each year. A random sample of 20 women results in these yearly visit totals:
3; 2; 1; 3; 7; 2; 9; 4; 6; 6; 8; 0; 5; 6; 4; 2; 1; 3; 4; 1
At the \(\alpha = 0.05\) level can it be concluded that the sample mean is higher than 5.8 visits per year?
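As a rough check on the raw-data exercises, the sample mean and one-mean t statistic for the visit data above can be computed with Python’s standard library. This is a sketch only; the degrees of freedom (\(n-1\)) and the p-value lookup still follow the usual t-procedure.

```python
from math import sqrt
from statistics import mean, stdev

# Yearly doctor-visit counts from the Gallup exercise above.
visits = [3, 2, 1, 3, 7, 2, 9, 4, 6, 6, 8, 0, 5, 6, 4, 2, 1, 3, 4, 1]
n = len(visits)

x_bar = mean(visits)   # sample mean
s = stdev(visits)      # sample standard deviation

# One-mean t statistic against the claimed value mu_0 = 5.8.
mu_0 = 5.8
t = (x_bar - mu_0) / (s / sqrt(n))
print(round(x_bar, 2), round(t, 2))
```

Note that the sample mean here is below 5.8, so the t statistic is negative; compare it against the t-distribution with 19 degrees of freedom to finish the test.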
According to the N.Y. Times Almanac the mean family size in the U.S. is 3.18. A sample of a college math class resulted in the following family sizes:
5; 4; 5; 4; 4; 3; 6; 4; 3; 3; 5; 5; 6; 3; 3; 2; 7; 4; 5; 2; 2; 2; 3; 2
At the \(\alpha = 0.05\) level, is the class’s mean family size greater than the national average? Does the Almanac result remain valid? Why?
The student academic group on a college campus claims that freshman students study at least 2.5 hours per day, on average. One Introduction to Statistics class was skeptical. The class took a random sample of 30 freshman students and found a mean study time of 137 minutes with a standard deviation of 45 minutes. At the \(\alpha = 0.01\) level, is the student academic group’s claim correct?
Saul Mcleod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
A research hypothesis (plural: hypotheses) is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method.
Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.
Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and integrating this to advance theory. They build on existing literature while providing new insight.
Alternative hypothesis.
The research hypothesis is often called the alternative or experimental hypothesis in experimental research.
It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.
The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).
A hypothesis is a testable statement or prediction about the relationship between two or more variables. It is a key component of the scientific method.
In summary, a hypothesis is a precise, testable statement of what researchers expect to happen in a study and why. Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.
An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.
It states that the results are not due to chance and are significant in supporting the theory being investigated.
The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.
The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.
It states results are due to chance and are not significant in supporting the idea being investigated.
The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.
Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.
This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.
A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.
It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.
For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.
A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable. It predicts in which direction the change will take place. (i.e., greater, smaller, less, more)
It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.
For example, “Exercise increases weight loss” is a directional hypothesis.
The Falsification Principle, proposed by Karl Popper, is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.
Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.
It means that there should exist some potential evidence or experiment that could prove the proposition false.
No matter how many confirming instances exist for a theory, it takes only one counter-observation to falsify it. For example, the hypothesis that “all swans are white” can be falsified by observing a black swan.
For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.
Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.
All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.
In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
We can never prove the alternative hypothesis with 100% certainty. Instead, we see if we can disprove, or reject, the null hypothesis.
If we reject the null hypothesis, this doesn’t mean that our alternative hypothesis is correct but does support the alternative/experimental hypothesis.
Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.
Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV=Day, DV= Standard of work).
Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with a directional alternative hypothesis (students recall more material on Monday morning than on Friday afternoon) and a null hypothesis (recall does not differ between the two sessions).
In the previous section, we learned how to perform a hypothesis test for one proportion. The concepts of hypothesis testing remain constant for any hypothesis test. In these next few sections, we will present the hypothesis test for one mean. We start with our knowledge of the sampling distribution of the sample mean.
Recall that under certain conditions, the sampling distribution of the sample mean, \(\bar{x} \), is approximately normal with mean, \(\mu \), standard error \(\dfrac{\sigma}{\sqrt{n}} \), and estimated standard error \(\dfrac{s}{\sqrt{n}} \).
\(H_0\colon \mu=\mu_0\)
\(H_a\colon \mu\ne\mu_0\)
Conditions: The data come from an approximately normal distribution, or the sample size is at least 30.
Test Statistic:
If at least one of the conditions is satisfied, then...
\( t=\dfrac{\bar{x}-\mu_0}{\frac{s}{\sqrt{n}}} \)
will follow a t-distribution with \(n-1 \) degrees of freedom.
Notice that when working with continuous data we use a t statistic rather than a z statistic. This is because the sample size affects the sampling distribution and must be taken into account, which we do through the degrees of freedom. We will not go into much detail about degrees of freedom in this course.
Let’s look at an example. Suppose a sample of \(n = 61\) observations has mean \(\bar{x} = 8.3\) and sample standard deviation \(s = 1.2\), and we wish to test \(H_0\colon \mu = 8.5\). Is \(\bar{x} = 8.3\) far from \(8.5\)? That depends on the standard error of \(\bar{x}\).
\begin{align} t^*&=\dfrac{\bar{x}-\mu}{\frac{s}{\sqrt{n}}}\\&=\dfrac{8.3-8.5}{\frac{1.2}{\sqrt{61}}}\\&=-1.3 \end{align}
Thus, we are asking if \(-1.3\) is very far away from zero, since that corresponds to the case when \(\bar{x}\) is equal to \(\mu_0 \). If it is far away, then it is unlikely that the null hypothesis is true and one rejects it. Otherwise, one cannot reject the null hypothesis.
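The calculation above can be reproduced in a few lines of Python (standard library only):

```python
from math import sqrt

# Summarized data from the example above: n = 61, sample mean 8.3,
# sample standard deviation 1.2, hypothesized mean mu_0 = 8.5.
n, x_bar, s, mu_0 = 61, 8.3, 1.2, 8.5

# One-mean t statistic: how many estimated standard errors
# x_bar lies from mu_0.
t = (x_bar - mu_0) / (s / sqrt(n))
print(round(t, 1))  # → -1.3
```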
\( H_0\colon \mu=\mu_0 \)
\( H_a\colon \mu\ne \mu_0 \)
Conditions: The data come from an approximately normal distribution, or the sample size is at least 30.
Typically 5%; if \(\alpha\) is not specified, use 5%.
One Mean t-test: \( t^*=\dfrac{\bar{x}-\mu_0}{\frac{s}{\sqrt{n}}} \)
Typically we will let Minitab handle this for us, but if you are interested, you can look p-values up in the probability tables found in the appendix of your textbook.
If the p-value is less than the significance level, \(\alpha\), then reject \(H_0\) (and conclude \(H_a \)). If it is greater than the significance level, then do not reject \(H_0 \).
State an overall conclusion.
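The middle steps (test statistic, p-value, decision) can be strung together as a small Python function. One caveat: Python’s standard library has no t-distribution, so this sketch substitutes the standard normal for the t-distribution, an approximation that is only reasonable for larger samples (roughly \(n \ge 30\)); for exact t p-values use software such as Minitab or a t-table.

```python
from math import sqrt
from statistics import NormalDist

def one_mean_t_test(x_bar, s, n, mu_0, alpha=0.05):
    """Two-sided one-mean t-test from summarized data.

    Approximates the t-distribution by the standard normal, which is
    reasonable only when n is large (roughly n >= 30).
    """
    t = (x_bar - mu_0) / (s / sqrt(n))        # test statistic
    p_value = 2 * NormalDist().cdf(-abs(t))   # two-sided p-value (approx.)
    reject = p_value < alpha                  # decision
    return t, p_value, reject

# Summarized values from the example earlier in the section.
t, p, reject = one_mean_t_test(x_bar=8.3, s=1.2, n=61, mu_0=8.5)
print(round(t, 2), round(p, 3), reject)
```

Since the p-value exceeds 0.05 here, we do not reject \(H_0\): the sample does not give convincing evidence that \(\mu\) differs from 8.5.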
Conduct a one-sample mean t-test.
Note that these steps are very similar to those for a one-mean confidence interval. The differences occur in steps 4 through 8.
To conduct the one sample mean t-test in Minitab...
If using the raw data, enter the column of interest into the blank variable window below the drop down selection. If using summarized data, enter the sample size, sample mean, and sample standard deviation in their respective fields. Choose the check box for "Perform hypothesis test" and enter the null hypothesis value. Choose Options.