Statistical Hypothesis Testing


Statistical hypothesis testing is used to determine whether an experiment conducted provides enough evidence to reject a proposition.


It is also used to rule out the possibility that an observed result arose purely by chance, thereby establishing the validity of the relationship between the variables under consideration.

For example, suppose you want to study the effect of smoking on the occurrence of lung cancer cases. If you take a small group, it may happen that there appears no correlation at all, and you find that there are many smokers with healthy lungs and many non-smokers with lung cancer.

However, this could simply be due to chance, and the pattern may not hold in the overall population. To remove this element of chance and increase the reliability of our conclusions, we use statistical hypothesis testing.

Here, you first assume the hypothesis that smoking and lung cancer are unrelated. This is called the 'null hypothesis', and it is central to any statistical hypothesis test.

You then choose a distribution for the experimental group. The normal distribution is one of the most common distributions encountered in nature, but other distributions are appropriate in special cases.
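The smoking example can be sketched as a chi-square test of independence on a 2x2 table. The counts below are invented for illustration, and the chi-square statistic is computed by hand rather than with a statistics library:

```python
# Hypothetical 2x2 contingency table (invented counts for illustration):
#                cancer   no cancer
# smokers           90        910
# non-smokers       60       1140
observed = [[90, 910], [60, 1140]]

row_totals = [sum(row) for row in observed]        # per-group totals
col_totals = [sum(col) for col in zip(*observed)]  # per-outcome totals
grand = sum(row_totals)

# Expected counts under the null hypothesis that the two are unrelated
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# Chi-square statistic: sum of (observed - expected)^2 / expected over all cells
chi2 = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(2) for j in range(2)
)

# For 1 degree of freedom, the 5% critical value is about 3.84
print(f"chi-square = {chi2:.2f}, reject null: {chi2 > 3.84}")
```

With these made-up counts the statistic comfortably exceeds the 5% critical value, so the null hypothesis of independence would be rejected; with different counts the conclusion could easily go the other way.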


The Critical Value

Limits are then set on the critical value: if the result falls beyond it, you can conclude that the experiment provides enough evidence to reject the null hypothesis. The corresponding significance level is generally set at a 5% or 1% chance probability.

This means that if the experiment suggests that the probability of the observed result arising by chance is less than this threshold, the null hypothesis can be rejected.

If the null hypothesis is rejected, then we need to look for an alternative hypothesis that is in line with the experimental observations.

There is also a gray area in between, for example at the 15-20% level, where it is hard to say whether the null hypothesis can be rejected. In such cases, there is reason to doubt the validity of the null hypothesis, but not enough evidence to reject it altogether.

A result in the gray area often leads to more exploration before concluding anything.

Accepting a Hypothesis

Another feature of statistical hypothesis testing is that an experiment can only cast doubt on the validity of the null hypothesis; no experiment can demonstrate that the null hypothesis is actually valid. This is because of the falsifiability principle in the scientific method.

This creates a tricky situation for someone who wants to show the independence of two events, like smoking and lung cancer in our previous example.

This problem can be overcome by using a confidence interval and arguing that the experimental data show the first event to have at most a negligible effect (bounded by the width of the confidence interval) on the second event, if any effect at all.
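A minimal sketch of this argument, assuming invented group sizes and cancer counts, computes a 95% confidence interval for the difference between two proportions using the normal approximation:

```python
from math import sqrt

# Hypothetical (invented) data: cancer cases in two groups
smokers_cancer, smokers_total = 90, 1000
nonsmokers_cancer, nonsmokers_total = 60, 1200

p1 = smokers_cancer / smokers_total        # observed rate among smokers
p2 = nonsmokers_cancer / nonsmokers_total  # observed rate among non-smokers

# Standard error of the difference p1 - p2 (normal approximation)
se = sqrt(p1 * (1 - p1) / smokers_total + p2 * (1 - p2) / nonsmokers_total)
z = 1.96  # critical value for 95% confidence
low, high = (p1 - p2) - z * se, (p1 - p2) + z * se

# If the interval excludes 0, the data argue against independence;
# if it contains 0 and is narrow, any effect is at most negligible.
print(f"95% CI for the difference: ({low:.3f}, {high:.3f})")
```

An argument for near-independence would require an interval that both contains zero and is narrow, so that any possible effect is bounded by a small amount.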

In the figure below, one can argue that the independence holds to within 0.05 times the standard deviation.

[Figure: Confidence Interval]



The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0).
