Range (Statistics)


In statistics, the range is defined simply as the difference between the maximum and minimum observations. The definition is intuitive: the range should indicate how widely the values are spread out, and the difference between the largest and smallest values gives a quick estimate of that spread.


For example, suppose an experiment involves measuring the weights of lab rats, and the values in grams are 320, 367, 423, 471 and 480. In this case, the range is simply 480 - 320 = 160 grams.
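To make the computation concrete, here is a minimal Python sketch of this example, assuming nothing beyond the built-in functions:

```python
# Weights of the five lab rats from the example, in grams.
weights = [320, 367, 423, 471, 480]

# The range is the maximum observation minus the minimum observation.
data_range = max(weights) - min(weights)
print(data_range)  # 160
```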


Some Limitations of Range

The range is a useful indication of how spread out the data is, but it has some serious limitations. A data set can contain outliers that lie far from the other data points, and in such cases the range may give a misleading picture of the spread of the data.

For example, in our previous case, suppose a small baby rat weighing only 50 grams is added to the data set. Now the range is 480 - 50 = 430 grams, which badly misrepresents how the bulk of the data is dispersed.
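Extending the sketch above, adding the baby rat shows how a single observation inflates the range:

```python
# The same data with the 50 g baby rat added.
weights = [320, 367, 423, 471, 480, 50]

data_range = max(weights) - min(weights)
print(data_range)  # 430, driven entirely by the single outlier
```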

This limitation is to be expected, because the range is computed from only two data points. It therefore cannot give a good estimate of how the data set behaves as a whole.

Practical Utility of Range

In many cases, however, the data are closely clustered, and when the number of observations is very large the range can give a good sense of the distribution. For example, consider a large survey of the IQ levels of 10,000 university students from different backgrounds. In this case, the range can be a useful tool for measuring the dispersion of IQ values among university students.

Sometimes the range is defined in a way that eliminates the outliers and extreme points in the data set. For example, the inter-quartile range is defined as the difference between the third and first quartiles. You can immediately see why this definition is more robust than the previous one: outliers no longer matter, because it measures the spread of the middle half of the data rather than relying on just the maximum and minimum values.
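As a rough illustration, the sketch below compares the range with the inter-quartile range on the same six weights, using Python's standard statistics module. Note that quartile values depend on the interpolation method, so the IQR figure here reflects one common convention ('inclusive' quartiles) rather than a universal answer:

```python
from statistics import quantiles

# The six rat weights, in grams, including the 50 g outlier.
weights = [320, 367, 423, 471, 480, 50]

# quantiles(..., n=4) returns the three quartile cut points [Q1, Q2, Q3].
q1, _, q3 = quantiles(weights, n=4, method="inclusive")

print(max(weights) - min(weights))  # range: 430, inflated by the outlier
print(q3 - q1)                      # IQR: 127.25, far less sensitive to it
```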

It should be pointed out that, in spite of these limitations, the range can be a useful indicator in many cases. As a student of statistics, you should understand which kinds of data are best summarized by the range: if there are many outliers, it may be a poor choice, but it does give a quick, easy-to-compute indication of the spread of the data.

