Statistics Tutorial


This statistics tutorial is a guide to help you understand key concepts of statistics and how these concepts relate to the scientific method and research.


Scientists frequently use statistics to analyze their results. Why do researchers use statistics? Statistics can help us understand a phenomenon by confirming or rejecting a hypothesis, and it is vital to how we acquire knowledge in most scientific fields.

You don't need to be a scientist, though; anyone who wants to learn how researchers can get help from statistics may want to read this statistics tutorial for the scientific method.

What is Statistics?


Research Data

This section of the statistics tutorial is about understanding how data is acquired and used.

The results of a scientific investigation often contain much more data or information than the researcher needs. This material, or information, is called raw data.

To be able to analyze the data sensibly, the raw data is processed into "output data". There are many methods for processing the data, but basically the scientist organizes and summarizes the raw data into a more manageable form. Any type of organized information may be called a "data set".

Then, researchers may apply different statistical methods to analyze and understand the data better (and more accurately). Depending on the research, the scientist may also want to use statistics descriptively or for exploratory research.
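As a minimal sketch of what "processing raw data" can look like in practice, here is a short Python example that organizes a hypothetical set of survey answers into a frequency table (the data and variable names are invented for illustration):

    from collections import Counter

    # Hypothetical raw data: answers (1-5) from a small survey
    raw_data = [3, 4, 4, 2, 5, 3, 3, 4, 1, 5, 4, 3]

    # Organize the raw data into a frequency table - a simple "output data" set
    frequency_table = Counter(raw_data)

    for value in sorted(frequency_table):
        print(value, frequency_table[value])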

What is great about raw data is that you can go back and check things if you suspect that something different is going on than you originally thought. This happens after you have analyzed the meaning of the results.

The raw data can give you ideas for new hypotheses, since it gives you a better view of what is going on. You can also control for the variables which might influence the conclusion (e.g. third variables). In statistics, a parameter is any numerical quantity that characterizes a given population or some aspect of it.





Central Tendency and Normal Distribution

This part of the statistics tutorial will help you understand distribution, central tendency and how it relates to data sets.

Much data from the real world is normally distributed, that is, its frequency curve, or frequency distribution, has the most frequent values near the middle. Many experiments rely on assumptions of a normal distribution. This is one reason why researchers so often measure central tendency in statistical research, such as the mean (arithmetic mean or geometric mean), median or mode.

The central tendency may give a fairly good idea about the nature of the data (the mean, median and mode show the "middle value"), especially when combined with measures of how the data is distributed. Scientists normally calculate the standard deviation to measure how the data is distributed.

But there are various ways to measure how data is distributed: variance, standard deviation, standard error of the mean, standard error of the estimate or the range (which states the extremes in the data).

To draw the graph of a normal distribution, you normally use the arithmetic mean of a "big enough" sample, and you will have to calculate the standard deviation.
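A minimal sketch of calculating the common measures of central tendency and spread with Python's standard library, on a hypothetical sample of exam scores (the numbers are made up for illustration):

    import statistics

    # Hypothetical sample of exam scores
    sample = [71, 74, 74, 68, 80, 77, 74, 69, 72, 75]

    print("mean:", statistics.mean(sample))      # arithmetic mean
    print("median:", statistics.median(sample))  # middle value
    print("mode:", statistics.mode(sample))      # most frequent value
    print("stdev:", statistics.stdev(sample))    # sample standard deviation
    print("range:", max(sample) - min(sample))   # extremes of the data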

However, the data will not be normally distributed if the distribution is naturally skewed or has outliers (often rare outcomes or measurement errors) distorting it. One example of a distribution which is not normal is the F-distribution, which is skewed to the right.

Researchers therefore often double-check that their results are normally distributed, using the range, median and mode. If the distribution is not normal, this will influence which statistical test or method to choose for the analysis.
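One practical way to double-check normality is to look at the skewness of the sample or to run a formal normality test. Here is a minimal sketch using SciPy (an assumed tool, not something this tutorial prescribes) on a hypothetical, deliberately skewed sample:

    from scipy import stats

    # Hypothetical, deliberately right-skewed sample (e.g. income-like data)
    sample = [1.1, 1.3, 1.2, 1.4, 1.3, 1.5, 1.2, 1.3, 4.8, 5.2]

    print("skewness:", stats.skew(sample))  # close to 0 for symmetric data

    # Shapiro-Wilk test: a small p-value suggests the data is not normal
    statistic, p_value = stats.shapiro(sample)
    print("Shapiro-Wilk p-value:", p_value)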


Hypothesis Testing - Statistics Tutorial

How do we know whether a hypothesis is correct or not?

Why use statistics to determine this?

Using statistics in research involves a lot more than making use of statistical formulas or getting to know statistical software.

Making use of statistics in research basically involves:

  1. Learning basic statistics
  2. Understanding the relationship between probability and statistics
  3. Comprehending the two major branches of statistics: descriptive statistics and inferential statistics
  4. Knowing how statistics relates to the scientific method

Statistics in research is not just about formulas and calculation. (Many wrong conclusions have been drawn from not understanding basic statistical concepts.)

Statistical inference helps us to draw conclusions from samples of a population.

When conducting experiments, a critical part is to test hypotheses against each other. Thus, it is an important part of the statistics tutorial for the scientific method.

Hypothesis testing is conducted by formulating an alternative hypothesis which is tested against the null hypothesis, the common view. The hypotheses are tested statistically against each other.

The researcher can work out a confidence interval, which defines the limits for when a result will be regarded as supporting the null hypothesis and when the alternative research hypothesis is supported.

This means that not all differences between the experimental group and the control group can be accepted as supporting the alternative hypothesis - the result needs to be statistically significant for the researcher to accept the alternative hypothesis. This is done using a significance test (covered in another article).
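As a rough sketch of this decision rule, the following example compares a hypothetical control group and experimental group with SciPy and checks the p-value against a chosen significance level (the data and the 0.05 threshold are only illustrative):

    from scipy import stats

    # Hypothetical measurements from a control and an experimental group
    control = [12.1, 11.8, 12.4, 11.9, 12.0, 12.3, 11.7, 12.2]
    experimental = [12.9, 13.1, 12.7, 13.3, 12.8, 13.0, 13.2, 12.6]

    # Two-sample t-test: is the difference statistically significant?
    t_statistic, p_value = stats.ttest_ind(control, experimental)

    alpha = 0.05  # chosen significance level
    if p_value < alpha:
        print("Significant difference; accept the alternative hypothesis.")
    else:
        print("No significant difference; retain the null hypothesis.")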

Caution, though: data dredging, data snooping or fishing for patterns without later testing your hypothesis in a controlled experiment may lead you to conclude that there is a cause-and-effect relationship when in truth there is none.

Depending on the hypothesis, you will have to choose between one-tailed and two-tailed tests.

Sometimes the control group is replaced with experimental probability - often when the research treats a phenomenon which is ethically problematic, economically too costly or overly time-consuming, the true experimental design is replaced by a quasi-experimental approach.

There is often a publication bias towards studies where the researcher finds the alternative hypothesis correct, rather than those with a "null result" concluding that the null hypothesis provides the best explanation.

If applied correctly, statistics can be used to understand cause and effect between research variables.

It may also help identify third variables, although statistics can also be used to manipulate and cover up third variables if the person presenting the numbers does not have honest intentions (or sufficient knowledge).

Misuse of statistics is a common phenomenon, and will probably continue as long as people have intentions about trying to influence others. Proper statistical treatment of experimental data can thus help avoid unethical use of statistics. Philosophy of statistics involves justifying proper use of statistics, ensuring statistical validity and establishing the ethics in statistics.


Reliability and Experimental Error

Statistical tests make use of data from samples. These results are then generalized to the whole population. How can we know that they reflect the correct conclusion?

Contrary to what some might believe, errors in research are an essential part of significance testing. Ironically, the possibility of a research error is what makes the research scientific in the first place. If a hypothesis cannot be falsified (e.g. the hypothesis has circular logic), it is not testable, and thus not scientific, by definition.

If a hypothesis is testable, it has to be open to the possibility of being wrong. Statistically, this opens up the possibility of experimental errors in your results, due to random errors or other problems with the research. Experimental errors may also be broken down into Type I errors and Type II errors. ROC curves are used to examine the trade-off between the true positive rate (sensitivity) and the false positive rate.

A power analysis of a statistical test can determine how many samples a test will need in order to have an acceptable probability of rejecting a false null hypothesis.
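A minimal sketch of such a power analysis, using the statsmodels library (an assumption made here for illustration); it asks how many participants per group are needed to detect a medium-sized effect with 80% power at a 5% significance level:

    from statsmodels.stats.power import TTestIndPower

    # Hypothetical planning values: medium effect size, 5% alpha, 80% power
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)

    print("required sample size per group:", round(n_per_group))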

The margin of error is related to the confidence interval and to the relationship between statistical significance, sample size and expected results. The effect size estimates the strength of the relationship between two variables in a population. It may help determine the sample size needed to generalize the results to the whole population.
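As a hedged aside, two standard textbook formulas (not specific to this tutorial) make these ideas concrete: the margin of error for a mean shrinks as the sample size n grows, and Cohen's d is one common effect size measure,

    \text{margin of error} = z^{*} \cdot \frac{s}{\sqrt{n}},
    \qquad
    d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}

where z* is the critical value for the chosen confidence level, s is the sample standard deviation, and s_pooled is the pooled standard deviation of the two groups.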

Replicating the research of others is also essential to understand if the results of the research were a result which can be generalized or just due to a random "outlier experiment". Replication can help identify both random errors and systematic errors (test validity).

Cronbach's Alpha is used to measure the internal consistency or reliability of a test score.
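A minimal sketch of how Cronbach's alpha can be computed from its standard formula, assuming a small hypothetical matrix of item scores (rows are respondents, columns are test items); NumPy is used here only for convenience:

    import numpy as np

    # Hypothetical scores: 6 respondents x 4 test items
    items = np.array([
        [4, 4, 3, 4],
        [3, 3, 3, 2],
        [5, 4, 5, 5],
        [2, 2, 1, 2],
        [4, 5, 4, 4],
        [3, 3, 2, 3],
    ])

    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores

    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print("Cronbach's alpha:", alpha)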

Replicating the experiment/research ensures the reliability of the results statistically.

What you often see if the results have outliers is regression towards the mean, which then makes the result not statistically different between the experimental and the control group.

Statistical Tests

Here we will introduce a few statistical tests and methods that are commonly used by researchers.

Relationship Between Variables

The relationship between variables is very important to scientists. It helps them to understand the nature of what they are studying. A linear relationship is one where two variables vary proportionally, that is, if one variable goes up, the other variable goes up by the same ratio. A non-linear relationship is one where the variables do not vary proportionally. Correlation is a way to express the relationship between two data sets or between two variables.

Measurement scales are used to classify, categorize and (if applicable) quantify variables.

The Pearson correlation coefficient (or Pearson product-moment correlation) will only express the linear relationship between two variables. Spearman's rho is a rank-based measure mostly used when dealing with ordinal variables. Kendall's tau (τ) coefficient is another rank-based measure that can be used when the relationship is not linear.
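A minimal sketch of computing these three coefficients with SciPy on two hypothetical variables (the data is invented; SciPy is an assumption, not part of the tutorial):

    from scipy import stats

    # Hypothetical paired observations of two variables
    x = [2, 4, 5, 7, 8, 10, 11, 13]
    y = [1, 3, 4, 4, 6, 7, 9, 10]

    pearson_r, _ = stats.pearsonr(x, y)      # linear relationship
    spearman_rho, _ = stats.spearmanr(x, y)  # rank-based (monotonic) relationship
    kendall_tau, _ = stats.kendalltau(x, y)  # rank-based, pairwise concordance

    print(pearson_r, spearman_rho, kendall_tau)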

Partial Correlation (and Multiple Correlation) may be used when controlling for a third variable.

Predictions

The goal of predictions is to understand causes. Correlation does not necessarily mean causation. With linear regression, you often use a manipulated (independent) variable to predict a measured (dependent) variable.

What is the difference between correlation and linear regression? Basically, a correlational study looks at the strength of the relationship between the variables, whereas linear regression is about finding the best-fitting line through the data.
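A minimal sketch of fitting a best-fit line with SciPy's linregress on hypothetical data (hours studied vs. exam score; the numbers are illustrative only):

    from scipy import stats

    # Hypothetical data: hours studied vs. exam score
    hours = [1, 2, 3, 4, 5, 6, 7, 8]
    score = [52, 55, 61, 64, 70, 72, 78, 83]

    result = stats.linregress(hours, score)

    print("slope:", result.slope)             # change in score per extra hour
    print("intercept:", result.intercept)     # predicted score at zero hours
    print("r (correlation):", result.rvalue)  # strength of the linear relationship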

Regression analysis and other modeling tools

Bayesian probability is a way of predicting the likelihood of future events by updating beliefs as new information arrives, rather than simply measuring first and then deriving results/predictions.
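The core of this approach is Bayes' theorem, which updates the probability of a hypothesis H as new data D arrives (a standard formula, included here only as background):

    P(H \mid D) = \frac{P(D \mid H) \, P(H)}{P(D)}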

Testing Hypotheses Statistically

Student's t-test is a test which can indicate whether the null hypothesis should be rejected or not. In research it is often used to test differences between two groups (e.g. between a control group and an experimental group).

The t-test assumes that the data is more or less normally distributed and that the variances are equal (this can be checked with an F-test).

Student's t-test:
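For two independent samples of sizes n1 and n2 with equal variances, the test statistic takes the standard textbook form (included here as background, not taken from this tutorial):

    t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},
    \qquad
    s_p^2 = \frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}

where s_p is the pooled standard deviation of the two samples.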

Wilcoxon Signed Rank Test may be used for non-parametric data.

Z-Test is similar to a t-test, but will usually not be used on sample sizes below 30.

Chi-Square can be used if the data is qualitative rather than quantitative.
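A minimal sketch of a chi-square test of independence on a hypothetical contingency table of qualitative data, using SciPy (the counts are invented for illustration):

    from scipy.stats import chi2_contingency

    # Hypothetical counts: outcome (improved / not improved) by group
    table = [
        [30, 10],  # experimental group
        [22, 18],  # control group
    ]

    chi2, p_value, dof, expected = chi2_contingency(table)
    print("chi-square:", chi2, "p-value:", p_value)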

Comparing More Than Two Groups

An ANOVA, or Analysis of Variance, is used to test whether the means of several groups differ, by comparing the variability between groups with the variability within groups. Analysis of Variance can be applied to more than two groups. The F-distribution is used to calculate p-values for the ANOVA.
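A minimal sketch of a one-way ANOVA across three hypothetical groups with SciPy (the data is made up; the p-value comes from the F-distribution):

    from scipy import stats

    # Hypothetical measurements from three groups
    group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
    group_b = [5.8, 6.0, 5.7, 6.1, 5.9]
    group_c = [5.0, 5.2, 5.1, 4.8, 5.3]

    f_statistic, p_value = stats.f_oneway(group_a, group_b, group_c)
    print("F:", f_statistic, "p-value:", p_value)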


Nonparametric Statistics

Some common nonparametric methods include the Wilcoxon signed rank test, Spearman's rho and the chi-square test.







