Whenever a statistical analysis is performed and its results interpreted, there is always a finite chance that the results arose purely by chance. This is an inherent limitation of any statistical analysis and cannot be eliminated. Mistakes such as measurement errors may also cause the experimenter to misinterpret the results.
However, the probability that the results arose purely by chance can be calculated, and a minimum threshold of statistical significance can be set. If the probability of obtaining the results by chance alone falls below this threshold, we can say the results are unlikely to be due to chance.
Common statistical significance levels are 5%, 1% and 0.1%, depending on the analysis.
In terms of the null hypothesis, statistical significance can be understood as the minimum level at which the null hypothesis can be rejected. This means that if the experimenter sets the significance level at 5% and the probability that the results are a chance process is 3%, then the experimenter can claim that the null hypothesis can be rejected.
In this case, the experimenter calls the results statistically significant. The lower the significance level, the higher the confidence in the results.
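The decision rule described above can be sketched in a few lines of Python. This is a minimal illustration, not part of any particular statistics library; the p-values passed in are hypothetical and chosen only to mirror the 3% example in the text.

```python
def reject_null(p_value: float, alpha: float = 0.05) -> bool:
    """Return True if the null hypothesis can be rejected at significance level alpha.

    p_value: probability of obtaining results at least this extreme
             if the null hypothesis were true.
    alpha:   the chosen significance level (e.g. 0.05 for 5%).
    """
    return p_value < alpha

# A 3% chance probability against a 5% significance level: reject the null
print(reject_null(0.03, alpha=0.05))  # True

# The same 3% against a stricter 1% level: fail to reject
print(reject_null(0.03, alpha=0.01))  # False
```

Note that the comparison is the entire decision: the hard part of any real analysis is computing the p-value itself, which depends on the test and the assumed distribution.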
Statistically significant results are required in many practical cases of experimentation across branches of research. The choice of significance level is influenced by a number of parameters and varies between experiments.
In most cases of practical interest, however, the parameters or quantities under study follow a normal distribution, which is also the simplest case to analyze. Even so, care should always be taken to account for other distributions within the given population.
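Under the normal assumption, a two-sided p-value can be computed directly from a z-statistic using only the standard library's error function. This is a sketch of that calculation, with hypothetical z-values chosen to line up with the common significance levels mentioned above.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z-statistic under the normal assumption."""
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# z = 1.96 is the familiar cutoff for the 5% significance level
print(round(two_sided_p(1.96), 3))  # approximately 0.05

# z = 2.58 falls below the 1% level
print(two_sided_p(2.58) < 0.01)  # True
```

If the data are not normally distributed, this mapping from test statistic to p-value no longer holds, which is why the distributional assumption matters.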
When determining statistical significance, it is important to note that statistics cannot be used to prove that the difference between two parameters is zero. This means that a non-significant result should not be interpreted as meaning there was no difference. The only thing the statistical analysis can state is that the experiment failed to find a difference.
Although 5%, 1% and 0.1% are common significance levels, there is no clear-cut rule for which level to use in a given study - it depends on the norms of the field, previous studies, and the strength of evidence needed. However, a significance level higher than 5% is not recommended, because it too often leads to Type I errors (falsely rejecting a true null hypothesis).