Weighing statistical significance, sample size and expected effect size against each other is important before constructing an experiment.
A power analysis reveals the minimum sample size required, given the chosen significance level and the expected effect size.
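As a rough illustration, the minimum sample size for a two-sided one-sample test can be sketched with the normal approximation; `min_sample_size` below is a hypothetical helper, not a standard library function, and a full power analysis (for example with a dedicated statistics package) would apply a small t-distribution correction on top of this.

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(effect_size, alpha=0.05, power=0.80):
    """Approximate minimum n for a two-sided one-sample test
    (normal approximation).

    effect_size is Cohen's d: expected mean difference divided by
    the standard deviation.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_power = z.inv_cdf(power)          # quantile for the desired power
    return ceil(((z_alpha + z_power) / effect_size) ** 2)

# A medium effect (d = 0.5) needs far fewer subjects than a small one (d = 0.2):
print(min_sample_size(0.5))  # -> 32
print(min_sample_size(0.2))  # -> 197
```

Note how halving the expected effect size roughly quadruples the required sample, which is why the expected effect matters so much at the planning stage.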
Many real effects have been missed because a study was poorly planned and its sample size was too small. There is nothing inherently wrong with a sample that is too large, but increasing the sample size often demands considerable money and effort, which may prove unnecessary.
If you want to generalize the findings of your research on a small sample to a whole population, your sample size should at least be large enough to meet the chosen significance level, given the expected effect size. Expected effect sizes are often estimated from pilot studies, common-sense reasoning or comparison with similar experiments, so they may not be fully accurate.
Weighing statistical significance against sample size is what makes it possible to extend the results obtained from the sample to the whole population.
It is useful to do this before running the experiment: sometimes you may find that obtaining a significant result would require a far larger sample than is feasible, prompting you to rethink the design before going through the whole procedure.
Different experiments invariably have different sample sizes and significance levels. These concepts are useful in biological, economic and social experiments, and in any generalization based on information about a smaller subset.
For example, if an experimenter surveys a group of 100 people and predicts the presidential vote from this data, the results are likely to be highly erroneous, because such a small sample cannot reliably represent the whole electorate. Detecting the small differences typical of such polls requires high power, and therefore a much larger sample.
The sample size also depends on the desired confidence interval and confidence level. The narrower the required confidence interval, the larger the sample size needed.
For example, if you are interviewing 1000 people in a town about their choice of presidential candidate, your results may be accurate to within +/- 4%. If you wish to narrow the confidence interval to +/- 1%, you will naturally need to interview more people, which means a larger sample size.
If you want your presidential results at a 99% confidence level instead of 95%, you will need a much larger sample of people to interview, because the survey needs more power to support the same conclusion at the stricter level.
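Both effects can be seen in the standard sample-size formula for estimating a proportion. The sketch below uses the worst-case proportion p = 0.5 and assumes simple random sampling from a large population; `survey_sample_size` is an illustrative helper name.

```python
from math import ceil
from statistics import NormalDist

def survey_sample_size(margin_of_error, confidence=0.95, p=0.5):
    """n needed to estimate a proportion to within +/- margin_of_error.

    Uses the worst-case proportion p = 0.5 by default, and assumes
    simple random sampling from a large population.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(survey_sample_size(0.04))                   # +/-4% at 95% -> 601
print(survey_sample_size(0.01))                   # +/-1% at 95% -> 9604
print(survey_sample_size(0.04, confidence=0.99))  # +/-4% at 99% -> 1037
```

Shrinking the margin of error from 4% to 1% multiplies the required sample by sixteen, while moving from 95% to 99% confidence at the same margin adds roughly 70% more interviews.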
Some researchers choose to increase their sample size when an effect falls just short of the significance level, suspecting that they are short of samples rather than that no effect exists. You need to be careful with this method: testing, collecting more data and then testing again increases the chance of a false positive result.
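A small simulation (my own illustration, not from the original text) shows why this "peek, then extend" procedure inflates false positives: even when no effect exists, testing the data twice rejects the true null hypothesis more often than the nominal 5%.

```python
import random
from statistics import NormalDist

random.seed(1)
norm = NormalDist()

def p_value(sample):
    """Two-sided z-test of mean 0, with known standard deviation 1."""
    n = len(sample)
    z = (sum(sample) / n) * n ** 0.5
    return 2 * (1 - norm.cdf(abs(z)))

trials = 5000
fixed_rejections = peeking_rejections = 0
for _ in range(trials):
    # Null hypothesis is true: data are pure noise with mean 0.
    first = [random.gauss(0, 1) for _ in range(20)]
    extra = [random.gauss(0, 1) for _ in range(20)]
    # Fixed design: one test at the planned n = 40.
    if p_value(first + extra) < 0.05:
        fixed_rejections += 1
    # Peeking: test at n = 20; if not yet significant, add data and retest.
    if p_value(first) < 0.05 or p_value(first + extra) < 0.05:
        peeking_rejections += 1

print(fixed_rejections / trials)    # close to the nominal 0.05
print(peeking_rejections / trials)  # inflated above 0.05
```

The fixed design stays near the 5% false-positive rate it was designed for, while the peeking procedure exceeds it, even though both end with the same 40 observations.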
With a higher sample size, the likelihood of a Type II error (missing a real effect) is reduced, and the researcher can afford a stricter significance level, which lowers the risk of Type I errors as well, at least if the other parts of the study are carefully constructed and common problems are avoided. Confidence in the result is likely to increase with a higher sample size. This is to be expected: the larger the sample, the more accurately it is expected to mirror the behavior of the whole group.
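How quickly power grows with sample size can be sketched for the same approximate one-sample z-test as before; `power` is an illustrative helper using the normal approximation.

```python
from statistics import NormalDist

norm = NormalDist()

def power(n, effect_size=0.5, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test
    for a true effect of the given size (normal approximation)."""
    z_alpha = norm.inv_cdf(1 - alpha / 2)
    return norm.cdf(effect_size * n ** 0.5 - z_alpha)

# For a medium effect (d = 0.5), power climbs steeply with n:
for n in (10, 32, 100):
    print(n, round(power(n), 3))  # roughly 0.35 at n=10, 0.81 at n=32, >0.99 at n=100
```

At n = 10 the study would miss the effect most of the time; by n = 32 it reaches the conventional 80% power, and by n = 100 a real effect of this size is almost certain to be detected.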
Therefore, if you hope to reject your null hypothesis, make sure your sample size is at least as large as the sample size required by your chosen significance level and expected effect size.