Cause and effect is one of the most commonly misunderstood concepts in science and is often misused by lawyers, the media, politicians and even scientists themselves, in an attempt to add legitimacy to research.
The basic principle of causality is determining whether the results and trends seen in an experiment are actually caused by the manipulation or whether some other factor may underlie the process.
Unfortunately, the media and politicians often seize upon scientific results and proclaim that they conveniently fit their beliefs and policies. Some scientists, fixated upon 'proving' that their view of the world is correct, leak their results to the press before allowing the peer review process to check and validate their work.
Examples of this are rife in alternative therapy, where a group of scientists announces that it has found the next healthy superfood or that a certain treatment cured swine flu. Many of these claims deviate from the scientific process and pay little heed to cause and effect, diluting the claims of genuine researchers in the field.
The key principle of establishing cause and effect is proving that the effects seen in the experiment happened after the cause.
This seems an extremely obvious statement, but it is not always easy to verify. Natural phenomena are complicated and intertwined, often overlapping and making it difficult to establish a natural order. Think about it this way: in an experiment to study the effects of depression upon alcohol consumption, researchers find that people who suffer from higher levels of depression drink more, and announce that this correlation shows that depression drives people to drink.
However, is this necessarily the case? Depression could be the cause that makes people drink more but it is equally possible that heavy consumption of alcohol, a depressant, makes people more depressed. This type of classic 'chicken and egg' argument makes establishing causality one of the most difficult aspects of scientific research. It is also one of the most important factors, because it can misdirect scientists. It also leaves the research open to manipulation by interest groups, who will take the results and proclaim them as a truth.
With the above example, an alcoholic drink manufacturer could use the second interpretation to claim that alcohol is not a factor in depression and that the responsibility is upon society to ensure that people do not become depressed. An anti-alcohol group, on the other hand, could claim that alcohol is harmful and use the results to lobby for harsher drinking laws. The same research leads to two different interpretations, and the answer given to the media can depend upon who funds the work.
Unfortunately, most of the general public are not scientists and cannot be expected to filter every news item they read for quality, or to investigate which group funded the research. Even respected and trusted newspapers, journals and internet resources can fall into the causality trap, leaving public perceptions open to influence by marketing groups.
The other problem with causality is that a researcher cannot always guarantee that their particular manipulation of a variable was the sole reason for the perceived trends and correlation.
In a complex experiment, it is often difficult to isolate and neutralize the influence of confounding variables. This makes it exceptionally difficult for the researcher to state that their treatment is the sole cause, so any research program must contain measures to establish the cause and effect relationship.
In the physical sciences, such as physics and chemistry, it is fairly easy to establish causality, because a good experimental design can neutralize any potentially confounding variables. Sociology, at the other extreme, is exceptionally prone to causality issues, because individual humans and social groups vary so wildly and are subjected to a wide range of external pressures and influences.
For results to have any meaning, a researcher must make causality the first priority, simply because it can have such a devastating effect upon validity. Most experiments with some validity issues can be salvaged, and produce some usable data. An experiment with no established cause and effect, on the other hand, will be practically useless and a waste of resources.
The first thing to remember with causality, especially in the non-physical sciences, is that it is impossible to establish complete causality.
However, the magical figure of 100% proof of causality is what every researcher must strive for, to ensure that a group of their peers will accept the results. The only way to approach this is through a strong and well-considered experimental design, often containing pilot studies to establish cause and effect before plowing on with a complex and expensive study.
The temporal factor is usually the easiest aspect to neutralize, simply because most experiments involve administering a treatment and then observing the effects, giving a linear temporal relationship. In experiments that use historical data, as with the drinking/depression example, this can be a little more complex. Most researchers performing such a program will supplement it with a series of individual case studies; interviewing a selection of the participants in depth allows the researchers to establish the order of events.
For example, interviewing a sample of the depressed heavy drinkers will establish whether they felt that they were depressed before they started drinking or if the depression came later. The process of establishing cause and effect is a matter of ensuring that the potential influence of 'missing variables' is minimized.
One notable example, by the researchers Balnaves and Caputi, looked at the academic performance of university students and attempted to find a correlation with age. Indeed, they found that older, more mature students performed significantly better. However, as they pointed out, you cannot simply say that age causes the effect of making people into better students. Such a simplistic assumption is called a spurious relationship, the process of 'leaping to conclusions.'
In fact, there is a whole host of reasons why a mature student performs better: they have more life experience and confidence, and many feel that it is their last chance to succeed; my graduation year included a 75-year-old man, and nobody studied harder! Mature students may well have made a great financial sacrifice, so they are a little more determined to succeed. Establishing cause and effect is extremely difficult in this case, so the researchers interpreted the results very carefully.
Another example is the idea that because people who eat a lot of extra virgin olive oil live for longer, olive oil makes people live longer. While there may be some truth behind this, you have to remember that most regular olive oil eaters also eat a Mediterranean diet, lead active lifestyles, and generally experience less stress. These factors also have a strong influence, so any such research program should include studies into their effects - this is why a research program is not always a single experiment but often a series of experiments.
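The olive oil example can be made concrete with a small simulation. The sketch below is entirely hypothetical: it assumes a hidden "lifestyle" factor that drives both olive oil consumption and lifespan, while olive oil itself has no direct effect. The two variables still end up strongly correlated, which is exactly the trap described above.

```python
import random

random.seed(0)

# Hypothetical model: a hidden "lifestyle" factor drives BOTH olive oil
# consumption and lifespan; oil itself never affects lifespan directly.
n = 10_000
lifestyle = [random.gauss(0, 1) for _ in range(n)]
oil = [l + random.gauss(0, 1) for l in lifestyle]                # consumption tracks lifestyle
lifespan = [80 + 5 * l + random.gauss(0, 3) for l in lifestyle]  # lifestyle, not oil, adds years

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Oil and lifespan correlate strongly even though oil never enters the
# lifespan formula - the confounding variable creates the association.
print(f"corr(oil, lifespan) = {corr(oil, lifespan):.2f}")
```

A researcher seeing only the oil and lifespan columns would observe a strong positive correlation, yet by construction there is no causal link at all: the lurking lifestyle variable produces the whole effect.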
One of the biggest threats to internal validity through incorrect application of cause and effect is the 'history' threat.
This is where another event, rather than your treatment or manipulation, actually caused the observed effect. Most researchers perform a pre-test upon a group, administer the treatment and then measure the post-test results (a pretest-posttest design). If the results are better, it is easy to assume that the treatment caused them, but this is not necessarily the case.
For example, take the case of an educational researcher wishing to measure the effect of a new teaching method upon the mathematical aptitude of students. They pre-test, teach the new program for a few months and then posttest. Results improve, and they proclaim that their program works.
However, the research was ruined by a historical threat: during the course of the research, a major television network released a new educational series called 'Maths made Easy,' which most of the students watched. This influenced the results and compromised the validity of the experiment.
Fortunately, the solution to this problem is easy: if the researcher uses a two-group pretest-posttest design with a control group, the control group will be equally influenced by the historical event, so the researcher can still establish a good baseline. There are a number of other 'single group' threats, but establishing a good control-driven study largely eliminates these threats to causality.
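The teaching-method example can be sketched numerically. The simulation below uses invented numbers (a 5-point treatment effect and an 8-point boost from the TV series affecting everyone); it shows how subtracting the control group's gain strips out the shared history influence and isolates the treatment effect.

```python
import random

random.seed(1)

# Hypothetical figures: the new teaching method adds ~5 points, while the
# TV series (the "history" event, watched by everyone) adds ~8 points.
def simulate(n=200, treatment_gain=5.0, history_gain=8.0):
    mean_gains = {}
    for group, treated in (("treatment", True), ("control", False)):
        pre = [random.gauss(60, 10) for _ in range(n)]
        post = [p + history_gain
                + (treatment_gain if treated else 0)
                + random.gauss(0, 2) for p in pre]
        mean_gains[group] = sum(post) / n - sum(pre) / n  # mean pre-to-post gain
    return mean_gains

gains = simulate()

# Single-group view: the treatment group improved by ~13 points, which
# wrongly credits the entire gain to the new teaching method.
# Control-adjusted view: both groups watched the series, so subtracting
# the control group's gain removes the history effect.
effect = gains["treatment"] - gains["control"]
print(f"raw treatment gain: {gains['treatment']:.1f}")
print(f"control-adjusted effect: {effect:.1f}")
```

The raw gain overstates the effect by the full size of the history event, while the control-adjusted figure recovers something close to the true 5-point effect, which is the whole point of adding a control group.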
Social threats are a big problem for social researchers simply because they are one of the most difficult of the threats to minimize. These types of threats arise from issues within the participant groups or the researchers themselves. In an educational setting, with two groups of children, one treated and one not, there are a number of potential issues.
These social effects are extremely difficult to minimize without creating other threats to internal validity.
For example, using different schools is one idea, but this can lead to other internal validity issues, especially because the participant groups cannot be randomized. In reality, this is why most social research programs incorporate a variety of different methods and include more than one experiment, to establish the potential level of these threats and incorporate them into the interpretation of the data.
Multiple group threats are a danger to causality caused by differences between two or more groups of participants. The main example of this is selection bias, or assignment bias, where the two groups are assigned unevenly, perhaps leaving one group with a larger proportion of high achievers. This will skew the results and mask the true effects of the experiment.
While there are other types of multiple group threat, they are all subtypes of selection bias and involve the two groups receiving different treatment. If the groups are selected from different socio-economic backgrounds, or one has a much better teacher, this can skew the results. Without going into too much detail, the only ways to reduce the influence of multiple group threats are randomization, matched-pairs designs or another careful assignment method.
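A quick sketch can show why randomization matters. The hypothetical cohort below has a prior test score per student; a biased assignment that concentrates the high achievers in one group produces a large baseline gap between groups, while random assignment keeps the group means close before any treatment is applied.

```python
import random

random.seed(2)

# Hypothetical cohort: each student has a prior test score. If the high
# scorers cluster in one group, any later comparison is biased from the start.
scores = [random.gauss(70, 12) for _ in range(500)]

def mean(xs):
    return sum(xs) / len(xs)

# Biased assignment: one group happens to contain all the top achievers.
ranked = sorted(scores, reverse=True)
biased_a, biased_b = ranked[:250], ranked[250:]

# Random assignment: shuffle first, then split - baselines should balance.
shuffled = scores[:]
random.shuffle(shuffled)
rand_a, rand_b = shuffled[:250], shuffled[250:]

print(f"biased baseline gap: {abs(mean(biased_a) - mean(biased_b)):.1f}")
print(f"random baseline gap: {abs(mean(rand_a) - mean(rand_b)):.1f}")
```

The biased split starts the experiment with a gap of many points between groups, so any treatment effect measured afterwards is confounded with the assignment; the random split starts near zero, which is what allows a later difference to be attributed to the treatment.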
As can be seen, establishing cause and effect is one of the most important factors in designing a robust research experiment. One of the best ways to learn about causality is through experience and analysis - every time you see some innovative research or findings in the media, think about what the results are trying to tell you and whether the researchers are justified in drawing their conclusions.
This does not have to be restricted to 'hard' science, because political researchers are the worst habitual offenders. Archaeology, economics and market research are other areas where cause and effect is important, and they should provide some excellent examples of how to establish it.