Verification error is the process of trying to fit results to preconceived ideas.
In psychology and social science, results are naturally open to personal interpretation, especially for emotive issues, such as ethics or racism. A researcher must incorporate mechanisms to reduce the chance of confirmation bias, or risk losing validity and credibility.
In the physical sciences, there is a growing trend towards verification error. This is not so much because of flaws in experimental design as because the intense competition for funding drives scientists to find the results their funders need.
For example, researchers’ jobs rely upon attracting grants for research into climate change, and so very little good science about the process is available. The rest is poorly constructed, shoddily performed, and funded by hidden agendas.
Before studying how verification error occurs, it is necessary to understand a little about verificationism, an underlying philosophical viewpoint.
Verificationism is the belief that a scientific statement must be verifiable to have any relevance. In short, a statement must be testable to be meaningful and worth researching.
In its broadest form, verificationism dismisses ethics, metaphysics, and theology as unprovable and scientifically untestable.
Verificationism is rooted in Aristotelian philosophy, where the basic tenet is that “only what is known can be tested.” The idea fell out of use, superseded by falsifiability and later theories.
The growth in verification errors, which devalue all of science, has revived verificationism.
Science is in serious danger of being devalued in the public perception, due to media saturation with sensationalist results. Verificationism acts as a barrier against junk science and useless research: if a theory is non-testable, or has no conceivable use, then it is not worth testing.
A lack of verification leads to bias, where a scientist fits the evidence around a theory, rather than using the scientific method.
For example, the writer Erich von Däniken postulated a theory that the Nazca lines were landing guides for alien spacecraft. The idea was built around a flimsy premise: the lines can only be seen fully from the air; therefore, they must have been constructed for the benefit of aliens. Verification error occurred because he started with the assumption that his theory was right and looked for evidence, however tenuous, to fit it.
Verificationism points out the fallacies in this thesis, but, undaunted, von Däniken went on to find evidence to fit his theory.
Proudly, he proclaimed that he had proved beyond doubt that aliens landed in the Nazca desert.
If he had followed the scientific method, he should have said, “The Nazca lines are visible from the air. There must be a reason. What is the most likely reason?” He could then use reasoning, and Occam’s razor, to generate simpler hypotheses and avoid verification errors.
In the physical sciences, whilst verification error should be a little rarer, it can still happen.
For example, a scientist may suddenly decide upon a theory and then look for evidence to fit it. The correct way is to look at the evidence and propose a hypothesis to explain it. Performing this task the wrong way around destroys validity.
From the literature review onwards, the scientist will filter the data to take the research in a certain direction, throwing out any conflicting evidence.
This flaw breeds junk science and pseudo-science, where results are cherry-picked to suit an agenda.
For example, if a tobacco company funds research into the effects of smoking, it wants results proving that there is no increased health risk.
An environmental group will tend to pick results proving climate change, whereas oil companies will pick results showing that humans are having no effect. A few press releases later, and the results are heralded as a breakthrough.
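To see how cherry-picking skews conclusions, consider a minimal sketch in Python (the numbers and the trial model here are invented for illustration, not taken from any real study). It simulates 100 noisy trials of a treatment with a small, genuine positive effect, then “reports” only the trials that point the other way:

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical model: each trial estimates a treatment effect whose
# true mean is +0.2, observed through noise.
def run_trial():
    return random.gauss(0.2, 1.0)

trials = [run_trial() for _ in range(100)]

# Honest reporting: average every trial.
honest_mean = sum(trials) / len(trials)

# Cherry-picking: keep only the trials that "prove" there is no benefit.
picked = [t for t in trials if t < 0]
picked_mean = sum(picked) / len(picked)

print(f"Mean of all 100 trials:       {honest_mean:+.2f}")  # near +0.2
print(f"Mean of cherry-picked trials: {picked_mean:+.2f}")  # negative
```

The same data support opposite headlines, depending entirely on which trials are allowed into the press release.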
In the social sciences and psychology, confirmation bias is probably the biggest single source of experimental error.
It occurs when a researcher looks at their results and tries to fit them around pre-existing expectations and hypotheses.
On occasion, this may be intentional, driven by the need for results and research grants. At other times, it is a wholly subconscious process, because the human mind constantly tries to make patterns out of randomness. In any subjective experiment, preconceived ideas will draw people, even subconsciously, towards information that fits their pre-existing beliefs. This is especially true with ethics.
For example, debates about experiments on animals, or science versus religion, are colored by belief rather than scientific facts.
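The subconscious pattern-making mentioned above is easy to demonstrate with a short simulation (a sketch with arbitrary parameters: a fair coin and 100 flips). Pure chance routinely produces runs that look deliberate to a pattern-seeking observer:

```python
import random

random.seed(2)  # reproducible illustration

# Flip a fair coin 100 times and find the longest run of identical results.
flips = [random.choice("HT") for _ in range(100)]

longest = current = 1
for prev, cur in zip(flips, flips[1:]):
    current = current + 1 if cur == prev else 1
    longest = max(longest, current)

print("".join(flips[:40]), "...")
print(f"Longest run of identical flips: {longest}")
# Runs of six or more identical flips turn up in most sequences of
# 100 fair flips, yet they are pure chance, not a pattern.
```

A researcher who is not expecting such runs can easily read meaning into them, which is exactly how confirmation bias takes hold in subjective data.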
Polarization occurs when people begin to select information supporting their own pre-existing beliefs, and drift further from the middle ground. Ideally, pure science would stay clear of the media, but it is impossible to work in an isolated bubble.
For example, it takes a brave scientist to stick their head above the parapet and produce evidence denying global warming, because of the intense pressure to conform to the majority view. Whether such dissenters are right or wrong, science revolves around debate and conflict, but media and political pressure often force verification errors.
Deliberate bias often involves picking information to support an opinion already cast. The worst offenders are politicians, who routinely manipulate data to garner votes.
The British Government’s ‘sexing up’ of the case for war against Iraq is a prime example: it carefully selected information that supported the existence of Weapons of Mass Destruction, and used it to make the case for war.
Perhaps the best summary of confirmation bias was given by Nickerson, in 1998. He stated his belief that confirmation bias might account for a significant fraction of the disputes between individuals, groups, and nations (1). This premonition has proved true, and hundreds of thousands of lives have been lost to confirmation bias.
References:
(1) Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.