Since I am not a scientist, it follows that I am qualified to write about things that aren’t science. We can deal with the logic of such conclusions during another post, but for now, let’s take a look at some of the hallmarks of pseudoscience.
The brightest of the red flags is taking one’s claims straight to the public or, more ominously, to a selected sympathetic audience. New claims or ideas should be announced to fellow scientists, who can conduct peer review and tests.
In 1989, Stanley Pons and Martin Fleischmann made the astounding claim that they had achieved cold fusion, a nuclear reaction at room temperature, as opposed to the seven-figure temperatures thought to be required. Because of the significant implications, this claim attracted worldwide notice. But Pons and Fleischmann revealed little about their methods and resisted requests to validate the claim. Other physicists were unable to replicate the alleged findings, and the purported methods could never be proven. Pons and Fleischmann withdrew from public life, and the only thing cold was their reception in the scientific community.
Similarly, the company Steorn announced it had produced a perpetual motion machine, dubbed the Orbo. Again, this claim flew in the face of accepted science. The Orbo could never be verified under independent testing conditions, and it met the same fate as cold fusion.
Ken Ham regularly issues pronouncements asserting evidence for a Young Earth or Intelligent Design. But he publishes these on his website, with no independent verification and without inviting challenge. When I asked on his Facebook page if any of his work had been published in a scientific, peer-reviewed journal, the response was to delete the question and ban me from further posting. Someone doing genuine science is going to want it tested. Getting it right and adding to the field’s knowledge takes precedence over bias, personal vindication, and accolades.
When dealing with new products, another major red flag is failing to test a control group alongside the group receiving the product. Furthermore, neither the subjects nor the researchers studying them should know which group is which. This is a double-blind test, which should be used whenever it is practicable and ethical.
Once the test is complete, it should be accepted or rejected as a whole. Cherry picking results to support one’s position is pseudoscience.
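The blinding described above can be sketched in a few lines. This is a toy illustration, not a real trial protocol: subjects are randomly split into groups that researchers see only as opaque labels, while the key mapping labels to "treatment" and "placebo" stays sealed with a third party until the data are in.

```python
import random

def assign_double_blind(subject_ids, seed=42):
    """Randomly split subjects into two coded groups.

    Neither subjects nor researchers see which code means what;
    the key is held back until the trial is complete and unblinded.
    """
    rng = random.Random(seed)
    shuffled = subject_ids[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    # Researchers only ever see the opaque labels 'A' and 'B'.
    blinded = {sid: ('A' if i < half else 'B')
               for i, sid in enumerate(shuffled)}
    key = {'A': 'treatment', 'B': 'placebo'}  # sealed until analysis
    return blinded, key

blinded, key = assign_double_blind(list(range(20)))
```

Because the assignment is random and the labels are meaningless to everyone collecting the data, neither expectation nor bias can leak into the results before the key is opened.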
Another dead giveaway is being agenda-based. When a researcher announces a finding, he or she should detail the method, evidence, and conclusion. There should be no mention of the dangers of engineered food or of the environment being bombarded with toxins. There should be no rants about impending pandemics or about Intelligent Design being denied a spot in biology class by a nefarious cabal of political and educational bigwigs.
When such alarmist language is used, it is probable that the “study” had predetermined results. Scientists search for evidence, develop a hypothesis based on what they’ve found, then test those ideas. Pseudoscience starts with a desired hypothesis, then seeks evidence to support it, perhaps even relying on anecdotes. Testing, if done at all, is shoddy and conducted by biased researchers, who either agree with the agenda or who are being paid to reach desired conclusions.
It is true that research can reveal serious dangers. But the results should speak for themselves. If there are shouts of alarm, they should come from those hearing the report, not those giving it.
Warning bells should also sound when hearing that something is "all natural." The product probably is all natural, but there would be no reason to mention this, as "all natural" does not necessarily mean all good. Hemlock, mercury, and mamba venom are all natural. Throwing this term around is meant to appeal to persons seeking a healthy lifestyle and has little scientific relevance.
Similarly, vague, undefined terms with no scientific meaning are another red flag. Examples include negative energy, immune booster, and chi. And products that are claimed to be based on ancient or long-lost knowledge are best avoided.
One final point, while not fraudulent like the previous examples, still deserves mention: the honest mistake of confusing correlation with causation. A University of Pennsylvania study, published in Nature, found that children who slept with the light on were more likely to develop myopia. However, a later study at Ohio State University found that the true cause was genetics: myopic parents were more likely both to pass on myopia and to put their children to sleep with the light on. The light correlated with myopia, but it did not cause it.
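A short simulation makes the confounding concrete. All the numbers below are invented for illustration: parental myopia drives both the nightlight and the child's myopia, while the light itself is given no causal effect at all. The correlation still appears.

```python
import random

rng = random.Random(0)
n = 10_000
data = []
for _ in range(n):
    parent_myopic = rng.random() < 0.3          # the hidden confounder
    # Myopic parents are more likely to leave the light on...
    light_on = rng.random() < (0.7 if parent_myopic else 0.2)
    # ...and to pass on myopia; the light has zero causal effect here.
    child_myopic = rng.random() < (0.5 if parent_myopic else 0.1)
    data.append((light_on, child_myopic))

def myopia_rate(rows, light):
    group = [myopic for lit, myopic in rows if lit == light]
    return sum(group) / len(group)

# Children who slept with the light on show more myopia, even though
# the simulation gave the light no causal role whatsoever.
print(myopia_rate(data, True), myopia_rate(data, False))
```

Conditioning on the parents would make the spurious link vanish, which is exactly what the Ohio State researchers found in the real data.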