Moon loon tune

The 50th anniversary of the moon landing will be in 2019, but don’t expect a golden year from those who insist it was a hoax. After 49+ years, this bunch still resorts to long-disproven scenarios, while summarily dismissing any discomfiting evidence.

As to why NASA would pretend to go to the moon, deniers have speculated it could have been seen as a Cold War victory, that it distracted from the Vietnam War, or that it ensured the space administration would continue being funded. While those all might have been consequences of a successful moonshot, consequences are not proof that the whole thing was staged. To use this line of thinking is to commit the Affirming the Consequent fallacy.

Since a sizable majority think we went to the moon and most who feel otherwise are incapable of being persuaded, why blog about it? Primarily because there may be a 12-year-old who is hearing denier points and refutations to them for the first time. Scientific knowledge is always one generation from extinction. Plus, addressing these points is a rejoinder to those who claim skeptics and scientists are the truly closed-minded and are mindless sheep who instinctively swallow what we are fed.

After the Gemini and Apollo launches, early flat-Earthers Samuel Shenton and Charles Johnson responded with launches of their own, in the form of charging the missions were fabrications. This included an evidence-free assertion that Arthur C. Clarke directed, wrote, and produced the moon-landing script. Another narrative updated the auteur to Stanley Kubrick. The latter assertion began as a parody of the Clarke claim, but has come to be interpreted as serious by some deniers. This is similar to how some flat Earth folks are coming to believe there is no Finland or Australia, ideas that were written as satirical criticisms of flat Earthers. However, fashioning a Poe against these types is nearly impossible because it will come to be taken as true by those without the mental acumen to realize they are being mocked.

The question deniers have most difficulty answering is why NASA would fake five subsequent landings. The number of moving pieces that would have to be seamlessly assembled for one successful hoax would be astronomical, and each further attempt would run a further risk of getting caught. The return trips were interpreted by deniers as attempts to continue the momentum, while the fact that we haven’t been back since 1972 or set up moon colonies is said to be proof it was staged. So return trips and a lack thereof are both considered evidence of a hoax by the conspiracy theorist.

According to Skeptoid’s Brian Dunning, 400,000 persons worked on the moon mission. Yet all were able to overcome the desire for wealth that an exposé might bring. None were overcome with guilt, none let something slip in an unguarded moment, none got drunk enough to say something, none made a deathbed confession. Dunning further noted that 3,500 journalists investigated, researched, reported, and observed every second of Apollo 11 and were unable to uncover anything suggesting it was a charade. To a conspiracy theorist, that means another 3,500 persons were in on it. To everyone else, it’s more solid evidence of the moon launch and landing being authentic.

Now let’s plow through some of the denier points. One of the more frequently parroted is that persons attempting to leave Earth’s orbit would be fried by the Van Allen belts. This is an example of what Dr. Steven Novella means when he says pseudoscientists and alternative medics use science like a drunk uses a lamppost: For support, not illumination.

The radiation belts have been discovered, understood, and explained by science. Moon landing deniers, a subset of pseudoscientists, use this discovery to try to score a point for their side, even though they generally have a jaded view of science. Religious flat Earther Philip Stallings insists the Van Allen belts are another name for the firmament God set in place in Genesis. However, never has a scientific explanation been replaced by a religious one. Scientists did not discover, define, and explain the Van Allen belts, only to be supplanted by those penning Genesis. Those religious writers did not discover errors in the original Van Allen belt research, leading to our understanding of the firmament. Rather, Genesis authors came up with what their eyes and their very limited knowledge of the natural world permitted. A few millennia later, science learned the truth. Still, Stallings claims that we cannot penetrate the firmament, which he thinks is the Van Allen belt, or that if we could, it would not be survivable.

The key here is that astronauts traveled through the belts in a rocket, not in an extended-stay hotel. They made it through this high-radiation zone in an hour, only one percent of the time necessary to start experiencing radiation sickness.
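
That one percent figure implies a sickness threshold we can back out with simple arithmetic. A minimal sketch, using only the numbers above:

```python
transit_hours = 1             # time spent crossing the belts
fraction_of_threshold = 0.01  # the transit was 1% of the sickness threshold

sickness_threshold_hours = transit_hours / fraction_of_threshold
print(sickness_threshold_hours)  # 100.0 -- roughly four days of continuous exposure
```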

Another argument deniers try to make is that a loud rocket motor would have made it impossible to hear astronaut voices. However, viewers could hear the communication with NASA because where the astronauts were, there was no air and therefore no engine noise. Additionally, the microphones were inside insulating helmets.

A third point deniers raise is that photos of the Lunar Module on the surface are missing a blast crater that presumably should have resulted from its landing. Of this, Dunning wrote, “When the Lunar Module came in to land, it came in with horizontal velocity as the pilot searched for a place to land. Once he found one, he descended, throttled back, and a probe extending over a meter below the landing pads touched the ground and shut off the rocket motor. It was only a very brief moment that the rocket nozzle was actually directed at the landing site, and only at reduced power.”

A similar point is that the Lunar Module’s landing rocket would have blasted all the dust away from the area, so any footprints would have been obliterated. However, there is no air on the moon and no resulting shockwaves. The powerful flames and swirling smoke associated with rocket launches happen because exhaust is being pushed into the air. With no wind or air in the equation, there is no consequent explosion.

The one claim so hackneyed that almost everyone has heard it is that the U.S. flag is flapping in a supposedly nonexistent breeze. The apparent flapping was caused by two factors. First, the flag was folded for the moon trip and the seeming rustling is actually just the creasing that resulted. Second, the apparent movement only happens when an astronaut is adjusting the pole.

Still another denier objection centers on photos of an astronaut that feature another moonwalker’s reflection in his helmet visor. This is supposedly crucial because neither astronaut has a camera to his face. However, this is because astronaut cameras were affixed to their spacesuits. Keeping with camera points, deniers say film would have melted in the 250-degree weather. However, Apollo astronauts used cameras and film specifically made for and insulated against such temperature extremes.

There were still other objections raised by deniers that I handled during this blog’s nascent days, if one wishes to read more.

For years, deniers challenged NASA to provide photos of landing sites with vehicles left behind. In 2009, the Lunar Reconnaissance Orbiter provided just such proof. Two years later, the same craft produced clearer images. Like those who considered President Obama’s release of his long-form birth certificate to be MORE proof that he was Kenyan-born because of layers or the timing of the release or whatever, those who thought Armstrong and Aldrin never left orbit were even more convinced of this after the 2009 and 2011 images were made public. The images, they insisted, were computer-generated or otherwise fabricated, and were not released in 1975 or 1985 because of technology limitations – not with satellites, but with Photoshop. To a hardcore conspiracy theorist, any disproving evidence is part of the cover-up.

Besides these photos, a second key piece of evidence that the moon landing happened is the extensive monitoring of Apollo flights. Astronomers, academics, journalists, and excited amateurs all employed telescopes, radios, and radar to track the mission. This included enemies such as the Soviets. Observatories and hobbyists worldwide reported sightings of the Apollo spacecraft. Had the Apollo spacecraft remained in Earthly orbit, it would have been easy to spot even without a telescope.

Then there are the rocks brought back by astronauts. These rocks have been radiometrically dated as being nearly four and a half billion years old, more ancient than any naturally occurring Earth rock. Dunning further noted, “The moon rocks have impact craters only a millimeter across, created by impacts from micrometeors traveling about 50,000 miles per hour. This is impossible on Earth because the atmosphere blocks them, and it can’t be faked because we don’t have anything that can accelerate small projectiles to that speed.”
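
For the curious, the dating itself rests on a standard parent-daughter decay equation. Below is a minimal sketch, assuming a sample with no initial daughter isotope; rubidium-87, with its roughly 48.8-billion-year half-life, is one isotope used in dating Apollo samples, and the ratio shown is picked purely to make the math land near four and a half billion years:

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Parent-daughter age equation, assuming no initial daughter isotope:
    t = ln(1 + D/P) / lambda, where lambda = ln(2) / half-life."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_constant

# Rubidium-87 decays to strontium-87 with a ~48.8-billion-year half-life.
# A daughter/parent ratio of ~0.066 (illustrative) yields about 4.5 billion years:
age = radiometric_age(0.066, 48.8e9)
print(f"{age / 1e9:.2f} billion years")  # ~4.50
```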

What say you to all this, Philip Stallings? From his blog: “1969. That was the year you were told we went to the moon. Do you see anything suspicious about that number? Three 6’s.” I’m only seeing one six myself. Maybe the two nines got turned upside down when they hit the firmament.

Water water everywhere, so let’s all take a drink

Most of us need eight hours of sleep a night to fully function. But the daytime equivalent, the notion of needing eight glasses of water per day, rests on myth.

Zero glasses per day would leave someone dead within a week, while eight glasses is likely more than necessary, so where does the true number lie? That depends on the person and circumstances.

Whoever the person, their body will be among the least efficient users of water on the planet. Regrettably for us Homo sapiens, the need we can go the second-shortest time without (after oxygen) is one our bodies can store little of. Further, we have no way to replenish spent water supplies other than drinking it or having it administered intravenously. The latter is impractical outside a medical setting, so we need to make sure we gulp enough, but the idea that this means eight glasses a day for persons of every age, weight, climate, and activity level is mistaken.

That notion dates to a 1945 U.S. Food and Nutrition Board suggestion that persons get two and a half liters of water daily. Two key items here. First, this amount was based on the mostly-correct idea that humans on average lose about two liters of water per day. But no research was conducted to affirm the idea of 2.5 liters being right for all persons in all circumstances. Second, the recommendation included the long-forgotten caveat that some of the consumed water could come from food sources.

All foods contain water, from the copious amount in aptly-named watermelon to the negligible level in saltines. The food and drink one intakes without thinking about it may suffice for one’s needs and the easy trick is to let thirst be your guide. There is no need to consciously ingest eight 8-ounce glasses per day unless that happens to coincide with what your thirst dictates.
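
Incidentally, the slogan doesn’t even match the 1945 figure it descends from. A quick check of the arithmetic, assuming U.S. fluid ounces:

```python
US_FLOZ_LITERS = 0.0295735  # one U.S. fluid ounce in liters

glasses, ounces_per_glass = 8, 8
total_liters = glasses * ounces_per_glass * US_FLOZ_LITERS
print(f"{total_liters:.2f} L")  # ~1.89 L, short of the board's 2.5 liters
```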

Humans lose water in vapor form when we breathe, and still more is lost through urine and sweat. Even an Inuit couch potato will perspire, though imperceptibly, and this goes back to our inefficient use of internal water supplies. Our bodies use sweat for temperature control: it draws heat off the skin as it evaporates.

The amount varies by person and environment, but the average amount lost per day to sweating, breathing, and urinating is two liters. Whatever is lost must be replenished to maintain equilibrium. But, again, two liters is merely the average, and the determining factor is how much a given person has lost, and water contained in foods also serves to replace spent reserves.

Of course, one should adjust if in hot weather or doing hard labor. And in an article on the McGill University website, Dr. Christopher Labos cautions that the thirst reflex wanes with age, which is one reason seniors die during heat waves. So age, temperature, and activity can all create reasonable exceptions to the rule that consciously drinking a set amount of water per day is unnecessary.

If a person in those circumstances drinks too much, they should be fine. Except in extreme cases, drinking more water than what the body needs is harmless, though without benefit. Excess amounts will be pissed away. The kidneys’ primary role is to ensure water losses equal water intake. If they fail in this mission and water retention occurs, the victim will experience swollen feet, with this ballooning then creeping its way up the legs. This is nature’s way of letting us know a vital organ is failing and we need medical attention immediately.

There have been isolated cases of water intoxication, a disruption of brain function that occurs when the usual balance of electrolytes is thrown off through severe over-hydration.

The campy 1970s phenomenon, the Book of Lists, reported on a woman who was convinced she was susceptible to the same type of cancer that killed her mother, so she consumed gallons of water for days on end, causing her overtaxed kidneys to shut down, killing her. Then in 2007, Californian Jennifer Strange died in a radio stunt gone horribly wrong. She chugged about two gallons without urinating in an attempt to win a contest prize of a Wii system.

Like the 1970s victim referenced in the previous paragraph, some persons think extra water will make the kidneys more efficient. But Labos cited a randomized study in the Journal of the American Medical Association in which 631 kidney disease patients drank more water than members of a control group and experienced no improvement.

So the best available evidence points to the notion of needing eight glasses a day being unfounded. If they start messing with my eight hours of sleep, then we’ll have issues.

“Locally groan” (Local produce)

The list of alarmist adjectives on some food containers is so long that soon it may need to be continued on the back. Gluten-free, MSG-free, rBST-free, non-GMO, organic, no aspartame, no glyphosate, all-natural, no preservatives, no added hormones, no antibiotics.

I have addressed these concocted carton concerns before and will not be rehashing them here. But when this word parade includes the word “local,” I figured that was one I could support. The closer the food on my plate is to the farm where it was grown, the less fuel and resultant pollutants are being produced. Or so it seemed. But Brian Dunning at Skeptoid cautions this may not always be the case. The issue is complex, and edibles shipped from farther away may sometimes mean fewer emissions.

Besides being a critical thinker, skeptic, and possessor of broad knowledge, Dunning also has a background in food produce. He once worked for a company that blossomed from a family fruit stand to a chain that sold produce from local family farms. In its nascent years, the company would send a truck to each farmer it purchased from and deliver the food straight from the farm to the grocer’s. As the number of stores multiplied, the company maintained this method.

But soon the owners realized that finding a farmer near each new store they opened was unfeasible. Sending a truck to each farm and to each market resulted in routes crisscrossing and defeated the strategy’s intent. It proved to be terribly inefficient, besides being the antithesis of the green-friendliness they were aiming for.

So the company combined routes, enabling it to use fewer and smaller trucks, which meant less local produce but also less burned fuel. A distribution center still got the food out quickly but substantially reduced the total mileage. As the company continued to grow, larger distribution centers were built, sometimes even farther away from the markets they delivered to, but the energy savings continued to be realized.

This can work even on monumental scales. In some cases, Conex-sized purchases made from a company overseas might still be cheaper for the retailer. A crop’s cost is driven mostly by the conditions required to grow it. Spain’s soil and climate make for fertile tomato growing year-round. By contrast, perennially dreary England means tomato growers there need to use heated greenhouses. The costs associated with that method must be passed on to the consumer. Therefore, a food wholesaler in Leeds would be making a good decision in terms of profit and energy efficiency if he has the red fruits shipped from Catalonia rather than from five miles away.

Or say you live in Moline and want some wool or lamb chops for your business. There are no shepherds in your neighborhood, so whatever are you to do? You could head to rural Illinois and likely find someone who could help. But if buying on a large scale, this would not be the most energy-efficient method.

New Zealand’s climate allows for perennial sheep grazing, so our prospective purchaser would be better off looking there. And despite New Zealand being almost halfway around the world, if our British tomato buyers decided to branch into mutton, they would make less of an environmental impact by buying from someone near Auckland as opposed to someone in the London vicinity. A New York Times article noted that, “Lamb…shipped 11,000 miles by boat to Britain produces 1,520 pounds of carbon dioxide emissions per ton, while British lamb produces 6,280 pounds of carbon dioxide per ton, in part because poorer British pastures force farmers to use feed.”
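
Taking the Times’ figures at face value, the size of the gap is easy to quantify:

```python
nz_shipped_lb_co2_per_ton = 1520  # NZ lamb, including the 11,000-mile voyage
uk_local_lb_co2_per_ton = 6280    # British lamb raised on feed-supplemented pastures

ratio = uk_local_lb_co2_per_ton / nz_shipped_lb_co2_per_ton
print(f"Local British lamb emits about {ratio:.1f}x the CO2 of shipped NZ lamb")  # ~4.1x
```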

Finally, Dunning cited the case of cattle producer Joel Salatin, who stipulates that customers must come to his ranch. That may seem like a method of reducing emissions, but it actually exacerbates the problem. Under this plan, if 200 customers want Salatin’s beef, 200 of them will get in a car and drive to him. A better strategy would be to only service orders that use no more than a specified amount of fuel spent per pound of beef purchased. But at least he’s not selling it in packages that spend 20 words telling the consumer what’s not in it.

“Hang a leftie” (Southpaw shaming)

In my early teens at church, some older youth were talking about a tabloid article which purported that all lefthanders were from outer space. This led the preacher’s southpaw son to say into the fountain pen he was holding, “They’ve discovered us, Master.”

Funny as that impromptu line was, it obscured the fact that being a lefthander in church just a few hundred years before would have been no laughing matter. Just how long the church considers something evil varies by sin. Gays and evolution have sat near the top of this Luciferian list for more than a century. Meanwhile, excoriations of Catholics and dancing have moved to the fringe of Christianity. And congregations that consider mixed fabrics and lefties to be Satanic spawn are virtually extinct.

While southpaws were traditionally reviled in most societies, there have been exceptions. Ancient Andeans thought lefthanders were bestowed with magical and healing properties. Also offering left-handed compliments were Greeks and Celts, the latter associating them with femininity and, therefore, the continuation of life. Jews and Christians likewise tied left-handedness to womanhood, but given the misogyny prevalent in those religions, adherents considered this a detrimental trait. Believers viewed lefties like they did their womenfolk: Inferior, weak, and destined for subservience.

In the book of Matthew, souls gather at check-in to see where their eternal reservations have been made and are told, “He shall say unto them on the left hand, depart from me, ye cursed, into everlasting fire, prepared for the devil and his angels.” While the malefactors are tossed into a burning lake, Jesus sits at God’s right hand. With these images in mind, more than a few left hands were bruised by a nun’s ruler and it was common fairly deep into the 20th Century for schools to forcibly retrain lefthanders to use the correct side.

Christianity claimed no monopoly on this southpaw shaming. Even today, many Muslims and Hindus use their right hand for honorable tasks such as greeting friends, signing contracts, and accepting gifts. Meanwhile, the lowly left is reserved for actions considered unclean. These habits grew from sanitation issues. Since the right served as the dominant hand for 90 percent of the population, persons used it when eating, handling food, and interacting with others. The left hand, meanwhile, was used for hygienic activities. These customs were uniform with no consideration of an individual’s dominant hand so the left came to be considered unclean.

And these were minor annoyances compared to how other cultures dealt with left-handedness. Some 19th Century Zulu tribes scalded youngsters’ left hands so they would no longer be of use. Perpetrators of the Spanish Inquisition and Salem Witch Trials went one worse, sometimes executing persons for using the wrong hand.

Tired of religion having all the fun out in left field, pseudoscientists got in on the act. Downplaying the morally degenerate angle, they instead considered lefties to be a biological mistake. In the early 1900s, pioneering criminologist Cesare Lombroso offered precisely that take with writings that would make a Klansman proud. Switching the blame from Beelzebub to the brain, Lombroso insisted that “as man advances in civilization and culture, he shows an always greater right-sidedness as compared to…women and savage races.” Lombroso further associated left-handedness with the primitive and the barbaric, while considering right-handers to be civilized and peaceful.

Around the same time, a McClure’s article informed readers that southpaws were “more common among the lower strata, negroes, and savages.” If desiring a viewpoint even more, um, right wing, consider what Austrian physician and psychologist Wilhelm Stekel wrote in 1911: “The right-hand path always signifies the way to righteousness, the left-hand the path to crime. Thus the left may signify homosexuality, incest, and perversion, while the right signifies marriage.”

This bigotry faded over the next few decades, though it lingered in some quarters. In the 1970s, psychologist Theodore Blau was still calling left-handed children sinister, academically suspect, and prone to mental illness. And just three years ago, an Oklahoma preschool teacher forced a 4-year-old southpaw to use his right hand. When pressed for an explanation, the teacher referenced a publication that branded lefthanders evil, unlucky, and sinister. She also made note of Satan’s supposed southpaw status.

One of the few nuggets of accuracy in all this is that nine out of 10 humans are right-handed. And this biological determination runs very deep. In a Discover article, retired University of Kansas anthropologist David Frayer discussed how he deduced that our ancient forebears, as far back as 1.8 million years ago, had the same 9-to-1 preference.

He observed a series of ridges on the outer surfaces of Neanderthals’ upper front teeth. As to how this indicated hand preference, the article explained: “One direction of diagonal marks, either from upper right to lower left or upper left to lower right, would dominate. Individuals working with tough, fibrous material could have held it between their teeth and one hand, then used an edged stone tool to saw off a small piece with the other hand.” These observations showed the 9-to-1 ratio.  

As to why it was happening even way back then, one theory holds that the brain’s hemispheres split tasks for purposes of efficiency and this division of labor included favoring the right hand for most manual activities. That would explain why most persons are right-handed, but what answer is there for the relative few who become lefties?

Neuropsychologist Chris McManus theorizes that lefties result from a mutation that began occurring around 60,000 years ago. This mutation does not precisely mandate left-handedness, but it cancels the bias for the right and gives those who inherit it a 50-50 chance of being left-handed. That clears up how a set of identical twins can include a righty and a lefty. And what McManus and Frayer have discovered likely explains why lefties are among us without needing to resort to demons, defects, or alien preacher children.
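
A toy calculation shows how a model like McManus’s can square a roughly 10 percent lefty rate with discordant identical twins. The carrier frequency below is a made-up illustrative value, not a figure from his work:

```python
# Toy model of McManus's proposal: carriers of the mutation lose the
# right-hand bias and are left-handed with probability 0.5;
# non-carriers are right-handed.
carrier_freq = 0.20  # illustrative assumption, not a published estimate

lefty_rate = carrier_freq * 0.5
print(f"Population lefty rate: {lefty_rate:.0%}")  # 10%

# Identical twins share genotype. For carrier pairs, each twin is an
# independent 50-50 flip, so a righty-lefty pair occurs half the time:
discordant_given_carriers = 2 * 0.5 * 0.5
print(f"Discordant carrier twin pairs: {discordant_given_carriers:.0%}")  # 50%
```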

“Inheritance facts” (Heritability)

There is a minor Internet presence who calls himself the Libertarian Realist, though given his endorsement of the Confederacy and fluoridation conspiracy theories, I doubt he’s either.

And as a long-time libertarian and skeptic, I find libertarian conspiracy theorists to be the planet’s most baffling creatures. They think a government too incompetent to build roads, run schools, or implement a sensible welfare program will simultaneously master geoengineering, the AIDS crisis, and false flag shootings.

With this guy, however, conspiracy theories are only a tiny fraction of his work. He focuses mostly on race and fixates on the idea that those of his color (excluding Jews) are more intelligent and fit than all other skin tones, especially blacks.

He arrived at these conclusions mostly by misunderstanding, or choosing to ignore, how heritability works. Eminent skeptic Emil Karlsson explained that heritability estimates the amount of variation in a given trait within a population that cannot be explained by environment or random chance. Further, it is unrelated to genetic differences between populations, much as the Libertarian Realist wishes that it were.

Science blogger Gerhard Adam provided a concise description when he wrote, “Heritability addresses the relationship between nature (genetics) and nurture (environment), so that as each changes, the variation between individuals within a population can be estimated based on these influences. In this context, environment refers to everything external to the genome that could affect expression.”

Race pseudoscientists like the Libertarian Realist make three key errors with regard to heritability. First, they mistakenly think heritability is a measure of how genetic a trait is. They think genes are nearly the sole factor for determining traits and consider the environment much less relevant. This is mistaken since heritability is about how much variation in a trait can be explained by genetic differences.

Consider the heritability of height for North Koreans. In that country, height will mostly be determined by whether the person is in the ruling elite or among the serfs. The vast differences in nutrition and health care between those two groups will be the primary factor. By contrast, height differences among Swedes, with their egalitarian access to healthy food and medicine, will mostly be due to genetics.
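
A variance-components sketch makes the contrast concrete. In the simplest framing, heritability is the share of total phenotypic variance attributable to genetic variance; the numbers below are invented purely for illustration:

```python
def heritability(genetic_var, environment_var):
    """Toy narrow-sense heritability: h^2 = Var(G) / (Var(G) + Var(E))."""
    return genetic_var / (genetic_var + environment_var)

# Same genetic variance; only the environmental variance differs
# (illustrative values, not real measurements):
print(heritability(genetic_var=25, environment_var=75))  # stratified society: 0.25
print(heritability(genetic_var=25, environment_var=5))   # egalitarian society: ~0.83
```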

Another example: Karlsson wrote that in the mid-19th century, U.S.-born males were on average 3.5 inches taller than Dutch men. But by 2000, Dutch males were on average two inches taller than their American counterparts. According to the Libertarian Realist’s thinking, neither population should have gained a height advantage since the majority in both groups were white men. But changes did occur, and both the original American height advantage and its reversal are best explained by changes in each group’s environment.

This leads into racists’ second error: that heritability explains the differences between biological populations. But heritability refers to what proportion of variation in a trait can be explained by genetic variation within a specific population and in a specific environment; it is not a measure of how genetic a trait is. Racists rely on heritability estimates to insist that IQ and other traits are immutable, but heritability also depends on environment. Moreover, more than 90 percent of genetic variation occurs within groups, and genetic diversity follows clines more than socially constructed racial categories.

Finally, racists assert that heritability renders useless any attempt to alter traits by managing environmental factors. They say any change to education, income, food, medicine, and housing will not impact the person’s traits, which they maintain are fixed at birth owing to genetics. But as Angelina Jolie’s adopted children and multiple studies can attest, persons going from destitute circumstances to affluent ones will see multitudinous benefits beyond wealth.

“Plate histrionics” (Glyphosate fears)

There is a claim out there (way out there) that the weed killer glyphosate is present in food at unsafe levels. This claim appears in a work promoted by the likes of Food Democracy Now and Food Babe, not in peer-reviewed journals. Still, in this forum, we place a premium on what is said, not who said it, so let’s examine the assertions.

The publication endorsed by the aforementioned pair alleges that studies have uncovered dangerous amounts of the herbicide glyphosate in our cabbage and Oreos, among many other edibles. The cover of this work shows a foreboding figure in a hazmat suit saturating future food with what is implied to be toxic levels of chemicals. Accompanying that image is a munching baby next to a spray bottle of Roundup, a Monsanto product which contains glyphosate.

If shouts of alarm ever accompany a scientific study, they should come from those hearing the results, not those giving them. When the latter happens, it is almost always a sign that the “research” was meant only to confirm a desired outcome and that the Scientific Method was skirted. Still, let’s look at what the report said, not its cover or who produced it, in order to make a critical analysis of it.

Michelle Miller of Ag Daily notes that the methods used in the studies make it impossible to distinguish glyphosate from similar chemical structures and may not even be able to differentiate it from water. She wrote, “To detect glyphosate…costs hundreds of thousands of dollars and is a very difficult, scientifically complex task.” The methods cited in these studies fail to meet those standards, though not as spectacularly as Zen Honeycutt’s $125 device meant to detect glyphosate levels.   

Another crucial point is how little glyphosate is spread over large farming areas. It’s just 22 ounces per acre, which would be equal to about two sodas sprinkled on a baseball diamond. Moreover, Miller reports that she sprays just two days a year, and that’s done early in the growing season, before the edible part of the plant has emerged. Pointing out that the dose makes the poison, Miller adds that glyphosate is less toxic than baking soda.
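
To put 22 ounces per acre in perspective, here is a quick conversion, assuming U.S. fluid ounces:

```python
ACRE_SQ_FT = 43_560   # square feet in one acre
US_FLOZ_ML = 29.5735  # milliliters in one U.S. fluid ounce

applied_ml = 22 * US_FLOZ_ML         # ~650 mL spread over an entire acre
per_sq_ft_ml = applied_ml / ACRE_SQ_FT
print(f"{per_sq_ft_ml:.3f} mL per square foot")  # ~0.015 mL, a small fraction of a drop
```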

Besides, the weed killer impacts enzymes found in plants and does not affect mammals, including humans. The only harm done to animals is when lab rats, mice, and fish are force fed outrageous amounts of it.

I’m all for studies as long as they follow established protocols, employ the Scientific Method, are replicable, and are peer reviewed. Along those lines, the Government Accountability Office once called on the FDA to monitor food for glyphosate residue. But the effort was halted due to a lack of agreement on testing protocols, equipment shortcomings, and the varying analysis methods at the different FDA laboratories.

Sensing a connection between the shuttered testing program and the experiments on overdosed rodents, Food Babe pounced: “Could it be that Monsanto didn’t like the results they started getting, especially since the FDA found glyphosate in foods that should be especially safe like BABY FOOD?”

Shouting something doesn’t make it more relevant and all caps won’t make it more accurate. Instead of providing evidence for the conspiracy she suggested, Food Babe let her followers assume it was true. She provided no examples of test results that Monsanto wouldn’t like, offered no audio recordings about keeping the findings hush-hush, and presented no independent lab experiments that revealed dangerous amounts of herbicide on our plates.

Another vacuous Food Babe claim is that multiple studies show that while probable harm to humans from glyphosate begins at one part per 10 billion, foods in the studies were found to have 1,000 times that. In truth, only one of the studies she listed provided support for that claim, and that one involved testing on mice. And even among vermin, the danger was considered potential instead of probable. Glyphosate, if it’s detectable on any food at all, is in nowhere close to a dangerous amount.

There are legitimate dietary concerns out there, but glyphosate residue is not among them. Alarmist, untrue charges, on the other hand, are much harder to stomach.

“Fool injected” (Anti-vax argument)

One of the keys to developing critical thinking skills is to understand the importance of addressing a point and not the person making it. Focusing on irrelevant factors like the speaker’s color, gender, ethnicity, politics, economic status, or background will leave one vulnerable to committing an ad hominem, specifically a genetic fallacy.

A few years ago, I came across a graph that purported to demonstrate that measles was well on its way out before the vaccine to combat it was introduced. It showed that the death rate from measles had dramatically declined before persons began being immunized for it. The conclusion was that the vaccine was inconsequential to the disease’s demise. To dismiss this as the ramble of an anti-vax loon would have been to commit an ad hominem. To address the point from a critical thinking perspective, I needed to examine the claim for truthfulness, then see if the whole picture was being painted, and also consider other angles.

When I did so, I learned that the anti-vaxxer’s point was accurate, but incomplete. While the death rate for measles was going down before the advent of the vaccine, the morbidity rate was not. Measles is an endemic disease, so populations can build resistance to it, but it can also be deadly when introduced to a new group. This, when combined with measles’ highly contagious nature and the susceptibility of preschoolers to it, explains why incidences of the disease spiked and descended several times, at approximately four-year intervals.

But there has been no such spike, or even a tiny bump, since the vaccine was introduced in 1964. In fact, there were 364 measles deaths in 1963, and none by 2004, a reduction of 100 percent. The anti-vaxxer’s chart showed how many persons were dying from measles, but not how many persons were contracting it. Advances in health care had enabled more persons to live with the disease, but only the vaccine eliminated it.   

Earlier this year, I again made myself examine an anti-vaxxer claim rather than dismissing it. For years, I had pointed out there was more formaldehyde in a pear than in any vaccine. But one day, I read an anti-vax blog that asked, “When was the last time you injected a pear?” The point was that the way a substance enters the body makes a difference and the blogger even noted that one could safely drink cobra venom.

And he’s correct. Swallowing the snake juice would be different from having fangs inject it into you. If one were so inclined to try the former, the gastrointestinal tract would break down the venom, similar to how the body digests proteins in food. Also, if one drank venom, it would never enter the bloodstream in active form. By contrast, when a snake bites someone, the victim has nothing beneath their skin or in their muscles to counteract the venom. Since it’s not broken down, the venom swims to the lymph glands and into the bloodstream, where it attacks the nervous system and heart, perhaps fatally.

But while anti-vaxxers are correct on these points, they again fail to understand that this has no bearing on a vaccine’s efficacy or safety. While a snakebite and a vaccine both involve injected substances, using this to compare the two is a false equivalency because one saves lives and the other ends them.

Like the measles deaths graph, if I had dismissed the pear point because it came from someone I viewed as an anti-vax, pro-disease crank, I would have failed my critical thinking test for the day. Consider this an endorsement for avoiding echo chambers and contemplating various viewpoints. Sometimes the opposing view will be right; other times, it will be wrong, but will cause you to examine the issue and learn something you hadn’t realized. In this case, what I learned was the difference in how the body handles injections and ingestions, and the impact this has on a vaccine’s efficacy.  

The key is how much of a substance gets into the bloodstream because once it’s there, the body will process it the same, regardless of how it arrived. With snake venom, there are too many toxins for the body to handle and the poison makes its way to vital organs. While vaccines have ingredients that would be dangerous in high doses, these are in tiny amounts and toxicity is determined by dose, not ingredient. Further, venom contains active neurotoxins and vaccines do not.

Anti-vaxxers may argue that vaccines bypass the immune system, but again, they are being selective with the facts. Vaccines will bypass the body’s first line of defense, but they are designed to do so and won’t work otherwise. Vaccines contain antigens, which are dead or damaged viruses that are active enough to provoke an immune response, but too impotent to be harmful. This forces the body to develop antibodies against the real virus and thereby become immune to it. If the antigens were destroyed right away, they would never serve their purpose. Besides, antigens are not straggling interlopers, but rather they work their way out of the body like other foreign substances.

Since anti-vaxxers focus on injections, I wonder if their movement would have gained its sinister steam if it didn’t have scary needles to fall back on. What if vaccines were in chewable tablet or powder form and yielded a sweet taste as opposed to a sore arm? According to the Vaxplanations blog, the reason such an approach cannot be pursued is because oral forms of most vaccines would be incapable of getting past the gastrointestinal tract. Stomach acid, enzymes, and gut bacteria would render them useless. There are a few exceptions, such as the oral vaccines for rotavirus and polio, which work because both diseases are caused by gut pathogens.