Human Design is a form of numerology made up by Alan Krakower, who heard a voice telling him how it works, with the voice apparently encouraging him to charge others for access to the information.
Consumers input their name, the precise minute of their birth, and the time zone they were born in. In return, they receive a hodgepodge of numbers, symbols, and shapes, along with a nine-item list that allegedly describes the person. The items are vague personality attributes, not testable claims or specific facts. They contain no precise details, such as dates and locations of education or employment, which would give the chart credibility.
Still, some people embrace Human Design and its promise of easy life answers sprinkled with eastern mysticism verbiage. Skeptoid’s Brian Dunning noted that while those who embrace such notions have an affinity for the Appeal to Antiquity fallacy, it is not absolute. He wrote, “Compare two concepts of the human body: First, the four bodily humors, which nobody believes in today; and second, qi, which is widely believed today.”
The difference, Dunning continued, is that one is physical, the other metaphysical. The latter is more vague, while the former could be searched for physically, not found, and therefore be disproven.
Therefore, physical claims are dismissed and metaphysical claims embraced, especially when they purport to provide a blueprint for success without any accompanying effort.
While efforts to foist creationism on public school biology students have failed, such attempts continually arise like The Phoenix, a bird with as much claim to being real as any creationist argument.
While the legal losses have been decisive, adherents have latched onto a solitary, isolated line from a 1987 defeat and have sucked it dry for more than 30 years. The sentence suggested that teachings about human origins which fail to incorporate biology may be permissible if the purpose is secular.
There is no such animal, literally or figuratively, but proponents used this single utterance to invent the notion of Intelligent Design. In this concept, any deity or higher being, not necessarily the Biblical one, could have created life. The façade is so transparent that no follower of any religious subset besides U.S. evangelical Christians has ever embraced the idea, and a publication lauding Intelligent Design has as its cover Michelangelo’s The Creation of Adam.
ID proponents include virtually no biologists, and we could count on one evolved opposable-digit hand how many of them have done molecular biology research. While ID proponents are nowhere to be found in peer-reviewed journals, their banter is a regular feature on Christian media. There, biologists are portrayed as confused, stubborn, disillusioned, frustrated, or immoral, which even if all true, would be ad hominem attacks unrelated to the scientists’ research, findings, or writings.
Proponents embrace the god of the gaps fallacy, gleefully plugging their favored deity into any crevice science has yet to fully explain. But our focus today is on one of those who is among that literal handful of molecular biologists who endorse ID: Lehigh University biochemist Michael Behe. He accepts that microevolution through random mutation diversifies organisms into species and genera, and perhaps even families. But he feels something more is needed to explain large-scale evolutionary transitions. Into this gap, which he creates from feelings and not evidence, he wedges the Christian god. He never says that verbatim, but he does allow his evangelical Christian followers to accept this interpretation and promote it.
In a review of Behe’s latest book, Darwin Devolves, John Jay College biology professor Nathan Lents writes that Behe purportedly undertakes to prove that evolutionary processes are insufficient to generate adaptive innovations, yet the author spends precious little time addressing this.
Further, Behe devotes precious few paragraphs to key evolutionary mechanisms that serve to undermine his thesis. Consider horizontal gene transfer, which occurs when genetic material moves from one species to another, usually through a virus. For example, Lents explains, deer ticks evolved defenses against bacteria through genes that came from those bacteria.
While uncommon, such horizontal gene transfer can have profound effects on a species’ eventual lineage. Behe dedicates nary a word to this in Darwin Devolves.
Also unmentioned by Behe is exaptation, which refers to an organism co-opting a structure for a new function. Lents cites the example of mammalian middle ear bones that were adapted from jaw bones in our reptilian ancestors.
Now, when Behe writes that natural selection cannot fully account for the planet’s molecular biodiversity, he is right. But we know that because of scientific discoveries made since Darwin, not because of ancient religious texts or the writings of an iconoclastic biochemistry professor who bypasses peer review.
In an attempt to bolster his view that natural selection is insufficient, Behe writes that Richard Lenski’s E. coli experiment shows that mutation and natural selection serve only to “break or blunt genes.” But Behe misinterprets the experiment and ignores that its controlled environment is deliberately artificial. Lents notes that bacteria in the experiment have access to unlimited food, static temperatures, and high oxygen, and face no competitors, pathogens, or immune-system attacks.
Behe also dismisses finch diversification, announcing he is unimpressed with their becoming about 18 species across five genera. He compares finch diversification to the adaptive radiation of animals during the Cambrian explosion more than 500 million years ago. He gloats that finches failed to become a new phylum, class, or even order.
Lents answers that the Cambrian explosion took place over a much longer time and involved simpler animals which reproduce much faster than finches.
With an online treasure trove of overwhelming evidence available, lay persons who latch onto a favored position in lieu of science are without excuse. But a harsher criticism should be leveled at anyone whose experience and education should be used to correct those lay persons rather than to comfort them.
Van Halen famously insisted on having no brown M&Ms in their bowls backstage. This was not based on a color-based munchies preference, but was rather the band’s way of ensuring their contract had been read.
Another creative, albeit in this case distasteful, use of M&Ms will be the focus of this post. In this instance, the candies are at the center of a hypothetical, foreboding challenge in which a small fraction of them have been poisoned.
Presented with a bowl in which, say, three percent of the M&Ms would have fatal results if ingested, a person is rhetorically asked if they would gobble a handful. They clearly would not, so the analogy then compares the sweet treats to Hispanic immigrants, Muslims, AIDS patients, or some other group the speaker holds in low regard. Perhaps only three percent of them are bad apples, but we need to chop down the entire tree since we have no way of knowing which is which. The analogy is usually employed by xenophobes but has sometimes been used by those on the far left to portray men as monsters that need to be guarded against.
Regardless of whether it comes from the left, right, or someplace else, the analogy is a mistaken one. When this comparison of people to candies is made, the speaker implies that demonizing an entire population is as legitimate as declining to gobble a handful of potentially deadly tiny round confectionaries.
To see how mistaken that analogy is, turn the point against the person making it. Let’s say someone uses the comparison to insist that we should err on the side of caution and deny entry to persons of Middle Eastern ancestry. Counter that position by saying that while most MAGA hat wearers are not violent hatemongers who would attack minorities, three percent of them might do so. Therefore all persons expressing xenophobic sentiments should be stripped of their citizenship and deported. Unless the proponent is likewise willing to embrace this position, he or she doesn’t truly believe in the comparison.
Further, the analogy implies that we could reduce the risk to zero by avoiding all M&Ms. In the same way that the color-coded National Terrorism Advisory System includes no all-clear and thus keeps us in a perpetual state of worry, the M&Ms in the analogy are meant to cause perpetual concern. The only way to be sure to avoid danger is to avoid them entirely. The candy analogy seems to work because most people would not eat one M&M if there was even a .0001 percent chance of being poisoned. But nothing is ever risk free and no analogy proponent would think we should avoid getting out of bed, an event that kills dozens of people a year.
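The arithmetic behind the analogy's surface appeal is simple to sketch. Assuming each candy is independently poisoned with some probability (the 20-candy handful below is an illustrative assumption, not a figure from the analogy itself), the chance of grabbing at least one deadly piece is one minus the chance that every piece is safe:

```python
# Probability that a handful contains at least one poisoned candy,
# assuming each candy is independently poisoned with probability p.
def poison_chance(p, handful_size):
    return 1 - (1 - p) ** handful_size

# With 3 percent of the bowl poisoned, a 20-candy handful is nearly a coin flip:
print(round(poison_chance(0.03, 20), 3))  # 0.456

# Even a minuscule .0001 percent chance per candy is not literally zero:
print(poison_chance(0.000001, 20) > 0)  # True
```

Refusing the bowl is rational on those numbers. The problem is that people, unlike candies, are neither identical, nor randomly sampled, nor certainly lethal, which is where the analogy falls apart.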
Also, even if you happen to come across a dangerous member of the derided group, you may well escape unscathed, whereas with the poisoned M&M, death is a certainty for anyone who consumes it. Therefore, the danger posed by the group member is greatly exaggerated when compared with how likely they are to harm a specific person.
Finally, the analogy falls flat since M&Ms all look the same, except for the color difference, and there would be no way of knowing which ones were poisoned. But when it comes to people, background checks and indicators give us a good idea of how dangerous a specific Hispanic, Muslim, or other group member is likely to be.
Last post, we took the far left to task, so in the interest of being nonpartisan, we will today call out the right wing. Specifically, we will look at the insinuation that Joe Biden is responsible for rising gasoline prices. This is not a true partisan issue, as some left-wingers have blamed Republican presidents for pump pain, and there are plenty of conservatives who understand that the White House doesn’t set gas prices.
But those who do think that are the focus of today’s post. Except for some negligible indirect influence, the commander in chief has nothing to do with whether one shells out two dollars or five for a gallon of mid-octane.
According to the U.S. Energy Information Administration, the key factors in the price gasoline consumers pay are: taxes; crude oil cost; refining costs and profits; and distribution and marketing costs. The executive branch’s positions on civil liberties, infrastructure, national defense, Brussels sprouts appreciation proclamations, and everything else are nonfactors.
Writing for the New York Times, Richard Thaler explained that the U.S. consumes 20 percent of the world’s oil while owning just two percent of the reserves. That means the Middle East has us by the collective balls in perpetuity.
Thaler wrote that while this leaves the U.S. little say in the price of oil, the country could help itself by reducing consumption, using oil more efficiently, and prioritizing alternative fuel sources. But this would be tedious even if everyone was on board with the ideas. And that is not the case, as evidenced by the ostentatious souped-up trucks which double as moving platforms for oversized U.S. and Confederate flags (pick a side, dude).
And even those Americans not in the redneck subset love their automobiles. Further, alternative energy has seen only lukewarm results. Therefore, Thaler opines, a better approach would be to gradually raise gasoline taxes to what they are in Western Europe. Because those taxes are high, fuel-efficient automobiles are far more common in Germany than in Georgia. The high taxes could be more than offset by the drop in demand.
So the one indirect impact a president could have would be to suggest charting this corrective course. But that would be political suicide in the United States. So they do nothing and we are left with the bizarre, indefensible spectacle of praising or condemning the executive branch for something beyond its control. We might as well blame them for my leaky faucet.
Thaler wrote his piece in 2012 but nothing has changed since then. For a specific look at today’s Biden Blame, we consider the writings of Jonathan Oher on thejostle.com. He highlights some social media posts which insist the president is responsible for the rising prices and others which portend an even more frightening fuel future.
On Biden’s inauguration day, the average price for a gallon of gasoline in the U.S. was $2.37. The posts Oher cited put prices 30 percent lower than that, but beyond the factual error is the mistaken insinuation as to who is to blame if the price becomes 4, 5, or even 6 dollars per gallon. Tellingly, none of the posters seem ready to heap praise on the president if prices plummet to $1.50 a year from now.
The posts also play loose with the facts, showing prices a few days before and after inauguration day, but posting them from different parts of the country. Different locales will always pay different prices because of state taxes and distribution costs. Using this disparity to make the point would be like comparing the January temperatures in Minneapolis to those in Miami and blaming the president for global warming.
But, again, the key point here is not the actual price or the fluctuation but the party responsible.
The rise seen over the past two months is primarily due to a correction of gas prices that dipped during the pandemic, which created an artificial drop in demand. With the country somewhat opening up, full tanks are needed for these trips to the now-open malls, sports arenas, and restaurants.
Beyond fuel usage, crude oil cost plays a role, as the slick substance is likewise recovering from the pandemic. The cost went down more than 10 percent from January 2020 to January 2021. As that price corrects, gasoline prices will rise, as will the number of misinformed memes about who is responsible.
While the loony far left dominates colleges, the rigid, absolute field of mathematics would seem to provide a, how shall we say, safe space from all this.
Alas, that is no longer the case, with the advent of “A Pathway to Equitable Math Instruction: Dismantling Racism in Mathematics.”
This pompous pamphlet thunderously asserts that the following are racist acts: expecting students to meet benchmarks; teaching math in a linear fashion; focusing on how to get the right answer; showing one’s work; and raising one’s hand to be recognized.
While ostensibly meant to somehow bolster Black children, the tract instead belittles them by assuming they should never be expected to gain mathematics proficiency. As Columbia University Linguistics and Music History Professor John McWhorter wrote, “It claims to be about teaching math while founded on shielding students from the requirement to actually do it. This is not pedagogy; it is preaching.”
Mathematics rests on explicitly-formulated definitions and facts. Were this not the case, bridges would collapse, planes would never go airborne, and monetary transactions would be a gibberish nightmare. It would be literally fatal if engineers and mechanics were to adopt such notions as new geometry, woke algebra, or calculus of color.
Math is the same everywhere. There is no German Geometry, Algerian Algebra, or French Fractions. There is no “White Way” of getting the answer and, in fact, the field serves as one of the world’s great equalizers. In math class, there are no essays where one can con their way to an answer without ever saying anything constructive. The answers, and how they are arrived at, are uniform worldwide. But this supposed math handbook, McWhorter notes, “says very little about how to actually teach kids of any ethnicity math. In fact it is detrimental to teaching math by urging the elimination of practices, like having students show their work.”
While showing work is painted as an instance of White supremacy, the practice is essential to correcting errors, demonstrates that students understand the process, and ensures the answer was not purloined from the kid one desk over.
As to arriving at the correct answer, this entirely reasonable and logical goal is considered a weapon in the White supremacist toolkit. This offensive, paternalistic absurdity assumes that most Black children are incapable of conquering the discipline.
Like McWhorter, Princeton mathematics professor Sergiu Klainerman is pained by this development: “I have witnessed the decline of universities and cultural institutions as they have embraced political ideology at the expense of rigorous scholarship. I had naively thought that the STEM disciplines would be spared from this ideological takeover.”
This now-seemingly complete takeover represents a soft totalitarianism where dissenters are not extra-judicially executed or exiled to Siberia, but are fired, doxed, picketed at home, and have a pound of their flesh extracted by the virtual mob.
Nothing in historical or contemporary mathematics suggests that it should be done in a different way based on geography or that it is race-dependent. To the contrary, math enjoys a long and rich history across the cultures, with major developments and contributions from Egyptians, Babylonians, Greeks, Chinese, Indians, and Arabs. Schools throughout the world teach the same principles and math serves as a universal language.
During international sports competitions, players on both sides may speak nary a word of their opponent’s language, but they are bonded by common rules they all follow. Similarly, race is no barrier to mathematics and this equality makes it the antithesis of supremacism.
No event is too routine to be exempt from conspiracy theorist thought. A minor Internet splash this winter has centered on the insinuation that snow, at least in some places, is actually something else.
Precisely what it is, who is responsible, and how malevolent it is, varies by claimant, but the key point is that “they” are up to something again. The excited proponents most frequently cite Bill Gates as the responsible villain. The software pioneer has achieved Rothschild/Bohemian Grove/Bilderberger status when it comes to being tabbed for every evil ever foisted upon Mankind.
In these videos, which are remarkably similar in terms of content and low production value, speakers ask three primary questions about this supposed snow. Asking questions is fine, if based on genuine curiosity. It’s another matter when questions are thinly-veiled accusations which serve as a precursor to considering those answering them to be in on the plot.
These plotters include the eminently delightful Emily Calandrelli, who explained what’s going on in these videos. In the one Calandrelli responded to, the narrator wields a butane lighter and wonders why this makes the snow char, why the snow smells like plastic, and why it melts so slowly.
It chars because of incomplete combustion from the butane lighter. Butane comprises carbon and hydrogen and the resultant black smudge represents leftover carbon from incomplete combustion.
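For the chemistry-minded, the contrast can be written as balanced textbook equations (a general sketch of butane combustion, not anything specific to the video):

```latex
% Complete combustion: only carbon dioxide and water, no soot
2\,\mathrm{C_4H_{10}} + 13\,\mathrm{O_2} \;\rightarrow\; 8\,\mathrm{CO_2} + 10\,\mathrm{H_2O}
% Incomplete (oxygen-starved) combustion: elemental carbon, the black smudge, remains
2\,\mathrm{C_4H_{10}} + 5\,\mathrm{O_2} \;\rightarrow\; 8\,\mathrm{C} + 10\,\mathrm{H_2O}
```

A lighter flame pressed against a cold surface is starved of oxygen at the contact point, which is why the second reaction dominates there.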
Calandrelli used a glass to demonstrate that the same soot results when butane lighters are applied to other objects. So unless the video producer is prepared to launch a tirade against phony drinking receptacles, this answer suffices.
With regard to the plasticky aroma, Calandrelli explains the funny smell is the consequence of the chemicals concentrating during the burning.
Finally, the white precipitation melts slower than expected for two reasons. First, most of the meltwater is absorbed into the snowball. Second, some of the snow sublimates, meaning it goes directly from solid to gas. Besides, melting snow takes more heat than most people might think. You’d get the same surprisingly slow result from using fire to try to melt an ice cube.
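A back-of-the-envelope figure shows how much heat is involved. The 100-gram snowball below is an illustrative assumption; the latent heat of fusion of ice is the standard value of roughly 334 joules per gram:

```python
# Energy required to melt a snowball that is already at 0 degrees C.
LATENT_HEAT_FUSION_J_PER_G = 334  # standard value for ice
snowball_mass_g = 100             # illustrative assumption

energy_needed_j = snowball_mass_g * LATENT_HEAT_FUSION_J_PER_G
print(energy_needed_j)  # 33400 J, before any heat lost to the surrounding air
```

A pocket lighter transfers only a small fraction of that to the snow, so slow melting is exactly what physics predicts, no conspiracy required.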
These succinct, scientific explanations contrast mightily with the open-ended nightmarish scenarios suggested by the other side.
Writing for Yahoo!, Caroline Delbert reminded readers that weather control has a long history in paranoid circles. Manifestations of this have included HAARP, chemtrails, and seeded rain clouds.
“In this case, conspiracy theorists might believe increased snowfall indicates something about climate change, which they say is part of a global agenda to push government restrictions onto residents,” Delbert wrote.
Theorists paint Gates and the Chinese government as beneficiaries of a world blanketed by pretend snow. What the white stuff actually is or how it benefits two already immensely powerful entities is unexplained. Sounds like a snow job to me.
Pete the Pup of Our Gang fame and Spuds MacKenzie were both pit bulls, a designation which refers not to a specific breed but to a collection of related ones. These include the American pit bull terrier, the American Staffordshire terrier, the Staffordshire bull terrier, and the American bully.
A Little Rascals sidekick and a party-loving beer pitchdog contrast mightily with the bloodthirsty, intimidating image of pit bulls held by many and promoted by some media.
But while pit bulls have been implicated in fatal attacks, the notion that they are by and large dangerous is a misconception. Some pit bulls were bred by unscrupulous sadists for bull-baiting and dogfighting, while others were bred to hone their friendliness, loyalty, and attentiveness.
Skeptoid’s Brian Dunning noted that the Journal of the American Veterinary Medical Association found that from 1979 to 1998, 238 Americans were killed by 403 dogs, with pit bulls and Rottweilers accounting for more than half of the tragedies.
However, the authors noted that the study failed to account for the antics and personalities of the owners. Some societal outcasts prefer the image and street cred that comes with owning an animal with a dangerous reputation and they are only too happy to promote this. So while there may be a correlation between dog bites and some breeds, there may not be causation.
The study also fails to adjust for the percentage of American dogs that each breed represents. Further, reliable numbers are unavailable because the American Kennel Club registry includes only canines whose masters have registered them, and that is primarily done by serious owners who prefer purebred, pedigreed show dogs.
Additionally, fatalities are not the only factor when assessing dog dangers. Dunning wrote that a study in Pediatrics found there were 30,000 dog bites for every fatal dog mauling. German shepherds were the most bitey breed and there are no statistics to support how likely pit bulls are to bite relative to other breeds. In fairness, however, pit bull attacks may inflict more damage or be more likely to be fatal than most breeds.
The Pediatrics study also showed that dogs are more likely to chomp away if they are male, unneutered, less than five years old, weigh more than 45 pounds, live with elementary school age children, and are kept chained outdoors. All of those are stronger predictors of dog danger than whether the animal is a pit bull.
The SkepDoc, Harriet Hall, coined the phrase Tooth Fairy Science to refer to trying to find the cause or solution to a mystery without first ascertaining that the entity exists.
One could look at the demographics of how much money is given for lost choppers and whether race, religion, or riches play a role. One could look at trends of whether molars are deemed more valuable than incisors. And one could see if the pandemic impacted any of this. But all of this would be to assume the existence of a stealthy spirit who undertakes nocturnal sojourns to children who have one less tooth than they did the day before.
While no one seriously insists on the existence of a Tooth Fairy, many do so with Bigfoot, meridians, and extraterrestrial visitors. They ponder who Sasquatch’s closest biological relatives may be, wonder which internal bodily pathway should be punctured to cure eczema, and postulate as to the purpose of alien anal probes. They do so without having first shown the relevant phenomena are real.
While Hall had medicine in mind, the concept of Tooth Fairy Science can also apply to history.
Consider the Holy Grail, which is purported to be the receptacle Jesus drank from during the Last Supper. It has been sought and written about for centuries.
However, there are no references to the Holy Grail in the Biblical accounts of Jesus, nor does the religious receptacle make so much as a cameo in any other first millennium text.
Writing for Skeptoid, Brian Dunning noted that 12th Century cleric Geoffrey of Monmouth published “History of the Kings of Britain,” which described Arthur as an unbeatable warrior and established the legend that later writers would build upon.
Then in 1190, Dunning continued, the poet Chrétien de Troyes created a heroic knight named Perceval, who proposes Arthur and his knights search for the Holy Grail in order to restore the assembly’s honor and prestige. Dunning noted that in this and future fictional works, the object was not nearly as important as the quest for it.
So it took nearly 1200 years for the notion of a Holy Grail to emerge. Since then, it has assumed iconic status and made countless appearances in print and film. According to Dunning, John Calvin identified nearly two dozen cups that their respective bearers had declared to be the true Grail. Many other assertions have been made since, some of which ascribe supernatural powers to the cup, and none of which have cleared the first hurdle of proving that there had ever been a Grail held by a dining Jewish messiah.
One attempt to drastically alter history purports that a 19th Century disaster obliterated much of the world, and in this misfortune’s wake sprung up most of today’s nations and societies.
A mud flood sludge, in all its rhyming glory, is said to have been the cause. Homes, businesses, farms, railroads, streams, and much more were said to have been swept under by the deluge. In this tale, villages that were partially buried were part of an advanced civilization called Tartaria. Residents of this futuristic landscape are described as giants who were already enjoying free wireless energy. In the same way that the 1755 Lisbon Earthquake forever bumped Portugal from its status as the world’s most powerful nation, the mud flood relegated Tartaria to the historical dustbin and allowed Western Civilization to flourish.
In what passes for their evidence, adherents point to any early photograph showing sundry town or country folks digging through high mud. Or they will refer to a modern picture of well-worn buildings featuring floors below grade, especially if there are basement windows or if excavators are busy next door, exposing basement walls or foundations. They claim this as proof those lower layers had been topped by mud. This is no more convincing than using early photos of sailors to claim the entire world was once covered with water, or pointing to same-era photos of planes as evidence that we were once an entirely airborne species.
Now let’s transition to a linguistic note. Writing for Skeptoid, Brian Dunning noted that until the mid-1800s, Europeans used the term Tartars to describe residents of largely unexplored Asian regions, such as Manchuria, Siberia, and Mongolia. Less than 200 years ago, world maps displayed an area dubbed “Tartary” in what we today call Asia. This cartographic tidbit is presented by proponents as proof that not that long ago there existed a great civilization that succumbed to wet dirt.
This is a reversal of the Great Mounds theory and of Mormon theology, both of which hold that Native American tribes were predated by White settlers. These palefaces made the greatest contributions to North America, and between those accomplishments and having been here first, are therefore entitled to the land. In the Tartaria belief system, it is those who constituted the minority in North America who are fetishized and made into inhabitants of an exotic, exalted kingdom. But the common ground between the two ideas is that both are bereft of any historical or anthropological evidence. Another charge bandied about without proof is that governments are dedicated to suppressing this evidence. If so, the authorities are failing miserably, as the mud flood hypothesis can be found with a Google or YouTube search.
Dunning wrote that city officials sometimes raise their street levels, which necessitates burying the first few floors, in order to reduce the steepness of some hills. “Similar earthmoving projects have been undertaken in cities all around the world, particularly in the decades around the turn of the 20th century, when streetcars and automobiles quite suddenly came into wide use and required regrading in areas that were already developed,” he explained.
This brief lesson on city planning and engineering is a tidy answer that obliterates any need for a mud flood explication.
The gentle giant Robert Wadlow is sometimes insinuated by believers to be one of many such behemoths to have roamed Earth at that time. Wadlow is the only one of unusual size in those photos, but that is glossed over by believers. His era was no more populated with giants than our time is chock-full of potential trillionaires because Jeff Bezos walks amongst us.
In first grade I would entertain my classmates by jumping off my desk when the teacher left the room. By my senior year in high school, I had attained a similar level of popularity by being easily the most garrulous participant in the civics teacher’s preferred Socratic Method. Some days consisted entirely of a dialogue between the two of us, and as long as the conversation kept going, the teacher would refrain from giving his boring lectures.
My two learning styles in these environments could be described respectively as nonexistent and highly participatory. But according to one hypothesis, learning can be described in one of four ways: Visual, Aural, Read/write and Kinesthetic. Students answer 16 questions about their learning preferences and a computer program spits out which learning style would work best for them.
The follow-on step is to give hands-on lessons to those who learn best that way, lecture to those who prefer presentations by subject matter experts, and show videos with pleasing graphics to the more visually-oriented. The idea seems sound and the intent is admirable.
But Skeptoid’s Brian Dunning has highlighted some studies showing the idea is not nearly as effective as advertised. He cited a study published in Psychological Science in the Public Interest, which concluded, “There is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice.”
Dunning added, “Any reasonable review of just a small percentage of the academic work on learning styles gives you the same answer: there’s no evidence that they work.”
Here’s why. First, respondents end up divided into disparate, absolute categories. They are introverted or extroverted, absorb visuals or deflect them, prefer one speaker to several. In reality, few people fit snugly into a particular group. Given an either-or option of listening to a lecture or reading a graphic-heavy textbook, the person will pick one. But perhaps the preference is a very slight one, yet it will end up being weighted 100 percent in the calculation. The scheme also leaves no wiggle room for evolving preferences or for working best with a mix of the styles.
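To see how a slight lean becomes an absolute label, consider a hypothetical winner-take-all scorer for a 16-question quiz. The function and the answer data below are invented for illustration; the real VARK instrument scores differently:

```python
from collections import Counter

def dominant_style(answers):
    """answers: 16 choices, each 'V', 'A', 'R', or 'K'
    (Visual, Aural, Read/write, Kinesthetic).
    Returns the single winning label, discarding all nuance."""
    return Counter(answers).most_common(1)[0][0]

# A nearly even 5-4-4-3 split still collapses into one absolute category:
answers = ['V'] * 5 + ['A'] * 4 + ['R'] * 4 + ['K'] * 3
print(dominant_style(answers))  # V
```

A one-question lean toward visuals thus becomes the student's entire prescribed diet, which is exactly the flattening problem described above.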
Another issue, Dunning noted, is that preference won’t necessarily equate to aptitude. You can like something without being very good at it, as a number of weekend golfers can attest.