There are two principal problems with Carl Sagan’s Baloney Detection Kit. First, it is misapplied by false skeptics, who use it as a means to enact denial and ignorance, and to apply ‘skepticism’ outside a context of neutrality and sincere investigation. Second, the Baloney Detection Kit is flat-out incorrect. Destructively incorrect. It presents an approach to citizen science which abrogates science’s correct methodological order, employs explanations which can be equivocated to justify abuse, and contains principles which are patently wrong under the scientific method. It is the work of an academic who spent a career in celebrity promotion, publishing articles and making arguments targeted at a nascent bunk-consuming public – and not one spent in the bowels of tough, ends-oriented, pluralistic research and hypothesis reduction. Ironically, perhaps its best use is in developing a framework inside of which one can observe and detect a fake skeptic.
I am a Carl Sagan fan, don’t get me wrong. But not everything he contended is correct. In one particular instance, his error was in providing the general public a mechanism which would never pass the sniff-test of Ethical Skepticism. Carl Sagan published his famous “Baloney Detection Kit” in 1995 as a mechanism to convey to the non-scientific general population permission and means to discriminate the topical veracity of various threatening claims being foisted inside a growing and increasingly uncontrollable media environment. With the proliferation of cable TV, magazines, controversial BBS (eventually internet) sites, and book stores selling New Age materials, it became necessary (in the minds of Social Skeptics) to outfit the population with an artifice which could be used as a means of controlling information; information which used to be squelched simply by denying it access to the media.
Carl habitually conveyed false depictions of what indeed is skepticism; conflating it in the quote below with cynicism and completely missing the fact that skepticism involves precisely an active, researching and open mind. He pretends that possessing an open mind is somehow the opposite of skepticism, and involves giving all ideas ‘equal validity.’ In his mad rush to pummel this strawman of what a researching open mind is, he attempts to foist the notion below that the purpose of skepticism is therefore the alternative: to force most-likely conclusions in lieu of scientific research (see Garbage Skepticism).
This false dilemma (bifurcation fallacy with a call to choose side) about what skepticism entails has resulted in a mis-education of the public as to the definition and ethic of skepticism – one which affords the cynic a comfortable hiding place inside of science. Carl expounds on this bifurcation, inside his intro to the Baloney Detection Kit:
“If you are only skeptical, then no new ideas make it through to you. You never learn anything new. You become a crotchety old person convinced that nonsense is ruling the world. But every now and then, a new idea turns out to be on the mark, valid and wonderful. If you are too much in the habit of being skeptical about everything, you are going to miss or resent it, and you will be standing in the way of understanding and progress.”
This is not skepticism at all. Skepticism is not a process of filtering out ideas that come to you, delivered on a silver platter. That is methodical cynicism (see The Tower of Wrong – The Art of Professional Lying). I cringe that a person who made it to a PhD level could carry such an errant definition of skepticism. How did he prosecute his thesis, qualify necessity, reduce alternatives – prep guidance for replication? Oh, yes – he was an astrophysicist. Abductive research rules in such subjects. But it is obvious to me that Carl never ran a research lab, nor chased down a new discovery (no, mapping things that move in the sky is not the same thing), nor solved a perplexing dilemma in a politically charged or profit-demand environment. He would have learned that his chief battle would be against these very people; zombies marching along blindly identifying as ‘skeptics’ – as if that tendered them some kind of appeal to authority in comparison to actual field observation and research.
“But if you are open to the point of gullibility and have not an ounce of skeptical sense in you, then you cannot distinguish the useful ideas from the worthless ones. If all ideas have equal validity then you are lost, because then, it seems to me, no ideas have any validity at all.”
Neither is this skepticism. Skepticism is not about being credulously receptive to new ideas. “Don’t be so open minded that your brains fall out.” My god, if you whip off this quip, not only have your brains already fallen out, but to a lab researcher it is apparent that you do not have the first inkling of what skepticism is. Skepticism is based upon the concept of epoché – a suspended neutral disposition – and is only exercised through actual science. This was a foundational tenet of science which Sagan did not grasp. I do not ignore fringe subjects because they are ‘Woo’ – I hold them as incrementally irrelevant, simply because they have not established necessity under the scientific method. This is how a real lab works. This discipline is how my materials team achieved one of its key discoveries. We were not fake skeptics, thankfully. Fake skeptics never get this – that is why it is so easy to spot them.
“Some ideas are better than others. The machinery for distinguishing them is an essential tool in dealing with the world. And it is precisely the mix of these two modes of thought that is central to the success of science.”
The purpose of skepticism is NOT to decide disposition of ideas. My god, can we get any more idiotic here? Philosophy can never …let me say it again, never, step in and over-rule science on a matter of epistemic merit or neutrality. This is not philosophy’s purpose, nor skepticism’s purpose as a part of philosophy.
But I digress, as these are only some of the pseudo-scientific pop-skeptic ideas which Sagan foisted on the general public. Sadly, not everyone has run a research lab, nor made key discoveries – and watched the methods which served to produce such things. The ‘Kit’, below, continues inside this cynical vein we just touched on, and begins with the errant step of ‘assailing the facts.’ People (cynics) who have not made the effort to understand the evidence always start by questioning the facts. This is a well-known tactic of pseudoscience, conspiracy spinning and methodical cynicism. It was the first step undertaken by the Watergate defense team, the Shell Oil Gulf oil disaster response team, Global Warming deniers, Moon Landing hoax claimants, JFK Conspiracy Theorists, ad absurdum. But amazingly, if the tactic produces the correct answer, it is suddenly sound as the Pound. Imagine that. The fallacious basis of the Kit continues from there, throughout. The Baloney Detection Kit has been inexpertly touted as a means of constructing an argument, or understanding reality, or recognizing a fallacious or fraudulent argument. A seasoned philosopher understands that the Baloney Detection Kit does none of these things, and as well does not claim to do any of these things. This set of hyperbole is typically spouted inside nonsense published by numerous dilettante voices seeking to appear scientific through recitation of false authority.
Errant Application of The Baloney Detection Kit: Burning House Science
The Baloney Detection Kit comprises a series of guidelines to help improve the clarity of deliberation inside a topic where singularity of ‘truth’ is being brought into question. In other words, a proponent is proposing a suggestion of plurality (see The Real Ockham’s Razor), a new explanation of data surrounding a phenomenon, or new data eliciting a new phenomenon. In this instance of plurality, in the case of an increasing unknown, especially in asymmetric and non-linear partially understood systems, one has to be careful to distinguish conditions of Plurality from conditions wherein one argues Cynicism and Denial. Below are some examples of these three conditions, elicited through the ‘burning house’ construct.
Gnosis (reveals cynicism)
Gnosis is the body of existing or ascertainable knowledge. If a person asserts to you that your house is on fire, all you need to do with regard to science is go there and observe the house for yourself. In addition, you should heed what the person says, because ignoring their contention could be a very costly mistake. It does not matter who makes the contention for the most part. Under no circumstance is there a necessity for deliberating the epistemology and the most likely explanation as to why the person made this contention. First, because the answer is readily detectable by direct observation, and second, because the cost of ignoring the challenge to observe could potentially be very high.
This is NOT an application of skepticism, rather an application of Gnosis. Any time a person can go check out a contention via means of direct observation, or faces an urgent need to investigate, this is what they should indeed do. To apply the Baloney Detection Kit under this condition is not skepticism, rather cynicism.
Plurality (reveals ignorance or valuable dissent)
If a person asserts that dimmable light controls in your house will start a fire and burn your house down, this is a debatable condition which might hinge on the veracity of the claim data and any potential agenda which is being sold by the claimant. Data may not be readily available on the subject and may be cryptic in its collection. To ignore this however, in the scheme of things, would not be wise. (Plurality simply means that more than one hypothesis is being investigated from necessity – dimmable light controls cause fires more frequently than standard controls, or – dimmable light controls do not cause fires more than do standard controls – and that the matter is not evaluated by one single p-value test or question.)
This is an application of skepticism, wherein one suspends judgement until sufficient opportunity to investigate and gather direct evidence, expert opinions and data on the topic. In this circumstance, the Baloney Detection Kit is applied correctly ONLY under
- an assumption of suspended judgement on the part of the skeptic, and
- a sincere effort being placed into the contention’s investigation.
Please note: if a skeptic adheres to a perspective of plurality, is willing to look at multiple sides of an issue, follows edicts 1 and 2 above, and after that diligent process objectively dissents, then heed their input, as it may well be of enormous value. If however a researcher does this, and skeptics still hound him or her, appearing unable to discern an honest researcher – take that as a warning.
Refusal to investigate, blocking investigation, pretending to investigate (Novella shuffle or Nickell plating) or applying ‘skepticism’ outside of conditions 1 and 2 above, is not skepticism, rather the act of ignorance.
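The Plurality condition above – two competing hypotheses about dimmable light controls, evaluated against gathered evidence rather than by one single p-value test – can be sketched in a few lines. The incident counts and the simple normal-approximation interval below are purely illustrative assumptions, not real fire data:

```python
import math

# Hypothetical incident counts (illustrative numbers only):
# fires observed per installed unit, for each control type.
dimmable = {"fires": 9, "units": 10_000}
standard = {"fires": 4, "units": 10_000}

def rate(group):
    """Observed fire rate per installed unit."""
    return group["fires"] / group["units"]

def diff_ci(a, b, z=1.96):
    """Normal-approximation 95% interval for the difference in rates,
    treating each group as a binomial sample."""
    pa, pb = rate(a), rate(b)
    se = math.sqrt(pa * (1 - pa) / a["units"] + pb * (1 - pb) / b["units"])
    d = pa - pb
    return d - z * se, d + z * se

lo, hi = diff_ci(dimmable, standard)

# Plurality: both hypotheses stay on the table until the interval
# excludes zero in one direction or the other.
if lo > 0:
    verdict = "dimmable controls show a higher fire rate"
elif hi < 0:
    verdict = "standard controls show a higher fire rate"
else:
    verdict = "insufficient evidence - both hypotheses remain under investigation"
print(verdict)
```

With these illustrative counts the interval straddles zero, so the ethical skeptic suspends judgement and keeps both hypotheses under investigation rather than forcing a most-likely conclusion.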
Denial (is a Martial Art)
If a person asserts that they have first-hand evidence that your wife is plotting to burn your house down because you have not paid child support, then because of the emotional entrenchment you may choose to deny that this could ever happen. One might go through extensive cognitive dismissal exercises in such a case in order to remove the idea from being a burden.
This is an application wherein one executes a mental martial art employed to remove fear from the mind. To apply the Baloney Detection Kit under this condition is not skepticism, rather the martial art of defensive denial.
The first principal error in utilization of the Baloney Detection Kit is that dilettante skeptics apply the Kit to support positions and conclusions in both Gnosis/Cynicism and Denial situations. The second is that they habitually refuse to apply it inside of Plurality under an assumption of suspended judgement and sincere investigation. All three of these scenarios of application are invalid, yet to my best perception they constitute the vast majority of the instances in which I have seen this Kit plied.
When Skepticism is Not Sincere: Humping the Elephant
A second problem is that Sagan’s Baloney Detection Kit itself is wrong. The elements are sequenced in the wrong order, the elements themselves are expressed in equivocal fashion, some of the elements are incorrect – abrogating the scientific method – and all of the elements are written in a fashion so as to, at the least, favor, imply or promote Denial, Ignorance of Plurality and Cynicism.
The question of Ethical Skepticism is not one of skillfully dismissing arguments and data in order to protect the understanding of the truth; or even what we perceive to be the ‘most likely truth.’ This constitutes simply an exercise in convincing ourselves how smart we are, and how well we understand everything around us. The real scientific question is: can one develop an idempotent method by which one can improve understanding without tampering with the data and arguments unnecessarily? In ethical science, one seeks to give competing explanations a fighting chance until falsified on their own through accrued verity – and NOT by means of how clever we are.
Inside the provision of The Baloney Detection Kit, Carl has given the dilettante practitioner a black belt method in defending their minds; the martial art of denial – false skepticism employed to protect the assumptions they were given in their youth, against ideas of which they are terrified or to promote special concealed religious and social agendas.
After all, for many skeptics, indeed the agenda is the most important thing on their mind. Or in the case of the numptant, the focus is really and finally – never about the topic – it is fixated solely and squarely upon them, their superior ability to be correct at all times. The numpty is simply humping the elephant for his own gratification, and bears no concern for mankind, suffering nor our knowledge development process. Humpty Numpty.
Numptant – /philosophy : pseudoscience : neurosis : self-obsession/ : a person who is educated or intelligent enough to execute a method, memorize a list of key phrases/retorts or understand some scientific reasoning, yet is gullible or lacking in circumspection to the degree that they are unable to understand the applicable deeper meaning/science, the harm they cause, nor their role in being manipulated inside propaganda. A numptant, or ‘numpty,’ can be discerned through the number of subjects about which they like to argue. This indicates a clear preference not for any compassion or concern regarding any particular subject, but rather for the superior nature of their own thinking, argument, adherence to rationality and compliance inside any topic in which they can demonstrate such. Science, or the pretense thereof, is a handy shield behind which to exercise such a neurosis.
The problem is that Social Skeptics, as implied in the cartoon to the right, always bear an underlying agenda – an agenda which can be protected and promoted through the employment of false method. A method crafted towards selfish, unclear, non-deontological and non-value providing ends. They are inevitably ‘humping the elephant’ for their own surreptitious benefit, as it were, and not really attempting to improve overall understanding.
So without further ado, here are the errors upon which Mr. Sagan, seeking to communicate a methodology of thought control to the general population, was indeed incorrect.
Errors in The Baloney Detection Kit – The Fake Skeptic Detection Kit
There is nothing inherently wrong with fact checking, the consideration of multiple hypotheses, debate from all sides and the application of Ockham’s Razor. The key is when, how and why you employ these principles. As with a weapon, they can be abused to effect a process which is harmful, as well as one which resolves or reduces complicatedness or the unknown. The terminology employed is useless in meaning until applied into a context – one which promotes or obfuscates science. The Ethical Skeptic is not swayed by impressive-sounding sciencey principles so much as by observing a person who demonstrates the ability to employ them skillfully and ethically. And as is the case with a weapon, one can spot an amateur on the firing line very quickly by the way in which they handle their weapon. Just because one holds it and fires it does not mean they know what they are doing. Below, you will observe in The Baloney Detection Kit a person who is firing a weapon – but bears none of the earmarks of those who are skilled at its mastery.¹ For a detailed checklist of fake skeptic traits, check this out: How to Spot a Fake Skeptic.
- Wherever possible there must be independent confirmation of the “facts.”
The first step of the scientific method is to make observations, not assail purported ‘facts.’ This first advisement is 100% incorrect under the scientific method. The first aspect of an argument to examine is its soundness, predicate and logical calculus – and not ‘facts’. This is why fake skeptics scream so often about ‘facts’, ‘evidence’ and (informal) ‘fallacies’:
- Facts constitute a relatively weak form of inference as compared to soundness, predicate and logical deduction; offering a playground of slack and deception/diversion in the process of boasting about argument strength or lack thereof, and
- Most faking skeptics do not grasp principles of soundness, predicate and logical calculus, nor the role of inductive inference in the first place. ‘Facts’ are the first rung on the hierarchy which they possess the mental bandwidth to understand and debate.
A deductive falsification finishes its argument at the Soundness and Formal Theory levels of strength assessment. It is conclusive regardless of circumstantial informal issues. These are rendered moot precisely because falsification has been attained. Faking skeptics seek to distract from the core modus tollens of a falsification argument by pulling it down into the mud of circumstantial ‘facts’ instead; relying upon the reality that most people cannot discern falsification from mere inference.
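The deductive structure at work here is the classic falsification schema, modus tollens: if a hypothesis H entails a prediction P, and P is observed to fail, then H is refuted – and no quantity of informal, circumstantial quibbling can reverse it:

```latex
% Falsification as modus tollens:
% H entails prediction P; P fails; therefore H is refuted.
\[
  H \rightarrow P, \quad \neg P \;\;\vdash\;\; \neg H
\]
```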
Informal ‘fallacies’ sound like crushing intellectual blows in an argument, when in fact most of the time they are not. These are the tools of those who seek to win at all costs, even if upon an apparent technicality. An arguer who possesses genuine concern about the subject is not distracted by irrelevant or partially salient technicality.
Assailing facts can serve as a means of avoiding field investigation and attempting to torpedo a case artificially before it can be approached by the scientific method. This ‘fact verification’ is never the first step in science, as it is too prone to cherry picking, cherry sorting, strawman and scarecrow errors, existential bias, MiHoDeAL bias, taxonomy fallacy, etc. It is not that facts are not checked, but pursuing this step first, out of order, throws the door of method wide open to become simply an exercise in reactive dissonance. If you see a skeptic undertaking this as a first step, you are more than likely witness to a person who is executing the martial art of Denial. Had Carl written this step after witnessing the debates raging over Anthropogenic Global Warming, he might have thought twice about employing this step as a starting block in his Kit. Scottish philosopher Thomas Carlyle is purported to have said, “Conclusive facts are inseparable from inconclusive except by a head that already understands and knows.” Evidence should be able to be assembled into an elegantly predictive mechanism. This, more than anything, corroborates its ‘facts’ – not our cherry-picked expertise and agenda. To winnow out facts based on personal knowledge, or one at a time, stands as an exercise in denial, a non rectum agitur fallacy of manipulating the method of science in order to derive a desired outcome; from a head that perceives incorrectly that it ‘already understands and knows.’
Maleduction – an argument must be reduced before it can be approached by induction or deduction – failure to reduce an argument or verify that the next appropriate question under the scientific method is being indeed addressed, is a sign of pseudoscience at play. The non rectum agitur fallacy of attempting to proceed under or derive conclusions from science when the question being addressed is agenda driven, non-reductive or misleading in its formulation or sequencing.
If a ‘skeptic’ begins by assailing the facts, and seeks simply to ‘establish a chain of questionability’ around an idea as their first step of investigation of an issue, then be wary that you might be working with a fake skeptic. A well-known adage in philosophy goes like this: “People who have not made the effort to understand the evidence, always start by questioning the facts.”
- Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
There are two pitfalls which can occur under this context ambiguity. First, that the ‘debate’ is enacted among persons who have absolutely no expertise (or possess counter-trained agendas) on the subject at hand, other than at most a self-purported ‘skepticism’ (maybe they read The Baloney Detection Kit!); and second, that the process of Peer Review (by those who hold an equal or suitable expertise to provide constructive criticism of the argument or data) is not conducted at the beginning of the scientific method, other than to assist in hypothesis development. To provide Peer Review at the beginning of a scientific process stands simply as a means of squelching the inception of that process. Ideally peers are allies during hypothesis development (they should be eager to see the results), and objective ‘opponents’ during anonymous Peer Review. Such ‘review’ as expressed in the Kit can only serve to revolve around strawman and scarecrow representations of the topic at hand, since NO observation or development has been completed at this point. The only suitable inputs from peers which should be broached at this point in the process are ‘what is the first question which should be asked under the scientific method?’ and ‘what do I need in order to develop this possible contention into a testable hypothesis?’ Anything outside of this is simply another process of seeing how smart we are at enforcing the truth. Nothing perhaps elicits this principle better than Carl Sagan himself in The Dragons of Eden: “Those at too great a distance may, I am well aware, mistake ignorance for perspective.” Well, I am sure they will at least mistake skepticism for perspective.
Non Rectum Agitur Fallacy – a purposeful abrogation of the scientific method through corrupted method sequence or the framing and asking of the wrong, ill prepared, unit biased or invalid question, conducted as a pretense of executing the scientific method on the part of a biased participant. Applying a step of the scientific method, out of order – so as to artificially force a conclusion, such as providing ‘peer review’ on sponsored constructs and observations, rather than studies and claims, in an effort to kill research into those constructs and observations.
One key indicator useful in spotting a faking skeptic is: are they allies at the beginning of the scientific method? Helping the sponsors formulate the right question; eager to see the testing executed? Or are they simply detractors, hoping to stop testing before it could begin, pretending to issue ‘peer review,’ condemnation and requesting proof as the first step in the scientific method? Does ‘all points of view’ habitually end in their insistence on assuming conforming ideas are correct under ‘Occam’s Razor’ and contending that alternative ideas are therefore now foolish and should merit no research (are a pseudoscience)? These are the habits of a fake skeptic.
- Arguments from authority carry little weight — “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.
This is a tautology, which is moot in its true sense, but damaging if applied in error. If the only knowledgeable researchers in a field are the ones being denoted as the ‘authorities’ in this case, then this step is equivocally incorrect. If the experts are the active researchers on the topic, then this step might be viable. If however the ‘experts’ are simply skeptical peers, not involved in the research, then their input can simply act as a pseudo-expert resource (parem falsum fallacy) from which to enact denial and cynicism activities. The bottom line: if they are actively and skeptically researching the topic, consider their input with an air of neutrality. If they have no involvement in the research, disposition their input until time for peer review, after sufficient time and method wherein the case has had a fighting chance to stand on its legs and be heard. Time before the wolves of ‘rationality’ and parem falsum expertise attack it. Again, this step is in the wrong sequence in comparison to the scientific method.
I don’t want to hear a skeptic debating someone who possesses first-hand observation, necessity or sponsor science. This simply constitutes a person exercising the martial art of denial, put on display to instruct the rest of the world as to its fine art. Perhaps this is where the climate skeptics learned how to do it so well?
The Appeal to Skepticism – Ergo Sum Veritas Fallacy
1a. The contention, implication or inference that one’s own ideas or the ideas of others hold authoritative or evidence based veracity simply because their proponent has declared themselves to be a ‘skeptic.’
1b. The assumption, implication or inference that an organization bearing a form of title regarding skepticism immediately holds de facto unquestionable factual or ideological credibility over any other entity having conducted an equivalent level of research into a matter at hand.
1c. The assumption, implication or inference that an organization or individual bearing a form of title regarding skepticism, adheres to a higher level of professionalism, ethics or morality than does the general population.
1′ (strong). The assumption that because one or one’s organization is acting in the name of skepticism or science, that such a self claimed position affords that organization and/or its members exemption from defamation, business tampering, fraud, privacy, stalking, harassment and tortious interference laws.
If a skeptic, as their method of doing ‘background research’ on a topic or regarding an observation, simply goes out and reviews the available skeptic doctrine on the subject and those who made the observations, then comes back with a whole series of denial articles and one-liner quotes from celebrity skeptics, then regards implicitly that researchers are ‘authorities’ and celebrities are ‘experts’ – beware, this is not a skeptic.
- Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among “multiple working hypotheses,” has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.
This is incorrect. Before one should seek to explain something, one should first ask ‘what do I need in order to develop this construct into a testable hypothesis?’ and ‘what is my next question to be asked under the scientific method?’ This does NOT mean that you have to start immediately testing and considering every and all ‘hypotheses’ upon which every cynic sitting at the table insists. Besides, the mere mention of an explanation does not qualify it as a hypothesis, much more diligence is involved. You do not have to start by researching ALL the ways in which something could be explained, as this simply constitutes an exercise in futility, distraction, waste and above all – ego (see pseudo-theory).
This is not how real labs and scientific groups work – as in reality, effective teams construct an elegant hypothesis reduction critical path. Critical path and straightforward falsification alternatives should be matured first for testing. It is amazing how much you learn in this process. Things which save mountains of time as compared to the blind shotgun testing suggested here by Sagan. This is followed by readily measurable or predictively accurate alternatives, of those still remaining on the critical path. Frivolous or wild alternatives usually fall out through natural falsification during the reduction process on their own. There is no need to pay special attention to them until forced to do so by the evidence. Why did Sagan not know this, or at least express it in layman’s terms?
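A hypothesis reduction critical path of the kind described can be sketched as a simple prioritization: straightforward falsification alternatives first, cheapest tests first within each tier, and wild non-falsifiable alternatives deferred until evidence forces attention to them. The candidate explanations, falsifiability flags and test costs below are hypothetical placeholders:

```python
# Hypothetical candidate explanations, each carrying two illustrative
# attributes: whether a direct falsification test exists, and its cost.
candidates = [
    {"name": "wiring fault",     "falsifiable": True,  "test_cost": 2},
    {"name": "appliance defect", "falsifiable": True,  "test_cost": 1},
    {"name": "arson",            "falsifiable": False, "test_cost": 8},
    {"name": "exotic cause",     "falsifiable": False, "test_cost": 9},
]

def critical_path(cands):
    """Order candidates for testing: falsifiable alternatives first,
    cheapest tests first within each tier. Non-falsifiable 'wild'
    alternatives wait at the back until evidence demands them."""
    # False sorts before True, so `not falsifiable` puts the
    # falsifiable candidates at the front of the ordering.
    return sorted(cands, key=lambda c: (not c["falsifiable"], c["test_cost"]))

order = [c["name"] for c in critical_path(candidates)]
print(order)
```

The point of the ordering is economy of method: each cheap falsification either eliminates a candidate outright or teaches something that reshapes the remaining path, which is why blind shotgun testing of every conceivable explanation wastes so much effort.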
Additionally, one does not ‘spin’ a hypothesis; one develops a hypothesis. Spinning hypotheses, again, is NOT the first step in the scientific method. One cannot simply, willy-nilly, sling out plausible deniability constructs like a machine cranking out compositions of personal brilliance. This exercise constitutes not science, rather a smoke screen; the desire to obscure science. Hypotheses are the direct result of having completed the Observation, Necessity, Intelligence/Data Aggregation, Construct Formulation, Sponsorship Peer Input and finally Reduction steps of the scientific method. They are serious exercises in scientific methodical diligence. Again, this step in the Kit is wrong. It brings into question whether or not Carl Sagan actually ever did any real science, other than running celestial observations, writing books and articles, or doing TV shows, classes and student exams.
Negare Attentio Effect – the unconscious habituation of a person seeking publicity or attention in which they will gravitate more and more to stances of denial, skepticism and doubting inside issues of debate, as their principal method of communication or public contention. This condition is subconsciously reinforced because they are rewarded with immediate greater credence when tendering a position of doubt, find the effort or scripted method of arguing easier, enjoy shaming and demeaning people not similar to their own perception of self or presume that counter-claims don’t require any evidence, work or research.
If a skeptic blathers the first conforming explanation they can think of regarding a challenging idea or observation, and regards that as a ‘hypothesis,’ be very wary. If the skeptic insists that you must first research their explanation, even if it is impossible to approach by means of falsification, then drop them from making input, as they do not understand science; only their religious agenda.
- Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.
Now this is a salient and objective advisement on the part of Sagan. If you do not critique your own hypothesis, others will. This reminds me of the myriad of ‘perpetual motion’ machines I have reviewed and debunked over the years. In one instance an inventor brought a device into a convention I was attending. It had a battery hooked up to a flywheel covered in tin foil (no, this is not meant as implicitly pejorative), extracting ‘orgone’ energy from the ether, which was in turn charging a dead battery in series downline of the flywheel and charged battery. His contention was that 100% of the charged battery’s potential difference and ampere charge was transferred from the live battery to the dead battery, AND we received the benefit of the kinetic energy of the motion of the wheel during the charge transfer to boot. Energy from nowhere!! – and still a fully charged battery!! Well, energy from the ether, that is. Of course if this were true, then we needed to stop digging for coal immediately.
I asked him whether he had then taken the newly charged battery, swapped it with the newly depleted battery, and run the test again and again, to see if the duration of wheel motion was the same in each iteration of charge transfer. Whereupon he replied, ‘uh, no.’ Then we attempted this test. Sure enough, in each iteration the source battery depleted in about 55–65% of the time it did in the previous iteration. It was not a perpetual motion machine. He had been deceived by measuring only the potential (V) of the battery and not its ampere transfer charge (C = A·s). His presentation was in reality an appeal for Peer Input, prior to hypothesis development. The inventor had jumped the gun hoping to have a proof, before asking ‘what do I need to do in order to develop this construct into a hypothesis?’ This is what I helped him with – as an ally, not an opponent – I really wanted to see this apparatus matured into hypothesis. And I still admired the man for his objectivity, courage and spirit of inquiry. I respected him more than I do a fake skeptic. He was able to objectively examine, and be convinced by, the reduction evidence. False skeptics simply whip out their one-liner from the Baloney Detection Kit and walk off.
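The swap test lends itself to a simple worked model. Here is a minimal sketch, assuming a hypothetical 60% retention factor drawn from the observed 55–65% range; the function name and all values are mine, for illustration only:

```python
# Illustrative model of the battery swap test: if each charge transfer
# preserves only ~60% of the previous run's duration (an assumed figure),
# run times decay geometrically rather than persisting.

def run_durations(first_run_s: float, retention: float, iterations: int) -> list:
    """Duration of wheel motion for each successive battery swap."""
    durations = []
    t = first_run_s
    for _ in range(iterations):
        durations.append(t)
        t *= retention  # each transfer loses a fixed fraction of run time
    return durations

runs = run_durations(first_run_s=600.0, retention=0.60, iterations=5)

# A true perpetual motion machine would show constant durations. A lossy
# transfer instead converges: total run time is bounded by the geometric
# series limit first_run_s / (1 - retention) -- here 1500 seconds.
total_bound = 600.0 / (1.0 - 0.60)
```

Five iterations of measurement expose in minutes what a single voltage reading concealed; this is why the iteration step, not the demonstration, is the science.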
Reactive Dissonance (in business and legal domains, also called ‘malfeasance’) – the fallacious habitual mindset of a researcher, or doctrine of a group conducting research, wherein, when faced with a challenging observation, study or experiment outcome, they immediately set aside rational data collection, hypothesis reduction and scientific method protocols in favor of crafting a means to dismiss the observation.
To the faking skeptic there is only one answer: that which conforms with their religious view of reality, and that which protects their bandwagon agenda. Any critique of their hypothesis simply serves to place you in the lunatic camp. They will not assist a researcher in developing a hypothesis or asking the right questions; they do not care about the answer; they only seek to crush, mock, obfuscate and destroy. Perhaps that litmus test is the best baloney detection kit of them all.
- Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are the truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.
Define, constrain, then measure – not merely quantify. Quantification is the oldest means of deceiving in the book. There is an old saying that ‘numbers don’t lie.’ But in one of my firms we had a saying that ‘numerals don’t lie, but numbers lie like a sick dog.’ Every researcher eventually comes to realize that excessive reliance upon numbers can be used to deceive just as well as can deceptively vague arguments. Or as Mark Twain put it, ‘there are lies, damned lies, and statistics.’ But Sagan is correct here in that, if one understands the context of numerics and measures, and if one uses them to elicit and not to direct decision, then quantification can provide an enormous benefit to both the hypothesis development and hypothesis reduction processes. One must remember that models produce convergent, divergent and constraining results, very few of which can be used to effect a decision solely on the basis of that model. I have never once advised a corporation or lab to adopt or assume the results of an empirical model at face value. That would be foolishness. These are measures in the end, used to gauge magnitude and direction – perspective, not simply quantification for quantification’s sake. There is a stark distinction. Use numbers to add value and clarity, not to put on a show of objective diligence.
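The distinction between measuring and merely quantifying can be illustrated with two invented datasets (every number here is hypothetical, chosen for illustration): a lone summary quantification makes them identical, while measurement inside a defined constraint exposes the difference.

```python
import statistics

# Two invented process-yield samples: the single quantification (the mean)
# is identical, yet the underlying behavior is entirely different.
stable = [49.8, 50.1, 50.0, 49.9, 50.2]
erratic = [20.0, 80.0, 35.0, 65.0, 50.0]

mean_stable = statistics.mean(stable)    # 50.0
mean_erratic = statistics.mean(erratic)  # 50.0

# Define and constrain first (what is measured, over what range), then
# measure dispersion as well as central tendency:
spread_stable = statistics.stdev(stable)    # ~0.16
spread_erratic = statistics.stdev(erratic)  # ~23.7
```

The headline number ‘50.0’ tells the same story for both; the constrained measure tells you which process you would bet a plant on.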
Procrustean Bed – an arbitrary standard to which exact conformity is forced, in order to artificially drive a conclusion or adherence to a specific solution. The undesirable practice of tailoring data to fit its container or some other preconceived argument being promoted by a carrier of an agenda.
Three things which deceive. If numbers and recitations come only from the camp which serves a skeptic’s favored alternative, be very wary. If numbers are the proprietary property of a group making a contention, and they do not allow you to see where those numbers were derived, be very wary. If a skeptic tosses out a fabutistic which contends that science thinks certain things, and that they represent that scientific thought, yet does not appear to understand its context or method of origin, be very wary. If the ‘skeptic’ whips out a claim to scientific consensus, ask them where it was published (scientific journal) and by which of the three methods it was derived: association poll, meta-analysis study, or peer directed alternative research.
- If there’s a chain of argument, every link in the chain must work (including the premise) — not just most of them.
Every argument, even the simplest explanation, proceeds from an implicit or explicit chain of argument. The key is to ask ourselves: have we fairly recognized the implicit chain of assumptions and principles incumbent in our favored argument? A second principle resides in this – in Quality Control analytics and in the assembly and design of processes, one learns that daisy-chaining a stream of reliable processes together will simply serve to produce an unreliable process. Ten sure bets in a chain of risk dependency equals a sure loss. This renders EVERY chain of logic vulnerable to question (see Stacks of Provisional Knowledge). The wise application of this step involves experience in understanding how to neutralize chains of dependency through incremental hypothesis predictive success, along with the realization that even our foundational givens can be critiqued under such methodology. It is simply up to us to possess the courage to do so. “We have, as human beings, a storytelling problem. We’re a bit too quick to come up with explanations for things we don’t really have an explanation for.” Or so proclaimed Malcolm Gladwell in Blink: The Power of Thinking Without Thinking. There is not going to be a perfect alternative; a perfect story; a perfect explanation. Only social epistemologies are touted by their agenda bearers as being flawless. Be cautious of claims of perfect explanations, of demands that an explanatory basis be perfect, and of people who understand the chain of dependency and surreptitiously employ it in a process of methodical cynicism.
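The ‘ten sure bets’ arithmetic compounds exactly as serial reliability does in Quality Control. A short sketch, where the 90% per-link reliability is an illustrative assumption rather than a measured figure:

```python
# Serial dependency: an argument (or process) holds only if every link
# holds, so individual reliabilities multiply down the chain.

def chain_reliability(link_reliability: float, links: int) -> float:
    """Probability that every link in a serial dependency chain holds."""
    return link_reliability ** links

p = chain_reliability(0.90, 10)
# 0.90 ** 10 is roughly 0.349 -- a chain of ten 90%-sure premises is
# more likely to fail somewhere than to hold end to end.
```

This is why a long argument assembled from individually ‘safe’ assumptions still deserves scrutiny, and why incremental predictive success, which tests the chain as a whole, is so valuable.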
Self Confirming Process – a process which is constructed to only find the answer which was presumed before its formulation. A lexicon, set of assumptions or data, procedure, or process of logical calculus which can only serve to confirm a presupposed answer it was designed to find in the first place. A process which bears no quality control or review, or which does not contain a method through which it can reasonably determine its own conclusion to be in question or error.
If your ‘skeptic’ presents arguments with all the loose ends accounted for and all the questions wrapped up, if they fail to express ‘I don’t know’ on a subject, if no outlying data exists in their argument, if they are quick-tempered or hate their opponents easily, and if they fail to cite a history of personal mistakes and what they learned from them, then be very wary that you are dealing with a social epistemologist – one bearing an agenda.
- Occam’s Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.
This contention is simply, unequivocally, incorrect. The diligent researcher must be constantly circumspect regarding both the existential and transactional variants of the ‘Occam’s Razor’ fallacy. This is the ridiculous contention of a person who has only argued and enforced simple science, and has never actually reduced hypotheses inside a scientific framework. A more exhaustive explanation of why applying ‘Occam’s Razor’ (much less spelling it incorrectly) at the END of a pretend ‘scientific method’ is an exercise in deception can be found here: Ethical Skepticism Part 5 – The Real Ockham’s Razor.
Occam’s Razor Fallacy – abuse of Ockham’s Razor (and its misspelling) in order to enact a process of sciencey-looking ignorance and to impose a favored idea. Can exist in four forms: transactional, existential, observational and utility blindness.
Transactional Occam’s Razor Fallacy (Appeal to Ignorance) – the false contention that a challenging construct, observation or paradigm must immediately be ‘explained.’ Sidestepping of the data aggregation, question development, intelligence and testing/replication steps of the scientific method and forcing a skip right to its artificially conclusive end (final peer review by ‘Occam’s Razor’).
Existential Occam’s Razor Fallacy (Appeal to Authority) – the false contention that the simplest or most probable explanation tends to be the scientifically correct one. Suffers from the weakness that myriad and complex underpinning assumptions, based upon scant predictive/suggestive study, provisional knowledge or Popper-insufficient science, result in the condition of tendering the appearance of ‘simplicity.’
Observational Occam’s Razor Fallacy (Exclusion Bias) – through insisting that observations and data be falsely addressed as ‘claims’ needing immediate explanation, and through rejecting such a ‘claim’ (observation) based upon the idea that it introduces plurality (it is not simple), one effectively ensures that no observations will ever be recognized which serve to frame and reduce a competing alternative. One will in effect perpetually prove only what they have assumed as true, regardless of the idea’s inherent risk. No competing idea can ever be formulated because outlier data and observations are continuously discarded immediately, one at a time by means of being deemed ‘extraordinary claims’.
Utility Blindness – when simplicity or parsimony are incorrectly applied as an excuse to resist the development of a new scientific explanatory model, data or challenging observation set, when indeed the participant refuses to consider or examine the explanatory utility of any such new model under consideration.
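The observational form of the fallacy can be sketched numerically. In this invented example (the baseline, threshold and readings are all hypothetical), dismissing each deviant observation one at a time guarantees that the cluster those observations form is never seen:

```python
# Hypothetical readings around an assumed baseline of 10.0 units.
baseline = 10.0
observations = [10.1, 9.9, 13.0, 10.0, 13.2, 12.9, 10.2, 13.1]

# One-at-a-time rejection: each deviant reading is dismissed alone as an
# 'extraordinary claim,' so no competing signal ever accumulates.
kept = [x for x in observations if abs(x - baseline) < 1.0]

# Aggregating the rejects first instead: they cluster tightly, framing a
# competing alternative worth formulating and reducing.
rejected = [x for x in observations if abs(x - baseline) >= 1.0]
cluster_mean = sum(rejected) / len(rejected)  # 13.05
```

Discarded singly, each outlier looks like noise; aggregated, the same four readings cluster within a fraction of a unit of one another – precisely the intelligence the exclusion habit forbids from ever forming.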
If a skeptic talks with conclusive authority on an issue and says that their argument is based upon ‘Occam’s Razor,’ ignore them because they are clueless.
- Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much. Consider the grand idea that our Universe and everything in it is just an elementary particle — an electron, say — in a much bigger Cosmos. But if we can never acquire information from outside our Universe, is not the idea incapable of disproof? You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.
Testable is not the same thing as falsifiable. This advice fits within the ethical science necessity of asking ‘what do I need to do in order to develop this construct into a hypothesis?’ Falsification is of extraordinary importance, of course, in avoiding a Popper Error. There must be recognition, however, that many of our strongest hypotheses and accepted sets of science reside squarely upon simply a series of predictive and associative studies. Many sets of accepted science do not intrinsically lend themselves to falsification. Our next suitable action is to ask ‘What is the right next question to ask under the scientific method?’ We only hope that it lends itself to a falsification-based answer; but this is less often the case. The question in reality many times instead becomes: can the hypothesis we are considering make successful predictions which can be confirmed by further measures under the scientific method? This principle many times stands in lieu of direct falsification-based reductions. If I am lucky, however, predictive or associative testing might allow the team to falsify a non-critical path alternative down the line, so that we do not have to focus on it later. A second fallacy danger resides in pursuing falsification for an alternative, yet refusing to pursue falsification for a null hypothesis when it can readily be had. This is a very common form of official pseudoscience. The idea that mind ≡ brain can be tested for falsification. But if we spend all our time and science resources seeking to falsify the antithetical alternative hypotheses instead, or promoting our favored religious null hypothesis with only supporting research, we are guilty of applying promotification and false parsimony. This is pseudoscience.
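The role of predictive confirmation standing in lieu of direct falsification can be sketched as follows. The functions, tolerance and noise model here are all hypothetical assumptions for illustration, not a real reduction:

```python
import random

random.seed(7)  # reproducible illustration

def predicted_value(x: float) -> float:
    """The hypothesis's risky, quantitative prediction (assumed form)."""
    return 2.0 * x + 1.0

def observed_value(x: float) -> float:
    """Simulated measurement: an assumed true process plus noise."""
    return 2.0 * x + 1.0 + random.gauss(0.0, 0.1)

tolerance = 0.5
trials = [float(x) for x in range(20)]
confirmed = sum(
    abs(predicted_value(x) - observed_value(x)) <= tolerance for x in trials
)
# Each confirmation within tolerance raises provisional confidence in the
# hypothesis, yet no single trial constitutes a strict falsification.
```

A run of successful predictions strengthens a hypothesis incrementally; it never ‘proves’ it. The asymmetry matters: one clean falsifying observation would outweigh the whole tally, which is why refusing to seek that observation for a favored null is the pseudoscience described above.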
Consilience Evasion – a refusal to consider scientifically multiple sources of evidence which are in congruence or agreement, focusing instead on targeting a single item from that group of individual sources of evidence because single items appear weaker when addressed alone. Also called a ‘silly con.’
If your ‘skeptic’ cannot differentiate between falsification, predictive and associative tests, nor when and how to employ them relative to a null, conforming or alternative hypothesis, be very wary. If they do not know how hypothesis testing is sequenced, think that the formulation of a hypothesis is a mere matter of debate, cannot cite a reduction hierarchy and critical path, as well as cite some key examples, then they are pretending to know science.
As much as I loved his works Cosmos, The Demon-Haunted World and The Cosmic Connection, sometimes we should be hesitant to overly laud and proclaim the work of celebrities like Carl Sagan, despite their notoriety, as that of authorities. Circus Partis is a false appeal to an authority who is simply ‘famous for being famous;’ and in the case of Carl Sagan, while the fame is warranted, the context of authority simply may not be.
¹ Sagan, Carl; The Demon-Haunted World: Science as a Candle in the Dark; Ballantine Books, Jul 06, 2011; ISBN 9780307801043.