Why Sagan is Wrong – The Fake Skeptic Detection Kit

There are two principal problems with Carl Sagan’s Baloney Detection Kit. First, it is misapplied by false skeptics, who use it to enact denial and ignorance and to apply ‘skepticism’ outside a context of neutrality and sincere investigation. Second, the Baloney Detection Kit is flat out incorrect. Destructively incorrect. It presents an approach to citizen science which is an abrogation of its correct methodological order, employs explanations which can be equivocated to justify abuse, and contains principles which are patently wrong under the scientific method. It is the work of an academic who spent a career in celebrity promotion, publishing articles and making arguments targeting a nascent bunk-consuming public; not one spent in the bowels of tough, ends-oriented pluralistic research and hypothesis reduction. Ironically, perhaps its best use is in developing a framework inside of which one can observe and detect a fake skeptic.

I am a Carl Sagan fan, don’t get me wrong. But not everything he contended is correct. In one particular instance, his error was in providing the general public a mechanism which would never pass the sniff-test of Ethical Skepticism. Carl Sagan published his famous “Baloney Detection Kit” in 1995 as a mechanism to convey to the non-scientific general population the permission and means to discriminate the topical veracity of various threatening claims being foisted inside a growing and increasingly uncontrollable media environment. With the proliferation of cable TV, magazines, controversial BBS (eventually internet) sites, and book stores selling New Age materials, it became necessary (in the minds of Social Skeptics) to outfit the population with an artifice which could be used as a means of controlling information; information which used to be squelched simply by its denial of access to the media.

Carl habitually conveyed false depictions of what indeed is skepticism; conflating it in the quote below with cynicism and completely missing the fact that skepticism involves precisely an active, researching and open mind. He pretends that possessing an open mind is somehow the opposite of skepticism, and involves giving all ideas ‘equal validity.’ In his mad rush to pummel this strawman of what a researching open mind is, he attempts to foist the notion below that the purpose of skepticism is therefore the alternative: to force most-likely conclusions in lieu of scientific research (see Garbage Skepticism).

This false dilemma (a bifurcation fallacy with a call to choose sides) about what skepticism entails has resulted in a miseducation of the public as to the definition and ethic of skepticism – one which affords the cynic a comfortable hiding place inside of science. Carl expounds on this bifurcation inside his intro to the Baloney Detection Kit:

“If you are only skeptical, then no new ideas make it through to you. You never learn anything new. You become a crotchety old person convinced that nonsense is ruling the world. But every now and then, a new idea turns out to be on the mark, valid and wonderful. If you are too much in the habit of being skeptical about everything, you are going to miss or resent it, and you will be standing in the way of understanding and progress.”

This is not skepticism at all. Skepticism is not a process of filtering out ideas that come to you, delivered on a silver platter. That is methodical cynicism (see The Tower of Wrong – The Art of Professional Lying). I cringe that a person who made it to a PhD level could carry such an errant definition of skepticism.  How did he prosecute his thesis, qualify necessity, reduce alternatives – prep guidance for replication? Oh, yes – he was an astrophysicist.  Abductive research rules in such subjects. But it is obvious to me that Carl never ran a research lab, nor chased down a new discovery (no, mapping things that move in the sky is not the same thing), nor solved a perplexing dilemma in a politically charged or profit-demand environment. He would have learned that his chief battle would be against these very people; zombies marching along blindly identifying as ‘skeptics’ – as if that tendered them some kind of appeal to authority in comparison to actual field observation and research.

“But if you are open to the point of gullibility and have not an ounce of skeptical sense in you, then you cannot distinguish the useful ideas from the worthless ones. If all ideas have equal validity then you are lost, because then, it seems to me, no ideas have any validity at all.”

Neither is this skepticism. Skepticism is not about being credulously receptive to new ideas. “Don’t be so open minded that your brains fall out.” My god, if you whip off this quip, not only have your brains already fallen out, but to a lab researcher it is apparent that you do not have the first inkling of what skepticism is. Skepticism is based upon the concept of epoché – a suspended neutral disposition – and is only exercised through actual science. This was a key tenet of science foundation which Sagan did not grasp. I do not ignore fringe subjects because they are ‘Woo’ – I hold them as incrementally irrelevant, simply because they have not established necessity under the scientific method. This is how a real lab works. This discipline is how my materials team made one of its key discoveries. We were not fake skeptics, thankfully. Fake skeptics never get this – that is why it is so easy to spot them.

“Some ideas are better than others. The machinery for distinguishing them is an essential tool in dealing with the world. And it is precisely the mix of these two modes of thought that is central to the success of science.”

The purpose of skepticism is NOT to decide disposition of ideas. My god, can we get any more idiotic here? Philosophy can never – let me say it again, never – step in and overrule science on a matter of epistemic merit or neutrality. This is not philosophy’s purpose, nor skepticism’s purpose as a part of philosophy.

But I digress, as these are only some of the pseudo-scientific pop-skeptic ideas which Sagan foisted on the general public. Sadly, not everyone has run a research lab, nor made key discoveries – and watched the methods which served to produce such things. The ‘Kit’, below, continues inside this cynical vein we just touched on, and begins with the errant step of ‘assailing the facts.’ People (cynics) who have not made the effort to understand the evidence, always start by questioning the facts. This is a well known tactic of pseudoscience, conspiracy spinning and methodical cynicism. It was the first step undertaken by the Watergate defense team, the BP Gulf oil disaster response team, Global Warming deniers, Moon Landing hoax claimants, JFK Conspiracy Theorists, ad absurdum. But amazingly, if the tactic produces the correct answer, it is suddenly sound as the Pound. Imagine that. The fallacious basis of the Kit continues from there, throughout. The Baloney Detection Kit has been inexpertly touted as a means of constructing an argument, or understanding reality, or recognizing a fallacious or fraudulent argument. A seasoned philosopher understands that the Baloney Detection Kit does none of these things, and as well does not claim to do any of these things. This set of hyperbole is typically spouted inside nonsense published by numerous dilettante voices seeking to appear scientific through recitation of false authority.

Errant Application of The Baloney Detection Kit: Burning House Science

The Baloney Detection Kit comprises a series of guidelines to help improve the clarity of deliberation inside a topic where singularity of ‘truth’ is being brought into question. In other words, a proponent is proposing a suggestion of plurality (see The Real Ockham’s Razor), a new explanation of data surrounding a phenomenon, or new data eliciting a new phenomenon. In such an instance, in the case of an increasing unknown, and especially in asymmetric, non-linear and partially understood systems, one has to be careful to distinguish conditions of Plurality from conditions wherein one argues Cynicism and Denial. Below are some examples of these three conditions, elicited through the ‘burning house’ construct.

Gnosis (reveals cynicism)

Gnosis is the body of existing or ascertainable knowledge. If a person asserts to you that your house is on fire, all you need to do with regard to science is go there and observe the house for yourself. In addition, you should heed what the person says, because ignoring their contention could be a very costly mistake. It does not matter who makes the contention, for the most part. Under no circumstance is there a necessity for deliberating the epistemology and the most likely explanation as to why the person made this contention: first, because the answer is readily detectable by direct observation, and second, because the cost of ignoring the challenge to observe could potentially be very high.

This is NOT an application of skepticism, rather an application of Gnosis. Any time a person can go check out a contention via means of direct observation, or faces an urgent need to investigate, this is what they should indeed do. To apply the Baloney Detection Kit under this condition is not skepticism, rather cynicism.

Plurality (reveals ignorance or valuable dissent)

If a person asserts that dimmable light controls in your house will start a fire and burn your house down, this is a debatable condition which might hinge on the veracity of the claim data and any potential agenda which is being sold by the claimant. Data may not be readily available on the subject and may be cryptic in its collection. To ignore this however, in the scheme of things, would not be wise. (Plurality simply means that more than one hypothesis is being investigated from necessity – dimmable light controls cause fires more frequently than standard controls, or dimmable light controls do not cause fires more than standard controls do – and not evaluated by one single p-value test or question; see the sketch at the end of this section.)

This is an application of skepticism, wherein one suspends judgement until sufficient opportunity to investigate and gather direct evidence, expert opinions and data on the topic. In this circumstance, the Baloney Detection Kit is applied correctly ONLY under

  1. an assumption of suspended judgement on the part of the skeptic, and
  2. a sincere effort being placed into the contention’s investigation.

Please note: if a skeptic adheres to a perspective of plurality, is willing to look at multiple sides of an issue, follows edicts 1 & 2 above, and after that diligent process objectively dissents, then heed their input, as it may well be of enormous value. If however a researcher does this, and skeptics still hound him or her, appearing unable to discern an honest researcher – take that as a warning.

Refusal to investigate, blocking investigation, pretending to investigate (Novella shuffle or Nickell plating) or applying ‘skepticism’ outside of conditions 1 and 2 above, is not skepticism, rather the act of ignorance.
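
For illustration only – a minimal sketch (all installation and fire figures are hypothetical) of the dimmable-control plurality condition referenced above: both hypotheses are carried, and judgement is suspended until the evidence actually separates them.

```python
# A minimal sketch of plurality: carry BOTH hypotheses and suspend judgement
# until the evidence separates them. All figures are hypothetical.
from math import sqrt

def rate_interval(fires, installs, z=1.96):
    """Approximate 95% interval for a per-install fire rate (Wald)."""
    p = fires / installs
    se = sqrt(p * (1 - p) / installs)
    return (p - z * se, p + z * se)

dimmable = rate_interval(fires=12, installs=80_000)   # hypothetical field data
standard = rate_interval(fires=9, installs=100_000)   # hypothetical field data

if dimmable[0] > standard[1]:
    print("Evidence separates: dimmable controls show the higher fire rate.")
elif standard[0] > dimmable[1]:
    print("Evidence separates: standard controls show the higher fire rate.")
else:
    print("Intervals overlap: suspend judgement; keep investigating both.")
```

With these particular figures the intervals overlap – the ethical disposition is epoché, continued investigation, not a verdict from a single test.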

Denial (is a Martial Art)

If a person asserts that they have firsthand evidence that your wife is plotting to burn your house down because you have not paid child support, then because of the emotional entrenchment you may choose to deny that this could ever happen. One might go through extensive cognitive dismissal exercises in such a case in order to remove the idea from being a burden.

This is an application wherein one executes a mental martial art employed to remove fear from the mind. To apply the Baloney Detection Kit under this condition is not skepticism, rather the martial art of defensive denial.

The first principal error in utilization of the Baloney Detection Kit is that dilettante skeptics apply the Kit to support positions and conclusions in both Gnosis/Cynicism and Denial situations. Second, they habitually refuse to apply it inside of Plurality under an assumption of suspended judgement and sincere investigation. All three of these scenarios of application are invalid, yet to my best perception they constitute the vast majority of instances in which I have seen this Kit plied.

When Skepticism is Not Sincere: Humping the Elephant

A second problem is that Sagan’s Baloney Detection Kit itself is wrong. The elements are sequenced in the wrong order, the elements themselves are expressed in equivocal fashion, some of the elements are incorrect – abrogating the scientific method – and all of the elements are written in a fashion which at the least favors, implies or promotes Denial, Ignorance of Plurality and Cynicism.

The question of Ethical Skepticism is not one of skillfully dismissing arguments and data in order to protect the understanding of the truth, or even what we perceive to be the ‘most likely truth.’ That constitutes simply an exercise in convincing ourselves how smart we are, and how well we understand everything around us. The real scientific question is: can one develop an idempotent method by which one can improve understanding without tampering with the data and arguments unnecessarily? In ethical science, one seeks to give competing explanations a fighting chance until falsified on their own through accrued verity – and NOT by means of how clever we are.

Inside the provision of The Baloney Detection Kit, Carl has given the dilettante practitioner a black belt method in defending their minds; the martial art of denial – false skepticism employed to protect the assumptions they were given in their youth, against ideas of which they are terrified or to promote special concealed religious and social agendas.

After all, for many skeptics, indeed the agenda is the most important thing on their mind. Or in the case of the numptant, the focus is really and finally – never about the topic – it is fixated solely and squarely upon them, their superior ability to be correct at all times. The numpty is simply humping the elephant for his own gratification, and bears no concern for mankind, suffering nor our knowledge development process. Humpty Numpty.

Numptured/Numptant/Numpty

/philosophy : pseudoscience : neurosis : self-obsession/ : a person who is educated or intelligent enough to execute a method, memorize a list of key phrases/retorts or understand some scientific reasoning, yet is gullible or lacking in circumspection to the point where they are unable to understand the applicable deeper meaning/science, the harm they cause, or their role in being manipulated inside propaganda. A numptant, or ‘numpty,’ can be discerned through the number of subjects about which they like to argue. This indicates a clear preference, not for any compassion or concern regarding a particular subject, but rather for the superior nature of their own thinking, argument, adherence to rationality and compliance inside any topic in which they can demonstrate such. Science, or the pretense thereof, is a handy shield behind which to exercise such a neurosis.

The problem is that Social Skeptics, as implied in the cartoon to the right, always bear an underlying agenda – an agenda which can be protected and promoted through the employment of false method. A method crafted towards selfish, unclear, non-deontological and non-value providing ends. They are inevitably ‘humping the elephant’ for their own surreptitious benefit, as it were, and not really attempting to improve overall understanding.

So without further ado, here are the points upon which Mr. Sagan, seeking to communicate a methodology of thought control to the general population, was indeed incorrect.

Errors in The Baloney Detection Kit – The Fake Skeptic Detection Kit

There is nothing inherently wrong with fact checking, the consideration of multiple hypotheses, debate from all sides and the application of Ockham’s Razor. The key is when, how and why you employ these principles. As with a weapon, they can be abused to effect a process which is harmful, as well as one which resolves or reduces complexity or the unknown. The terminology employed is useless in meaning until applied into a context – one which promotes or obfuscates science. The Ethical Skeptic is not swayed by impressive-sounding sciencey principles so much as by observing a person who demonstrates the ability to employ them skillfully and ethically. And as is the case with a weapon, one can spot an amateur on the firing line very quickly by the way in which they handle their weapon. Just because one holds it and fires it does not mean they know what they are doing. Below, you will observe in The Baloney Detection Kit a person who is firing a weapon – but who bears none of the earmarks of those who are skilled at its mastery.¹ For a detailed checklist of fake skeptic traits, check this out: How to Spot a Fake Skeptic.

  • Wherever possible there must be independent confirmation of the “facts.”

The first step of the scientific method is to make observations, not assail purported ‘facts.’ This first advisement is 100% incorrect under the scientific method. The first aspect of an argument to examine is its soundness, predicate and logical calculus – not its ‘facts.’ This is why fake skeptics scream so often about ‘facts,’ ‘evidence’ and (informal) ‘fallacies’:

  1. Facts constitute a relatively weak form of inference as compared to soundness, predicate and logical deduction; offering a playground of slack and deception/diversion in the process of boasting about argument strength or lack thereof, and

  2. Most faking skeptics do not grasp principles of soundness, predicate and logical calculus, nor the role of inductive inference in the first place. ‘Facts’ are the first rung on the hierarchy which they possess the mental bandwidth to understand and debate.

A deductive falsification finishes its argument at the Soundness and Formal Theory levels of strength assessment. It is conclusive regardless of circumstantial informal issues. These are rendered moot precisely because falsification has been attained. Faking skeptics seek to distract from the core modus ponens of a falsification argument by pulling it down into the mud of circumstantial ‘facts’ instead; relying upon the reality that most people cannot discern falsification from inference.

Informal ‘fallacies’ sound like crushing intellectual blows in an argument, when in fact most of the time they are not. These are the tools of those who seek to win at all costs, even if upon an apparent technicality. An arguer who possesses genuine concern about the subject is not distracted by irrelevant or partially salient technicality.

Assailing facts can serve as a means of avoiding field investigation and attempting to torpedo a case artificially before it can be approached by the scientific method. This ‘fact verification’ is never the first step in science, as it is too prone to cherry picking, cherry sorting, strawman and scarecrow errors, existential bias, MiHoDeAL bias, or taxonomy fallacy, etc. It is not that facts are not checked, but that pursuing this step first, out of order, throws the door of method wide open to become simply an exercise in reactive dissonance. If you see a skeptic undertaking this as a first step, you are more than likely witness to a person who is executing the martial art of Denial. Had Carl written this step after witnessing the debates raging over Anthropogenic Global Warming, he might have thought twice about employing this step as a starting block in his Kit. Scottish philosopher Thomas Carlyle is purported to have said “Conclusive facts are inseparable from inconclusive except by a head that already understands and knows.” Evidence should be able to be assembled into an elegantly predictive mechanism. This, more than anything, corroborates its ‘facts’ – not our cherry picked expertise and agenda. To winnow out facts based on personal knowledge, or one at a time, stands as an exercise in denial, a non rectum agitur fallacy of manipulating the method of science in order to derive a desired outcome; from a head that perceives incorrectly that it ‘already understands and knows.’

Maleduction – an argument must be reduced before it can be approached by induction or deduction. Failure to reduce an argument, or to verify that the next appropriate question under the scientific method is indeed being addressed, is a sign of pseudoscience at play. The non rectum agitur fallacy of attempting to proceed under or derive conclusions from science when the question being addressed is agenda driven, non-reductive or misleading in its formulation or sequencing.

If a ‘skeptic’ begins by assailing the facts, and seeks simply to ‘establish a chain of questionability’ around an idea as their first step of investigation of an issue, then be wary that you might be working with a fake skeptic. A well known adage in philosophy goes like this: “People who have not made the effort to understand the evidence, always start by questioning the facts.”

  • Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.

There are two pitfalls which can occur under this context ambiguity. First, that the ‘debate’ is enacted among persons who have absolutely no expertise on the subject at hand at all (or possess counter-trained agendas), other than at most a self-purported ‘skepticism’ (maybe they read The Baloney Detection Kit!); and second, that the process of Peer Review (review by those who hold an equal or suitable expertise to provide constructive criticism of the argument or data) is not conducted at the beginning of the scientific method, other than to assist in hypothesis development. To provide Peer Review at the beginning of a scientific process stands simply as a means of squelching the inception of that process. Ideally, peers are allies during hypothesis development (they should be eager to see the results), and objective ‘opponents’ during anonymous Peer Review. Such ‘review’ as expressed in the Kit can only serve to revolve around strawman and scarecrow representations of the topic at hand, since NO observation or development has been completed at this point. The only suitable inputs from peers which should be broached at this point in the process are ‘what is the first question which should be asked under the scientific method?’ and ‘what do I need in order to develop this possible contention into a testable hypothesis?’ Anything outside of this is simply another process of seeing how smart we are at enforcing the truth. Nothing perhaps elicits this principle better than Carl Sagan himself in The Dragons of Eden: “Those at too great a distance may, I am well aware, mistake ignorance for perspective.” Well, I am sure they will at least mistake skepticism for perspective.

Non Rectum Agitur Fallacy – a purposeful abrogation of the scientific method through corrupted method sequence or the framing and asking of the wrong, ill prepared, unit biased or invalid question, conducted as a pretense of executing the scientific method on the part of a biased participant. Applying a step of the scientific method, out of order – so as to artificially force a conclusion, such as providing ‘peer review’ on sponsored constructs and observations, rather than studies and claims, in an effort to kill research into those constructs and observations.

One key indicator useful in spotting a faking skeptic is: are they allies at the beginning of the scientific method? Helping the sponsors formulate the right question; eager to see the testing executed? Or are they simply detractors, hoping to stop testing before it could begin, pretending to issue ‘peer review,’ condemnation and requesting proof as the first step in the scientific method? Does ‘all points of view’ habitually end in their insistence on assuming conforming ideas are correct under ‘Occam’s Razor’ and contending that alternative ideas are therefore now foolish and should merit no research (are a pseudoscience)? These are the habits of a fake skeptic.

  • Arguments from authority carry little weight — “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.

This is a tautology, which is moot in its true sense, but damaging if applied in error. If the only knowledgeable researchers in a field are the ones being denoted as the ‘authorities’ in this case, then this step is equivocally incorrect. If the experts are the active researchers on the topic, then this step might be viable. If however the ‘experts’ are simply skeptical peers, not involved in the research, then their input can act as a pseudo-expert resource (parem falsum fallacy) from which to enact denial and cynicism activities. The bottom line: if they are actively and skeptically researching the topic, consider their input with an air of neutrality. If they have no involvement in the research, disposition their input until time for peer review, after sufficient time and method wherein the case has had a fighting chance to stand on its legs and be heard – time before the wolves of ‘rationality’ and parem falsum expertise attack it. Again, this step is in the wrong sequence in comparison to the scientific method.

I don’t want to hear a skeptic debating someone who possesses first hand observation, necessity or sponsor science. This simply constitutes a person exercising the martial art of denial, put on display to instruct the rest of the world as to its fine art. Perhaps this is where the climate skeptics learned how to do it so well?

The Appeal to Skepticism – Ergo Sum Veritas Fallacy

1a.  The contention, implication or inference that one’s own ideas or the ideas of others hold authoritative or evidence based veracity simply because their proponent has declared themselves to be a ‘skeptic.’

1b.  The assumption, implication or inference that an organization bearing a form of title regarding skepticism immediately holds de facto unquestionable factual or ideological credibility over any other entity having conducted an equivalent level of research into a matter at hand.

1c.  The assumption, implication or inference that an organization or individual bearing a form of title regarding skepticism, adheres to a higher level of professionalism, ethics or morality than does the general population.

1′ (strong).  The assumption that because one or one’s organization is acting in the name of skepticism or science, that such a self claimed position affords that organization and/or its members exemption from defamation, business tampering, fraud, privacy, stalking, harassment and tortious interference laws.

If a skeptic, as their method of doing ‘background research’ on a topic or regarding an observation, simply goes out and reviews the available skeptic doctrine on the subject and those who made the observations, then comes back with a whole series of denial articles and one-liner quotes from celebrity skeptics, then regards implicitly that researchers are ‘authorities’ and celebrities are ‘experts’ – beware, this is not a skeptic.

  • Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among “multiple working hypotheses,” has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.

This is incorrect. Before one should seek to explain something, one should first ask ‘what do I need in order to develop this construct into a testable hypothesis?’ and ‘what is my next question to be asked under the scientific method?’ This does NOT mean that you have to start immediately testing and considering each and every ‘hypothesis’ upon which every cynic sitting at the table insists. Besides, the mere mention of an explanation does not qualify it as a hypothesis; much more diligence is involved. You do not have to start by researching ALL the ways in which something could be explained, as this simply constitutes an exercise in futility, distraction, waste and above all – ego (see pseudo-theory).

This is not how real labs and scientific groups work – in reality, effective teams construct an elegant hypothesis reduction critical path. Critical path and straightforward falsification alternatives should be matured first for testing. It is amazing how much you learn in this process – things which save mountains of time as compared to the blind shotgun testing suggested here by Sagan. This is followed by readily measurable or predictively accurate alternatives, of those still remaining on the critical path. Frivolous or wild alternatives usually fall out through natural falsification during the reduction process on their own. There is no need to pay special attention to them until forced to do so by the evidence. Why did Sagan not know this, or at least express it in layman’s terms?
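
For illustration only, a minimal sketch of what ordering such a reduction critical path might look like; every hypothesis name and cost figure here is a hypothetical stand-in:

```python
# A hypothetical sketch of ordering a hypothesis reduction critical path:
# straightforward falsification alternatives mature first, cheapest first;
# wild alternatives wait until the evidence forces attention to them.

candidates = [
    {"name": "H1: reagent contamination", "falsifiable": True,  "cost": 2},
    {"name": "H2: sensor drift",          "falsifiable": True,  "cost": 1},
    {"name": "H3: novel phase state",     "falsifiable": False, "cost": 8},
    {"name": "H4: instrument artifact",   "falsifiable": True,  "cost": 5},
]

# Falsifiable alternatives first, then by test cost - NOT by popular appeal.
critical_path = sorted(candidates, key=lambda h: (not h["falsifiable"], h["cost"]))

for step, h in enumerate(critical_path, 1):
    print(f"test {step}: {h['name']}")
```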

Additionally, one does not ‘spin’ a hypothesis, one develops a hypothesis. Spinning hypotheses, again, is NOT the first step in the scientific method. One cannot simply willy-nilly sling out plausible deniability constructs like a machine cranking out compositions of personal brilliance. This exercise constitutes not science, rather a smoke screen; the desire to obscure science. Hypotheses are the direct result of having completed the Observation, Necessity, Intelligence/Data Aggregation, Construct Formulation, Sponsorship Peer Input and finally Reduction steps of the scientific method. They are serious exercises in scientific methodical diligence. Again, this step in the Kit is wrong. It brings into question whether or not Carl Sagan actually ever did any real science, other than running celestial observations, writing books and articles, doing TV shows, classes or student exams.

Negare Attentio Effect – the unconscious habituation of a person seeking publicity or attention in which they will gravitate more and more to stances of denial, skepticism and doubting inside issues of debate, as their principal method of communication or public contention. This condition is subconsciously reinforced because they are rewarded with immediate greater credence when tendering a position of doubt, find the effort or scripted method of arguing easier, enjoy shaming and demeaning people not similar to their own perception of self or presume that counter-claims don’t require any evidence, work or research.

If a skeptic blathers the first conforming explanation they can think of regarding a challenging idea or observation, and regards that as a ‘hypothesis,’ be very wary. If the skeptic insists that you must first research their explanation, even if it is impossible to approach by means of falsification, then drop them from making input, as they do not understand science; only their religious agenda.

  • Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.

Now this is a salient and objective advisement on the part of Sagan. If you do not critique your own hypothesis, others will. This reminds me of the myriad of ‘perpetual motion’ machines I have reviewed and debunked over the years. In one instance an inventor brought a device into a convention I was attending. It had a battery hooked up to a flywheel covered in tin foil (no, this is not meant as implicitly pejorative), extracting ‘orgone’ energy from the ether, which was in turn charging a dead battery in series downline of the flywheel and charged battery. His contention was that 100% of the charged battery’s potential difference and ampere charge was transferred from the live battery to the dead battery, AND we received the benefit of the kinetic energy of the motion of the wheel during the charge transfer to boot. Energy from nowhere!! – and still a fully charged battery!! Well, energy from the ether, that is. Of course if this were true then we needed to stop digging for coal immediately.

I asked him if he then took the newly charged battery and swapped it with the newly depleted battery and ran the test again and again, to see if the duration of wheel motion was the same in each iteration of charge transfer. Whereupon he replied ‘uh, no.’ Then we attempted this test. Sure enough, in each iteration the source battery depleted in about 55–65% of the time it did in the previous iteration. It was not a perpetual motion machine. He was deceived by his only measuring the potential (V) of the battery and not its ampere transfer charge (C = A·s). His presentation was in reality an appeal for Peer Input, prior to hypothesis development. The inventor had jumped the gun, hoping to have a proof before asking ‘what do I need to do in order to develop this construct into a hypothesis?’ This is what I helped him with – as an ally, not an opponent – I really wanted to see this apparatus matured into hypothesis. And I still admired the man for his objectivity, courage and spirit of inquiry. I respected him more than I do a fake skeptic. He was able to objectively look at, and be convinced by, the reduction evidence. False skeptics simply whip out their one-liner from the Baloney Detection Kit and walk off.
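
A back-of-envelope sketch of why the iterated swap test was decisive. The 60% retention figure below is a hypothetical stand-in consistent with the 55–65% run-time shrinkage we observed:

```python
# A back-of-envelope sketch of the swap test above. If each transfer retains
# only ~60% of the source charge (matching the observed 55-65% run-time
# shrinkage), stored energy decays geometrically - no perpetual motion.
# The efficiency figure is a hypothetical illustration.

charge = 1.0        # normalized transferable charge, in coulombs (C = A*s)
efficiency = 0.60   # fraction of charge surviving each transfer iteration

for iteration in range(1, 6):
    charge *= efficiency
    print(f"after swap {iteration}: {charge:.1%} of the original charge remains")

# Terminal voltage alone can still read near 'full' while the transferable
# charge collapses - which is exactly what deceived the inventor.
```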

Reactive Dissonance (in business and legal domains, also called ‘malfeasance’) – the fallacious habitual mindset of a researcher or doctrine of a group conducting research, wherein when faced with a challenging observation, study or experiment outcome, to immediately set aside rational data collection, hypothesis reduction and scientific method protocols in favor of crafting a means to dismiss the observation.

To the faking skeptic, there is only one answer, that which conforms with their religious view of reality, and that which protects their bandwagon agenda. Any critique of their hypothesis simply serves to place you in the lunatic camp. They will not assist a researcher in developing a hypothesis or asking the right questions, they do not care about the answer, they only seek to crush, mock, obfuscate and destroy. Perhaps that litmus test is the best baloney detection kit of them all.

  • Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are the truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.

Define, constrain, then measure – not simply quantify. Quantification is the oldest way of deceiving in the book. There is an old saying that ‘numbers don’t lie.’ But in one of my firms we had a saying that ‘numerals don’t lie, but numbers lie like a sick dog.’ There is a simple reality to which every researcher becomes accustomed: excessive reliance upon numbers can be used to deceive just as well as can deceptively vague arguments. Or as Mark Twain put it, ‘there are lies, damned lies, and statistics.’ But Sagan is correct here in that, if one understands the context of numerics and measures, and if one uses them to elicit and not to direct decision, then quantification can provide an enormous benefit to both hypothesis development and hypothesis reduction processes. One must remember that models produce convergent, divergent and constraining results; very few of which can be used to effect a decision solely on the basis of that model. I have never once advised a corporation or lab to adopt/assume the results of an empirical model at face value. That would be foolishness. But these are measures in the end, used to gauge magnitude and direction; perspective, not simply quantification for quantification’s sake. There is a stark distinction. Use numbers to add value and clarity, not to put on a show of objective diligence.
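
As one hedged illustration of ‘numbers lie like a sick dog,’ consider Simpson’s paradox, sketched here with the classic textbook figures: the aggregate number reverses what every one of its own subgroups shows.

```python
# A toy illustration (classic textbook figures) of how aggregate numbers can
# lie: treatment A wins inside every subgroup, yet 'loses' in the aggregate
# (Simpson's paradox).

a = {"mild": (81, 87), "severe": (192, 263)}    # (successes, patients)
b = {"mild": (234, 270), "severe": (55, 80)}

for grp in a:
    ra, rb = a[grp][0] / a[grp][1], b[grp][0] / b[grp][1]
    print(f"{grp}: A {ra:.0%} vs B {rb:.0%} -> A wins")

ta = sum(s for s, _ in a.values()) / sum(n for _, n in a.values())
tb = sum(s for s, _ in b.values()) / sum(n for _, n in b.values())
print(f"aggregate: A {ta:.0%} vs B {tb:.0%} -> B 'wins'")
```

The quantity is honest arithmetic in both directions; only defining and constraining the measure first reveals which direction is the deception.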

Procrustean Bed – an arbitrary standard to which exact conformity is forced, in order to artificially drive a conclusion or adherence to a specific solution. The undesirable practice of tailoring data to fit its container or some other preconceived argument being promoted by a carrier of an agenda.

Three things which deceive. If numbers and recitations only come from the camp which serves a skeptic’s favored alternative, then be very wary. If numbers are the proprietary property of a group making a contention, and they do not allow you to see where these numbers were derived, be very wary. If a skeptic tosses out a fabutistic which contends that science thinks certain things, and that they represent that scientific thought, and does not appear to understand its context or method of origin, be very wary. And if the ‘skeptic’ whips out a claim to scientific consensus, ask them where it was published (scientific journal) and by which of the three methods it was derived: association poll, meta-analysis study, or peer directed alternative research.

  • If there’s a chain of argument, every link in the chain must work (including the premise) — not just most of them.

Every argument, even the simplest explanation, contends from an implicit or explicit chain of argument. The key is to ask ourselves: have we fairly recognized the implicit chain of assumptions and principles incumbent in our favored argument? A second principle resides in this – in Quality Control analytics and the assembly and design of processes, one learns that daisy chaining a stream of reliable processes together will simply serve to produce an unreliable process. Ten sure bets in a chain of risk dependency equals a sure loss. This renders EVERY chain of logic vulnerable to question (see Stacks of Provisional Knowledge). The wise application of this step involves experience in understanding how to neutralize chains of dependency through incremental hypothesis predictive success; along with the realization that even our foundational givens can be critiqued under such methodology. It is just up to us to possess the courage to do so. “We have, as human beings, a storytelling problem. We’re a bit too quick to come up with explanations for things we don’t really have an explanation for.” Or so proclaimed Malcolm Gladwell in Blink: The Power of Thinking Without Thinking. There is not going to be a perfect alternative; a perfect story; a perfect explanation. Only social epistemologies are touted by their agenda bearers as being flawless. Be cautious of claims of perfect explanations, of the insistence that an explanatory basis be perfect, and of people who understand the chain of dependency and surreptitiously employ it in a process of methodical cynicism.
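
The ‘ten sure bets’ arithmetic is worth making concrete. A minimal sketch:

```python
# A minimal sketch of the 'ten sure bets' principle: daisy chaining
# individually reliable links yields an unreliable chain, because every
# link must hold simultaneously.

links = 10
for reliability in (0.99, 0.95, 0.90):
    chain = reliability ** links
    print(f"{links} links at {reliability:.0%} each -> chain holds {chain:.0%}")

# 10 links at 90% each -> the full chain holds only ~35% of the time.
```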

Self Confirming Process – a process which is constructed to only find the answer which was presumed before its formulation. A lexicon, set of assumptions or data, procedure or process of logical calculus which can only serve to confirm a presupposed answer it was designed to find in the first place. A process which bears no quality control or review, or which does not contain a method through which it can reasonably determine its own conclusion to be in question or error.

If your ‘skeptic’ presents arguments with all the loose ends accounted for and all the questions wrapped up, if they fail to express ‘I don’t know’ on a subject, if no outlying data exists in their argument, if they are quick-tempered or hate their opponents easily, and if they fail to cite a history of personal mistakes and what they learned from them, then be very wary that you are dealing with a social epistemologist – bearing an agenda.

  • Occam’s Razor. This convenient rule-of-thumb urges us when faced with two hypotheses that explain the data equally well to choose the simpler.

This contention is simply, unequivocally, incorrect. The diligent researcher needs to be constantly circumspect regarding both the existential and transactional variants of the ‘Occam’s Razor’ fallacy. This is a ridiculous contention from a person who has only argued and enforced simple science, and has never actually reduced hypotheses inside a scientific framework. A more exhaustive explanation of why applying ‘Occam’s Razor’ (much less spelling it incorrectly) at the END of a pretend ‘scientific method’ is an exercise in deception can be found here: Ethical Skepticism Part 5 – The Real Ockham’s Razor.

Occam’s Razor Fallacy – abuse of Ockham’s Razor (and misspelling) in order to enact a process of sciencey-looking ignorance and to impose a favored idea. Can exist in four forms: transactional, existential, observational and utility blindness.

Transactional Occam’s Razor Fallacy (Appeal to Ignorance) – the false contention that a challenging construct, observation or paradigm must immediately be ‘explained.’ Sidestepping of the data aggregation, question development, intelligence and testing/replication steps of the scientific method and forcing a skip right to its artificially conclusive end (final peer review by ‘Occam’s Razor’).

Existential Occam’s Razor Fallacy (Appeal to Authority) – the false contention that the simplest or most probable explanation tends to be the scientifically correct one. Suffers from the weakness that myriad and complex underpinning assumptions, based upon scant predictive/suggestive study, provisional knowledge or Popper insufficient science, result in the condition of tendering the appearance of ‘simplicity.’

Observational Occam’s Razor Fallacy (Exclusion Bias) – through insisting that observations and data be falsely addressed as ‘claims’ needing immediate explanation, and through rejecting such a ‘claim’ (observation) based upon the idea that it introduces plurality (it is not simple), one effectively ensures that no observations will ever be recognized which serve to frame and reduce a competing alternative.  One will in effect perpetually prove only what they have assumed as true, regardless of the idea’s inherent risk. No competing idea can ever be formulated because outlier data and observations are continuously discarded immediately, one at a time by means of being deemed ‘extraordinary claims’.

Utility Blindness – when simplicity or parsimony are incorrectly applied as excuse to resist the development of a new scientific explanatory model, data or challenging observation set, when indeed the participant refuses to consider or examine the explanatory utility of any similar new model under consideration.

If a skeptic talks with conclusive authority on an issue and says that their argument is based upon ‘Occam’s Razor,’ ignore them because they are clueless.

  • Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much. Consider the grand idea that our Universe and everything in it is just an elementary particle — an electron, say — in a much bigger Cosmos. But if we can never acquire information from outside our Universe, is not the idea incapable of disproof? You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.

Testable is not the same thing as falsifiable. This advice fits within the ethical science necessity of asking ‘what do I need to do in order to develop this construct into a hypothesis?’ Falsification is of extraordinary importance of course in avoiding a Popper Error. There must be recognition however, that many of our strongest hypotheses and accepted sets of science reside squarely upon simply a series of predictive and associative studies. Many sets of accepted science do not intrinsically lend themselves to falsification. Our next suitable action is to ask ‘what is the right next question to ask under the scientific method?’ We only hope that it lends itself to a falsification based answer; but this is less often the case. The question in reality many times instead becomes: can the hypothesis we are considering make successful predictions which can be confirmed by further measures under the scientific method? This principle many times in reality stands in lieu of direct falsification based reductions. If I am lucky however, predictive or associative testing might allow the team to falsify a non-critical path alternative down the line, so that we do not have to focus on it later. A second fallacy danger resides in pursuing falsification for an alternative, yet refusing to pursue falsification for a null hypothesis when it can readily be had. This is a very common form of official pseudoscience. The idea that mind ≡ brain can be tested for falsification. But if we spend all our time and science resources seeking to falsify the antithetical alternative hypotheses instead, or promote our favored religious null hypothesis with only supporting research, we are guilty of applying promotification and false parsimony. This is pseudoscience.
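
A minimal sketch of this symmetric discipline, using hypothetical test records – falsification pursued for the null as readily as for the alternative, and predictive success accrued (never ‘proof’) otherwise:

```python
# A minimal sketch (hypothetical test records) of symmetric hypothesis
# treatment: seek falsification for the null as readily as for the
# alternative, and otherwise accrue predictive success - not 'proof'.

records = {
    "null (conforming) hypothesis": {"confirmed": 9,  "predictions": 12, "falsified": False},
    "alternative hypothesis":       {"confirmed": 11, "predictions": 12, "falsified": False},
}

for name, r in records.items():
    if r["falsified"]:
        print(f"{name}: eliminated by falsification")
    else:
        print(f"{name}: {r['confirmed']}/{r['predictions']} predictions "
              "confirmed (provisional, not proof)")

# Pursuing falsification pathways for only one row of this table, while the
# other row accrues only supporting research, is promotification.
```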

Consilience Evasion – a refusal to consider scientifically multiple sources of evidence which are in congruence or agreement, focusing instead on targeting a single item from that group of individual sources of evidence because single items appear weaker when addressed alone. Also called a ‘silly con.’

If your ‘skeptic’ cannot differentiate between falsification, predictive and associative tests, nor when and how to employ them relative to a null, conforming or alternative hypothesis, be very wary. If they do not know how hypothesis testing is sequenced, think that the formulation of a hypothesis is a mere matter of debate, cannot cite a reduction hierarchy and critical path, as well as cite some key examples, then they are pretending to know science.

As much as I loved his work Cosmos, The Demon Haunted World and The Cosmic Connection, sometimes, we should be hesitant in overly lauding and proclaiming the work of celebrities like Carl Sagan, despite their notoriety, as authorities. Circus Partis is a false appeal to an authority who is simply ‘famous for being famous;’ and in the case of Carl Sagan, while the fame is warranted, the context of authority simply may not be.


¹ Sagan, Carl; The Demon-Haunted World: Science as a Candle in the Dark; Ballantine Books; July 6, 2011; ISBN 9780307801043.

Diagnostic Habituation Error and Spotting Those Who Fall Its Prey

Diagnostic methods do not lend themselves to discovery goals; only to conclusions. The wise skeptic understands the difference in mindset of either approach, its value in application, and can spot those who fall prey to diagnostic habituation; hell-bent on telling the world what is and what is not true.

Linear diagnostic thinkers tend to regard that one must ‘believe’ in or have scientific proof of their idea prior to conducting any research on it in the first place. They will not state this, however – watch carefully their illustration of applied scientific methodology. A bias towards prescriptive conclusions, obsession over beliefs and enemies, and wanting proof as the first step of the scientific method will eventually surface in their worn out examples of poorly researched 1972 Skeptic’s Handbook bunk exposé.

When Lab Coats Serve to Deceive Self

Nickell plating is the method of twisted thinking wherein one dons lab coats and the highly visible implements of science in order to personally foist a display of often questionable empirical rigor. In a similar fashion, lab coats can also be used to deceive self, if one does not “live the examined life” as cited in the context of the Socratic Apology 38a. Diagnostic Habituation Error is a very common judgement error in scientific methodology, often committed by professionals who work in very closed set domains; realms which involve a high degree of linear thinking, or matching of observation (or symptom) to prescriptive conclusion. The medical field is one such discipline set, inside of which many professionals become so blinded by protocols that they fail to discern the more complex and asymmetrical demands of science in other disciplines.

For instance, medical diagnosticians use a term called a SmartPhrase in order to quickly communicate and classify a patient medical record entry for action or network flagging. A SmartPhrase is an a priori diagnostic option, a selection from cladistic clinical language, used in patient Electronic Health Record (EHR) documentation. While its intent was originally to compress and denote frequently used language, it has emerged as a de facto diagnostic option set as well. Wittgenstein would be nodding his aged head at this natural evolution. The nomenclature and diagnostic option set afforded makes life immersed inside Electronic Health Records easier for physicians. It makes science easier – but it comes at a cost as well; a cost which the diagnostician must constantly bear in mind.
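
A minimal sketch (with hypothetical phrases) of how such an expansion behaves, and why the convenience doubles as a constraint:

```python
# A minimal sketch (hypothetical phrases) of SmartPhrase-style expansion:
# a dot-abbreviation expands into canned clinical language. Convenient -
# but the note can then only 'say' what the option set already contains.

smartphrases = {
    ".uri": "Upper respiratory infection; supportive care advised.",
    ".lbp": "Lumbar back pain; no radicular symptoms noted.",
}

def expand(note: str) -> str:
    """Replace every dot-phrase in a draft note with its canned text."""
    for phrase, text in smartphrases.items():
        note = note.replace(phrase, text)
    return note

print(expand("Assessment: .uri Follow up in two weeks."))
```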

Not all sciences are like diagnostic medicine and astronomy. Most are vastly more complex in their deontologically reductive landscape. Diagnostician’s Error is the failure to grasp this.

It would not constitute a far stretch of the imagination to understand why a clinical neurologist might not understand the research complexity or sequencing entailed in scientifically identifying a new species, assessing the impact of commodities on economics and poverty, or discovering a new material phase state. Despite their scientific training, they will habitually conclude that no such new species/state exists, because the traps we set for them are empty, or that our observations must have come from flawed observational memory, or that the textbook doctrine on ‘supply and demand’/’elastic and inelastic’ demand curves applies to our situation. Diagnostics, in the end, do not lend themselves to discovery. This is why it is all too common to observe clinical diagnosticians in Social Skeptic roles, denying the existence of this or that, or pooh-poohing the latest efforts on the part of the public to use integrative medicine. These ‘skeptics’ comprehend only an abbreviated, one-dimensional, linear version of the scientific method; if they apply any at all. In diagnostics, and in particular inside of medicine (diagnostic and clinical medicine, not medical research), the following compromises to the scientific method exist:

  • symptom eventually equals previously known resolution
  • only the ‘most likely’ or ‘most risk bearing’ alternatives need be tested
  • very little need for discovery research
  • absence of evidence always equals evidence of absence
  • only lab experimental testing is valid
  • single parameter measure judgements are employed with abandon
  • the first question asked is an experiment, little advance thought is required
  • the first question presumes a whole domain of familiar ‘known’
  • the intelligence research has already been completed by others – and is assumed comprehensive
  • necessity observation is done by the patient (but is discounted in favor of experiment)
  • Ockham’s Razor involves fixed pathways
  • the set of possible outcomes is fixed and predetermined
  • an answer must be produced at the end of the deliberative process

The key for The Ethical Skeptic is to be able to spot those individuals who not only suffer from forms of Diagnostic Habituation, but also have a propensity to enforce the conclusions from such errant methodology and thinking on the rest of society. Not all subjects can be resolved by diagnostics and linear thinking. This form of thinking usually involves avoiding a rigor called a logical calculus, in favor of something called abductive (or simplest explanation) reasoning. If one is not absolutely sure that the domain inside of which they are working is truly abductive (disease is 98% abductive – the rest of our reality is not), then it would be an error to use this type of reasoning universally as one’s approach to broader science. One does not research anomalous phenomena by using abductive reason, for instance, because abductive logical inference is what kept the world locked in religious and Dark Age understandings of our realm for so long.

Reasoning Types†

Abductive Reason (Diagnostic Inference) – a form of precedent based inference which starts with an observation then seeks to find the simplest or most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as inference to the best known explanation.

Strength – quick to the answer. Usually a clear pathway of delineation. Leverages strength of diagnostic data.

Weakness – Uses the simplest answer (ergo most likely). Does not back up its selection with many key mechanisms of the scientific method. If an abductive model is not periodically tested for its predictive power, such can result in a state of dogmatic axiom.

Inductive Reason (Logical Inference) – is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument may be probable, based upon the evidence given combined with its ability to predict outcomes.

Strength – flexible and tolerant in using consilience of evidence pathways and logical calculus to establish a provisional answer (different from a simplest answer, however still imbuing risk into the decision set). Able to be applied in research realms where deduction or alternative falsification pathways are difficult to impossible to develop and achieve.

Weakness – can lead research teams into avenues of provisional conclusion bias, where stacked answers begin to become almost religiously enforced, until a Kuhn Paradigm shift or the death of the key researchers involved is required to shake science out of its utility blindness on one single-answer approach. May not have examined all the alternatives, because of pluralistic ignorance or neglect.

Deductive Reason (Reductive Inference) – is the process of reasoning from one or more statements (premises) to reach a logically certain conclusion. This includes the instance where the elimination of alternatives (negative premises) forces one to conclude the only remaining answer.

Strength – most sound and complete form of reason, especially when reduction of the problem is developed, probative value is high and/or alternative falsification has helped select for the remaining valid understanding.

Weakness – can be applied less often than inductive reason.
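
For illustration only, a minimal sketch contrasting the three modes defined above; all domain content is hypothetical:

```python
# A minimal sketch contrasting the three inference modes defined above.
# All domain content (symptoms, hypotheses) is hypothetical illustration.

precedent = {"fever + cough": "influenza"}   # diagnostic lookup table

def abductive(observation):
    # jump to the most likely precedent-based explanation
    return precedent.get(observation, "unknown")

def inductive(confirmed, trials):
    # provisional support proportional to a predictive track record
    return f"supported in {confirmed}/{trials} predictive tests (provisional)"

def deductive(alternatives, falsified):
    # eliminate falsified alternatives; a sole survivor is forced
    remaining = [a for a in alternatives if a not in falsified]
    return remaining[0] if len(remaining) == 1 else remaining

print(abductive("fever + cough"))                              # 'influenza'
print(inductive(confirmed=18, trials=20))
print(deductive(["H1", "H2", "H3"], falsified={"H1", "H3"}))   # 'H2'
```

Note in the sketch how only the abductive mode is a closed lookup; it can never return an answer its precedent table does not already contain – the Diagnostic Habituation Error in miniature.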

Diagnostic Habituation Error

/philosophy : science : method : linear diagnostics : unconscious habituation/ : the tendency of medical professionals and some linear thinkers to habitually reduce subjects of discourse inside protocols of diagnosis and treatment, when not all, or even most, fields of discourse can be approached in this manner. Diagnosis must produce an answer, is performed inside a closed set of observational data domain and constrained fields of observation (e.g. the 1,500 most common human maladies), is convergent in model nature, tends toward increasing simplicity as coherency is resolved, and develops answers which typically select from a closed field of prescriptive conclusions. All of these domain traits are seldom encountered in the broader realms of scientific research.

Detecting a Linear Diagnostic Thinker – Habituated into Selecting From a Prescriptive Answer Inventory

They tend to think that one must ‘believe’ in or have scientific proof of their idea prior to conducting any research on it in the first place. They will not state this, however – watch carefully their illustration of applied scientific method. They will rarely grasp an Ockham’s Razor threshold of plurality, nor understand its role; obsessively clinging to the null hypothesis until ‘proof’ of something else arrives – gaming method, knowing full well that ‘proof’ seldom arrives in science.

The determination of a diagnosis of inherited static encephalopathy may be a challenging endeavor at first, and indeed stands as a process of hypothesis reduction and science. However, this reduction methodology differs from the broader set of science, and in particular discovery science, in that it features the following epignosis characteristics. The problem arises when fake skeptics emulate the former process and advertise that its method applies to their ability to prescriptively dismiss what they do not like.

A key example of applied Diagnostic Habituation Error can be found here: an elegant demonstration of how well-applied diagnostic methodology inside a clinical technical role can serve to mislead its practitioner when applied in the broader realms of science. This treatise exhibits a collegiate-level dance through repetitious talk about method, parlaying straight into sets of very familiar, poorly researched canned conclusions, excused by high-school-level pop-skeptic dogma. Worn-out old propaganda about how memory is fallible if we do not like its evidence, and how, if you research anything forbidden, your mind is therefore ‘believing’ and playing tricks on its host.

Diagnosis Based Science (How it differs from the broader set of science reduction and discovery)

Observational Domain is Set, Experimental Only and Controlled – the human body is the domain, and the set of observable parameters is well established, known, and relatively easily measured.

Example:  Observable parametrics in the human body consist of blood measures, skin measures, neurological signals, chemical signatures and hormone levels, physical measures, and those measures which can be ascertained through bacteriology and virology. In medicine, the scientific method starts there.  In discovery science, method does not start with an experiment; it starts with observation, necessity and intelligence. Despite the complexity which is inherent inside these observational domains, still the set is highly restricted, and the things observed-for are well known for the most part. In contrast, examining the galaxy for evidence of advanced life will be a long, poorly understood and failure-laden pathway. We cannot begin this process with simply the Drake Equation and an experiment, and hope to have success.
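
For reference, the Drake Equation the example alludes to is conventionally written:

    N = R_{\ast} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where R∗ is the rate of suitable star formation, f_p the fraction of stars bearing planets, n_e the number of habitable planets per such system, f_l, f_i and f_c the fractions upon which life, intelligence and detectable communication arise, and L the longevity of the communicative phase. Every factor past R∗ remains observationally unconstrained – which is precisely why an experiment-first approach fails in this domain.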

Field of Observation is Constrained and Well Established with Gnosis Background – there are only a few subset disciplines inside which observations can be made. Each is well documented and for which is published a guiding set of protocols, advisement, and most recent knowledge base regarding that discipline.

Example:  There exists only a closed set of systems inside the human body, which are for the most part well understood in terms of dysfunction and symptom: integumentary, skeletal, nervous, cardiovascular, endocrine and muscular systems. Compare this to the energy systems which regulate our planetary environment. Most are not well understood in terms of impact, and we are not even sure how many constitute the major contributors to climate impact, or even how to measure them. I am fully behind science on Climate Change, but in no way do I regard the discipline as a diagnostic field. I am wary of those who treat it as such.  And they are many.

Fixed Ockham’s Razor Mandate, Single Hypothesis and Extreme Null Evidence Standards – The protocols of diagnosis always dictate that the most likely or danger-entailing explanation be pursued in earnest, first and only. Once this hypothesis has been eliminated, only then can the next potential explanation be pursued. Absence of evidence is always taken as evidence of absence. This is not how discovery or asymmetric science works. Very rarely is diagnostic science challenged with a new medical discovery.

Example:  When a 55 year old patient is experiencing colon pain, the first response protocol in medicine is to order a colonoscopy. But in research regarding speciation for instance, or life on our planet, one does not have to solely pursue classic morphology studies to establish a phylogeny reduction. One can as well simultaneously pursue DNA studies and chemical assay studies which take a completely different tack on the idea at hand, and can be used to challenge the notion that the first phylogeny classification was a suitable null hypothesis to begin with. Real research can begin with several pathways which are in diametric opposition.

Diagnoses are Convergent in Nature – the methods of reduction in a diagnosis consistently converge upon one, or maybe two, explanatory frameworks inside a well known domain of understanding. In contrast, the broader world of modeling results in convergent models only rarely; more often it yields non-discriminating or divergent models, which require subjective reasoning to augment any decision process (if a decision is reached at all).

Example:  If I have a patient complaining of tinnitus, my most complex challenge exists on the first day (in most cases). I am initially faced with the possible causes of antibiotic effects, hearing loss, intestinal infection, drug use, excessive caffeine intake, ear infections, emotional stress, sleep disorder or neurological disorder. From there, evidence allows our models to converge on one optimal answer, in short order in most cases.  Compare, by contrast, an attempt to discern why the level of poverty in a mineral-rich country continues to increase, running counter to the growing GDP derived through exploitation of those minerals. The science and models behind the economics which seek to ascertain the mechanisms driving this effect can become increasingly divergent and subjective as research continues.
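
As a toy sketch of this convergence (all priors and likelihoods below are hypothetical numbers, invented purely for illustration), repeated Bayesian updates over a closed candidate set drive the posterior mass onto a single answer:

    # Closed set of candidate causes for the tinnitus example, flat prior.
    priors = {"caffeine": 0.25, "ear_infection": 0.25,
              "hearing_loss": 0.25, "stress": 0.25}

    # Hypothetical likelihoods P(finding | cause) for two clinical findings.
    likelihoods = {
        "normal_audiogram": {"caffeine": 0.9, "ear_infection": 0.5,
                             "hearing_loss": 0.05, "stress": 0.8},
        "clear_ear_canal":  {"caffeine": 0.9, "ear_infection": 0.1,
                             "hearing_loss": 0.8, "stress": 0.7},
    }

    def update(posterior, finding):
        # Bayes' rule over the closed candidate set, then renormalize.
        unnorm = {c: p * likelihoods[finding][c] for c, p in posterior.items()}
        total = sum(unnorm.values())
        return {c: v / total for c, v in unnorm.items()}

    posterior = priors
    for finding in ("normal_audiogram", "clear_ear_canal"):
        posterior = update(posterior, finding)
    print(max(posterior, key=posterior.get))  # mass converges onto one candidate

Divergent modeling problems – the mineral-economy example above – offer no such closed candidate set to normalize over, which is exactly why the diagnostic habit fails there.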

Tendency is Towards Increasing Simplicity as Coherency is Resolved – medical diagnoses tend to reduce information sets as coherency is attained and focus on one answer.  Please note that this is not the same as reducing complexity.  The reduction of complexity is not necessarily a scientific goal – as many correct solutions are indeed also inherently complex.

Example:  As I begin to diagnose and treat a case of Guillain–Barré syndrome, despite the initial chaos which might be entailed in symptom and impact mitigation, or the identification of associated maladies – eventually the patient and doctor are left with a few reduced and very focused symptomatic challenges which must be addressed: CNS impacts, nerve damage, allergies and any residual paralysis. Eventually the set of factors reduces to a final few.  In contrast, understanding why the ecosystem of the upper Amazon is collapsing, despite the low incidence of human encroachment, is a daunting and increasingly complex challenge. Its resolution may require much out-of-the-box thinking on the part of researchers, who must constantly exhaust multiple explanatory pathways and cannot wait for each one-by-one prescriptive solution, explanation or null hypothesis to ‘work itself out’ over 25 years.

Selects from Solutions Inside a Closed Field of Prescriptive Options – almost all medical diagnoses are simply concluded from a known set of precedent solutions, better or lesser understood, from which decisions can be made and determinations drawn.  This is not the case with broader-scope or discovery science.

Example:  In the end, there is only a set of about 1,500 primary diseases from which we (most of the time) can regularly choose to diagnose a set of symptoms, most with well established treatment protocols. Contrast this with the World Health Organization’s estimate that over 10,000 monogenic disorders potentially exist.¹ The research task entailed inside monogenic nucleotide disorders is skyrocketing and daunting.  This is discovery science. The diagnosis of the primary 1,500 human diseases is not. Different mindsets will be needed to approach these very different research methodologies.

An Answer Must be Produced or We Fail – 100% of diagnostic processes involve the outcome of a conclusion. In fake skepticism, of course, the participants are rife with ‘answer which has the greatest likelihood of being true’ type baloney.  To the Diagnostic Habituated fake skeptic, an answer has to be produced – NOW. But in a discovery process, we do not necessarily have to have an answer or disposition on a subject.  Be very wary of those who seem to force answers and get angry when you do not adopt their conclusion immediately. Be wary of those who have an answer for all 768 entries in The Skeptic’s Dictionary (including those who wrote the material). They are not conducting a scientifically reliable or honest exercise; rather they are simply propping up a charade which seeks to alleviate the mental dissonance stress from something larger which disturbs them greatly. See Corber’s Burden: as one claims to be an authority on all that is bunk, one’s credibility declines in hyperbolic inverse proportion to the number of subjects in which authority is claimed (sketched loosely as a proportionality after the example below).

Example:  If one is experiencing pain, for the most part both the patient and the researcher will not stop until they have an answer.  A conclusive finish to the pain itself is the goal, after all, and not necessarily some greater degree of human understanding.  Contrast this with grand mysteries of the cosmos. We do not yet have an answer to the Sloan Digital Sky Survey “Giant Blob” Quasar Cluster² observation, nor to how it could exist under current understandings of classical cosmology or M-theory. We have to await more information. No one has even suggested forcing an ‘answer which has the greatest likelihood of being true.’ To do so would constitute pseudoscience.
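
Corber’s Burden, as invoked above, can be rendered loosely as a proportionality – this is a reading of the author’s ‘hyperbolic inverse’ phrase, not a measured relationship:

    C(n) \propto \frac{1}{n}

where C is the credibility of a claimant of bunk-detecting authority, and n is the number of subjects over which that authority is claimed.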

It is from this constrained mindset that the Ethical Skeptic must extract himself or herself, in order to begin to grasp why so many subjects are not well understood, and why we must think anew in order to tackle the grander mysteries of our existence. The more we continue to pepper these subjects with pre-scripted, habituated diagnoses, the more those who have conducted real field observation will object. We have well observed the falling back upon the same old ‘conspiracy theorist’ pejorative categorization of everyone who disagrees with the diagnoses proffered by these linear thinkers. It is not that a diagnostic approach to science always produces an incorrect answer. But if we follow simply the error of diagnostic habituation, then let us just declare that mind ≡ brain right now, and we can close up shop and all go home. And while I might not bet against that theory were it on the craps table in Vegas and I were forced to make a selection now, neither am I, in an ethical context, ready to reject its antithesis simply because some diagnostic linear thinkers told me to.

I am an Ethical Skeptic. I do not reject your idea as false; rather I await more information. As a discovery researcher I refuse to simply accept your diagnostically habituated ‘critical thinking,’ nor its identification of which answer the constrained set has shown to be ‘most likely true.’

That is not how real skepticism and real science work.


¹  World Health Organization, “Genes and Human Disease,” Genomic Resource Center, Spring 2015; http://www.who.int/genomics/public/geneticdiseases/en/index2.html.

²  “The Biggest Thing in the Universe,” National Geographic, January 11, 2013, National Geographic Society; http://news.nationalgeographic.com/news/2013/01/130111-quasar-biggest-thing-universe-science-space-evolution/.

† Abductive, Inductive, and Deductive Reason definitions – are modified from their approximate definitions provided by Wikipedia, in its series on reasoning and logical inference.

https://en.wikipedia.org/wiki/Abductive_reasoning

https://en.wikipedia.org/wiki/Inductive_reasoning

https://en.wikipedia.org/wiki/Deductive_reasoning

The Deontologically Accurate Basis of the Term: Social Skepticism

Failures and agendas in the name of science are not the result of ‘scientism’ per se, as science can never be a teleological ‘-ism’ by its very definition. Science itself is neutral. Failures with respect to science are the result of flawed or manipulated philosophy of science. When social control, change or conformance agents subject science to a state of being their lap-dog, serving specific agendas, such agents err in regard to the philosophical basis of science, skepticism. They are not bad scientists, rather bad philosophers, seeking a socialized goal. They are social skeptics.

Ethical Skepticism agrees with science that there exists no set of truth p which is true solely because of a non-epistemological basis of desire: ‘I want my beliefs to be true; socially they are justified; I hold moral authority; and therefore they should be made true by science.’ This flawed philosophy stands as the essence of Social Skepticism. It is a concealed and deeply seated antipathy towards the protocols of real science. This is why dismissive negativity and intimidation arise so quickly in a Social Skeptic when disdained ideas, evidences or observations are broached.

An epistemology consists of both the underpinning objective elements, as well as the means of logic, philosophy and method by which we arrive at the proposition that p is true.  “Social epistemology is the study of the social dimensions of knowledge or information.”¹ Thus is the definition framed by Alvin Goldman in his excellent article on social epistemology inside the Stanford Encyclopedia of Philosophy resource base.  He further expounds,

Social epistemology is theoretically significant because of the central role of society in the knowledge-forming process. It also has practical importance because of its possible role in the redesign of information-related social institutions.¹

However, the Ethical Skeptic bristles at such machinations, this “redesign of …institutions,” and further contends that social epistemology rarely, if ever, remains constrained to the set of social institutions. This epistemic commitment is especially objectionable when it is employed to extend control over science from such social institutions, by tampering with the Knowledge Development Process to support a socially driven end goal. An Ethical Skeptic views this as a highly unethical process – a disservice to mankind for selfish and perfidious purposes; active pseudoscience, as opposed to passive categorization (existential) pseudoscience. Ethical Skepticism agrees with science that there exists no set of truth p which is true solely because of a non-epistemological basis of desire q: ‘I want my beliefs q to be true; socially they are justified; I hold moral authority; and therefore they should be made true by science.’ It is this antipathy towards science which is the key unacknowledged facet of Social Skepticism. This is why the top concerns for our future, between scientists and SSkeptics, do not align at all (see Real Scientists Disagree with SSkeptics About World’s Top Concerns for the Future). Social Skeptics only use science as a tool for moral authority; it threatens their power, so they seek to control it at all costs.

Social Skepticism fully understands the obstacles to such thinking were it made manifest, and therefore seeks to establish a set of pathways around this problem.  Hyperepistemological and hypoepistemological skepticism and science are the false epistemological bases which stand in as the apparent scientific protocols supporting an agenda, hinging upon a concealed social-epistemology-based view of science.  The related definition, extracted from the Stanford Encyclopedia of Philosophy, is structured thusly:¹ ²

Social Epistemology

/philosophy : pseudo-philosophy/ When we conceive of epistemology as including knowledge and justified belief as they are positioned within a particular social and historical context, epistemology becomes social epistemology. Since many stakeholders view scientific facts as social constructions, they would deny that the goal of our intellectual and scientific activities is to find facts. Such constructivism, if weak, asserts the epistemological claim that scientific theories are laden with social, cultural, and historical presuppositions and biases; if strong, it asserts the metaphysical claim that truth and reality are themselves socially constructed.¹ ²

Moreover, in recognizing this, when social justice or the counter to a perceived privilege are deemed warranted, shortcuts to science in the form of hyper- and hypoepistemologies are enacted – bypassing the normal frustrating process of peer review, and substituting instead political-social campaigns, waged to act in lieu of science. These campaigns of ‘settled science’ are prosecuted in an effort to target a disliked culture, non-violent belief set, ethnicity or class – for harm and removal of human rights.

Social Skeptics view the world of science as a mechanism which can be manipulated and altered to accommodate non-scientific goals, or even to promote false scientific conclusions if justified by the moral authority entailed. In their view, science should be employed as the football which enables dictation of morals, standards of human interaction, tolerable or necessary human rights, denigration of specific races, peoples, genders or groups, acceptable government, political parties and soft economic principles. These strong social epistemological pundits are, at their essence, scientific crooks.  However, they are fully aware that science, inside the key verticals of its application, in general does not accept such contortions of its professional standards.  As a result, Social Epistemologists must construct sciencey-looking pathways which tender both the appearance of protocol and method, and establish an Apparent Coherency. This Apparent Coherency is then enforced upon society as a whole, with much intimidation and negativity as the final facet of its enforcement.

And as is true to form in socially reinforced protocols, the enormous social pressure brought to bear in the form of anger and mocking humor, in a public and derisive context, stands as the signature – indeed the red flag hallmark – of Social Skepticism.

Social Skepticism

/pseudoscience : agenda : based upon pseudo-philosophy (hypo- and hyperepistemology)/ : employment of fake a priori deduction methods, combined with biased, stacked provisional abductive reasoning, both employed as a masquerade of scientific method in order to enforce a belief set as being scientific when it is not. It is a sponsored activist movement which functions as an integral part of socially engineered mechanisms seeking to dominate human thought, health, welfare and education – this domination serving as a means to an end: the subjection of all mankind’s value to mandated totalitarian institutions. Institutions which serve to benefit a social elite, yet which stand threatened by innate elements of mankind’s being and background.

An ideologue-driven enforcement of philosophically bad science, crafted to obfuscate mankind’s understanding of critical issues inside which it holds specific goals. Its members practice a form of vigilante bullying, employed in lieu of science to dismiss disliked subjects, persons and evidence before they can ever see the light of day. This seeks to establish as irrefutable truth a core philosophy of material monism, dictating that only specific, authorized life, physical and energy domains exist. A comprehensive program of enforcement is sought accordingly: rather than risk ethical scientific methodology, a practice of preemptive methodical cynicism and provisional knowledge, which underpins an embargo policy regarding, cultivates ignorance of, and institutionalizes intimidation surrounding, any subject which could conceivably threaten their religion, social control and economic power.

Employment of false hypo- or hyperepistemology, utilized to enforce a hidden Social Epistemology based agenda seeking establishment of a specific Apparent Coherence which denies all opposing forms of knowledge.

Therefore, as one can see, Social Skepticism really stems from a surreptitious social epistemological view of science – a view that science can be molded, shaped and controlled in any fashion that controlling forces see fit, and further then employed as moral authority to enable any policy, governance, party or social goal they initially envisioned. There exist, therefore, two versions of application wherein this social epistemology is plied, and inside of which it can be concealed and made to appear in the form of science: Hyperepistemology, in general lying through facts and extremes, and Hypoepistemology, in general lying through misinformation and lax standards.  Finally the lie, as it is crafted into a social construct under a socially epistemological approach, is termed an Apparent Coherence (see graphic below).

[Graphic: Apparent Coherency]

A hyperepistemology is therefore any pseudoscience which seeks to screen out undesired conclusions by becoming excessively purist in exercise of data, observation, experiment, measurability, reporting and acceptance. It is active transactional pseudoscience. Complementarily, a hypoepistemology is any process which seeks to skip deontological rigor and step right to the prejudiced a priori categorization of a subject as being ‘disproved’ or a favored subject as being ‘consensus.’ This is existential pseudoscience.

Notice that again here, pseudoscience cannot possibly be, in a logical philosophical framework, defined as a specific topic of study. When this false definition is enforced, the whole philosophical basis of epistemology shatters into incoherency.  Such is the nature of social epistemology. It only seeks Apparent Coherency, and nothing more.

Hyperepistemology

/transactional pseudoscience/ Employment of extreme, linear, diagnostic, inconsistent, truncated or twisted forms of science in order to prevent the inclusion or consideration of undesired ideas, data, observations or evidence.  This is undertaken in order to enforce a hidden Social Epistemology based agenda seeking establishment of a specific Apparent Coherence which denies all opposing forms of knowledge.

Hypoepistemology

/existential pseudoscience/ Relegation of disfavored subjects and observations into pathways of false science, and employment of bucket pejorative categorizations, in order to prevent such subjects’ inclusion or consideration in the body of active science.  Conversely, acceptance of an a priori favored idea as constituting sound science, based simply upon its attractiveness inside a set of social goals. Both are undertaken in order to enforce a hidden Social Epistemology based agenda seeking establishment of a specific Apparent Coherence which denies all opposing forms of knowledge.

Speaking of social epistemologies, there is an objective in all of this. Finally, ladies and gentlemen, this whole process introduces the goal of the social epistemology: the Omega Hypothesis.

Omega Hypothesis (HΩ)

/philosophy : pseudoscience : social epistemology : apparent coherence/ : the argument which is foisted to end all argument, period. An argument which has become more important to protect than science itself. An invalid null hypothesis, or a preferred idea inside a social epistemology. A hypothesis which is framed to end deliberation without due scientific rigor or alternative study consensus, or which is afforded unmerited protection or assignment as the null. The surreptitiously held and promoted idea, or the hypothesis protected by an Inverse Negation Fallacy. Often one which is promoted as true by default, with the knowledge in mind that falsification will be very hard or next to impossible to achieve.

Nihilism

Material Monism

Metaphysical Naturalism

Materialism

Skin Color Hatred, Denigration or Promotion

Political Party Promotion

Political Philosophy Promotion

Class and National Origin Hatred

Class Warfare

Religious Hatred

Obscuring of Forbidden Elements of Knowledge

Academic Hatred

Monopolization

Oligopolization

Socialism

Royalty Promotion and Enrichment

Two Pseudoscientific Mechanisms Currently in Employment

[Graphic: When Science No Longer Has to be Science]

And understanding that skepticism, in its true form, is a means of preparing the mind and data sets to accomplish real science, and of protecting the method of science – not specific Omega Hypothesis answers nor pseudoscientific mechanisms – it becomes incumbent upon us as Ethical Skeptics to deny this false form of skepticism, and the twisted social epistemologies which result.


¹  Alvin Goldman’s “Social Epistemology,” The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/epistemology-social/

²  Matthias Steup’s “Epistemology,” The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/epistemology/#MRE