Not all methods which seek to achieve some kind of benefit through the clear, value-laden and risk-abating processes of inference can be used in every circumstance. Most of science recognizes this. But when induction is used in lieu of deduction, or abduction in lieu of induction – when the higher order of logical inference could have been used – beware that pseudoscience might be at play.
Choosing the lower order of logical inference can be a method by which one avoids challenging alternatives and data, yet still tenders the appearance of conducting science. One can dress up in an abductive robe and tender all affectation of science; utter all the right code phrases and one-liners about ‘bunk’ – but an ethical skeptic is armed to see through such a masquerade.
There is this thing called logical inference. Simply put, logical inference is the process of taking observed premises and transmuting them into conjectures. Hopefully beneficial conjectures. Such a process usually involves risk. So, when we are challenged with the need to make some kind of benefit happen, say to alleviate a sickness, or fly from place to place, at times we must face risk in order to achieve such an advancement. The process of science involves a carefully planned set of steps, which allows us to bridge this gap between premise and robust conjecture by means of the most clear, value-laden and risk-abating pathway which we can determine.
In general, there are three rational processes (and a fourth commonly practiced but invalid one) by which we can arrive at a sought-after conclusion or explanation. Abductive, inductive and deductive reason – in order of increasing scientific gravitas and strength as developmental models of knowledge – constitute the three genres of thought inside which we mature information and methods of research toward this end. In the three exhibits and the final comparison table below, you will observe the three genres of logical inference compared by the mechanisms of science which each brings to bear as a strength. As you may glean through the four exhibits, the most expedient form of legitimate answer development comes in the form of abductive inference, while the most science-intensive form is deduction. As you move from left to right in the table below, the epistemological basis of the explanation increases commensurate with the rigor of research and discipline of thinking. Each ‘scientific mechanism’ is an element, step or feature of the scientific method which affords an increase in verity inside the knowledge development process. A blue check mark in the table below means that the inference method provides or satisfies the science mechanism. An orange check mark denotes the condition wherein the inference method only partially provides for the scientific mechanism.
Constructive Ignorance (Lemming Weisheit or Lemming Doctrine)
/philosophy : skepticism : social skepticism/ : a process related to the Lindy Effect and pluralistic ignorance, wherein discipline researchers are rewarded for being productive rather than right, for building ever upward instead of checking the foundations of their research, for promoting doctrine rather than challenging it. These incentives allow weak confirming studies to be published and untested ideas to proliferate as truth. And once enough critical mass has been achieved, they create a collective perception of strength or consensus.
When Knowledge is Not Necessarily the Goal
Deduction, therefore, is the most robust form of inference available to the researcher. Unfortunately, however, not every inquiry challenge which we collectively face can be resolved by deductive methodology. In those instances we may choose to step our methodology down to induction (or deescalate) as our means of resolving difficult-to-falsify research. Induction introduces risk into the deontological framework of the knowledge development process. It presents the risk that we become fixated upon one single answer for long periods of time; possibly even making such an explanation prescriptive in its offing – rendering our process of finding explanations vulnerable to even higher risk by introducing habitual abductive methods of logical inference.
The key is this: What is the ‘entity’ being stacked under each inference type, as complicated-ness increases or we conjecture further and further into a discipline featuring a high degree of unknown? (moving to the right on the adjacent graph). Often the actual entity being stacked is either risk of error, or error itself – and not, as we misperceive, actual knowledge.
Science – ‘I learn or come to know’ : using deduction and inductive consilience to infer a novel understanding.
Deduct: Conclusiveness – Benefit from falsified ideas is stacked (Understanding Evolves)
Induct: Likeliness – Iterations or predictive trials are stacked (Understanding Matures)
Sciebam – ‘I knew’ : using abduction and panduction to enforce an existing interpretation.
Abduct: Correctness – Assumptions are stacked (Understanding Codifies)
Panduct: Doctrine – Everything but what my club believes, is correlated and falsified (Understanding Decays – an invalid form of inference)
As we stack entities, induction therefore is preferable to abduction, and deduction preferable to induction, because of the accumulation of unacknowledged a priori error in each entity addition. Obviously, one should seek to deescalate one's method of logical inference only when forced to do so by the logical framework or evidence available to the discipline. However, researcher beware. Choosing the lower order of logical inference can be a method by which one avoids challenging answers, yet still tenders the appearance of conducting science. We start first with a favorite trick of social skeptics – i.e. casually shifting to abductive diagnostic reason in instances where deductive discipline or inductive study are still critically warranted (see Diagnostician's Error). A second trick can involve the appearance of science through the preemptive or premature intensive focus on one approach at the purposeful expense of necessary and critical alternatives; conjectures involving ideas one wishes to ignore at all costs (see The Omega Hypothesis). This is a furtive process called Methodical Deescalation. It is a theft of knowledge by slow sleight-of-hand. One can dress up in an abductive robe and tender all affectation of science; utter all the right code phrases and one-liners about ‘bunk’ – but an ethical skeptic is armed to see through such a masquerade.
Methodical Deescalation

/philosophy : pseudoscience : inadequate inference method/ : employing abductive inference in lieu of inductive inference, when inductive inference could have, and under the scientific method should have, been employed. In similar fashion, employing inductive inference in lieu of deductive inference, when deductive inference could have, and under the scientific method should have, been employed.
All things being equal, each mode of inference below is superior to the one preceding it:
- Conformance of panduction (while a type/mode of inference this is not actually a type of reasoning)
- Convergence of abductions
- Consilience of inductions
- Consensus of deductions
One of the hallmarks of skepticism is grasping the distinction between a ‘consilience of inductions’ and a ‘consensus of deductions’. All things being equal, a consensus of deductions is superior to a consilience of inductions. When science employs a consilience of inductions when a consensus of deductions was available, yet was not pursued – then we have an ethical dilemma called Methodical Deescalation.
For example, using magic tricks and magicians to point out the deceptive nature of the mind and observation (targeting some paranormal thing a skeptic does not like) – is abductive reason. The flaw in this favorite trick of social skeptics, as in the case where they wheel out The Amazing Randi for instance, is that if you were wrong, you would never even know it. You have no methodology of self-checking, induction or deduction. It is a trick of purposeful methodical deescalation: the true magic trick pulled on us all.
An example of countering and defusing Methodical Deescalation and neutralizing its resulting ignorance effect:
Earlier in my career I was brought into a research lab by an investment house to act as CEO of its research organization. The goals set before us were clear: re-organize, focus and streamline its research and development work, align its staff and strengths to the best-fit roles, and bring to fruition a belabored research critical path regarding a sought-after new discovery in material phase-transition lattice and vacancy structures. Without going into the technical nature of the work, which is covered under classification and non-disclosure agreements – we were successful in achieving the groundbreaking discovery in just under 4 months. This as opposed to the 18-month benchmark which had been established by the advising investment fund, and the 3 years of flailing around which had preceded. Set aside, of course, the risk that the course of art would prove unfruitful or dead-end in the first place. Stockholders, the board of directors, US Government/Military stakeholders, and the intellectual property and prior-art patent holders were ecstatic at the success. One element of approach which helped precipitate this success was to assign the right habit/method of inference to the right step in the process. We threw out several of the ‘knowns’ under which our research staff had been burdened, assigned fresh new minds to the observation and critical-question sequences – then finally tested several procedures based upon understandings which were ‘highly implausible’. In other words, we threw the value of risk-critical-path abductive inference out the window and began to test what it was we ‘knew’. I took the abductive-minded researchers, the ones who instructed everyone as to the highly implausible nature of our thinking, and put them in charge of procedure, script-sheet development and Thermo-Fisher data integrity. This worked well.
It was a Friday afternoon at 3:45 pm when a tech came bursting into our offices and reported that three of our test samples from our reactors showed ‘anomalous results’. These results were small, but were undeniable. They flouted the common wisdom as to what could be done with this material, in this phase state. We filed the provisional method, best mode, and device patents through our law firm within the next 14 days. All the credit went to the scientific researchers, all the money went to the investors, and I quietly went on to the next assignment. My name is not on any of the research. This is the way it works. Of course, the stockholders and fund kept me pretty busy doing the same thing over and over again for several years thereafter. They all remain loyal business colleagues to this day.
One cannot spend their life afraid of being found wrong. Wrongness is the titration's chemical transition color which indicates a science advance. And those who invest their egos into conformance, avoiding taking a look so as not to be found wrong, who celebrate the correctness of the club – they are neither scientists nor skeptics at all. They are the fake ilk. Skepticism is more about asking the right question at the right time, and being able to handle the answer which results, than anything else.
Take Two Skepticisms and Don’t Call Me in the Morning
Another example of a circumstance wherein induction was applied in lieu of deduction – and ended up causing consensus favoring an Omega Hypothesis – can be found in our history of research on the epidemiology of peptic ulcers. The desire to protect pharmaceutical revenues, and an old ‘answer to end all answers’ which had become more important to protect than even science itself, involved the employment of acid blockers as primary ulcer treatments. This dogmatic answer was promoted through propaganda in lieu of the well-established deductive knowledge that the H. pylori bacterium was the cause of the majority of ulcers. This 40-year comedy in scientific corruption stands as a prime example of methodical deescalation played by industry fake skeptics seeking to protect client market share and revenues.
Another example involves the case where dogmatic skeptics begin to refuse to examine evidence, in favor of maintaining 50-year-old understandings of science which are backed by scant study done long ago in questionable contexts and circumstances of bias. Such fake skepticism usually involves choosing the good people and the bad people first, then the good subjects and the bad subjects, followed by the implication that an enormous depth of study exists regarding the subject they dislike (when 95% of the time such is not the case at all). To the right you can see an example interview from MedPage Today where celebrity skeptic Steven Novella uses Diagnostic Habituation Error to fail to serve patients who come in and complain of a whole series of symptomatic suffering. It is clear that his 50-year-old science, his desire to stop the progress of scientific inquiry (especially medical), and his disdain for patients, researchers and doctors who carry a ‘narrative’ (this is science?) he does not like – are disturbingly and agency-confirmingly high (not the same thing as bias).1
Panduction

/philosophy : rhetoric : pseudoscience : false deduction/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop ‘everything but what my club believes’ as constituting one group of bad people, who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series studies nor replication methodology can be derived from this type of ‘study’, which in essence serves to make it pseudoscience. This is a common ‘study’ format conducted by social skeptics masquerading as scientists, to pan people and subjects they dislike. An example of a panductive inference can be found here: ‘Core thinking error underlies belief in creationism, conspiracy theories’. It is not that the specifics inside what they are panning are right or wrong (and they pan a plethora of topics in this method) – it is the method of inference used to condemn which is pseudoscience. Even though I agree with many of their conclusions, I do not agree with the methodology by which they arrived at them. It is pseudoscience and can be used to harm innocent subjects and persons, as well as the questionable ones (which also deserve neutrality up to a certain point).
What he has done here is to remove medical science from the realm of deduction (no study should be conducted because it has ‘already been done’ or ‘was settled 50 years ago’) – moving us nominally to a purported process of induction. But he is not really using induction here at all either. What skeptic Novella is slipping by in this furtive exposé on skepticism is that the predictive strength of standing theory need no longer be strengthened by the process of iterative predictive confirmation. No actual scientific deduction or induction ever seems warranted in his small world – inference only meriting a twisted form of club-quality trained ‘finding’.
Take his chosen example on the right, regarding a disease which is one of a variety of poorly understood immune responses, can be complicated by a multifaceted appearance and nonspecific symptoms, mimics at least 8 other diseases, and requires more than clinical neurology to diagnose.2 He chooses habitually to handle this process of logical inference with knee-jerk abduction, in one single discussion – by a clinical technician in a non-related field. All because the patient used a bad word from the bad people. Perhaps Dr. Novella might be freed from his skeptic-community shackles here, and perform his job (or refer continued diagnosis to a nephrologist, rheumatologist, endocrinologist, hematologist, gastroenterologist, etc.) – if we took a page from pop star Prince’s notes and renamed this disease “The Disease Formerly Known as Chronic Lyme Disease”. Then perhaps this pseudoscientific spell he is operating under might be dispelled and actual science might get conducted. He is so fixated on a moniker and the propaganda surrounding the bad idea and the bad people, that he cannot let one scientific or medical thought enter his brain as a result.3 This is no different, in terms of process of inference, than pulling leeches out of a jar and setting them on the patient.
This is why astronomers and doctors make for the poorest skeptics. They mistakenly believe that the rest of life, indeed all other science, is as straightforward in the linear employment of one single process of inference, as is their discipline.
But alas, this sad play outlined above might have been half palatable had Dr. Novella actually applied even diagnostics – instead, the reality is that he has not even served diagnostic abduction here. Had he, the patient might have been helped with Lyme-disease-mimicking symptom treatment, and a suspension of disposition might have been warranted. He has neither offered the patient a clear pathway of diagnostic delineation, nor leveraged any diagnostic data to develop an inference. The only options left are either to jam the symptom set into another malady definition, or if no other suitable malady exists, to de facto proffer the diagnosis of hypochondria without saying as much. Novella simply, and in knee-jerk fashion, tells the patient that they are following a narrative from people he does not like, and to ‘use skepticism’. He has even failed the standard of abductive logical inference (see below). I have fired five doctors in differing circumstances, all of whom have done this to me in the past. It turned out that I was right to do so, all five times. In each case I on my own, or another less dogmatic doctor, found the actual solution – and the doctor in question turned out to be wrong. Take two skepticisms and don’t call me in the morning.
If your doctor ever does something like this to you, fire his ass quick. He is more concerned about a club agenda than he is science or your well being.
I am sure Dr. Novella’s patient’s suffering went away, probably with the patient leaving his practice (poetically offering him absolutely no feedback on his method, other than the inference on his part that he was right). With that being said, let’s examine these three types of reason, all of which Steven Novella failed in the example above.
The Valid Reasoning (Logical Inference) Types
/Abductive Reason (Diagnostic Inference)/ : a form of precedent-based inference which starts with an observation, then seeks to find the simplest or most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as inference to the best known explanation.4
Strength – quick to the answer. Usually a clear pathway of delineation. Leverages strength of diagnostic data.
Weakness – Uses the simplest answer (ergo most likely). Does not back up its selection with many key mechanisms of the scientific method. If an abductive model is not periodically tested for its predictive power, such can result in a state of dogmatic axiom. Can be used by those who do not wish to address clarity, value or risk, as an excuse to avoid undertaking the process of science; yet tender the appearance that they have done so.
Risk of Methodical Error: Moderate
plausible propter hoc ergo hoc solus (Plausible Deniability) – Given X, and Given X can cause, contribute to or bear risk exposure of Y, and Given Y’ ∴ X, and only X, caused Y’
Effect of Horizontal or Vertical Pluralistic Stacking: Whipsaw Error Amplification
Chief Mechanism: Occam’s Razor
“All things being equal, the simplest explanation tends to be the correct one.”
Two Forms of Abductive Reason
ex ante – an inference which is derived from predictive, yet unconfirmed forecasts. While this may be a result of induction, the most common usage is in the context of abductive inference.
a priori – relating to or denoting reasoning or knowledge that proceeds from methods and motivations other than science, which preexist any form of observation or experience.
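As a rough illustration of abductive (diagnostic) inference, the process can be sketched in a few lines of code: an observation is matched against a base of precedents, and the most likely known explanation wins. This is a minimal sketch only – the precedent table, prior weights and symptom sets below are entirely hypothetical and not drawn from any real diagnostic data.

```python
# A minimal sketch of abductive (diagnostic) inference: given an observation,
# select the most likely *known* explanation from a precedent base.
# The explanations, priors and symptom sets are hypothetical illustrations.

# Precedent base: explanation -> (prior likelihood, observations it accounts for)
PRECEDENTS = {
    "common cold":   (0.60, {"fatigue", "congestion"}),
    "influenza":     (0.30, {"fatigue", "congestion", "fever"}),
    "rare disorder": (0.01, {"fatigue", "fever", "joint pain"}),
}

def abduce(observed):
    """Return the best *known* explanation: the highest-prior precedent which
    accounts for every observation. Note the built-in weakness: an explanation
    absent from the precedent base can never be selected."""
    candidates = [
        (prior, name)
        for name, (prior, covers) in PRECEDENTS.items()
        if observed <= covers  # precedent must account for all observations
    ]
    if not candidates:
        return "no known explanation"  # abduction falls silent here
    return max(candidates)[1]          # the 'simplest / most likely' wins

print(abduce({"fatigue", "congestion"}))     # -> common cold
print(abduce({"joint pain", "congestion"}))  # -> no known explanation
```

Note how the method is quick to an answer, yet possesses no mechanism to detect that the true explanation might lie outside its precedent base – the dogmatic-axiom risk described in the Weakness above.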
/Inductive Reason (Logical Inference)/ : reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the conclusion of an inductive argument is merely probable, based upon the evidence given combined with its ability to predict outcomes.5
Strength – flexible and tolerant in using consilience of evidence pathways and logical calculus to establish a provisional answer (different from a simplest answer, however still imbuing risk into the decision set). Able to be applied in research realms where deduction or alternative falsification pathways are difficult to impossible to develop and achieve.
Weakness – can lead research teams into avenues of provisional conclusion bias, where stacked answers begin to become almost religiously enforced, until a Kuhn paradigm shift or the death of the key researchers involved is required to shake science out of its utility blindness on one single-answer approach. May not have examined all the alternatives, because of pluralistic ignorance or neglect.
Risk of Methodical Error: Moderate to Low
provisional propter hoc ergo hoc (Provisional Knowledge or House-of-Cards Knowledge) – Given provisionally known X, and Given X provisionally causes, contributes to or bears risk exposure of Y, and Given Y’ ∴ X, and provisionally for future consideration X, caused Y’
Effect of Horizontal or Vertical Pluralistic Stacking: Linear Error Amplification
Chief Mechanism: Consilience
“Multiple avenues of investigation corroborate a provisional explanation as being likely.”
Chief Mechanism: Predictive Ability
“A provisional model is successful in prediction, and as it is matured, its predictive strength also increases.”
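The predictive-ability mechanism above can likewise be sketched as code: a provisional model accrues support each time an independent prediction is confirmed across iterative trials. This is a minimal sketch under hypothetical trial data; the point is only that inductive support grows, yet never amounts to proof.

```python
# A minimal sketch of inductive support: a provisional model's standing is
# strengthened each time an independent prediction is confirmed. The trial
# data below are hypothetical illustrations.

def inductive_support(predictions, outcomes):
    """Return the fraction of confirmed predictions – a crude stand-in for
    the 'predictive strength' which induction stacks across iterations.
    Weakness: a long run of confirmations yields only a provisional
    (house-of-cards) likelihood, never certainty."""
    hits = sum(1 for p, o in zip(predictions, outcomes) if p == o)
    return hits / len(predictions)

# Iterative trials: the provisional model's predictions vs. observed outcomes
model_predictions = [1, 1, 0, 1, 0, 1, 1, 0]
observed_outcomes = [1, 1, 0, 1, 0, 1, 0, 0]

support = inductive_support(model_predictions, observed_outcomes)
print(f"provisional support: {support:.2f}")
# high support justifies retaining the model provisionally, not concluding it
```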
/Deductive Reason (Reductive Inference)/ : the process of reasoning by reduction in complexity, from one or more statements (premises), to reach a final, logically certain conclusion. This includes the instance where the elimination of alternatives (negative premises) forces one to conclude the only remaining answer.6
Strength – most sound and complete form of reason, especially when reduction of the problem is developed, probative value is high and/or alternative falsification has helped select for the remaining valid understanding.
Weakness – can be applied less often than inductive reason.
Risk of Methodical Error: Low
Effect of Horizontal or Vertical Pluralistic Stacking: Diminishing by Error Cancellation
Chief Mechanism: Ockham’s Razor
“Plurality should not be posited without necessity. Once plurality is necessary, it should be served.”
Chief Mechanism: Consensus
“Several alternative explanations were considered, and researchers sponsoring each differing explanation came to agreement that the remaining non-falsified alternative is most conclusive.”
And the astute ethical skeptic will perceive that this last quote relates to the true definition of consensus. Take note when abductive or inductive methods are employed to arrive artificially at consensus. Odds are that such a matching of sustained logical inference with science-communicator claims to ‘consensus’ in the media amounts to nothing but pluralistic – or worse, jackboot – ignorance.
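The elimination-of-alternatives pathway described in the deductive definition above can be sketched as code: a conclusion is tendered only when falsification has reduced the field to a single survivor. This is a minimal sketch; the hypothesis labels and falsification sets are hypothetical.

```python
# A minimal sketch of deductive inference by elimination: alternatives are
# falsified one by one, and a conclusion is drawn only when a single
# non-falsified alternative remains. The hypotheses are hypothetical labels.

def deduce_by_elimination(alternatives, falsified):
    """Return the sole surviving alternative, or None while plurality
    remains – deduction refuses to conclude ahead of the evidence."""
    survivors = [a for a in alternatives if a not in falsified]
    return survivors[0] if len(survivors) == 1 else None

alternatives = {"hypothesis A", "hypothesis B", "hypothesis C"}

print(deduce_by_elimination(alternatives, {"hypothesis A"}))
# -> None: two alternatives survive, so plurality must still be served

print(deduce_by_elimination(alternatives, {"hypothesis A", "hypothesis C"}))
# -> hypothesis B: the remaining non-falsified alternative is conclusive
```

The design choice mirrors the Ockham's Razor quote above: while plurality is necessary, the function serves it (returns None rather than forcing an answer).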
The Ethical Skeptic, “The Three Types of Reason”; The Ethical Skeptic, WordPress, 25 Jun 2017; Web, https://wp.me/p17q0e-6fD