Reduction: A Bias for Understanding

A bias for understanding is demonstrated no better than by an ethical researcher’s ability to say ‘I do not know.’ Beware of anyone who makes a claim to critical thinking, yet habitually shortcuts reducing the subject being assessed into its cause and risk constituents. Such persons are nothing but salesmen.

It is fully acceptable, nay it is scientific to say, ‘I don’t know.’ Inside ethical skepticism, one should first assume the disciplined epoché of ‘I don’t know’, until an examination of the process of reduction and critical path incremental risk progression is undertaken. This reveals one’s preference for understanding in lieu of fiat knowledge. In similar ethic, reduction is the process of disassembling a macroscopic object into its cause, effect and risk constituents. Reduction is essential to the establishment of mechanism, and mechanism is essential to the establishment of hypothesis. Reduction is a critical part of the first three steps of the scientific method: Observation, Intelligence and Necessity.

Is a rock nothing more than the assembly of quantum states, valences, electrons and bosons comprised by its lattice structure? Is a legal case nothing more than finding a guilty-appearing or unpopular party upon which to blame a tort? Is a paradox really a paradox, or are there more surreptitious contributors at play? A habit of reduction in approaching such mysteries reveals a bias for understanding on the part of the sincere researcher. Put another way, by Martyn Shuttleworth and Explorable (the green added by me):1

“Scientific reductionism is the idea of reducing complex interactions and entities to the sum of their constituent parts, in order to make them easier to study and subsequently become potential working elements of hypothesis or inference.”

Reduction is the path taken by the ethical skeptic who eschews abductive and panductive inference (the habit of social skeptics). Accordingly, the purpose of reduction is four-fold:

  1. Distinguish critical factors, risks & effects from merely influencing or irrelevant ones
  2. Identify the critical path of inquiry and possible inductive or deductive syllogism
  3. Detect the presence and eliminate the contribution of agency
  4. Establish robust study design/Mitigate testing or analytical noise.

There is a philosophy of science which cites that determining the answer to an asymmetric dilemma is relatively straightforward once one has successfully identified all the elemental contributing factors and their relationships. In absence of the process of scientific reduction, one cannot faithfully pursue the powerful forms of inference known as deduction and induction. One must instead resort to the appeal to authority of abduction, or the appeal to ignorance denials of panduction. Under this line of philosophy, the instinct to reduce a complex argument into its basic syllogistic elements, in advance of ‘assailing the facts’ (see Why Sagan is Wrong) or even hypothesis development, reveals a very ethical disposition on the part of the true skeptic: a bias for understanding.

Sir Arthur Conan Doyle was a fan of just such thinking, elicited no better than by four of his most famous quotes, expressed through his fictional persona, Sherlock Holmes:

“It has long been an axiom of mine that the little things are infinitely the most important.” – A Case of Identity

“Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” – The Sign of Four

What Arthur Conan Doyle has outlined in this second quote is the process of deduction. Eliminate the entire subset of plausibility which can be falsified, and what you have left is the truth – or the domain of truth at the very least. Arthur Conan Doyle certainly appreciated the role which reduction played inside the escapades of his most notable character, Sherlock Holmes. The reason being that reduction is necessary before one can undertake the processes which result in inductive and deductive inference. One can undertake abduction and panduction however, without any prior research or work – which is why fake skepticism prefers those methods of inference. Panduction and abduction involve a practice set against which Doyle’s famous detective character warns with utmost effulgence:

“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” – A Scandal in Bohemia

“There is nothing more deceptive than an obvious fact.” – The Boscombe Valley Mystery

Upon any semblance of depth in reading Arthur Conan Doyle, one would correctly infer that he possessed a bias for understanding. His pursuits involved subjects which were sure to piss off most social skeptics. Social skeptics typically fail to grasp this irony as well.2 But we will leave that topic for another time. Suffice to say, how could a person this objective and skeptical, rationally dare to venture into the ‘paranormal’? The principle behind this escapes fake skeptics to this very day. Beware of those who promulgate final answers, and then claim a defense of the ‘facts’. Science is much more than this. Therefore, the axiom of ethical skepticism proceeds as such:

One cannot conduct critical thinking nor craft a critical path of incremental risk of conjecture and testing,
without first reducing asymmetry into its series of cause and risk elemental bases.
Beware of anyone who makes a claim to critical thinking, yet habitually shortcuts reducing the subject being assessed.
Such persons are nothing but salesmen.

This process of disassembly of an asymmetrical object into its cause and risk series elements, is called the process of reduction. A bias for understanding is demonstrated no better than by a person who exhibits the patience and discipline to reduce a complex argument, before attempting to formulate a construct, and much less pretend to foist a conclusion about it.

The Reluctant Dowser

After sixteen years of living in my house, I had long since misplaced the location of my sprinkler control valves buried under the grass. They are a cluster of 3 electro-servo controlled water valves which my central control unit operates to automatically turn my sprinklers on and off during the spring and summer seasons of grass growth. One of them had malfunctioned, and the section of sprinkler heads operated by this electro-servo control had consequently failed to operate. I called upon a highly recommended local sprinkler repair specialist to come out and take a look. After testing the system under a couple different scenarios (also reduction itself), he confirmed “Yessir, your zone 2 valve has gone bad. Can you tell me where your sprinkler servo units are buried?” I just grimaced and shrugged in reply.

Now I have a rather large yard, where the water main arrives at a completely different entry point than the footprint of most of my grass. Poetically, the master control unit in my garage is also nowhere near the majority of my yard and sprinkler heads. Determining the location of the set of control valves was going to be a daunting task. Most often they are located in a bundle buried under the ground, but because of my yard configuration and size, this could not be assumed. The sprinkler tech offered to locate the valves for a fee of $75. He directed his son, who was working with him, to run to the truck and get a particular machine which is used to accomplish just such a task (e.g. Amazon: Armada Pro300 Sprinkler Valve Locator). The reason offered was that he would have to scan the entire set of possible locations for the valves, and eliminate the false positives which will inevitably be generated by underground wires/metal as well. This process would take some time. Therefore the fee.

Now, my grandfather had been a dowser. The process of finding well locations, septic tanks and other formations of underground water, was a service he had regularly provided to the family and neighbors back in the day. Dowsing was something I had been used to, as a robust and reliable form of underground water detection. It was not until I got so smart that I was able to enter graduate school, that I was instructed that dowsing was a form of ‘magical thinking’. But since I am a skeptic of imperious wisdom, and a reductionist at heart, I asked the sprinkler tech with a contemplative push of the bottom lip “Is there a chance perhaps that you might possess say, another more traditional way of finding water under the ground?” Cocking my head slightly while I spoke the word ‘traditional’. The sprinkler tech smiled and said “I might. But I typically don’t do that in neighborhoods like this, ’cause people start yelling at me and calling me names and then stop using my service. As long as no one’s gonna get bent out of shape over it, I can try and dowse this thing if you want.”

I wanted. He dowsed the yard for free. He walked perpendicularly from the property line with his two bent copper wires in hand, stopped mid yard, turned right, walked about 30 paces to a spot in the grass, then slammed his shovel head down right at his toes.

Shtaunk! He found the sprinkler servo-valves, in the middle of nowhere, in less than 3 minutes of work.

I tipped him and his son $30 for the entertainment, lesson and effort in tradition. I don’t know how it works, all I know is that it does work. As the sprinkler tech and his son departed, he shook my hand and said “By the way, those people who used to yell at me for proposing a dowse of their yard, I charged them $125 to use the machine.” He chuckled as he walked off.

The Skeptic’s Dictionary of course, promulgates its abductive and panductive pearls of wisdom, handed down to them by god (see ethical skepticism’s Definition of God), in order to warn against the use of the witchcraft of dowsing, as it could damn your soul to magical thinking hell. From The Skeptic’s Dictionary:3

Since dowsing is not based upon any known scientific or empirical laws or forces of nature, it should be considered a type of divination and an example of magical thinking. The dowser tries to locate objects by occult means.

Translated: ‘We don’t understand it, therefore it is evil‘.

Hmm… this seems familiar to me. I have run into this type of religious (non-reductive) thinking before, and it was not inside science. People who think like this often conceal more awesome insistence than simply the one subject at hand. They want to own your thinking, what you communicate, what you study, and your pursuits as well. Their focus is not the subject; their focus is you. Being mistaken at times bears less calamity than being under their auspices, trust me.

Being correct on 95% of one’s awesome insistence that others comply, stands as poor recompense for the harm one creates through the 5% instance in which one is wrong.

Of course a typical dowser would make no such claim, that dowsing is ‘locating objects by occult means’. I find it funny that, pseudo-skeptics will accept that a machine can find the location of water valves under the ground based upon established principles of science, and simultaneously then contend that a man with a simple device can only claim to accomplish the same thing via ‘occult means’. I smell a lack of reduction (panductive inference) at play. Most dowsers, including Albert Einstein, believe the effect to originate from natural phenomena.4 I don’t know the answer to this. All I know is that it worked for my grandfather, and it worked for my lawn sprinkler maintenance guy. These are no nonsense people – far from being ‘occultists’. I would call them reductionist science practitioners. I would not call the people at The Skeptic’s Dictionary anything but pseudo-skeptics. Dowsing saved me $45. The task now, is to figure out why it appears to consistently work – not start gas can banging and chest pounding about what it is, and what it is not.

Just because I ponder the potential efficacy and epistemology of something you don’t like, does not serve to make my thinking ‘magical’.

Sherlock Holmes comments about this type of crooked thinking, exhibited on the part of the panduction-minded authors at The Skeptic’s Dictionary:

In solving a problem of this sort, the grand thing is to be able to reason backwards. That is a very useful accomplishment, and a very easy one, but people do not practise it much. In the every-day affairs of life it is more useful to reason forwards, and so the other comes to be neglected. There are fifty who can reason synthetically for one who can reason analytically…Let me see if I can make it clearer. Most people, if you describe a train of events to them, will tell you what the result would be. They can put those events together in their minds, and argue from them that something will come to pass. There are few people, however, who, if you told them a result, would be able to evolve from their own inner consciousness what the steps were which led up to that result. This power is what I mean when I talk of reasoning backwards, or analytically. – A Study in Scarlet

Their habitual denial, the refusal to reduce certain phenomena which are robust, or which persist in our observational base, stems in essence from a lack of skill at scientific critical path logic. They do not possess the patience and disciplined method requisite in the disassembly of the asymmetrical aspects of dowsing into its series elements of risk and conjecture. They refuse to apply any form of logical critical path, and instead choose only to adorn themselves with the lab coats and accoutrements of science, while lazily declaring the answer a priori. Identity warfare, very much the same thing as virtue signaling.

The Critical Path Role of Reduction

Therefore, in light of Doyle’s penchant for ‘reasoning backwards’, let us define the role of reduction, and its place in the process of science, prior to the assembly of a construct or hypothesis. Reduction is the process of allowing one to see, scientifically. To unravel the factors which are salient to a result.

Reduction (Philosophy of Science)

/philosophy : science : critical path/ : the disassembly of asymmetry between logical objects such that each may be examined individually and in relation to their series contribution to the whole in terms of cause, effect and risk. The process of ex ante predicting or ex post observing the macroscopic characteristics of a logical or physical object by identifying and manipulating the characteristics and interplay of its microscopic components, ostensibly at the lowest level of inspection which can be defined. Reduction must be pursued before a process which may result in a claim to induction, deduction or assessment of risk can be successfully undertaken. Reduction reveals a bias for understanding.5


Now there are a variety of reductionist approaches inside the sciences, and they differ by the general subject. Reduction inside cosmological sciences is not the same thing as reduction inside the biological sciences for instance. Nor does pursuing reduction guarantee resolution of a paradox or inquiry. Reduction is a tool and not necessarily a panacea – which is why you will not see me using the word ‘reductionism’, as this form of extrapolation/equivocation abuse of the term serves to cause much confusion.6 The prevailing principle I use is – reduction is a bias for understanding, which even if academic, inaugurates the researcher into the intelligence domain they will need in order to develop novel approaches and potential hypotheses. You will find that, in short order, you appear to have deeper insights than even do the ‘experts’. Reduction involves testing, getting out into the field, being quiet, and the skill to observe. So, even if reduction is not prima facie effective, its exercise is beneficial nonetheless. The process I have employed in the past – in study, in life and in labs – involves the following steps of breaking down elements of cause, effect and risk (using a test of dowsing as the example):

1.  Elemental-ize – Identify all of the hard elements comprised by the asymmetric object.

eg. formation of water, earth, test human, search grid, depth, device, metal, weather, time of day, temperature, etc.

2.  Filter – Filter elements in critical dependency from merely influencing elements.

eg. formation of water, test human, device, metal.

3.  Identify Critical Path – establish a base critical path among those critical dependency elements:

eg. human to device to metal to water formation

4.  Sensitivity Test Influencing Elements/Regression – Alter all and only the influencing elements in series, without changing the primary elements in dependency, and observe the dependency effect in each alteration. Measure and record the effect for influencing relationships (sensitivity regression analysis).

eg. change location, grid, depth, time of day, human, etc.

5.  Control Test Dependency Elements/Arrival Distributions – Repeat 4 with a control version of the dependency elements – however eliminating those influencing elements which showed no effect in step 4.

eg. the same series of tests with NO formation of water actually present and with a change of device metal
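The five steps above can be sketched, very loosely, as code. Everything in this sketch is an invented illustration: the element names, the fake measurement function, and the effect sizes are placeholders standing in for real field observations, not a claim about how dowsing actually behaves.

```python
# A toy sketch of the five-step reduction exercise, using the dowsing test
# as the example. All names and the 'measurement' function are invented
# for illustration only.
import random

# Step 1: elemental-ize - enumerate the hard elements of the asymmetric object
elements = ["water_formation", "earth", "test_human", "search_grid",
            "depth", "device", "metal", "weather", "time_of_day", "temperature"]

# Step 2: filter - split critical-dependency elements from merely influencing ones
critical = ["water_formation", "test_human", "device", "metal"]
influencing = [e for e in elements if e not in critical]

# Step 3: identify the base critical path among the dependency elements
critical_path = ["test_human", "device", "metal", "water_formation"]

def run_trial(config):
    """Fake detection measurement - purely illustrative stand-in for a field test."""
    base = 1.0 if config.get("water_formation", True) else 0.0
    return base + random.gauss(0, 0.1)

random.seed(0)

# Step 4: sensitivity test - vary influencing elements one at a time, holding
# the dependency elements fixed, and record the magnitude of any effect
sensitivity = {}
for factor in influencing:
    low, high = (run_trial({"water_formation": True, factor: v}) for v in (0, 1))
    sensitivity[factor] = abs(high - low)

# Step 5: control test - repeat with the dependency element removed
# (no water formation actually present) to establish a reference distribution
control_signal = sum(run_trial({"water_formation": False}) for _ in range(100)) / 100
live_signal = sum(run_trial({"water_formation": True}) for _ in range(100)) / 100

print(f"control mean: {control_signal:.2f}, live mean: {live_signal:.2f}")
```

The point of the sketch is structural: only the gap between the live and control distributions, after the influencing factors have been regressed out, constitutes a candidate effect.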

One may discern here that, by reducing the dowsing issue to its critical and influencing-only elements and proceeding accordingly, one has achieved robustness in study design and provided sound basis for the possible development of valid inference. But there is another strength offered by this approach as well; that is, the identification of agency and its surreptitious contribution.

Detecting Agency and Pretend Reduction

Now that we have examined one possible method of reduction regarding dowsing, let’s use this same topic to inspect a scenario of pretend reduction. Will actual ethical study prove a significant effect for dowsing? I do not know the answer to this. Sadly we may never know, as too much agency is wound up inside such testing (see the problem of Club Quality). A small group of tests have been completed, which indicated significant results for some dowsers and ‘no differentiation from random’ for others (Wagner, Betz, and König, 1990 Schlußbericht 01 KB8602, Bundesministerium für Forschung und Technologie).7 In this study, self-purported dowsers were given a hit or miss shot at locating a pipe filled with water concealed under a barn floor. To the right, you can see the results graphic from that study; results which are used by skeptics to issue final disposition on the topic of dowsing. The employment of the ‘misses’ in the graphic to the right constitutes a form of study noise pseudoscience called torfuscation (Saxon for ‘hide in the bog’). I question a study design which tests the mechanism of how accurate a first try is (hits and misses in the graphic to the right) – as such a measure would contain a boat-load of noise.

The hits and misses in the graphic to the right bear a high risk of study noise. Either result can be merely a consequence of accident; however, the misses have a greater chance of being generated accidentally. It is not simply that this is too small a sample size. No provision for the contribution of noise, regardless of sample size, was made in the study. The misses were then used as evidence of absence. This is torfuscation. It is study fraud.
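The point that single-shot misses are weak evidence of absence can be illustrated with a toy binomial calculation. The numbers here are invented assumptions for demonstration (a hypothetical 60% per-attempt success rate versus a 10% chance baseline, ten attempts); they are not taken from the Wagner/Betz study.

```python
# Toy illustration, with invented numbers, of why misses are poor evidence
# of absence: even a genuinely skilled detector still produces plenty of them.
import math

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n = 10          # ten single-shot attempts per subject (assumption)
p_skill = 0.6   # hypothetical true per-attempt success rate
p_chance = 0.1  # hypothetical baseline rate of lucky guessing

# Probability a genuinely skilled subject still hits 6 or fewer of 10 attempts
p_many_misses = sum(binom_pmf(k, n, p_skill) for k in range(0, 7))
print(f"P(skilled subject hits 6 or fewer of 10): {p_many_misses:.2f}")
```

Under these assumed numbers, a real 60% skill still yields a majority-miss-looking record more than half the time; counting those records as disproof is precisely the cataloging of noise as data.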

This is not how scientific testing is done. Part of the objective of a study design is to conduct tests which serve to neutralize, or at the least mitigate, observation noise. The proper way to test this paradigm is to assess whether or not, in repetitive trials, good dowsers can consistently beat efficient pattern searches (JP 3-50 Search and Rescue grids or crime scene search patterns for example) in terms of success arrival distributions on a scale of time (τ). The next step would be to take those who showed very significant results, and conduct a time/grid testing series per the example of reduction I related above. Conducting repetitive regression analyses on alteration of influencing factors, upon signal detection, provides more reduction depth than does a single level hit or miss test. Instead, fake skeptics took partial results from the noise of a single level measure (hit or miss), bearing no control reference arrival distribution – a subset of reduction which tendered some convenient facts for their propaganda – and ran with a final answer. No wonder we are ignorant as a race of beings.
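The arrival-distribution comparison described above can be sketched as a small Monte Carlo simulation. All parameters are invented assumptions: a 10×10 search grid, a serpentine sweep as the efficient-pattern baseline, and a 'skilled searcher' modeled merely as a biased random walk — a stand-in for whatever a real dowser does, not a model of dowsing itself.

```python
# Hedged Monte Carlo sketch: compare arrival-time (tau) distributions of a
# systematic sweep vs a hypothetical 'skilled' searcher over repeated trials.
# Grid size, sweep pattern, and the biased-walk model are invented assumptions.
import random

SIZE = 10  # 10 x 10 search grid (assumption)

def serpentine_time(target):
    """Cells visited before a row-by-row serpentine sweep reaches the target."""
    x, y = target
    return x * SIZE + (y if x % 2 == 0 else SIZE - 1 - y) + 1

def biased_walk_time(target, skill, rng):
    """Random walk that drifts toward the target with probability `skill`."""
    pos = [0, 0]
    t = 0
    while tuple(pos) != target:
        t += 1
        axis = rng.randrange(2)
        if rng.random() < skill and target[axis] != pos[axis]:
            pos[axis] += 1 if target[axis] > pos[axis] else -1
        else:
            pos[axis] += rng.choice([-1, 1])
        pos[axis] = max(0, min(SIZE - 1, pos[axis]))  # stay on the grid
    return t

rng = random.Random(42)
trials = [(rng.randrange(SIZE), rng.randrange(SIZE)) for _ in range(500)]
sweep = sorted(serpentine_time(t) for t in trials)
walk = sorted(biased_walk_time(t, skill=0.8, rng=rng) for t in trials)

# Compare full arrival distributions (here, medians) rather than a single
# hit-or-miss outcome - this is the noise-mitigating measure proposed above
print("median sweep tau:", sweep[len(sweep) // 2])
print("median walk  tau:", walk[len(walk) // 2])
```

The design point is that repeated-trial distributions of τ carry information a one-shot hit/miss tally destroys; a claimed skill either shifts the whole distribution relative to the control search, or it does not.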

I found some people who are bad at dowsing (study noise), therefore dowsing is magical thinking. Who developed this study design – a college freshman? Or maybe James Randi? This is not science in the least, and reeks of ‘The JREF Million Dollar Challenge‘ type of idiocy.

Torfuscation

/philosophy : pseudoscience : study fraud : Saxon : ‘hide in the bog’/ : pseudoscience or obfuscation enacted through a Nelsonian knowledge masquerade of scientific protocol and study design. Inappropriate, manipulated or shallow study design crafted so as to obscure or avoid a targeted/disliked inference. A process, contended to be science, wherein one develops a conclusion through cataloging study artifice or observation noise as valid data. Invalid observations which can be parlayed into becoming evidence of absence or evidence of existence as one desires – by accepting only the appropriate hit or miss grouping one desires as basis to support an a priori preference, and as well avoid any further needed ex ante proof.  A refined form of praedicate evidentia or utile abstentia employed through using less rigorous or probative methods of study than are requisite under otherwise ethical science.  Exploitation of study noise generated through first level ‘big data’ or agency-influenced ‘meta-synthesis’, as the ‘evidence’ that no further or deeper study is therefore warranted – and moreover that research of the subject entailed is now socially embargoed.

Example: If you take the SAT a day on which you woke up severely ill, and you get a low score consequently, this is not evidence of your inadmissibility to quality universities. A fake skeptic will use such a circumstance to declare you dumb, by means of the ‘facts’.

Of course, fake skeptics like to make a final declaration of ‘pseudoscience’ as quickly and as shallowly into the data as is possible. They bear a habit of never saying ‘I do not know’, and a history of attempting to prevent such research from being done in depth, at any cost. This is not skepticism in the least, and smells of desperation – it is debunking, a form of pseudo-reduction. You will note consistency in this, as celebrity skeptics consistently undertake a lazier short-cut process of science when applied to topics or subjects which they have been assigned to discredit. This process of pseudoscience, called pseudo-reduction and otherwise known as debunking (a more detailed methodological outline of this activity can be found here: The New Debunker: Pseudo-Skeptic Sleuth), stands as one of the primary tactics of fake skepticism.

Pseudo-Reduction (Debunking)

/philosophy : pseudoscience/ : the non-critical path disassembly of a minor subset of logical objects as a pretense of examination of the whole. A process which pretends that a robust observation is already understood fully, and which consequently ventures only far enough into the reducible material to find ‘facts’ which appear to corroborate one of six a priori disposition buckets applied to any case of examination: Misidentification, Hoax/Being Hoaxed, Delusion, Lie, Accident, Anecdote. This process exclusively avoids any more depth than this level of attainment, and most often involves a final claim of panductive inference (falsification of an entire domain of ideas), along with a concealed preexisting bias.

Knowing what constitutes sound reduction, how it is applied and how to spot its exercise – stands as a key aspect of ethical skepticism. The ability to spot the faker and distinguish him from the one actually conducting science.

The Ethical Skeptic, “Reduction: A Bias for Understanding” The Ethical Skeptic, WordPress, 4 Oct 2018; Web, https://wp.me/p17q0e-8mj

The Three Types of Reason

Not all methods which seek to achieve some kind of benefit through the clear, value-laden and risk-abating processes of inference can be used in every circumstance. Most of science recognizes this. But when induction is used in lieu of deduction, or abduction is used in lieu of induction, when the higher order of logical inference could have been used – beware that pseudoscience might be at play.
Choosing the lower order of logical inference can be a method by which one avoids challenging alternatives and data, yet still tenders the appearance of conducting science. One can dress up in an abductive robe and tender all affectation of science; utter all the right code phrases and one-liners about ‘bunk’ – but an ethical skeptic is armed to see through such a masquerade.

There is this thing called logical inference. Simply put, logical inference is the process of taking observed premises and transmuting them into conjectures. Hopefully beneficial conjectures. Such a process usually involves risk. So, when we are challenged with the need to make some kind of benefit happen, say to alleviate a sickness, or fly from place to place, at times we must face risk in order to achieve such an advancement. The process of science involves a carefully planned set of steps, which allows us to bridge this gap between premise and robust conjecture by means of the most clear, value-laden and risk-abating pathway which we can determine.

In general, there are three rational processes (and a fourth commonly practiced but invalid one) by which we can arrive at a sought-after conclusion or explanation. Abductive, inductive and deductive reason – in order of increasing scientific gravitas and strength as developmental models of knowledge – constitute the three genres of thought inside which we mature information and methods of research, towards this end. In the three exhibits and finally comparison table below, you will observe the three genres of logical inference compared by the mechanism of science which it brings to bear as a strength. As you may glean through the four exhibits, the most expedient form of legitimate answer development comes in the form of abductive inference, while the most science-intensive form is deduction. As you move from left to right in the table below, the epistemological basis of the explanation increases commensurate with the rigor of research and discipline of thinking. Each ‘scientific mechanism’ is an element, step or feature of the scientific method which affords an increase in verity inside the knowledge development process. A blue check mark in the table below means that inference method provides or satisfies the science mechanism. An orange check mark denotes the condition wherein the inference method only partially provides for the scientific mechanism.

Constructive Ignorance (Lemming Weisheit or Lemming Doctrine)

/philosophy : skepticism : social skepticism/ : a process related to the Lindy Effect and pluralistic ignorance, wherein discipline researchers are rewarded for being productive rather than right, for building ever upward instead of checking the foundations of their research, for promoting doctrine rather than challenging it. These incentives allow weak confirming studies to be published and untested ideas to proliferate as truth. And once enough critical mass has been achieved, they create a collective perception of strength or consensus.

A more detailed comparison of these three forms of inference, along with a slew of notorious Nelsonian Inferences, may be found in The Ethical Skeptic’s Map of Inference.

When Knowledge is Not Necessarily the Goal

Deduction, therefore, is the most robust form of inference available to the researcher. Unfortunately however, not every inquiry challenge which we collectively face can be resolved by deductive methodology. In those instances we may choose to step our methodology down to induction as our means of resolving difficult-to-falsify research (or deescalate). Induction introduces risk into the deontological framework of the knowledge development process. It presents the risk that we become fixated upon one single answer for long periods of time; possibly even making such an explanation prescriptive in its offing – rendering our process of finding explanations vulnerable to even higher risk by introducing habitual abductive methods of logical inference.

The key is this: What is the ‘entity’ being stacked under each inference type, as complicated-ness increases or we conjecture further and further into a discipline featuring a high degree of unknown? (moving to the right on the graph to the right). Often the actual entity being stacked is either risk of error, or error itself – and not, as we misperceive, actual knowledge.

Science – ‘I learn or come to know’ : using deduction and inductive consilience to infer a novel understanding.

Deduct:  Conclusiveness – Benefit from falsified ideas is stacked (Understanding Evolves)

Induct:   Likeliness – Iterations or predictive trials are stacked (Understanding Matures)

Sciebam – ‘I knew’ : using abduction and panduction to enforce an existing interpretation.

Abduct:  Correctness – Assumptions are stacked (Understanding Codifies)

Panduct:  Doctrine – Everything but what my club believes, is correlated and falsified (Understanding Decays – an invalid form of inference)

As we stack entities, induction therefore is preferential over abduction and deduction is preferential over induction because of the accumulation of unacknowledged a priori error in each entity addition. Obviously, one should only seek to deescalate their method of logical inference when forced to do so by the logical framework or evidence available to the discipline. However, researcher beware. Choosing the lower order of logical inference can be a method by which one avoids challenging answers, yet still tenders the appearance of conducting science. We start first with a favorite trick of social skeptics – i.e. casually shifting to abductive diagnostic reason in instances where deductive discipline or inductive study are still critically warranted (see Diagnostician’s Error). A second trick can involve the appearance of science through the preemptive or premature intensive focus on one approach at the purposeful expense of necessary and critical alternatives; conjectures involving ideas one wishes to ignore at all costs (see The Omega Hypothesis). This is a furtive process called Methodical Deescalation. It is a theft of knowledge, by slow, sleight-of-hand. One can dress up in an abductive robe and tender all affectation of science; utter all the right code phrases and one-liners about ‘bunk’ – but an ethical skeptic is armed to see through such a masquerade.

Methodical Deescalation

/philosophy : pseudoscience : inadequate inference method/ : employing abductive inference in lieu of inductive inference when inductive inference could have, and under the scientific method should have, been employed. In similar fashion employing inductive inference in lieu of deductive inference when deductive inference could have, and under the scientific method should have, been employed. 

All things being equal, each mode of inference below is superior to the one listed before it:

  • Conformance of panduction (while a mode of inference, this is not actually a type of reasoning)
  • Convergence of abductions
  • Consilience of inductions
  • Consensus of deductions

One of the hallmarks of skepticism is grasping the distinction between a ‘consilience of inductions’ and a ‘convergence of deductions’. All things being equal, a convergence of deductions is superior to a consilience of inductions. When science employs a consilience of inductions, when a convergence of deductions was available, yet was not pursued – then we have an ethical dilemma called Methodical Deescalation.

For example, using magic tricks and magicians to point out the deceptive nature of the mind and observation (targeting some paranormal thing a skeptic does not like) – is abductive reason. The flaw in this favorite trick of social skeptics, as in the case where they wheel out The Amazing Randi for instance, is that if you were wrong, you would never even know it. You have no methodology of self-checking, induction or deduction. It is a trick of purposeful methodical deescalation. The true magic trick pulled on us all.

An example of countering and defusing Methodical Deescalation and neutralizing its resulting ignorance effect:

Earlier in my career I was brought into a research lab by an investment house to act as CEO of its research organization. The goals set before us were clear: re-organize, focus and streamline its research and development work, align its staff and strengths to the best-fit roles, and bring to fruition a belabored research critical path regarding a sought-after new discovery in material phase-transition lattice and vacancy structures. Without going into the technical nature of the work, which is covered under classification and non-disclosure agreements – we were successful in achieving the groundbreaking discovery in just under 4 months. This as opposed to the 18-month benchmark which had been established by the advising investment fund, and the 3 years of flailing around which had preceded. Set aside, of course, the risk that the course of art would prove unfruitful or dead-end in the first place. Stockholders, the board of directors, US Government/Military stakeholders, and the intellectual property and prior art patent-holders were ecstatic at the success. One element of approach which helped precipitate this success was to assign the right habit/method of inference to the right step in the process. We threw out several of the 'knowns' under which our research staff had been burdened, assigned fresh minds to the observation and critical-question sequences – then finally tested several procedures based upon understandings which were 'highly implausible'. In other words, we threw the value of risk-critical-path abductive inference out the window and began to test what it was we 'knew'. I took the abductive-minded researchers, the ones who instructed everyone as to the highly implausible nature of our thinking, and put them in charge of procedure, script-sheet development and Thermo-Fisher data integrity. This worked well.
It was a Friday afternoon at 3:45 pm when a tech came busting into our offices and reported that three of our test samples from our reactors showed 'anomalous results'. These results were small, but they were undeniable. They flouted the common wisdom as to what could be done with this material, in this phase state. We filed the provisional method, best mode, and device patents through our law firm within the next 14 days. All the credit went to the scientific researchers, all the money went to the investors, and I quietly went on to the next assignment. My name is not on any of the research. This is the way it works. Of course, the stockholders and fund kept me pretty busy doing the same thing over and over again for several years thereafter. They all remain loyal business colleagues to this day.

One cannot spend their life afraid of being found wrong. Wrongness is the titration indicator's color change which signals an advance in science. And those who invest their egos into conformance, who avoid taking a look so as not to be found wrong, who celebrate the correctness of the club – they are neither scientists nor skeptics at all. They are the fake ilk. Skepticism is more about asking the right question at the right time, and being able to handle the answer which results, than anything else.

Take Two Skepticisms and Don’t Call Me in the Morning

Another example of a circumstance wherein induction was applied in lieu of deduction – and ended up causing a consensus favoring an Omega Hypothesis – can be found in our history of research on the epidemiology of peptic ulcers. The desire to protect pharmaceutical revenues sustained an old 'answer to end all answers' – an answer which had become more important to protect than even science itself: the employment of acid-blockers as primary ulcer treatments. This dogmatic answer was promoted through propaganda, in lieu of the well-established deductive knowledge that the H. pylori bacterium was the cause of the majority of ulcers. This 40-year comedy of scientific corruption stands as a prime example of methodical deescalation, played by industry fake skeptics seeking to protect client market share and revenues.

Another example involves the case where dogmatic skeptics refuse to examine evidence, in favor of maintaining 50-year-old understandings of science which are backed by scant study, done long ago in questionable contexts and circumstances of bias. Such fake skepticism usually involves choosing the good people and the bad people first, then the good subjects and the bad subjects, followed by the implication that an enormous depth of study exists regarding the subject they dislike (when 95% of the time such is not the case at all). To the right you can see an example interview from MedPage Today wherein celebrity skeptic Steven Novella uses Diagnostic Habituation Error to fail to serve patients who come in and complain of a whole series of symptomatic suffering. It is clear that his 50-year-old science, his desire to stop the progress of scientific inquiry (especially medical), and his disdain for patients, researchers and doctors who carry a 'narrative' (this is science?) he does not like – are disturbingly and agency-confirmingly high (not the same thing as bias).1

Panduction

/philosophy : rhetoric : pseudoscience : false deduction/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop 'everything but what my club believes' as constituting one group of bad people, who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series of studies nor replication methodology can be derived from this type of 'study', which in essence serves to make it pseudoscience. This is a common 'study' format conducted by social skeptics masquerading as scientists, to pan people and subjects they dislike. An example of a panductive inference can be found here: Core thinking error underlies belief in creationism, conspiracy theories. It is not that the specifics inside what they are panning are right or wrong (and they pan a plethora of topics in this method) – it is the method of inference used to condemn which is pseudoscience. Even though I agree with many of their conclusions, I do not agree with the methodology by which they arrived at them. It is pseudoscience, and can be used to harm innocent subjects and persons, as well as the questionable ones (which also deserve neutrality, up to a point).

What he has done here is to remove medical science from the realm of deduction (no study should be conducted because it has 'already been done' or 'was settled 50 years ago') – moving us nominally to a purported process of induction. But he is not really using induction here at all, either. What skeptic Novella is slipping by in this furtive exposé on skepticism is that the predictive strength of standing theory need no longer be strengthened by the process of iterative predictive confirmation. No actual scientific deduction or induction ever seems warranted in his small world – inference only meriting a twisted form of club-quality trained 'finding'.

Take his chosen example on the right: a disease which is one of a variety of poorly understood immune responses, can be complicated by a multifaceted presentation and nonspecific symptoms, mimics at least 8 other diseases, and requires more than clinical neurology to diagnose.2 He chooses habitually to handle this process of logical inference with knee-jerk abduction, in one single discussion – by a clinical technician from a non-related field. All because the patient used a bad word from the bad people. Perhaps Dr. Novella might be freed from his skeptic-community shackles here, and perform his job (or refer continued diagnosis to a nephrologist, rheumatologist, endocrinologist, hematologist, gastroenterologist, etc.) – if we took a page from pop star Prince's notes and renamed this disease "The Disease Formerly Known as Chronic Lyme Disease". Then perhaps this pseudoscientific spell he is operating under might be dispelled, and actual science might get conducted. He is so fixated on a moniker, and the propaganda surrounding the bad idea and the bad people, that he cannot let one scientific or medical thought enter his brain as a result.3 This is no different, in terms of process of inference, from pulling leeches out of a jar and setting them on the patient.

This is why astronomers and doctors make for the poorest skeptics. They mistakenly believe that the rest of life, indeed all other science, is as straightforward in the linear employment of one single process of inference, as is their discipline.

But alas, this sad play outlined above might have been half palatable had Dr. Novella actually applied even diagnostics – instead, the reality is that here he has not even served diagnostic abduction. Had he done so, the patient might at least have been helped with treatment for the Lyme-disease-mimicking symptoms, and a suspension of disposition might have been warranted. He has neither offered the patient a clear pathway of diagnostic delineation, nor leveraged any diagnostic data to develop an inference. The only options left are either to jam the symptom set into another malady definition, or, if no other suitable malady exists – to de facto proffer the diagnosis of hypochondria, without saying as much. Novella simply, and in knee-jerk fashion, tells the patient that they are following a narrative from people he does not like, and to 'use skepticism'. He has even failed the standard of abductive logical inference (see below). I have fired five doctors in differing circumstances, all of whom have done this to me in the past. It turned out that I was right to do so, all five times. In each case I, on my own or through another less dogmatic doctor, found the actual solution – and the doctor in question turned out to be wrong. Take two skepticisms and don't call me in the morning.

If your doctor ever does something like this to you, fire his ass quick. He is more concerned about a club agenda than he is science or your well being.

I am sure Dr. Novella’s patient’s suffering went away, probably with the patient leaving his practice (poetically offering him absolutely no feedback on his method, other than inference on his part that he was right). With that being said, let’s examine these three types of reason, all of which Steven Novella failed in the example above.

The Valid Reasoning (Logical Inference) Types

.

Abductive Reason

/Diagnostic Inference/ : a form of precedent based inference which starts with an observation then seeks to find the simplest or most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as inference to the best known explanation.4

Strength – quick to the answer. Usually a clear pathway of delineation. Leverages strength of diagnostic data.

Weakness – Relies upon the simplest (ergo, most likely) answer. Does not back up its selection with many key mechanisms of the scientific method. If an abductive model is not periodically tested for its predictive power, such can result in a state of dogmatic axiom. Can be used by those who do not wish to address clarity, value or risk, as an excuse to avoid undertaking the process of science, yet tender the appearance that they have done so.

Risk of Methodical Error:  Moderate

plausible propter hoc ergo hoc solus (Plausible Deniability) – Given X, and given X can cause, contribute to, or bear risk exposure of Y, and given Y′, ∴ X, and only X, caused Y′

Effect of Horizontal or Vertical Pluralistic Stacking:  Whipsaw Error Amplification

Chief Mechanism: Occam’s Razor

“All things being equal, the simplest explanation tends to be the correct one.”

Two Forms of Abductive Reason

ex ante – an inference which is derived from predictive, yet unconfirmed forecasts. While this may be a result of induction, the most common usage is in the context of abductive inference.

a priori – relating to or denoting reasoning or knowledge that proceeds from methods and motivations other than science, which preexist any form of observation or experience.
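The abductive shortcut described above can be sketched as a toy program. This is purely my own illustration (the causes, effects, and likelihoods below are hypothetical, not from the article): "inference to the best known explanation" selects the single most likely precedent and silently discards every other non-falsified alternative – exactly the risk flagged under Weakness.

```python
# Toy sketch of abductive (diagnostic) inference -- hypothetical data.
def abduce(observation, precedents):
    """Return the most likely known cause that can explain the observation.

    precedents: dict mapping cause -> (set of effects it explains, prior likelihood)
    """
    candidates = [
        (likelihood, cause)
        for cause, (effects, likelihood) in precedents.items()
        if observation in effects
    ]
    if not candidates:
        return None  # the honest epoché: 'I do not know'
    # Occam-style shortcut: pick the single most likely explanation.
    # Every other non-falsified cause is silently discarded -- the
    # methodical risk the article warns of.
    return max(candidates)[1]

precedents = {
    "viral infection": ({"fatigue", "fever"}, 0.60),
    "bacterial cause": ({"fatigue", "fever", "joint pain"}, 0.25),
    "immune disorder": ({"fatigue", "joint pain"}, 0.15),
}

print(abduce("joint pain", precedents))  # prints: bacterial cause
```

Note that when no precedent matches, the sketch returns None rather than forcing a diagnosis – the disciplined 'I do not know' – whereas the Diagnostic Habituation Error criticized above would jam the observation into the nearest familiar malady.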

.

Inductive Reason

/Logical Inference/ : is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument may be probable, based upon the evidence given combined with its ability to predict outcomes.5

Strength – flexible and tolerant in using consilience of evidence pathways and logical calculus to establish a provisional answer (different from a simplest answer, however still imbuing risk into the decision set). Able to be applied in research realms where deduction or alternative falsification pathways are difficult to impossible to develop and achieve.

Weakness – can lead research teams into avenues of provisional conclusion bias, where stacked answers begin to become almost religiously enforced, until a Kuhn paradigm shift, or the death of the key researchers involved, is required to shake science out of its utility blindness on one single-answer approach. May not have examined all the alternatives, because of pluralistic ignorance or neglect.

Risk of Methodical Error:  Moderate to Low

provisional propter hoc ergo hoc (Provisional Knowledge or House-of-Cards Knowledge) – Given provisionally known X, and given X provisionally causes, contributes to, or bears risk exposure of Y, and given Y′, ∴ X, and provisionally for future consideration X, caused Y′

Effect of Horizontal or Vertical Pluralistic Stacking:  Linear Error Amplification

Chief Mechanism: Consilience

“Multiple avenues of investigation corroborate a provisional explanation as being likely.”

Chief Mechanism: Predictive Ability

“A provisional model is successful in prediction, and as it is matured, its predictive strength also increases.”
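The predictive-ability mechanism can likewise be sketched in a few lines. Again, this is a hypothetical illustration of mine, not a method from the article: a provisional model's standing is a function of its confirmed predictions, and its support grows as the track record matures – yet it remains probable, never certain.

```python
# Toy sketch of inductive support -- hypothetical trial data.
def inductive_support(predictions, outcomes):
    """Fraction of a provisional model's predictions confirmed by observation.

    Strong support keeps the conclusion provisional -- it is not proof,
    only an increase in predictive strength as the model matures.
    """
    hits = sum(1 for p, o in zip(predictions, outcomes) if p == o)
    return hits / len(predictions)

# A provisional model's forecasts across eight independent trials
predictions = [1, 1, 0, 1, 1, 0, 1, 1]
outcomes    = [1, 1, 0, 1, 0, 0, 1, 1]

support = inductive_support(predictions, outcomes)
print(f"predictive support: {support:.2f}")  # prints: predictive support: 0.88
```

The single missed prediction is the point: unlike the abductive shortcut, induction carries a built-in self-check, and a falling support score is a signal to revisit the provisional answer before it hardens into religiously enforced doctrine.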

.

Deductive Reason

/Reductive Inference/ : is the process of reasoning by reduction in complexity, from one or more statements (premises) to reach a final, logically certain conclusion. This includes the instance where the elimination of alternatives (negative premises) forces one to conclude the only remaining answer.6

Strength – most sound and complete form of reason, especially when reduction of the problem is developed, probative value is high and/or alternative falsification has helped select for the remaining valid understanding.

Weakness – can be applied less often than inductive reason.

Risk of Methodical Error:  Low

Effect of Horizontal or Vertical Pluralistic Stacking:  Diminishing by Error Cancellation

Chief Mechanism: Ockham’s Razor

“Plurality should not be posited without necessity. Once plurality is necessary, it should be served.”

Chief Mechanism: Consensus

“Several alternative explanations were considered, and researchers sponsoring each differing explanation came to agreement that the remaining non-falsified alternative is most conclusive.”
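The consensus mechanism just quoted – falsifying alternatives until one non-falsified explanation remains – can be given a minimal sketch. This is my own toy example, under the assumption of named placeholder hypotheses; it shows why deduction by elimination refuses to tender a conclusion while plurality remains.

```python
# Toy sketch of deduction by elimination of alternatives -- placeholder hypotheses.
def deduce_by_elimination(alternatives, falsified):
    """Remove every alternative that has been falsified by testing.

    Returns the sole surviving explanation, or None while plurality
    remains -- i.e. while 'I do not know' is still the sound answer.
    """
    survivors = set(alternatives)
    for hypothesis in falsified:
        survivors.discard(hypothesis)
    return survivors.pop() if len(survivors) == 1 else None

alternatives = {"hypothesis A", "hypothesis B", "hypothesis C"}

print(deduce_by_elimination(alternatives, ["hypothesis A"]))
# prints: None -- plurality remains, no conclusion is forced
print(deduce_by_elimination(alternatives, ["hypothesis A", "hypothesis C"]))
# prints: hypothesis B
```

Contrast this with the abductive shortcut: here nothing is concluded until the alternatives have actually been dispatched by falsification, which is why stacking deductions cancels error rather than amplifying it.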

 

And the astute ethical skeptic will perceive that this last quote relates to the true definition of consensus. Take note when abductive or inductive methods are employed to arrive artificially at a 'consensus'. Odds are that such a mismatch between the sustained logical inference and science-communicator claims to 'consensus' in the media amounts to nothing but pluralistic – or worse, jackboot – ignorance.

The Ethical Skeptic, “The Three Types of Reason”; The Ethical Skeptic, WordPress, 25 Jun 2017; Web, https://wp.me/p17q0e-6fD