Panduction: The Invalid Form of Inference

One key, if not the primary, form of invalid inference on the part of fake skeptics resides in the methodology of panductive inference. A pretense of Popperian demarcation, panduction is employed as a masquerade of science in the form of false deduction. Moreover, it constitutes an artifice which establishes the purported truth of a favored hypothesis by means of the extraordinary claim of having falsified every competing idea in one fell swoop of rationality. Panduction is the most common form of pseudoscience.

Having just finished my review of the Court’s definition of malice and oppression in the name of science, as outlined in the Dewayne Johnson vs. Monsanto Company case, my thinking broached a category of pseudoscience practiced by parties who share motivations similar to the defendant’s in that landmark trial. Have you ever been witness to a fake skeptic who sought to bundle all ‘believers’ into one big deluded group, all of whom hold or venerate the same credulous beliefs? Have you ever read a skeptic blog which claims a litany of subjects to be ‘woo’ – yet is fully unable to cite any evidence whatsoever which would serve to epistemologically classify that embargoed realm of ideas under such an easy categorization of dismissal? What you are witness to is the single most common, insidious and pretend-science habit of fake skeptics: panduction.

It’s not that all the material contained in the embargoed hypothesis realm has merit. Most of it does not. But what resides therein, even and especially in being found wrong, lies along the frontier of new discovery. You will soon learn on this journey of ethical skepticism that discovery is not the goal of the social skeptic; rather, it is exactly what they have been commissioned to obfuscate.

Science to them is nothing more than an identity which demands ‘I am right’.

There exist three forms of valid inference, in order of increasing scientific gravitas: abduction, induction and deduction. Cleverly downgrading science along these forms of inference, in order to avoid more effective inference methods which might reveal a disliked outcome, constitutes another form of fallacy altogether, called methodical deescalation. We shall not address methodical deescalation here, but rather a fourth common form of inference, one which is entirely invalid in itself. Panduction is a form of ficta rationalitas: an invalid attempt to employ critical failures in logic and evidence in order to condemn a broad array of ideas, opinions, hypotheses, constructs and avenues of research as being Popper-falsified, when in fact nothing of the sort has been attained. It is a method of proving yourself correct by impugning everyone and everything besides the idea you seek to protect, all in one incredible feat of armchair or bar-stool reasoning. It is often peddled as critical thinking by fake skeptics.

Panduction is a form of syllogism derived from extreme instances of Appeal to Ignorance, Inverse Negation and/or Bucket Characterization from a Negative Premise. It constitutes a shortcut attempt to promote one idea at the expense of all other ideas, or to kill an array of ideas one finds objectionable. Nihilists employ panduction, for example, as a means to ‘prove’ that nothing exists aside from the monist and material entities which they approve as real. They maintain the fantasy that science has proved that everything aside from what they believe is false by a Popperian standard of science – i.e. deduced. This is panduction.

Panduction

/philosophy : invalid inference/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop ‘everything but what my club believes’, as constituting one group of bad people who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series of studies nor replication methodology can be derived from this type of ‘study’, which in essence serves to make it pseudoscience. This is a common ‘study’ format conducted by social skeptics masquerading as scientists, in order to pan people and subjects they dislike.

There are three general types of Panduction. In its essence, panduction is any form of inference used to pan an entire array of theories, constructs, ideas and beliefs (save for one favored and often hidden one), by means of the following technique groupings:

  1. Extrapolate and Bundle from Unsound Premise
  2. Impugn through Invalid Syllogism
  3. Mischaracterize through False Observation

The first is executed by attempting to falsify entire subject horizons through bad extrapolation. The second involves poorly developed philosophies of denial. Finally, the third involves the process of converting disliked observations, or failures to observe, into favorable observations:

Panduction Type I

Extrapolate and Bundle from Unsound Premise – Bucket Characterization through Invalid Observation – using a small, targeted or irrelevant sample of linear observations to extrapolate and further characterize an entire asymmetric array of ideas other than a preferred concealed one. Falsification by:

Absence of Observation (praedicate evidentia modus ponens) – any of several forms of exaggeration or avoidance in qualifying a lack of evidence, logical calculus or soundness inside an argument. Any form of argument which claims a proposition consequent ‘Q’, yet features a lack of a qualifying modus ponens ‘If P then’ premise in its expression – rather, implying ‘If P then’ as its qualifying antecedent. This serves as a means of surreptitiously avoiding a lack of soundness or lack of logical calculus inside that argument, and moreover of enforcing only its conclusion ‘Q’ instead. A ‘There is no evidence for…’ claim made inside a condition of little study, or a full absence of any study whatsoever.

Insignificant Observation (praedicate evidentia) – hyperbole in extrapolating or overestimating the gravitas of evidence supporting a specific claim, when only one examination of merit has been conducted, insufficient hypothesis reduction has been performed on the topic, a plurality of data exists but few questions have been asked, few dissenting or negative studies have been published, or few or no such studies have indeed been conducted at all.

Anecdote Error – the abuse of anecdote in order to squelch ideas and panduct an entire realm of ideas. This comes in two forms:

Type I – a refusal to follow up on an observation or replicate an experiment, does not relegate the data involved to an instance of anecdote.

Type II – an anecdote cannot be employed to force a conclusion, such as using it as an example to condemn a group of persons or topics – but an anecdote can be employed however to introduce Ockham’s Razor plurality. This is a critical distinction which social skeptics conveniently do not realize nor employ.

Cherry Picking – pointing to a talking sheet of handpicked or commonly circulated individual cases or data that seem to confirm a particular position, while ignoring or denying a significant portion of related context cases or data that may contradict that position.

Straw Man – misrepresentation of an ally’s or opponent’s position or argument, or fabrication of such in the absence of any stated opinion.

Dichotomy of Specific Descriptives – a form of panduction wherein anecdotes are employed to force a conclusion about a broad array of opponents, yet are never used to apply any conclusion about self, or one’s favored club. Specific bad things are only done by the bad people, while very general descriptives of good apply when describing one’s self or club. Specifics on others who play inside disapproved subjects; general, nebulous descriptives on self-identity and how it constitutes acceptable ‘science’ or ‘skepticism’.

Associative Condemnation (Bucket Characterization and Bundling) – the attempt to link controversial subject A with personally disliked persons who support subject B, in an effort to impute falsehood to subject B and frame its supporters as whackos. Guilt through bundling association and lumping all subjects into one subjective group of believers. This will often involve a context shift or definition expansion in a key word as part of the justification. Spinning, for example, the idea that those who research pesticide contribution to cancer are also therefore flat Earthers.

Panduction Type II

Impugn through Invalid Syllogism – Negative Assertion from a Pluralistic, Circular or Equivocal Premise – defining a set of exclusive premises to which the contrapositive applies, and which serves to condemn all other conditions.

Example (Note that ‘paranormal’ here is defined as that which a nihilist rejects as being even remotely possible):

All true scientists are necessarily skeptics. True skeptics do not believe in the paranormal. Therefore no true scientist can research the paranormal.

All subjects which are true are necessarily not paranormal. True researchers investigate necessarily true subjects. Therefore to investigate a paranormal subject makes one not a true researcher.

All false researchers are believers. All believers tend to believe the same things. Therefore all false researchers believe all the same things.

Evidence only comes from true research. A paranormal investigator is not a true researcher. Therefore no evidence can come from a paranormal subject.

One may observe that the above four examples – thought which rules social skepticism today – are circular in syllogism, and can only serve to produce the single answer which was sought in the first place. By ruling out entire domains of theory, thought, construct, idea and effort, one has essentially panned everything except that which one desires to be indeed true (without saying as much). It would be like Christianity pointing out that every single thought on the part of mankind is invalid, except what is in the Bible – the Bible being the codification equivalent of the above four circular syllogisms into a single document.

Panduction Type III

Mischaracterize through False Observation – Affirmation from Manufacturing False Positives or Negatives – manipulating the absence of data or the erroneous nature of various data collection channels to produce false negatives or positives.

Panduction Type III is an extreme form of an appeal to ignorance. In an appeal to ignorance, one is faced with observations of negative conditions which could tempt one to infer inductively that there exists nothing but the negative condition itself. An appeal to ignorance simply reveals one of the weaknesses of inductive inference. Let’s say that I find a field which a variety of regional crow murders frequent. So I position a visual motion detection camera on a pole across from the field, in order to observe the crow murders which frequent that field. In my first measurement and observation instance, I observe all of the crows to be black. Let us further assume that I then repeat that observation exercise 200 times on that same field over the years. From this data I may well develop a hypothesis, including a testable mechanism, in which I assert that all crows are black. I have observed a large population size, and all of my observations were successful – to wit: I found 120,000 crows to all be black. This is inductive inference. Even though this technically would constitute an appeal to ignorance, it is not outside of reason to assert a new null hypothesis – that all crows are black – because my inference was derived from the research and was not a priori favored. I am not seeking to protect the idea that all crows are black simply because I or my club’s status are threatened by the specter of a white crow. The appeal to ignorance fallacy is merely a triviality in this case, and does not ‘disprove’ the null (see the Appeal to Fallacy). Rather, it stands as a caution that plurality should be monitored regarding the issue of all crows being black.

But what if I become so convinced that the null hypothesis in this case is the ‘true’ hypothesis – or even preferred that idea in advance, because I was a member of a club which uses a black crow as its symbol? In such a case I approach the argument with an a priori belief which I must protect. I begin to craft my experimental interpretation of measurement such that it conforms to this a priori mandate in understanding. This will serve to produce four species of study observation procedural error, which are in fact pseudoscience – the clever masquerade of science and knowledge:

A.  Affirmation from Result Conversion  – employing a priori assumptions as filters or data converters, in order to produce desired observational outcomes.

1.  Conversion by a priori Assumption (post hoc ergo propter hoc). But what if the field I selected bore a nasty weather phenomenon of fog, on an every-other-day basis? Further, this fog obscured a good view of the field, to the point where I could only observe the glint of sunlight off the crows’ wings, which causes several of them to appear white even though they are indeed black. But because I ‘know’ there are no white crows now, I use a conversion algorithm I developed to count the glints inside the fog and register them as observations of black crows – even though a white crow could also cause the same glint. I have created false positives by corrupted method.

2.  Conversion by Converse a priori Assumption (propter hoc ergo hoc – aka plausible deniability). Further then, what if I assumed that any time I observed a white crow, this would therefore be an indication that fog was present, and that a condition of Conversion by a priori Assumption was therefore in play? I would henceforth never be able to observe a white crow at all, finding only results which conform to the null hypothesis – which would now be an Omega Hypothesis (see The Art of Professional Lying: The Tower of Wrong).
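The two conversion rules above can be sketched as a small simulation. This is a purely hypothetical illustration (the fog schedule, crow counts and the 2% white-crow rate are invented numbers, not data from this essay): once every glint is recorded as a black crow, and every white sighting is ‘explained away’ as undetected fog, the record can never contain a white crow, no matter how many actually exist.

```python
import random

random.seed(42)
TRUE_WHITE_RATE = 0.02   # hypothetical: 2% of crows are actually white

def observe_crow(foggy):
    """Raw observation of one crow on the field."""
    is_white = random.random() < TRUE_WHITE_RATE
    if foggy:
        return "glint"                    # fog reduces any crow to a wing-glint
    return "white" if is_white else "black"

def convert(observation):
    """Type III-A result conversion via a priori assumptions."""
    if observation == "glint":
        return "black"    # rule 1: glints are assumed to be black crows
    if observation == "white":
        return "black"    # rule 2: a white sighting 'must mean' undetected fog
    return observation

records = []
for day in range(200):                    # 200 observation sessions
    foggy = (day % 2 == 0)                # fog on an every-other-day basis
    for _ in range(600):                  # ~600 crows per session
        records.append(convert(observe_crow(foggy)))

print(f"{records.count('white')} white crows recorded out of {len(records)}")
```

By construction the converted record contains zero white crows: the favored hypothesis can never be disturbed by observation, which is precisely what makes the method an Omega Hypothesis rather than science.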

Example: Viking Mars Lander Data Manipulation

Two Mars Viking Landers were sent to Mars, in part to study for signs of life. NASA researchers took soil samples the Viking landers scooped from the surface and mixed them with nutrient-rich water. If the soil had life, the theory went, the soil’s microbes would metabolize the nutrients in the water and release a certain signature of radioactive molecules. To their pleasant surprise, the nutrients were metabolized and radioactive molecules were released – suggesting that Mars’ soil contained life. However, the Viking probes’ other two experiments found no trace of organic material, which prompted the question: if there were no organic materials, what could be doing the metabolizing? So by assumption, the positive results from the metabolism test were dismissed as derivative from some other chemical reaction, which has not been identified to date. The study was used as a rational basis from which to decline further search for life on Mars, when it should instead have been deemed ‘inconclusive’ (especially in light of our finding organic chemicals on Mars in the last several months).1

B. Affirmation from Observation Failure Conversion – errors in observation are counted as observations of negative conditions, further then used as data or as a data screening criterion.

Continuing with our earlier example, what if on 80% of the days in which I observed the field full of crows, the camera malfunctioned and errantly pointed into the woods to the side, and I was fully unable to make observations at all on those days? Further then, what if I counted those non-observing days as ‘black crow’ observation days, simply because I had defined a black crow as being the ‘absence of a white crow’ (pseudo-Bayesian science) instead of being constrained to only the actual observation of an actual physical white crow? Moreover, what if, because of the unreliability of this particular camera, any observations of white crows it presented were tossed out, so as to prefer observations from ‘reliable’ cameras only? This too, is pseudoscience in two forms:

1.  Observation Failure as Observation of a Negative (utile absentia) – a study which observes false absences of data, or creates artificial absence noise through improper study design, and further then assumes such error to represent verified negative observations. A study containing field or set data in which there exists a risk that absences in measurement data will be caused by external factors which artificially serve to make the evidence absent, through risk of failure of detection/collection/retention of that data. The absences of data, rather than being filtered out of analysis, are fallaciously presumed to constitute bona fide observations of negatives. This is improper study design, which will often serve to produce an inversion effect (curative effect) in such a study’s final results. Similar to torfuscation.

2.  Observation Failure as Basis for Selecting For Reliable over Probative Data (Cherry Sorting) – when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data. Based upon the a priori and often subjective claim that the observation was ‘not reliable’. Ignores the probative value of the observation and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative, and can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).
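Continuing the camera example, the first failure-conversion form can be sketched in a few lines. The 80% failure rate, the 600 crows per day and the 2% white-crow rate are invented, illustrative numbers: booking malfunction days as ‘no white crow’ days manufactures a mountain of negative evidence which the valid observations alone do not support.

```python
import random

random.seed(7)
TRUE_WHITE_RATE = 0.02        # hypothetical true rate of white crows

days = []
for _ in range(200):
    if random.random() < 0.8:                 # camera fails on ~80% of days
        days.append(None)                     # no observation was possible
    else:
        white_seen = any(random.random() < TRUE_WHITE_RATE for _ in range(600))
        days.append(white_seen)

# utile absentia: a failed observation day is booked as a 'no white crow' day
negatives_utile_absentia = sum(1 for d in days if d is None or d is False)

# proper handling: failed days are filtered out, never counted as negatives
valid = [d for d in days if d is not None]
negatives_proper = sum(1 for d in valid if d is False)

failed = days.count(None)
print(f"failed observation days: {failed}/200")
print(f"'no white crow' days, failures counted as negatives: {negatives_utile_absentia}")
print(f"'no white crow' days, valid observations only: {negatives_proper}/{len(valid)}")
```

Under utile absentia the apparent negative count is dominated by instrument failure rather than by observation – an inversion which downstream statistical analytics cannot distinguish from real data.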

These two forms of converting observation failure into evidence in favor of a particular position are highlighted no better than in studies which favor healthcare-plan diagnoses over cohort and patient input surveys. Studies such as the Dutch MMR-Autism Statistical Meta-Analysis or the Jain-Marshall Autism Statistical Analysis failed precisely because of the two above fallacious methods regarding the introduction of data – relying only upon statistical analytics of risk-sculpted and cherry-sorted data, rather than direct critical path observation.

Example: Jain-Marshall Autism Study

Why is the 2015 Jain-Marshall Study of weak probative value? Because it took third-party, unqualified (health care plan) sample interpretations of absences (these are not observations – they are ‘lack-of’ observations – which are not probative data to an intelligence specialist, nor to a scientist – see pseudo-theory) from vaccinated and non-vaccinated children’s final medical diagnoses at ages 2, 3, and 5. It treated failures in the data collection of these healthcare databases as observations of negative results (utile absentia) – a data vulnerability similar to the National Vaccine Injury Compensation System’s ‘self-volunteering’ of information and its limitation of detection to within 3 years. This favors a bad, non-probative data repository simply because of its perception as being ‘reliable’ as a source of data. It fails to catch 99% of signal observations (Cherry Sorting), and there is good demonstrable record of that failure to detect actual injury circumstances.2

One might chuckle at the face-value ludicrousness of either Panduction Type III A or B. But Panduction Type III is regularly practiced inside peer-reviewed journals of science. Its wares constitute the most insidious form of malicious and oppressive fake science. One can certainly never expect a journalist to understand why this form of panduction is invalid, but certainly one should expect it of peer review scientists – those who are there to protect the public from bad science. And of course, one should expect it from an ethical skeptic.

The Ethical Skeptic, “Panduction: The Invalid Form of Inference” The Ethical Skeptic, WordPress, 31 Aug 2018; Web, https://wp.me/p17q0e-8c6

‘Anecdote’ – The Cry of the Pseudo-Skeptic

Intelligence professionals are trained to watch for the most reasonable terrifying answer – while science professionals are trained to develop the most comforting answer – that which is the simplest and most career-sustaining. Intelligence professionals regard no story as too odd, no datum as too small – and for good reason. These are the details which can make or break a case, or maybe save lives.

Fake skeptics rarely grasp that an anecdote is used to establish plurality, not a conclusion. Particularly in the circumstance where an anecdote is of possible probative value, screening these out prematurely is a method of pseudo-scientific data filtering (cherry sorting). A facade of appearing smart and scientific, when nothing of the kind is even remotely true.

‘Anecdote’ is not a permissive argument affording one the luxury of dismissing any stand-alone observation they disdain. Such activity only serves to support a narrative. The opposite of anecdote after all, is narrative.

“Nothing is too small to know, and nothing too big to attempt.” Or so goes the utterance attributed to William Van Horne, namesake of the Van Horne Institute, a quasi-academic foundation focusing on infrastructure and information networks. In military intelligence, or more specifically inside an AOC (area of concern) or threat analysis, one learns early in one’s career an axiom regarding data: ‘Nothing is too small’. This simply means that counter-intelligence facades are often broken or betrayed by the smallest of details, collected by the most unlikely of HUMINT (Human Intelligence) or ELINT (Electronic Intelligence) channels. In this context, observations are not simply vetted by the presumed reliability of their source, but also by the probative value which they offer inside the question at hand. If one rejects highly probative observations simply because one has made an assumption as to their ‘reliability’, this is a practice of weak intelligence/science. It is no coincidence that the people who actually and sincerely want to figure things out (intelligence agency professionals) maintain the most powerful processors and most comprehensive data marts of intelligence data in the world. Be wary of a skeptic who habitually rejects highly probative observations, ostensibly because of a question surrounding their ‘reliability’, and who subsequently pretends that their remaining and preferred data set is not cherry picked. Such activity is data skulpting – in other words, pseudoscience.

Intelligence is the process of taking probative observations and making them reliable – not, taking reliable information and attempting to make it probative.

A sixth sigma set of observations, all multiplied or screened by a SWAG estimate of reliability, equals – and always equals – a SWAG set of observations.

Should not science function based upon the same principles? I mean, if we really wanted to know things, why would science not adopt the methods, structures, and lingo of our most advanced intelligence agencies? This is the definition of intelligence after all – the ‘means of discovery’ – and a particular reason why I include Observation and Intelligence Aggregation, right along with establishing Necessity, as the first three steps of the scientific method. Or could it be that certain syndicates do not want specific things to be known in the first place? That the ‘science’ of these syndicates is less akin to an intelligence agency, and more akin to a club – activist-endorsing and virtue-signalling over corporate and Marxist agendas? In this article, we delve more specifically into the principle of anecdote, and how this rhetorical football plays into just such a method of deception.

The plural of anecdote, is data.

~ Raymond Wolfinger, Berkeley Political Scientist, in response to a student's dismissal of a simple factual statement by means of the pejorative categorization 'anecdote'1

The opposite of anecdote, is narrative.

Anecdote to the affirmation is data, while anecdote to the absence is merely a story. Deception habitually conflates the two.

A dismissal of ‘anecdote’, is every bit the cherry picking which its accuser decries.

~ The Ethical Skeptic

One should take note that RationalWiki incorrectly frames the meaning of anecdote as “use of one or more stories, in order to draw a conclusion.”2 This is a purposefully pedestrian and narrative-friendly framing of the definition of anecdote. In reality, an anecdote is used to establish plurality – not a conclusion. One specific function of social skepticism is to prevent plurality from broaching on critical topics, at any cost. No surprise here that they would also therefore seek to cite anecdote as being tantamount to an attempt at proof. In such a manner, all disdained data can be dismissed at its very inception – through a rhetorical artifice alone. Plurality can never become necessary under Ockham’s Razor.

Had the author of RationalWiki ever prosecuted an intelligence scenario, or directed a research lab (both of which I have done), they might have understood the difference between ‘conclusion’ and ‘plurality’ inside the scientific method. But this definition was easy, simplistic, and convenient to narrative-building methodology. Sadly, all one has to do in this day and age of narrative and identity is declare one’s self a skeptic, and start spouting off stuff one has heard in the form of one-liners. Below one will observe an example where celebrity skeptic Michael Shermer also fails to grasp this important discipline of skepticism and science (see The Real Ockham’s Razor).

But first, let us examine the definition of the term anecdote itself. Google Dictionary defines anecdote as the following (time graph and definition, both from Google Dictionary):3

Anecdote

/noun : communication : late 17th century: from French, or via modern Latin from Greek anekdota ‘things unpublished,’ from an- ‘not’ + ekdotos, from ekdidōnai ‘publish’/ : a short and amusing or interesting story about a real incident or person. An account regarded as unreliable or hearsay.

Anecdote Error

/philosophy pseudoscience : invalid inference/ : the abuse of anecdote in order to squelch ideas and panduct an entire realm of ideas. This comes in two forms:

Type I – a refusal to follow up on an observation or replicate an experiment, does not relegate the data involved to an instance of anecdote.

Type II – an anecdote cannot be employed to force a conclusion, such as using it as an example to condemn a group of persons or topics – but an anecdote can be employed however to introduce Ockham’s Razor plurality. This is a critical distinction which social skeptics conveniently do not realize nor employ.

Under this context of definition, below is a generally accepted footprint of usage of the term anecdote. One should take note of the luxuriously accommodating equivocal reach of this word, ranging very conveniently from ‘real incident’ all the way to ‘lie’.

Fake skeptics wallow and thrive in broadly equivocal terminology like this. Indeed, this shift in definition has been purposeful over time. Notice how the pejorative use of the term came into popularity just as the media began to proliferate forbidden ideas and was no longer controllable by the agency of social skepticism. Social Skeptics were enlisted to be the smart but dumb (Nassim Taleb’s ‘Intellectual-Yet-Idiot’ class) players who helped craft the new methods of thought enforcement, the net effect of which you may observe on the graph below:

Cherry Sorting: A Sophist’s Method of Screening Out Data they Find ‘Not Reliable’

Set aside, of course, the context of rumors and unconfirmed stories. Such things indeed constitute anecdote under the broad equivocal footprint of the term; however, this is not what is typically targeted when one screens out observations by means of the fallacious use of the term. Being terrified of an answer (in both existential and career-impact contexts) can serve to effect a bias – an imbalance which is critical in the a priori or premature assessment of an observation as being ‘unreliable’. Even the most honest of scientists can succumb to such temptation. Are you as a scientist going to include observations which serve to show that vaccines are causing more human harm than assumed? Hell no, because you will lose your hard-earned career in science. Thus, crafting studies which employ high-reliability/low-probative-value observations is paramount under such a reality. If you ensure that your data is reliable and that your databases are intimidatingly large (a meta-study for instance), then you are sure to appear scientific – even if you have never once spoken to a stakeholder inside the subject in question, nor conducted a single experimental observation.

Developing information of a probative nature naturally involves more expense, since more effort is required in its collection – which also frequently happens to render it more informative. Such information demands a greater level of subject matter expertise, effort, and often a set of logical skills as well. To retreat solely to an analytical position toward information, which avoids such risk/expertise/cost, rather than groom probative information and seek to mitigate its risk through consilience efforts and direct observation – such is the means by which society actively crafts false knowledge on a grand scale.

Intelligence professionals are trained to watch for the most reasonable terrifying answer – while science professionals are trained to develop the most comforting answer – that which is the simplest and most career-sustaining.

The bottom line is this: intelligence professionals – those who truly seek the answers from the bottom of their very being – are promoted in their careers for regarding data differently than weak scientific study or pseudo-skeptics do. The issue, as these professionals understand, is NOT that anecdotes are unreliable. All questionable, orphan or originating observations are ‘unreliable’ to varying degrees, until brought to bear against other consilience/concomitance – other data which helps corroborate or support their indication. This is the very nature of intelligence: working to increase the reliability and informative (probative) nature of your data.

One should guard against the circumstance wherein an appeal to authority de facto decides any assumption of reliability and informative ability on the part of an observation. One falsifying anecdote may be unreliable, but it is potentially also highly informative. Ethical skepticism involves precisely the art of effecting an increase in the reliability of one’s best or most highly probative data. Of course I do not believe with credulity in aliens visiting our Earth; however, if we habitually screen out all the millions of qualified reports of solid, clear and intelligent craft of extraordinary capability flying around in our oceans and skies – we are NOT doing science, and we are NOT doing skepticism. In such a play we become merely anecdote-data-skulpting pseudo-scientists. Furthermore, if we regard the people who are conducting such probative database assembly to be ‘pseudo-scientists’ – we are exercising nothing even remotely associated with skepticism.

A 70% bootstrap deductive observation is worth ten 90% bootstrap inductive ones…
If one examines only 90%-plus reliable data, one will merely confirm what one already knows.
Reliability can be deceiving when assumed a priori. It is logical inference through repeated observation, and not the bias of authority we bring to the equation, which must serve and accrue any assessment of reliability.

The habit of first assaying the reliability of an observation constitutes what is known inside the philosophy of skepticism as a ‘streetlight effect’ error, regardless of any presence or absence of bias. In intelligence, and in reality, the collection of probative evidence is always the first priority. No inference in science hinges on the accuracy/bias status of one point of evidence alone. Bias always exists. The key is to filter such influence out through accrued observation, not through another biased subjective assessment. Never prematurely toss out data you don’t like, or which just does not seem to fit the understanding. The inference itself is that which bears the critical path of establishing reliability. Reliability and probative value in data are established after the fact, through comparison and re-observation – not presumed during data collection. Assuming such things early in the data collection process constitutes the ultimate form of subjective cherry picking: cherry sorting. With cherry sorting, one will be sure to find the thing one sought to find in the first place, or to confirm the common understanding. Research awards will be granted. Careers will be boosted.
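The point that reliability is established after the fact, through accrued observation, can be made quantitative with a simple and purely illustrative Bayesian sketch (the 70%/30% channel reliabilities and the 1% prior are invented numbers): several independently ‘unreliable’ channels reporting the same thing move a skeptical prior substantially, whereas discarding each report up front as ‘anecdote’ moves it nowhere.

```python
def posterior(prior, n_reports, p_true=0.7, p_false=0.3):
    """Belief in a claim after n independent corroborating reports.

    p_true  = chance a channel reports the claim when it is real
    p_false = chance a channel reports the claim when it is not
    (0.7 and 0.3 are invented, illustrative reliabilities)
    """
    odds = prior / (1.0 - prior)
    likelihood_ratio = p_true / p_false
    odds *= likelihood_ratio ** n_reports     # each report multiplies the odds
    return odds / (1.0 + odds)

for n in range(6):
    print(f"{n} corroborating 70%-reliable reports -> posterior {posterior(0.01, n):.3f}")
```

Each individually weak report accrues; the reliability belongs to the accumulated inference, not to any single datum – which is why screening single data out a priori forfeits the inference entirely.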

[As a note: one of my intelligence professional associates who read this article emailed me and reminded me of the “I” community’s mission to focus on ‘capability and intent’, in addition to probative and reliable factors inside observation gathering. In contrast, capability and intent are assumptions which social skeptics bring to the dance already formed – a surreptitious method of already forming the answer as well. Capability and intent assumptions help the social skeptic justify their cherry sorting methodologies. It is a form of conspiracy theory spinning applied inside science. This is why ethical skepticism, despite all temptation to the contrary, must always be cautious of ‘doubt’ – doubt is a martial art which, if not applied by the most skilled of practitioners, serves only to harm the very goals of insight and knowledge development. Conversely, it is a very effective weapon for those who desire to enforce a specific outcome/conclusion.]

Be cautious therefore when ‘reliability’ constitutes a red herring, employed during the sponsorship stage of the scientific method. With today’s information storage/handling capacity and the relatively inexpensive nature of our computational systems, there exists no excuse for tossing out data for any reason – especially during the observation, intelligence and necessity steps of the scientific method. We might possibly apply a plug confidence factor to data, as long as all data can be graded on such a scale; otherwise an effort in evaluating reliability is a useless and symbolic gesture of virtue. Three errors which data professionals make inside Anecdote Data Skulpting, which involve this misunderstanding of the role of anecdote, and which are promoted and taught by social skeptics today, follow:
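The alternative named here – grading all data on a common confidence scale rather than discarding any of it – could be sketched as follows. This is a minimal illustration; the class, field names and the halve-the-remaining-distance corroboration rule are hypothetical assumptions of mine, not a method prescribed by the article:

```python
from dataclasses import dataclass

# A minimal sketch of grading every observation on a confidence scale
# rather than tossing any of it out. Nothing is discarded; reliability
# is accrued after the fact through corroboration.
@dataclass
class Observation:
    description: str
    confidence: float  # plug confidence factor, graded 0.0 to 1.0

    def corroborate(self) -> None:
        # each independent corroborating report halves the remaining
        # distance to full confidence (an arbitrary illustrative rule)
        self.confidence += (1.0 - self.confidence) * 0.5

catalog = [
    Observation("lab assay, tight protocol", confidence=0.9),
    Observation("field case report ('anecdote')", confidence=0.3),  # kept, not tossed
]

# A stand-alone 'anecdote' which later accrues three independent
# corroborations becomes more reliable than it began - without ever
# having been screened out during collection.
for _ in range(3):
    catalog[1].corroborate()

print(f"{catalog[1].confidence:.4f}")  # 0.3 -> 0.65 -> 0.825 -> 0.9125
```

The design choice the sketch embodies: reliability is a mutable attribute updated by later field work, never a gate applied at intake.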

Anecdote Data Skulpting (Cherry Sorting)

/philosophy : pseudo-science : bias : data collection : filtering error/ : when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data, based upon the a priori and often subjective claim that the observation was ‘not reliable’. This ignores the probative value of the observation and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative, and which can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).

‘Anecdote’ is not a permissive argument affording one the luxury of dismissing any stand-alone observation they desire. This activity only serves to support a narrative. After all, the opposite of anecdote is narrative.

1.  Filtering – to imply that information is too difficult to reduce/handle or make reliable, is unnecessary and/or is invalid(ated); and therefore filtering of erstwhile observations is necessary by means of the ‘skeptical’ principle of declaring ‘anecdote’.

Example:  This is not the same as the need to filter information during disasters.4 During the acute information-rush stage of a disaster, databases are often of little help – responses must be more timely than intelligence practices can support. But most of science, in fact pretty much all of it, faces no such urgent response constraint – save for an Ebola epidemic or something of that nature. What we speak of here is the circumstance wherein professionals purposely ignore data, at the behest of ‘third party’ skeptics – and as a result, science is blinded and emasculated.

The discovery of penicillin began as a mere anecdote. It took 14 years to convince the scientific community to even attempt a first test of the idea. Dr. Alexander Fleming’s accident, and the ensuing case study regarding the contamination of his Petri dishes with Penicillium mold, indeed constituted an observation of science, and not a mere ‘fantabulous story’ as skeptics of the day contended.5 In similar fashion, the anecdotes surrounding the relationship between H. pylori and peptic ulcers were rejected for far too long by a falsely-skeptical scientific community. The one-liner retort by social skeptics that ‘the science arrived right on time, after appropriate initial skepticism’ collapses to nothing but utter bullshit, once one researches the actual narratives being protected, the methods employed, and the familiar/notorious special interest (pharmaceutical) clubs involved.

Another example of anecdote data skulpting can be found here, as portrayed in celebrity skeptic Michael Shermer’s treatise on the conclusions he claims science has made about the afterlife and the concept of an enduring soul-identity. Never mind that the title implies that the book is about the ‘search’ (of which there has been paltry little) – rest assured that the book is only about the conclusion: his religion of nihilism.

In this demonstration of the martial art of denial, Michael exhibits the foible of data skulpting by eliminating 99% of the observational database – the things witnessed by you, nurses, doctors, your family and friends – through regarding every bit of it as dismissible ‘anecdote’. He retains for examination only the very thin margin of ‘reliable’ data which he considers to be scientific. The problem is that the data studies he touts are not probative – rather merely mildly suggestive forms of abductive reasoning, and not bases for actual scientific inference. Convenient rationalizing, which also just happens to support his null hypothesis – which just happens to be his religious belief as well. As an ignostic atheist and evolutionist, I too reject this form of pseudoscience.

For example, in the book Michael makes the claim that “science tells us that all three of these characteristics [of self and identity] are illusions.” Science says no such thing. Only nihilists and social skeptics make this claim – the majority of scientists do not support this notion at all.6

If one were to actually read the Libet and Haynes studies on free will and identity which Shermer famously touts, one would find that the authors claim their studies to be only mildly suggestive at best – in need of further development. Michael employs them instead as an appeal to authority. This is dishonest. The studies are good, but they are, at their very best, inductive. The difference here is that ethical skeptics do not reject material monist studies; nihilists however, through their habit of rejecting near-death studies and through ample hand waving, enforce a single-answer echo chamber and pseudoscience. They are forcing a single answer to conclusion, through cherry sorting during the data collection process.

As well, this boastful conclusion requires one to ignore, and artificially dismiss or filter out as ‘anecdote’, specific NDE and other studies which falsify or countermand Shermer’s fatal claim to science.7

Yes, these case studies are difficult to replicate – but they were well conducted, are well documented, and were indeed falsifying (the strongest and most probative basis for inference). And they are science. So Shermer’s claim that ‘science tells us’ is simply a boast. Dismissing these types of studies because they cannot be replicated, or because of a Truzzi Fallacy of plausible deniability, or because they ‘lacked tight protocols’ (a red herring foisted by Steven Novella) – is the very essence of the cherry sorting we decry in this article: dismissing probative observations in favor of ‘reliable’ sources. In the end, all this pretend science amounts to nothing but a SWAG (scientific wild-ass guess) in terms of accuracy of conclusion. Something real skeptics understand, but fake skeptics never seem to grasp.

Under any circumstance, certainly not a sound basis for a form of modus ponens assertion containing the words ‘science tells us’. Beware when a person denies, based merely upon such a boast of authority.

Moreover, one must pretend that the majority of scientists really do not count in this enormous boast as to what ‘science tells us’. In Anthropogenic Global Warming science, the majority opinion stands tantamount to a final conclusion. In the case of material monism however, suddenly scientific majorities no longer count, because a couple of celebrity skeptics do not favor the ‘wrong’ answer. Do we see the intimidation and influencing game evolving here? The graph below, from a 2009 Pew Research survey of scientists, illustrates that in actuality the majority of scientists believe in a ‘god’ concept. That stands as a pretty good indicator on the identity-soul construct itself, as many people believe in a soul/spirit-identity, but not necessarily in the gods of our major religions. I myself fall inside the 41% below (although I do not carry a ‘belief’, rather simply a ‘lack of allow-for’), and I disagree with Shermer’s above claim.8

Pew Research: Scientists and Belief, November 5, 2009

In similar fashion – through habits one can begin to observe and document – fake skeptics block millions of public observations as ‘anecdotes’; for instance regarding grain toxicity, skin inflammation, chronic fatigue syndrome, autism, etc., either by calling those observers liars, or by declaring that ‘it doesn’t prove anything’. Of course they do not prove anything. That is not the step of science these sponsors are asking be completed in the first place.

2.  Complexifuscating – expanding the information around a single datum of observation such that it evolves into an enormously complicated claim, a story expanded to the point of straw man, or an added piece of associatively condemning information. These three elements are all key signature traits which ethical skeptics examine in order to spot the tradecraft of fake skeptics: observation-versus-claim blurring, straw man fallacy and associative condemnation.

Example:  Michael Shermer regularly dismisses as ‘anecdote’ instances wherein citizen observations run afoul of the corporate interests of his clients. This is a common habit of Steven Novella as well. The sole goal involved (the opposite of anecdote is narrative) is to obfuscate liability on the part of their pharmaceutical, agri-giant or corporate clients. This is why they run to the media with such ill-formed and premature conclusions as the ‘truth of science’. In the example below, Michael condemned an individual’s medical case report – rather than treating such reports as cases for an argument of plurality and study sponsorship (there are literally hundreds of thousands of similar cases). Here he converts it to a stand-alone complex claim, contending ‘proof of guidelines for nutrition’ (dismissed by cherry sorting), and associatively condemns it by leveraging social skeptic bandwagon aversion to the ‘Paleo Diet’. I assure you that if the man in the case study below had lost weight by being a vegan, Michael would never have said a word. All one has to do is replicate this method – straw man other similar reports and continually reject them as anecdote – and one will end with the de facto scientific conclusion sought at the outset.

Of course I am not going to adopt the Paleo Diet as authority from just this claim alone – but neither have I yet seen any good comparative cohort studies on the matter. Why? Because it has become a career-killer issue – by the very example of celebrity menacing provided below – pretending to champion science, but in reality championing political and corporate goals:

This is not skepticism in the least. It is every bit the advocacy which could also be wound up in the man’s motivation to portray his case. The simple fact is that, if one can set aside one’s aversion to cataloging information from the unwashed and non-academic, it is exactly these stories of family maladies, personal struggles, and odd and chronic diseases which we should be gathering as intelligence – especially if the story is as ‘complicated’ as Michael Shermer projects here. Such case stories help us rank our reduction hierarchy and sets of alternatives, afford us intelligence data (yes, the plurality of anecdote is indeed data) on the right questions to ask – and inevitably will lead us to impactful discoveries which we could never capture under linear, incremental and closed, appeal-to-authority-only science – methods inappropriately applied to asymmetric, complex systems.

An experiment is never a failure solely because it fails to achieve predicted results. An experiment is a failure only when it also fails adequately to test the hypothesis in question, when the data it produces don’t prove anything one way or another.

~ Robert Pirsig, American writer, philosopher and author of Zen and the Art of Motorcycle Maintenance: An Inquiry into Values (1974)

In similar regard, and as part of a complexifuscation strategy in itself, Michael Shermer in the tweet above misses the true definition of anecdote, and instead applies it as a pejorative inside a scientific context. The true meaning is a case which fails to shed light on the idea in question – not the distinguishing criterion that the case ‘stands alone’, lacking sponsorship by science. All data can easily be made to appear ‘stand-alone’, if one’s club desires this to be so.

3.  Over Playing/Streetlight Effect – to assume that, once one has bothered to keep and analyze data, one now holds some authoritative piece of ‘finished science’. The process of taking a sponsorship level of reliable data and, in the absence of any real probative value, declaring acceptance and/or peer review success. A form of taking the easy (called ‘reliable’) route to investigation.

Streetlight Effect

/philosophy : science : observation intelligence/ : a type of observational bias that occurs when people only search for something where it is easiest to look.

Example:  A study in the March 2017 Journal of Pediatrics, as spun by narrative promoter CNN9, incorrectly draws grand conclusions from data on 3- and 5-year-old children, derived from questionnaires distributed to cohorts of breast-feeding and bottle-feeding Irish mothers. While useful, the study is by no means the conclusive argument which CNN touted it as being (no surprise here); it is rendered vulnerable to selection, anchoring and inclusion biases.

This is an example of an instance wherein the reputed reliability of the traditional observation method was regarded as high, but the actual probative value of the observations themselves was low. We opted for the easier route in collection and inference. This case was then further touted by several advocacy groups as constituting finished science (pseudo-theory) – an oft-practiced bias inside the data-only medical study cabal.

Not to mention the fact that other, more scientific alternative study methods, based upon sounder observational criteria, could have been employed instead. Data is easy now – so let’s exploit ‘easy’ to become poseurs at science, use our interns to save money – all the while filtering out the instances where ‘easy’ data might serve to promote an idea we do not like.

Inference drawn from a single linear inductive or shallow data study hinges on the constraint, assumption and inclusion biases/decisions made by the data gatherers and analysts involved in that study. Such activity is no better than an anecdote in reality, when one considers the simple notion that both cohorts are going to spin their preferred method of feeding – nay, their child – as being/performing superior. All this is no better than anecdote itself – and as such is acceptable for inclusion to establish plurality, yes – but is by no means the basis for a conclusion.

Anecdote is not tantamount to conclusion. However, neither is its exclusion.

~ The Ethical Skeptic

And conclusions are what fake skeptics are all about.  Easy denial-infused conclusions. Easily adopted and promulgated conclusions. Because, that is ‘skeptical’.

The Ethical Skeptic, “’Anecdote’ – The Cry of the Pseudo-Skeptic” The Ethical Skeptic, WordPress, 7 Jan 2018, Web; https://wp.me/p17q0e-6Yx

Garbage Skepticism: The Definition

The role of those who identify as ‘skeptic’ is to act in lieu of science in tendering and rigorously and openly enforcing provisional personally preferred conclusions and beliefs. Bullshit. Skepticism is more about asking the right question at the right time, and being able to handle the answer which results – than anything else.
A skeptic does not ‘apply science and reason’ – Rather, a sincere researcher employs skepticism.

Critical thinking: it is the watchword of the scientifically and skeptically minded. A call to arms on the part of those who seek to ensure that our bodies of knowledge are infused with an indignant form of immunity regarding the bunk, pseudoscience, woo and credulousness proffered by the unwashed masses of believers. Its incumbent and implied skepticism – indeed the preamble held by the fraternity which views itself as the keeper of the Grail of science – is codified by celebrity skeptic Steven Novella.

The following definition is brought to you by a man who does not appear to know what a p-value is, cannot consistently define correlation, and habitually mis-frames the methods of science so as to favor and dis-favor subjects according to his club’s likes and dislikes (under the guise of ‘scientific’ reason). But we take his word on skepticism, in exemplary credulousness. Yes, celebrity ‘skeptic’ Steven Novella pretty much sums up the whole fake skepticism movement below. His preferred definition’s codification of abductive logical inference, as it contrasts with ethical (scientific) skepticism, follows thereafter. (Please note, I refer to him as Dr. Novella inside issues of the neurosciences, but in regard to issues of deontology, we are simply peers.)

Novella’s New Clothes

A skeptic is one who prefers beliefs and conclusions that are reliable and valid to ones that are comforting or convenient, and therefore rigorously and openly applies the methods of science and reason to all empirical claims, especially their own.

A skeptic provisionally proportions acceptance of any claim to valid logic and a fair and thorough assessment of available evidence, and studies the pitfalls of human reason and the mechanisms of deception so as to avoid being deceived by others or themselves.

Skepticism values method over any particular conclusion. 1

Yes, Novella’s definition can slip by the sensibility litmus of most persons; specifically because it contains socially charged popular phrases, crafted in a type of academic/sciencey, doctor didactic, believe-no-bullshit-sounding milieu of authority. But one must understand, that often such critical virtue signalling constitutes no more than a desire to push a preferred personal cosmology, one not actually vetted by science, via means of Appeal to Skepticism. More specifically, an inverse negation fallacy.

If you have ever seriously pursued a scientific discovery, patent filing or feat of novel engineering – life accomplishments which bear enough difficulty in realization that you begin to garner a wisdom of how such nascence works – this ilk of quick and shallow definition from the more-critical-thinker-than-thou begins to grate on one’s soul. To the ethical skeptic, the tenderfoot mind replete with its procedural or conceptual credulousness, is not nearly as alarming nor infuriating as is the curmudgeonly old mind, seeking to ascetically enforce its preferred model of methodical cynicism.

In order to understand why this definition is agenda-flawed, we must first understand the game of the fake skeptic, wound up inside a tactic of cultivated ignorance called methodical deescalation.

When the Continuance of Knowledge is Not Necessarily the Goal: Use Preferential Abduction

Deduction is the most robust form of inference available to the researcher. The provisional methods of shortcut inference – along with the fact that conclusion is forced prematurely to begin with (see The Real Ockham’s Razor) – give rise to the five principal errors which are plied by the fake skeptic in their authoritative role representing ‘method’, inside the definition framed above by Steven Novella:

Error 1.  Force to Conclusion – the forcing of a conforming answer when no conclusions may even be warranted.

Error 2.   Skepticism in Lieu of Science – skepticism is never to be employed by a casual thinker in lieu of science – it is a discipline of the mind when one prepares to conduct actual science (not pretend science).

Error 3.  Methodical Deescalation of Rigor – deescalate a deductive or inductive challenge to abductive diagnostic inference – when this is an erroneous approach.

Error 4.  Social Inertia – the failure to recognize the negative whipsaw effect of forced, ideologue-driven, provisional or diagnostic abduction authority through society and media structures; ultimately polluting the deontological process of knowledge development (black accrued error curve in the graphic to the right).

Error 5.  Risk Amplification – the failure to recognize the risk gain leveraging effect of multiple stacked provisional or diagnostic abduction inferences on the deontological process of knowledge development (see The Warning Indicators of Stacked Provisional Knowledge).

Researcher beware, as the Novella definition above implies abduction as the method of skepticism. Choosing a lower order of logical inference such as abduction can be a method by which one avoids challenging answers, yet still tenders the appearance of conducting science. But there is a cost in the progression of mankind’s understanding which arrives at the heels of such errant methods of skepticism.

We highlight here first a favorite trick of social skeptics – i.e. employing abductive reason in instances where deductive discipline or inductive study are warranted (see Diagnostician’s Error). A second trick can involve the affectation of science through an intensive focus on one approach, at the purposeful expense of necessary and critical alternatives (see The Omega Hypothesis). Both tricks result in an erosion of understanding on the part of mankind; something I refer to as Cultivated Ignorance – a condition inside of which one cannot gauge empirical risk, and which cannot be distinguished from social conformance (see Contrasting Deontological Intelligence with Cultivated Ignorance). One can dress up in an abductive robe and tender an affectation of science – but an ethical skeptic is armed to know otherwise (see The Tower of Wrong: The Art of the Professional Lie).

Methodical Deescalation

/philosophy : pseudoscience : inadequate inference method/ : employing abductive inference in lieu of inductive inference when inductive inference could have, and under the scientific method should have, been employed. In similar fashion employing inductive inference in lieu of deductive inference when deductive inference could have, and under the scientific method should have, been employed.

All things being equal, each form of inference below is superior to the one listed above it:

  • Conformance of panduction (while a type/mode of inference, this is not actually a type of reasoning)
  • Convergence of abductions
  • Consilience of inductions
  • Consensus of deductions

One of the hallmarks of skepticism is grasping the distinction between a ‘consilience of inductions’ and a ‘convergence of deductions’. All things being equal, a convergence of deductions is superior to a consilience of inductions. When science employs a consilience of inductions, when a convergence of deductions was available, yet was not pursued – then we have an ethical dilemma called Methodical Deescalation.

The Pretend Definition

A skeptic

First, an authentic skeptic does not identify themselves as ‘a skeptic.’2  To do so raises the specter of bias and agenda before one even begins to survey the world around us all. Skepticism is something an active researcher employs inside the method of science; it is not something you are. Why? Because of two very important laws of human nature, which those who apply real skepticism understand, and fake skeptics do not get:

Neuhaus’s Law

/philosophy : skepticism : fallacies/ : where orthodoxy is optional, orthodoxy will sooner or later be proscribed. Skepticism, as a goal in and of itself will always escalate to extremism.

Goodhart’s Law of Skepticism

/philosophy : skepticism : fallacies/ : when skepticism itself becomes the goal, it ceases to be skepticism.

is one who prefers

A person who practices skepticism does not prefer anything. In fact, a true skeptic finds satisfaction in proving his biased preferences wrong. ‘Wrongness’ resides at the heart of scientific integrity, and a true researcher celebrates the value of something or some idea being found wrong. A person who practices skepticism defends a knowledge development process which is consistent with the ethical practices of science. He or she finds integrity in the absence of the ‘prefer’.

beliefs and conclusions

A person who practices skepticism does not hold beliefs, and must be forced into conclusion by falsified conjecture – rather, they recognize the valid outcomes which have arisen as a result of sound scientific method. Nothing else. Beliefs and conclusions are for the religious among us: those seeking to promote a pre-cooked cosmology and block the ethical actions of sciences they do not like (see ‘prefer’ above).

that are reliable and valid

A principal error of fake skeptics is the propensity to prefer what they deem to be ‘reliable’ information over probative observations – what they call ‘anecdotes’, or anything which can otherwise be cherry sorted out of existence through some fallacious mechanism of fake skepticism. Real science, like real intelligence, takes probative observations and conducts corroborative follow-up field work to increase the reliability of inference drawn from them. That is what constitutes validity – not an exercise in how authoritative one’s a priori knowledge is. Such pretend a priori confidence falls under the error of the ‘streetlight effect’. Real science does not take a subjective SWAG of a reliability screening factor and multiply analytical data and observations by it – because all one gets from that method of science is a highly risky and SWAG-generated answer in the end, surrounded by lots of made-up numbers and intimidating-in-appearance databases. This is a cheap, hide-in-your-clinical-neurologist-office-from-8:45-am-till-4:50-pm, write-articles way to do science, but not a very effective (nor, ironically, reliable) one.

to ones that are comforting or convenient

In this statement, the one who has identified themself as a ‘skeptic’ has made the claim that any attestation outside what they personally hold to be ‘reliable and valid’ is the result of emotional or easy pathways of philosophy or verity. This is both a bifurcation (my way or the highway) and a rather extraordinary claim, implicit in this poorly crafted, amphibolous and equivocal expression. Everyone besides me composes an entire realm of seething, mindless, moaning, religiously orgasmic protoplasm. How wonderful I am (you will notice that the promotion of self is key inside fake skepticism)!

ideam tutela – agency. A questionable idea or religious belief which is surreptitiously promoted through an inverse negation. A position which is concealed by an arguer because of their inability to defend it, yet is protected at all costs without its mention – often through attacking without sound basis, every other form of opposing idea.

and therefore rigorously and openly applies the methods of science and reason

And there you have it: The job of skepticism is to act in lieu of science to tender and enforce as reason, personal provisional conclusions. Sophomoric and incorrect philosophy. Amazing that this person ever successfully defended a dissertation (see The Riddle of Skepticism).

A skeptic does not ‘apply science and reason’ – A researcher employs skepticism. Grasping this understanding is key to discerning sciencey-sounding chicanery from ethical research.

Reason is not an a priori art. See Rationality is Not What False Skeptics Portray.

Implicit inside this statement is the provision wherein, if one does not want to go through the bother of using the methods of science in order to derive a conclusion, then the magic of ¡reason! can be used instead (as equally valid as the scientific method). Therefore one can also sit in one’s university office, or basement, or celebrity convention and completely fabricate one’s scientific conclusions, and this all still stands as valid – beliefs and conclusions from reason, acting in lieu of science! Will wonders never cease. Our entire knowledge base as humanity, derived via basement and cubicle keyboards; shoved down the throats of anyone who is comforted or convenienced by daring to ponder anything different.

Rigorously, as cited here, can mean that one drives home a conclusion even in the absence of sufficient evidence to do so. ‘Rigorously destroy’ is the implied context, not rigorously research. Openly means to declare your preferences on Twitter and in ‘science’ blogs to all the world; nay, promulgate this to your malevolent minions, once you have reasoned your conclusion through the insufficient but ‘rigorous’ evidence which allowed for its adoption.

So far, 100% bullshit – a moron’s definition of skepticism – but let’s continue.

to all empirical claims, especially their own.

Now here, a slip-up of sanity encroaches on this fantasy of personal power and aggrandizement. Yes, skepticism is applied to ‘claims’ and not to observations (though you will find that conflating the two is a key habit of fake skeptics – who have never filed a patent nor issued a lab report). Skepticism is not applied to observations, intelligence and data; not to faith, not hopes, not art, not music and drama, not to subjects and not to persons. It is applied to the process of vetting hypotheses (not screening the intelligence which drives their necessity – and there is a difference), asking procedural and contextual scientific questions, and undertaking the scientific method, on the part of someone qualified inside the research at hand. If this is what Novella means by ‘empirical claims’ – or more accurately, claims to empiricism – then this is correct. The purpose of skepticism is not to prove that a priori reason is right, nor to prove or disprove religions, nor to act as the whip of authority proffered by external observers, nor to settle arguments. These are the abuses of skepticism by the dilettante and the malevolent.

If by ‘their own’ he means: “First and foremost finds fulfillment through disciplined pursuit of an insatiable curiosity; scrutinizing and maintaining caution around his own assumptions, regardless of where they are obtained; discriminating with discipline ontological and religious cosmologies from actual science” – then he is correct on this point. If however the contention that one examines one’s own claims stands tantamount to an apologetic as to why one’s beliefs and conclusions are therefore superior through purported self-examination, then this is not what skepticism involves. Skepticism is never employed as a boast, and fake skeptics do not get this.

A skeptic provisionally proportions acceptance

A skeptic does no such thing. A skeptic is averse to any such action. A skeptic may entertain multiple constructs as possible or likely, but they do not call those assessments conclusions, nor do they stack such risk-bearing sentences into a religion they call science or skepticism – yes, even provisionally.

acatalepsia Fallacy

/philosophy : fallacy : skepticism/ : a flaw in critical path logic wherein one appeals to the Pyrrhonistic skepticism principle that no knowledge can ever be entirely certain – and twists it into the implication that knowledge is therefore ascertained by the mere establishment of some form of ‘probability’. Moreover, that when a probability is established – no matter how plausible, slight or scant in representation of the domain of information it might be – it now stands as accepted truth. Because all knowledge is only ‘probable’ knowledge, all one has to do is spin an apparent probability, and one has ascertained accepted knowledge. Very similar in logic to the Occam’s Razor aphorism citing that the ‘simplest explanation’ is the correct explanation.

The skeptic must recognize that any logical inference does not stand alone. Our need in science is to sequence and stack inferences so that they become useful. But in such stacking we imbue risk into the equation – risk which is oftentimes not acknowledged. Such activity inevitably leads to large ‘simplest explanation’ abductive-reasoning houses of cards. These houses of cards then become proscribed orthodoxy (reason), under Neuhaus’s Law. This is the methodical process of a pretend skeptic.

to valid logic and a fair and thorough assessment of available evidence

Again, all the fake skeptic needs in his quiver under this framing is to declare something logical, and to base a prematurely forced conclusion upon the ‘available evidence‘ (which is very often woefully inadequate to support any conclusion). This constitutes a Transactional Occam’s Razor fallacy. Its being ‘thorough‘ in no way excuses the pseudoscience entailed therein. The phrase is an amphibology crafted so as to excuse any mode of thought one or one’s club chooses (describing this as ‘fair‘), as qualifying to stand in lieu of science. This is institutionalized dishonesty, plain and simple.

and studies the pitfalls of human reason

Aha! Finally some actual study! Unfortunately it arrives in the form of “I am here to study the reasons why, despite my being rational, you are stupid and pseudo-scientific.” So far the definition framer has completely ignored the actual research work of observing, assimilating intelligence, reducing, developing necessity and exploring several diametrically opposed constructs – the hallmark of real skepticism. They have invested their sole effort regarding actual study into the discipline of understanding why everyone else is so stupid besides themselves (see The Habits of the Pseudo-Skeptic Sleuth). This is a game of pretense and malevolence. It is the hallmark of a spoiled, ego-laden and arrogant person.

and the mechanisms of deception

Whoops, they missed this part of the definition – the mechanisms of deception constitute a rather large field of study, one which they have failed to apply to their own cabal.

so as to avoid being deceived by others or themselves.

One does not avoid being deceived by ‘provisionally proportioning acceptance to beliefs and conclusions that are reliable and valid’ – this simply means that you are just another one of the con men, merely with a different flim-flam pitch, called ‘skepticism.’ Fake skepticism: the best con job in the business. Con yourself first, con a club of con men, then con others.

First, the surest way to bring a con job into inception is to begin enforcing it by means of a dissent-intolerant and punitive club (see Why Club Quality Does Not Work).

Second, in true skepticism one avoids being deceived by holding pre-scientific dispositions in an attitude of suspended judgement, epoché. One meticulously, and as a priority, avoids joining clubs of consensus. Hence the statement Epoché Vanguards Gnosis. Errant information will eventually step on its own dick and falsify itself; all you have to do is be patient. Squelching information and ideas does nothing but squelch this natural reductive process. This is the process of skepticism; it does not involve prematurely adopting and shooting down things we choose as valid and invalid. It is not something you are, it is a discipline you practice. Its virtues are curiosity, intelligence, tolerance and patience (see The Nurturing of the New Mind).

Skepticism values method over any particular conclusion.

Close, but no cigar. Steven wants this to sound like he is referring to the scientific method – but he is not. This phrase, especially given the context preceding it, is referential to methodical cynicism, not the scientific method. True skepticism values qualified knowledge (i.e., that which is effective at underpinning the further improvement of understanding, or in alleviating suffering) and the scientific method, over anything else – even over one’s own provisionally proportioned acceptance of beliefs and conclusions that are reliable and valid. What hogwash.

The definition framed here by Steven Novella is not what skepticism is at all. This is childishly obvious to a graduate-level philosopher, or to anyone who has reduced a set of hypotheses in order to isolate an actual scientific discovery. Understandably, most people do not bear these qualifications, and fall easy prey to this errant pop-definition. But this is the fight we ethical skeptics must undertake: changing the minds of those who have been media-brainwashed, allowing them to see the farce for what it is, maybe for the first time.

The Emperor Wears No Clothes.

For an accurate and agenda free definition of scientific skepticism, see A New Ethic.

epoché vanguards gnosis