The Ethical Skeptic

Challenging Pseudo-Skepticism, its Agency and Cultivated Ignorance

Where Were the ‘Skeptics’?

It is oft said that fortune favors the brave. But what society witnessed during the last two years of raging human rights abuse was a feckless ‘skeptic’ community – formerly swaggering with doubt, now silent and groveling, prostrate before their bare-naked Emperors.

Our contemporary form of skepticism since the time of Descartes has been defined as

Philosophical views which advance some degree of doubt regarding claims that are elsewhere taken for granted.

But what happens when those who have assumed the task of ‘methodical doubt’ on everyone’s behalf fail to undertake even basic forms of the very skepticism with which they formerly intimidated all around them? Such a charade now reveals itself to all concerned as constituting nothing more than a huckster’s act. ‘Why assume the risk of doubting? Let the people we disdain take the flak.’ Watching them cower in their basements for two years, my regard for pop skeptics hit rock bottom during the Covid-19 pandemic – and I did not think it could possibly sink any further. Cowards.

Cowardice has displaced doubt in skepticism.

Fortune Favors the Huckster. Why take a risky leap yourself, when you have a much greater chance of finding a patsy who will leap on your behalf?

Fear doesn’t change people. In fact, it exposes them.

~ Ann Bauer, Author @annbauerwriter

The Ethical Skeptic, “Where Were the Skeptics?”; The Ethical Skeptic, WordPress, 8 Jan 2022; Web, https://theethicalskeptic.com/?p=60162

Panduction: The Invalid Form of Inference

One key form of invalid inference on the part of fake skeptics – if not the primary form – resides in the methodology of panductive inference. A pretense of Popperian demarcation, panduction is employed as a masquerade of science in the form of false deduction. Moreover, it constitutes an artifice which establishes the purported truth of a favored hypothesis by means of the extraordinary claim of having falsified every competing idea in one fell swoop of rationality. Panduction is the most common form of pseudoscience.

Having just finished my review of the Court’s definition of malice and oppression in the name of science, as outlined in the Dewayne Johnson vs. Monsanto Company case, my thinking broached a category of pseudoscience practiced by parties who share motivations similar to those of the defendant in that landmark trial. Have you ever been witness to a fake skeptic who sought to bundle all ‘believers’ into one big deluded group, who all hold or venerate the same credulous beliefs? Have you ever read a skeptic blog claiming a litany of subjects to be ‘woo’ – yet fully unable to cite any evidence whatsoever which served to epistemologically classify that embargoed realm of ideas under such an easy categorization of dismissal? What you are witness to is the single most common, insidious and pretend-science habit of fake skeptics: panduction.

It’s not that all the material contained in the embargoed hypothesis realm has merit. Most of it does not. But what is comprised therein, even and especially in being found wrong, resides along the frontier of new discovery. You will soon learn on this journey of ethical skepticism that discovery is not the goal of the social skeptic; rather, it is exactly what they have been commissioned to obfuscate.

Science to them is nothing more than an identity which demands ‘I am right’.

There exist three forms of valid inference, in order of increasing scientific gravitas: abduction, induction and deduction. Cleverly downgrading science along these forms of inference, in order to avoid more effective inference methods which might reveal a disliked outcome, constitutes another form of fallacy altogether, called methodical deescalation. We shall not address methodical deescalation here, but rather a fourth common form of inference, which is entirely invalid in itself. Panduction is a form of ficta rationalitas: an invalid attempt to employ critical failures in logic and evidence in order to condemn a broad array of ideas, opinions, hypotheses, constructs and avenues of research as being Popper-falsified, when in fact nothing of the sort has been attained. It is a method of proving yourself correct by impugning everyone and everything besides the idea you seek to protect, all in one incredible feat of armchair or bar-stool reasoning. It is often peddled as critical thinking by fake skeptics.

Panduction is a form of syllogism derived from extreme instances of Appeal to Ignorance, Inverse Negation and/or Bucket Characterization from a Negative Premise. It constitutes a shortcut attempt to promote one idea at the expense of all other ideas, or to kill an array of ideas one finds objectionable. Nihilists, for example, employ panduction as a means to ‘prove’ that nothing exists aside from the monist and material entities which they approve as real. They maintain the fantasy that science has proved that everything aside from what they believe is false by a Popperian standard of science – i.e. deduced. This is panduction.

Panduction

/philosophy : invalid inference/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop ‘everything but what my club believes’ as constituting one group of bad people, who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series of studies nor replication methodology can be derived from this type of ‘study’, which in essence serves to make it pseudoscience. This is a common ‘study’ format conducted by social skeptics masquerading as scientists, to pan people and subjects they dislike.

There are three general types of Panduction. In its essence, panduction is any form of inference used to pan an entire array of theories, constructs, ideas and beliefs (save for one favored and often hidden one), by means of the following technique groupings:

  1. Extrapolate and Bundle from Unsound Premise
  2. Impugn through Invalid Syllogism
  3. Mischaracterize through False Observation

The first is executed by attempting to falsify entire subject horizons through bad extrapolation. The second involves poorly developed philosophies of denial. Finally, the third involves the process of converting disliked observations, or failures to observe, into favorable observations:

Panduction Type I

Extrapolate and Bundle from Unsound Premise – Bucket Characterization through Invalid Observation – using a small, targeted or irrelevant sample of linear observations to extrapolate and further characterize an entire asymmetric array of ideas other than a preferred concealed one. Falsification by:

Absence of Observation (praedicate evidentia modus ponens) – any of several forms of exaggeration or avoidance in qualifying a lack of evidence, logical calculus or soundness inside an argument. Any form of argument which claims a proposition consequent ‘Q’ while lacking the qualifying modus ponens ‘If P then’ premise in its expression – rather, merely implying ‘If P then’ as its qualifying antecedent. This serves as a means of surreptitiously avoiding a lack of soundness or of logical calculus inside that argument, and of enforcing only its conclusion ‘Q’ instead. A ‘There is no evidence for…’ claim made inside a condition of little study, or full absence of any study whatsoever.

Insignificant Observation (praedicate evidentia) – hyperbole in extrapolating or overestimating the gravitas of evidence supporting a specific claim, when only one examination of merit has been conducted, insufficient hypothesis reduction has been performed on the topic, a plurality of data exists but few questions have been asked, few dissenting or negative studies have been published, or few or no such studies have indeed been conducted at all.

Anecdote Error – the abuse of anecdote in order to squelch ideas and panduct an entire realm of ideas. This comes in two forms:

Type I – a refusal to follow up on an observation or replicate an experiment does not relegate the data involved to an instance of anecdote.

Type II – an anecdote cannot be employed to force a conclusion, such as using it as an example to condemn a group of persons or topics – but an anecdote can however be employed to introduce Ockham’s Razor plurality. This is a critical distinction which social skeptics conveniently neither grasp nor employ.

Cherry Picking – pointing to a talking sheet of handpicked or commonly circulated individual cases or data that seem to confirm a particular position, while ignoring or denying a significant portion of related context cases or data that may contradict that position.

Straw Man – misrepresentation of an ally’s or opponent’s position or argument, or the fabrication of such in the absence of any stated opinion.

Dichotomy of Specific Descriptives – a form of panduction wherein anecdotes are employed to force a conclusion about a broad array of opponents, yet are never used to apply any conclusion about one’s self or one’s favored club. Specific bad things are only done by the bad people, while very general descriptives of good apply when describing one’s self or club. Specifics on others who play inside disapproved subjects; general, nebulous descriptives on one’s self-identity and how it constitutes acceptable ‘science’ or ‘skepticism’.

Associative Condemnation (Bucket Characterization and Bundling) – the attempt to link controversial subject A with personally disliked persons who support subject B, in an effort to impute falsehood to subject B and frame its supporters as whackos. Guilt through bundling association, and lumping all subjects into one subjective group of believers. This will often involve a context shift or definition expansion in a key word as part of the justification. Spinning, for example, the idea that those who research pesticide contribution to cancer are also therefore flat Earthers.

Panduction Type II

Impugn through Invalid Syllogism – Negative Assertion from a Pluralistic, Circular or Equivocal Premise – defining a set of exclusive premises to which the contrapositive applies, and which serves to condemn all other conditions.

Example (note that ‘paranormal’ here is defined as that which a nihilist rejects as being even remotely possible):

All true scientists are necessarily skeptics. True skeptics do not believe in the paranormal. Therefore no true scientist can research the paranormal.

All subjects which are true are necessarily not paranormal. True researchers investigate necessarily true subjects. Therefore to investigate a paranormal subject makes one not a true researcher.

All false researchers are believers. All believers tend to believe the same things. Therefore all false researchers believe all the same things.

Evidence only comes from true research. A paranormal investigator is not a true researcher. Therefore no evidence can come from a paranormal subject.

One may observe that the above four examples – thought which rules social skepticism today – are circular in their syllogism, and can only serve to produce the single answer which was sought in the first place. By ruling out entire domains of theory, thought, construct, idea and effort, one has essentially panned everything except that which one desires to be indeed true (without saying as much). It would be like Christianity pointing out that every single thought on the part of mankind is invalid, except what is in the Bible – the Bible being the codification equivalent of the above four circular syllogisms into a single document.
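
To make the circularity plain, here is a minimal sketch in Python (the predicate names are mine, purely hypothetical stand-ins). Each ‘conclusion’ is already embedded inside the definitions themselves, so no observation a researcher could ever bring to the table will alter the output:

```python
# A minimal sketch of the circular syllogisms above. The predicates are
# hypothetical stand-ins: each 'conclusion' is baked into the definitions,
# so no input can ever produce a different answer.

def is_true_skeptic(person):
    # Premise: true skeptics do not believe in (or research) the paranormal
    return not person["researches_paranormal"]

def is_true_scientist(person):
    # Premise: all true scientists are necessarily skeptics
    return is_true_skeptic(person)

def produces_evidence(person):
    # Premise: evidence only comes from true researchers
    return is_true_scientist(person)

# No matter what a paranormal researcher observes or measures, the chain
# of definitions discards it before it is ever examined.
researcher = {"researches_paranormal": True, "has_measurements": True}
print(produces_evidence(researcher))  # always False - decided by definition, not by data
```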

Panduction Type III

Mischaracterize through False Observation – Affirmation from Manufacturing False Positives or Negatives – manipulating the absence of data or the erroneous nature of various data collection channels to produce false negatives or positives.

Panduction Type III is an extreme form of an appeal to ignorance. In an appeal to ignorance, one is faced with observations of negative conditions which could tempt one to infer inductively that there exists nothing but the negative condition itself. An appeal to ignorance simply reveals one of the weaknesses of inductive inference. Let’s say that I find a field which a variety of regional crow murders frequent. So I position a visual motion-detection camera on a pole across from the field, in order to observe the crow murders that frequent it. In my first measurement and observation instance, I observe all of the crows to be black. Let us further then assume that I repeat that observation exercise 200 times on that same field over the years. From this data I may well develop a hypothesis, including a testable mechanism, in which I assert that all crows are black. I have observed a large population size, and all of my observations were successful, to wit: I found 120,000 crows to all be black. This is inductive inference. Even though this technically would constitute an appeal to ignorance, it is not outside of reason to assert a new null hypothesis, that all crows are black – because my inference was derived from the research, and was not a priori favored. I am not seeking to protect the idea that all crows are black simply because I or my club status are threatened by the specter of a white crow. The appeal to ignorance fallacy is merely a triviality in this case, and does not ‘disprove’ the null (see the Appeal to Fallacy). Rather it stands as a caution, that plurality should be monitored regarding the issue of all crows being black.

But what if I become so convinced that the null hypothesis in this case is the ‘true’ hypothesis, or had even preferred that idea in advance because I am a member of a club which uses a black crow as its symbol? In such a case I approach the argument with an a priori belief which I must protect. I begin to craft my experimental interpretation of measurement such that it conforms to this a priori mandate in understanding. This will serve to produce four species of study observation procedural error which are, in fact, pseudoscience: the clever masquerade of science and knowledge:

A.  Affirmation from Result Conversion  – employing a priori assumptions as filters or data converters, in order to produce desired observational outcomes.

1.  Conversion by a priori Assumption (post hoc ergo propter hoc). But what if the field I selected bore a nasty weather phenomenon of fog, on an every-other-day basis? Further then, this fog obscured a good view of the field, to the point where I could only observe the glint of sunlight off the crows’ wings, which causes several of them to appear white, even though they are indeed black. But because I ‘know’ there are no white crows, I use a conversion algorithm I developed to count the glints inside the fog, and register them as observations of black crows – even though a white crow could also cause the same glint. I have created false positives by corrupted method.

2.  Conversion by Converse a priori Assumption (propter hoc ergo hoc – aka plausible deniability). Further then, what if I assumed that any time I observed a white crow, this would therefore be an indication that fog was present, and that a condition of Conversion by a priori Assumption was therefore in play? I would henceforth never be able to observe a white crow at all, finding only results which conform to the null hypothesis – which would now be an Omega Hypothesis (see The Art of Professional Lying: The Tower of Wrong).
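
A toy simulation of these two conversions together – every rate below is assumed purely for illustration – shows how such a method can only ever return the favored answer:

```python
import random

random.seed(42)

# Toy model: suppose 1% of crows are actually white (assumed rate), and on
# foggy days every crow registers only as a 'glint' of sunlight.
def observe_field(n_crows=600, foggy=False):
    actual = ["white" if random.random() < 0.01 else "black"
              for _ in range(n_crows)]
    observed = ["glint" if foggy else c for c in actual]
    return observed, actual

recorded_black, recorded_white = 0, 0
for day in range(200):                       # 200 observation days, as in the essay
    observed, actual = observe_field(foggy=(day % 2 == 0))
    for obs in observed:
        if obs == "glint":
            recorded_black += 1              # Conversion 1: every glint counted as black
        elif obs == "white":
            pass                             # Conversion 2: a white crow 'must' be fog - discarded
        else:
            recorded_black += 1

print(f"Recorded black: {recorded_black}, recorded white: {recorded_white}")
# White crows were present in the actual field data all along, yet the
# record shows zero - the method can only confirm 'all crows are black'.
```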

Example: Viking Mars Lander Data Manipulation

Two Viking landers were sent to Mars, in part to study for signs of life. NASA researchers took soil samples the Viking landers scooped from the surface and mixed them with nutrient-rich water. If the soil had life, the theory went, the soil’s microbes would metabolize the nutrients in the water and release a certain signature of radioactive molecules. To their pleasant surprise, the nutrients were metabolized and radioactive molecules were released – suggesting that Mars’ soil contained life. However, the Viking probes’ other two experiments found no trace of organic material, which prompted the question: if there were no organic materials, what could be doing the metabolizing? So by assumption, the positive results from the metabolism test were dismissed as derivative of some other chemical reaction, which has not been identified to date. The study was used as a rational basis from which to decline further search for life on Mars, when it should instead have been deemed ‘inconclusive’ (especially in light of our finding organic chemicals on Mars in the last several months).1

B. Affirmation from Observation Failure Conversion – errors in observation are counted as observations of negative conditions, and are further then used as data, or as a data-screening criterion.

Continuing with our earlier example: what if, on 80% of the days on which I observed the field full of crows, the camera malfunctioned and errantly pointed into the woods to the side, and I was fully unable to make observations at all on those days? Further then, what if I counted those non-observing days as ‘black crow’ observation days, simply because I had defined a black crow as being the ‘absence of a white crow’ (pseudo-Bayesian science), instead of being constrained to only the actual observation of an actual physical white crow? Moreover, what if, because of the unreliability of this particular camera, any observations of white crows it presented were tossed out, so as to prefer observations from ‘reliable’ cameras only? This too is pseudoscience, in two forms (a toy sketch follows the two definitions below):

1.  Observation Failure as Observation of a Negative (utile absentia). – a study which observes false absences of data or creates artificial absence noise through improper study design, and further then assumes such error to represent verified negative observations. A study containing field or set data in which there exists a risk that absences in measurement data, will be caused by external factors which artificially serve to make the evidence absent, through risk of failure of detection/collection/retention of that data. The absences of data, rather than being filtered out of analysis, are fallaciously presumed to constitute bonafide observations of negatives. This is improper study design which will often serve to produce an inversion effect (curative effect) in such a study’s final results. Similar to torfuscation.

2.  Observation Failure as Basis for Selecting For Reliable over Probative Data (Cherry Sorting) – when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data. Based upon the a priori and often subjective claim that the observation was ‘not reliable’. Ignores the probative value of the observation and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative, and can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).
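
A companion sketch – again, every rate assumed purely for illustration – of how utile absentia and Cherry Sorting convert camera failure itself into ‘evidence’ for the favored hypothesis:

```python
import random

random.seed(7)

days = 200
recorded_black_days = 0
retained_white_sightings = 0

for day in range(days):
    camera_worked = random.random() < 0.20   # camera fails on 80% of days (assumed)
    if not camera_worked:
        recorded_black_days += 1             # utile absentia: failure counted as a negative observation
        continue
    saw_white = random.random() < 0.10       # suppose white crows visit 10% of days (assumed)
    if saw_white:
        pass                                 # Cherry Sorting: 'unreliable camera' - sighting tossed out
    else:
        recorded_black_days += 1

print(f"'Black crow only' days recorded: {recorded_black_days} of {days}")
print(f"White crow sightings retained: {retained_white_sightings}")
# Every channel of error has been converted into support for the favored
# hypothesis, while the falsifying observations never enter the record.
```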

These two forms of conversion of observation failures into evidence in favor of a particular position are highlighted no better than by studies which favor healthcare-plan diagnoses over cohort and patient-input surveys. Studies such as the Dutch MMR-Autism Statistical Meta-Analysis or the Jain-Marshall Autism Statistical Analysis failed precisely because of the two above fallacious methods regarding the introduction of data – relying only upon statistical analytics of risk-sculpted and cherry-sorted data, rather than direct critical path observation.

Example: Jain-Marshall Autism Study

Why is the 2015 Jain-Marshall Study of weak probative value? Because it took third-party, unqualified (health care plan) sample interpretations of absences (these are not observations – they are ‘lack-of’ observations – which are not probative data to an intelligence specialist, nor to a scientist – see pseudo-theory) from vaccinated and non-vaccinated children’s final medical diagnoses at ages 2, 3, and 5. It treated failures in the data collection of these healthcare databases as observations of negative results (utile absentia) – a data vulnerability similar to the National Vaccine Injury Compensation System’s ‘self-volunteering’ of information and its limitation of detection to within 3 years. This favors a bad, non-probative data repository simply because of its perception as being ‘reliable’ as a source of data. It fails to catch 99% of signal observations (Cherry Sorting), and there is good demonstrable record of that failure to detect actual injury circumstances.2

One might chuckle at the face-value ludicrousness of either Panduction Type III A or B. But Panduction Type III is regularly practiced inside peer-reviewed journals of science. Its wares constitute the most insidious form of malicious and oppressive fake science. One can certainly never expect a journalist to understand why this form of panduction is invalid, but one should certainly expect it of their peer-review scientists – those who are there to protect the public from bad science. And of course, one should expect it of an ethical skeptic.

The Ethical Skeptic, “Panduction: The Invalid Form of Inference”; The Ethical Skeptic, WordPress, 31 Aug 2018; Web, https://wp.me/p17q0e-8c6

‘Anecdote’ – The Cry of the Pseudo-Skeptic

It is no coincidence that the people who actually, sincerely want to figure things out (intelligence agency professionals) maintain the most powerful processors and most comprehensive data marts of intelligence anecdote in the world. They regard no story as too odd, no datum as too small – and for good reason. These are the details which can make or break a case, and save lives. Fake skeptics rarely grasp that an anecdote is used to establish plurality, not a conclusion. Particularly in the case where an anecdote is of possible probative value, screening it out prematurely is a method of pseudo-scientific data filtering (cherry sorting) – a facade of appearing smart and scientific, when nothing of the sort is even remotely true.
‘Anecdote’ is not a permissive argument affording one the luxury of dismissing any stand alone observation they desire. This activity only serves to support a narrative. The opposite of anecdote, is narrative.

“Nothing is too small to know, and nothing too big to attempt.” Or so goes the utterance attributed to William Van Horne, namesake of the Van Horne Institute, a quasi-academic foundation focusing on infrastructure and information networks. In military intelligence – or more specifically inside an AOC (area of concern) or threat analysis – one learns early in one’s career an axiom regarding data: ‘Nothing is too small.’ This simply means that counter-intelligence facades are often broken or betrayed by the smallest of details, collected by the most unlikely of HUMINT or ELINT channels. Observations are not vetted simply by presuming the reliability of their source, but also by the probative value which they may impart to the question at hand. If you reject highly probative observations simply because you have made an assumption as to their reliability, this is a practice of weak science. Why do you think it no coincidence that the people who actually, sincerely want to figure things out (intelligence agency professionals) maintain the most powerful processors and most comprehensive data marts of intelligence data in the world? Be wary of a skeptic who habitually rejects highly probative observations because of a question surrounding their reliability, and subsequently pretends that the remaining data set is not cherry-picked. This is data skulpting – in other words, pseudoscience.

Intelligence is the process of taking probative observations and making them reliable – not, taking reliable information and attempting to make it probative.

A sixth sigma set of measurements, all multiplied or screened by a SWAG measure of reliability, equals – and always equals – a SWAG set of measurements.
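
A quick arithmetic illustration of that aphorism, with both uncertainty figures assumed purely for demonstration – relative uncertainties combine roughly in quadrature under multiplication, so the SWAG term dominates no matter how precise the measurement:

```python
# Toy numbers: a 'six sigma' class measurement screened by a +/-50% guess.
measurement_rel_err = 3.4e-6   # assumed relative error of a very precise measurement
swag_rel_err = 0.50            # assumed relative error of a SWAG reliability factor

# Under multiplication, relative errors combine approximately in quadrature.
combined_rel_err = (measurement_rel_err**2 + swag_rel_err**2) ** 0.5
print(f"Combined relative error: {combined_rel_err:.2%}")   # ~50.00%
# The screened result is no more trustworthy than the SWAG factor itself.
```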

Should not science be the same way? I mean, if we really wanted to know things, why would science not adopt the methods, structures and lingo of our most advanced intelligence agencies? After all, this is what intelligence means – ‘means of discovery’ – and it is a particular reason why I include Observation and Intelligence Aggregation, right along with establishing Necessity, as the first three steps of the scientific method. Or could it be that they do not want to know certain things? That science is less akin to an intelligence organization, and more akin to a group activist-endorsing and virtue-signalling over corporate and social agendas? So, if you are in the mood for some one-liners along this line, here you go:

The plural of anecdote, is data.

    ~ Raymond Wolfinger, Berkeley Political Scientist, in response to a student’s dismissal of a simple factual statement by means of the pejorative categorization ‘anecdote’1

The opposite of anecdote, is narrative.

The declaration of anecdote, is every bit the cherry picking which its accuser implies.

    ~ The Ethical Skeptic

One will notice that RationalWiki incorrectly frames the meaning of anecdote as the “use of one or more stories, in order to draw a conclusion”.2 This is a purposefully layman, narrative-driven version of the term anecdote. An anecdote is used to establish plurality – not a conclusion. Had the author of RationalWiki ever prosecuted an intelligence scenario, or directed a research lab (both of which I have done), they might have understood the difference between ‘conclusion’ and ‘plurality’ inside the scientific method. But this definition was easy, simpleton and convenient to the narrative methodology. Sadly, all one has to do in this day and age of narrative and identity is declare one’s self a skeptic, and start spouting off stuff one has heard in the form of one-liners. Below you will see an example wherein celebrity skeptic Michael Shermer also fails to grasp this important discipline of skepticism and science (see The Real Ockham’s Razor).

Which of course brings up the subject of the word anecdote itself. Google Dictionary defines anecdote as the following (time graph and definition, both from Google Dictionary):3

Anecdote

/noun : communication : late 17th century: from French, or via modern Latin from Greek anekdota ‘things unpublished,’ from an- ‘not’ + ekdotos, from ekdidōnai ‘publish’/ : a short and amusing or interesting story about a real incident or person. An account regarded as unreliable or hearsay.

Anecdote Error

/philosophy pseudoscience : invalid inference/ : the abuse of anecdote in order to squelch ideas and panduct an entire realm of ideas. This comes in two forms:

Type I – a refusal to follow up on an observation or replicate an experiment does not relegate the data involved to an instance of anecdote.

Type II – an anecdote cannot be employed to force a conclusion, such as using it as an example to condemn a group of persons or topics – but an anecdote can however be employed to introduce Ockham’s Razor plurality. This is a critical distinction which social skeptics conveniently neither grasp nor employ.

This is a reasonable, generally accepted range of usage of the word. Notice the large and accommodating equivocal footprint of this word, ranging very conveniently from ‘actual story’ to ‘lie’:

Fake skeptics wallow & thrive in luxuriously equivocal terminology like this. And indeed, this shift in definition has been purposeful over time. Notice how the pejorative use of the term came into popularity just as the media began to proliferate forbidden ideas, and was no longer controllable by social skeptic clients. Social skeptics were enlisted as the smart-but-dumb (Taleb IYI) players who helped craft the new methods of thought enforcement, the net effect of which you may observe in the graph below:

Cherry Sorting: A Sophist’s Method of Screening Out Data They Find ‘Not Reliable’

Set aside, of course, rumor and unconfirmed stories. These are anecdotes, yes, because of the broad equivocal footprint of the term; however, this is not what is typically targeted when one screens out observations by means of the fallacious use of the term. Yes, being terrified of an answer (in both existential and career-impact contexts) can serve to bias the a priori assessment of an observation as ‘unreliable’ – even for the most honest of scientists. Are you, as a scientist, going to include observations which will serve to show that vaccines are causing more human harm than assumed? Hell no, because you will lose your hard-earned career in science. So the crafting of a study employing high-reliability/low-probative-value observations is paramount in such a case. If you ensure that your data is reliable and that your databases are intimidatingly large (a meta-study for instance), then you are sure to appear scientific – even if you have never once spoken to a stakeholder inside the subject in question.

Information of a probative nature naturally involves more risk, since more conjecture is placed on the line in its collection (which is what makes it more informative). It involves more subject matter expertise, more effort and more expense as well. To retreat to a solely analytical position – to include only information which avoids risk/expertise/cost, rather than groom probative information and seek to mitigate its risk through consilience efforts and direct observation – this is the process by which we actively craft false knowledge on a grand scale.

Intelligence professionals, in contrast, are trained to examine the terrifying answer – science professionals are trained to examine the most comforting answer (the simplest and most career-safe). The bottom line is this: intelligence professionals, those who truly seek answers from the bottom of their very being, are promoted in their careers for regarding data differently than does weak scientific study, or than do pseudo-skeptics. The issue, as these professionals understand, is NOT that anecdotes are unreliable. All questionable, orphan or originating observations are ‘unreliable’ to varying degrees, until brought to bear against other consilience: other data which helps corroborate or support their indication. This is the very nature of intelligence – working to increase the reliability and informative (probative) nature of your data. Be sure not to allow an appeal to authority to stand in de facto as an assumption of reliability and informative ability. One falsifying anecdote may be unreliable, but it is potentially also highly informative. Ethical skepticism involves precisely the art of effecting an increase in reliability of one’s best and most probative data. Of course I do not believe with credulity in aliens visiting our Earth; but if we habitually screen out the millions of qualified reports of solid, clear and intelligent craft of extraordinary capability, flying around in our oceans and skies – we are NOT doing science, and we are NOT doing skepticism. We are anecdote-data-skulpting pseudo-scientists. And if we then take the people who are conducting this probative database assembly, and call them pseudo-scientists – we are nothing even remotely associated with skepticism.

A 70% bootstrap deductive observation is worth ten 90% bootstrap inductive ones…
If I only examine 90% plus reliable data, I will get entirely the wrong answer.
Reliability can be deceiving when assumed a priori. It is my inference which must accrue reliability through repeated observation.
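
A toy Bayesian reading of the first aphorism – every probability below is assumed purely for illustration – shows why a single falsifying report at 70% reliability can outweigh ten confirming observations at 90%:

```python
from math import prod

# Assumed likelihoods, for illustration only.
prior_odds = 1.0               # even odds on 'all crows are black'

# A confirming black-crow sighting barely discriminates between hypotheses:
# P(black sighting | all black) = 0.99 vs P(black sighting | some white) = 0.97
lr_confirm = 0.99 / 0.97

# A single white-crow report at 70% reliability, against a 5% false-report rate:
# P(report | some white) = 0.70 vs P(report | all black) = 0.05
lr_falsify = 0.70 / 0.05

odds_after_ten_confirms = prior_odds * prod([lr_confirm] * 10)
odds_after_one_falsifier = prior_odds / lr_falsify

print(f"Odds on 'all black' after 10 confirmations: {odds_after_ten_confirms:.2f}")    # ~1.23
print(f"Odds on 'all black' after 1 falsifying report: {odds_after_one_falsifier:.3f}")  # ~0.071
# The ten 'reliable' confirmations barely move the needle; the one
# 'unreliable' but probative report moves it by more than an order of magnitude.
```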

Only assaying the reliability of an observation constitutes a streetlight effect error, regardless of the presence or absence of bias. Probative evidence is always the priority. No inference in science hinges on the accuracy/bias status of one point of evidence alone. There is always bias; the key is to filter such influence out through accrued observation – not to toss data you don’t like. It is the inference itself which bears the critical path of establishing reliability. Reliability and probative value in data are established after the fact, through comparison and re-observation – not presumed during data collection. Assuming such things constitutes the ultimate form of subjective cherry picking: cherry sorting. One will be sure to find the thing one sought to find in the first place.
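
As a minimal sketch of that principle – the numbers here are assumed, not drawn from any real case – watch an initially ‘unreliable’ observation accrue credibility as independent channels corroborate it after the fact:

```python
def update(prior, p_corroborate_if_true=0.8, p_corroborate_if_false=0.2):
    """One Bayesian update from a single independent corroborating channel."""
    numerator = prior * p_corroborate_if_true
    denominator = numerator + (1 - prior) * p_corroborate_if_false
    return numerator / denominator

credibility = 0.10   # a lone report, initially dismissed as 'anecdote' (assumed prior)
for channel in range(1, 5):
    credibility = update(credibility)
    print(f"After corroborating channel {channel}: {credibility:.2f}")
# Prints roughly 0.31, 0.64, 0.88, 0.97 - the observation accrues its
# reliability through comparison and re-observation, not by an a priori label.
```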

[Please note: one of my intelligence professional associates has emailed me and reminded me of the “I” community’s mission to focus on ‘capability and intent’, as well as probative and reliable factors, inside observation gathering and data mart assembly. Capability and intent are assumptions which social skeptics bring to the dance already formed – assumptions which help them justify their cherry sorting methodologies. It is a form of conspiracy theory being applied to science. This is why ethical skepticism, despite all temptation to the contrary, must always be cautious of ‘doubt’ – doubt is a martial art which, if not applied by the most skilled of practitioners, serves only to harm the very goals of insight and knowledge development. Or conversely, it is a very effective weapon for those who desire to enforce a specific outcome/conclusion.]

Be cautious, therefore, when ‘reliability’ is employed as a bit of a red herring during the sponsorship stage of the scientific method. Today, with the storage and handling capacity, and the relatively inexpensive nature, of our computational systems, there is no excuse for tossing out data for any reason – especially during the observation, intelligence and necessity steps of the scientific method. We might apply a confidence factor to data, as long as all data can be graded on such a scale; otherwise such an effort is symbolic and useless. Three errors which data professionals make inside Anecdote Data Skulpting, which involve this misunderstanding of the role of anecdote, and which are promoted and taught by social skeptics today, follow:

Anecdote Data Skulpting (Cherry Sorting)

/philosophy : pseudo-science : bias : data collection : filtering error/ : when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data. Based upon the a priori and often subjective claim that the observation was ‘not reliable’. Ignores the probative value of the observation and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative, and can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).

‘Anecdote’ is not a permissive argument affording one the luxury of dismissing any stand alone observation they desire. This activity only serves to support a narrative. The opposite of anecdote, is narrative.

1.  Filtering – to imply that information is too difficult to reduce/handle or make reliable, or is unnecessary and invalid(ated), and that filtering of erstwhile observations is therefore necessary, by means of the ‘skeptical’ principle of declaring ‘anecdote’.

Example:  This is not the same as the need to filter information during disasters.4 During the acute rush-of-information stage inside a disaster, databases are often of little help – so responses must be more timely than intelligence practices can support. But most of science, in fact pretty much all of it, does not face such an urgent response constraint – save for an ebola epidemic or something of that nature. What we speak of here is the case where professionals purposely ignore data at the behest of ‘third party’ skeptics – and as a result, blind science.

The discovery of penicillin was a mere anecdote. It took 14 years to convince the scientific community to even attempt the first test of the idea that Dr. Alexander Fleming’s accident – his reported case study regarding the contamination of his Petri dishes with Penicillium mold – was indeed an observation of science, and not a mere fantabulous story.5 In similar fashion, the anecdotes around the relationship between H. pylori and peptic ulcers were rejected for far too long by a falsely-skeptical scientific community. The one-liner retort by social skeptics, that ‘the science arrived right on time, after appropriate initial skepticism’, collapses to nothing but utter bullshit once one researches the actual narratives being protected and the groups involved.

Another example of anecdote data skulpting can be found here, as portrayed in celebrity skeptic Michael Shermer’s treatise on the conclusions he claims science has made about the afterlife and the concept of an enduring soul-identity. Never mind that the title implies the book is about the ‘search’ (of which there has been paltry little) – rest assured that the book is only about the conclusion: his religion of nihilism.

In this demonstration of the martial art of denial, Michael exhibits the ethic of data skulpting by eliminating 99% of the observational database – the things witnessed by you, nurses, doctors, your family and friends – through every bit of it constituting dismissable ‘anecdote’. He retains for examination the very thin margin of ‘reliable’ data which he considers scientific. The problem is that the data studies he touts are not probative – they are merely and mildly suggestive forms of abduction. Convenient inference which supports his null hypothesis – which just happens to be his religious belief as well. As an atheist and evolutionist, I also reject this form of pseudoscience.

For example, in the book Michael makes the claim that “science tells us that all three of these characteristics [of self and identity] are illusions.” Science says no such thing. Only nihilists and social skeptics make this claim – the majority of scientists do not support it at all.6 If one were to actually read even the famously touted Libet and Haynes studies on free will and identity, one would find that the authors claim their studies to be only mildly suggestive at best, and in need of further study. Actually read the studies before spouting what amounts to a Skeptic Appeal to Authority claim. The studies are good, but they are, under the most charitable perspective, inductive. The difference here being that ethical skeptics do not reject material monist studies, as nihilists do reject NDE studies – through hand waving, echo chamber and pseudoscience.

As well, this boastful conclusion requires one to ignore, and artificially dismiss or filter out as ‘anecdote’, specific NDE and other studies which falsify or countermand Shermer’s fatal claim to science.7 Yes, these cases are difficult to replicate – but they were well conducted, are well documented, and were indeed falsifying. And they are science. So Shermer’s claim that ‘science tells us’ is simply a boast. Dismissing these types of studies because they cannot be replicated, or because of a Truzzi Fallacy of plausible deniability, or because they ‘lacked tight protocols’ (a red herring foisted by Steven Novella), is the very form of cherry sorting we are decrying here: dismissing probative observations in favor of ‘reliable’ sources. In the end, all this pretend science amounts to nothing but a SWAG in terms of accuracy of conclusion. Something real skeptics understand, but fake skeptics never seem to grasp.

And under any circumstance, certainly not a sound basis for a form of modus ponens assertion containing the words ‘science tells us’.

Moreover, one must pretend that the majority of scientists really do not count in this enormous boast as to what ‘science tells us’. In Anthropogenic Global Warming science, the majority opinion stands tantamount to final. Yet now suddenly, scientific majorities no longer count, because a couple of celebrity skeptics do not favor the answer. Do we see the intimidation and influencing game evolving here? The graph below, from a 2009 Pew Research survey of scientists, illustrates that the majority of scientists actually believe in a god concept. That is a pretty good start on the identity-soul question itself, as many people believe in a soul/spirit-identity, but not necessarily in the gods of our major religions. I myself fall inside the 41% below (although I do not carry a ‘belief’, rather simply a ‘lack of allow-for’), and I disagree with Shermer’s above claim.8

In similar fashion, and through habits one can begin to observe and document, fake skeptics block millions of public observations as ‘anecdotes’ – for instance regarding grain toxicity, skin inflammation, chronic fatigue syndrome, autism, etc. – through either calling those observers liars, or declaring that ‘it doesn’t prove anything’. Of course they do not prove anything. That is not the step of science they are being asked to serve in the first place.

2.  Complexifuscating – expanding the information around a single datum of observation such that it evolves into an enormously complicated claim, or a story expanded to the point of straw man, or the addition of a piece of associatively condemning information. These three elements in bold are all key signature traits which ethical skeptics examine in order to spot the habits of fake skeptics: observation-versus-claim blurring, straw man fallacy, and associative condemnation.

Example:  Michael Shermer regularly dismisses as ‘anecdote’ observations in cases where citizen observations run afoul of the corporate interests of his clients. This is a common habit of Steven Novella as well. The sole goal involved (the opposite of anecdote is narrative) is to obfuscate liability on the part of their pharmaceutical, agri-giant or corporate clients. This is why they run to the media with such ill-formed and premature conclusions as the ‘truth of science’. In the example below, Michael condemned a medical case report by a man – rather than treating such reports as cases for an argument of plurality and study (there are literally hundreds of thousands of similar cases) – by converting it into a stand-alone complex claim contending ‘proof of guidelines for nutrition’ (dismissed by cherry sorting), and associatively condemning it by leveraging social skeptic bandwagon aversion to the ‘Paleo Diet’. I assure you that if this man had lost weight by being a vegan, Michael would never have said a thing. All we have to do is replicate this method – straw man other similar reports and continually reject them as anecdote – and we will end with a de facto scientific conclusion.

Of course I am not going to adopt the Paleo Diet as authority from just this claim alone – but neither have I yet seen any good comparative cohort studies on the matter. Why? Because it has become a career-killer issue – by the very example of celebrity menacing provided below – pretending to champion science, but in reality championing political and corporate goals:

This is not skepticism in the least. It is every bit the advocacy which could also be wound up in the man’s motivation to portray his case. The simple fact is – if one can set aside one’s aversion to cataloging information from the unwashed and non-academic – that it is exactly the stories of family maladies, personal struggles, and odd and chronic diseases which we should be gathering as intelligence, especially if the story is as ‘complicated’ as Michael Shermer projects here. These case stories help us rank our reduction hierarchy and sets of alternatives, afford us intelligence data (yes, the plurality of anecdote is indeed data) on the right questions to ask – and will inevitably lead us to impactful discoveries which we could never capture under linear, incremental, closed, appeal-to-authority-only science – science methods inappropriately applied to asymmetric, complex systems.

An experiment is never a failure solely because it fails to achieve predicted results. An experiment is a failure only when it also fails adequately to test the hypothesis in question, when the data it produces don’t prove anything one way or another.

   ~ Robert Pirsig, American writer, philosopher and author of Zen and the Art of Motorcycle Maintenance: An Inquiry into Values (1974)

In similar regard, and as part of a complexifuscation strategy in itself, Michael Shermer in the tweet above misses the true definition of anecdote as a pejorative inside a scientific context – that is, a case which fails to shed light on the idea in question; and not the distinguishing criterion that the case ‘stands alone’, unsponsored by science. All data can easily be made to stand alone, if you want it to be so.

3.  Over-Playing/Streetlight Effect – to assume that once one has bothered to keep/analyze data, one now holds some authoritative piece of ‘finished science’. The process of taking a sponsorship level of reliable data and, in the absence of any real probative value, declaring acceptance and/or peer review success. A form of taking the easy (called ‘reliable’) route to investigation.

Streetlight Effect

/philosophy : science : observation intelligence/ : a type of observational bias that occurs when people only search for something where it is easiest to look.

Example:  A study in the March 2017 Journal of Pediatrics, as spun by narrative promoter CNN,9 incorrectly draws grand conclusions from 3- and 5-year-old data derived from questionnaires distributed to cohorts of breastfeeding and bottle-feeding Irish mothers. While useful, the study is by no means the conclusive argument which CNN touts it as being (no surprise here); it is rendered vulnerable to selection, anchoring and inclusion biases.

This is a keen example of an instance where the traditional approach and reputed reliability of the observation method were regarded as high, but the actual probative value of the observations themselves was low. And we opted for the easier route as a result. This case was further then taken by several advocacy groups as constituting finished science (pseudo-theory) – an oft-practiced bias inside data-only medical studies.

Not to mention the fact that other, more scientific alternative study methods, based upon more sound observational criteria, could have been employed instead. Data is now easy – so now let’s exploit ‘easy’ to pretend science, and use our interns to save money – except in the instance where ‘easy’ might serve to promote an idea we do not like.

The simple creation of a single data study hinges on the constraint, assumption and inclusion biases/decisions made by the data gatherers and analysts. It is in reality no better than an anecdote, when one considers the simple notion that both cohorts are going to spin their preferred method – nay, their child – as being/performing superior. This is an anecdote – and as such it is acceptable for inclusion to establish plurality, yes – but it is by no means the basis of a conclusion. Essentially the same mistake as the first point above, just the opposite side of the same coin.

Anecdote is not equal to conclusion. Neither is its exclusion.

   ~ The Ethical Skeptic

And conclusions are what fake skeptics are all about. Easily denied conclusions, easily adopted conclusions. Because that is ‘skeptical’.

The Ethical Skeptic, “‘Anecdote’ – The Cry of the Pseudo-Skeptic”; The Ethical Skeptic, WordPress, 7 Jan 2018; Web, https://wp.me/p17q0e-6Yx
