Sciebam – Religion with P-Values

At the heart of sciebam, resides what is in essence an embargo of specific ideas, avenues of research, and methods of investigation. It is a simultaneous appeal to truth, appeal to embargo, and proselytization all in one. Religion dressed up in p-values and confidence intervals.

Years ago, I was hired as President of a materials research company, brought in to resolve the capital-consuming logjam which had developed among the lab’s researchers. The lab and its research team were funded under a scope of finding the solution to an impasse inside material physics which had existed in industry for decades (many of the particulars are/were 32 C.F.R. 2001 Classified). A grand set of assumptions as to what was possible and impossible had grown like ivy around the state of knowledge inside the subject. The team’s task was to break this impasse in knowledge, so that critical applications of the resulting technology could be developed inside the aerospace and energy industries.

To make a long story very short, we were successful in this development – the first time the particular achievement had ever been accomplished (by mankind at least). I will never forget the 3:45 pm Friday call from one of the techs, “Mr. G, we got 18 successful tests out of 240 iterations, and they all pertained to a single parameter and single reactor setting.” That was exactly what we had been looking for. While this happenstance does not make me any kind of expert in material physics, it did derive in part from a prowess in skepticism and the processes of science. It stemmed from a discernment of the difference between real and protect-my-job scientific activity.

I had the team immediately lock access to the building, shut off the internet and phones, and station an armed guard at the front door – and not allow anyone to enter or leave the facility until I could come (I was a mere 5 minutes away at a key vendor’s shop) and debrief them. This also involved a team celebration of course. It was an exciting evolution for everyone involved. The board members were all very noteworthy and powerful persons, who then unfortunately split into factions according to their level of greed and desire for control of both company and intellectual property – each faction encouraging me to join them.

Aside from issues of greed and power, the chief principle which served as the basis of the logjam inside that particular research involved the conflation of two differing notions of science. One which venerates challenge and incremental discovery, versus another which prioritizes the control and career stability achievable through exploiting established knowledge. The reason I was hired in the first place was twofold: one, I don’t venerate old untested knowledge, and two, I am not intimidated in the least by scientists flaunting degrees, exclusionary lexicon, and technical notation jargon. The research team knew it too. Such barriers to entry can only stand for so long before one eventually figures out the game. Within a week, I was pressing the old guard past their ability to defend the status quo, and had begun developing new testing approaches, lab procedures, and shift dockets.

Sciebam – Consensus Through Appeal to Truth (Implicit Embargo)

In similar principle, much of what is conducted in the name of science today is, for the most part, not science – rather a technical process of career qualification through methodical and linear confirmation bias, a set of activities which I call sciebam. Such activity contrasts with the discipline of science along the following lines:

Science (Latin: scī́mus/sciḗmus – ‘we know / we will know’)¹ – leveraging challenging thinking, deductive falsification, straightforward complexity, and consilience to infer a critical path of novel comprehension – one prosecutes (pursues) truth.

Sciebam (Latin: sciēbā́mus – ‘we knew’)² – exploiting assumption, abduction, panduction, complicated simplicity, and linear/statistical induction to confirm an existing or orphan understanding – one is holder of the truth.

†See The Distinction Between Comprehension and Understanding (The Problem of Abduction)

At the heart of sciebam, resides what is in essence an embargo of specific ideas, avenues of research, and methods of investigation. Of course most researchers do not typically perceive their habits in such fashion, so it often takes an outsider to come in and shake things up. To in effect, sweep out the cobwebs of sciebam and renew a passion for true discovery (see The Strategic Mindset).

There exists a principle of philosophy that I observe, which falls along the lines of Sir Isaac Newton’s third law of motion, also known as the law of action-reaction. That is, unless one is very careful, an appeal to truth will almost always be accompanied by, at the very least, an implicit appeal to embargo. Nihilism is the embargo which comes commensurate with the ‘truth’ of not being able to measure outside the bounds of physical reality. Collectivism suffers the embargo which results from the ‘truth’ of a successful deployment of capital. Freedom suffers the embargo which arrives under the awesome specter of an impending cataclysm’s ‘truth’.

When an advocate appeals to the authority of truth by means of enforcing an embargo of competing ideas – they are typically protecting a lie, or at least hyperbole. Even if that advocate is accidentally correct about their truth in the end, such a mechanism still constitutes a lie because of the way in which the truth was enforced – by means of explicit or implicit false dichotomy, and enforcement of a false null hypothesis with no competing alternative idea. Accuracy in such a case is a mere triviality.

Ethical Skeptic’s Law – if science won’t conduct the experiment, society will force the experiment. One can only embargo an idea for so long.

An Example of Appeal to Truth (Implicit Embargo)

Psychologist (no doubt psychologists often fall prey to this error – observe here which disciplines dominate inside fake skepticism) Dr. Gary Marcus at the University of Washington is a proponent of an embargo regarding any research which pursues a pathway other than material nihilism – particularly as it pertains to arguments regarding the mind/brain relationship and near death experiences. In an interview with Alex Tsakiris of Skeptico several years back, he leads into his argument with these critical thesis statements (they are statements, not arguments):³

I don’t doubt that there’s a [Mind≡Brain] phenomena that needs to be explained, but I doubt that the explanation is that the brain is not [the entire source] of the experience that’s being processed [in an NDE]. I cannot conceive of how that would be true. (Embargo by means of premature rationality and appeal to authority – sciebam, or a religion passed off as science)

Discussion about the brain is basically the province of neuroscience. (Appeal to Authority, Appeal to Privation/Province)

My understanding is that the mind is essentially a property of the brain. (Appeal to Self Authority and Rationality)

I don’t see a lot of room for any alternative [Brain⊆Mind] which does not have something to do with [consciousness being constrained solely to] the physiology of the brain [Mind≡Brain]. (Appeal to Embargo as an Appeal to Truth)

~ Psychologist Dr. Gary Marcus, Skeptico, 6 Aug 2015

Please take note, ethical skeptic, of the extraordinary amount of ambiguity, authority, and finality in these statements; crafted so as to appear non-declarative in nature, and posed inside a fake context of objectivity. Never fall for this. This is an appeal to authority, coupled with an appeal to embargo. This would not be a problem if it were merely a personal metaphysical notion, or if this line of thinking were not then further enforced inside Dr. Marcus’ discipline of study. This is where the error occurs.

These are not statements which should pass peer review (however they often do), because of the incumbent ambiguity and lack of epistemological backing. But his meaning inside them is illuminated in the rest of that same interview with Alex Tsakiris. He is both making a final claim to conclusiveness about the nature of mind and brain, and asserting that he does not have to back up any of these claims. This was very much akin to the ‘it can’t be done’ proclamations of the older scientists in the lab over which I presided, described earlier in this article.

Since in this particular deliberation we have two necessary constructs and a constraint in terms of the discipline’s history of study, plurality under Ockham’s Razor therefore exists. Two valid ‘working hypotheses’ or constructs (placeholders until a true hypothesis can be developed) are at play. Notice that I do not encourage embargo of research regarding Dr. Marcus’ preferred construct (I want that idea researched as well). In contrast, he chooses to embargo the ideas of his opposing camp – to use ignorance as a playground for consensus. I am perfectly fine with a [Mind≡Brain] reality. I do not, however, want it forced upon me as a religion, nor even worse, as a religion masquerading as science (sciebam).

Our chief problem arises when people who purport to be persons of ‘science’, as does Dr. Marcus above, try to push a concept or construct which is not even mature enough to be a true scientific hypothesis, to the status of final truth – skipping all the intervening steps. Then they appeal to their authority as a PhD, to make a claim inside a discipline which is either not their home discipline (Plaiting Fallacy), or is a discipline in which everyone has a stake, not just psychologists – that of metaphysical choice.

Dr. Marcus is therefore a Type I Expert. He cannot appeal to rationality or authority here – but does so anyway.

Type I: Bowel Movement Expertise

The bowel movement expert (derived from an activity in which everyone is an expert, but some regard their own expertise therein as superior – a perfect storm of ‘their shit don’t stink’ and ‘don’t know jack shit’) is an advisor inside a subject which is part of everyday life, or which is commonly experienced by many or most people.

In other words, I am just as much an expert in the construct [Mind≡Brain] as is Dr. Marcus – he cannot ethically bully me with his PhD in Psychology into accepting his religion, no matter the lexicon, no matter the p-values, no matter the confidence intervals. If he alternately demanded that I accept Heaven as a reality, I would object on the same basis of argument. As a skeptic, what I observe is that the materialist is too lazy and agency-insistent to wait for an actual hypothesis of scientific discipline to mature – so they take the concept/construct that our reality is the only possible basal reality (singular, closed I E and M fields, and non-derivative) and enforce it by means of a heavy-handed embargo. Such is a mistake of skepticism.

These are the challenges which a person like me faces when tasked to manage a legacy scientific effort – the challenges of sciebam.

Einfach Mechanism

/philosophy : skepticism : pseudo-science : alternative reduction/ : an idea which is not yet mature under the tests of valid hypothesis, yet is installed as the null hypothesis or best explanation regardless. An explanation, theory or idea which sounds scientific, yet resolves a contention through bypassing the scientific method, and is moreover installed as truth thereafter solely by means of pluralistic ignorance around the idea itself. Pseudo-theory which is not fully tested at its inception, nor ever held to account thereafter.

Anomie

/philosophy : ethics : circularity/ : a condition in which a club, group or society provides little or negative ethical guidance to the individuals who inhabit it or craft its direction.

Pluralistic Ignorance

ad populum – a condition wherein the majority of individuals believe without evidence, either that everyone else assents or that everyone else dissents upon a specific idea.

ad consentum – a self-reinforcing cycle wherein the majority of members in a body believe without evidence that a certain consensus exists, and therefore support that idea as consensus as well.

ad immunitatem – a condition wherein the majority of individuals are subject to a risk, however most individuals regard themselves to reside in the not-at-risk group – often because risk is not measured.

ad salutem – a condition wherein a plurality or majority of individuals have suffered an injury, however most individuals regard themselves to reside in the non-injured group – often because they cannot detect such injury.

If an idea is not even mature enough to qualify as a scientific hypothesis, it also cannot be installed as truth, no matter how ‘likely’ you regard it to be. Such error is made even worse if that truth comes bundled with an implicit embargo of any competing research (see The Ethical Skeptic’s definition of religion). If Dr. Marcus were to pursue his notions above as just one ‘working hypothesis’, then that would be ethically acceptable – however he is instead enforcing truth as an expert. He is taking a God position.

There exist at least 1700 other Spiral of Silence-styled efforts at oppression (after Elisabeth Noelle-Neumann), on the part of people who have declared themselves to be ‘experts in all that is wrong’, just as Dr. Marcus has done above. We at The Ethical Skeptic oppose such arrogance and false parsimony, even if we end up defending a hypothesis which itself turns out to be false in the end. Being found wrong is informative – being correct is not. In other words, what we at The Ethical Skeptic object to is pseudo-science, underpinned by pseudo-skepticism.

The Ethical Skeptic, “Sciebam – Religion with P-Values”; The Ethical Skeptic, WordPress, 17 Feb 2022; Web, https://theethicalskeptic.com/?p=38018

Panduction: The Invalid Form of Inference

One key form of invalid inference on the part of fake skeptics, if not the primary form, resides in the methodology of panductive inference. A pretense of Popper demarcation, panduction is employed as a masquerade of science in the form of false deduction. Moreover, it constitutes an artifice which establishes the purported truth of a favored hypothesis by means of the extraordinary claim of having falsified every competing idea in one fell swoop of rationality. Panduction is the most common form of pseudoscience.

Having just finished my review of the Court’s definition of malice and oppression in the name of science, as outlined in the Dewayne Johnson vs. Monsanto Company case, I found my thinking broaching a category of pseudoscience which is practiced by parties who share similar motivations to the defendant in that landmark trial. Have you ever been witness to a fake skeptic who sought to bundle all ‘believers’ into one big deluded group, who all hold or venerate the same credulous beliefs? Have you ever read a skeptic blog claiming a litany of subjects to be ‘woo’ – yet fully unable to cite any evidence whatsoever which served to epistemologically classify that embargoed realm of ideas under such an easy categorization of dismissal? What you are witness to is the single most common, insidious and pretend-science habit of fake skeptics: panduction.

It’s not that all the material contained in the embargoed hypothesis realm has merit. Most of it does not. But what is contained therein, even and especially in being found wrong, resides along the frontier of new discovery. You will soon learn on this journey of ethical skepticism that discovery is not the goal of the social skeptic; rather, that is exactly what they have been commissioned to obfuscate.

Science to them is nothing more than an identity which demands ‘I am right’.

There exist three forms of valid inference, in order of increasing scientific gravitas: abduction, induction and deduction. Cleverly downgrading science along these forms of inference, in order to avoid more effective inference methods which might reveal a disliked outcome, constitutes another form of fallacy altogether, called methodical deescalation. We shall not address methodical deescalation here, but rather a fourth common form of inference, which is entirely invalid in itself. Panduction is a form of ficta rationalitas; an invalid attempt to employ critical failures in logic and evidence in order to condemn a broad array of ideas, opinions, hypotheses, constructs and avenues of research as being Popper-falsified, when in fact nothing of the sort has been attained. It is a method of proving yourself correct by impugning everyone and everything besides the idea you seek to protect, all in one incredible feat of armchair or bar-stool reasoning. It is often peddled as critical thinking by fake skeptics.

Panduction is a form of syllogism derived from extreme instances of Appeal to Ignorance, Inverse Negation and/or Bucket Characterization from a Negative Premise. It constitutes a shortcut attempt to promote one idea at the expense of all other ideas, or to kill an array of ideas one finds objectionable. Nihilists, for example, employ panduction as a means to ‘prove’ that nothing exists aside from the monist and material entities which they approve as real. They maintain the fantasy that science has proved that everything aside from what they believe is false by a Popperian standard of science – i.e. deduced. This is panduction.

Panduction

/philosophy : invalid inference/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop ‘everything but what my club believes’ as constituting one group of bad people, who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series of studies nor replication methodology can be derived from this type of ‘study’, which in essence serves to make it pseudoscience. This is a common ‘study’ format conducted by social skeptics masquerading as scientists, to pan people and subjects they dislike.

There are three general types of Panduction. In its essence, panduction is any form of inference used to pan an entire array of theories, constructs, ideas and beliefs (save for one favored and often hidden one), by means of the following technique groupings:

  1. Extrapolate and Bundle from Unsound Premise
  2. Impugn through Invalid Syllogism
  3. Mischaracterize through False Observation

The first is executed through attempting to falsify entire subject horizons through bad extrapolation. The second involves poorly developed philosophies of denial. Finally, the third involves the process of converting disliked observations, or failures to observe, into favorable observations:

Panduction Type I

Extrapolate and Bundle from Unsound Premise – Bucket Characterization through Invalid Observation – using a small, targeted or irrelevant sample of linear observations to extrapolate and further characterize an entire asymmetric array of ideas other than a preferred concealed one. Falsification by:

Absence of Observation (praedicate evidentia modus ponens) – any of several forms of exaggeration or avoidance in qualifying a lack of evidence, logical calculus or soundness inside an argument. Any form of argument which claims a proposition consequent ‘Q’, yet features a lack of the qualifying modus ponens ‘If P then’ premise in its expression – rather, merely implying ‘If P then’ as its qualifying antecedent. This serves as a means of surreptitiously avoiding a lack of soundness or lack of logical calculus inside that argument, and moreover of enforcing only its conclusion ‘Q’ instead. A ‘There is no evidence for…’ claim made inside a condition of little study or full absence of any study whatsoever (formalized in the brief sketch following this list).

Insignificant Observation (praedicate evidentia) – hyperbole in extrapolating or overestimating the gravitas of evidence supporting a specific claim, when only one examination of merit has been conducted, insufficient hypothesis reduction has been performed on the topic, a plurality of data exists but few questions have been asked, few dissenting or negative studies have been published, or few or no such studies have indeed been conducted at all.

Anecdote Error – the abuse of anecdote in order to squelch and panduct an entire realm of ideas. This comes in two forms:

Type I – a refusal to follow up on an observation or replicate an experiment does not relegate the data involved to an instance of anecdote.

Type II – an anecdote cannot be employed to force a conclusion, such as using it as an example to condemn a group of persons or topics – but an anecdote can, however, be employed to introduce Ockham’s Razor plurality. This is a critical distinction which social skeptics conveniently neither realize nor employ.

Cherry Picking – pointing to a talking sheet of handpicked or commonly circulated individual cases or data that seem to confirm a particular position, while ignoring or denying a significant portion of related context cases or data that may contradict that position.

Straw Man – misrepresentation of either an ally’s or an opponent’s position or argument, or fabrication of such in the absence of any stated opinion.

Dichotomy of Specific Descriptives – a form of panduction wherein anecdotes are employed to force a conclusion about a broad array of opponents, yet are never used to apply any conclusion about one’s self or one’s favored club. Specific bad things are only done by the bad people, but very general descriptives of good apply when describing one’s self or club. Specifics on others who play inside disapproved subjects; general, nebulous descriptives on self-identity and how it constitutes acceptable ‘science’ or ‘skepticism’.

Associative Condemnation (Bucket Characterization and Bundling) – the attempt to link controversial subject A with personally disliked persons who support subject B, in an effort to impute falsehood to subject B and frame its supporters as whackos. Guilt through bundling association and lumping all subjects into one subjective group of believers. This will often involve a context shift or definition expansion in a key word as part of the justification – spinning, for example, the idea that those who research pesticide contribution to cancer are also therefore flat-Earthers.
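To make the logical structure of the first item above (Absence of Observation) concrete, the contrast below is a minimal formalization in standard propositional notation – the symbols P and Q are illustrative placeholders of my own, not notation taken from the source definition.

% Valid modus ponens: both the antecedent P and the conditional P -> Q are established premises.
\[
P, \quad P \rightarrow Q \;\;\vdash\;\; Q
\]
% The 'absence of observation' form asserts the consequent alone, with the qualifying
% premises only implied and never demonstrated as sound:
\[
(\text{implied: } P \rightarrow Q), \quad (\text{unexamined: } P) \;\;\not\vdash\;\; Q
\]

The second form is what a ‘there is no evidence for…’ claim amounts to when little or no study has actually been conducted: the conclusion is enforced while its supporting premises are never placed on the table.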

Panduction Type II

Impugn through Invalid Syllogism – Negative Assertion from a Pluralistic, Circular or Equivocal Premise – defining a set of exclusive premises to which the contrapositive applies, and which serves to condemn all other conditions.

Example (note that ‘paranormal’ here is defined as that which a nihilist rejects as being even remotely possible):

All true scientists are necessarily skeptics. True skeptics do not believe in the paranormal. Therefore no true scientist can research the paranormal.

All subjects which are true are necessarily not paranormal. True researchers investigate necessarily true subjects. Therefore to investigate a paranormal subject makes one not a true researcher.

All false researchers are believers. All believers tend to believe the same things. Therefore all false researchers believe all the same things.

Evidence only comes from true research. A paranormal investigator is not a true researcher. Therefore no evidence can come from a paranormal subject.

One may observe that the above four examples – thought which rules social skepticism today – are circular in syllogism and can only serve to produce the single answer which was sought in the first place. But by ruling out entire domains of theory, thought, construct, idea and effort, one has essentially panned everything except that which one desires to be indeed true (without saying as much). It would be like Christianity pointing out that every single thought on the part of mankind is invalid, except what is in the Bible – the Bible being the codification equivalent of the above four circular syllogisms into a single document.

Panduction Type III

Mischaracterize through False Observation – Affirmation from Manufacturing False Positives or Negatives – manipulating the absence of data or the erroneous nature of various data collection channels to produce false negatives or positives.

Panduction Type III is an extreme form of an appeal to ignorance. In an appeal to ignorance, one is faced with observations of negative conditions which could tempt one to infer inductively that there exists nothing but the negative condition itself. An appeal to ignorance simply reveals one of the weaknesses of inductive inference. Let’s say that I find a field which a variety of regional crow murders frequent. So I position a visual motion-detection camera on a pole across from the field in order to observe the crow murders which frequent that field. In my first measurement and observation instance, I observe all of the crows to be black. Let us further assume that I then repeat that observation exercise 200 times on that same field over the years. From this data I may well develop a hypothesis, including a testable mechanism, in which I assert that all crows are black. I have observed a large population size, and all of my observations were successful; to wit: I found 120,000 crows to all be black. This is inductive inference. Even though this technically would constitute an appeal to ignorance, it is not outside of reason to assert a new null hypothesis – that all crows are black – because my inference was derived from the research and was not a priori favored. I am not seeking to protect the idea that all crows are black simply because I or my club status are threatened by the specter of a white crow. The appeal to ignorance fallacy is merely a triviality in this case, and does not ‘disprove’ the null (see the Appeal to Fallacy). Rather, it stands as a caution that plurality should be monitored regarding the issue of all crows being black.

But what if I become so convinced that the null hypothesis in this case is the ‘true’ hypothesis, or had even preferred that idea in advance because I was a member of a club which uses a black crow as its symbol? In such a case I approach the argument with an a priori belief which I must protect. I begin to craft my experimental interpretation of measurement such that it conforms to this a priori mandate in understanding. This will serve to produce four species of study observation procedural error, which are in fact pseudoscience – the clever masquerade of science and knowledge:

A.  Affirmation from Result Conversion – employing a priori assumptions as filters or data converters in order to produce desired observational outcomes.

1.  Conversion by a priori Assumption (post hoc ergo propter hoc). But what if the field I selected bore a nasty weather phenomenon of fog, on an every-other-day basis? Further, this fog obscured a good view of the field, to the point where I could only observe the glint of sunlight off the crows’ wings – which causes several of them to appear white, even though they are indeed black. But because I ‘know’ there are no white crows, what if I then use a conversion algorithm I developed to count the glints inside the fog, and register them as observations of black crows – even though a white crow could also cause the same glint? I have created false positives by corrupted method.

2.  Conversion by Converse a priori Assumption (propter hoc ergo hoc – aka plausible deniability). Further, what if I assumed that any time I observed a white crow, this would therefore be an indication that fog was present, and that a condition of Data Conversion by a priori Assumption was therefore in play? I would henceforth never be able to observe a white crow at all, finding only results which conform to the null hypothesis – which would now be an Omega Hypothesis (see The Art of Professional Lying: The Tower of Wrong).
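To make the two conversion assumptions above concrete, here is a minimal simulation sketch. It is an illustration only – every parameter in it (fog frequency, flock size, white-crow rate) is a hypothetical chosen for the sketch, not anything measured in the example.

import random

random.seed(42)

WHITE_CROW_RATE = 0.001   # hypothetical: 1 in 1,000 crows is actually white
FOG_RATE = 0.5            # hypothetical: fog obscures the field every other day

def observe_day(flock_size=600):
    """Return the observer's *recorded* counts for one day at the field."""
    true_white = sum(random.random() < WHITE_CROW_RATE for _ in range(flock_size))
    true_black = flock_size - true_white
    foggy = random.random() < FOG_RATE

    if foggy:
        # Conversion by a priori assumption: every glint in the fog is registered
        # as a black crow, even though a white crow would produce the same glint.
        return {"black": flock_size, "white": 0}

    # Conversion by converse a priori assumption (plausible deniability): any white
    # crow seen on a 'clear' day is taken as proof that fog was actually present,
    # so the sighting is reclassified as a black-crow glint.
    return {"black": true_black + true_white, "white": 0}

totals = {"black": 0, "white": 0}
for _ in range(200):                      # 200 observation days, as in the example
    day = observe_day()
    totals["black"] += day["black"]
    totals["white"] += day["white"]

print(totals)   # the recorded white count is always 0 -- the method cannot record a falsifier

Whatever the true rate of white crows, the recorded white count is zero by construction; the conversion assumptions, not the field, determine the outcome.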

Example: Viking Mars Lander Data Manipulation

Two Viking landers were sent to Mars, in part to search for signs of life. NASA researchers took soil samples the Viking landers scooped from the surface and mixed them with nutrient-rich water. If the soil harbored life, the theory went, the soil’s microbes would metabolize the nutrients in the water and release a certain signature of radioactive molecules. To their pleasant surprise, the nutrients were metabolized and radioactive molecules were released – suggesting that Mars’ soil contained life. However, the Viking probes’ other two experiments found no trace of organic material, which prompted the question: if there were no organic materials, what could be doing the metabolizing? So by assumption, the positive results from the metabolism test were dismissed as derivative of some other chemical reaction, which has not been identified to date. The study was used as a rational basis from which to decline further search for life on Mars, when it should have been appropriately deemed ‘inconclusive’ instead (especially in light of our finding organic chemicals on Mars in the last several months).¹

B. Affirmation from Observation Failure Conversion – errors in observation are counted as observations of negative conditions, and are then further used as data or as a data-screening criterion.

Continuing with our earlier example, what if on 80% of the days in which I observed the field full of crows, the camera malfunctioned and errantly pointed into the woods to the side, and I was fully unable to make observations at all on those days? Further, what if I counted those non-observing days as ‘black crow’ observation days, simply because I had defined a black crow as being the ‘absence of a white crow’ (pseudo-Bayesian science), instead of being constrained to only the actual observation of an actual physical white crow? Moreover, what if, because of the unreliability of this particular camera, any observations of white crows it presented were tossed out, so as to prefer observations from ‘reliable’ cameras only? This too is pseudoscience, in two forms (a brief sketch in code follows the two definitions below):

1.  Observation Failure as Observation of a Negative (utile absentia) – a study which observes false absences of data, or creates artificial absence noise through improper study design, and then assumes such error to represent verified negative observations. A study containing field or set data in which there exists a risk that absences in measurement data will be caused by external factors which artificially serve to make the evidence absent, through risk of failure of detection/collection/retention of that data. The absences of data, rather than being filtered out of analysis, are fallaciously presumed to constitute bona fide observations of negatives. This is improper study design, which will often serve to produce an inversion effect (curative effect) in such a study’s final results. Similar to torfuscation.

2.  Observation Failure as Basis for Selecting for Reliable over Probative Data (Cherry Sorting) – when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data, based upon the a priori and often subjective claim that the observation was ‘not reliable’. This ignores the probative value of the observation, and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative and can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).
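Below is a companion sketch of these two forms, continuing the crow example. Again, every number in it (malfunction rate, white-crow visit rate, day count) is a hypothetical chosen only to expose the mechanism, not a figure drawn from any study cited here.

import random

random.seed(7)

MALFUNCTION_RATE = 0.8      # hypothetical: camera points into the woods 80% of days
WHITE_CROW_DAY_RATE = 0.02  # hypothetical: a white crow actually visits on 2% of days

recorded_negatives = 0      # days logged as 'no white crow'
recorded_positives = 0      # days logged as 'white crow observed'
actual_positives = 0        # days a white crow was really present

for _ in range(1000):
    white_crow_present = random.random() < WHITE_CROW_DAY_RATE
    actual_positives += white_crow_present
    camera_failed = random.random() < MALFUNCTION_RATE

    if camera_failed:
        # utile absentia: a failure to observe is booked as an observation
        # of the negative condition ('absence of a white crow').
        recorded_negatives += 1
    elif white_crow_present:
        # cherry sorting: the camera that did catch a white crow is deemed
        # 'unreliable', so its probative observation is discarded entirely.
        pass
    else:
        recorded_negatives += 1

print(f"actual white-crow days : {actual_positives}")
print(f"recorded positives     : {recorded_positives}")
print(f"recorded negatives     : {recorded_negatives}")
# The record shows zero positives and a mountain of 'negative evidence',
# even though the underlying reality contained genuine positive signal.

The inversion effect described under utile absentia is visible directly: the more often the collection channel fails, the stronger the recorded ‘evidence of absence’ becomes.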

These two forms of converting observation failures into evidence in favor of a particular position are highlighted no better than by studies which favor healthcare-plan diagnoses over cohort and patient-input surveys. Studies such as the Dutch MMR-Autism Statistical Meta-Analysis or the Jain-Marshall Autism Statistical Analysis failed precisely because of the two above fallacious methods regarding the introduction of data – relying only upon statistical analytics of risk-sculpted and cherry-sorted data, rather than upon direct critical path observation.

 Example: Jain-Marshall Autism Study

Why is the 2015 Jain-Marshall Study of weak probative value? Because it took third-party, unqualified (health care plan) sample interpretations of absences (these are not observations – they are ‘lack-of’ observations – which are not probative data to an intelligence specialist, nor to a scientist – see pseudo-theory) from vaccinated and non-vaccinated children’s final medical diagnoses at ages 2, 3, and 5. It treated failures in the data collection of these healthcare databases as observations of negative results (utile absentia). This is a similar data vulnerability to the National Vaccine Injury Compensation System’s ‘self-volunteering’ of information and its limitation of detection to within 3 years. It favors a bad, non-probative data repository simply because of its perception as being ‘reliable’ as a source of data. This fails to catch 99% of signal observations (Cherry Sorting), and there is a good demonstrable record of that failure to detect actual injury circumstances.²

One might chuckle at the face-value ludicrousness of either Panduction Type III A or B. But Panduction Type III is regularly practiced inside peer-reviewed journals of science. Its wares constitute the most insidious form of malicious and oppressive fake science. One can never expect a journalist to understand why this form of panduction is invalid, but one should certainly expect it of their peer review scientists – those who are there to protect the public from bad science. And of course, one should expect it from an ethical skeptic.

The Ethical Skeptic, “Panduction: The Invalid Form of Inference”; The Ethical Skeptic, WordPress, 31 Aug 2018; Web, https://wp.me/p17q0e-8c6

Proof Gaming

Science begins its work based upon a principle called necessity, not upon proof. Science then establishes proof, if such can be had. Popper’s critical rationality, as it turns out, involves more than irrefutable proof, contrary to what gaming social skeptics might contend. Proof Gaming is a method of tendering an affectation of sciencey methodology, yet still effectively obfuscating research and enforcing acceptable thought.

In order for science to begin to prove the existence of the strange animal which tens of thousands of credible persons report roaming in the woods, I must first bring in its dead carcass. But if I bring in its dead body, then I have no need for science to examine whether such an animal exists in the first place; I have already done the science. The demand that I bring in a dead body, given a sufficient level of Ockham’s Razor necessity-driving information, is a false standard threshold for science to begin its diligence, and such a demand constitutes pseudoscience.

Now of course, Karl Popper in his brief entitled Die beiden Grundprobleme der Erkenntnistheorie contended that science should be demarcated by the proper assignment of truth values to its assertions, or ‘sentences’: ergo, science is the set of sentences with justifiably assigned truth values.¹ This was called a mindset of ‘critical rationality’.¹ It was a step above simple scientific skepticism. The task of the philosophy of science is to explain suitable methods by which these assignments are then properly made.¹ However, one can extend the philosophy of science to construct elaborate methods which prevent the assimilation of ideas or research which one disfavors, by gaming these methods such that philosophy stands and acts in lieu of science. One such trick of conducting science research by means of philosophy alone, all from the comfort of one’s armchair, is called Proof Gaming. Popper contended later in his work, as outlined by the Internet Encyclopedia of Philosophy here:

As a consequence of these three difficulties [the problem or necessity of induction] Popper developed an entirely different theory of science in chapter 5, then in Logik der Forschung. In order to overcome the problems his first view faced, he adopted two central strategies. First, he reformulated the task of the philosophy of science. Rather than presenting scientific method as a tool for properly assigning truth values to sentences, he presented rules of scientific method as conducive to the growth of knowledge. Apparently he still held that only proven or refuted sentences could take truth values. But this view is incompatible with his new philosophy of science as it appears in his Logik der Forschung: there he had to presume that some non-refuted theories took truth values, that is, that they are true or false as the case may be, even though they have been neither proved nor refuted [William of Ockham’s ‘plurality’]. It is the job of scientists to discover their falsity when they can. (IEoP)¹

Social skeptics will cite the base logic of Popper’s first work, yet omit his continued work on induction (Logik der Forschung) – a process of sleight-of-hand in argument. So critical rationality, as it turns out, involves more than irrefutable proof, contrary to what gaming social skeptics might contend. Science begins its work based upon a principle called necessity, not upon proof. Science then establishes proof, if such can be had. Sadly, much of science cannot be adjudicated on anything like what we would call iron-clad proof, and instead relies upon a combination of falsified antithetical alternatives or induction-based consilience.

The most anti-science position one can adopt is the insistence that the scientific method consists of one step: 1. Proof.

Proof gaming is the warning flag that someone not only fails to understand science, but moreover is terrified of it.

The gaming of this reality constitutes a process of obfuscation and deceit called Proof Gaming. Proof Gaming is the process of employing dilettante concepts of ‘proof’ as a football in order to win arguments, disfavor disliked groups or thought, or exercise fake versions or standards of science. Proof gaming presents itself in seven speciations. In the presence of sufficient information or Ockham’s Razor plurality, such tactics as outlined below constitute a game of pseudoscience – posing the appearance of science-sounding methods, yet still enabling obfuscation and a departure from the scientific method in order to protect the religious ideas one adopted at an early age. On the internet this is known as sea lioning.

Sea Lioning

/philosophy : deception : fake skepticism/ : a type of Internet trolling which consists of bad-faith requests for evidence, recitations, or repeated questions, the purpose of which is not clarification or elucidation, but rather an attempt to derail a discussion, to appeal to authority as if representing science or ultimate truth, or to wear down the patience of one’s opponent through half-listening and repeated requests. May involve invalid repetitive requests for proof which fall under a proof gaming fallacy and highlight the challenger’s lack of scientific literacy.

Let’s examine the seven types of this common social skeptic method of bad science – formal and informal fallacy alike.

Proof Gaming

/philosophy : argument : pseudoscience : false salience/ : employing dilettante concepts of ‘proof’ as a football in order to win arguments, disfavor disliked groups or thought, or exercise fake versions of science. Asking for proof before the process of science can ostensibly even start, knowing that plurality is what begins the scientific method, not proof – and further exploiting the reality that science very seldom arrives at a destination called ‘proof’ anyway. Proof gaming presents itself in seven speciations:

Catch 22 (non rectum agitur fallacy) – the pseudoscience of forcing the proponent of a construct or observation to immediately and definitively skip to the end of the scientific method and single-handedly prove their contention, circumventing all other steps of the scientific method and any aid of science therein; this monumental achievement being a prerequisite before the contention would ostensibly be allowed to be considered by science in the first place. A backwards scientific method, and a skipping of the plurality and critical work content steps of science. A trick of fake skeptic pseudoscience, which they play on non-science stakeholders and observers they wish to squelch.

Fictitious Burden of Proof – declaring a ‘burden of proof’ to exist when such an assertion is not salient under science method at all. A burden of proof cannot possibly exist if neither the null hypothesis, nor alternative theories, nor any proposed construct possesses a Popper-sufficient testable/observable/discernible/measurable mechanism; nor moreover, if the subject in the matter of ‘proof’ bears no Wittgenstein-sufficient definition in the first place (such as the terms ‘god’ or ‘nothingness’).

Herculean Burden of Proof – placing upon an opponent a ‘burden of proof’ which either constitutes arguing from ignorance (asking to prove an absence), is not relevant to science, or is not inside the relevant range of achievable scientific endeavor in the first place. Assigning a burden of proof which cannot possibly be provided/resolved by a human being inside our current state of technology or sophistication of thought/knowledge (such as ‘prove abiogenesis’ or ‘prove that only the material exists’). Asking someone to prove an absence proposition (such as ‘prove elves do not exist’).

Fictus Scientia – assigning to disfavored ideas a burden of proof which is far in excess of the standard regarded for acceptance, or even due consideration, inside science methods. Similarly, any form of denial of access to acceptance processes normally employed inside science (usually peer review, both at theory formulation and at completion). Requesting proof as the implied standard of science – while failing to realize, or deceiving opponents into failing to realize, that 90% of science is not settled by means of ‘proof’ to begin with.

Observation vs Claim Blurring – the false practice of calling an observation or data set a ‘claim’ on the observer’s part. This is done in an effort to subjugate such observations into the category of scientific claims which must therefore now be ‘proved’ or dismissed (the real goal: see Transactional Occam’s Razor Fallacy). In fact an observation is simply that – a piece of evidence or a cataloged fact. Its false dismissal under the pretense of being deemed a ‘claim’ is a practice of deception and pseudoscience.

As Science as Law Fallacy – conducting science as if it were being reduced inside a court of law or by a judge (usually the one forcing the fake science to begin with), through either declaring a precautionary principle theory to be innocent until proved guilty, or forcing standards of evidence inside a court of law onto hypothesis reduction methodology, when the two processes are conducted differently.

Proof by Non-falsifiability (Defaulting) – by selecting and promoting a pet theory or religious tenet which resides inside the set of falsification-prohibited constructs, SSkeptics establish the popular veracity of favored beliefs by default. Since their favored theory cannot be approached for falsification, it would be pseudoscience to compete it with other falsifiable constructs and claim it to be an outcome of the Scientific Method. Therefore the scientific method is disposed of, the non-falsifiable theory is assigned a presumption of truth, and it furthermore can never be disproved. A flavor of unseatable ‘King of the Hill’ status is established for pet SSkeptic beliefs.

All of these tactics are common practices which abrogate the role and discipline of science. Additionally, a key set of symptoms to look for, in determining that Proof Gaming is underway, is when

  1. one of these tactics is conducted inside a media spotlight, and when
  2. every media outlet is reciting the same story, and the same one-liner such as ‘extraordinary claims demand extraordinary evidence’, verbatim.

This is an indicator that a campaign is underway to quash a subject.

The sad reality is that on most tough issues, any one single person or small group of outsiders is poorly equipped to prove a subject beyond question. Popper recognized this later in his life’s work. We simply do not have the resources and time to accomplish such a task. SSkeptics know this and use it to their advantage. The people who are calling, for example, for research on the connection between cognitive delays in children and the potential role which immunizations have had in this, are simply asking for science to do the research. The response they receive is “You can’t prove the link” – thus we are justified in waging a media campaign against you and in scientifically ignoring this issue. This is Proof Gaming. Complicating this is the fact that the issue is broader than simply MMR and Thimerosal (the majority body of current study), involving the demand for science to research the causes of valid, skyrocketing levels of developmental delays, autoimmune disorders, and learning disabilities in our children. The issue bears plurality and precaution, but is answered with ignorance. The Proof Gamers who sling epithets such as “Deniers” and “Anti-vaccinationistas” and “Autistic Moms” are committing scientific treason. One should note that the handiwork of such SSkeptics is rarely characterized by outcomes of value or clarity, is typically destructive and control-oriented, and is reliably made media-visible (see our next Poser Science series on the tandem symbiosis between virtue signalling and malevolence).

Hype and name-calling have no place in pluralistic research, and the media pundits who commit this are practicing pseudoscience, plain and simple. Once plurality has been established, the games should be over. But not for Proof Gamers. Attacking proponents who have done case research in order to call for further science (not to prove the subject), for not “proving beyond a shadow of a doubt” their contentions, is an act of pseudoscience.

This fake demand for proof before research is Proof Gaming; it is an abrogation of the Scientific Method, and it is Pseudoscience.

epoché vanguards gnosis


¹  The Internet Encyclopedia of Philosophy, “Karl Popper: Critical Rationalism”; http://www.iep.utm.edu/cr-ratio/#H2.