The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data

The following is The Ethical Skeptic’s list, useful in spotting both formal and informal logical fallacies, cognitive biases, statistical breaches and styles of crooked thinking on the part of those in the Social Skepticism movement. It is categorized by employment groupings so that it can function as a context-appropriate resource in a critical review of an essay, imperious diatribe or publication by a thought-enforcing Social Skeptic. To assist this, we have organized the list inside an intuitive taxonomy of ten contextual categories of mischaracterization/misrepresentation:

Tree of Knowledge Obfuscation – The Ethical Skeptic

Misrepresentation of Evidence or Data

acatalepsia Fallacy – a flaw in critical path logic wherein one appeals to the Pyrrhonistic Skepticism principle that no knowledge can ever be entirely certain – and twists it into the implication that knowledge is therefore ascertained by the mere establishment of some form of ‘probability’. Further, that once a probability is established – no matter how plausible, slight or scant its representation of the domain of information might be – it is now accepted truth. Because all knowledge is only ‘probable’ knowledge, all one has to do is spin an apparent probability, and one has ascertained accepted knowledge. Very similar in logic to the Occam’s Razor aphorism citing that the ‘simplest explanation’ is the correct explanation.

Aleatoric Casuistry – employing statistical uncertainty, representative of unknowns that differ each time we run the same experiment, observation or situation, in order to push the idea that something very uncommon is actually common, can be, or has been common. A version of ‘you never know’ or ‘I bet this happens a lot’. A way of implying that an epistemological basis for a frequency of epistemic uncertainty exists, when indeed it does not.

Amateur Confidence Fallacy – the act of substituting simple probability math to manipulate outcomes, because one does not understand the difference, or because it looks like the same thing to a layman, in instances where only confidence intervals can be correctly applied under the scientific method.
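
The distinction can be made concrete. Below is a minimal Python sketch (the poll numbers are invented for illustration) contrasting the layman's naive point estimate with the 95% confidence interval that the sample actually supports:

```python
import math

# Hypothetical poll: 60 of 100 respondents answer "yes".
yes, n = 60, 100
p_hat = yes / n  # naive layman claim: "60% of the population agrees"

# The 95% confidence interval (normal approximation) is what the
# sample actually supports under the scientific method.
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"point estimate: {p_hat:.0%}")
print(f"95% confidence interval: ({lo:.1%}, {hi:.1%})")  # about (50.4%, 69.6%)
```

The interval spans nearly twenty percentage points – the simple probability figure alone conceals exactly the uncertainty the method is obligated to report.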

Amplanecdote – something which has occurred a scientifically significant number of times, or many times over, yet which is ignored through agenda or by means of a baseless claim of categorization as anecdote.

Anchoring Bias – when a person is over-reliant on the first piece of information they have encountered, or begins a branch-and-bound search or negotiation at a starting point which is arbitrary, yet which causes them to cede credence to that range from then on.

Anecdote Data Skulpting (Cherry Sorting) – when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data. Based upon the a priori and often subjective claim that the observation was ‘not reliable’. Ignores the probative value of the observation and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative, and can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).

Antiquing Fallacy – the dismissal of an entire field of data by showing its false, hoax based or dubious past inside a set of well known anecdotal cases. Also the instance where a thesis is deemed incorrect because it was commonly held when something else, clearly false, was also commonly held.

Apophenia Bias – the immediate dismissal of data as being manufactured, mis-analyzed, or reflecting random patterns which are ascribed empirical meaning, without having conducted the research into the data, nor possessing the background in the discipline involved in order to be able to make such a claim.

Appeal to Apati Fallacy – ‘Appeal to the hoax’ fallacy of presumption and irrelevance.  The attempt to impugn a subject by citing or fabricating a history or incident involving a hoax of one or more of the subject’s contentions.  The fallacy resides in the fact that if it exists, there is porn of it; and likewise, whether it exists or not, there is a hoax of it.

Arrival Bias – the tendency to tender more credibility or gravitas to information which is hot off the press or has just been introduced.

Ascertainment Bias – a form of inclusion or exclusion criteria error where the mechanism of sampling, specimen selection, data screening or sub-population selection is inherently flawed, is developed based upon an inadequate or sophomoric view of the observations base or produces skewed representations of actual conditions.

Associative Condemnation – the attempt to link controversial subject A with personally disliked subject B, in an effort to impute falsehood to subject B though the association of some idea or keyword common to both topics.  Guilt through association and lumping all subjects into one subjective category.  This typically will involve a context shift or definition expansion in a key word as part of the justification.

Attentodemic – a pandemic which arises statistically for the most part from an increase in testing and case-detection activity. From the two Latin roots attento (test, tamper with, scrutinize) and dem (the people). A pandemic, whose curve arises solely from increases in statistical examination and testing, posting of latent cases or detected immunity as ‘current new cases’, as opposed to true increases in fact.

Availability Error – to adjudicate the answer to questions according to the examples that come most easily to mind, rather than a wide or representative sample of salient evidence.

Availability Heuristic – to adjudicate the answer to a question according to only the information which is available at that time.

Bandwagon Blindness – when a group fails to see their own mistakes or errors inside a hot issue, usually obscured by the common spread of propaganda, and therefore must view any critique of, error or data contradiction as being the fault of opposition or outside parties.

Base Rate Bozo – when employing a baseline reference from the past produces a forbidden observation – this poseur will make up some generalized claim about the ‘past not predicting the future’ and give it a sophist name (Base Rate Fallacy) to intimidate those who do not understand.

Base Rate Fallacy – an error in thinking where, if presented with related base rate information (i.e. generic, general information) and specific information (information only pertaining to a certain anecdotal case), the mind tends to ignore the former and focus on the latter in characterizing the whole set of relevant data regarding a subject.
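
A worked example makes the error tangible. The sketch below (hypothetical test characteristics assumed) applies Bayes' theorem to show how badly intuition fares when the generic base rate is ignored in favor of the specific evidence:

```python
# Hypothetical screening test: prevalence (the base rate) is 1 in 1,000,
# sensitivity is 99%, and the false-positive rate is 5%.
prevalence = 0.001
sensitivity = 0.99
false_positive = 0.05

# Bayes' theorem: P(disease | positive) weights the specific evidence
# (the positive test) by the generic base rate the mind tends to ignore.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {posterior:.1%}")  # about 1.9%, not 99%
```

Focusing only on the anecdotal case ("the test is 99% accurate and came back positive") suggests near-certainty; the base rate drags the true probability below two percent.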

Bespoke Truth – the idea that truth is not congruent with facts. Nobody thinks their own beliefs are untrue or nonfactual. The problem resides instead wherein people pick and choose what they decide to accept from among the array of facts in order to fit or craft a truth of their liking. A flawed application of inference or scientific study (torfuscation) – wherein it is not that the study or facts are incorrect, rather that they stand merely as excuses to adopt an extrapolated and tailored ‘truth’ which is not soundly represented by such ‘facts’.

Bias Error – when using bias, fallacy or judgement error proclamations to condemn opinions of those who disagree with you, solely to push political, social or economic goals inside decision sets which are not clearly addressed by empirical or scientific backing.

Bias Inflation – a baseless claim of discrediting an observation set through identifying a plausible, relevant or even salient observer bias, which might have or did contribute to the observational profile, yet which can at most only explain a small or negligible portion of the observation base itself.

Big is Science Error – the presumption that bigger sample sizes and more study data offer a way of bypassing the scientific method, while still tendering an affectation of science and gravitas. Also, any instance where a study cannot be replicated, and a call to consensus is made simply because it would be too difficult to replicate the study basis of that consensus.

Bigger Data is Better Fallacy – the invalid assumption which researchers make when attempting to measure a purported phenomenon: that data extraction inside a very large source population, as the first and only step of scientific study, will serve to produce results which are conclusive or more scientific. The same presumption error can apply to meta-analysis, wherein such an analysis is conducted in a context of low/detached, rather than informed, knowledge sets, and will serve to dilute critical elements of signal and intelligence which could be used to elucidate the issue further.

Black Swan Fallacy – a claim that the highly improbable will never happen, or that one has a grasp of the full set of the highly improbable, or that one has ascertained the likelihood of a highly improbable event through statistics – when there is not a precedent or knowledge base which allows for any objective epistemic basis for such a calculation. Any form of lack of knowledge which dismisses any highly improbable event from happening, based upon a faulty estimation of the likelihood of a single unlikely event not happening.

Blind-spot Bias – when one fails to recognize, attempt to recognize, or be circumspect about their own biases inside an argument or data set. To notice a bias, fallacy or error in others more readily than in one’s self.

Bootstrapping (Index/Strength) – from the tall tales about the 18th-century German nobleman Baron Munchausen and his wartime exploits against the Ottoman Empire; specifically wherein he pulled himself up out of a well by his own bootstraps. A computational technique for estimating a statistical set for which the underlying distribution is unknown, or a sampling technique which estimates sampling distribution by repeatedly sampling data from the original observation set. It is most often employed as a means to estimate confidence levels of clade structures within a phylogenetic tree in genetics. However, it can be used to describe an inference which is measured as to its risk in draw. A 50 Bootstrap index bears significant risk, whereas a 90/100 Bootstrap index implies a greater degree of confidence in the inference, and therefore less risk.
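
As a concrete illustration of the resampling technique described above, here is a minimal percentile-bootstrap sketch in Python (the observation values are invented for illustration):

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Invented observation set whose underlying distribution is unknown.
data = [2.1, 3.4, 2.8, 5.0, 3.9, 4.2, 2.5, 3.7, 4.8, 3.1]

# Resample with replacement many times, recording each resample's mean.
boot_means = sorted(
    statistics.mean(random.choices(data, k=len(data)))
    for _ in range(10_000)
)

# Percentile bootstrap: the central 95% of the resampled means.
lo, hi = boot_means[249], boot_means[9749]
print(f"sample mean: {statistics.mean(data):.2f}")
print(f"95% bootstrap interval: ({lo:.2f}, {hi:.2f})")
```

The width of the resulting interval is the measure of risk in the inference: a narrow interval corresponds to the high-confidence (90/100) case, a wide one to the risky 50-index case.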

Bradley Effect – the principle wherein a person being polled will, especially in the presence of trial heat or iteration-based polls, tend to answer a poll question with a response which they believe the polling organization or the prevailing social pressure, would suggest they should vote or which will not serve to identify them into the wrong camp on a given issue. The actual sentiment of the polled individual is therefore not actually captured.

brevis lapsus (‘Word Salad’ Fallacy) – the inability to understand technical or precise writing, mistaking it for constituting a pleonasm. This in favor of simplistic writing which is, either with or without the intent of the opponent, subsequently rendered vulnerable to equivocation. An accusation made when a dilettante person fails to understand philosophical or technical writing, wherein the base argument or its requisite vocabulary reside completely over the head of the individual who started the argument to begin with.

Caesar’s Wife (Must be Above Suspicion) – a principle inside the philosophy of skepticism which cites that a mechanism, research/polling effort, or study which bears an implicit a priori claim to innocence (i.e. soundness, salience, precision, accuracy and/or lack of bias/agency) must transparently and demonstratively prove this claim before being assumed as such, executed or relied upon as scientific.

Chekhov’s Gun – is a dramatic principle that states that every element in a fictional story must be necessary to support the plot or moral, and irrelevant elements should be excluded. It is used as part of a method of detecting lies and propaganda. In a fictional story, every detail or datum is supportive of, and accounted for, as to its backing of the primary agenda/idea. A story teller will account in advance for every loose end or ugly underbelly of his moral message, all bundled up and explained nicely – no exception or tail conditions will be acknowledged. They are not part of the story.

Cladistic Dismantling (Deconstruction) – the definition of a malady or observed set of phenomena, into numerous ‘distinction without a difference’ subsets in an attempt to disguise or cloud noise around the overall trend in numbers involved in the malady or the phenomenon’s overall impact.

Clarke’s Second Law – the only way of discovering the limits of the possible is to venture a little way past them into the impossible.

Click Bait (or Headline) Skepticism – a position backed by articles or studies in which the headline appears to support the position contended, yet which in reality contend something completely or antithetically different. A skeptical understanding which is developed through sound bites and by never actually reading the abstract, method or content of cited articles or studies.

Close-Hold Embargo – is a common form of dominance lever exerted by scientific and government agencies to control the behavior of the science press. In this model of coercion, a governmental agency or scientific entity offers a media outlet the chance to get the first or most in-depth scoop on an important new ruling, result or disposition – however, only under the condition that the media outlet not seek any dissenting input, nor critically question the decree, nor seek its originating authors for in-depth query.

Compactifuscation – the merging of several disparate but associated concepts or definitions into one single descriptive term, so that epistemological weakness or strengths characteristic of a subset of the definitions held equivocally inside the term, can be ported over to the remaining set of definitions, without overt support or challenge in doing so. For instance the merging of sentience, awareness, meta-awareness, identity and meta-identity all into the term ‘consciousness’, so that studies on beetles can be ported over and apply to the hard problem of human consciousness.

Complexifuscation – the introduction of similar signals, inputs or measures, alongside a control measure or an experimental measure, in an attempt to create a ‘cloud of confusion or distraction’ around the ability to effect observation, control or measure of a targeted set of data. Preemption of a phenomenon with in-advance flurries of fake hoaxes, in order to obscure the impact of, or jade the attention span of a target audience around, a genuinely feared phenomenon.

Confirmation Bias – the tendency to immediately accept propaganda published in the opponent’s favored group, and to reject observations, data or ideas which do not fit the opponent’s favored models.

Continuum Fallacy – erroneous rejection of a vague claim or loosely defined data set simply because it is not as precise as one would like it to be.

contra ad populum – citing that, since an argument or preference for a conclusion is growing in popularity, it must therefore only be growing in such acceptance because of argumentum ad populum pressure, human foibles or through media promotion – and cannot possibly be growing because of the persistent and robust nature of the associated evidence.

Contrathetic Impasse – a paradoxical condition wherein multiple competing hypotheses and/or ad hoc plausible explanations bear credible inductive evidence and research case history – yet each/all hypotheses or explanations have been falsified/eliminated as being sufficiently explanatory for more than a minor portion of a defined causal domain or observation set. For instance, the MiHoDeAL explanation contains 5 very credible possible explanations for challenging phenomena. However, the sum total of those 5 explanations often only amounts to explaining maybe 5 – 15% of many persistent paranormal phenomena. The presumption that one of those explanations is comprehensively explanatory, is a trick of pseudoscience. Another new hypothesis is therefore demanded in the circumstance of a contrathetic impasse paradox.

Causes or influences which contribute to a contrathetic impasse:

1.  Foundational assumptions/investigation are flawed or have been tampered with.
2.  Agency has worked to fabricate and promote falsifying or miscrafted information as standard background material.
3.  Agency has worked to craft an Einfach Mechanism (Omega Hypothesis) from an invalid null hypothesis.
4.  Agency has worked to promote science of psychology, new popular theory or anachronistic interpretation spins on the old mystery.
5.  SSkeptics have worked to craft and promote simple, provisional and Occam’s Razor compliant conclusions.
6.  Agency has worked to foist ridiculous Imposterlösung constructs in the media.
7.  Agency has worked to foist shallow unchallenged ad hoc explanations in the media.
8.  SSkeptics seem to have organized to promote MiHoDeAL constructs in the media.
9.  There exists a set of repeatedly emphasized and/or ridiculously framed Embargo Hypotheses.
10.  Agency has worked to promote conspiracy theory, lob & slam Embargo Hypotheses as an obsession target to distract or attract attack-minded skeptics to the mystery. The reason this is done is not the confusion it provides, rather the disincentive which patrolling skeptics place on the shoulders of the genuine skilled researcher. These forbidden alternatives may be ridiculous or indeed ad hoc themselves – but the reason they are raised is to act as a warning to talented researchers that ‘you might be tagged as supporting one of these crazy ideas’ if you step out of line regarding the Omega Hypothesis.

Contrathetic Inference Paradox – a condition where abductive and inductive inference point to one outcome or understanding, and deductive inference points in another antithetical direction entirely. Since science often begins with inductive study stemming from abductive understanding, it will dogmatically hold fast to an inductive understanding until a paradigm shift occurs – a result of the weight of deductive evidence pointing in a different direction. The job of fake skepticism is to ensure that this deductive evidence or any thought resulting from it, is never accepted into science in the first place.

Correlation Dismissal Error – when employing the ‘correlation does not prove causality’ quip to terminally dismiss an observed correlation, when the observation is being used to underpin a construct or argument possessing consilience, is seeking plurality, constitutes direct fingerprint evidence and/or is not being touted as final conclusive proof in and of itself.

Crate Effect – impact of persons who purposely give the opposite response as to what they really think because of animosity towards the polling group or the entailed issue (especially if non-free press) and/or their perceived history of bias, and/or animosity towards the circus around elections or the elections themselves. This false left leaning bias is generated most often inside groups who believe media outlets to be left-leaning and unfair.

Cunningham’s Law – an approach to Akratic Trolling which states that the best way to get the right answer on the Internet is not to ask a question, it’s to post the wrong answer.

Cut the Feed – a tactic employed by oppressive agency, to withhold observation or data from public purview when it serves to falsify the Narrative, especially inside a circumstance wherein debunking or plausible deniability might only serve to bring the issue even more attention. Also known as an Entscheiden Mechanism – the pseudoscientific or tyrannical approach of, when faced with epistemology or observation which is heading in an undesired direction, artificially declaring under a condition of praedicate evidentia, the science as ended or ‘settled’ – or to cut off all scientific examination of the subject – as in a notorious NASA ‘cut the feed’ event.

Dichotomy of Compartmented Intelligence – a method of exploiting the compartmented skill sets of scientists, in order to preclude any one of them from fully perceiving or challenging a study direction or result. Call your data analysts ‘data scientists’ and your scientists who do not understand data analysis at all, the ‘study leads’. Ensure that those who do not understand the critical nature of the question being asked (the data scientists) are the only ones who can feed study results to people who exclusively do not grasp how to derive those results in the first place (the study leads). This is a method of intelligence laundering and is critical to everyone’s being deceived, including study authors, peer reviewers, media and public at large.

The Diebold Test – if you and a friend are walking down the sidewalk, and a Diebold ATM falls from ten stories above and crushes him, you do not need a doctor to pronounce death, nor to take an EEG or blood pressure, in order to scientifically establish that your buddy is dead. Those who appeal for such false epistemology typically are not actually looking for verity – rather just seeking to deflect anything which competes with the answer they desire to enforce.

Doubtcasting – a form of rhetorical critique in which a person casts inexpert doubt upon every facet of an opponent’s argument, while adding no value themselves in the process – nor offering up their own ideas to the risk of critique. Raising doubt to perpetuate ignorance. A combative method of arguing without tendering the appearance of doing so, in the case where an agent is not interested in anything other than maligning their opponent or appearing to win an argument.

Duty to Address and Inform (Hypothesis) – a critical element and aspect of parsimony regarding a scientific hypothesis. The duty of such a hypothesis to expose and address in its syllogism, all known prior art in terms of both analytical intelligence obtained or direct study mechanisms and knowledge. If information associated with a study hypothesis is unknown, it should be simply mentioned in the study discussion. However, if countermanding information is known, the structure of the hypothesis itself must both inform of its presence and as well address its impact.

Ecological Fallacy – a logical fallacy in the interpretation of statistical data where inferences about the nature of individuals or isolated observations/studies are deduced from inference regarding the group or broader study domain to which those individuals or observations belong. A complement of the Yule-Simpson Effect, wherein results obtained for subsets of data tend to disappear when those subsets are combined.
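
The Yule-Simpson effect mentioned above can be demonstrated directly. The sketch below uses the well-known kidney-stone treatment figures (Charig et al., 1986), in which treatment A outperforms B inside each subgroup, yet B appears superior once the subgroups are combined:

```python
# Kidney-stone treatment data, the textbook illustration of the
# Yule-Simpson effect: each tuple is (successes, patients).
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

for name, g in groups.items():
    rate_a = g["A"][0] / g["A"][1]
    rate_b = g["B"][0] / g["B"][1]
    print(f"{name}: A {rate_a:.0%} vs B {rate_b:.0%}")  # A wins in both

# Combine the subgroups and the conclusion inverts.
total_a = sum(g["A"][0] for g in groups.values()) / sum(g["A"][1] for g in groups.values())
total_b = sum(g["B"][0] for g in groups.values()) / sum(g["B"][1] for g in groups.values())
print(f"overall: A {total_a:.0%} vs B {total_b:.0%}")  # B wins overall
```

Inferring an individual patient's best treatment from the combined figure alone is precisely the ecological fallacy: the group-level aggregate contradicts what holds inside every subgroup.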

Effect Inversion – one sign of a study which has been tampered with through inclusion and exclusion criteria, in an effort to dampen below significance or eliminate an undesired signal, is the circumstance where an inverse or opposite relationship effect is observed in the data when the inverse question is asked concerning the same set of data. If a retrospective cohort study purportedly shows no effect relationship between a candidate cause and a malady – there should also be no relationship between the candidate cause and the absence of the malady as well (if the two are indeed unrelated epidemiology). The presence of a reverse or apparently ‘curative’ influence of the candidate cause being evaluated in the data may signal the impact of data manipulation.

Elemental Pleading – breaking down the testing of data or a claim into testing of its constituents, in order to remove or filter an effect which can only be derived from the combination of elements being claimed. For instance, to address the claim that doxycycline and EDTA reduce arterial plaque, testing was developed to measure the impact of each item individually, and when no effect was found, the combination was dismissed as well without study, and further/combination testing was deemed to be ‘pseudoscience’.

Embargo Hypothesis (Hξ) – was the science terminated years ago, in the midst of large-impact questions of a critical nature which still remain unanswered? Is such research now considered ‘anti-science’ or ‘pseudoscience’?

Entscheiden Mechanism – the pseudoscientific or tyrannical approach of, when faced with epistemology which is heading in an undesired direction, artificially declaring under a condition of praedicate evidentia, the science as ‘settled’.

Poison Pill Hypothesis – the instance wherein sskeptics or agency work hard to promote lob & slam condemnation of particular ideas. A construct obsession target used to distract or attract attack-minded skeptics into a contrathetic impasse or argument. The reason this is done is not the confusion or clarity it provides, rather the disincentive which patrolling skeptics place on the shoulders of the genuine skilled researcher. These forbidden alternatives (often ‘paranormal’ or ‘pseudoscience’ or ‘conspiracy theory’ buckets) may be ridiculous or indeed ad hoc themselves – but the reason they are raised is to act as a warning to talented researchers that ‘you might be tagged as supporting one of these crazy ideas’ if you step out of line and do not visibly support the Omega Hypothesis. A great example is the skeptic community tagging of anyone who considers the idea that the Khufu pyramid at Giza might have not been built by King Khufu in 2450 bce, as therefore now supporting conspiracy theories or aliens as the builders – moreover, their being racist against Arabs who now are the genetic group which occupies modern Egypt.

Epistemological Psychic Study – when a statistical trend study shows a trend in data, but the only data elements supporting the contended trend all exist in the future and/or none of the past data supports such a trend; their being included simply to serve an appearance of statistical rigor and significance.  Usually applied to support a political cause, and in an arena where the observable final outcome will be well beyond the proponent’s lifespan or political career.

epoché (ἐποχή, “suspension”) – a suspension of disposition. The suspended state of judgement exercised by a disciplined and objective mind. A state of neutrality which eschews the exercise of religious, biased rational or critical, risky provisional and dogmatic dispositions when encountering new observations, ideas and data. It is the step of first being skeptical of self, before addressing challenging or new phenomena. It is underpinned by both an examination of the disciplines of true knowledge development (epignosis) and the repository of vetted and accepted knowledge (gnosis). If someone relates a challenging observation to you, you suspend disposition, and catalog it. If you toss it out based upon a fallacy or trivial flaw – then you are a cynic, not a skeptic.

Ergodicity Ignorance – the failure of a data scientist, or of one who is attempting to interpret a series or body of data inside a dynamic system, to grasp that the system will exhibit the same averaged behavior over time, regardless of its momentary makeup or state. If you tax away the richest 15%, a new richest 15%, who benefit from the new dynamic, will arrive to take their place; the historical statistical tier breakout will not change. Hence, Goodhart’s Law: when one statistic becomes the focus of an attempted system change, it ceases to be a good statistic.

Essential Schema Filtering Error – when one uses pop psychology studies, such as the 1980s Loftus study, to dismiss memories and observations which they do not like. By citing that memories and eyewitness testimony are unreliable forms of evidence, pretend skeptics present an illusion of confidence in dismissing disliked eyewitness essential schema data, when neither the Federal Rules of Evidence, science, nor even the cited studies make such a claim which allows the dismissal of eyewitness testimony at all.

Ethical Inversion – a social condition and form of linear induction which prohibits the execution of actual science, ironically in the name of ethics. A false condition wherein proper study design is deemed to involve unethical or inhumane techniques; thus researchers are allowed to employ flawed study design, which in turn only serves to produce results that mildly suggest there is no need to conduct proper study design in the first place. So none is undertaken.

Ethical Skeptic’s Axiom – accurate, is simple. But that does not serve to make simple, therefore accurate.

Ethical Skeptic’s Axiom of Fact and Belief – the direct derivation of belief from mere ‘fact’ is one level of competence below ignorance.

Ethical Skeptic’s Razor – never ascribe to happenstance or incompetence, that which coincidentally, surreptitiously and elegantly supports a preexisting agency. Never attribute to a conspiracy of millions, what can easily arise from a handful of the clever manipulating the ignorance of millions.

ex ante – an inference which is derived from predictive, yet unconfirmed forecasts. While this may be a result of induction, the most common usage is in the context of abductive inference.

Exception Fallacy – is the converse of the ecological fallacy, wherein one infers a group conclusion on the basis of individual, small study or exceptional cases. This is the kind of fallacious reasoning that is at the core of stereotyping.

Exclusion Without Exception Fallacy – the circumstance where one excludes an argument or datum, without making it clear that the criterion of exclusion being used would also exclude every possible argument or datum as well. A rhetorical method of changing the defining language regarding an issue so as to make an exception of a threatening observation or circumstance so that it no longer applies under the threat principle itself, without revealing the sleight-of-hand that the exception applies to literally almost everything. Similar in nature to distinction without a difference.

Existential Fallacy of Data – the implication or contention that there is an absence of observation or data supporting an idea, when in fact no observational study at all, or of any serious import has been conducted by science on the topic at hand.

Experimenter’s Bias – the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.

Expression Failure (Micro Should Express in the Macro) – an effect which is measured in a microcosm, and which applies to all circumstances, should necessarily express itself in measures involving a macrocosm. Failure to observe an effect on a large scale, which has been observed in a small scale, brings the small scale measure into question. If a month-long study reports that Jim saves $350 a month into his savings account, yet when examined two years later Jim only has $100 in his savings, the month-long study was wrong, no matter what precision, heuristic, p-value, or confidence interval was used to certify the microcosm measure.

Fabutistic – a statistic which is speciously cited from a study or set of skeptical literature, around which the citing user misrepresents its context of employment or possesses scant idea as to what it means, how it was derived, or what it is or is not saying. Includes the instance wherein a cited statistic is employed in a fashion wherein only the numerals are correct (i.e. “97%”) and the context of employment is extrapolated, hyperbole or completely incorrect.

Facile – appearing neat and comprehensive only by ignoring the true complexities of an issue; superficial. Easily earned, arrived at or won – derived without the requisite rigor or effort.

¡fact! – lying through facts. Data or a datum which is submitted in order to intimidate those in a discussion, is not really understood by the claimant, or rather which is made up, is not salient or relevant to the question being addressed, or is non-sequitur inside the argument being made. The relating of a fact which might be true, does not therefore mean that one is relating truth.

Fact Checker – one subset of the class of Nietzsche’s bildungsphilister – one who fails to grasp that fact bears only a specious relation to truth. They should be ranked for trust between the liar and the dilettante.

Factcidental – facts which are correct, but only by trivia or accident or which possess no real sequitur relationship with the discussion/issue context or logical calculus.

Fair Dinkum – a question which essentially asks ‘Have you put in the hard work and direct experience, ability or integrity which substantiates the quality of your offering?’ ‘Dinkum’ is a slang term that appears to have grown up from a variety of stories and dialects in Australia – and bearing two meanings, proof in the pudding by means of ‘work’ ‘value’ and/or ‘fair play’.

Fallacy of Excluded Exceptions – a form of data skulpting in which a proponent bases a claim on an apparently compelling set of confirming observations to an idea, yet chooses to ignore an also robust set of examples of a disconfirming nature. One chisels away at, disqualifies or ignores large sets of observation which are not advantageous to the cause, resulting in only seeing what one sought to see to begin with.

Fallacy of Exclusion (Fallacy of Suppressed or Isolated Evidence) – One of the basic principles of argumentation is that a sound argument is one which presents all the relevant, and especially critical-path, evidence. A debunker will seek to isolate one single facet of an observation and then pretend that it is weak, when stripped of its corroborating observations, context, and facets of credibility. This is the warning flag that a pseudo-scientific method is at play.

Fallacy of Extrapolated Inversion – a form of straw man argument where data describing a phenomenon peculiar to one population under a specific set of circumstances, is extrapolated and applied to a completely different phenomenon or population under a completely different set of circumstances, to underpin a straw man assertion about the latter. For example, “Despite a supposed (note the prejudicial language) surge in nationalism across the globe, many people like to watch movies and TV shows from other countries. The xenophobic leaders aren’t succeeding in changing people’s interest in others.”

Fallacy of Relative Privation – dismissing an avenue of research due to its supposed waste of scientists’ time and to the existence of more important, but unrelated, problems in the world which require priority research.

Fallacy of Scientific Composition – The fallacy of contending explicitly or implicitly that legitimate science consists only of a set of approved or published studies. The failure to realize that professionals making trained observations in their field and operating environment are more 1. timely, 2. accurate, and 3. scientific than studies which try to replicate the same through skeptic, academic, or cubicle work, even when followed by peer review.

Fallacy of The Control – a condition wherein invalid objection is raised to a valid observation, citing that the observation was not conducted against a differential cohort or control. Situations where the extreme nature of the observation is exceptional inside any normal context, to the point of not requiring a control in order to be valid for inference or inclusion. A fake skeptic’s method of dismissing observations which threaten their religion, by means of sophistry and pretend science.

Fallacy of Univariate Linear Inductive Inquiry into Complex Asymmetric Systems (Univariate Fallacy) – the informal fallacy of attempting to draw inference regarding complex dynamic systems, such as biological, economic or human systems, through employment of singular, linear or shallow inductive analytical methods. A deep understanding of complex systems first demands a conceptual and analytical strategy that respects, defines and adequately reduces for analysis, that complexity. Only then can multiple component analyses be brought to bear as a consilience in understanding the whole.

Falling Between the Cracks – data which should have been brought into an argument, but which was neglected because each of the responsible members in a research or petitioning group assumed that such data was the responsibility of the other parties.

False Domain Equivalence – a form of ambiguity slack exploitation wherein one equates the probability metrics which can be derived from a qualified, constrained or specific domain or circumstance, to be comparable to the use of ‘probability’ inside a broad domain or one lacking Wittgenstein parameters, constraints or descriptives.

Fat Tony – an observer-stakeholder who is tail or fat-tail focused in instinct. Fat Tony is dismissive of the probabilistic heuristic involved in a decision process and instead either spots and exploits the role of agency inside a purportedly probabilistic process, or spots and exploits the black swan events in the portfolio of those whose decision heuristics depend solely upon probabilistic outcomes. In either case Fat Tony uses the unlikely or paradigm-breaking aspect of a set distribution in order to make a derived advantage.

Filbert’s Law – to find a result use a small sample population, to hide a result use a large one. More accurately expressed as the law of diminishing information return. Increasing the amount of data brought into an analysis does not necessarily serve to improve salience, precision nor accuracy of the analysis, yet comes at the cost of stacking risk in signal. The further away one gets from direct observation, and the more one gets into ‘the data’ only, the higher is the compounded stack of risk in inference; while simultaneously the chance of contribution from bias is also greater.
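
The first half of Filbert's Law ("to find a result use a small sample population") can be demonstrated with a null simulation: draw samples from a population with no effect at all, and count how often small versus large samples manufacture one anyway. The sample sizes and the 0.5 effect threshold are illustrative assumptions:

```python
import random
import statistics

random.seed(42)

def fraction_showing_effect(sample_size, trials=2000, threshold=0.5):
    """Fraction of null (mean-zero) samples whose mean exceeds the
    threshold: an 'effect' that exists only as sampling noise."""
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(0, 1) for _ in range(sample_size)]
        if abs(statistics.fmean(sample)) > threshold:
            hits += 1
    return hits / trials

small = fraction_showing_effect(sample_size=5)    # easy to 'find' a result
large = fraction_showing_effect(sample_size=500)  # the same noise vanishes

print(small, large)
```

With five observations the spurious 'result' appears in roughly a quarter of trials; with five hundred it essentially never does.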

Five Percent Fake – when a methodical cynic cites the existence of the “5% of unexplained cases” myth in an effort to appear objective. This is done in an effort to avoid the perception of not possessing a series of outlier data points (the 5%), the absence of which would tender the appearance of cynical bias.

Forer Effect – when an individual tenders estimations of high accuracy to descriptions of their personality or life circumstance that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range or a common subset of people.

Forer Error – ascribing accurate forecasting to be merely the outcome of the Forer Effect, in absence of adequate knowledge, personal research or statistical rigor regarding the particular circumstance, which could substantiate such a claim. The principle being that an accurate forecast establishes plurality, and both parties now hold a burden of proof if a claim is tendered.

Fruit of the Poisonous Tree – an invalid principle of fake skepticism drawn from legal precedent, which stipulates that evidence which is obtained from illegal sources or methods is inadmissible in a court of law. Skeptics appropriate this principle to allow them to dismiss valid evidence simply because it came from a source, method or person whom they dislike – regardless of how valid the evidence is, stand alone. It is a skulptur mechanism, a method of evidence filtering.

Furtive Confidence Fallacy – the refusal to estimate, grasp or apply principles of statistical confidence to collected data and observed arrival distributions, as a means of falsely bolstering the perceived validity of, or avoiding the signalling of validity in, an observation set or body of data. The act of prematurely declaring, or doggedly denying, a multiplicity of anecdote to be equal to data.

Furtive Fallacy – undesired data and observations are asserted to have been caused by the malfeasance of researchers or laymen.

Gaslighting – a form of manipulation that seeks to sow seeds of doubt in a targeted individual or in members of a targeted group, hoping to make them question their own memory, perception, and/or sanity. Using persistent denial, disinformation, misdirection, contradiction, manipulated statistics and organic untruths, in an attempt to destabilize the target and delegitimize the target’s beliefs, understanding or confidence in self.

Gaussian Blindness (see medium fallax) – the tendency to characterize an entire population by both the mean (μ) of the population as well as a Normal Distribution profile or other easily applied distribution, as being descriptive of the whole body of a set of data. I’ve got my head in the oven, and my ass in the fridge, so I’m OK.
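
A minimal sketch of the oven-and-fridge point, with invented temperature readings: the mean describes neither half of a bimodal body of data, while the dispersion betrays what the mean conceals:

```python
import statistics

# Two-state 'temperature' data: head in the oven, ass in the fridge.
oven_readings   = [220.0] * 50   # degrees F
fridge_readings = [ 38.0] * 50   # degrees F
body = oven_readings + fridge_readings

mean_temp = statistics.fmean(body)   # a single 'representative' 129 F,
                                     # true of no reading in the data set
spread    = statistics.pstdev(body)  # the dispersion tells the real story

print(mean_temp, spread)
```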

Ghost Variable/Ghost Assumption – a novel variable which is inserted into a model to artificially compensate for the model’s lapse in matching observed reality, or to ‘describe’ a putative effect which itself is not fully understood, has not been scientifically reduced, or has not been confirmed to actually exist. Despite an idea or input being implausible, inelegant, and non-observable in any direct fashion, a ghost variable solely exists because it is required in order to sustain the model which requires its existence. A type of circular reasoning. When used as a placeholder, such an assumption can bear scientific utility; however, a ghost variable or assumption should never thereafter be assumed as conclusive, based upon its model-utility alone. Eventually the idea behind the ghost variable itself must be demonstrated as valid.

Goodhart’s Law – when a specific measure becomes the sole or primary target, it ceases to be a good measure.

Google Goggles – warped or errant information cultivated through reliance on web searches as one’s resource or base of understanding. Vulnerability to web opinions where every street doubter can dismiss observations as a pretend authority on science, every claim represented as being by science is immediately accepted and every public observation is deemed a fable or a hoax.

Google Reaction Effect – the tendency to discount as unimportant or invalid, information that can be found readily online by using Internet search engines.

Granger Causality – a refutation of a post hoc ergo propter hoc claim, in that a variable X that evolves over time Granger-causes another evolving variable Y if predictions of the value of Y based on its own past values and on the past values of X are better than predictions of Y based only on Y’s own past values (or especially on another assumed causality variable X2). Granger causality may not indicate direct causation; however it suggests a common mechanism of some type, via a fingerprint-signal means which is much stronger than mere correlation.
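
A bare-bones sketch of the restricted-versus-unrestricted regression comparison that underlies a Granger test, run on synthetic data where X leads Y by one step. The coefficients and series length are invented; a real analysis would use an F-test over several lags (e.g. statsmodels' grangercausalitytests):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: x 'Granger-causes' y with a one-step lag.
n = 400
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(design, target):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return float(resid @ resid)

yt, y1, x1 = y[1:], y[:-1], x[:-1]
ones = np.ones_like(yt)

rss_restricted   = rss(np.column_stack([ones, y1]), yt)      # y's past only
rss_unrestricted = rss(np.column_stack([ones, y1, x1]), yt)  # plus x's past

# x Granger-causes y if adding x's past materially improves the prediction.
improvement = 1 - rss_unrestricted / rss_restricted
print(improvement)
```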

Green Eggs and Ham (Poll) Error – the combined Crate-Bradley Effect in polling error. Including sentiment of those who have never heard of the topic. Including responses from those who know nothing about the topic, but were instructed to throw the poll results. Finally, treating both of these groups as valid ‘disagree/agree’ sentiment signal data. The presence of excessively small numbers of ‘I don’t know’ responses in controversial poll results. There exists an ethical difference between an informed-yet-mistaken hunch, versus making a circular-club-recitation claim to authority based upon a complete absence of exposure (ignorance) to a topic at all. In reality, the former is participating in the poll, the latter is not. The latter ends up constituting only a purely artificial agency-bias, which requires an oversampling or exclusion adjustment. One cannot capture a sentiment assay about the taste of green eggs and ham, among people who either don’t even know what green eggs and ham is, or have never even once tasted it because they were told it was bad.
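
The exclusion adjustment the entry calls for can be sketched in a few lines. The response counts below are invented for illustration:

```python
# A toy poll: 'Do you agree that green eggs and ham taste good?'
responses = (
    ["agree"] * 40 +
    ["disagree"] * 30 +
    ["never heard of it"] * 30   # no exposure: not actually participating
)

def pct_agree(pool):
    # Count sentiment only among respondents with actual exposure.
    informed = [r for r in pool if r in ("agree", "disagree")]
    return 100 * informed.count("agree") / len(informed)

naive    = 100 * responses.count("agree") / len(responses)  # 'minority view'
adjusted = pct_agree(responses)                             # informed majority

print(naive, adjusted)
```

Treating the never-exposed group as 'disagree' signal manufactures a 40% minority out of what is, among the informed, a 57% majority.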

Gresham’s Law of Information – Gresham’s Law is a monetary principle stating that “bad money drives out good.” In the currency of information, bad information will also displace good information into private chambers of power or collections of artifact hoarders, while the pervasive currency of bad information, fake news, and noise will inhabit the public commons, naturally biasing its narrative towards a lack of soundness.

Bradley Effect – the principle wherein a person being polled will, especially in the presence of trial heat or iteration-based polls, tend to answer a poll question with a response which they believe the polling organization or the prevailing social pressure, would suggest they should vote or which will not serve to identify them into the wrong camp on a given issue. The actual sentiment of the polled individual is therefore not actually captured.

Crate Effect – impact of persons who purposely give the opposite response as to what they really think because of animosity towards the polling group or the entailed issue (especially if non-free press) and/or their perceived history of bias, and/or animosity towards the circus around elections or the elections themselves. This false left leaning bias is generated most often inside groups who believe media outlets to be left-leaning and unfair.

Hamming’s Axiom – a quip by mathematician Richard Hamming, “You get what you measure,” meaning that one will not find what one is not measuring for in the first place.

Hawthorne Observer Effects – an expansion of thought based around the principle in which individuals modify an aspect of their behavior in response to their awareness of being observed or to a test-change in their environment. The simple act of observation changes both the state of mind of the observer as well as those being observed. These effects principally center on the state of mind change in the observer:

  1. That which is observed, changes
  2. That which is observed for the first time, appears exceptional
  3. That which is observed obsessively, instills angst
  4. That which bears risk, is observed less
  5. That which is observed to move more slowly, is perceived as the greater risk

Hegelian Deception – a form of employment of the Hegelian Dialectic where a fourth party deceiver manipulates the dialectic process to guide external witnesses to synthesize a false truth that excludes an actual hidden tertiary element. In Hegelian philosophy, the dialectical process involves the interaction of opposing ideas (thesis and antithesis) leading to a higher level of understanding or resolution (synthesis). In this scenario, the deceiver sets up a primary event (thesis) and a secondary event (antithesis) to create a false narrative or synthesis that relates the false perception that a higher level of understanding has been achieved, and more importantly diverts attentions from the actual truth (a tertiary event).

Historian’s Fallacy – occurs when one assumes that decision makers of the past viewed events from the same perspective, and held the same information, as those subsequently analyzing the decisions; therefore the levels of bunk, belief and credulity can be used to dismiss past events involving historically credible persons, just the same as they are employed in modern discourse.

Hoax (Backfitted) – a video, recording or photo typically, in which a conventionally surprising event is documented, which appears to engage humans and/or animals regarding the curious nature of the event. However, the event is backfitted with an additional hoaxed entity which alters the subject or object nature of the amazing observation, by means of a false overlay or impression of a more astounding or entirely different nature. For instance, overlaying the fake-image of an angel over a blown-lightbulb video, and leveraging off the shock of bystanders to falsely imply that they were reacting as witnesses to the appearance of an angel.

Hoax (Fake) – hoax perpetrated to “Show how easy it is to fake this stuff.” A hoax in which the perpetrator discloses that the evidence is a fake; at some later time after they have gained the adrenaline rush of deception or when the revelation will increase their celebrity status to the greatest effect. The implication is that this hoax-and-reveal process is some sort of grand ethical action on their part.

Hoax (Straight) – anonymous or not anonymous hoax perpetrated to fool an audience of the credulous, entertain one’s self and obtain the adrenaline rush of magician-ship. The goal is to hold the deceived audience enraptured with the magician’s personal demonstrated skill, intellect, sophistry and implied authority and/or technical ability. Deception provides an adrenaline rush, especially when spun inside a cocoon of apparent correctness.

Hoax (Strawman) – anonymous hoax perpetrated to discredit. Typically outfitted with a hidden “key” – the obvious or semi-obvious flaw or Achilles Heel which reveals the event or contention to be merely a hoax; purposely set to be discovered at a later time, to discredit a specific targeted subject or persons to whom the hoax relates.

Hate Hoax – a skit/joke or special kind of strawman hoax which celebrates oppression or mocks people based upon their opposition to oppression – grounded in race, religion, sexual orientation, nationality or political beliefs – and which is indistinguishable from, and should be treated as, the real thing.

Hoax-Baiting – fraudulent data crafted by SSkeptics to stand as evidence cited by other SSkeptics in countering disfavored subjects. In an epidemic of Hoax-Baiting, a SSkeptic will even cite that their evidence has been manufactured “to show how easy it is to fake this stuff.” In many instances Trolls or other SSkeptics are paid to create hoax videos on YouTube for instance, by third parties seeking to sway public discourse in and around a disfavored subject, and quash any relevant serious material on the subject.

Homoscedastic Failure/Fading – a statistical association is homoscedastic if its random variables or their associations feature a consistent variance. This can also be described as a homogeneity in variance. A statistical analysis which derives its trend from data in which homoscedasticity is fading or has failed completely, will bear a low likelihood of the claimed statistical association being valid.
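
A crude sketch of a fading-homoscedasticity check: compare the variance of the first and second halves of a residual series. The 3-fold tolerance is an arbitrary illustrative threshold; a real analysis would use a formal test such as Breusch–Pagan:

```python
import statistics

def homoscedastic(residuals, tolerance=3.0):
    """Crude check: variance of the first half vs the second half of the
    residual series should not differ by more than `tolerance`-fold."""
    half = len(residuals) // 2
    v1 = statistics.pvariance(residuals[:half])
    v2 = statistics.pvariance(residuals[half:])
    ratio = max(v1, v2) / min(v1, v2)
    return ratio <= tolerance

steady = [(-1) ** i * 1.0 for i in range(40)]              # constant spread
fading = [(-1) ** i * (0.2 + 0.2 * i) for i in range(40)]  # variance grows

print(homoscedastic(steady), homoscedastic(fading))
```

A trend fitted through the second series would fail the entry's validity criterion: its variance structure has faded.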

Hyperbolic Constraint – a form of special pleading wherein a ludicrous or cherry picked statistical constraint upon a set of data is used to lens that data in such a way as to make it appear to be more favorable to one’s a priori position. If they ended each game they played at the half, they would be undefeated. See also Goodhart’s Law.

idem existimatis – attempting to obscure the contributing error or risk effect of imprecise estimates or assumptions, through an overt focus on the precision or accuracy of other measures or inputs inside a calculation, study or argument.

Ignoring as Accident – exceptions or even massive sets of data and observational counter-evidence to an enforced generalization are ignored as anecdotes or accidents.

ignoro eventum – institutionalized pseudoscience wherein a group ignores or fails to conduct follow-up study after the execution of a risk bearing decision. The instance wherein a group declares the science behind a planned action which bears a risk relationship, dependency or precautionary principle, to be settled, in advance of this decision/action being taken. Further then failing to conduct any impact study or meta-analysis to confirm their presupposition as correct. This is not simply pseudoscience, rather it is a criminal action in many circumstances.

Impasse of Contrathetic Inference – a situation wherein abductive and inductive evidence points in one direction, while field research and/or deductive evidence point in a completely different direction of inference. Often this is a circumstance wherein fake skeptics will cite ‘prior plausibility’ in order to suppress deductive research they find not to their liking. The circumstance flags: 1) bad Wittgenstein definition, 2) flawed assumptions, 3) poor research history, 4) scientific oppression, or 5) honest scientific conundrum.

“Inconclusive” is a Conclusion – the fake sleuth is desperate to issue a conclusion and obtain club credit for having reached it. Their goal is to stamp the observation (what they incorrectly call a ‘claim’) with the word ‘Debunked’. However, they also know that most neutral parties have this trick figured out now. So they prematurely reach a conclusion which appears to be skeptically neutral, but tenders the same desired result: Inconclusive. It is like declaring two opponents in a field game to be of equal team strength through a tie, 0 to 0 – by means of simply turning on the scoreboard and walking off the field after 15 seconds of play. By means of an inconclusive status, the observation can be neutralized and tossed upon a ‘never have to examine this again’ heap. De facto, this is the same as ‘debunked’. It is a trick wherein the fake skeptic takes on the appearance of true skeptical epoché, while still condemning an observation or subject, when it is nothing of the sort.

ingens vanitatum – (Latin: ingens ‘vast’ and vanitatum ‘of vanities’) – knowing a great deal of irrelevance (noise: lack of relevance) and/or inconsequence (smoke & mirrors: lack of salience), or the citing of such disinformation. A form of rhetoric through Nelsonian knowledge of most facets of a subject and most of the latest propaganda therein. A condition which bears irony however, in that this supervacuous, irrelevant, or inconsequential set of knowledge stands as all that composes the science, or all that is possessed by the person making a claim to knowledge. A useless set of information which serves only to displace any relevance, salience, or logical calculus of the actual argument, principle or question at hand. The skillful exploitation of irrelevance and/or inconsequence which serves to disinform or deceive.

Inversion Effect – an opposite and compensating signal effect inside a research study which has filtered out sample data through invalid study design or an exclusion bias towards a specific observation. By offsetting a subset of the population being studied which bears a condition that is not desired for examination or detection, a study can introduce an opposite or contrapositive effect in its analytical results. Vaccines not only do not cause autism, but two major studies showed they actually cure autism. Persons vaccinated for Covid die at 35% the rate of unvaccinated persons in terms of all non-Covid deaths (natural and non-natural). These are outcomes which are impossible, yet which show up statistically because of an invalid exclusion bias which served to produce the effect.
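
The mechanics of such an exclusion-bias artifact can be simulated in a null world where the treatment has no effect at all on background deaths, yet frail (high-risk) individuals are screened out of the treated group. All proportions below are invented for illustration:

```python
import random

random.seed(1)

# A null world: 'treatment' has zero effect on background (unrelated) deaths.
# But frail individuals (high background risk) are screened OUT of treatment.
N = 100_000
people = [{"frail": random.random() < 0.2} for _ in range(N)]
for p in people:
    p["treated"] = (not p["frail"]) and random.random() < 0.7
    base_risk = 0.10 if p["frail"] else 0.01
    p["died"] = random.random() < base_risk   # independent of treatment

def death_rate(group):
    return sum(p["died"] for p in group) / len(group)

treated   = [p for p in people if p["treated"]]
untreated = [p for p in people if not p["treated"]]

# The treatment appears to 'protect' against unrelated deaths, an impossible
# effect manufactured entirely by the exclusion bias.
print(death_rate(treated), death_rate(untreated))
```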

Journalistic Hyperbole, The Principle of – when a journalist cites that an issue is the ‘most’, ‘worst’, ‘deadliest’, ‘open-shut’ or scientifically settled, you can safely discern that the journalist fears that the issue might not even be real and/or is attempting to escalate the language of the argument to intimidate opponents. Hyperbole betrays self doubt or a lack of real evidence.

Kabā Alternative – when a more highly embarrassing or deadly event looms, create a less embarrassing or deadly but more viral event, in advance – in order to obscure the presence of the former.

Klassing – when one offers payment of money or threatens the well being or career of a person in order to get them to recant, deny, keep silent on, or renounce a previously stated observation or finding. The work of a malicious fake investigator who seeks to completely destroy an idea being researched and to actively cast aspersion on a specific subject as part of a broader embargo policy. A high visibility reputation assassin hired to intimidate future witnesses or those who might consider conducting/supporting investigative work.

The Law of Colossal Numbers – the condition wherein one finds oneself surrounded by people who have been affected or injured through incidents which are supposedly very ‘rare’.

Law of Large Numbers Fallacy – the Law of Large Numbers does not apply ex ante, nor in any other case where there is not a large-number domain to sample from in the first place. It does not apply in any instance where the wrong species of probability event is selected, where there does not exist a suitable measure of what is ‘large’ or ‘probable’, or where the event being described constitutes the single opportunity for the improbable event to have occurred. An ad hoc denial tactic which dismisses by presupposing the idea that one holds statistical refutation evidence based on plenitude of a sample domain. The rigor-less assumption that mass statistics will prove out any strange or unlikely observation one chooses to dismiss. It is a form of the MiHoDeAL Fallacy. See also Appeal to Plenitude/Appeal to Lotto.
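
The distinction can be shown with the standard at-least-once formula, 1 − (1 − p)ⁿ. The 'one in a million' probability and the opportunity counts are illustrative:

```python
# Probability that a 'one in a million' event occurs at least once,
# as a function of how many genuine opportunities actually exist.
p = 1e-6

def at_least_once(opportunities):
    return 1 - (1 - p) ** opportunities

many = at_least_once(10_000_000)  # a real plenitude: near-certain
one  = at_least_once(1)           # a single opportunity: still one in a million

# The fallacy: citing the first number while the event in question
# only ever had a single chance to occur.
print(many, one)
```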

Layogenic – looks or sounds great when examined from a distance or described in concept, but reveals to be a horrid mess when inspected up close or to any kind of level of detail.

Less is Better Bias – the tendency to prefer a smaller set of data to a larger set of data judged separately, but not jointly, so as to capitalize off the increased variability of the small set of data as it supports an extreme or conforming opinion.

Linear Affirmation Bias – a primarily inductive methodology of deriving inference in which the researcher starts in advance with a premature question or assumed answer they are looking for. Thereafter, observations are made. Affirmation is a process which involves only positive confirmations of an a priori assumption or goal. Accordingly, under this method of deriving inference, observations are classified into three buckets:

1. Affirming
2. In need of reinterpretation
3. Dismissed because they are not ‘simple’ (conforming to the affirmation underway).

Under this method, the model is complicated by reinterpretations. Failing the test that a model should be elegant, not exclusively simple. By means of this method, necessity under Ockham’s Razor is assumed in advance and all observations thereafter are merely reconfigured to fit the assumed model. At the end of this process, the idea which was posed in the form of a question, or sought at the very start, is affirmed as valid. Most often this idea thereafter is held as an Omega Hypothesis (more important to protect than the integrity of science itself).

The Logical Truth of Extraordinary Evidence – any claim which exposes a stakeholder to risk, ignorance or loss of value – regardless of how ordinary, virtuous or correct – demands extraordinary evidence. The correct version of Carl Sagan’s ‘extraordinary claims demand extraordinary evidence’.

Lotto Ticket Shuffle Scam – a scam wherein two persons pool money and buy 10 lottery tickets, however only one of them goes to the store and buys the lotto tickets; subsequently reporting back that all his tickets won and all his partner’s tickets lost. Or a business wherein one partner owns all the profitable aspects of the business, and his partners own all the non-profitable ones. The same is done with ‘reliable’ data being produced only by an authorized club – all my data is fact and all your data is anecdote.

McLuhan’s Axiom – “Only small secrets need to be protected. The large ones are kept secret by the public’s incredulity.” A quip attributed to philosopher on media theory, Herbert Marshall McLuhan.

McNamara Fallacy – named for Robert McNamara, the US Secretary of Defense from 1961 to 1968; involves making a decision based solely on quantitative observations or ‘metrics’ and ignoring all other factors. Often the rationale behind this is that other forms of observation cannot be proven. But the reality is often that metrics obsession stems mostly from laziness, cost avoidance, or the fact that metrics tend toward certain answers. See phantasiae vectis.

medium fallax (see Gaussian Blindness) – the tendency to regard or promote the mean (μ) or other easily derived or comprehensive statistic as constituting an equivalent descriptive of the whole body of a set of data or a closely related issue – assuming immunity from the burden of identifying a causal critical path or developing testable mechanism to prove out the contention made (critical elements of scientific theory); or the process of misleading with statistical indications as to the makeup and nature of a body of data. I’ve got my head in the oven, and my ass in the fridge, so I’m OK.

Meet God Argument/Doubtcasting – a form of rhetorical critique in which a person casts inexpert doubt upon (while adding no value) or quibbles with each assumption in their opponent’s argument, reducing it continually to the point of winnowing it down to essential challenges such as ‘prove energy exists’, or ‘prove there is such a thing as entropy’, or ‘prove that the universe is quantum’, etc. A circumstance in which the arguer employs Herculean burdens of proof or opportunistic-as-is-favorable deep or wide questioning with their conversant, and is not interested in anything other than dispute or appearing to win an argument.

mésa éxo Prose or Communication – (from Greek: μέσα έξω; mésa éxo – ‘inside out’) – when one employs sophisticated style in communication as a substitution for competence. Communication is achieved by means of two elements: style and logical delivery. Mésa éxo prose is that writing or speech which is delivered in a compliant and user-friendly style (a grammar, flow, idiom, and sentence structure with which the reader will most likely be familiar and view as culturally sophisticated), however is convoluted in terms of its inference, logical structure, integrity or ability to deliver actual intelligence. Just because its style may appear elite or comfortable does not mean that a piece of communication has been skillfully delivered. A common deception applied in journalism. See also ‘Bridgman Point’.

Methodical Doubt – doubt employed as a skulptur mechanism, to slice away disliked observations until one is left with the data set they favored before coming to an argument. This is the questionable method of denying that something exists or is true simply because it defies a certain a priori stacked provisional knowledge. It is nothing but a belief expressed in the negative, packaged in such a fashion as to exploit the knowledge that claims to denial are afforded immediate acceptance over claims to the affirmative. This is a religious game of manipulating the process of knowledge development into a whipsaw effect supporting a given conclusion set.

MiHoDeAL Bias – when one dismisses an observation prematurely as only being classifiable into a preassigned bucket of Misidentifications, Hoaxes, Delusions, Anecdote and Lies, in absence of having conducted any actual research into the subject.

missam singuli – a shortfall in scientific study wherein two factors are evaluated by non-equivalent statistical means. For instance, risk which is evaluated by individual measures, compared to benefit which is evaluated as a function of the whole – at the ignorance of risk as a whole. Conversely, risk being measured as an effect on the whole, while benefit is only evaluated in terms of how it benefits the individual or a single person.

Mission Directed Blindness – when one believes from being told, that they serve a greater cause, or that some necessary actions must be taken to avoid a specific disaster. Usually this renders the participant unable to handle evidence adeptly under Ockham’s Razor, once adopted.

Mode Median Mean Manipulation – choosing from the statistical mean, mode or median as needed to obtain the most favorable sounding statistic for your argument.
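
A minimal illustration with an invented right-skewed salary sample: each of the three 'averages' tells a different story, and each can be cited as 'the average':

```python
import statistics

# A right-skewed 'salary' sample: pick whichever statistic flatters you.
salaries = [30, 30, 30, 35, 40, 45, 50, 60, 80, 400]  # $k

mode_   = statistics.mode(salaries)    # 30   'the typical worker earns little'
median_ = statistics.median(salaries)  # 42.5 'the middle worker'
mean_   = statistics.fmean(salaries)   # 80   'average pay is generous'

print(mode_, median_, mean_)
```

One outlier at the top drags the mean to nearly double the median; the honest move is to report which statistic is being used and why.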

Monkey Suit Fallacy – the dismissal of an entire field of data simply by abusing a position of authority, or citing an authority declaring the data all to be the result of hoaxes.

Must-Explain Observation – (in contrast with ‘it would be nice if it could explain’) a specific observation inside a domain of science which cannot be brushed off as anecdote and is critical path to the argument being evaluated. In this circumstance a scientific argument or theory must explain the observation, or have its credibility be eroded. Most theoretical explanations of observed phenomena are linear inductive in their inferential strength. Must-explain observations provide a deductive lever for strong theories to emerge as superior from the field of linear inductive theories. Never accept a theory which can only explain convenient observations. You have no claim to science, if you cannot explain even a basic must-explain observation.

Mystery Monger – an individual who gatekeeps and wallows in the ignorance and mystique surrounding a fringe subject; who habitually dismisses any reductive, deductive or falsifying scientific study regarding that topic. Often purporting to (breaking from their normal habit) hold final scientific authority, ‘peer review’ or proof that the reductive, deductive or falsifying evidence can be completely dismissed (suddenly there is no mystery). Exploits the general public’s lack of knowledge about the subject they represent, in order to perpetuate and market the mystery and their central ‘investigator’ role therein. This type of smoke and mirrors professional is usually making a living from the mysterious subject under consideration.

Nelsonian Knowledge – a precise and exhaustive knowledge about that which one claims is not worth examining. No expertise is so profound in its depth as that expertise prerequisite in establishing what not to know. Such Nelsonian knowledge takes three forms:

1. a meticulous attentiveness to, and absence of, that which one should ‘not know’,
2. an inferential method of avoiding such knowledge, and finally as well,
3. that misleading knowledge or activity which is used as a substitute in place of actual knowledge (Nelsonian Displacement).

The first (#1) is taken to actually be known on the part of a poseur. It is dishonest for a man deliberately to shut his eyes to principles/intelligence which he would prefer not to know. If he does so, he is taken to have actual knowledge of the facts to which he shut his eyes. Such knowledge has been described as ‘Nelsonian knowledge’, meaning knowledge which is attributed to a person as a consequence of his ‘willful blindness’ or (as American legal analysts describe it) ‘contrived ignorance’.

Newton’s Flameout –  One who thinks something can be settled merely by a single experiment probably does not understand the question in the first place.

Normative Convergence Paradox – the observation or reality inside of systems theory and modeling that, even in the case wherein all optimal constraints, arrivals, feedback and functions inside a system are modeled to perfect accuracy – a decision or optimal outcome may not necessarily be producible.

Not Invented Here Bias – aversion to contact with or use of products, research, standards, or knowledge developed outside a group in which one is a member or with which one associates.

Observation Denial Special Pleading – a form of spurious data and observation dismissal where a proponent introduces favorable details or excludes unfavorable details regarding the observation, through alleging a need to apply additional considerations, without proper criticism or vetting of these considerations.

Observation vs Claim Blurring – the false practice of calling an observation of data, a ‘claim’ on the observers’ part.  This in an effort to subjugate such observations into the category of constituting scientific claims which therefore must be supported by sufficient data before they may be regarded by science.  In fact an observation is simply that, a piece of evidence or a fact, and its false dismissal under the pretense of being deemed a ‘claim’ is a practice of deception and pseudoscience.

Occam’s Razor – all things being equal, that which is easy for most to understand, and as well conforms with an a priori stack of easy-to-understands, along with what I believe most scientists think, tends to obviate the need for any scientific investigation. A false logical construct invented by SSkepticism to replace and change the efficacy of Ockham’s Razor, the latter employed as a viable principle in scientific logic. Occam’s Razor was a twist off the older Ockham’s Razor, which was slight and almost undetectable, but can be used to reverse the applicability of the more valid thought discipline inside of Ockham’s Razor. “All things being equal, the simplest explanation tends to be the correct one” is a logical fallacy; constituting a completely different and antithetical approach than that of Ockham’s Razor. Occam’s Razor can only result in conformance based explanations, regardless of their scientific validity.

Occam’s Razor Fallacy – abuse of Ockham’s Razor (and misspelling) in order to enact a process of sciencey-looking ignorance and to impose a favored idea. All things being equal, that which is easy for most to understand, and as well conforms with an a priori stack of easy-to-understands, along with what I believe most scientists think, tends to obviate the need for any scientific investigation. Can exist in four forms: transactional, existential, observational and utility blindness.

Transactional Occam’s Razor Fallacy (Appeal to Ignorance) – the false contention that a challenging construct, observation or paradigm must immediately be ‘explained.’ Sidestepping of the data aggregation, question development, intelligence and testing/replication steps of the scientific method and forcing a skip right to its artificially conclusive end (final peer review by ‘Occam’s Razor’).

Existential Occam’s Razor Fallacy (Appeal to Authority) – the false contention that the simplest or most probable explanation tends to be the scientifically correct one. Suffers from the weakness that myriad and complex underpinning assumptions, based upon scant predictive/suggestive study, provisional knowledge or Popper insufficient science, result in the condition of tendering the appearance of ‘simplicity.’

Observational Occam’s Razor Fallacy (Exclusion Bias) – through insisting that observations and data be falsely addressed as ‘claims’ needing immediate explanation, and through rejecting such a ‘claim’ (observation) based upon the idea that it introduces plurality (it is not simple), one effectively ensures that no observations will ever be recognized which serve to frame and reduce a competing alternative.  One will in effect perpetually prove only what they have assumed as true, regardless of the idea’s inherent risk. No competing idea can ever be formulated because outlier data and observations are continuously discarded immediately, one at a time by means of being deemed ‘extraordinary claims’.

Utility Blindness – when simplicity or parsimony are incorrectly applied as excuse to resist the development of a new scientific explanatory model, data or challenging observation set, when indeed the participant refuses to consider or examine the explanatory utility of any similar new model under consideration.

Facile – appearing neat and comprehensive only by ignoring the true complexities of an issue; superficial. Easily earned, arrived at or won – derived without the requisite rigor or effort. Something easy to understand, which is compatible with a predicate or associated stack of also easy-to-understands.

Ockham’s Inversion – the condition when the ‘rational or simple explanation’ requires so many risky, stacked or outlandish assumptions in order to make it viable, that it has become even more outlandish than the complex explanation it was originally posed against and was supposed to surpass in likelihood. Similarly, a condition wherein the proposed ‘more likely or simple’ alternative is just as outlandish in reality as is the originally considered one.

Omnifabulance – (Latin: omni- ‘the complete’, and fabula- ‘narrative’ or ‘the complete narrative or fable’) – the characteristic of an anti-God-like entity, which is the contrapositive of omniscience. That characteristic of a God, Religion, or Central Authority which frames its ability to deliver a coherent and pervasive lie, a complete set of interwoven lies, or highly-fabricated narrative or doctrine. That characteristic ability of an artificial intelligence or other central authority, to warehouse a full array of pseudo-knowledge about a variety of topics, which is narrative-framed, based upon no critical path of inquiry, and mostly absent of true logical calculus. Little of this information can be challenged by specific groups holding the correct knowledge set, because they lack the power to correct the central entity to begin with.

Outference – a critical (not rhetorical) argument which bases its inference or conclusions upon cultivated ignorance and the resulting lack of information, rather than the presence of sound information. More than simply an appeal to ignorance, this ‘lack’ of information is specifically engineered to produce a specious conclusion in the first place. This type of argument gets stronger and stronger the less and less critical information one holds. This is a warning flag of agenda or political shenanigans at play.

Pareidolia Bias –  a presumption that any challenging observation can only be solely the result of vague and random stimulus (often an image or sound) errantly perceived as significant by the observer.

pavor mensura – an effect of apophenia wherein, upon analyzing a system or looking at a set of analytics or an issue for the very first time, observers will often mistakenly perceive that a disaster is in the making. The erroneous tendency to perceive that the first statistical measures of a disease, natural system, dynamic process or social trend – can be reliably extrapolated to predict calamity therein.

Pharmaceutical Research Fraud – nine methods of attaining success results from pharmaceutical studies, which are borderline or fraudulent in nature. The first eight of these were developed by Richard Smith, editor of the British Medical Journal. The ninth is a converse way of applying these fraud accusations to filter out appeals for study under Ockham’s Razor, in situations where studies run counter to pharmaceutical/research revenue goals.

1.  Conduct a trial of your drug against a treatment known to be inferior.

2.  Trial your drug against too low of a dose of a competitor drug.

3.  Conduct a trial of your drug against too high of a dose of a competitor drug (making your drug seem less toxic).

4.  Conduct trials which are too small to show differences from competitor drugs.

5.  Use multiple endpoints in the trial and select for publication those that give favorable results.

6.  Do multicenter trials and select for publication results from centers that are favorable.

7.  Conduct subgroup analyses and select for publication those that are favorable.

8.  Present results that are most likely to impress – for example, reductions in relative risk rather than absolute risk.

9.  Conduct high inclusion bias statistical studies or no studies at all, and employ items 1 – 8 above to discredit any studies which indicate dissenting results.

Pleonasm – is the use of more words or parts of words than is necessary for clear expression, in an attempt to load language supporting, or add judgmental bias to, a contention. A form of hyperbole.

phantasiae vectis – the principle outlining that, when a human condition is monitored publicly through the use of one statistic/factor, that statistic/factor will trend more favorable over time, without any actual real underlying improvement in its relevant domain or condition. Such singular focus often to the detriment of all other related and appropriate factors. Unemployment not reflecting true numbers out of work, electricity rates or inflation measures before key democratic elections, efficiency focus instead of effectiveness, crime being summed up by burglaries or gun deaths only, etc.

Policy Based Evidence Manipulation – when an Einfach or Höchste Mechanism is enforced socially by a governing body or a group enforcing false consensus and pluralistic ignorance to such an extent that the researching, data collection, analytical, legislative or other presiding research group is incentivized to construct objective adjustments to the data collection entailed around the issue being enforced.  Such adjustments, while often scientifically justifiable, introduce bias in two ways: 1) equally scientific counter adjustments are not considered (error by omission), and 2) the magnitude of such adjustments are left up to the sole discretion of the data analysis group. This introduces a guaranteed bias into most information sets featuring a high number or dynamic set of contributing factors/influences or a high number of measurement points.

Poll Skewing Factors – well known in industry, but ignored by ‘statisticians’ in highly contested or manipulated public polls:

I.  Means of Collection – bias-infusing polls use exclusively land line phones as their channel and means of respondent communication – a tactic which is notorious in excluding males, mobile professionals and the full time employed.

II.  Regional Bias Exploitation – call sampling is conducted in the New England states or in California, reflecting a bias towards tax oriented businesses, such as healthcare, insurance, government offices, and the corporations who work and contract with such agencies.

III.  Bradley Effect – people have a tendency to express opinions and intent which fit a social pressure model or keep themselves out of the ‘bad guy’ bucket when polled on polarizing issues. This tends to skew polls notoriously to the left.

IV. Crate Effect – impact of persons who purposely give the opposite response as to what they really think because of animosity towards the polling group (especially if non-free press) and/or their perceived history of bias, and/or animosity towards the circus around elections or the elections themselves. This false left leaning bias is generated most often inside groups who believe media outlets to be left-leaning and unfair.

V. Crate/Bradley Power Effect – the misleading impact of the Crate and Bradley Effects falsely convinces poll administrators of the power they hold to sway the opinion of ‘undecideds’ and misleads their sponsors into funding more and more polls which follow the same flawed protocols and traps.

VI. Streetlight Effect – is a type of observational bias that occurs when people only search for something where it is easiest to look.

VII.  Trial Heat – the overall pressure which is placed on respondent results based on the structure of or questions inside the poll itself (1 Pew Research)

a.  Leading preparatory questions – employing questions which are pejoratively framed or crafted to lead the poll respondent, in order to skew undecided voters, prior to asking the core question, and

b.  Iterative poisoning – running the same poll over and over again in the same community and visibly publishing the desired results – akin to poisoning the jury pool.

VIII.  Crazy-8 Factor – for any question you pose, there is always a naturally errant 8 percent quotient who do not understand, don’t care, purposely screw with you, or really think that gravity pulls up and not down. All that has to be done to effect a 2 – 4 percentage point skew in the data is to bias the questioning so that the half of the Crazy-8 which disfavors your desired result is filtered out through more precise or recursive questions – questions which are not replicated in the converse for the other half of the Crazy-8 which favors your desired result. The analytic which detects this poll manipulation is called a ‘forced-choice slack analysis’ – which examines the Crazy-8 and/or neutral respondents to see if they skew toward a bias in any particular direction.

IX.  Form of Core Question – asking different forms of THE CORE question than is implied by the poll, or different question by polling group. 1. Who do you favor, vs. 2. Who will you vote (will vote) for? vs. 3. Who do you think will win? (3 Pew Research)

X.   Follow Through Effect – only 35 to 55% of people who are polled, on average, will actually turn out to vote. (6 2016 General Election Turnout)

XI.  Oversampling – declaring a bias to exist in a population a priori, in the larger S pool from which an s sample is derived. Then further crafting a targeted addition of population members from S, to influence sample s in the opposite signal (direction and magnitude) from the anticipated bias. (1, 4 Pew Research)

XII. Influencing Effect – the technique of a polling group to release preliminary polling results during the influencing stage of iterative polling (for example, election sentiment). Results which do not fully reflect all the data they have gathered yet, rather target implanting a specific perception or message in the mind of the target polling population. Thereafter, to subsequently show results which include all collected data during the critical actual measurement phase, or in the anticipated completion stages (fictus scientia – see at end of this article).

XIII.  Gaussian Parametrization – the error made by statistical analytical processors of polling data, in which they assume that humans reliably follow a Gaussian distribution. Therefore smaller sample sizes can be used reliably to parametrize the whole.

XIV.  Early Poll Bias/Tip-in – polls early in a process, election cycle or addressing a question for the first time, tend to reflect the bias of the poll sponsors or developers to a more hyperbolic degree. Early election and primary returns will always favor the Left and then hone gradually in to a more accurate representation as time progresses and other competitive polls force them to come clean. Their final poll is always justifiable, and the earlier polls are simply portrayed as resulting from changes in participant sentiment (which is usually baloney).

praedicate evidentia – any of several forms of exaggeration or avoidance in qualifying a lack of evidence, logical calculus or soundness inside an argument. A trick of preemptive false-inference, which is usually issued in the form of a circular reasoning along the lines of ‘it should not be studied, because study will prove that it is false, therefore it should not be studied’ or ‘if it were true, it would have been studied’.

praedicate evidentia – hyperbole in extrapolating or overestimating the gravitas of evidence supporting a specific claim, when only one examination of merit has been conducted, insufficient hypothesis reduction has been performed on the topic, a plurality of data exists but few questions have been asked, few dissenting or negative studies have been published, or few or no such studies have indeed been conducted at all.

praedicate evidentia modus ponens – any form of argument which claims a proposition consequent ‘Q’, which also features a lack of qualifying modus ponens, ‘If P then’ premise in its expression – rather, implying ‘If P then’ as its qualifying antecedent. This as a means of surreptitiously avoiding a lack of soundness or lack of logical calculus inside that argument; and moreover, enforcing only its conclusion ‘Q’ instead. A ‘There is not evidence for…’ claim made inside a condition of little study or full absence of any study whatsoever.

Principle of Diminishing Percentage – over time, an increase in a cumulative amount which is the same each period will represent a lower and lower percentage increase over each successive period – tendering the appearance of a reduction in growth to those who do not understand basic statistics. A common press headline trick is ‘lower percentage growth’ used as a way of implying a ‘reduction’.
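
The arithmetic can be sketched in a few lines of Python (with invented figures): the quantity grows by a constant 10 units every period, yet the period-over-period percentage falls each time, inviting the misleading headline “growth is slowing”.

```python
# Constant absolute growth of 10 units per period on a base of 100.
total = 100
for period in range(1, 6):
    prior = total
    total += 10
    pct = 100 * (total - prior) / prior
    print(f"period {period}: total={total}, growth={pct:.1f}%")
# period 1: total=110, growth=10.0%
# period 2: total=120, growth=9.1%
# period 3: total=130, growth=8.3%
# period 4: total=140, growth=7.7%
# period 5: total=150, growth=7.1%
```

Growth never slowed; only its percentage expression did.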

Procrustean Solution – the undesirable practice of tailoring data to fit its container or some other preconceived argument being promoted by a carrier of an agenda.

Procrustitute – a data scientist or researcher who tailors the data or analytical method so that the results are supportive of a preconceived argument being promoted under a motive of agency.

Psychologism – data or principles purported to be of scientific origin in which psychology plays a key or sole role in gathering, grounding or explaining. Suffers from the weakness that psychological principles enjoy a perch which can never be falsified, therefore they are at risk of standing as pseudoscience.

quod gratis asseritur, gratis negatur – that which can be declared without basis, can be dismissed without basis. This phrase does not mean that the subject declaration is existentially incorrect, nor that the antithetical set therefore bears truth – rather simply that I can refuse to accede to such a declaration, without any particular reason or soundness in arguing. While this is a form of skepticism, the apothegm can be abused to mistakenly perform debunking. The clarifying action on the part of the skeptic is its usage as a refusal to accede, versus a negation of an idea (inverse negation fallacy). The latter is not warranted inside this principle of philosophy.

Recency Bias – the tendency to accept more recent information as being more credible or holding more gravitas in research.

Salami Slicing – the practice of artificially breaking a scientific study down into sub-components and publishing each sub-component as a separate study. If the paper involves consilience from studies derived from a number of disciplines, this might be acceptable.  However, when the studies are broken down simply to increase the publishing history of its authors or make a political position appear to be more scientific, through ‘a thousand studies support the idea that…’ styled articles, then this is a form of pseudoscience.

Sampled Population Mismatch – a study design methodology wherein a small population is sampled for a signal which is desired for detection, and a large population cohort is sampled regarding a signal which is not desired for detection. For example, using n=44 (unvaccinated) and 1,680 (vaccinated) in order to show that those vaccinated exhibit a lower rate of atopic allergies. The variance required to upshift the desired signal in the smaller group is on the order of a mere 2 or 3 persons. The odds of this occurring are very high, especially if the small group all originates from the same or similar lifestyle.
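
The leverage in the n=44 versus n=1,680 example above is easy to quantify: each individual in the small cohort moves its measured rate by whole percentage points, while an individual in the large cohort is statistical noise.

```python
# How much does one person move the measured rate in each cohort?
small_n, large_n = 44, 1680
per_person_small = 100 / small_n   # ≈ 2.27 points per person
per_person_large = 100 / large_n   # ≈ 0.06 points per person

print(f"n={small_n}: {per_person_small:.2f} points per person")
print(f"n={large_n}: {per_person_large:.2f} points per person")

# A swing of just 3 individuals in the small group shifts its rate by:
print(f"{3 * per_person_small:.1f} points")  # ≈ 6.8 points
```

A 6.8-point swing from three people is exactly the kind of artifact that similar-lifestyle clustering in a small unvaccinated cohort can produce.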

Scale Obfuscation – shooting high and shooting low. Any form of a priori assumption wherein a researcher presumes that larger or elemental sample populations are congruent with more study or better science. This as contrasted with an incremental scientific method under a condition of making open context field observations first, crafting the right (rather than presumed right) question, followed by study of multiple iterations of smaller, discrete, cohort and more focused populations, built then into larger field databases and study.

Bigger Data is Better Fallacy – the invalid assumption which researchers make when attempting to measure a purported phenomena, that data extraction inside a very large source population as the first and only step of scientific study, will serve to produce results which are conclusive or more scientific. The same presumption error can apply to meta-analysis. Wherein such an analysis is conducted in a context of low/detached, rather than informed knowledge sets, and will serve to dilute critical elements of signal and intelligence which could be used to elucidate the issue further.

Big is Science Error – the error of treating bigger sample sizes and study data as a way of bypassing the scientific method, while still tendering an affectation of science and gravitas. Also, any instance in which a study cannot be replicated, and a call to consensus is made simply because it would be too difficult to replicate the study basis of the consensus.

Yule-Simpson Paradox – a trend which appears in different groups of data can be manipulated to disappear or reverse (see Effect Inversion) when these groups are combined.
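
The classic form of the paradox can be sketched in a few lines of Python, using hypothetical (successes, trials) counts for two treatments split by case severity:

```python
# Hypothetical (successes, trials) by case severity for treatments A and B.
a = {"mild": (81, 87), "severe": (192, 263)}
b = {"mild": (234, 270), "severe": (55, 80)}

def rate(successes, trials):
    return successes / trials

# Treatment A wins inside every subgroup...
for grp in ("mild", "severe"):
    assert rate(*a[grp]) > rate(*b[grp])

# ...yet B "wins" once the groups are pooled, because A was assigned
# the severe cases far more often.
a_pooled = rate(81 + 192, 87 + 263)   # ≈ 0.78
b_pooled = rate(234 + 55, 270 + 80)   # ≈ 0.83
assert b_pooled > a_pooled
```

Whoever controls the choice to pool or split the groups controls which ‘trend’ gets published.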

Elemental Pleading – breaking down the testing of data or a claim into testing of its constituents, in order to remove or filter an effect which can only be derived from the combination of elements being claimed. For instance, to address the claim that doxycycline and EDTA reduce arterial plaque, testing was developed to measure the impact of each item individually, and when no effect was found, the combination was dismissed as well without study, and further/combination testing was deemed to be ‘pseudoscience’.

Scienter – is a legal term that refers to intent or knowledge of wrongdoing while in the act of executing it. An offending party then has knowledge of the ‘wrongness’ of their dealings, methods or data, but chooses to turn a blind eye to the issue, commensurate to executing or employing such dealings, methods or data. This is typically coupled with a failure to follow up on the impacts of the action taken, a failure of science called ignoro eventum.

Sea Lioning – is a type of Internet trolling which consists of bad-faith requests for evidence, or repeated questions, the purpose of which is not clarification or elucidation, but rather an attempt to derail a discussion, appeal to authority as if representing science, or to wear down the patience of one’s opponent. May involve invalid repetitive requests for proof which fall under a proof gaming fallacy and highlight the challenger’s lack of scientific literacy.

Semmelweis Reflex – the tendency to reject new evidence that contradicts one’s held paradigm.

Shared Information Bias – known as the tendency for group members to spend more time and energy discussing information that all members are already familiar with (i.e., shared information), and less time and energy discussing information that only some members are aware of (i.e., unshared information).

Shopcraft – traits, arrival forms and distributions of data which exhibit characteristics of having been produced by a human organization, policy or mechanism. A result which is touted to be natural, random or unconstrained, however which features patterns or mathematics which indicate human intervention is at play inside its dynamics. A method of detecting agency, and not mere bias, inside a system.

Shotgun Barn Fallacy – when one takes the available set of evidence and habitually draws favored conclusions from it. Firing a shotgun at the broadside of a barn and then drawing the bulls-eye around the pellet holes.

Shrouded Furtive Fallacy – extensive recitation of historical and peripheral information tenders the semblance of deep research and professionalism; yet it is only posed to serve as a distraction inside an article whose crucial argument centers solely on accusations of malfeasance or lying on the part of its target opponents.

Simplicity Sell – when making a pitch through the contention that something is easy.  “Look, it’s simple, right?”

Social Skepticism

1. a form of social activism which seeks abuse of science through a masquerade of its underlying philosophical vulnerability, skepticism. An imperious set of political, social, and religious beliefs which proliferate through teaching weaponized fake skepticism to useful idiots. Agency which actively seeks to foment conflict between science and the lay public, which then exploits such conflict to bolster its celebrity and influence.

2. a form of weaponized philosophy which masquerades as science, science enthusiasm or science communication. Social skepticism enforces specific conclusions and obfuscates competing ideas via a methodical and heavy-handed science embargo. It promotes charades of critical thought, self aggrandizement, and is often chartered to defend corporate/Marxist agendas – all while maintaining a high priority of falsely impugning eschewed individuals and topics. Its philosophies and conclusions are imposed through intimidation on the part of its cabal and cast of dark actors, and are enacted in lieu of and through bypassing actual scientific method. One of the gravest weaknesses of human civilization is its crippling and unaccountable bent toward social coercion. This form of oppression disparages courage and curiosity inside the very arenas where they are most sorely needed.

Failures with respect to science are the result of flawed or manipulated philosophy of science. When social control, close-hold embargo or conformance agents subject science to a state of being their lap-dog, serving specific agendas, such agents err in regard to the philosophical basis of science, skepticism. They are not bad scientists, rather bad philosophers, seeking a socialized goal. They are social skeptics.

Sowell’s Axiom – for every expert, there is an equal and opposite expert, but for every fact there is not necessarily an equal and opposite fact.

Sowert’s Law – a technique of deflection wherein a claim to supposed fact is derived from a stand-alone, manufactured, or trivial observation which is made inside a purposeful context of ignorance or isolation, stripped of its corroborating or supporting aspects. Ignorance + Trivia = “Fact”.

Streetlight Effect – is a type of observational bias that occurs when people only search for something where it is easiest to look.

Survivorship Bias – concentrating on the people or data that “survived” some process and inadvertently overlooking those that didn’t because of their lack of observability.
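
A small simulation (with invented fund returns) makes the effect concrete: funds that perform badly are closed and vanish from the record, so the average computed over the survivors alone overstates typical performance.

```python
import random
random.seed(0)  # fixed seed so the sketch is reproducible

# 1,000 hypothetical funds each take a random yearly return for 10 years;
# any fund that ever falls below 60% of its starting value is closed and
# disappears from the surviving record.
all_funds, survivors = [], []
for _ in range(1000):
    value, alive = 1.0, True
    for _ in range(10):
        value *= 1 + random.gauss(0.0, 0.2)
        if value < 0.6:
            alive = False
            break
    all_funds.append(value)
    if alive:
        survivors.append(value)

all_mean = sum(all_funds) / len(all_funds)
surv_mean = sum(survivors) / len(survivors)
print(f"all funds: {all_mean:.2f}, survivors only: {surv_mean:.2f}")
# The survivors-only average is always the higher of the two.
```

The underlying returns average zero, yet the brochure built from surviving funds reports a gain.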

synítheia – (Greek: ‘habit, convention’) – the tendency to attribute patterns in data to underlying factors which an individual can control, as opposed to first examining demographic or primary factors which are not the result of personal habits or control. The bias of explaining that one’s own perceived or stereotypical habits, when contrasted with a target population, therefore created an observed statistical difference in that target population. Ascribing better disease outcomes in a population which is younger, falsely to their ‘being less obese’ or ‘more healthy’, for instance.

Tail to Body Transference – a statistical fallacy of relevant range in which one statistic is assumed to be fully descriptive across all conditions of its arrival range. An error wherein a descriptive suitable to frame a tail condition, is also applied to the main body of a distribution and assumed to be also descriptive therein. IQ measures which allow us to discriminate special needs persons, are not also useful in determining who should lead nations or corporations at the other end. Cholesterol figures which allow us to highlight who is at risk of arterial plaque do not apply to persons who have a familial history of higher cholesterol stats, etc. Mistakenly applying tail condition gradients to also be salient in the main body of the same descriptive data.

Taleb’s But – the principle which proceeds along the line of the Nassim Nicholas Taleb quote “Everything before the “but” is meant to be ignored by the speaker; and everything after the “but” should be ignored by the listener.”

Taleb’s Contraposition – For real people, if something works in theory, but not in practice, it doesn’t work. For social skeptics and many academics, if something works in practice, but not in theory, it doesn’t exist.

Taleb’s Law of Abundance – Abundance is harder for us to handle than scarcity. ~ Nassim Nicholas Taleb

Taleb’s Law of Data – The more data you get, the less you know what’s going on. ~ Nassim Nicholas Taleb

Taleb’s Law of Intelligence – In a complex world, intelligence consists in ignoring things that are irrelevant ~ Nassim Nicholas Taleb

Taleb’s Law of Tolerance – a toleration of intolerance will always escalate to extremism and proscription as the standard. The most intolerant, wins.

Taleb’s Tale of the Tail – fat tailed distributions hide their tails.

Torfuscation – pseudoscience or obfuscation enacted through a Nelsonian knowledge masquerade of scientific protocol and study design. Inappropriate, manipulated or shallow study design crafted so as to obscure or avoid a targeted/disliked inference. A process, contended to be science, wherein one develops a conclusion through cataloging study artifice or observation noise as valid data. Invalid observations which can be parlayed into becoming evidence of absence or evidence of existence as one desires – by accepting only the appropriate hit or miss grouping one desires as basis to support an a priori preference, and as well avoid any further needed ex ante proof.  A refined form of praedicate evidentia or utile abstentia employed through using less rigorous or probative methods of study than are requisite under otherwise ethical science.  Exploitation of study noise generated through first level ‘big data’ or agency-influenced ‘meta-synthesis’, as the ‘evidence’ that no further or deeper study is therefore warranted – and moreover that research of the subject entailed is now socially embargoed.

Trivia Fallacy – the rejection of an entire set of data by pointing out one questionable or disliked element inside the data.

Twitter’s Razor – all things being equal, the shorter the video or the less information relayed, the more likely a just indignation can be derived.

Univariate Error – a procedural error (not a ‘fallacy’) wherein one is misled because two multivariate distributions may overlap along any single variable, yet be cleanly separable – or have the apparent relationship disappear – once the whole relational or configuration space is examined in its entirety.
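
A toy sketch can make the error concrete (the data below are synthetic, chosen purely for illustration): two groups whose ranges fully overlap on each variable taken alone, yet which separate perfectly once both variables are examined together.

```python
# Illustrative sketch of the Univariate Error, using synthetic data:
# examined one variable at a time, the two groups look hopelessly mixed;
# examined jointly, they are cleanly separable.
group_a = [(i, i) for i in range(10)]        # lies along the line y = x
group_b = [(i, i + 3) for i in range(10)]    # lies along the line y = x + 3

def spans_overlap(p, q):
    # True when the ranges [min, max] of the two sequences intersect
    return max(min(p), min(q)) <= min(max(p), max(q))

xs_a, ys_a = zip(*group_a)
xs_b, ys_b = zip(*group_b)

# Each variable alone: the groups appear indistinguishable.
print(spans_overlap(xs_a, xs_b))   # True - x ranges overlap entirely
print(spans_overlap(ys_a, ys_b))   # True - y ranges overlap as well

# The joint view: the residual y - x separates the groups cleanly.
res_a = {y - x for x, y in group_a}   # {0}
res_b = {y - x for x, y in group_b}   # {3}
print(res_a.isdisjoint(res_b))        # True - perfectly separable
```

An analyst who stops at the single-variable comparison would conclude the groups are the same; the structure lives only in the configuration space.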

utile absentia – a study which observes false absences of data, or creates artificial absence noise through improper study design, and then assumes such error to represent verified negative or positive observations. A study containing field or set data in which there exists a risk that absences in measurement data will be caused by external factors which artificially serve to make the evidence absent, through risk of failure of detection/collection/retention of that data. The absences of data, rather than being filtered out of analysis, are fallaciously presumed to constitute bona fide observations of negatives. This is improper study design, which will often serve to produce an inversion effect (curative effect) in such a study’s final results. Similar to torfuscation. As well, the instance when an abstract offers a background summary or history of the topic’s material argument as part of its introductory premise, and thereafter mentions that its research supports one argument or position – yet fails to define the inference or critical path which served to precipitate that ‘support’ – or even worse, tenders research about a related but non-critical aspect of the research. Like pretending to offer ‘clinical research’ supporting opinions about capitalism inside a study of the methods employed by bank tellers – it only sounds related. In this case one has converted an absence into a positive, a formal error called utile absentia. This sleight-of-hand allows an opinion article to masquerade as a ‘research study’. It allows one to step into the realm of tendering an apparent epistemological result which is really nothing more than a ‘they said/I say’ editorial with a scientific analysis bolted onto it, and which may or may not bear whatsoever upon the subject at hand. Most abstract surveyors do not know the difference – and most study leads cannot detect when this has occurred.

via negativa (vs. via positiva) – a way of describing something by saying what it is not. For example, employing a finite concept of an attribute or object as a non-description of God, ultimate reality, a human condition, etc. This bears less confidence in inference than does a via positiva attribute or description.

Virtue Telescope – employment of a theoretical virtue benefit projected inside a domain which is distant, slow moving, far into the future, diffuse or otherwise difficult to measure in terms of both potential and resulting impact, as exculpatory immunity for commission of an immoral act which is close by, obvious, defined and not as difficult to measure. Similar to but converse of an anachronistic fallacy, or judging distant events based on current norms.

‘Vitamins Don’t Cure Cancer’ Fallacy – an informal fallacy wherein the task of proof or effect threshold assigned to the test subject is far in excess of, or out of context with, the contentions being made about the test subject. For example: studies show that supplements don’t cure cancer, therefore supplements are all a waste of money; or there are 28 types of depression, therefore inflammation is not related to depression; or quality of life improvement is not a ‘medical outcome’. Ridiculous contentions which are not backed by the study they cite as evidence.

Von Restorff Effect – the bias inducing principle that an item that sticks out is more likely to be remembered than other items.

Whipping Horse – a martyr issue, study, chart, graphic or event which is cited as exemplary in condemning a targeted group or thought set – which is over-employed, or used in a domain of such ample equivocation, equivocal slack, straw man or other ambiguity, that it simply becomes a symbol and ironically maintains no real salience to the argument at hand.

Wolfinger’s Inductive Paradox – an ‘anecdote’ to the modus praesens (observation or case which supports an objective presence of a state or object) constitutes data, while an anecdote to the modus absens (observation supporting an appeal to ignorance claim that a state or object does not exist) is merely an anecdote. One’s refusal to collect or document the former, does not constitute skepticism. Relates to Hempel’s Paradox.

Wolfinger’s Misquote – you may have heard the phrase ‘the plural of anecdote is not data’. It turns out that this is a misquote. The original aphorism, by the political scientist Ray Wolfinger, was just the opposite: ‘The plural of anecdote is data’. The only thing worse than the surrendered value (as opposed to collected value, in science) of an anecdote is the incurred bias of ignoring anecdotes altogether.  This is a method of pseudoscience.

Yule-Simpson Paradox – a trend which appears in several different groups of data can be manipulated to disappear or reverse (see Effect Inversion) when these groups are combined.
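
The textbook kidney-stone illustration makes the reversal concrete (the figures below come from the well-known Charig et al. 1986 example, not from the original text): treatment A wins inside every subgroup, yet loses once the subgroups are pooled.

```python
# Yule-Simpson paradox, using the classic kidney-stone figures
# (Charig et al. 1986): A beats B in each subgroup, B beats A overall.
small = {"A": (81, 87),   "B": (234, 270)}  # (successes, cases), small stones
large = {"A": (192, 263), "B": (55, 80)}    # (successes, cases), large stones

def rate(successes, cases):
    return successes / cases

for name, grp in (("small stones", small), ("large stones", large)):
    a, b = rate(*grp["A"]), rate(*grp["B"])
    print(f"{name}: A {a:.0%} vs B {b:.0%}")   # A better in both subgroups

# Pooling the subgroups reverses the ranking:
tot_a = (small["A"][0] + large["A"][0], small["A"][1] + large["A"][1])  # (273, 350)
tot_b = (small["B"][0] + large["B"][0], small["B"][1] + large["B"][1])  # (289, 350)
print(f"combined: A {rate(*tot_a):.0%} vs B {rate(*tot_b):.0%}")        # B better overall
```

The reversal arises because treatment A was disproportionately assigned the harder (large-stone) cases – a confounding variable that vanishes from view when the groups are merged.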

Yule’s Razor – all things being equal, favor a successive falsifying study/conclusion which was serendipitous in nature, over an initial suggestive one featuring anchoring bias in its discovery, inclusion, and methodology. Data volume and Yule-Simpson contradiction bear less importance than this razor.

Zeigarnik Effect – states that people remember uncompleted or interrupted tasks better than completed tasks. This imparts a bias to refute arguments or ideas which are unfinished.

The Ethical Skeptic, “The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data”; The Ethical Skeptic, WordPress; Web, https://theethicalskeptic.com/2009/09/23/the-tree-of-knowledge-obfuscation-misrepresentation-of-evidence-or-data/
