The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data

The following is The Ethical Skeptic’s list, useful in spotting both formal and informal logical fallacies, cognitive biases, statistical breaches and styles of crooked thinking on the part of those in the Social Skepticism movement. It is categorized by employment groupings so that it can function as a context-appropriate resource in a critical review of an essay, imperious diatribe or publication by a thought-enforcing Social Skeptic. To assist this, we have composed the list inside an intuitive taxonomy of ten contextual categories of mischaracterization/misrepresentation:

Opponents, Locution or Semantics, Evidence or Data, Bias or Method, Science, Argument, Assumption, Groups, Self and Authority

Misrepresentation of Evidence or Data

acatalepsia Fallacy – a flaw in critical path logic wherein one appeals to the Pyrrhonistic Skepticism principle that no knowledge can ever be entirely certain – and twists it into the implication that knowledge is therefore ascertained by the mere establishment of some form of ‘probability’. Moreover, that once a probability is established, no matter how slight or scant a representation of the domain of information it might constitute, it is thereby accepted truth. Because all knowledge is only ‘probable’ knowledge, all one has to do is spin an apparent probability, and one has ascertained accepted knowledge. Very similar in logic to the Occam’s Razor aphorism citing that the ‘simplest explanation’ is the correct explanation.

Aleatoric Casuistry – employing statistical uncertainty, representative of unknowns that differ each time we run the same experiment, observation or situation, in order to push the idea that something very uncommon is actually common, can be, or has been common. A version of ‘you never know’ or ‘I bet this happens a lot’. A way of implying that an epistemological basis for a frequency of epistemic uncertainty exists, when indeed it does not.

Amateur Confidence Fallacy – the act of substituting simple probability math for confidence intervals in order to manipulate outcomes – because one does not understand the difference, or because the two look like the same thing to a layman – in instances where only confidence intervals can be correctly applied under the scientific method.
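
The distinction can be made concrete. Below is a minimal Python sketch (the trial counts are hypothetical) contrasting the layman’s ‘probability math’ of quoting a raw proportion with a Wilson score confidence interval – the kind of bounded statement a method-respecting analysis would actually produce:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion -- a confidence
    statement, as opposed to quoting the raw proportion as settled."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_interval(7, 10)   # 7 'hits' in 10 trials
# naive probability math says "70%"; the interval is roughly (0.40, 0.89)
```

Seven hits in ten trials is not ‘70%’; at 95% confidence it is a claim somewhere between roughly 40% and 89% – a far weaker assertion than the amateur’s arithmetic suggests.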

Amplanecdote – something which has occurred a scientifically significant number of times, or many times over, yet which is ignored through agenda or by means of a baseless claim of categorization as anecdote.

Anchoring Bias – when a person is over-reliant on the first piece of information they have encountered, or begins a branch and bound search or negotiation at a starting point which is arbitrary, yet which causes them to cede credence to that range from then on.

Anecdote Data Skulpting (Cherry Sorting) – when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data. Based upon the a priori and often subjective claim that the observation was ‘not reliable’. Ignores the probative value of the observation and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative, and can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).

Antiquing Fallacy – the dismissal of an entire field of data by showing its false, hoax based or dubious past inside a set of well known anecdotal cases. Also the instance where a thesis is deemed incorrect because it was commonly held when something else, clearly false, was also commonly held.

Apophenia Bias – the immediate dismissal of data as being manufactured, mis-analyzed, or reflecting random patterns which are ascribed empirical meaning, without having conducted the research into the data, nor possessing the background in the discipline involved in order to be able to make such a claim.

Appeal to Apati Fallacy – ‘Appeal to the hoax’ fallacy of presumption and irrelevance.  The attempt to impugn a subject by citing or fabricating a history or incident involving a hoax of one or more of the subject’s contentions.  The fallacy resides in the fact that if it exists, there is porn of it; and likewise, if it exists or not, there is a hoax of it.

Arrival Bias – the tendency to tender more credibility or gravitas to information which is hot off the press or has just been introduced.

Ascertainment Bias – a form of inclusion or exclusion criteria error where the mechanism of sampling, specimen selection, data screening or sub-population selection is inherently flawed, is developed based upon an inadequate or sophomoric view of the observations base or produces skewed representations of actual conditions.

Associative Condemnation – the attempt to link controversial subject A with personally disliked subject B, in an effort to impute falsehood to subject A through the association of some idea or keyword common to both topics. Guilt through association and lumping all subjects into one subjective category. This typically will involve a context shift or definition expansion in a key word as part of the justification.

Availability Error – to adjudicate the answer to questions according to the examples that come most easily to mind, rather than a wide or representative sample of salient evidence.

Availability Heuristic – to adjudicate the answer to a question according to only the information which is available at that time.

Bandwagon Blindness – when a group fails to see their own mistakes or errors inside a hot issue, usually obscured by the common spread of propaganda, and therefore must view any critique of, error or data contradiction as being the fault of opposition or outside parties.

Base Rate Fallacy – an error in thinking where, if presented with related base rate information (i.e. generic, general information) and specific information (information only pertaining to a certain anecdotal case), the mind tends to ignore the former and focus on the latter in characterizing the whole set of relevant data regarding a subject.
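
The classic illustration is diagnostic testing. A minimal Python sketch, with hypothetical accuracy figures, showing how ignoring the generic base rate information inflates what a specific ‘positive’ result actually means:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# A test that is 99% sensitive with a 5% false-positive rate,
# applied to a condition with a 1% base rate:
p = posterior(0.01, 0.99, 0.05)
# p is about 0.167 -- only a roughly 1-in-6 chance, despite the
# "99% accurate" test, because the base rate dominates
```

The specific information (the test result) feels decisive; the ignored general information (the 1% base rate) is what actually governs the answer.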

Bergen’s Principle – 1.  Reward always outweighs miscalculated or ignored risk. 2. Misevaluated risk is never cumulative.

Bias Error – when using bias, fallacy or judgement error proclamations to condemn opinions of those who disagree with you, solely to push political, social or economic goals inside decision sets which are not clearly addressed by empirical or scientific backing.

Bias Inflation – a baseless claim of discrediting an observation set through identifying a plausible, relevant or even salient observer bias, which might have or did contribute to the observational profile, yet which can at most only explain a small or negligible portion of the observation base itself.

Big is Science Error – the presumption that bigger sample sizes and larger bodies of study data are a way of bypassing the scientific method, while still tendering an affectation of science and gravitas. Also, any instance where a study cannot be replicated, yet a call to consensus is made simply because it would be too difficult to replicate the study basis of that consensus.

Bigger Data is Better Fallacy – the invalid assumption which researchers make when attempting to measure a purported phenomenon, that data extraction inside a very large source population, as the first and only step of scientific study, will serve to produce results which are conclusive or more scientific. The same presumption error can apply to meta-analysis, wherein such an analysis is conducted in a context of low/detached, rather than informed, knowledge sets, and will serve to dilute critical elements of signal and intelligence which could be used to elucidate the issue further.

Black Swan Fallacy – a claim that the highly improbable will never happen, or that one has a grasp of the full set of the highly improbable, or that one has ascertained the likelihood of a highly improbable event through statistics – when there is no precedent or knowledge base which allows for any objective epistemic basis for such a calculation. Any appeal to lack of knowledge which dismisses a highly improbable event as impossible, based upon a faulty estimation of the likelihood of a single unlikely event not happening.
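
One quantitative corrective here is the statistical ‘rule of three’: observing zero occurrences in n trials does not license the claim that the event’s probability is zero – only that its 95% upper bound is about 3/n. A minimal sketch (trial counts hypothetical):

```python
def rule_of_three_upper_bound(n):
    """95% upper confidence bound on an event's probability after
    n independent trials in which the event was never observed.
    The naive estimate -- 'it has never happened, so it never will' --
    is the black swan fallacy in miniature."""
    if n <= 0:
        raise ValueError("need at least one trial")
    return 3 / n

# 300 trials, zero events: the probability may still be as high as 1%
bound = rule_of_three_upper_bound(300)
```

Absence of the event in a finite record bounds its probability; it never establishes the zero that the fallacy presumes.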

Blind-spot Bias – when one fails to recognize, attempt to recognize, or be circumspect about their own biases inside an argument or data set. To notice a bias, fallacy or error in others more readily than in one’s self.

Bootstrapping (Index/Strength) – from the tall tales about the 18th-century German nobleman Baron Munchausen and his wartime exploits against the Ottoman Empire; specifically wherein he pulled himself up out of a well by his own bootstraps. A computational technique for estimating a statistic for which the underlying distribution is unknown, or a sampling technique which estimates the sampling distribution by repeatedly resampling data from the original observation set. It is most often employed as a means to estimate confidence levels of clade structures within a phylogenetic tree in genetics. However, it can also be used to describe an inference which is measured as to its risk in the draw. A 50/100 Bootstrap index bears significant risk, whereas a 90/100 Bootstrap index implies a greater degree of confidence in the inference, and therefore less risk.
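
A minimal percentile-bootstrap sketch in Python (the data values are hypothetical), estimating a confidence interval for the mean when the underlying distribution is unknown:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic:
    resample the data with replacement many times, compute the
    statistic on each resample, and read off the percentiles."""
    rng = random.Random(seed)
    estimates = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_resamples)
    )
    lo = estimates[int(alpha / 2 * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [4.1, 5.0, 5.2, 5.9, 6.3, 6.8, 7.7, 9.4]
lo, hi = bootstrap_ci(sample)   # interval bracketing the sample mean (6.3)
```

Nothing about the population’s distribution is assumed; the data are ‘pulled up by their own bootstraps’ to estimate the uncertainty in the draw.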

brevis lapsus (‘Word Salad’ Fallacy) – the inability to understand technical or precise writing, mistaking it for constituting a pleonasm. This in favor of simplistic writing which is, either with or without the intent of the opponent, subsequently rendered vulnerable to equivocation. An accusation made when a dilettante person fails to understand philosophical or technical writing, wherein the base argument or its requisite vocabulary reside completely over the head of the individual who started the argument to begin with.

Chekhov’s Gun – is a dramatic principle that states that every element in a fictional story must be necessary to support the plot or moral, and irrelevant elements should be excluded. It is used as part of a method of detecting lies and propaganda. In a fictional story, every detail or datum is supportive of, and accounted for, as to its backing of the primary agenda/idea. A story teller will account in advance for every loose end or ugly underbelly of his moral message, all bundled up and explained nicely – no exception or tail conditions will be acknowledged. They are not part of the story.

Cladistic Dismantling (Deconstruction) – the definition of a malady or observed set of phenomena, into numerous ‘distinction without a difference’ subsets in an attempt to disguise or cloud noise around the overall trend in numbers involved in the malady or the phenomenon’s overall impact.

Clarke’s Second Law – the only way of discovering the limits of the possible is to venture a little way past them into the impossible.

Click Bait (or Headline) Skepticism – a position backed by articles or studies in which the headline appears to support the position contended, yet which in reality contend something completely or antithetically different. A skeptical understanding which is developed through sound bites and by never actually reading the abstract, method or content of cited articles or studies.

Compactifuscation – the merging of several disparate but associated concepts or definitions into one single descriptive term, so that epistemological weakness or strengths characteristic of a subset of the definitions held equivocally inside the term, can be ported over to the remaining set of definitions, without overt support or challenge in doing so. For instance the merging of sentience, awareness, meta-awareness, identity and meta-identity all into the term ‘consciousness’, so that studies on beetles can be ported over and apply to the hard problem of human consciousness.

Complexifuscation – the introduction of similar signals, inputs or measures, alongside a control measure or an experimental measure, in an attempt to create a ‘cloud of confusion or distraction’ around the ability to effect observation, control or measure of a targeted set of data. Preemption of a phenomenon with in-advance flurries of fake hoaxes, in order to obscure the impact of, or jade the attention span of a target audience around, a genuinely feared phenomenon.

Confirmation Bias – the tendency to immediately accept propaganda published in the opponent’s favored group, and to reject observations, data or ideas which do not fit the opponent’s favored models.

Continuum Fallacy – erroneous rejection of a vague claim or loosely defined data set simply because it is not as precise as one would like it to be.

Contrathetic Impasse – a paradoxical condition wherein multiple competing hypotheses and/or ad hoc plausible explanations bear credible inductive evidence and research case history – yet each/all hypotheses or explanations have been falsified/eliminated as being sufficiently explanatory for more than a minor portion of a defined causal domain or observation set. For instance, the MiHoDeAL explanation contains 5 very credible possible explanations for challenging phenomena. However, the sum total of those 5 explanations often only amounts to explaining maybe 5 – 15% of many persistent paranormal phenomena. The presumption that one of those explanations is comprehensively explanatory, is a trick of pseudoscience. Another new hypothesis is therefore demanded in the circumstance of a contrathetic impasse paradox.

Causes or influences which contribute to a contrathetic impasse:

1.  Foundational assumptions/investigation are flawed or have been tampered with.
2.  Agency has worked to fabricate and promote falsifying or miscrafted information as standard background material.
3.  Agency has worked to craft an Einfach Mechanism (Omega Hypothesis) from an invalid null hypothesis.
4.  Agency has worked to promote science of psychology, new popular theory or anachronistic interpretation spins on the old mystery.
5.  SSkeptics have worked to craft and promote simple, provisional and Occam’s Razor compliant conclusions.
6.  Agency has worked to foist ridiculous Imposterlösung constructs in the media.
7.  Agency has worked to foist shallow unchallenged ad hoc explanations in the media.
8.  SSkeptics seem to have organized to promote MiHoDeAL constructs in the media.
9.  There exists a set of repeatedly emphasized and/or ridiculously framed Embargo Hypotheses.
10.  Agency has worked to promote conspiracy theory, lob & slam Embargo Hypotheses as an obsession target to distract or attract attack-minded skeptics to the mystery. The reason this is done is not the confusion it provides, rather the disincentive which patrolling skeptics place on the shoulders of the genuine skilled researcher. These forbidden alternatives may be ridiculous or indeed ad hoc themselves – but the reason they are raised is to act as a warning to talented researchers that ‘you might be tagged as supporting one of these crazy ideas’ if you step out of line regarding the Omega Hypothesis.

Correlation Dismissal Error – when employing the ‘correlation does not prove causality’ quip to terminally dismiss an observed correlation, when the observation is being used to underpin a construct or argument possessing consilience, is seeking plurality, constitutes direct fingerprint evidence and/or is not being touted as final conclusive proof in and of itself.

Cunningham’s Law – an approach to Akratic Trolling which states that the best way to get the right answer on the Internet is not to ask a question, it’s to post the wrong answer.

Dichotomy of Compartmented Intelligence – a method of exploiting the compartmented skill sets of scientists, in order to preclude any one of them from fully perceiving or challenging a study direction or result. Call your data analysts ‘data scientists’ and your scientists who do not understand data analysis at all, the ‘study leads’. Ensure that those who do not understand the critical nature of the question being asked (the data scientists) are the only ones who can feed study results to people who exclusively do not grasp how to derive those results in the first place (the study leads). This is a method of intelligence laundering and is critical to everyone’s being deceived, including study authors, peer reviewers, the media and the public at large.

Duty to Address and Inform (Hypothesis) – a critical element and aspect of parsimony regarding a scientific hypothesis. The duty of such a hypothesis to expose and address in its syllogism, all known prior art in terms of both analytical intelligence obtained or direct study mechanisms and knowledge. If information associated with a study hypothesis is unknown, it should be simply mentioned in the study discussion. However, if countermanding information is known, the structure of the hypothesis itself must both inform of its presence and as well address its impact.

Ecological Fallacy – a logical fallacy in the interpretation of statistical data where inferences about the nature of individuals or isolated observations/studies are deduced from inference regarding the group or broader study domain to which those individuals or observations belong. A complement of the Yule-Simpson Effect, wherein results obtained for subsets of data tend to disappear when those subsets are combined.
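
The Yule-Simpson Effect is easy to demonstrate. A minimal Python sketch with hypothetical treatment figures, in which treatment A outperforms B within every subgroup, yet appears to underperform it once the subgroups are pooled:

```python
# hypothetical (successes, patients) counts by subgroup and treatment
groups = {
    "small": {"A": (81, 87), "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, n):
    return successes / n

# within each subgroup A wins:
#   small: A ~93% vs B ~87%;  large: A ~73% vs B ~69%
# pooled, A appears to lose: ~78% vs ~83%
pooled = {
    t: rate(sum(groups[g][t][0] for g in groups),
            sum(groups[g][t][1] for g in groups))
    for t in ("A", "B")
}
```

The subgroup result is the real signal; the pooled figure is an artifact of the unequal subgroup sizes – the very inference-across-levels error the entry describes.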

Effect Inversion – one sign of a study which has been tampered with through inclusion and exclusion criteria, in an effort to dampen below significance or eliminate an undesired signal, is the circumstance where an inverse or opposite relationship effect is observed in the data when the inverse question is asked concerning the same set of data. If a retrospective cohort study purportedly shows no effect relationship between a candidate cause and a malady – there should also be no relationship between the candidate cause and the absence of the malady as well (if the two are indeed epidemiologically unrelated). The presence of a reverse or apparently ‘curative’ influence of the candidate cause being evaluated in the data may signal the impact of data manipulation.

Elemental Pleading – breaking down the testing of data or a claim into testing of its constituents, in order to remove or filter an effect which can only be derived from the combination of elements being claimed. For instance, to address the claim that doxycycline and EDTA reduce arterial plaque, testing was developed to measure the impact of each item individually, and when no effect was found, the combination was dismissed as well without study, and further/combination testing was deemed to be ‘pseudoscience’.

Embargo Hypothesis (Hξ) – was the science terminated years ago, in the midst of large-impact questions of a critical nature which still remain unanswered? Is such research now considered ‘anti-science’ or ‘pseudoscience’?

Entscheiden Mechanism – the pseudoscientific or tyrannical approach of, when faced with epistemology which is heading in an undesired direction, artificially declaring under a condition of praedicate evidentia, the science as ‘settled’.

Poison Pill Hypothesis – the instance wherein sskeptics or agency work hard to promote lob & slam condemnation of particular ideas. A construct obsession target used to distract or attract attack-minded skeptics into a contrathetic impasse or argument. The reason this is done is not the confusion or clarity it provides, rather the disincentive which patrolling skeptics place on the shoulders of the genuine skilled researcher. These forbidden alternatives (often ‘paranormal’ or ‘pseudoscience’ or ‘conspiracy theory’ buckets) may be ridiculous or indeed ad hoc themselves – but the reason they are raised is to act as a warning to talented researchers that ‘you might be tagged as supporting one of these crazy ideas’ if you step out of line and do not visibly support the Omega Hypothesis. A great example is the skeptic community’s tagging of anyone who considers the idea that the Khufu pyramid at Giza might not have been built by King Khufu in 2450 BCE, as therefore now supporting conspiracy theories or aliens as the builders – moreover, as being racist against Arabs, who are now the genetic group which occupies modern Egypt.

Epistemological Psychic Study – when a statistical trend study shows a trend in data, but the only data elements supporting the contended trend all exist in the future and/or none of the past data supports such a trend; their being included simply to serve an appearance of statistical rigor and significance.  Usually applied to support a political cause, and in an arena where the observable final outcome will be well beyond the proponent’s lifespan or political career.

epoché (ἐποχή, “suspension”) – a suspension of disposition. The suspended state of judgement exercised by a disciplined and objective mind. A state of neutrality which eschews the exercise of religious, biased rational or critical, risky provisional and dogmatic dispositions when encountering new observations, ideas and data. It is the step of first being skeptical of self, before addressing challenging or new phenomena. It is underpinned by both an examination of the disciplines of true knowledge development (epignosis) and the repository of vetted and accepted knowledge (gnosis). If someone relates a challenging observation to you, you suspend disposition, and catalog it. If you toss it out based upon a fallacy or trivial flaw – then you are a cynic, not a skeptic.

Ergodicity Ignorance – the failure of a data scientist, or of one attempting to interpret a series or body of data inside a dynamic system, to grasp that the system will possess the same behavior averaged over time, regardless of the makeup or state of the system. If you tax the richest 15%, a new richest 15% who benefit from the new dynamic will arrive to take their place, yet the historical statistical tier breakout will not change. Hence, Goodhart’s Law: when one statistic becomes the focus of an attempted system change, it ceases to be a good statistic.

Essential Schema Filtering Error – when one uses pop psychology studies such as the 1980’s Loftus Study to dismiss memories and observations which they do not like. By citing that memories and eyewitness testimony are unreliable forms of evidence, pretend skeptics present an illusion of confidence on dismissing disliked eyewitness essential schema data, when neither the Federal Rules of Evidence, science nor even the cited studies make such a claim which allows the dismissal of eyewitness testimony at all.

ex ante – an inference which is derived from predictive, yet unconfirmed forecasts. While this may be a result of induction, the most common usage is in the context of abductive inference.

Exception Fallacy – is the converse of the ecological fallacy, wherein one infers a group conclusion on the basis of individual, small study or exceptional cases. This is the kind of fallacious reasoning that is at the core of stereotyping.

Existential Fallacy of Data – the implication or contention that there is an absence of observation or data supporting an idea, when in fact no observational study at all, or of any serious import has been conducted by science on the topic at hand.

Experimenter’s Bias – the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.

Fabutistic – a statistic which is speciously cited from a study or set of skeptical literature, around which the recitation user misrepresents its employment context or possesses scant idea as to what it means, how it was derived, or what it is saying or is indeed not saying. Includes the instance wherein a cited statistic is employed in a fashion wherein only the numerals are correct (i.e. “97%”) and the context of employment is extrapolated, hyperbole or is completely incorrect.

Facile – appearing neat and comprehensive only by ignoring the true complexities of an issue; superficial. Easily earned, arrived at or won – derived without the requisite rigor or effort.

¡fact! – lying through facts. Data or a datum which is submitted in order to intimidate those in a discussion, is not really understood by the claimant, or rather which is made up, is not salient or relevant to the question being addressed, or is non-sequitur inside the argument being made. The relating of a fact which might be true, does not therefore mean that one is relating truth.

Factcidental – facts which are correct, but only by trivia or accident or which possess no real sequitur relationship with the discussion/issue context or logical calculus.

Fair Dinkum – a question which essentially asks ‘Have you put in the hard work and direct experience, ability or integrity which substantiates the quality of your offering?’ ‘Dinkum’ is a slang term that appears to have grown up from a variety of stories and dialects in Australia – and bearing two meanings, proof in the pudding by means of ‘work’ ‘value’ and/or ‘fair play’.

Fallacy of Excluded Exceptions – a form of data skulpting in which a proponent bases a claim on an apparently compelling set of confirming observations to an idea, yet chooses to ignore an also robust set of examples of a disconfirming nature. One chisels away at, disqualifies or ignores large sets of observation which are not advantageous to the cause, resulting in only seeing what one sought to see to begin with.

Fallacy of Extrapolated Inversion – a form of straw man argument where data describing a phenomenon peculiar to one population under a specific set of circumstances, is extrapolated and applied to a completely different phenomenon or population under a completely different set of circumstances, to underpin a straw man assertion about the latter. For example, “Despite a supposed (note the prejudicial language) surge in nationalism across the globe, many people like to watch movies and TV shows from other countries. The xenophobic leaders aren’t succeeding in changing people’s interest in others.”

Fallacy of Relative Privation – dismissing an avenue of research as a waste of scientists’ time, due to the existence of more important, but unrelated, problems in the world which require priority research.

Fallacy of Univariate Linear Inductive Inquiry into Complex Asymmetric Systems (Univariate Fallacy) – the informal fallacy of attempting to draw inference regarding complex dynamic systems, such as biological, economic or human systems, through employment of singular, linear or shallow inductive analytical methods. A deep understanding of complex systems first demands a conceptual and analytical strategy that respects, defines and adequately reduces for analysis, that complexity. Only then can multiple component analyses be brought to bear as a consilience in understanding the whole.

Falling Between the Cracks – data which should have been brought into an argument, but which was neglected because each of the responsible members in a research or petitioning group assumed that such data was the responsibility of the other parties.

False Domain Equivalence – a form of ambiguity slack exploitation wherein one equates the probability metrics which can be derived from a qualified, constrained or specific domain or circumstance, to be comparable to the use of ‘probability’ inside a broad domain or one lacking Wittgenstein parameters, constraints or descriptives.

Fat Tony – an observer-stakeholder who is tail or fat-tail focused in instinct. Fat Tony is dismissive of the probabilistic heuristic involved in a decision process and instead either spots and exploits the role of agency inside a purportedly probabilistic process, or spots and exploits the black swan events in the portfolio of those whose decision heuristics depend solely upon probabilistic outcomes. In either case Fat Tony uses the unlikely or paradigm-breaking aspect of a set distribution, in order to make a derived advantage.

Filbert’s Law – or the law of diminishing information return. Increasing the amount of data brought into an analysis does not necessarily serve to improve salience, precision or accuracy of the analysis, yet comes at the cost of stacking risk in veracity. The further away one gets from direct observation, and the more one gets into ‘the data’ only, the higher is the stack of chairs upon which one stands. To find a result use a small sample population, to hide a result use a large one.
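
The closing aphorism has a statistical underside: with a large enough sample, a practically meaningless difference becomes ‘statistically significant’. A minimal two-sample z-test sketch in Python (the effect size is hypothetical), showing the p-value for the same trivial difference collapsing as n grows:

```python
import math

def p_value_for_mean_diff(diff, sigma, n):
    """Two-tailed two-sample z-test p-value for a fixed mean
    difference between two groups of size n (known sigma)."""
    z = diff / (sigma * math.sqrt(2 / n))
    # Phi(z) via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# a practically meaningless difference of 0.02 sigma:
small_n = p_value_for_mean_diff(0.02, 1.0, 100)      # ~0.89: 'no effect'
huge_n  = p_value_for_mean_diff(0.02, 1.0, 100_000)  # <0.01: 'significant'
```

The effect never changed; only the stack of chairs grew taller. Significance purchased purely through volume is precisely the diminishing information return the entry warns of.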

Five Percent Fake – when a methodical cynic cites the existence of the “5% of unexplained cases” myth in an effort to appear objective. This in an effort to avoid the perception of not possessing a series of outlier data points (the 5%), the absence of which would tender the appearance of cynical bias.

Forer Effect – when an individual tenders estimations of high accuracy to descriptions of their personality or life circumstance that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range or a common subset of people.

Forer Error – ascribing accurate forecasting to be merely the outcome of the Forer Effect, in absence of adequate knowledge, personal research or statistical rigor regarding the particular circumstance, which could substantiate such a claim. The principle being that an accurate forecast establishes plurality and both parties now hold a burden of proof if a claim is tendered.

Fruit of the Poisonous Tree – an invalid principle of fake skepticism drawn from legal precedent, which stipulates that evidence which is obtained from illegal sources or methods is inadmissible in a court of law. Skeptics appropriate this principle to allow them to dismiss valid evidence simply because it came from a source, method or person whom they dislike – regardless of how valid the evidence is, stand alone. It is a skulptur mechanism, a method of evidence filtering.

Furtive Confidence Fallacy – the refusal to estimate, grasp or apply principles of statistical confidence to collected data and observed arrival distributions, as a means of falsely bolstering the perceived validity of, or avoiding the signalling of validity in, an observation set or body of data. The act of prematurely declaring, or doggedly denying, a multiplicity of anecdote to be equal to data.

Furtive Fallacy – undesired data and observations are asserted to have been caused by the malfeasance of researchers or laymen.

Gaslighting – a form of manipulation that seeks to sow seeds of doubt in a targeted individual or in members of a targeted group, hoping to make them question their own memory, perception, and/or sanity. Using persistent denial, disinformation, misdirection, contradiction, manipulated statistics and organic untruths, in an attempt to destabilize the target and delegitimize the target’s beliefs, understanding or confidence in self.

Gaussian Blindness (see medium fallax) – the tendency to characterize an entire population by the mean (μ) of the population, along with a Normal Distribution profile or other easily applied distribution, as being descriptive of the whole body of a set of data. I’ve got my head in the oven, and my ass in the fridge, so I’m OK.
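
The oven-and-fridge joke can be made literal. A minimal Python sketch with hypothetical temperature readings, where the mean (μ) describes a state that no observation in the population ever occupied:

```python
import statistics

# hypothetical readings: half taken in the oven, half in the fridge
oven   = [200.0, 210.0, 190.0, 205.0]   # degrees
fridge = [4.0, 2.0, 5.0, 3.0]
body   = oven + fridge

mu = statistics.mean(body)   # 102.375 -- 'OK on average'
# distance from the mean to the closest actual reading
nearest = min(abs(x - mu) for x in body)
```

A bimodal population has no member anywhere near μ; reporting the mean and a presumed Normal profile as ‘descriptive of the whole body’ erases the only structure the data actually had.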

Goodhart’s Law – when a specific measure becomes the sole or primary target, it ceases to be a good measure.

Google Goggles – warped or errant information cultivated through reliance on web searches as one’s resource or base of understanding. Vulnerability to web opinions wherein every street doubter can dismiss observations while posing as a pretend authority on science, every claim represented as being from science is immediately accepted, and every public observation is deemed a fable or a hoax.

Google Reaction Effect – the tendency to discount as unimportant or invalid, information that can be found readily online by using Internet search engines.

Historian’s Fallacy – occurs when one assumes that decision makers of the past viewed events from the same perspective, and with the same information, as those subsequently analyzing the decisions; therefore levels of bunk, belief and credulity can be used to dismiss past events involving historically credible persons, just the same as they are employed in modern discourse.

Hoax (Backfitted) – a video, recording or photo typically, in which a conventionally surprising event is documented, which appears to engage humans and/or animals regarding the curious nature of the event. However, the event is backfitted with an additional hoaxed entity which alters the subject or object nature of the amazing observation, by means of a false overlay or impression of a more astounding or entirely different nature. For instance, overlaying the fake-image of an angel over a blown-lightbulb video, and leveraging off the shock of bystanders to falsely imply that they were reacting as witnesses to the appearance of an angel.

Hoax (Fake) – hoax perpetrated to “Show how easy it is to fake this stuff.” A hoax in which the perpetrator discloses that the evidence is a fake; at some later time after they have gained the adrenaline rush of deception or when the revelation will increase their celebrity status to the greatest effect. The implication is that this hoax-and-reveal process is some sort of grand ethical action on their part.

Hoax (Straight) – anonymous or not anonymous hoax perpetrated to fool an audience of the credulous, entertain one’s self and obtain the adrenaline rush of magician-ship. The goal is to hold the deceived audience enraptured with the magician’s personal demonstrated skill, intellect, sophistry and implied authority and/or technical ability. Deception provides an adrenaline rush, especially when spun inside a cocoon of apparent correctness.

Hoax (Strawman) – anonymous hoax perpetrated to discredit. Typically outfitted with a hidden “key” – the obvious or semi-obvious flaw or Achilles Heel which reveals the event or contention to be merely a hoax; purposely set to be discovered at a later time, to discredit a specific targeted subject or the persons to whom the hoax relates.

Hate Hoax – a skit/joke or special kind of strawman hoax which celebrates oppression, or mocks people based upon race, religion, sexual orientation, nationality, political beliefs or their opposition to oppression – and which is indistinguishable from, and should be treated as, the real thing.

Hoax-Baiting – fraudulent data crafted by SSkeptics to stand as evidence cited by other SSkeptics in countering disfavored subjects. In an epidemic of Hoax-Baiting, a SSkeptic will even cite that their evidence has been manufactured “to show how easy it is to fake this stuff.” In many instances Trolls or other SSkeptics are paid to create hoax videos on YouTube for instance, by third parties seeking to sway public discourse in and around a disfavored subject, and quash any relevant serious material on the subject.

Hyperbolic Constraint – a form of special pleading wherein a ludicrous or cherry picked statistical constraint upon a set of data is used to lens that data in such a way as to make it appear to be more favorable to one’s a priori position. If they ended each game they played at the half, they would be undefeated. See also Goodhart’s Law.

idem existimatis – attempting to obscure the contributing error or risk effect of imprecise estimates or assumptions, through an overt focus on the precision or accuracy of other measured inputs inside a calculation, study or argument.

Ignoring as Accident – exceptions or even massive sets of data and observational counter-evidence to an enforced generalization are ignored as anecdotes or accidents.

ignoro eventum – institutionalized pseudoscience wherein a group ignores or fails to conduct follow-up study after the execution of a risk-bearing decision. The instance wherein a group declares the science behind a planned action which bears a risk relationship, dependency or precautionary principle to be settled, in advance of the decision/action being taken – then further fails to conduct any impact study or meta-analysis to confirm its presupposition as correct. This is not simply pseudoscience; in many circumstances it is a criminal action.

ingens vanitatum – knowing a great deal of irrelevance – possessing knowledge of every facet of a subject and all the latest breaking information therein, while in reality no real understanding is possessed; either the science itself consists of a supervacuous set of useless knowledge, or such a set is all that is known by the person making the claim to knowledge.

Klassing – when one offers payment of money or threatens the well being or career of a person in order to get them to recant, deny, keep silent on, or renounce a previously stated observation or finding. The work of a malicious fake investigator who seeks to completely destroy an idea being researched and to actively cast aspersion on a specific subject as part of a broader embargo policy. A high visibility reputation assassin hired to intimidate future witnesses or those who might consider conducting/supporting investigative work.

Law of Large Numbers Fallacy – the Law of Large Numbers does not apply ex ante, nor in any other case where there is not a large number domain to sample from in the first place. It fails in any instance where the wrong species of probability event is selected, where there does not exist a suitable measure of what is ‘large’ or ‘probable’, or where the event being described constitutes the single opportunity for the improbable event to have occurred. An ad hoc denial tactic which dismisses by presupposing that one holds statistical refutation evidence based on the plenitude of a sample domain. The rigor-less assumption that mass statistics will prove out any strange or unlikely observation one chooses to dismiss. It is a form of the MiHoDeAL Fallacy. See also Appeal to Plenitude/Appeal to Lotto.

Less is Better Bias – the tendency to prefer a smaller set of data to a larger set of data judged separately, but not jointly, so as to capitalize off the increased variability of the small set of data as it supports an extreme or conforming opinion.
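A short deterministic Python sketch, with made-up measurements, shows why a smaller data set is fertile ground for an extreme claim: tiny subsamples scatter far more widely than the full-sample mean.

```python
# A minimal sketch of the Less is Better Bias: means of small subsamples
# vary far more than the full-sample mean, leaving room to cherry-pick
# a tiny slice that supports an extreme or conforming opinion.
from itertools import combinations
from statistics import mean

population = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]  # hypothetical measurements
full_mean = mean(population)                   # 6.5 for the whole set

two_item_means = [mean(pair) for pair in combinations(population, 2)]

# Two-item slices range from 2.5 to 10.5 - an advocate judging sets
# separately can pick whichever extreme slice suits the argument.
assert min(two_item_means) == 2.5
assert max(two_item_means) == 10.5
```

Judged jointly against the full set, the extreme slices are obviously unrepresentative; judged separately, each small set can be sold as 'the data'.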

Lotto Ticket Shuffle Scam – a scam wherein two persons pool money and buy 10 lottery tickets, however only one of them goes to the store and buys the lotto tickets; subsequently reporting back that all his tickets won and all his partner’s tickets lost. Or a business wherein one partner owns all the profitable aspects of the business, and his partners own all the non-profitable ones. The same is done with ‘reliable’ data being produced only by an authorized club – all my data is fact and all your data is anecdote.

medium fallax (see Gaussian Blindness) – the tendency to regard or promote the mean (μ) or other easily derived or comprehensive statistic as constituting an equivalent descriptive of the whole body of a set of data or a closely related issue – assuming immunity from the burden of identifying a causal critical path or developing testable mechanism to prove out the contention made (critical elements of scientific theory); or the process of misleading with statistical indications as to the makeup and nature of a body of data. I’ve got my head in the oven, and my ass in the fridge, so I’m OK.

Methodical Doubt – doubt employed as a skulptur mechanism, to slice away disliked observations until one is left with the data set one favored before coming to the argument. It is the questionable method of denying that something exists or is true simply because it defies a certain a priori stacked provisional knowledge. This is nothing but a belief expressed in the negative, packaged in such a fashion as to exploit the knowledge that claims to denial are afforded immediate acceptance over claims to the affirmative. This is a religious game of manipulating the process of knowledge development into a whipsaw effect supporting a given conclusion set.

MiHoDeAL Bias – when one dismisses an observation prematurely as only being classifiable into a preassigned bucket of Misidentifications, Hoaxes, Delusions, Anecdote and Lies, in absence of having conducted any actual research into the subject.

missam singuli – a shortfall in scientific study wherein two factors are evaluated by non-equivalent statistical means. For instance, risk which is evaluated by individual measures, compared to benefit which is evaluated as a function of the whole – at the ignorance of risk as a whole. Conversely, risk being measured as an effect on the whole, while benefit is only evaluated in terms of how it benefits the individual or a single person.

Mission Directed Blindness – when one believes from being told, that they serve a greater cause, or that some necessary actions must be taken to avoid a specific disaster. Usually this renders the participant unable to handle evidence adeptly under Ockham’s Razor, once adopted.

Monkey Suit Fallacy – the dismissal of an entire field of data simply by abusing a position of authority, or by citing an authority, declaring the data all to be the result of hoaxes.

Nelsonian Knowledge – Nelsonian knowledge takes three forms:

1. a meticulous attentiveness to and absence of, that which one should ‘not know’,
2. an inferential method of avoiding such knowledge, and finally as well,
3. that misleading knowledge or activity which is used as a substitute in place of actual knowledge (organic untruth or disinformation).

The former (#1) is taken to actually be known on the part of a poseur. It is dishonest for a man deliberately to shut his eyes to principles/intelligence which he would prefer not to know. If he does so, he is taken to have actual knowledge of the facts to which he shut his eyes. Such knowledge has been described as ‘Nelsonian knowledge’, meaning knowledge which is attributed to a person as a consequence of his ‘willful blindness’ or (as American legal analysts describe it) ‘contrived ignorance’.

Newton’s Flameout –  One who thinks something can be settled merely by an experiment probably does not understand the question in the first place.

Normative Convergence Paradox – the observation or reality inside of systems theory and modeling that, even in the case wherein all optimal constraints, arrivals, feedback and functions inside a system are modeled to perfect accuracy – a decision or optimal outcome may not necessarily be producible.

Not Invented Here Bias – aversion to contact with or use of products, research, standards, or knowledge developed outside a group in which one is a member or with which one associates.

Observation Denial Special Pleading – a form of spurious data and observation dismissal wherein a proponent introduces favorable details or excludes unfavorable details regarding the observation, through alleging a need to apply additional considerations, without proper criticism or vetting of those considerations.

Observation vs Claim Blurring – the false practice of calling an observation of data a ‘claim’ on the observer’s part. This is done in an effort to subjugate such observations into the category of scientific claims, which therefore must be supported by sufficient data before they may be regarded by science. In fact an observation is simply that – a piece of evidence or a fact – and its false dismissal under the pretense of being deemed a ‘claim’ is a practice of deception and pseudoscience.

Occam’s Razor Fallacy – abuse of Ockham’s Razor (and its misspelling) in order to enact a process of sciencey-looking ignorance and to impose a favored idea. Can exist in four forms: transactional, existential, observational and utility blindness.

Transactional Occam’s Razor Fallacy (Appeal to Ignorance) – the false contention that a challenging construct, observation or paradigm must immediately be ‘explained.’ Sidestepping of the data aggregation, question development, intelligence and testing/replication steps of the scientific method and forcing a skip right to its artificially conclusive end (final peer review by ‘Occam’s Razor’).

Existential Occam’s Razor Fallacy (Appeal to Authority) – the false contention that the simplest or most probable explanation tends to be the scientifically correct one. Suffers from the weakness that myriad and complex underpinning assumptions, based upon scant predictive/suggestive study, provisional knowledge or Popper insufficient science, result in the condition of tendering the appearance of ‘simplicity.’

Observational Occam’s Razor Fallacy (Exclusion Bias) – through insisting that observations and data be falsely addressed as ‘claims’ needing immediate explanation, and through rejecting such a ‘claim’ (observation) based upon the idea that it introduces plurality (it is not simple), one effectively ensures that no observations will ever be recognized which serve to frame and reduce a competing alternative.  One will in effect perpetually prove only what they have assumed as true, regardless of the idea’s inherent risk. No competing idea can ever be formulated because outlier data and observations are continuously discarded immediately, one at a time by means of being deemed ‘extraordinary claims’.

Utility Blindness – when simplicity or parsimony are incorrectly applied as excuse to resist the development of a new scientific explanatory model, data or challenging observation set, when indeed the participant refuses to consider or examine the explanatory utility of any similar new model under consideration.

Pareidolia Bias –  a presumption that any challenging observation can only be solely the result of vague and random stimulus (often an image or sound) errantly perceived as significant by the observer.

Pharmaceutical Research Fraud – nine methods of attaining success results from pharmaceutical studies, which are borderline or fraudulent in nature. The first eight of these are developed by Richard Smith, editor of the British Medical Journal. The ninth is a converse way of applying these fraud accusations to filter out appeals for study under Ockham’s Razor, in situations where studies run counter to pharmaceutical/research revenue goals.

1.  Conduct a trial of your drug against a treatment known to be inferior.

2.  Trial your drug against too low a dose of a competitor drug.

3.  Conduct a trial of your drug against too high a dose of a competitor drug (making your drug seem less toxic).

4.  Conduct trials which are too small to show differences from competitor drugs.

5.  Use multiple endpoints in the trial and select for publication those that give favorable results.

6.  Do multicenter trials and select for publication results from centers that are favorable.

7.  Conduct subgroup analyses and select for publication those that are favorable.

8.  Present results that are most likely to impress – for example, reductions in relative risk rather than absolute risk.

9.  Conduct high inclusion bias statistical studies or no studies at all, and employ items 1 – 8 above to discredit any studies which indicate dissenting results.

Pleonasm – is the use of more words or parts of words than is necessary for clear expression, in an attempt to load language supporting, or add judgmental bias to a contention. A form of hyperbole.

phantasiae vectis – the principle outlining that, when a human condition is monitored publicly through the use of one statistic/factor, that statistic/factor will trend more favorable over time, without any actual underlying improvement in its relevant domain or condition – such singular focus often coming to the detriment of all other related and appropriate factors. Examples: unemployment figures not reflecting the true numbers out of work, electricity rates or inflation measures before key democratic elections, a focus on efficiency instead of effectiveness, crime being summed up by burglaries or gun deaths only, etc.

Policy Based Evidence Manipulation – when an Einfach or Höchste Mechanism is enforced socially by a governing body, or by a group enforcing false consensus and pluralistic ignorance, to such an extent that the researching, data collection, analytical, legislative or other presiding research group is incentivized to construct adjustments to the data collection entailed around the issue being enforced. Such adjustments, while often scientifically justifiable, introduce bias in two ways: 1) equally scientific counter-adjustments are not considered (error by omission), and 2) the magnitude of such adjustments is left up to the sole discretion of the data analysis group. This introduces a guaranteed bias into most information sets featuring a high number or dynamic set of contributing factors/influences, or a high number of measurement points.

Poll Skewing Factors – well known in industry, but ignored by ‘statisticians’ in highly contested or manipulated public polls:

I.  Means of Collection – bias-infusing polls use exclusively land line phones as their channel and means of respondent communication – a tactic which is notorious in excluding males, mobile professionals and the full time employed.

II.  Regional Bias Exploitation – call sampling is conducted in the New England states or in California, reflecting a bias towards tax oriented businesses, such as healthcare, insurance, government offices, and the corporations who work and contract with such agencies.

III.  Bradley Effect – people have a tendency to express opinions and intent which fit a social pressure model or keep themselves out of the ‘bad guy’ bucket when polled on polarizing issues. This tends to skew polls notoriously to the left.

IV. Crate Effect – impact of persons who purposely give the opposite response as to what they really think because of animosity towards the polling group (especially if non-free press) and/or their perceived history of bias, and/or animosity towards the circus around elections or the elections themselves. This false left leaning bias is generated most often inside groups who believe media outlets to be left-leaning and unfair.

V. Crate/Bradley Power Effect – the misleading impact of the Crate and Bradley Effects falsely convinces poll administrators of the power they hold to sway the opinion of ‘undecideds’ and misleads their sponsors into funding more and more polls which follow the same flawed protocols and traps.

VI. Streetlight Effect – is a type of observational bias that occurs when people only search for something where it is easiest to look.

VII.  Trial Heat – the overall pressure which is placed on respondent results based on the structure of or questions inside the poll itself (1 Pew Research)

a.  Leading preparatory questions – employing questions which are pejoratively framed or crafted to lead the poll respondent, in order to skew undecided voters, prior to asking the core question, and

b.  Iterative poisoning – running the same poll over and over again in the same community and visibly publishing the desired results – akin to poisoning the jury pool.

VIII.  Crazy-8 Factor – for any question you pose, there is always a naturally errant 8 percent quotient who do not understand, don’t care, purposely screw with you, or really think that gravity pulls up and not down. All that has to be done to effect a 2 – 4 percentage point skew in the data is to bias the questioning so that the half of the Crazy-8 which disfavors your desired result is filtered out through more precise or recursive questions – which are not replicated in the converse for the other half of the Crazy-8 which favors your desired result. The analytic which detects this poll manipulation is called a ‘forced-choice slack analysis’ – which examines the Crazy-8 and/or neutral respondents to see if they skew toward a bias in any particular direction.

IX.  Form of Core Question – asking different forms of THE CORE question than is implied by the poll, or different question by polling group. 1. Who do you favor, vs. 2. Who will you vote (will vote) for? vs. 3. Who do you think will win? (3 Pew Research)

X.   Follow Through Effect – only 35 to 55% of people who are polled, on average, will actually turn out to vote. (6 2016 General Election Turnout)

XI.  Oversampling – declaring a bias to exist a priori in the larger S pool from which an s sample is derived, then further crafting a targeted addition of population members from S to influence sample s in the opposite signal (direction and magnitude) from the anticipated bias. (1, 4 Pew Research)

XII. Influencing Effect – the tendency of a polling group to exaggerate polling results in favor of their preferred outcome during the influencing stage of polling, only to subsequently retract such collection/analysis tampering at the end of a polling period so that their final tally aligns more in sync with the actual outcome, or anticipated final results (fictus scientia – see at end of this article).

XIII.  Gaussian Parametrization – the error made by statistical analytical processors of polling data, in which they assume that humans reliably follow a Gaussian distribution. Therefore smaller sample sizes can be used reliably to parametrize the whole.
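Factor XIII can be demonstrated with a short Python sketch. The figures are hypothetical: fitting a Gaussian to a small sample of a skewed population makes the population's own tail look statistically impossible.

```python
# A minimal sketch of the Gaussian-parametrization error: a small sample
# of a heavily skewed (hypothetical) population is summarized as if it
# were Normal, and the fitted bell curve badly misstates the real tail.
from statistics import mean, stdev

population = [30, 32, 31, 29, 33, 30, 31, 28,
              32, 400, 30, 31, 29, 500, 30, 32]  # skewed: a few huge values
sample = population[:8]                          # a small sample misses the tail

mu, sigma = mean(sample), stdev(sample)

# Under a Gaussian fit, values beyond mu + 3*sigma should be ~0.1% of cases;
# in this skewed population they are over 12% of all observations.
tail = sum(1 for x in population if x > mu + 3 * sigma)
assert tail / len(population) > 0.10
```

The small sample never sees the heavy tail, so any Gaussian parametrized from it treats a routine population value as a many-sigma impossibility.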

Principle of Diminishing Percentage – over time, an increase in a cumulative amount which is the same each period will represent a lower and lower percentage increase over each successive period – tendering the appearance of a reduction in growth to those who do not understand basic statistics. A common press headline trick is ‘lower percentage growth’ used as a way of implying a ‘reduction’.
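The arithmetic behind this principle is easy to verify in Python with a made-up series:

```python
# A minimal sketch of the Principle of Diminishing Percentage: a value
# growing by the same fixed amount each period shows an ever-shrinking
# *percentage* growth rate, even though absolute growth never slows.
def pct_growth(series):
    return [(b - a) / a * 100 for a, b in zip(series, series[1:])]

values = [100 + 10 * n for n in range(6)]  # +10 each period: 100 .. 150
growth = pct_growth(values)                # 10.0, ~9.09, ~8.33, ~7.69, ~7.14

# Each period's percentage growth is lower than the last - hence the
# 'lower percentage growth' headline despite identical absolute gains.
assert all(later < earlier for earlier, later in zip(growth, growth[1:]))
```

The absolute gain is a constant +10 throughout; only the denominator grows, which is all the headline is really reporting.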

Procrustean Solution – the undesirable practice of tailoring data to fit its container or some other preconceived argument being promoted by a carrier of an agenda.

Procrustitute – a data scientist or researcher who tailors the data or analytical method so that the results are supportive of a preconceived argument being promoted under a motive of agency.

Psychologism – data or principles purported to be of scientific origin in which psychology plays a key or sole role in gathering, grounding or explaining. Suffers from the weakness that psychological principles enjoy a perch which can never be falsified, therefore they are at risk of standing as pseudoscience.

quod gratis asseritur, gratis negatur – that which can be declared without basis can be dismissed without basis. This phrase does not mean that the subject declaration is existentially incorrect, nor that the antithetical set therefore bears truth – rather, simply that one can refuse to accede to such a declaration without any particular reason or soundness in arguing. While this is a form of skepticism, the apothegm can be abused to mistakenly perform debunking. The clarifying action on the part of the skeptic is its usage as a refusal to accede, versus a negation of an idea (inverse negation fallacy). The latter is not warranted inside this principle of philosophy.

Recency Bias – the tendency to accept more recent information as being more credible or holding more gravitas in research.

Salami Slicing – the practice of artificially breaking a scientific study down into sub-components and publishing each sub-component as a separate study. If the paper involves consilience from studies derived from a number of disciplines, this might be acceptable.  However, when the studies are broken down simply to increase the publishing history of its authors or make a political position appear to be more scientific, through ‘a thousand studies support the idea that…’ styled articles, then this is a form of pseudoscience.

Scale Obfuscation – shooting high and shooting low. Any form of a priori assumption wherein a researcher presumes that larger or elemental sample populations are congruent with more study or better science. This as contrasted with an incremental scientific method under a condition of making open context field observations first, crafting the right (rather than presumed right) question, followed by study of multiple iterations of smaller, discrete, cohort and more focused populations, built then into larger field databases and study.

Bigger Data is Better Fallacy – the invalid assumption which researchers make when attempting to measure a purported phenomenon, that data extraction inside a very large source population, as the first and only step of scientific study, will serve to produce results which are conclusive or more scientific. The same presumption error can apply to meta-analysis, wherein such an analysis is conducted in a context of low/detached rather than informed knowledge sets, and will serve to dilute critical elements of signal and intelligence which could be used to elucidate the issue further.

Big is Science Error – the error of treating bigger sample sizes and study data as a way of bypassing the scientific method, while still tendering an affectation of science and gravitas. Also present any time a study cannot be replicated, and a call to consensus is made simply because it would be too difficult to replicate the study basis of the consensus.

Yule-Simpson Paradox – a trend which appears in different groups of data can be manipulated to disappear or reverse (see Effect Inversion) when those groups are combined.
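The reversal can be reproduced in a few lines of Python, here using well-known kidney-stone-style figures of the form (successes, trials):

```python
# A minimal sketch of the Yule-Simpson paradox: Treatment A wins inside
# both subgroups, yet appears to lose once the subgroups are combined,
# because A was disproportionately tried on the harder (large) cases.
groups = {
    "small": {"A": (81, 87),   "B": (234, 270)},   # (successes, trials)
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each subgroup, A outperforms B...
for g in groups.values():
    assert rate(*g["A"]) > rate(*g["B"])

# ...but pooled across subgroups, B appears to outperform A.
pooled = {t: (sum(groups[g][t][0] for g in groups),
              sum(groups[g][t][1] for g in groups))
          for t in ("A", "B")}
assert rate(*pooled["B"]) > rate(*pooled["A"])
```

The manipulation opportunity is exactly this: an advocate can report whichever level of aggregation supports the desired conclusion.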

Elemental Pleading – breaking down the testing of data or a claim into testing of its constituents, in order to remove or filter an effect which can only be derived from the combination of elements being claimed. For instance, to address the claim that doxycycline and EDTA reduce arterial plaque, testing was developed to measure the impact of each item individually, and when no effect was found, the combination was dismissed as well without study, and further/combination testing was deemed to be ‘pseudoscience’.

Scienter – is a legal term that refers to intent or knowledge of wrongdoing while in the act of executing it. An offending party then has knowledge of the ‘wrongness’ of their dealings, methods or data, but chooses to turn a blind eye to the issue, commensurate with executing or employing such dealings, methods or data. This is typically coupled with a failure to follow up on the impacts of the action taken – a failure of science called ignoro eventum.

Sea Lioning – is a type of Internet trolling which consists of bad-faith requests for evidence, or repeated questions, the purpose of which is not clarification or elucidation, but rather an attempt to derail a discussion, appeal to authority as if representing science, or to wear down the patience of one’s opponent. May involve invalid repetitive requests for proof which fall under a proof gaming fallacy and highlight the challenger’s lack of scientific literacy.

Semmelweis Reflex – the tendency to reject new evidence that contradicts one’s held paradigm.

Shared Information Bias – the tendency for group members to spend more time and energy discussing information that all members are already familiar with (i.e., shared information), and less time and energy discussing information that only some members are aware of (i.e., unshared information).

Shotgun Barn Fallacy – when one takes the available set of evidence and habitually draws favored conclusions from it. Firing a shotgun at the broadside of a barn and then drawing the bulls-eye around the pellet holes.

Shrouded Furtive Fallacy – extensive recitation of historical and peripheral information tenders the semblance of deep research and professionalism, yet is only posed to serve as a distraction inside an article whose crucial argument centers solely on accusations of malfeasance or lying on the part of its target opponents.

Simplicity Sell – making a pitch through the contention that something is easy. “Look, it’s simple, right?”

Social Skepticism

1. organized agency which is engineered by means of teaching weaponized fake skepticism to useful idiots.

2. a form of weaponized philosophy which masquerades as science, science enthusiasm or science communication. Social skepticism enforces specific conclusions and obfuscates competing ideas via a methodical and heavy-handed science embargo. It promotes charades of critical thought, self aggrandizement and is often chartered to defend corporate agendas; all while maintaining a high priority of falsely impugning eschewed individuals and topics. Its philosophies and conclusions are imposed through intimidation on the part of its cabal and cast of dark actors, and are enacted in lieu of and through bypassing actual scientific method. Failures with respect to science are the result of flawed or manipulated philosophy of science. When social control, change or conformance agents subject science to a state of being their lap-dog, serving specific agendas, such agents err in regard to the philosophical basis of science, skepticism. They are not bad scientists, rather bad philosophers, seeking a socialized goal. They are social skeptics.

Sowell’s Axiom – for every expert, there is an equal and opposite expert, but for every fact there is not necessarily an equal and opposite fact.

Sowert’s Law – Ignorance + Trivia = Fact

Streetlight Effect – is a type of observational bias that occurs when people only search for something where it is easiest to look.

Survivorship Bias – concentrating on the people or data that “survived” some process and inadvertently overlooking those that didn’t because of their lack of observability.
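A minimal Python sketch, using hypothetical fund returns, shows how dropping the unobservable failures flips a conclusion:

```python
# A minimal sketch of survivorship bias: averaging only the funds still
# visible at the end of the period flips the apparent performance of the
# original cohort from negative to positive, because failures vanished.
from statistics import mean

all_funds = [12.0, 8.0, -15.0, 5.0, -22.0, 9.0, -18.0, 11.0]  # annual returns (%)
survivors = [r for r in all_funds if r > 0]   # losing funds folded and dropped out

assert mean(all_funds) < 0 < mean(survivors)  # the sign flips entirely
```

The 'surviving' data set answers a different question than the one being asked: not 'how did funds perform?' but 'how did funds that happened to perform well perform?'.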

Tail to Body Transference – a statistical fallacy of relevant range in which one statistic is assumed to be fully descriptive across all conditions of its arrival range. An error wherein a descriptive suitable to frame a tail condition, is also applied to the main body of a distribution and assumed to be also descriptive therein. IQ measures which allow us to discriminate special needs persons, are not also useful in determining who should lead nations or corporations at the other end. Cholesterol figures which allow us to highlight who is at risk of arterial plaque do not apply to persons who have a familial history of higher cholesterol stats, etc. Mistakenly applying tail condition gradients to also be salient in the main body of the same descriptive data.

Taleb’s Contraposition – For real people, if something works in theory, but not in practice, it doesn’t work. For social skeptics and many academics, if something works in practice, but not in theory, it doesn’t exist.

Torfuscation – pseudoscience or obfuscation enacted through a Nelsonian knowledge masquerade of scientific protocol and study design. Inappropriate, manipulated or shallow study design crafted so as to obscure or avoid a targeted/disliked inference. A process, contended to be science, wherein one develops a conclusion through cataloging study artifice or observation noise as valid data. Invalid observations which can be parlayed into becoming evidence of absence or evidence of existence as one desires – by accepting only the appropriate hit or miss grouping as basis to support an a priori preference, and as well to avoid any further needed ex ante proof. A refined form of praedicate evidentia or utile absentia employed through using less rigorous or probative methods of study than are requisite under otherwise ethical science. Exploitation of study noise generated through first level ‘big data’ or agency-influenced ‘meta-synthesis’, as the ‘evidence’ that no further or deeper study is therefore warranted – and moreover that research of the subject entailed is now socially embargoed.

Trivia Fallacy – the rejection of an entire set of data by pointing out one questionable or disliked element inside that data.

utile absentia – a study which observes false absences of data, or creates artificial absence noise through improper study design, and then assumes such error to represent verified negative observations. A study containing field or set data in which there exists a risk that absences in measurement data will be caused by external factors which artificially serve to make the evidence absent, through failure of detection/collection/retention of that data. The absences of data, rather than being filtered out of analysis, are fallaciously presumed to constitute bona fide observations of negatives. This improper study design will often serve to produce an inversion effect (curative effect) in such a study's final results. Similar to torfuscation.
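
The inversion effect can be shown with a minimal Python sketch. All counts and detection rates here are hypothetical, chosen only to illustrate the mechanism: undetected events are booked as verified negatives rather than filtered out, so a genuinely harmful exposure measures as protective.

```python
# Hypothetical trial: 1000 subjects per arm. The treated arm truly
# suffers three times as many adverse events, but its events are
# much harder to detect (e.g. they occur outside the collection window).
n = 1000
true_events = {"treated": 300, "control": 100}  # actual adverse events
detected    = {"treated": 60,  "control": 80}   # events surviving detection

# Flawed design: the 240 undetected treated events and 20 undetected
# control events are recorded as bona fide negative observations.
measured_rate = {group: detected[group] / n for group in detected}

print(measured_rate)  # {'treated': 0.06, 'control': 0.08}

assert true_events["treated"] > true_events["control"]      # real harm
assert measured_rate["treated"] < measured_rate["control"]  # looks curative
```

A sound design would either correct for differential detection or exclude the at-risk absences from analysis, rather than counting them as observed negatives.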

via negativa (vs. via positiva) – a way of describing something by saying what it is not. For example, taking a finite concept of an attribute or object and employing it as a non-descriptor for God, ultimate reality, a human condition, etc. This bears less confidence in inference than does a via positiva attribute or description.

Virtue Telescope – employment of a theoretical virtue benefit projected inside a domain which is distant, slow moving, far into the future, diffuse or otherwise difficult to measure in terms of both potential and resulting impact, as exculpatory immunity for commission of an immoral act which is close by, obvious, defined and not as difficult to measure. Similar to but converse of an anachronistic fallacy, or judging distant events based on current norms.

Von Restorff Effect – the bias inducing principle that an item that sticks out is more likely to be remembered than other items.

Whipping Horse – a martyr issue, study, chart, graphic or event which is cited as exemplary in condemning a targeted group or thought set – which is over-employed, or used in a domain of such ample equivocation, equivocal slack, straw man or other ambiguity, that it simply becomes a symbol and ironically maintains no real salience to the argument at hand.

Wolfinger’s Inductive Paradox – an ‘anecdote’ to the modus praesens (an observation or case which supports an objective presence of a state or object) constitutes data, while an anecdote to the modus absens (an observation supporting an appeal-to-ignorance claim that a state or object does not exist) is merely an anecdote. One’s refusal to collect or document the former does not constitute skepticism. Relates to Hempel’s Paradox.

Wolfinger’s Misquote – you may have heard the phrase ‘the plural of anecdote is not data’. It turns out that this is a misquote. The original aphorism, by the political scientist Ray Wolfinger, was just the opposite: ‘The plural of anecdote is data’. The only thing worse than the surrendered value (as opposed to collected value, in science) of an anecdote is the incurred bias of ignoring anecdotes altogether.  This is a method of pseudoscience.

Yule-Simpson Paradox – a trend which appears in different groups of data can be manipulated to disappear or reverse (see Effect Inversion) when those groups are combined.
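
A small Python sketch makes the reversal concrete. The recovery counts below are purely illustrative (a hypothetical treatment split across mild and severe cases): the treatment wins inside every subgroup, yet loses once the subgroups are pooled, because it was disproportionately given to the harder cases.

```python
# (recovered, total) per arm, within each severity subgroup.
# Counts are hypothetical, chosen to exhibit the paradox.
groups = {
    "mild":   {"treated": (81, 87),   "control": (234, 270)},
    "severe": {"treated": (192, 263), "control": (55, 80)},
}

def rate(recovered, total):
    return recovered / total

# Within each subgroup, the treated recovery rate beats control...
for name, g in groups.items():
    t, c = rate(*g["treated"]), rate(*g["control"])
    assert t > c
    print(f"{name}: treated {t:.1%} vs control {c:.1%}")

# ...but pooled across subgroups the ordering reverses.
t_rec = sum(g["treated"][0] for g in groups.values())
t_tot = sum(g["treated"][1] for g in groups.values())
c_rec = sum(g["control"][0] for g in groups.values())
c_tot = sum(g["control"][1] for g in groups.values())
print(f"pooled: treated {t_rec/t_tot:.1%} vs control {c_rec/c_tot:.1%}")
assert t_rec / t_tot < c_rec / c_tot  # trend reverses when combined
```

The manipulation opportunity lies in choosing which view to report: the pooled figure and the stratified figures each tell the opposite story from the same data.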

Zeigarnik Effect – states that people remember uncompleted or interrupted tasks better than completed ones. This imparts a bias toward refuting arguments or ideas which are unfinished.

epoché vanguards gnosis

How to MLA cite this blog post:
  The Ethical Skeptic, “The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data”, The Ethical Skeptic, WordPress, 17 Feb 2018, Web.

September 23, 2009 - Posted by | Argument Fallacies, Ethical Skepticism
