The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data

The following is The Ethical Skeptic’s list, useful in spotting both formal and informal logical fallacies, cognitive biases, statistical breaches and styles of crooked thinking on the part of those in the Social Skepticism movement. It is categorized by employment groupings so that it can function as a context-appropriate resource in a critical review of an essay, imperious diatribe or publication by a thought-enforcing Social Skeptic. To assist this, we have composed the list inside an intuitive taxonomy of ten contextual categories of mischaracterization/misrepresentation:

Opponents, Locution or Semantics, Evidence or Data, Bias or Method, Science, Argument, Assumption, Groups, Self and Authority

Tree of Knowledge Obfuscation – The Ethical Skeptic

Misrepresentation of Evidence or Data

Amateur Confidence Fallacy – the act of substituting simple probability math for confidence-interval analysis in order to manipulate outcomes, either because one does not understand the difference, or because the two look like the same thing to a layman, in instances where only confidence intervals can be correctly applied under the scientific method.
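The distinction can be sketched numerically. In the minimal Python example below (the function name and the 7-of-10 figures are purely hypothetical), two samples share the same point probability, yet only the confidence interval reveals how little the small sample actually establishes:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate plus normal-approximation 95% confidence interval."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, (p - z * se, p + z * se)

# A naive analyst reports only the point estimate; the interval tells the story.
p_small, ci_small = proportion_ci(7, 10)      # 7 of 10 trials
p_large, ci_large = proportion_ci(700, 1000)  # 700 of 1000 trials
print(p_small, ci_small)  # same 0.70 point estimate...
print(p_large, ci_large)  # ...but a far tighter interval
```

The normal approximation is the simplest interval choice here; the point is only that identical "probabilities" can carry wildly different evidential weight.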

Amplanecdote – something which has occurred a scientifically significant number of times, or many times over, yet which is ignored through agenda or by means of a baseless claim to categorization as anecdote.

Anchoring Bias – when a person is over-reliant on the first piece of information they have encountered, or begins a branch and bound search or negotiation at a starting point which is arbitrary, yet which causes them to cede credence to that range from then on.

Anecdote Data Skulpting (Cherry Sorting) – when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data. Based upon the a priori and often subjective claim that the observation was ‘not reliable’. Ignores the probative value of the observation and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative, and can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).

Antiquing Fallacy – the dismissal of an entire field of data by showing its false, hoax based or dubious past inside a set of well known anecdotal cases. Also the instance where a thesis is deemed incorrect because it was commonly held when something else, clearly false, was also commonly held.

Apophenia Bias – the immediate dismissal of data as being manufactured, mis-analyzed, or reflecting random patterns which are ascribed empirical meaning, without having conducted the research into the data, nor possessing the background in the discipline involved in order to be able to make such a claim.

Appeal to Apati Fallacy – ‘Appeal to the hoax’ fallacy of presumption and irrelevance.  The attempt to impugn a subject by citing or fabricating a history or incident involving a hoax of one or more of the subject’s contentions.  The fallacy resides in the fact that if it exists, there is porn of it; and likewise, if it exists or not, there is a hoax of it.

Arrival Bias – the tendency to tender more credibility or gravitas to information which is hot off the press or has just been introduced.

Ascertainment Bias – a form of inclusion or exclusion criteria error where the mechanism of sampling, specimen selection, data screening or sub-population selection is inherently flawed, is developed based upon an inadequate or sophomoric view of the observations base or produces skewed representations of actual conditions.

Associative Condemnation – the attempt to link controversial subject A with personally disliked subject B, in an effort to impute falsehood to subject A through the association of some idea or keyword common to both topics.  Guilt through association and lumping all subjects into one subjective category.  This typically will involve a context shift or definition expansion in a key word as part of the justification.

Availability Error – to adjudicate the answer to questions according to the examples that come most easily to mind, rather than a wide or representative sample of salient evidence.

Availability Heuristic – to adjudicate the answer to a question according to only the information which is available at that time.

Bandwagon Blindness – when a group fails to see their own mistakes or errors inside a hot issue, usually obscured by the common spread of propaganda, and therefore must view any critique, error or data contradiction as being the fault of opposition or outside parties.

Base Rate Fallacy – an error in thinking where, if presented with related base rate information (i.e. generic, general information) and specific information (information only pertaining to a certain anecdotal case), the mind tends to ignore the former and focus on the latter in characterizing the whole set of relevant data regarding a subject.
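A worked example makes the error concrete. The sketch below (all rates hypothetical) applies Bayes' rule to a 1-in-1000 base rate combined with a seemingly impressive test; focusing only on the specific information, one would guess a positive result is near-certain proof:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1-in-1000 base rate; test with 99% sensitivity and a 5% false-positive rate
p = posterior(0.001, 0.99, 0.05)
print(round(p, 3))  # ≈ 0.019: the specific "positive" result means very little
```

Despite the 99% figure, fewer than 2 in 100 positives reflect the condition, because the generic base rate dominates the calculation.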

Bergen’s Principle – 1.  Reward always outweighs miscalculated or ignored risk. 2. Misevaluated risk is never cumulative.

Bias Error – when using bias, fallacy or judgement error proclamations to condemn opinions of those who disagree with you, solely to push political, social or economic goals inside decision sets which are not clearly addressed by empirical or scientific backing.

Bias Inflation – a baseless claim of discrediting an observation set through identifying a plausible, relevant or even salient observer bias, which might have or did contribute to the observational profile, yet which can at most only explain a small or negligible portion of the observation base itself.

Big is Science Error – the pretense that bigger sample sizes and study data are a way of bypassing the scientific method, while still tendering an affectation of science and gravitas. Also any instance where a study cannot be replicated, and a call to consensus is made simply because it would be too difficult to replicate the study basis of the consensus.

Bigger Data is Better Fallacy – the invalid assumption which researchers make when attempting to measure a purported phenomenon, that data extraction inside a very large source population, as the first and only step of scientific study, will serve to produce results which are conclusive or more scientific. The same presumption error can apply to meta-analysis, wherein such an analysis is conducted in a context of low/detached, rather than informed, knowledge sets, and will serve to dilute critical elements of signal and intelligence which could be used to elucidate the issue further.

Black Swan Fallacy – a claim that the highly improbable will never happen, or that one has a grasp of the full set of the highly improbable, or that one has ascertained the likelihood of a highly improbable event through statistics – when there is not a precedent or knowledge base which allows for any objective epistemic basis for such a calculation. Any form of lack of knowledge which dismisses any highly improbable event from happening, based upon a faulty estimation of the likelihood of a single unlikely event not happening.

Blind-spot Bias – when one fails to recognize, attempt to recognize, or be circumspect about their own biases inside an argument or data set. To notice a bias, fallacy or error in others more readily than in one’s self.

brevis lapsus (‘Word Salad’ Fallacy) – the inability to understand technical or precise writing, mistaking it for constituting a pleonasm. This in favor of simplistic writing which is, either with or without the intent of the opponent, subsequently rendered vulnerable to equivocation. An accusation made when a dilettante person fails to understand philosophical or technical writing, wherein the base argument or its requisite vocabulary reside completely over the head of the individual who started the argument to begin with.

Chekhov’s Gun – is a dramatic principle that states that every element in a fictional story must be necessary to support the plot or moral, and irrelevant elements should be excluded. It is used as part of a method of detecting lies and propaganda. In a fictional story, every detail or datum is supportive of, and accounted for, as to its backing of the primary agenda/idea. A story teller will account in advance for every loose end or ugly underbelly of his moral message, all bundled up and explained nicely – no exception or tail conditions will be acknowledged. They are not part of the story.

Cladistic Dismantling (Deconstruction) – the breaking down of a malady or observed set of phenomena into numerous ‘distinction without a difference’ subsets, in an attempt to disguise or cloud with noise the overall trend in numbers involved in the malady or the phenomenon’s overall impact.

Clarke’s Second Law – the only way of discovering the limits of the possible is to venture a little way past them into the impossible.

Click Bait (or Headline) Skepticism – a position backed by articles or studies in which the headline appears to support the position contended, but which in reality contend something completely or antithetically different. A skeptical understanding which is developed through sound bites and by never actually reading the abstract, method or content of cited articles or studies.

Complexifuscation – the introduction of similar signals, inputs or measures, alongside a control measure or an experimental measure, in an attempt to create a ‘cloud of confusion or distraction’ around the ability to effect observation, control or measure of a targeted set of data. Preemption of a phenomenon with in-advance flurries of fake hoaxes, in order to obscure the impact of, or jade the attention span of a target audience around, a genuinely feared phenomenon.

Confirmation Bias – the tendency to immediately accept propaganda published by the opponent’s favored group, and to reject observations, data or ideas which do not fit the opponent’s favored models.

Continuum Fallacy – erroneous rejection of a vague claim or loosely defined data set simply because it is not as precise as one would like it to be.

Correlation Dismissal Error – when employing the ‘correlation does not prove causality’ quip to terminally dismiss an observed correlation, when the observation is being used to underpin a construct or argument possessing consilience, is seeking plurality, constitutes direct fingerprint evidence and/or is not being touted as final conclusive proof in and of itself.

Cunningham’s Law – an approach to Akratic Trolling which states that the best way to get the right answer on the Internet is not to ask a question, it’s to post the wrong answer.

Effect Inversion – one sign of a study which has been tampered with through inclusion and exclusion criteria, in an effort to dampen below significance or eliminate an undesired signal, is the circumstance where an inverse or opposite relationship effect is observed in the data when the inverse question is asked concerning the same set of data. If a retrospective cohort study purportedly shows no effect relationship between a candidate cause and a malady – there should also be no relationship between the candidate cause and the absence of the malady as well (if the two are indeed epidemiologically unrelated). The presence of a reverse or apparently ‘curative’ influence of the candidate cause being evaluated in the data may signal the impact of data manipulation.
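The check described above can be sketched with a 2×2 cohort table. In the hypothetical counts below, a true null yields relative risks near 1.0 in both directions, while tampered counts produce the telltale ‘curative’ signal:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk in the exposed group divided by risk in the unexposed group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical cohort counts: under a true null, both checks sit at ~1.0
rr_malady = relative_risk(50, 1000, 50, 1000)    # candidate cause vs malady
rr_absence = relative_risk(950, 1000, 950, 1000) # candidate cause vs absence of malady
print(rr_malady, rr_absence)  # 1.0 1.0

# Manipulated counts: the exposed group now shows fewer cases than chance allows
rr_tampered = relative_risk(30, 1000, 50, 1000)
print(rr_tampered)  # ~0.6, an apparently "protective" effect warranting scrutiny
```

A relative risk well below 1.0 on the malady side, appearing where no plausible protective mechanism exists, is the inversion signature the entry describes.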

Elemental Pleading – breaking down the testing of data or a claim into testing of its constituents, in order to remove or filter an effect which can only be derived from the combination of elements being claimed. For instance, to address the claim that doxycycline and EDTA reduce arterial plaque, testing was developed to measure the impact of each item individually, and when no effect was found, the combination was dismissed as well without study, and further/combination testing was deemed to be ‘pseudoscience’.

Epistemological Psychic Study – when a statistical trend study shows a trend in data, but the only data elements supporting the contended trend all exist in the future and/or none of the past data supports such a trend; their being included simply to serve an appearance of statistical rigor and significance.  Usually applied to support a political cause, and in an arena where the observable final outcome will be well beyond the proponent’s lifespan or political career.

epoché (ἐποχή, “suspension”) – a suspension of disposition. The suspended state of judgement exercised by a disciplined and objective mind. A state of neutrality which eschews the exercise of religious, biased rational or critical, risky provisional and dogmatic dispositions when encountering new observations, ideas and data. It is the step of first being skeptical of self, before addressing challenging or new phenomena. It is underpinned by both an examination of the disciplines of true knowledge development (epignosis) and the repository of vetted and accepted knowledge (gnosis). If someone relates a challenging observation to you, you suspend disposition, and catalog it. If you toss it out based upon a fallacy or trivial flaw – then you are a cynic, not a skeptic.

Essential Schema Filtering Error – when one uses pop psychology studies such as the 1980s Loftus Study to dismiss memories and observations which they do not like. By citing that memories and eyewitness testimony are unreliable forms of evidence, pretend skeptics present an illusion of confidence in dismissing disliked eyewitness essential schema data, when neither the Federal Rules of Evidence, science nor even the cited studies make such a claim which allows the dismissal of eyewitness testimony at all.

Existential Fallacy of Data – the implication or contention that there is an absence of observation or data supporting an idea, when in fact no observational study at all, or of any serious import has been conducted by science on the topic at hand.

Experimenter’s Bias – the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.

Fabutistic – a statistic which is speciously cited from a study or set of skeptical literature, around which the recitation user misrepresents its employment context or possesses scant idea as to what it means, how it was derived, or what it is saying or is indeed not saying. Includes the instance wherein a cited statistic is employed in a fashion wherein only the numerals are correct (i.e. “97%”) and the context of employment is extrapolated, hyperbole or is completely incorrect.

¡fact! – lying through facts. Data or a datum which is submitted in order to intimidate those in a discussion, is not really understood by the claimant, or rather which is made up, is not salient or relevant to the question being addressed, or is non-sequitur inside the argument being made. The relating of a fact which might be true, does not therefore mean that one is relating truth.

Factcidental – facts which are correct, but only by trivia or accident or which possess no real sequitur relationship with the discussion/issue context or logical calculus.

Fallacy of Relative Privation – dismissing an avenue of research due to its purported waste of scientists’ time and to the existence of more important, but unrelated, problems in the world which require priority research.

Falling Between the Cracks – data which should have been brought into an argument, but which was neglected because each of the responsible members in a research or petitioning group assumed that such data was the responsibility of the other parties.

Filbert’s Law – or the law of diminishing information return. Increasing the amount of data brought into an analysis does not necessarily serve to improve salience, precision or accuracy of the analysis, yet comes at the cost of stacking risk in veracity. The further away one gets from direct observation, and the more one gets into ‘the data’ only, the higher is the stack of chairs upon which one stands. To find a result use a small sample population, to hide a result use a large one.
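The closing maxim can be demonstrated by simulation. The sketch below (sample sizes, threshold and seed are all arbitrary) draws repeated samples from a process with no real effect; the small samples "find" a strong result far more often than the large ones do:

```python
import random

random.seed(42)  # fixed seed so the demonstration is repeatable

def extreme_rate(sample_size, trials=2000, true_p=0.5, threshold=0.7):
    """Fraction of repeated samples whose observed proportion reaches the
    threshold, even though the true underlying rate is only 0.5 (no effect)."""
    hits = 0
    for _ in range(trials):
        successes = sum(random.random() < true_p for _ in range(sample_size))
        if successes / sample_size >= threshold:
            hits += 1
    return hits / trials

small = extreme_rate(10)    # small samples cross the threshold often
large = extreme_rate(1000)  # large samples almost never do
print(small, large)
```

With n = 10, roughly one sample in six reaches a 70% "result" by chance alone; with n = 1000 the same outcome is essentially impossible, which is exactly the find/hide asymmetry the law names.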

Five Percent Fake – when a methodical cynic cites the existence of the “5% of unexplained cases” myth in an effort to appear objective. This in an effort to avoid the perception of not possessing a series of outlier data points (the 5%), the absence of which would tender the appearance of cynical bias.

Forer Effect – when an individual tenders estimations of high accuracy to descriptions of their personality or life circumstance that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range or a common subset of people.

Forer Error – ascribing accurate forecasting to be merely the outcome of Forer Effect, in absence of adequate knowledge, personal research or statistical rigor regarding the particular circumstance, which could substantiate such a claim. The principle being that an accurate forecast establishes plurality and both parties now hold a burden of proof if a claim is tendered.

Furtive Confidence Fallacy – the refusal to estimate, grasp or apply principles of statistical confidence to collected data and observed arrival distributions, as a means of falsely bolstering the perceived validity of, or avoiding the signalling of validity in, an observation set or body of data. The act of prematurely declaring or doggedly denying a multiplicity of anecdote to be equal to data.

Furtive Fallacy – undesired data and observations are asserted to have been caused by the malfeasance of researchers or laymen.

Gaslighting – a form of manipulation that seeks to sow seeds of doubt in a targeted individual or in members of a targeted group, hoping to make them question their own memory, perception, and/or sanity. Using persistent denial, disinformation, misdirection, contradiction, manipulated statistics and organic untruths, in an attempt to destabilize the target and delegitimize the target’s beliefs, understanding or confidence in self.

Google Goggles – warped or errant information cultivated through reliance on web searches as one’s resource or base of understanding. Vulnerability to web opinions where every street doubter can dismiss observations as a pretend authority on science, every claim represented as being by science is immediately accepted and every public observation is deemed a fable or a hoax.

Google Reaction Effect – the tendency to discount as unimportant or invalid, information that can be found readily online by using Internet search engines.

Historian’s Fallacy – occurs when one assumes that decision makers of the past viewed events from the same perspective and having the same information as those subsequently analyzing the decisions; therefore the levels of bunk, belief and credulity can be used to dismiss past events with historically credible persons, just the same as they are employed in modern discourse.

Hoax (Fake) – hoax perpetrated to “Show how easy it is to fake this stuff.” A hoax in which the perpetrator discloses that the evidence is a fake; at some later time after they have gained the adrenaline rush of deception or when the revelation will increase their celebrity status to the greatest effect. The implication is that this hoax-and-reveal process is some sort of grand ethical action on their part.

Hoax (Straight) – anonymous or not anonymous hoax perpetrated to fool an audience of the credulous, entertain one’s self and obtain the adrenaline rush of magician-ship. The goal is to hold the deceived audience enraptured with the magician’s personal demonstrated skill, intellect, sophistry and implied authority and/or technical ability. Deception provides an adrenaline rush, especially when spun inside a cocoon of apparent correctness.

Hoax (Strawman) – anonymous hoax perpetrated to discredit. Typically outfitted with a hidden “key” – the obvious or semi-obvious flaw or Achilles Heel which reveals the event or contention to be merely a hoax; purposely set to be discovered at a later time, to discredit a specific targeted subject or persons to whom the hoax relates.

Hoax-Baiting – fraudulent data crafted by SSkeptics to stand as evidence cited by other SSkeptics in countering disfavored subjects. In an epidemic of Hoax-Baiting, a SSkeptic will even cite that their evidence has been manufactured “to show how easy it is to fake this stuff.” In many instances Trolls or other SSkeptics are paid to create hoax videos on YouTube for instance, by third parties seeking to sway public discourse in and around a disfavored subject, and quash any relevant serious material on the subject.

idem existimatis – attempting to obscure the contributing error or risk effect of imprecise estimates or assumptions, through an overt focus on the precision or accuracy of other measures or inputs inside a calculation, study or argument.

Ignoring as Accident – exceptions or even massive sets of data and observational counter-evidence to an enforced generalization are ignored as anecdotes or accidents.

ignoro eventum – institutionalized pseudoscience wherein a group ignores or fails to conduct follow-up study after the execution of a risk bearing decision. The instance wherein a group declares the science behind a planned action which bears a risk relationship, dependency or precautionary principle, to be settled, in advance of this decision/action being taken. Further then failing to conduct any impact study or meta-analysis to confirm their presupposition as correct. This is not simply pseudoscience, rather it is a criminal action in many circumstances.

ingens vanitatum – knowing a great deal of irrelevance – knowledge of every facet of a subject and all the latest breaking information therein, but the reality being that no real understanding is possessed or a supervacuous set of useless knowledge is either the reality inside the science, or all that is known by the person making the claim to knowledge.

Klassing – when one offers payment of money or threatens the well being or career of a person in order to get them to recant, deny, keep silent on, or renounce a previously stated observation or finding. The work of a malicious fake investigator who seeks to completely destroy an idea being researched and to actively cast aspersion on a specific subject as part of a broader embargo policy. A high visibility reputation assassin hired to intimidate future witnesses or those who might consider conducting/supporting investigative work.

Law of Large Numbers Fallacy – a denial tactic which dismisses by presupposing the idea that one holds statistical refutation evidence. The rigor-less assumption that mass statistics will prove out any strange or unlikely observation one chooses to dismiss.  It is a form of the MiHoDeAL Fallacy. See also Appeal to Lotto.

Less is Better Bias – the tendency to prefer a smaller set of data to a larger set of data judged separately, but not jointly, so as to capitalize off the increased variability of the small set of data as it supports an extreme or conforming opinion.

Lotto Ticket Shuffle Scam – a scam wherein two persons pool money and buy 10 lottery tickets, however only one of them goes to the store and buys the lotto tickets; subsequently reporting back that all his tickets won and all his partner’s tickets lost. Or a business wherein one partner owns all the profitable aspects of the business, and his partners own all the non-profitable ones. The same is done with ‘reliable’ data being produced only by an authorized club – all my data is fact and all your data is anecdote.

medium fallax – the tendency to regard or promote the mean (μ) or other easily derived or comprehensive statistic as constituting an equivalent descriptive of the whole body of a set of data or a closely related issue – assuming immunity from the burden of identifying a causal critical path or developing testable mechanism to prove out the contention made (critical elements of scientific theory); or the process of misleading with statistical indications as to the makeup and nature of a body of data. I’ve got my head in the oven, and my ass in the fridge, so I’m OK.
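The oven-and-fridge quip translates directly into code. A minimal sketch (all temperatures hypothetical) shows the mean looking comfortable while the dispersion reveals that no actual reading is anywhere near it:

```python
from statistics import mean, stdev

# Three "fridge" readings and three "oven" readings, in Fahrenheit
temperatures_f = [40, 40, 40, 140, 140, 140]

print(mean(temperatures_f))   # the comfortable-looking average: 90
print(stdev(temperatures_f))  # ≈ 54.8: nothing in the data is near 90
```

Any summary statistic offered without an accompanying measure of spread, shape or causal mechanism invites exactly this deception.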

Methodical Doubt – doubt employed as a skulptur mechanism, to slice away disliked observations until one is left with the data set they favored before coming to an argument. It is the questionable method of denying that something exists or is true simply because it defies a certain a priori stacked provisional knowledge. This is nothing but a belief expressed in the negative, packaged in such a fashion as to exploit the knowledge that claims to denial are afforded immediate acceptance over claims to the affirmative. This is a religious game of manipulating the process of knowledge development into a whipsaw effect supporting a given conclusion set.

MiHoDeAL Bias – when one dismisses an observation prematurely as only being classifiable into a preassigned bucket of Misidentifications, Hoaxes, Delusions, Anecdote and Lies, in absence of having conducted any actual research into the subject.

Mission Directed Blindness – when one believes from being told, that they serve a greater cause, or that some necessary actions must be taken to avoid a specific disaster. Usually this renders the participant unable to handle evidence adeptly under Ockham’s Razor, once adopted.

Monkey Suit Fallacy – the dismissal of an entire field of data simply by abusing a position of authority, or citing an authority, declaring the data all to be the result of hoaxes.

Not Invented Here Bias – aversion to contact with or use of products, research, standards, or knowledge developed outside a group in which one is a member or with which one associates.

Observation Denial Special Pleading – a form of spurious data and observation dismissal where a proponent introduces favorable details or excludes unfavorable details regarding the observation, through alleging a need to apply additional considerations, without proper criticism or vetting of these considerations.

Observation vs Claim Blurring – the false practice of calling an observation of data, a ‘claim’ on the observers’ part.  This in an effort to subjugate such observations into the category of constituting scientific claims which therefore must be supported by sufficient data before they may be regarded by science.  In fact an observation is simply that, a piece of evidence or a fact, and its false dismissal under the pretense of being deemed a ‘claim’ is a practice of deception and pseudoscience.

Pareidolia Bias –  a presumption that any challenging observation can only be solely the result of vague and random stimulus (often an image or sound) errantly perceived as significant by the observer.

Pharmaceutical Research Fraud – nine methods of attaining success results from pharmaceutical studies, which are borderline or fraudulent in nature. The first eight of these are developed by Richard Smith, editor of the British Medical Journal. The ninth is a converse way of applying these fraud accusations to filter out appeals for study under Ockham’s Razor, in situations where studies run counter to pharmaceutical/research revenue goals.

1.  Conduct a trial of your drug against a treatment known to be inferior.

2.  Trial your drug against too low of a dose of a competitor drug.

3.  Conduct a trial of your drug against too high of a dose of a competitor drug (making your drug seem less toxic).

4.  Conduct trials which are too small to show differences from competitor drugs.

5.  Use multiple endpoints in the trial and select for publication those that give favorable results.

6.  Do multicenter trials and select for publication results from centers that are favorable.

7.  Conduct subgroup analyses and select for publication those that are favorable.

8.  Present results that are most likely to impress – for example, reductions in relative risk rather than absolute risk.

9.  Conduct high inclusion bias statistical studies or no studies at all, and employ items 1 – 8 above to discredit any studies which indicate dissenting results.

Pleonasm – is the use of more words or parts of words than is necessary for clear expression, in an attempt to load language supporting, or add judgmental bias to a contention. A form of hyperbole.

phantasiae vectis – the principle outlining that, when a human condition is monitored publicly through the use of one statistic, that statistic will trend more favorable over time, without any real underlying improvement in its related human condition. Unemployment not reflecting true numbers out of work, electricity rates or inflation measures before key democratic elections, crime being summed up by burglaries or gun deaths only, etc.

Policy Based Evidence Manipulation – when an Einfach or Höchste Mechanism is enforced socially by a governing body or a group enforcing false consensus and pluralistic ignorance to such an extent that the researching, data collection, analytical, legislative or other presiding research group is incentivized to construct objective adjustments to the data collection entailed around the issue being enforced.  Such adjustments, while often scientifically justifiable, introduce bias in two ways: 1) equally scientific counter adjustments are not considered (error by omission), and 2) the magnitude of such adjustments are left up to the sole discretion of the data analysis group. This introduces a guaranteed bias into most information sets featuring a high number or dynamic set of contributing factors/influences or a high number of measurement points.

Poll Skewing Factors – well known in industry, but ignored by ‘statisticians’ in highly contested or manipulated public polls:

I.  Means of Collection – bias-infusing polls use exclusively land line phones as their channel and means of respondent communication – a tactic which is notorious in excluding males, mobile professionals and the full time employed.

II.  Regional Bias Exploitation – call sampling is conducted in the New England states or in California, reflecting a bias towards tax oriented businesses, such as healthcare, insurance, government offices, and the corporations who work and contract with such agencies.

III.  Bradley Effect – people have a tendency to express opinions and intent which fit a social pressure model or keep themselves out of the ‘bad guy’ bucket when polled on polarizing issues. This tends to skew polls notoriously to the left.

IV. Crate Effect – impact of persons who purposely give the opposite response as to what they really think because of animosity towards the polling group (especially if non-free press) and/or their perceived history of bias, and/or animosity towards the circus around elections or the elections themselves. This false left leaning bias is generated most often inside groups who believe media outlets to be left-leaning and unfair.

V. Crate/Bradley Power Effect – the misleading impact of the Crate and Bradley Effects falsely convinces poll administrators of the power they hold to sway the opinion of ‘undecideds’ and misleads their sponsors into funding more and more polls which follow the same flawed protocols and traps.

VI. Streetlight Effect – is a type of observational bias that occurs when people only search for something where it is easiest to look.

VII.  Trial Heat – the overall pressure which is placed on respondent results based on the structure of or questions inside the poll itself (1 Pew Research)

a.  Leading preparatory questions – employing questions which are pejoratively framed or crafted to lead the poll respondent, in order to skew undecided voters, prior to asking the core question, and

b.  Iterative poisoning – running the same poll over and over again in the same community and visibly publishing the desired results – akin to poisoning the jury pool.

VIII.   Form of Core Question – asking a different form of THE CORE question than is implied by the poll, or a different question by polling group: 1. Who do you favor? vs. 2. Who will you vote (will vote) for? vs. 3. Who do you think will win? (3 Pew Research)

IX.   Follow Through Effect – only 35 to 55% of people who are polled, on average, will actually turn out to vote. (6 2016 General Election Turnout)

X.  Oversampling – declaring a bias to exist in a population a priori, in the larger S pool from which an s sample is derived. Then further crafting a targeted addition of population members from S, to influence sample s in the opposite signal (direction and magnitude) from the anticipated bias. (1, 4 Pew Research)

XI. Influencing Effect – the tendency of a polling group to exaggerate polling results in favor of their preferred outcome during the influencing stage of polling, only to subsequently retract such collection/analysis tampering at the end of a polling period so that their final tally aligns more in sync with the actual outcome, or anticipated final results (fictus scientia – see at end of this article)
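Factor I above, the means-of-collection bias, can be reduced to a minimal Python sketch. All numbers here are hypothetical: a population split evenly on a question, where a landline-only frame silently drops the mobile-only subgroup, shifts the estimate no matter how large the sample drawn from that frame.

```python
import random

random.seed(0)

# Hypothetical electorate: 50% support overall, but support differs by
# subgroup, and a landline-only frame under-reaches one subgroup.
population = (
    [("landline", 1)] * 30 + [("landline", 0)] * 20 +  # landline-reachable: 60% support
    [("mobile", 1)] * 20 + [("mobile", 0)] * 30        # mobile-only: 40% support
)

true_support = sum(v for _, v in population) / len(population)

# The landline-only frame excludes every mobile-only respondent.
frame = [v for channel, v in population if channel == "landline"]
sample = [random.choice(frame) for _ in range(1000)]
polled_support = sum(sample) / len(sample)

print(f"true support:   {true_support:.2f}")
print(f"polled support: {polled_support:.2f}")  # near 0.60 - frame bias, not noise
```

Note that the error does not shrink as the sample grows; it is a property of the frame, not of the sample size.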

Principle of Diminishing Percentage – over time, a constant absolute increase each period represents a smaller and smaller percentage increase relative to the growing cumulative total – tendering the appearance of a reduction in growth to those who do not understand basic statistics. A common press headline trick is citing ‘lower percentage growth’ as a way of implying a ‘reduction’.
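A short worked example in Python, with hypothetical figures: the amount added never changes, yet the percentage growth rate falls every period.

```python
# A constant absolute increase of 100 units per period: the gain is
# identical each period, but the growth *rate* declines monotonically.
total = 1000
for period in range(1, 6):
    prior = total
    total += 100  # same absolute gain every period
    pct = 100 * (total - prior) / prior
    print(f"period {period}: total={total}, growth={pct:.1f}%")
# period 1: 10.0%, period 2: 9.1%, ... period 5: 7.1% - no slowdown occurred
```

A headline reading “growth slows to 7.1%” would be literally true and entirely misleading.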

Procrustean Solution – the undesirable practice of tailoring data to fit its container or some other preconceived argument being promoted by a carrier of an agenda.

Psychologism – data or principles purported to be of scientific origin, in which psychology plays a key or sole role in their gathering, grounding or explanation. Suffers from the weakness that psychological principles enjoy a perch which can never be falsified; therefore they are at risk of standing as pseudoscience.

quod gratis asseritur, gratis negatur – that which can be declared without basis, can be dismissed without basis. This phrase does not mean that the subject declaration is existentially incorrect, nor that the antithetical set therefore bears truth – rather simply that I can refuse to accede to such a declaration, without any particular reason or soundness in arguing. While this is a form of skepticism, the apothegm can be abused to mistakenly perform debunking. The clarifying action on the part of the skeptic is its usage as a refusal to accede, versus a negation of an idea (inverse negation fallacy). The latter is not warranted inside this principle of philosophy.

Recency Bias – the tendency to accept more recent information as being more credible or holding more gravitas in research.

Salami Slicing – the practice of artificially breaking a scientific study down into sub-components and publishing each sub-component as a separate study. If the paper involves consilience from studies derived from a number of disciplines, this might be acceptable.  However, when the studies are broken down simply to increase the publishing history of its authors or make a political position appear to be more scientific, through ‘a thousand studies support the idea that…’ styled articles, then this is a form of pseudoscience.

Scale Obfuscation – shooting high and shooting low. Any form of a priori assumption wherein a researcher presumes that larger or elemental sample populations are congruent with more study or better science. This as contrasted with an incremental scientific method under a condition of making open context field observations first, crafting the right (rather than presumed right) question, followed by study of multiple iterations of smaller, discrete, cohort and more focused populations, built then into larger field databases and study.

Bigger Data is Better Fallacy – the invalid assumption which researchers make when attempting to measure a purported phenomenon, that data extraction inside a very large source population, as the first and only step of scientific study, will serve to produce results which are conclusive or more scientific. The same presumption error can apply to meta-analysis. When such an analysis is conducted in a context of low/detached, rather than informed, knowledge sets, it will serve to dilute critical elements of signal and intelligence which could be used to elucidate the issue further.

Big is Science Error – the use of bigger sample sizes and study data as a way of bypassing the scientific method, while still tendering an affectation of science and gravitas. Occurs any time a study cannot be replicated, yet a call to consensus is made simply because it would be too difficult to replicate the study basis of that consensus.

Elemental Pleading – breaking down the testing of data or a claim into testing of its constituents, in order to remove or filter an effect which can only be derived from the combination of elements being claimed. For instance, to address the claim that doxycycline and EDTA reduce arterial plaque, testing was developed to measure the impact of each item individually, and when no effect was found, the combination was dismissed as well without study, and further/combination testing was deemed to be ‘pseudoscience’.
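The interaction failure described above can be reduced to a toy model. This is purely illustrative (no claim is made here about doxycycline/EDTA themselves): a response that exists only when both factors are present will test as null under any element-by-element protocol.

```python
# Hypothetical response model in which only the *combination* of two
# factors produces an effect: y = 1 when both a and b are present, else 0.
def effect(a: bool, b: bool) -> int:
    return 1 if (a and b) else 0

# Element-by-element testing finds nothing...
print(effect(True, False))   # factor a alone: 0
print(effect(False, True))   # factor b alone: 0
# ...yet the combined claim was never actually tested:
print(effect(True, True))    # combination: 1
```

Dismissing the combination on the strength of the two single-factor nulls is precisely the elemental pleading error.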

Sea Lioning – is a type of Internet trolling which consists of bad-faith requests for evidence, or repeated questions, the purpose of which is not clarification or elucidation, but rather an attempt to derail a discussion, appeal to authority as if representing science, or to wear down the patience of one’s opponent. May involve invalid repetitive requests for proof which fall under a proof gaming fallacy and highlight the challenger’s lack of scientific literacy.

Semmelweis Reflex – the tendency to reject new evidence that contradicts one’s held paradigm.

Shared Information Bias – the tendency for group members to spend more time and energy discussing information that all members are already familiar with (i.e., shared information), and less time and energy discussing information that only some members are aware of (i.e., unshared information).

Shotgun Barn Fallacy – when one takes the available set of evidence and habitually draws favored conclusions from it. Firing a shotgun at the broadside of a barn and then drawing the bull’s-eye around the pellet holes.

Shrouded Furtive Fallacy – extensive recitation of historical and peripheral information which tenders the semblance of deep research and professionalism, yet is only posed to serve as a distraction inside an article whose crucial argument centers solely on accusations of malfeasance or lying on the part of its target opponents.

Simplicity Sell – when making a pitch through the contention that something is easy. “Look, it’s simple, right?”

Social Skepticism – a form of weaponized philosophy which masquerades as science, science enthusiasm or science communication. Social skepticism enforces specific conclusions and obfuscates competing ideas via a methodical and heavy-handed science embargo. It promotes charades of critical thought and self-aggrandizement, and is often chartered to defend corporate agendas; all while maintaining a high priority of falsely impugning eschewed individuals and topics. Its philosophies and conclusions are imposed through intimidation on the part of its cabal and cast of dark actors, and are enacted in lieu of, and through bypassing, actual scientific method. Failures with respect to science are the result of flawed or manipulated philosophy of science. When social control, change or conformance agents subject science to a state of being their lap-dog, serving specific agendas, such agents err in regard to the philosophical basis of science, skepticism. They are not bad scientists, rather bad philosophers, seeking a socialized goal. They are social skeptics.

Sowell’s Axiom – for every expert, there is an equal and opposite expert, but for every fact there is not necessarily an equal and opposite fact.

Sowert’s Law – Ignorance + Trivia = Fact

Streetlight Effect – is a type of observational bias that occurs when people only search for something where it is easiest to look.

Survivorship Bias – concentrating on the people or data that “survived” some process and inadvertently overlooking those that didn’t because of their lack of observability.
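A minimal simulation of survivorship bias, with hypothetical numbers: score a cohort, let only high scorers remain observable, and the mean of the “survivors” overstates the cohort badly.

```python
import random

random.seed(1)

# Hypothetical cohort of 10,000 ventures, each with a 'quality' score
# drawn from a normal distribution; only those above a survival
# threshold remain observable afterwards.
cohort = [random.gauss(50, 15) for _ in range(10_000)]
survivors = [q for q in cohort if q > 60]  # the rest vanish from view

mean_all = sum(cohort) / len(cohort)
mean_survivors = sum(survivors) / len(survivors)
print(f"full cohort mean: {mean_all:.1f}")        # ~50
print(f"survivors' mean:  {mean_survivors:.1f}")  # well above the threshold of 60
```

A study sampling only the observable survivors would conclude the average quality is far higher than it ever was.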

Torfuscation – a common form of mainstream or corporate pseudoscience which goes one step beyond the common argument from ignorance, and is further used to conceal the presence of that fallacy. A process, contended to be science, wherein one develops a conclusion by cataloging absences of observation as data; these of course default to a condition of absence of evidence, which is then parlayed into becoming evidence of absence. A refined form of praedicate evidentia, used in conjunction with statistical studies which exploit the absence of observation as the basis for a claim that something does not exist. For instance, using medical-plan filed diagnoses to indicate the level of prevalence of a disease, when such data is fraught with holes, incomplete reports and system failures, all of which work in favor of the desired conclusion of ‘evidence of absence’.
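The claims-data example can be sketched numerically. The rates below are hypothetical: a true prevalence of 10% where each real case has only a 40% chance of ever appearing as a filed diagnosis (missed visits, coding failures, incomplete records).

```python
import random

random.seed(2)

N = 100_000
# Hypothetical: 10% true prevalence; 40% of real cases ever get recorded.
true_cases = sum(1 for _ in range(N) if random.random() < 0.10)
recorded = sum(1 for _ in range(true_cases) if random.random() < 0.40)

print(f"true prevalence:     {true_cases / N:.3f}")  # ~0.100
print(f"recorded prevalence: {recorded / N:.3f}")    # ~0.040
```

Treating the unrecorded majority of cases as if they were observed non-cases – evidence of absence rather than absence of evidence – is the torfuscation.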

Trivia Fallacy – the rejection of an entire set of data by pointing out one questionable or disliked element inside it.

Von Restorff Effect – the bias inducing principle that an item that sticks out is more likely to be remembered than other items.

Whipping Horse – a martyr issue, study, chart, graphic or event which is cited as exemplary in condemning a targeted group or thought set – which is over-employed, or used in a domain of such ample equivocation, equivocal slack, straw man or other ambiguity, that it simply becomes a symbol and, ironically, maintains no real salience to the argument at hand.

Wolfinger’s Misquote – you may have heard the phrase ‘the plural of anecdote is not data’. It turns out that this is a misquote. The original aphorism, by the political scientist Ray Wolfinger, was just the opposite: ‘The plural of anecdote is data’. The only thing worse than the surrendered value (as opposed to collected value, in science) of an anecdote is the surrendered bias of ignoring anecdotes altogether.  This is a method of pseudoscience.

Yule-Simpson Paradox – a trend which appears in different groups of data can be manipulated to disappear or reverse (see Effect Inversion) when these groups are combined.
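The effect can be demonstrated with the classic kidney-stone dataset (Charig et al., 1986), the standard textbook instance of the paradox: treatment A has the higher success rate within each stone-size group, yet the lower rate once the groups are pooled, because A was disproportionately given the harder (large-stone) cases.

```python
# Success counts as (successes, trials) per treatment, per stone-size group.
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

# Within each group, A beats B:
for name, g in groups.items():
    for t in ("A", "B"):
        s, n = g[t]
        print(f"{name} {t}: {s}/{n} = {s/n:.0%}")

# Pool the groups and the ranking reverses:
for t in ("A", "B"):
    s = sum(g[t][0] for g in groups.values())
    n = sum(g[t][1] for g in groups.values())
    print(f"combined {t}: {s}/{n} = {s/n:.0%}")
# A: 93% and 73% per group, but 78% combined; B: 87% and 69%, but 83% combined
```

The reversal is driven entirely by the unequal allocation of cases across groups, which is exactly the lever available to anyone combining or splitting groups to manufacture a desired trend.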

Zeigarnik Effect – states that people remember uncompleted or interrupted tasks better than completed tasks. This imparts a bias to refute arguments or ideas which are unfinished.

epoché vanguards gnosis

How to MLA cite this blog post => 1


  1. The Ethical Skeptic, “The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data” The Ethical Skeptic, WordPress, 17 Feb 2018, Web;

February 23, 2015 - Posted by | Argument Fallacies, Ethical Skepticism
