The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

The Plural of Anecdote is Data

A single observation does not necessarily constitute an instance of the pejorative descriptor ‘anecdote’. Not only do anecdotes constitute data, but one anecdote can serve to falsify the null hypothesis and settle a scientific question in short order. Such is the power of a single observation. Such is the power of wielding scientific inference skillfully. Fake skeptics seek to emasculate the power of the falsifying observation, at all costs.

It is incumbent upon the ethical skeptic – those of us who are researchers, if you will; those who venerate science both as an objective set of methods and as their underlying philosophy – to understand the nature of anecdote and how that tool is correctly applied inside scientific inference. Anecdotes are not ‘Woo’, as most fake skeptics will imply through a couple of notorious memorized one-liners. Never mind what they say, nor what they might claim as a straw man of their intent; watch instead how they apply their supposed wisdom. You will observe such abuse of the concept to be most often the case. We must insist, to the theist and nihilist religious community of deniers, that inside the context of falsification/deduction in particular, a single observation does not constitute an instance of ‘anecdote’ (in the pejorative). Not only do anecdotes constitute data, but one anecdote can serve to falsify the Null (or even the null hypothesis) and settle the question in short order. Such is the power of a single observation.

See ‘Anecdote’ – The Cry of the Pseudo-Skeptic

To an ethical skeptic, inductive anecdotes may prove to be informative in nature if one gives structure to them and catalogs them over time. Anecdotes which are falsifying/deductive in nature are not only immediately informative; more importantly, they are probative – probative with respect to the null. I call the inferential mode modus absens ‘the null’ because, in non-Bayesian-styled deliberation, the null hypothesis – the notion that something is absent – is usually not actually a hypothesis at all. Rather, this species of idea constitutes simply a placeholder: the idea that something is not, until proved to be. And while this is a good common-sense structure for the resolution of a casual argument, it does not mean that one should therefore believe or accept the null merely as the outcome of this artifice of common sense. In a way, deflecting observations by calling them ‘anecdote’ is a method of believing the null – not, in actuality, conducting science nor critical thinking. However, this is the reality we face with unethical skeptics today: the tyranny of the religious default Null.

The least scientific thing a person can do, is to believe the null hypothesis.

Wolfinger’s Misquote

/philosophy : skepticism : pseudoscience : apothegm/ : you may have heard the phrase ‘the plural of anecdote is not data’. It turns out that this is a misquote. The original aphorism, by the political scientist Ray Wolfinger, was just the opposite: ‘The plural of anecdote is data’. The only thing worse than the surrendered value of an anecdote (as opposed to its collected value, in science) is the bias incurred by ignoring anecdotes altogether. This is a method of pseudoscience.

Our opponents elevate the scientific status of a typical placeholder Null (such-and-such does not exist) and pretend that the idea 1. actually possesses a scientific definition and 2. bears consensus acceptance among scientists. This constitutes the first of many magician’s tricks which those who do not understand the context of inference fall for, over and over. Even scientists will fall for this ol’ one-two, so it is understandable why journalists and science communicators do as well. But anecdotes are science, when gathered under the disciplined structure of Observation (the first step of the scientific method). Below we differentiate four contexts of the single observation – two inductive and two deductive inference contexts – only one of which fits the semantics of ‘anecdote’ exploited by fake skeptics.

Inductive Anecdote

Inductive inference is the context wherein a supporting case or story can be purely anecdotal (‘The plural of anecdote is not data’). This apothegm is not a logical truth; it may apply to certain cases of induction, however it does not apply universally.

Null:  Dimmer switches do not cause house fires to any greater degree than do normal On/Off flip switches.

Inference Context 1 – Inductive Data Anecdote:  My neighbor had dimmer switched lights and they caused a fire in his house.

Inference Context 2 – Mere Anecdote (Appeal to Ignorance):  My neighbor had dimmer switched lights and they never had a fire in their house.

Hence we have Wolfinger’s Inductive Paradox.

Wolfinger’s Inductive Paradox

/philosophy : science : data collection : agency/ : an ‘anecdote’ to the modus praesens (an observation or case which supports an objective presence of a state or object) constitutes data, while an anecdote to the modus absens (an observation supporting an appeal-to-ignorance claim that a state or object does not exist) is merely an anecdote. One’s refusal to collect or document the former does not constitute skepticism. Relates to Hempel’s Paradox.

Finally, we have the instance wherein we step out of inductive inference and into the stronger probative nature of deduction and falsification. In this context an anecdote is almost always probative. As in the case of Wolfinger’s Inductive Paradox above, one’s refusal to collect or document such data does not constitute skepticism.

Deductive or Falsifying Anecdote

Deductive inference, leading also to falsification (‘The plural of anecdote is data’). Even the singular of anecdote is data under the right condition of inference – a point illustrated in the toy sketch following the example contexts below.

Null:  There is no such thing as a dimmer switch.

Inference Context 3 – Deductive Anecdote:  I saw a dimmer switch in the hardware store and took a picture of it.

Inference Context 4 – Falsifying Anecdote:  An electrician came and installed a dimmer switch into my house.
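The following is a toy sketch of the asymmetry at work here – it is not drawn from the article itself, and the observation records and function names are hypothetical. It simply encodes the point that a single probative observation displaces a modus absens placeholder null, while any number of ‘I never saw one’ anecdotes leave it exactly where it started.

```python
# Hypothetical illustration only: how a placeholder null of the form
# "X does not exist" responds to different kinds of single observations.

def evaluate_null(observations):
    """Status of a modus absens (placeholder) null, given a set of observations."""
    for obs in observations:
        # A deductive/falsifying observation directly evidences presence (Context 4).
        if obs["probative"] and obs["confirms_presence"]:
            return "null falsified by a single observation"
    # Any count of mere-absence anecdotes (Context 2) proves nothing either way.
    return "null remains a placeholder - absence not demonstrated, merely not displaced"

one_sighting = [{"probative": True, "confirms_presence": True}]
many_non_sightings = [{"probative": False, "confirms_presence": False}] * 1000

print(evaluate_null(one_sighting))        # settled in short order
print(evaluate_null(many_non_sightings))  # still just a placeholder
```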

Consider, for example, what occurs when one accepts materialism as an a priori truth: religious agency is inserted between steps 2 and 3 above (the deductive and the falsifying anecdote). Such deniers contend that dimmer switches do not exist, therefore any photo of one must necessarily be false. And of course, at any given time, there is only one photo of one at all (all previous photos were dismissed earlier in similar exercises). Furthermore, they then forbid any professional electrician from installing any dimmer switches (lest he be subject to losing his license). In this way, dimmer switches can never ‘exist’, and deniers can endlessly proclaim to non-electricians ‘you bear the burden of proof’ (see Proof Gaming). From then on, all occurrences of step 2 are deemed to constitute lone cases of ‘anecdote’, while the inductive and deductive contexts therein are never distinguished.

Our allies and co-observers as ethical skeptics need bear the knowledge of the philosophy of science (skepticism) sufficient to stand up and say, “No – this is wrong. What you are doing is pseudoscience”.

Hence, one of my reasons for creating The Map of Inference.

     How to MLA cite this article:

The Ethical Skeptic, “The Plural of Anecdote is Data”; The Ethical Skeptic, WordPress, 1 May 2019; Web, https://wp.me/p17q0e-9HJ


Torfuscation – Gaming Study Design to Effect an Outcome

As important as the mode of inference one employs upon study completion is the design of the study itself. Before one begins to attempt to reduce and analyze a body of observational resource, the ethical scientist must first select the study type and design that will afford them the greatest draw in terms of probative potential. The intricacies of this process present the poseur an opportunity to game the outcomes of science through study design, type and PICO features, such that a study produces outcomes which serve to further the political, hate-based or religious causes of its sponsors.

There are several ways to put on the appearance of conducting serious science, yet still effect outcomes which maintain alignment with the agency of your funders, sponsors, mentors or controlling authorities. Recent ethical evolution inside science has highlighted the need to understand that a researcher’s simply having calculated a p-value, applied an arrival distribution or bounded an estimate inside a confidence interval does not necessarily mean that they have developed a sound basis from which to draw any quality inference. In similar philosophy, one can develop a study and yet completely mislead the scientific community as to the nature of reality inside a given issue of contention or science.

We are all familiar with the trick of falsely calling a ‘survey of study abstracts’, or a meta-synthesis of best evidence, or an opinion piece summarizing a body of study from one person’s point of view, a ‘meta-analysis’. A meta-analysis combines congruent study designs and bodies of equivalent data in order to improve the statistical power of the combined entailed analyses.1 The fake forms of meta-analysis do no such thing. A meta-analysis is a secondary or filtered systematic review which only bears leveraged strength in the instance wherein randomized controlled trials or longitudinal studies of the same species are able to be combined in order to derive this higher statistical power. Every other flavor of such ‘blending of study’ does not accomplish this objective. Such blending may – and this is important – actually serve to reduce the probative power of the systematic review itself. Nonetheless, you will find less-than-ethical scientists trying to push their opinion/summary articles upon the community as if they reflected, through convenient misnomer, this ‘most rigorous form of study design’. One can find an example of this within the study: Taylor, Swerdfeger, Eslick; An evidence-based meta-analysis of case-control and cohort studies; Elsevier, 2014.2
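To make the statistical-power point concrete, here is a minimal sketch of a fixed-effect (inverse-variance) pooling step – the arithmetic a genuine meta-analysis of congruent studies relies upon. The effect sizes and standard errors are hypothetical, and this is standard textbook pooling rather than any particular study’s method; the point is simply that the pooled standard error falls below that of every contributing study, which is where the added power comes from.

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical, congruent cohort studies (effect estimate, standard error):
effects = [0.10, 0.14, 0.08]
std_errors = [0.06, 0.05, 0.07]

estimate, se = fixed_effect_pool(effects, std_errors)
print(f"pooled estimate {estimate:.3f}, pooled SE {se:.3f}")  # SE < smallest input SE (0.05)
```

Blending incongruent designs or abstract-level opinions into this arithmetic adds no comparable weight – which is precisely the sleight-of-hand decried above.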

This equivocal sleight-of-hand stands as merely one example of the games played within the agency-influenced domains of science. With regard to manipulating study design in order to effect a desired scientific outcome, there are several means of accomplishing this feat – most notably the following methods, which are collectively called torfuscation. Torfuscation involves employing a less rigorous study type (a lower rank on the Chart below), an ineffective study design, or a flawed methodical PICO-time analysis, which will most often serve to weaken the probative potential of a study which could otherwise produce an outcome threatening to its sponsors.

Torfuscation

/philosophy : pseudoscience : study fraud : Saxon : ‘hide in the bog’/ : pseudoscience or obfuscation enacted through a Nelsonian knowledge masquerade of scientific protocol and study design. Inappropriate, manipulated or shallow study design crafted so as to obscure or avoid a targeted/disliked inference. A process, contended to be science, wherein one develops a conclusion through cataloging study artifice or observation noise as valid data – invalid observations which can be parlayed into becoming evidence of absence or evidence of existence as one desires, by accepting only the appropriate hit or miss grouping as the basis to support an a priori preference, as well as to avoid any further needed ex ante proof.  A refined form of praedicate evidentia or utile abstentia employed through using less rigorous or probative methods of study than are requisite under otherwise ethical science.  Exploitation of study noise generated through first-level ‘big data’ or agency-influenced ‘meta-synthesis’, as the ‘evidence’ that no further or deeper study is therefore warranted – and moreover that research of the subject entailed is now socially embargoed.

Study design which exploits the weakness potential entailed inside the PICO-time Study Design Development Model3 (see the Study to Inference Strength and Risk Chart below), through the manipulation of the study’s:

P – patient, problem or population
I – intervention, indicator
C – comparison, control or comparator
O – outcome, or
time – time series

This manipulation seeks to compromise the outcome or conclusion in terms of the study’s usage; more specifically: prevention, screening, diagnostic, treatment, quality of life, compassionate use, expanded access, superiority, non-inferiority and/or equivalence. (A hypothetical sketch of recording these PICO-time elements explicitly follows.)
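As a purely illustrative aid – not part of the original definition – the following sketch records a study’s PICO-time specification as an explicit structure, so that any later narrowing of population, comparator, outcome source or follow-up window becomes visible against the registered design rather than remaining buried in prose. All field names and values here are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PicoTimeSpec:
    population: str        # P - patient, problem or population
    intervention: str      # I - intervention, indicator
    comparator: str        # C - comparison, control or comparator
    outcome: str           # O - outcome measure and its data source
    followup_years: float  # time - time series / follow-up window

registered = PicoTimeSpec(
    population="all children in the national registry",
    intervention="exposure of interest",
    comparator="unexposed children from the same registry",
    outcome="diagnosis recorded by any provider",
    followup_years=8.0,
)

as_published = PicoTimeSpec(
    population="children retained in one health-plan database",
    intervention="exposure of interest",
    comparator="plan members only",
    outcome="diagnosis recorded in the plan database",
    followup_years=4.5,
)

# Any field that drifted between registration and publication is flagged for scrutiny.
drift = {field: (getattr(registered, field), getattr(as_published, field))
         for field in registered.__dataclass_fields__
         if getattr(registered, field) != getattr(as_published, field)}
print(drift)
```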

Meta-Garbage, Deescalation and PICO-time Manipulation

One example of tampering with the PICO-time attributes of a study would be the circumstance wherein only medical-plan completed diagnostic data is used as the sample base for a retrospective observational cohort study’s ‘outcome’ data. Such data is highly likely to be incomplete or skewed in a non-probative direction, under a condition of linear induction (a weaker form of inference) and utile abstentia (a method of exclusion bias through furtive data-source selection). In similar fashion, and as example, if the average age of outcome diagnosis is 5.5 years, and the average slack time between diagnosis and first possible recording into a medical plan database is 4 to 18 months, then constraining the time series involved inside a study examining that data to 4.5 years is an act of incompetent or malicious study design. You will find both of these tricks to be common in studies wherein a potential outcome is threatening to a study’s sponsors – agents who hope to prove, by modus absens and shallow, linear inductive inference, that the subject can be embargoed from then on. Just such a study can be found here: Madsen, Hviid; A Population-Based Study of Measles, Mumps, and Rubella Vaccination and Autism, 2002.4

A study may also be downgraded (placed lower on the chart below) and purposely forced to employ a lesser form of design probative strength (Levels 1 – 7), precisely because its sponsors suspect the possibility of a valid risk they do not want exposed. This is very similar to a downgrading in inference method called methodical deescalation – a common trick of professional pseudoscience. One may also notice that often, studies employing these three tricks are held as proprietary, concealed from the public during the critical study design phase. This is purposeful. This is oppression in the name of science.

One may also notice that the ‘meta-analysis’ decried earlier in this article cited this very study as a ‘best evidence study’ inside its systematic review. If you meta-study garbage, you will produce meta-garbage as well (see Secondary Study in the Chart below).
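A rough simulation makes the time-window trick above tangible. The distributional assumptions here are mine (a normal spread of diagnosis age and a uniform recording lag), and the numbers are only those quoted in the example; the sketch simply shows how sharply a 4.5-year window undercounts outcomes that average 5.5 years at diagnosis plus a 4-to-18-month recording lag.

```python
import random

random.seed(1)

def fraction_captured(followup_years: float, n: int = 100_000) -> float:
    """Share of true outcomes that land inside the study's follow-up window."""
    captured = 0
    for _ in range(n):
        age_at_diagnosis = random.gauss(5.5, 1.0)        # assumed spread around 5.5 years
        recording_lag = random.uniform(4 / 12, 18 / 12)  # 4 to 18 months, in years
        if age_at_diagnosis + recording_lag <= followup_years:
            captured += 1
    return captured / n

print(f"captured with a 4.5-year window: {fraction_captured(4.5):.1%}")
print(f"captured with an 8-year window:  {fraction_captured(8.0):.1%}")
```

Under these assumptions the truncated window captures only a small fraction of eventual diagnoses – an absence of evidence manufactured by design, not discovered by it.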

The following is The Ethical Skeptic’s chart indexing study design against mode of inference, strength, and its risk of torfuscation. It is a handy tool for helping spot torfuscation of the three example types elicited above, and more. The study types are ranked from top to bottom in terms of Level of probative strength (1 – 7), and are arranged as well into Direct, Analytical and Descriptive study groupings by color. Torfuscation involves the selection of a study type with a probative power lower down on the chart, when a higher probative level of study was available and/or warranted; as well as tampering with the PICO-time risk elements (right side of chart under the yellow header) characteristic of each study type, so as to weaken its overall ability to indicate a potential disliked outcome.

The Chart is followed by a series of definitions for each study type listed. The myriad sources for this compiled set of industry material are listed at the end of this article – however, it should be noted that the sources cited did not agree with each other on the material, level, structure nor definitions of various study designs. Therefore modifications and selections were made as to the attributes of study which allowed the entire set of alternatives/definitions to come into synchrony with each other – with minimal overlap and confusion. So you will not find 100% of this chart replicated inside any single resource or textbook.

(Note: My past lab experience has been mostly in non-randomized controlled factorial trial study – whose probative successes were fed into a predictive model, then confirmed by single mechanistic lab tests. I found this approach to be highly effective in my past professional work. But that lab protocol may not apply to other types of study challenge, and could be misleading if applied as a panacea. Hence the need for the chart below.)

Study Design Type Definitions

PRIMARY/DIRECT STUDY

Experimental – A study which involves a direct physical test of the material or principal question being asked.

Mechanistic/Lab – A direct study which examines a physical attribute or mechanism inside a controlled closed environment, influencing a single input variable, while observing a single output variable – both related to that attribute or mechanism.

Controlled Trial

Randomized (Randomized Controlled Trial) – A study in which people are allocated at random (by chance alone) to receive one of several clinical interventions. One of these interventions is the standard of comparison or the ‘control’. The control may be a standard practice, a placebo (“sugar pill”), or no intervention at all.
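A minimal sketch of the allocation step described above, with hypothetical participant IDs and arm names – chance alone, via a shuffled list, decides who receives the intervention and who receives the control. This is one simple randomization scheme among several in use.

```python
import random

def randomize(participants, arms=("intervention", "control"), seed=42):
    """Allocate participants to arms by chance alone (simple randomization)."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Walking down the shuffled list and alternating arms yields near-equal arm sizes.
    return {pid: arms[i % len(arms)] for i, pid in enumerate(shuffled)}

allocation = randomize([f"P{i:03d}" for i in range(1, 11)])
print(allocation)
```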

Non-Randomized Controlled Trial – A study in which people are allocated by a discriminating factor (not bias) to receive one of several clinical interventions. One of these interventions is the standard of comparison or the ‘control’. The control may be a standard practice, a placebo (“sugar pill”), or no intervention at all.

Parallel – A type of controlled trial where two groups of treatments, A and B, are given so that one group receives only A while another group receives only B. Other names for this type of study include “between patient” and “non-crossover” studies.

Crossover – A longitudinal direct study in which subjects receive a sequence of different treatments (or exposures). In a randomized controlled trial with repeated measures design, the same measures are collected multiple times for each subject. A crossover trial has a repeated measures design in which each patient is assigned to a sequence of two or more treatments, of which one may either be a standard treatment or a placebo. Nearly all crossover controlled trial studies are designed to have balance, whereby all subjects receive the same number of treatments and participate for the same number of periods. In most crossover trials each subject receives all treatments, in a random order.

Factorial – A factorial study is an experiment whose design consists of two or more factors, each with discrete possible values or ‘levels’, and whose experimental units take on all possible combinations of these levels across all such factors. A full factorial design may also be called a fully-crossed design. Such an experiment allows the investigator to study the effect of each factor on the response variable or outcome, as well as the effects of interactions between factors on the response variable or outcome.
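To illustrate ‘fully-crossed’ concretely, here is a short sketch that enumerates every combination of every factor level; the factors and levels themselves are hypothetical.

```python
from itertools import product

# Hypothetical factors, each with discrete levels.
factors = {
    "dose": ["low", "high"],
    "schedule": ["daily", "weekly"],
    "formulation": ["A", "B"],
}

# A full factorial (fully-crossed) design: every combination of levels is a condition.
conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for condition in conditions:
    print(condition)  # 2 x 2 x 2 = 8 experimental conditions
```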

Blind Trial – A trial or experiment in which information about the test is masked (withheld) from the participant (single blind) and/or the test administrator (double blind), to reduce or eliminate bias, until after a trial outcome is known.

Open Trial – A type of non-randomized controlled trial in which both the researchers and participants know which treatment is being administered.

Placebo-Control Trial – A study which blindly and randomly allocates similar patients to a control group that receives a placebo and to an experimental test group. Therein investigators can ensure that any possible placebo effect is accounted for in the final statistical analysis.

Interventional (Before and After/Interrupted Time Series/Historical Control) – A study in which observations are made before and after the implementation of an intervention, both in a group that receives the intervention and in a control group that does not; or a study that uses observations at multiple time points before and after an intervention (the ‘interruption’). The design attempts to detect whether the intervention has had an effect significantly greater than any underlying trend over time.

Adaptive Clinical Trial – A controlled trial that evaluates a medical device or treatment by observing participant outcomes (and possibly other measures, such as side-effects) along a prescribed schedule, and modifying parameters of the trial protocol in accord with those observations. The adaptation process generally continues throughout the trial, as prescribed in the trial protocol. Modifications may include dosage, sample size, drug undergoing trial, patient selection criteria or treatment mix. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. Importantly, the trial protocol is set before the trial begins; the protocol pre-specifies the adaptation schedule and processes. 

Observational – Analytical

Cohort/Panel (Longitudinal) – A study in which a defined group of people (the cohort – a group of people who share a defining characteristic, typically those who experienced a common event in a selected period) is followed over time, to examine associations between different interventions received and subsequent outcomes.  

Prospective – A cohort study which recruits participants before any intervention and follows them into the future.

Retrospective – A cohort study which identifies subjects from past records describing the interventions received and follows them from the time of those records.

Time-Series – A cohort study which identifies subjects from a particular segment in time following an intervention (which may have also occurred in a time series) and follows them during only the duration of that time segment. Relies upon robust intervention and subject tracking databases. For example, comparing lung health to pollution during a segment in time.

Cross-Sectional/Transverse/Prevalence – A study that collects information on interventions (past or present) and current health outcomes, i.e. restricted to health states, for a group of people at a particular point in time, to examine associations between the outcomes and exposure to interventions.

Case-Control – A study that compares people with a specific outcome of interest (‘cases’) with people from the same source population but without that outcome (‘controls’), to examine the association between the outcome and prior exposure (e.g. having an intervention). This design is particularly useful when the outcome is rare.

Nested Case-Control – A study wherein cases of a health outcome that occur in a defined cohort are identified and, for each, a specified number of matched controls is selected from among those in the cohort who have not developed the health outcome by the time of occurrence in the case. For many research questions, the nested case-control design potentially offers impressive reductions in costs and efforts of data collection and analysis compared with the full case-control or cohort approach, with relatively minor loss in statistical efficiency.
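The control-selection step described above (often called risk-set sampling) can be sketched briefly. The cohort below is randomly generated and the 4:1 matching ratio is an arbitrary choice; the essential move is that, for each case, controls are drawn only from cohort members still event-free at that case’s event time.

```python
import random

random.seed(7)

# Hypothetical cohort: (subject_id, event_time in years, or None if no event observed).
cohort = [(f"S{i:03d}", random.choice([None] * 9 + [round(random.uniform(1, 10), 1)]))
          for i in range(1, 201)]

def nested_case_control(cohort, controls_per_case=4):
    """For each case, sample matched controls from its risk set (still event-free at t)."""
    matched_sets = []
    for subject_id, event_time in cohort:
        if event_time is None:
            continue  # not a case
        risk_set = [cid for cid, ct in cohort
                    if cid != subject_id and (ct is None or ct > event_time)]
        controls = random.sample(risk_set, min(controls_per_case, len(risk_set)))
        matched_sets.append({"case": subject_id, "time": event_time, "controls": controls})
    return matched_sets

print(nested_case_control(cohort)[:2])  # first two matched sets
```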

Community Survey – An observational study wherein a targeted cohort or panel is given a set of questions regarding both interventions and observed outcomes over the life or a defined time period of the person, child or other close family member. These are often conducted in conjunction with another disciplined polling process (such as a census or general medical plan survey) so as to reduce statistical design bias or error.

Ecological (Correlational) – A study of risk-modifying factors on health or other outcomes, based on populations defined either geographically or temporally. Both the risk-modifying factors and the outcomes are averaged, or linearly regressed, for the populations in each geographical or temporal unit and then compared using standard statistical methods.

Observational – Descriptive

Population – A study of a group of individuals taken from the general population who share a common characteristic, such as age, sex, or health condition. This group may be studied for different reasons, such as their response to a drug or risk of getting a disease. 

Case Series – Observations are made on a series of specific individuals, usually all receiving the same intervention, before and after an intervention but with no control group.

Case Report – Observation is made on a specific individual, receiving an intervention, before and after an intervention but with no control group/person other than the general population.

SECONDARY/FILTERED STUDY

Systematic Review/Objective Meta-Analysis – A method for systematically combining pertinent qualitative and quantitative study data from several selected studies to develop a single conclusion that has greater statistical power. This conclusion is statistically stronger than the analysis of any single study, due to increased numbers of subjects, greater diversity among subjects, or accumulated effects and results. However, researchers must ensure that the quantitative and study-design attributes of the contained studies all match, in order to retain and enhance the statistical power entailed. Mixing less rigorous or incongruent studies with more rigorous studies will only result in a meta-analysis which bears the statistical power of a portion of the studies, or of the least rigorous study type contained, in decreasing order along the following general types of study (a brief sketch of this ‘weakest link’ effect follows the list):

Controlled Trial/Mechanism
Longitudinal/Cohort
Cross-Sectional
Case-Control
Survey/Ecological
Descriptive
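A minimal sketch of that ‘weakest link’ point, using the ordering above. The numeric ranks are only illustrative, not an industry standard; the sketch simply states that a blended review’s effective design strength is that of its least rigorous included member.

```python
# Illustrative rigor ranks, following the ordering listed above (1 = strongest).
RIGOR_RANK = {
    "controlled trial/mechanism": 1,
    "longitudinal/cohort": 2,
    "cross-sectional": 3,
    "case-control": 4,
    "survey/ecological": 5,
    "descriptive": 6,
}

def effective_design_strength(included_designs):
    """A blended review is only as strong as its least rigorous included design."""
    return max(included_designs, key=lambda design: RIGOR_RANK[design])

print(effective_design_strength(["controlled trial/mechanism", "longitudinal/cohort"]))
print(effective_design_strength(["longitudinal/cohort", "case-control", "descriptive"]))
```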

Interpretive/Abstract ‘Meta-Synthesis’ – A study which surveys the conclusion or abstract of a pool of studies in order to determine the study authors’ conclusions along a particular line of conjecture or deliberation. This may include a priori conclusions or author preferences disclosed inside the abstract of each study, which were not necessarily derived as an outcome of the study itself. This study may tally a ‘best evidence’ subset of studies within the overall survey group, which stand as superior in their representation of the conclusion, methodology undertaken or breadth in addressing the issue at hand.

Editorial/Expert Opinion – A summary article generally citing both scientific outcomes and opinion, issued by an expert within a given field, currently active and engaged in research inside that field. The article may or may not refer to specific examples of studies, which support an opinion that a consilience of evidence points in a given direction regarding an issue of deliberation. The author will typically delineate a circumstance of study outcome, consilience or consensus as separate from their personal professional opinion.

Critical Review/Skeptic Opinion – A self-identified skeptic or science enthusiast applies a priori thinking, with no ex ante accountability, in order to arrive at a conclusion. The reviewer may or may not cite a couple of examples or studies to back their conclusion.

Sources: 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22

     How to MLA cite this article:

The Ethical Skeptic, “Torfuscation – Gaming Study Design to Effect an Outcome”; The Ethical Skeptic, WordPress, 15 Apr 2019; Web, https://wp.me/p17q0e-9yQ


Nelsonian Inference and Cultivated Ignorance

Nelsonian knowledge is virtual, forbidden knowledge which betrays its possession through one’s exacting efforts to avoid it in the first place. Nelsonian knowledge involves a keen prowess in knowing what to not-know, where to not-look and how not-to-look at it. As regards the poseur, intelligence cannot be derived from the ‘reliable’ sources they choose to examine. Rather, it is those sources which they conspicuously choose to avoid which tend to offer the greatest probative potential.

The Riddle of Not-Knowledge

When I was little my parents used to set up an elaborate play each Easter. My Dad would compete with my brother and me in the hunting of Easter Eggs in our backyard. Each spring we would get out our baskets after church and set our minds to finding as many decorated eggs or chocolates as we could muster, hidden in various places in the backyard. After all the hunting was over, Dad would, without exception, come in an abysmal last place compared to the parity in haul attained by my brother and me. It took a couple of years before we finally began to wonder, since our parents had hidden all the decorated eggs and chocolates in the first place, why my Dad was so abysmally poor at then finding them. He would inevitably end up with only 1 or 2 eggs. What a terrible Easter Egg hunter my Dad was! Eventually I figured out that my Dad was avoiding all the spots where the eggs actually were hidden; in the end, only ‘finding’ those eggs we did not find. He was simply doing cleanup duty and putting on a fun charade for us kids.

One year my Dad ‘found’ an Easter Egg left hidden from the previous year. We were curious as to why we had found 25 eggs when he had hidden only 24 – and it was an unforgettably entertaining moment for two little boys when he cracked open that extra egg in order to eat it.

Now, the term ‘Nelsonian knowledge’ (and inference) derives from a tale told of Admiral Horatio Nelson of the British Navy. A colorful tradition has grown from this legend, purportedly from the Battle of Copenhagen: upon being informed of a command signal to cease action and retreat, Nelson held a telescope to his blind eye and exclaimed, “I really do not see the signal!”1 Similar doctrine and result can be advanced in politics through a form of governance called Tyflocracy (blind-eye government). Either way, Nelsonian knowledge involves a keen prowess in knowing what to not-know, where to not-look and how not-to-look at it.

It is said that a secret not worth sharing, is not worth keeping. In this same vein, a forbidden topic not worth studying, is not worth squelching.

Later in life, while I was advising a company encountering problems with fraud and embezzlement, the company’s CFO was implicated in the embezzlement – not because she was caught with her hand in the cookie jar per se, but rather because of her conspicuous absence at every critical meeting and her detachment from every decision point in which the fraud mechanisms were approved/executed. In other words, although she did not personally reap any proceeds from the fraud, her knowledge of the fraud was betrayed by those meetings and decision points she avoided, and not prima facie by those inside of which she was present. Hers was a crime of virtual knowledge; investigators were amazed at her skill in avoiding such a pervasive deceptive element under her professional purview. This ‘amazing skill’ involved a knowledge called Nelsonian knowledge. One betrays their possession of Nelsonian knowledge through the robust efforts undertaken to avoid it. And just like my father, who managed to search every single darn spot in the backyard in which an Easter Egg was not located, her exacting efforts to avoid all facets of the fraud demonstrated her intimate knowledge of the fraud itself.

The Riddle of Nelsonian Knowledge

It behooves the holder of Nelsonian knowledge to know more about this embargoed knowledge than would be reasonably expected inside standard ignorance. The irony of Nelsonian knowledge is that it demands of its ‘ignorant party’ a detailed awareness of the schema, its depth, and a flawless monitoring thereof – which is unparalleled inside official knowledge.

If our desire to avoid so-called ‘baseless pseudoscience’ is as casual as we imply;
casual to such an extent so as to justify our complete disinterest in it as a species,
then why is our knowledge of specifically what is forbidden-to-study, so damned accurate and thorough?

If it is all worthless fodder, then why are its opponents so well organized, trained and armed?

Such knowledge is called ‘contrived ignorance’ or Nelsonian knowledge and inference.2 And, as if to prove the point, please note that Wikipedia fails to define this principle correctly (rendering it as ‘willful blindness’). Nelsonian knowledge and contrived ignorance are active processes (agency), and bear little in common with the passive state of willful blindness (apathy). The two are not the same thing.

Even to the point of crafting its very language, Wikipedia employs Nelsonian knowledge in defining the term Nelsonian knowledge itself.

Nelsonian inference would be the treasure digs and the trail in blue on the treasure map above, while the Nelsonian knowledge would be the treasure map itself. Such is the game played by our most talented ‘skeptics’. Their ability to conspicuously look only at evidence which will show them to be correct (what they call ‘reliable’ sources) betrays that they bear virtual knowledge of that which so threatens their very being (probative sources) – a terror so deep that they would willingly deceive themselves in the process of deceiving others (see The New Debunker: Pseudo-Skeptic Sleuth).

Nelsonian Knowledge (Inference)

/philosophy : pretense : knowledge obfuscation/ : Nelsonian knowledge takes three forms:

1. a meticulous attentiveness to and absence of, that which one should ‘not know’,
2. an inferential method of avoiding such knowledge, and finally as well,
3. that misleading knowledge or activity which is used as a substitute in place of actual knowledge (organic untruth or disinformation).

The former (#1) is taken to actually be known on the part of a poseur. It is dishonest for a man deliberately to shut his eyes to principles/intelligence which he would prefer not to know. If he does so, he is taken to have actual knowledge of the facts to which he shut his eyes. Such knowledge has been described as ‘Nelsonian knowledge’, meaning knowledge which is attributed to a person as a consequence of his ‘willful blindness’ or (as American legal analysts describe it) ‘contrived ignorance’.

Nelsonian knowledge is that set of inferences at the bottom of The Map of Inference. Its expressions include abductive, panductive, revelatory and critical thinking forms of inference. In other words, ‘ways to not know’.

Nelsonian Inferences – Ways to Not Know

Nelsonian knowledge goes a step further than mere willful blindness or apathy, in that Nelsonian knowledge is 1. a meticulous absence of that which one should ‘not know’, 2. an inferential method of avoiding such knowledge, and finally as well, 3. that knowledge or activity which is used as a substitute in its place (organic untruth or disinformation). However, when the tactics of Nelsonian knowledge are deployed on a social scale, such an effort is known as cultivated ignorance. Remember that, in all of these contexts, the word ignorance acts as a verb.

Not-Knowledge and Cultivated Ignorance

When Nelsonian knowledge becomes a goal of a cult, an institution, academia or society as a whole, such activities and the resulting sets of ‘wisdom’ constitute a wholly new entity called cultivated ignorance. This form of ignorance does not constitute any kind of nescience – the accidental ignorance which results from inexperience or from being a novel player. Rather, cultivated ignorance is a form of pluralistic ignorance which is crafted purposefully as a means of establishing control of a population – a means of influence and oppression which avoids the appearances of conspiracy theory, through tactics of false or semi-true slogans, the six mechanisms of the professional lie, fake science principles, compartmentalization, disinformation, partial information, counter-intelligence, social skeptic patrols and media control. As I mentioned above, ignorance is a verb in this context. It is cultivated when the population is trained to avoid those avenues through which they might receive information contrary to that which is being promoted by the Cabal in the first place. The craft was first mastered by the Abrahamic religions; the torch of managing cultivated ignorance has since been passed to academic nihilists, oligarchs and their global-socialist governance partners.

Cultivated Ignorance

/philosophy : counterintelligence : Nelsonian knowledge/ :

If one is to deceive, yet also fathoms the innate spiritual decline incumbent with such activity – then one must abstract a portion of the truth, such that it serves and cultivates ignorance – a dismissal of the necessity to seek what is unknown.

The purposeful spread, promotion or enforcement of Nelsonian knowledge and inference. Official knowledge, or an Omega Hypothesis, which is employed to displace/squelch both embargoed knowledge and the entities who research such topics. Often the product of a combination of pluralistic ignorance and the Lindy Effect, its purpose is to socially minimize the number of true experts within a given field of study. This is in order to ensure that an embargoed topic is never seriously researched by more members of the body of science than Michael Shermer’s ‘dismissible margin’ of researchers. By acting as the Malcolm Gladwell connectors, and under the moniker of ‘skeptics’, Social Skeptics can then leverage the popular mutual ignorance of the members and begin to spin misconceptions as to what expert scientists think – and moreover, then cultivate these falsehoods among scientists and the media at large. True experts who dissent are then intimidated and must remain quiet, so as not to seem anathema nor risk being declared fringe by the patrolling Cabal of fake skeptics.

Cultivated ignorance is the effort on the part of social skepticism to promote invalid forms of inference – forms of inference which only serve to obfuscate and block knowledge, not derive it. Its heart and soul resides in the practice of employing Nelsonian knowledge.

This is only part of the reason why we as mankind are clueless as to critical issues of our being. Who are we? Where did we come from? Why are there mysteries of which everyone is aware, yet no one seems to want to urgently solve, or even solve at all?

Not-knowledge is a peer to knowledge. And if not-knowledge is based upon risky stacks of linear induction, is employed as a political weapon, or is based upon Nelsonian inference, then it is our job to challenge not-knowledge as well.

Nelsonian knowledge. As an ethical skeptic, never allow yourself nor anyone in your organization, to work under such a principle. Ignorance causes suffering.

     How to MLA cite this article:

The Ethical Skeptic, “Nelsonian Inference and Cultivated Ignorance”; The Ethical Skeptic, WordPress, 7 Mar 2019; Web, https://wp.me/p17q0e-9se

 

