The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

The Dual-Burden Model of Inferential Ethics

Trust everyone, but cut the deck. So goes the famous apothegm regarding accountability being a double-edged sword. There exist certain logical critical paths in which both the sponsor and the null hypothesis defender bear the burden of proof of their contention. Inside a sufficiently complex or unknown system domain, a claim to absence of intent must also be proved to a reasonable certainty.

There are certain circumstances wherein both sides in an argument bear the burden of proof. Let’s illustrate this through the game of poker. The rules of poker are formulated around a persistent and robust human foible called cheating. Cheating is the condition wherein an intelligent mind chooses to intervene (intent) and insert into a model a constraint which normally does not exist. An ace card taped under the table, or a method of dumping poker chips on insignificant hands with the specific intent of forcing a weaker-funded player to call a bid or withdraw from the game artificially – these are just two simple examples among the many methods of cheating at card games of chance.1

In signals intelligence, a fairly common form of encryption involves the masking of a transmission such that it cannot be distinguished from white noise or background static.2 In such instances, finding an intervention inside such stochasticity – the contribution or presence of an external constraint of intent – is the key to detecting an encrypted signal as distinct from the background noise. In such a case of intelligence prosecution, all one need do in order to prove one’s case is find one single instance of contrived signal. The signal bears intelligent intent, regardless of how random or ‘intent-lacking’ the rest of the transmission might appear. Intent only has to be detected once in order to falsify its absence.
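This detection logic can be sketched computationally. The following is a minimal illustration of my own (not any actual SIGINT method) of how a single contrived constraint betrays itself inside otherwise random noise – here a faint periodic component, invisible to the eye, is exposed by its autocorrelation at the assumed period, where pure white noise shows essentially none:

```python
import random

def autocorrelation(series, lag):
    """Normalized autocorrelation of a series at a given lag."""
    n = len(series) - lag
    mean = sum(series) / len(series)
    num = sum((series[i] - mean) * (series[i + lag] - mean) for i in range(n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

random.seed(42)

# Pure background: white noise, with near-zero autocorrelation at every lag.
noise = [random.gauss(0, 1) for _ in range(2000)]

# 'Intent': a faint periodic constraint buried inside the very same noise.
period = 50
signal = [x + 0.8 * ((i % period) < 25) for i, x in enumerate(noise)]

print(abs(autocorrelation(noise, period)))   # near zero
print(abs(autocorrelation(signal, period)))  # markedly elevated
```

One detection at one lag suffices; nothing about the rest of the series needs to be characterized in order to falsify the claim that the whole transmission is mere static.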

Intent (Burden of Proof)

/philosophy : science : systems engineering : modeling and simulation/ : a novel constraint which arrives into a chaotic/complex process or a domain of high unknown, which does not originate from the natural background set of constraints, and further serves to produce a consistent pattern of ergodicity – when no feedback connection between outcome and constraint is possible. An intervening constraint in which every reasonable potential cause aside from intelligent derivation has been reduced, even if such constraint is accompanied or concealed by other peer stochastic and non-intent influences.

When one makes or implies a claim to lack of intent, one has made the first scientific claim and cannot therefore be exempted from the burden of proof regarding that claim, nor reside inside the luxury of a false null hypothesis (einfach mechanism).

So, let us then outline the practical ethics (praxis) of how card games are managed in light of such a reality of intent – a praxis which demands a burden of proof of both parties in a deliberation.

The Dual-Burden of Intent: Trust Everyone But Cut the Deck

Two distinct conditions of proof exist with regard to poker playing. These conditions involve both the burden of proving an intent (cheating) and the burden of proving that the domain is devoid of intent (visibly shuffling the deck of all cards). This double-edged sword of accountability, or dual-burden with respect to intent, is outlined below. Both of these claims bear a simultaneous burden of proof.

Yes, in order to accuse someone of cheating (intent), one bears a burden of at the least inductive plurality, if not proof. However, when one sits down at a table to play poker, one is also making an implicit claim to honesty (absence of intent) – which also must be proved, each and every hand of cards. Both types of claim, explicit and implicit, simultaneously bear the burden of proof.

Claim 1: Accusation of Intent (Detecting the Cheat) – A sponsor must eventually prove intent, this is true. However, a sponsor can raise an objection and ask for research, even if such proof is not readily available – this according to house rules on how to prove cheating; to wit:3

If something is non-provable, your best bet is to leave the game and make mention of it to the host or the poker room manager. There won’t be much they can immediately do about it, but they can keep an eye out for it and maybe do something in the future (inductive plurality). If something is provable, you should voice your opinion to the host or poker room manager as soon as possible. If it is something that they’ll need to witness to prove, mention it to them in private so they can begin keeping an eye out for it. If it is something you can immediately prove, you can mention it out loud to the dealer and the table so they can catch the perpetrator immediately (proof).

Claim 2: Averring Absence of Intent (The Shuffle, Cut and Player Etiquette) – Lack of intent, however, cannot be casually assumed when doubt exists as to the presence of an unseen hand. Inside a sufficiently complex or unknown system, absence of intent must also be proved to a reasonable certainty. In cards, this absence of intent is fairly easy to establish via the quod erat demonstrandum experiments of the shuffle, cut and poker-player etiquette. In a large domain of unknown however, such a logical proof (one cannot inductively prove a modus absens) is very difficult to attain, and if concluded at all, such a conclusion resides at the end of the deliberative process and not at its beginning. Such lack of intent cannot be casually assumed from a small sample of the domain; to wit:4

In a player-dealt game, the pack must be shuffled and cut before the cards are dealt. The recommended method to protect the integrity of the game is to have three people involved instead of only two. The dealer on the previous hand takes in the discards and squares up the deck prior to the shuffle. The player on the new dealer’s left shuffles the cards and then slides the pack to the new dealer, who gets them cut by the player on his right. The deck must be riffled a minimum of four times. The cut must leave a minimum of four cards in each portion. The bottom of the deck should be protected so nobody can see the bottom card. This is done by using a cut-card. A joker can be used as a cut-card.
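As an aside, the auditable constraints in this house procedure are simple enough to sketch in code. Below is a toy simulation of mine (the riffle model and split parameters are illustrative assumptions, not any casino standard) which enforces the two checkable rules quoted above – a minimum of four riffles, and a cut leaving at least four cards in each portion:

```python
import random

def riffle(deck, rng):
    """One simplified riffle: split near the middle, then interleave by
    dropping from each half with probability proportional to its size."""
    split = len(deck) // 2 + rng.randint(-5, 5)
    left, right = deck[:split], deck[split:]
    out = []
    while left or right:
        if right and (not left or rng.random() < len(right) / (len(left) + len(right))):
            out.append(right.pop(0))
        else:
            out.append(left.pop(0))
    return out

def shuffle_and_cut(deck, rng, riffles=4, min_portion=4):
    """House procedure: riffle a minimum of four times, then one cut
    which must leave at least min_portion cards in each portion."""
    if riffles < 4:
        raise ValueError("the deck must be riffled a minimum of four times")
    for _ in range(riffles):
        deck = riffle(deck, rng)
    cut = rng.randint(min_portion, len(deck) - min_portion)
    return deck[cut:] + deck[:cut]

rng = random.Random(7)
dealt = shuffle_and_cut(list(range(52)), rng)
assert sorted(dealt) == list(range(52))  # no cards introduced or removed
```

The final assertion is the point of the exercise: the procedure demonstrates, hand after hand, that no constraint has been inserted into or removed from the deck – a repeated, visible proof of absence of intent.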

As a note, please resist the temptation to conflate absence of intent (agency) in the methods of science with an absence of intent in the objective system being studied (the shuffling analogy above). In all cases, an absence of agency inside the methods of science must be presumed. In the case of the card games of chance cited above, as regards the intent of an unseen hand inside a study domain which is chaotic or of large uncertainty, neither intent nor its lack thereof may be assumed. In this analogy, the game of chance is the object being studied, and the House (Casino) Surveillance is the entity employing the scientific method (ensuring the veracity of the ‘studied’ game).

Let’s examine now an example of just such a domain of chaotic and large unknown – inside which we cannot yet aver an absence of intent, nor currently claim any manifestation of a ‘tampering hand’.

Example Inside Evolutionary Genetics

We are all very familiar with the contentious arguments surrounding whether or not a ‘hand of God’ has initiated and/or guided evolution. A series of Pew Research polls showed that most Americans both believe in God and accept evolutionary theory at the same time.5 Does this serve to imply that all these people are irrational? No, of course not. Sadly, arguments as to the veracity of creation or intelligent design are red herring arguments, simply posed by religious agency. While I suspect that both sides in the extremist Nihilist/Fundamentalist debate have played a role in the inappropriate escalation of these constructs, who actually originated this agency is not my concern; rather simply that these straw-man concepts exist to mislead scientist and lay person alike. I do not have to show who crafted a fallacious argument in order to shoot it down as invalid. The actual deliberation which exists inside of evolutionary genetics is the issue of whether or not intent is a contributing constraint to any one of five observed ergodicity sets (below). Creation, Nihilism, Materialism and Intelligent Design are irrelevant both as arguments and as contexts of research inside science. Intent, however, is not.

Before we jump into this issue however, adjudicated in light of our understanding of the dual-burden ethical model above, let me comment that scientifically, I do not care what its outcome is determined to be. I mean, I do care; but I divorce that care from my discipline of skepticism and epistemology. When I examine the five issues which are casually and incorrectly called ‘evolution’, I find that I cannot discern sufficient rationale to dismiss intent as a construct, a priori. In this article however, we shall focus upon this issue of intent solely with regard to Human Accelerated Regions (HAR Acceleration in red bold below), as depicted in the graphic to the right, sourced from the Doan-Bae study quoted below.6

[Graphic: the sub-disciplines casually called ‘evolution’, among them Speciation (Darwinism) and Human Acceleration]

These are all separate sub-disciplines, often referred to incorrectly as ‘evolution’. Evolution is a fact and an observed ergodicity (outcome) – it is not however a religion, and should not be defended by hyperbole and apologetics. Evolution does not disprove God, does not serve to even suggest Nihilism, does not prove materialism, does not make a case for atheism, does not disprove aliens nor angels, and does not serve in any way, shape or form to comment upon abiogenesis. Be wary of people who seek to conflate one or more of these in terms of inferential outcome.

Most importantly, evolution does not prove, nor need assume, absence of intent.
‘Creation’ and ‘Intelligent Design’ are irrelevant red herring arguments – borne of agency.

Do not engage with people on either side of the argument who inexpertly wield such terminology.

Intent is the salient and sequitur critical path principle. Otherwise we might as well don a costume and start performing magician’s tricks with intimidating terminology. Watch for people who equivocally imply such derivative conclusions, employing evolution as a weapon word. They are not to be trusted – and you should certainly never get your science from them. I am not a bio-genetics expert; however, I do possess a sufficient organic chemistry background and, more importantly, decades of professional neural feedback systems modeling and simulation experience – experience directly critical path to genetics. In my layman studies on evolution, and in the few genetic projects I have commissioned or funded (you can find similar on the web, but I am drawing this from my Genes IX graduate course text by Benjamin Lewin), categorical DNA changes (mutations) comprise the following types:7

Base Substitutions:

Silent – single nucleotide (letter) change, does not materially alter the amino acid expressed
Missense – single nucleotide (letter) change, alters the amino acid expressed
Nonsense – single nucleotide (letter) change, results in insertion of a codon stop or methionine start
Gibberish – single nucleotide (letter) change, results in a chemical coupling which is not A, C, T nor G
Base Mispairing – any form of anti-parallel base coupling which does not conform to the Watson-Crick rule of A-T, G-C pairing (e.g. an A-C or T-G coupling)

Structure or Block Changes:

Insertion – increases a contiguous number of codon bases inside a gene, at a specific edit location
Deletion – removes a contiguous number of codon bases inside a gene, resplicing the new regions on either side
Duplication – an insertion which is an exact copy of another codon segment of DNA
Frameshift – an insertion or deletion which does not adhere to a triplet (3 letter) codon basis, thereby changing the frame of codon reference
Repeat Expansion – an insertion which replicates a codon adjacent to the insertion point, a number of times
Direct Repeat – replication of an identical codon sequence in the same orientation (5′ to 3′), inside the same gene
Codon Substitution – a non-frameshift segment of DNA is deleted and an insertion is placed into the splice where it resided
Inversion – a segment of DNA is rotated from its 5′ to 3′ orientation, by 180 degrees
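The base-substitution categories above lend themselves to a simple classifier. The sketch below uses a deliberately partial standard codon table (only the entries needed for the demonstration; a full implementation would carry all 64 codons) and the conventional definitions – silent, missense, nonsense (premature stop) – leaving aside the methionine-start and base-mispairing cases:

```python
# Partial standard DNA codon table -- only the entries needed for this demo.
CODON_TABLE = {
    "GGA": "Gly", "GGG": "Gly",
    "GAT": "Asp", "GCT": "Ala",
    "TAT": "Tyr",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def classify_substitution(codon, position, new_base):
    """Classify a single-nucleotide (letter) change at a 0-indexed
    position within one codon."""
    if new_base not in "ACGT":
        return "Gibberish"   # a coupling outside the A, C, T, G alphabet
    mutated = codon[:position] + new_base + codon[position + 1:]
    before, after = CODON_TABLE[codon], CODON_TABLE[mutated]
    if after == before:
        return "Silent"      # amino acid expressed is unchanged
    if after == "Stop":
        return "Nonsense"    # a premature stop codon is inserted
    return "Missense"        # the amino acid expressed is altered

print(classify_substitution("GGA", 2, "G"))  # Silent   (Gly -> Gly)
print(classify_substitution("GAT", 1, "C"))  # Missense (Asp -> Ala)
print(classify_substitution("TAT", 2, "A"))  # Nonsense (Tyr -> Stop)
```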

Now, stepping into the functional-value (use) judgement of any of the above changes – speaking not of the mechanics of the mutation, but of its effect – one could suppose the following value assessment for any given allele, base pair or gene mutation:


Silent – expressive DNA is impacted by mutation, but its function is not altered
Benign – mutation occurs, but no expressive DNA is impacted

Repression – function altered by missense (substitution protein) mutation
Blocked – all other forms of mutation besides missense and silent which result in loss of a function

Fortuitous Degeneration – a Repression, reactivated Benign or Silent, or Blocked which is coincidentally an advantageous adaptive
Novel (Constructive) – any Base or Structural mutation which results in a new expression which is coincidentally an advantageous adaptive

A question therefore arises in the genomic modeling of evolutionary processes (theory-of-constraints models sufficient to comprehensively and completely describe genetic ergodicity – not merely the tossing out of intimidating-sounding terms and guesswork):

To what proportion does each type of mutation (red in group A above) inside evolution involve Novel Constructive (red in group B above), Fortuitously Degenerative, Neutral and Disadvantageous allele changes? The answer to this would be rather cool to observe and attempt to model. Because if we end up with an extreme representation of Advantageous Novel and Fortuitously Degenerative mutations (say in the 43+ Human Accelerated Regions of our genome, for example) – then a priori non-intent evolution has a problem. Which it indeed does…

Human Accelerated Regions (HAR) of the human genome.

HARs are short, [approximately 270 base pair] on an average, stretches of DNA, [which are] 97% non[protein]coding. They are conserved in vertebrates, including Pan troglodytes, but not in Homo sapiens, in whom the conserved sequences were subjected to significantly, in many cases dramatically, higher rates of single nucleotide substitutions.8 A number of genes, associated with these human-specific alleles, often through novel enhancer activity, were in fact shown to be implicated in human-specific development of certain brain areas, including the prefrontal cortex.9 10

A number of contiguous and single-point intron regulatory sequence [2.5% protein-coding exon] codon substitution and insertion allele differences, of 270 base pairs in average length, between humans and their last universal common ancestor (LUCA) with hominidae (apes, australopithecines and archaic Homo). Non-precedented/de novo/non-GenBank, non-feedback-derived, non-stochastic, fatally improbable happenstance of novel first-time ergodicity inside an absence of genetic pressure – occurring simultaneously and all advantageously, 43+ times, all between 60k and 350k years ago (Neanderthal and Denisovan extant pre-archaic only).11 12 13

“Human accelerated regions exhibit regulatory activity during neural development.” (Doan-Bae, et al.)14 Forty-three percent of HARs function as neuronal enhancers. HARs are also enriched for de novo copy number variants and biallelic mutations in individuals with Autism Spectrum Disorders.15

This is called Ordination (and of course Acceleration). Darwin did not address either of these facets of evolution. Our domain knowledge of this sub-discipline inside evolution is very scant. One can make no claim herein to an a priori exclusion of intent. Given the fortuitous emergence of the 43 Human Accelerated Regions – their regulation of and association with human cerebral, neural and limb-articulation expression – Ockham’s Razor plurality has been surpassed. The argument is manifest, and the dual-burden proof ethic broaches.

Three rather stark implications develop from this understanding (much of which has arisen since 2018):

1. “Non-coding region” is a misnomer, because these HAR non-coding regions are coding for morphological changes to the brain, neural development and limb articulation. This is deductive in its implication as to intent.

2. The pace of these mutations far exceeds the Roach-Glusman human mutation rate of 1 per 100,000,000 base pairs every 20 years.16 100 to 300 base pairs should have mutated on average in these regions – and maybe, just maybe, have served to produce one trivial novel trait of Pan troglodytes speciation (a chimp with lighter skin tones, at the extreme). Instead, 12,000 base pairs mutated, and every single one of them produced novel, first-time, and highly advantageous traits with regard to neural and cerebral development – in other words, Ordination.

And one is being gracious here by affording these changes 290,000 years inside of which to occur. The vast likelihood is that they all occurred in a shorter time span than even this.

Therefore, materialists are incorrect.

3. One must prove that intent is absent here. Such an input to evolutionary constructs and theory cannot be assumed a priori, nor as the null hypothesis (einfach mechanism).

Science has produced no evidence which rules out intent in the origin nor ascendancy of life on this planet.
However, because of the dual-burden regarding the role of intent in chaotic or large unknown domains –
it does bear the burden of proving that intent is absent (modus absens).

Previously within this blog (see Embargo of The Necessary Alternative is Not Science), we have even considered a deliberate codex which related the second digit of the DNA codon to the complexity of its linearly-assigned protein molecule. A codex which could not have evolved, since the codex was required in order for evolution to happen in the first place. Unprecedentable organization, which is arguably deduced to intent. Yet intent is embargoed from science by material nihilists who apply their religious beliefs therein. And as we have observed with regard to other embargoed subjects before:

Intent as a construct, is the necessary alternative.

This is not a case of ‘being smart enough to justify irrational things’, as fake skeptics have begun to issue as a memorized tag-line. If one is unable to discern these things – inferences which are both sound and critical path to the argument – then one has no business telling everyone what science thinks, nor what evolution is or is not.

Genomic Intent

Given all this then, dismissing intent a priori, as a part or small contributor inside the ascendancy of life on this planet, is tantamount to a personal religious choice. As an atheist, I respect and understand that personal choice of faith – but I bristle when it is advertised as a conclusion of science. Such is not the case. Science makes no comment upon intent, to the positive or the negative. In contrast however, sponsoring intent for Ockham’s Razor consideration is not a religious choice, but rather part of the scientific method. Modus praesens and modus absens are two completely different ethical standards of scientific inference.17 Those who insist that modus absens (intent is comprehensively absent) has been proved, are simply wrong. The standard of proof for modus absens is very high – and most science communicators and enthusiasts do not understand this. So employing one’s personal religious choice that intent cannot exist, in order to squelch the scientific method, is disingenuous. A scientist ethically should say ‘Not so fast’.

Neither then is intent pareidolia nor apophenia. Intent can be established by both science and a court of law without knowing who bore the intention – by means of examining only the patterns of inferential suggestion therein. Presence of intent can be inferred inductively; absence of intent cannot. Such deliberation is a must in information technology, hacking and murder prosecutions. I do not have to say where the intent came from, and indeed should not conjecture such – until I have a scientific mechanism and hypothesis which is mature and can be pursued by research. I do not have to prove intent from the beginning of space or time, nor where it originated. I only have to spot it once. In order to prove that an encrypted signal of noise bears intelligence (as an intelligence officer), I only need demonstrate one translated segment. I do not need to prove who sent it, nor that the rest of the transmission was or was not intelligence. I only have to provide veracity for that one segment.

Intent is a white crow standard of inference.

Intent is also not a means to fill a gap in scientific understanding with a ‘God of the Gaps’ argument. Such contentions are dilettante and shallow; often constituting propaganda speak on the part of amateur science enthusiasts. The 43 human accelerated regions (HAR) for instance are critical path to this argument regarding intent. The ‘gap’ in the case of HARs is 95% of the knowledge domain; so this in no way constitutes a small shortfall in understanding. No one is pretending to fill that gaping absence of domain knowledge with an intelligent designer; as that is the habit of two opposing agencies who control argument around this issue. They are both wrong in such religious pandering. In science we are trying to extricate ourselves from religion, not jump from one religion into another.

Yes, eventually we would prefer to identify the intender – maybe even one which is dead and gone now, or perhaps left us all alone. However we have to accept the reality that we may never actually resolve such understanding. We may be stuck inside ‘intent without identified intender’ for centuries. Nonetheless, science does not answer every question all at once. Such amateur insistences constitute a non rectum agitur fallacy – forcing every question to be answered before any question can be answered. Science does not work in this manner. Questions are answered incrementally – along a critical path of inference. Understanding this is critical to any claim or implication to be scientifically literate. I am an atheist; however, I cannot ethically throw out the construct of intent, just because my socially-primed buddies and I are emotionally upset about the idea of an ‘Intender’ – that is not fair to science, not fair to humanity – to force one church’s doctrinal anger upon everyone around us. Just because a few peoples’ terror-filled urges scream “There is no Intender!”, does not mean that science and humanity must thereafter cower in the shadow of that imperious religious insistence. We learned this lesson when Christianity controlled science. I do not want another religion sneaking in and doing this to us again.

If intent is here, even if tucked away and hard to find, I want it found. As an ethical skeptic, I will stand up for that human right: The Right to Know.

     How to MLA cite this article:

The Ethical Skeptic, “The Dual-Burden Model of Inferential Ethics”; The Ethical Skeptic, WordPress, 30 June 2019; Web,


The Demarcation of Skepticism

A competent understanding of the demarcation of what constitutes skepticism is absolutely essential to the ethical skeptic’s ability to spot agency and agency’s poseur. I don’t know exactly how to define skepticism, but I know it when I am in it. This is the purpose of the four demarcation boundaries of skepticism.

While significant debate exists as to what indeed constitutes a suitable definition of skepticism, this meanwhile does not prevent flurries of false definitions from being bandied about by social skeptics, featuring intimidating equivocal concepts such as ‘doubt’, ‘scrutinizing claims’ or ‘applying the methods of science’. Such constructs are failed attempts to define skepticism on the part of agenda-laden amateurs; they are not concepts which actually pertain to, nor more importantly serve to demark, what is indeed skepticism.

Skepticism is not ‘doubt’, as doubt is a destination and an attitude toward a construct, observation or idea; skepticism is not a destination nor an attitude toward anything (except agency) – rather, it is the journey.

Skepticism is not ‘scrutiny’, as scrutiny is applying one’s existing knowledge in an attempt to force inference. Such is foolishness. Skepticism is going and looking; an innate dissatisfaction with existing knowledge and breached critical inference.

Nor is skepticism ‘applying the methods of science’. Skepticism is philosophy, whereas it is science which applies the methods of science, and not skepticism. Philosophy cannot step in and pretend to act in lieu of science.

Thus the ethical skeptic faces the realization that most of the skeptic world is composed of fakers, amateurs, agenda-pushers and poseurs. A sad state of affairs. However, perhaps we can outline certain precepts which serve to demark true skepticism, rather than define it per se – this according to the Popper principle that a demarcation can be identified even in the absence of a suitable definition, agreed by all, of that which resides on either extreme of the demarcation itself.1

I may not be able to define love, but I know when I am in it.

This familiar quip concerning love is a great example of the principle of a demarcation (albeit a personal, observer-effected version of one). Of such fabric of quandary is the nature of skepticism as well. Moreover, this visceral boundary is laid to threshold by means of four key indicators (below) – flags which serve to elicit whether the arguer is operating inside of, or outside of, skepticism. Although I am of the opinion that skepticism can be objectively defined, sometimes a demarcation guideline is much more effective in highlighting the chicanery of those who have mis-defined it for sordid purposes – warning flags for the ethical skeptic.

The Demarcation of Skepticism

I.  Once plurality is necessary under Ockham’s Razor, it cannot be dismissed by means of skepticism alone.

II.  Casual Inference and Risk – once plurality of risk is necessary under Ockham’s Razor, he who aspires to dismiss it must be 100% rigorous as to the strength of the critical logic, the supporting study, and the probative strength of the mode and form of inference drawn.

a.  There is no such thing as casual or ad hoc plausible denial under Ockham’s Razor plurality.

b.  There is no such thing as casual, ad hoc nor virtuous dismissal of precaution under Ockham’s Razor plurality. When an innocent stakeholder is placed at risk, this must be done under a condition of 100% knowledge of such risks, combined with the vigilance to recognize and measure the impact of hazard outcomes.

c.  Virtue and the presence of other theoretical counter-risks, are not sufficient rationale to abandon a. and b. under a condition of Ockham’s Razor plurality.

III.  Corber’s Burden – the mantle of ethics undertaken when one claims the role of representing conclusive scientific truth, ascertained by means other than science, such as ‘rational thinking,’ ‘critical thinking,’ ‘common sense,’ or skeptical doubt. An authoritative claim or implication as to possessing knowledge of a complete set of that which is incorrect. The nature of such a claim to authority on one’s part demands that the skeptic who assumes such a role be 100% correct. If however, one cannot be assured of being 100% correct, then the poseur must tender a similitude of such.

a.  When one tenders an authoritative claim as to what is incorrect – one must be perfectly correct.

b.  When a person or organization claims to be an authority on all that is bunk, their credibility decays in inverse exponential proportion to the number of subjects in which authority is claimed.

c.  A sufficiently large or comprehensive set of claims to conclusive evidence in denial, is indistinguishable from an appeal to authority.

IV.  If one exclusively fails to police one’s own groupthink, one is not a skeptic.

For the ethical skeptic, these are the indicators of true skepticism. Clumsy doubters who regularly stumble across these demarcation lines are not skeptics – rather, most often, clowns and celebrities pushing agency and targeting those they hate.

     How to MLA cite this article:

The Ethical Skeptic, “The Demarcation of Skepticism”; The Ethical Skeptic, WordPress, 22 June 2019; Web,


Epistemological Domain and Objective Risk Strategy

If the relevant domain of a subject is largely unknown, or insufficient study along any form of critical path of inference has been developed, then it is invalid to claim, or imply through a claim, that ignorance has been sufficiently dispelled in order to offset risk – especially that ignorance which is prolific and may serve to result in harm imparted to at-risk stakeholders, not simply our cronies. After dealing with the malice of those who shortcut science in order to turn a quick profit, one is often left feeling the need for a clean shower.

C’mon Chief, You’re Overthinking This Thing

As a younger man, I ventured out one afternoon with the thought in mind of buying a new car. My old Toyota had 185,000 miles on it, and despite all the love and care I had placed into that reliable vehicle, it was time to upgrade. ‘Timothy’, as I called my car, had served me well through years of Washington D.C.’s dreadfully monotonous 6:00 am Woodrow Wilson Bridge traffic, getting to Arlington, the Pentagon and the Capitol District, through to graduate school classes, and finally getting home nightly at 11:00 pm. My beloved car was just about worn out. So I selected the new model that I wanted and proceeded one Saturday to a local dealer. The salesperson and I struck a deal on my preferred model and color, with approval from the sales manager skulking remotely as the negotiator within some back office. Always take this as a warning sign: any time a person who is imbued with the power to strike a deal will not sit with you face to face during the execution of that deal, it is a form of the good-cop/bad-cop routine. However, this being only my second car purchase, I accepted it as normal and shook hands with the salesperson upon what was, in reality, a very nice final price on my prospective new car.

The polite and professional salesperson led me down a hallway and handed me off into the office of the closing manager. The closing manager was a fast-talking administrative professional whose job it was to register the sale inside the corporate system, arrange all the payment terms, declarations, insurance and contracts, remove the car from inventory, register the sale with the State and affix all the appropriate closing signatures. A curiously high-paying position assigned to execute such a perfunctory set of tasks. The closing manager sat down and remarked what an excellent Saturday it had been, and then added that he was glad that I was his “last sale of the evening.” He had a bottle of cognac staged on his desk, ready to share a shot with the sales guys who had delivered an excellent performance week. The closing manager pulled up the inventory record and then printed out the sales contract in order to review it with me. In reviewing the document, I noted that the final closing figure listed at the bottom of the terms structure was $500 higher than the agreed price I had just struck with the sales manager. The closing manager pointed out that the figure we had negotiated did not reflect the ‘mandatory’ addition of the VIN number being laser-engraved into the bottom of each of the windows. The fee for the laser engraving, and believe him (*chuckle) it was well worth it, was $500. If the vehicle was ever stolen, the police would be asking me for this to help them recover the vehicle. Not to worry however, the laser engraving had already been applied at the factory. This was an administrative thing, really.

Raising objection to this sleight-of-hand tactic, I resolved to remain firm in that objection and expressed my intent to walk out the door if the $500 adder was not removed from the contract.  The closing manager then retorted that he did not have time to correct the contract as “the agreement had already been registered in the corporate system” and he would “have to back that out of the system and begin all over again.” To which I responded, “Then let’s begin all over again.” Thereupon, the closing manager said that he had to make a quick call home. He called his spouse and in very dramatic fashion exclaimed “Honey, tell our son that we will be late to his graduation because I have to re-enter a new contract here at the last hour. What? He can’t wait on us?” The clerk held the phone to his chest and said, “I am going to have to miss my son’s graduation.” (This reminded me of being told that, since I question Herodotus’ dating of the Khufu Pyramid, along with his claim that he even physically traveled to Egypt in the first place – that therefore I ‘believe that aliens built the pyramids and am racist towards Egyptians’). Having grown absolutely disillusioned as to the integrity of this whole farce, I responded “OK, attend your son’s graduation and I will come back some other time.” “Surely they do not think I am this dumb. Do I look stupid or something?” I mulled while getting up from my chair and proceeding out the door in disgust.

I was met in the exit hallway by the previously hidden bad-cop, the sales manager. “Wait, wait, Chief, you’re overthinking this thing. You don’t understand – we have given you a great price on this vehicle. I have a guy who wants to take this particular inventory first thing in the morning.” To which I responded, “Well, make sure you tell him about the mandatory laser engraving fee,” fluttering my hands upward in mock excitement. My valuable weekend car shopping time had been wasted by manipulative and dishonest fools. It was not simply that I did not know about the engraving fee; rather, I did not even know that I did not know about the potential of any such fake fee. The epistemic domain had been gamed for deception. They had allowed me to conduct my science, if you will, inside a purposeful and crafted charade in ignorance – a Descriptive Wittgenstein Error. They had hoped that the complexity of the sales agreement would provide disincentive for me to ‘overthink’ and spot the deal shenanigans. I walked out of the showroom feeling like I needed to immediately go home and take a shower.

Whenever someone pedantically instructs you that you are overthinking something,
under a condition
of critical path or high domain unknown, be very wary. You are being pitched a con job.

If you have not departed from the critical path of necessary inference,
or if the domain is large and clouded with smoke and mirrors, never accept an accusation of ‘overthinking’.
Such cavil constitutes merely a simpleton’s or a manipulator’s appeal to ignorance.

Domain Ignorance and Epistemological Risk

What this car sales comedy serves to elicit is a principle in philosophy called an ‘ignorance of the necessary epistemological domain’ – the domain of the known and unknown regarding one cohesive scientific topic or question. Understanding both the size of such a domain, as well as the portion of it within science’s competent grasp, is critical in assessing scientific risk – to wit: the chance that one might be bamboozled on a car contract because of a lack of full disclosure, or the chance that millions of people will be harmed through the premature rollout of a risky corporate technology which has ‘over-driven its headlights’ of domain competency and is now defended by an illegitimate and corrupt form of ‘risk strategy’ as a result.

There are two distinct species of scientific risk: epistemological risk and risk involving an objective outcome. In more straightforward terminology, the risk that we don’t know something, and the risk that such not-knowing could serve to impart harm.

Before we introduce those two types of risk however, we must define how they relate to and leverage from a particular willful action, a verb which goes by the moniker, ignorance. Ignorance is best defined in its relationship to the three forms of Wittgenstein error.1 2 3

Ignorance – a willful set of assumptions, or lack thereof, outside the context of scientific method and inference, which results in the personal or widespread presence of three Wittgenstein states of error (for a comprehensive description of these error states, see Wittgenstein Error and Its Faithful Participants):

Wittgenstein Error (Contextual)
    Situational:  I can shift the meaning of words to my favor or disfavor by the context in which they are employed
Wittgenstein Error (Descriptive)
    Describable:  I cannot observe it because I refuse to describe it
    Corruptible:  Science cannot observe it because I have crafted language and definition so as to preclude its description
    Existential Embargo:  By embargoing a topical context (language) I favor my preferred ones through means of inverse negation
Wittgenstein Error (Epistemological)
    Tolerable: My science is an ontology dressed up as empiricism
        bedeutungslos – meaningless or incoherent
        unsinnig – nonsense or non-science
        sinnlos – mis-sense, logical untruth or lying.

Now that we have a frame of reference as to what indeed ignorance (the verb) is, we can cogently and straightforwardly define epistemological domain, along with the two forms of scientific risk: epistemological risk and objective risk. This is how a risk strategy is initiated.

Epistemological Domain (What We Should Know)

/philosophy : skepticism/ : what we should know. That full set of critical path sequences of study, along with the salient influencing factors and their imparted sensitivity, which serve to describe an entire arena of scientific endeavor, study or question, to critical sufficiency and plenary comprehensiveness.

Epistemological Risk (What We Don’t Know and Don’t Know That We Don’t Know)

/philosophy : skepticism : science : risk/ : what we don’t know and don’t know that we don’t know. That risk in ignorance of the necessary epistemological domain, which is influenced by the completeness of science inside that domain; as evidenced by any form of shortfall in

•  quality of observational research,
•  nature and reach of hypothesis structure,
•  appropriateness of study type and design,
•  bootstrap strength of the type and mode of inference drawn,
•  rigor of how and why we know what we know,
•  absence or presence of operating agency, and finally
•  predominance or subordinance of the subject domain’s established domain risk (subject of this blog)

The next step after defining these elements of risk is to undertake a Risk Strategy. The purpose of a risk strategy is to translate epistemological risk into objective risk and then set out an ethical plan which serves to shield at-risk stakeholders from its impact. As a professional who develops value chain and risk strategies, I remain shocked at the number of risky technological roll-outs, enacted by large and supposedly competent field-subject corporations, which are executed inside a complete vacuum of any form of risk strategy at all. When the lay public or their representatives challenge your technology’s safety, your ethical burden is not to craft propaganda and social advocacy, but rather to issue the Risk Strategy which was prosecuted, in advance of the technology rollout, to address their concerns. Two current examples of such unacceptable circumstance, framed inside the analogy of ‘car headlights’, are highlighted later in this article.

What is a Risk Strategy?

One way in which such matters are addressed in industry (when they are addressed – which is rarely) is to conduct a form of value chain strategy called a risk chain evaluation, or ‘risk strategy’. Risk flows in similar fashion to a value or margin chain: it concatenates, snowballs and increases non-linearly. It is not a stand-alone element unto itself, but rather part of the fabric of the mission, product or channel of service being undertaken. A risk strategy is usually done as part of a value chain strategy.4 Both forms of analysis involve the flow of value, matched against the counter-flow of resources. Risk is simply an objectified species of value – so the competent technology company, when choosing to conduct a risk strategy, will often seek the counsel of a value chain strategy firm to come alongside and assist its project executives and managers through a Risk Strategy workplan. Despite the complex-sounding framework presented here, the subject is only complex in its generic description. Once applied to a specific technology, market or treatment, the actual execution of a risk strategy as part of a value chain or branding strategy becomes very straightforward.
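The non-linear concatenation of risk along a chain can be sketched in a few lines. This is an illustrative model only – the function name and the assumption that each node amplifies the exposure carried into it are mine, not a formula the article prescribes.

```python
def chain_exposure(node_risks):
    """Concatenated exposure along a risk chain, assuming (for
    illustration only) that each node amplifies the exposure
    carried forward from the nodes before it."""
    exposure = 1.0
    for r in node_risks:
        exposure *= (1.0 + r)  # each node compounds, rather than adds
    return exposure - 1.0

# Five chained nodes at 5% each: naive addition predicts 25%,
# but the snowballing chain yields more.
print(round(chain_exposure([0.05] * 5), 4))  # → 0.2763
```

Note that the compounded figure exceeds the simple sum of the per-node risks – which is the sense in which chained risk ‘snowballs’ rather than merely accumulates.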

A risk strategy is not congruent with a hazard assessment. In assessing hazards, one already knows what the dangers are, and is measuring potential harm (exposure) to earnings/insurers/stockholders.

In a risk strategy, an operating group is identifying what they do not know (in advance of identifying hazards), and how that lack of knowing can serve to harm brand/mission/stakeholders/environment/clients.

A risk strategy is developed in industry by first conducting a piloting session, which kicks off two steps. The first step tasks a team with developing the value chain description (Question 1 below) of the entailed domain (the critical path of a product development horizon, a brand strategy, a legal argument, or an imports channel, for example). The second step then involves the development of epistemological risk and slack factors, measures and sensitivities which can be assigned to each node (action/decision) in the risk chain series mapped during the first step (Questions 2 – 7 below). These shortfalls in diligence are derived from the general categorizations defined (with links) under ‘Domain Epistemological Risk’ above. This does not actually take that long if the group is guided by an experienced professional. The groups who conducted the two steps above then reconvene and develop the answer to Question 8 as the final step.

A Risk Strategy seeks to prosecute the following set of questions, in order:

1.  What is the state of current industry of observational research, and how much of the subject domain has been touched? Map the subject domain and its core critical path arguments/issues (elements)/sensitivities (the ‘footprint’).

2.  How many novel and tested hypotheses have addressed this domain footprint (articles, systematic reviews, editorials do not count)? How many are actually needed in order to fairly address the footprint domain risk?

3.  What types and designs of study have been completed regarding each hypothesis, and were they sufficient to the problem? Has there been torfuscation?

4.  What was the bootstrap strength of the type and mode of inference drawn from these studies? Was it merely inductive? Can deductive work be done? Does methodical deescalation exist inside the industry?

5.  Prosecute the state of the industry under the standard of ‘How we know, what we know’. Is it sound? What ‘agencies’ exist and do they constitute a domain problem?

6.  Establish the risk horizon of ‘unknown knowns’ and ‘unknown unknowns’. How predominant or subordinate is this set, as compared to the overall domain of knowledge?

7.  Finalize Risk Chain mapping and develop a Risk Horizon by Type (see below) for each critical path issue identified in step 1.

8.  How do we take action to mitigate the Risk Horizon, and how do we craft organization mission and brand around these now-ethical principles?
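As a rough illustration of Questions 1 – 7 in data-structure form, one might track each critical-path node's outstanding diligence as a checklist. The field names below merely paraphrase the questions above; they are hypothetical, not a schema the article prescribes.

```python
from dataclasses import dataclass

@dataclass
class RiskChainNode:
    """One critical-path node, with a diligence flag per workplan question."""
    name: str
    observational_research: bool = False   # Q1: domain footprint mapped and touched
    tested_hypotheses: bool = False        # Q2: enough tested hypotheses vs. the footprint
    sufficient_study_design: bool = False  # Q3: study types and designs adequate
    deductive_inference: bool = False      # Q4: inference beyond mere induction
    agency_reviewed: bool = False          # Q5: 'how we know' and agency audited
    unknowns_bounded: bool = False         # Q6: unknown-unknown horizon estimated

    def open_risk_items(self):
        """Q7: diligence items still contributing to this node's Risk Horizon."""
        return [k for k, v in vars(self).items()
                if isinstance(v, bool) and not v]

node = RiskChainNode("pesticide dermal-exposure pathway",
                     observational_research=True)
print(len(node.open_risk_items()))  # → 5 diligence items still open
```

A node with every flag set to True contributes nothing to the Risk Horizon; the mitigation work of Question 8 concentrates on the nodes whose lists remain longest.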

Once done competently, the company which conducts a risk strategy will shine like a beacon against its short-cut-minded competitors. The two colors, orange and red, on the right in the following chart depict our ‘risk horizon’ – that which we as a deploying corporate entity do not know that science already knows, and that which we do not know that we do not know. These are the domains of ignorance which serve to endanger an at-risk technology stakeholder through objective risk.

The Horizon of Epistemological Risk

High Epistemological Domain Risk: there exist a high number of critical paths of consideration, along with a high degree of sensitive and influencing factors – very few of which we have examined or understood sufficiently.

Lower Epistemological Domain Risk: there exist a low or moderate number of critical paths of consideration, along with a reasonable degree of sensitive and influencing factors – many or most of which we have examined and begun to understand sufficiently.

Once epistemological risk is mapped (1 – 7 above, or ‘what we don’t know’), a mitigation approach is developed which can serve to rate, triage and then minimize each risk element, or reduce the effect of risk elements combining into unintended consequences (how what we don’t know can serve to harm someone or something). Stand-alone risks are treated differently than concatenated or cumulatively escalating (snowballing) risks. However, all risks are measured in terms of virtual (non-realized) consequences. These consequences are what risk theory deems ‘objective risk’.
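The rate-and-triage step might look something like the following sketch, in which a concatenated risk is weighted by its chain's compounding while a stand-alone risk is rated on its own virtual consequence. The dictionary keys and the weighting scheme are invented here for illustration.

```python
def triage(risks):
    """Order risks worst-first by virtual (non-realized) consequence.
    A chained risk carries its chain's compounding factor; a
    stand-alone risk defaults to a factor of 1.0."""
    def weight(risk):
        return risk["consequence"] * risk.get("chain_factor", 1.0)
    return sorted(risks, key=weight, reverse=True)

risks = [
    {"name": "stand-alone sensor fault", "consequence": 0.4},
    {"name": "chained supply defect", "consequence": 0.2, "chain_factor": 3.0},
]
# The chained risk outranks the nominally larger stand-alone one.
print(triage(risks)[0]["name"])  # → chained supply defect
```

This captures the point that a modest-looking risk embedded in a snowballing chain can demand mitigation priority over a larger but isolated one.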

Objective Risk (What Harm Might Result)

/philosophy : science : technology : risk/ : what harm might result from our not knowing. The risk entailed as a result of an outcome inside a particular state of being or action, stemming from a state of high epistemological risk, and which might result in an increase in the ignorance itself and/or in harm and suffering to any form of at-risk stakeholder. Hazards are identified along with estimates for exposure and robustness efforts inside a Mitigation Plan. Objective risk comes in two forms.

Risk Type I constitutes a condition of smaller Risk Horizon (lower epistemological risk) wherein our exposure resides in deploying a technology faster than our rate of competence development inside its use context.

Risk Type II is the condition wherein the Risk Horizon is extensive (our knowledge is low), yet we elect to deploy a technology or treatment despite these unknown levels of risk horizon exposure.
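The two objective-risk types above can be caricatured as a simple classifier. The threshold and parameter names are invented for illustration; the article itself defines the types qualitatively, not numerically.

```python
def risk_type(horizon, deployment_rate, competence_rate):
    """Classify objective risk per the Type I / Type II distinction.

    horizon          -- fraction of the epistemological domain still unknown
    deployment_rate  -- pace of technology rollout
    competence_rate  -- pace of use-context competence development
    """
    if horizon >= 0.5:                      # extensive Risk Horizon
        return "Type II: headlamps not bright enough"
    if deployment_rate > competence_rate:   # rollout outpacing competence
        return "Type I: over-driving our headlights"
    return "within competence"

print(risk_type(horizon=0.7, deployment_rate=1.0, competence_rate=1.0))
# → Type II: headlamps not bright enough
```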

The last step involves a plan to address how we head off the virtual hazards the team has identified. However, there are certain things which ‘how we head it off’ does not mean – namely, those dark and questionable practices of monist-, oligarch- and crony-driven corporations, to wit:

What a Risk Strategy Does NOT Do

Do the following set of activities look familiar? They should, as this is the ethic of today’s monist/oligarch/crony operated entity. A real risk strategy conducts real science (see the definition and links under ‘Domain Epistemological Risk’ above) and follows, generally, the above process. Risk resides in what one does not know, not in what one does know. Its client is the technology company’s at-risk stakeholder community – and NOT the corporation, its insurers, nor its stockholders. The following very common tactics, in contrast, are not elements of a real risk strategy; they constitute rather a half-assed strategy of Court-defined malice and oppression:

Fake Risk Strategy

•  Identify ‘hazards’ and assess their likelihood of causing harm, and call that ‘risk’
•  Identify only hazards which bear a ‘risk’ of harming the insurer or stockholder
•  Identify foes and research their backgrounds for embarrassing information and smear campaigns
•  Develop a ‘talking points’ sheet of propaganda to hand to the media in advance of potential bad news
•  Develop astroturf ‘skeptics’ who are given target groups and individuals to harass with ‘science’
•  Hire celebrity skeptics to accuse anyone who dissents, of being ‘anti-science’
•  Hire Facebook, Twitter or Forbes to manage which voices get heard or ‘liked’
•  Identify the money needed to pay off legislative representatives for protection
•  Threaten universities with funding cuts if their tenured staff speak up about your technology
•  Execute mergers and acquisitions before stockholders have a chance to tender input to the Board of Directors
•  Prop up fictitious one-and-done labs to develop some quick shallow inductive study showing your product was proved safe
•  Identify that level of intimidating-sounding ‘science’ which would impress layman science communicators and the media.
•  Seek to bundle one’s technology risk with other technologies so as to hide any potential risk flagging signal
•  Pay university professors under the table, in order to engender their support against targeted enemies
•  Develop accounting practices which allow risk based profits to be hidden inside other organizations or facets of the organization

In other words, a real risk strategy does real science – and a fake risk strategy pretends that it already knows everything it needs to know, does no more research, and just estimates the odds of something blowing up on them. A fake risk strategy then conducts social manipulation in place of managing exposure and robustness through a Mitigation Plan. Very much akin to what fake skepticism does. This is why you observe these types of companies conducting their robust science after they have already rolled out their dangerous product. They got caught, and now the public is forcing them to do a risk strategy a posteriori.

A Risk Strategy is not the process of ‘identifying hazards’, and then assessing the ‘likelihood that a specific hazard will cause harm’ (our exposure). Unless you identify the hazard as ‘We lack knowledge’, all this charade does is serve to confirm what we already knew a priori. This is not the definition of risk, nor is this how a risk strategy is conducted regarding complex horizons. A mitigation plan serves to identify hazards, along with our exposure or robustness therein (Taleb, The Black Swan), but this cannot be done in a vacuum, nor as the first step.

Before we move on, as you can observe inside the definition of epistemological risk above, we have addressed inside six recent blog articles (each one hyperlinked in blue), the principles of sound research effort, the elements of hypothesis, study design and type, agency risk, along with the types and modes of inference and how we know what we know. These first six links constitute ‘the science’ behind a risk strategy. Which leaves open of course the final and seventh defining element in that same links list, the topic of ‘subject epistemological domain’. Domain epistemological risk is a component of the definition which is critical before one can assess the subject of objective risk in sufficient ethical fashion. This of course is the purpose and focus of this blog article; thus we continue with domain epistemological risk as it is defined inside a concept called the Risk Horizon.

If your Big-Corp has conducted all the scientific diligence necessary in the rollout of a risk-bearing technology
or medical intervention, then show me the Risk Strategy it employed,
which should have been posted and made available for stakeholder review.

Third party systematic reviews conducted after the rollout of the technology or treatment, do not constitute sufficient ethics nor science.

Inference Inside the Context of a Risk Horizon

What we have introduced with the above outline of risk is the condition wherein we as a body of science, or the society which accepts that body of science, have deployed a technology at a rate which has outpaced our competence with that technology domain. In other words, we have over-driven our headlights. We are either driving too fast for our headlights to help keep us safe, or we are driving on a surface which we are not even sure is a road, because our headlamps are too dim to begin with. This latter condition, the circumstance where our headlamps are so dim that we cannot distinguish the road, involves the principle which is the subject of this blog article: domain epistemological risk, or more accurately, the size of the domain of established competence and the resulting Risk Horizon. Below, we have redeveloped The Map of Inference such that it contrasts standard-context inference with that special hierarchy of inference which is exercised in the presence of either epistemological or objective risk. The decision theory, as well as the types of inference and study designs, are starkly different under each scenario of confidence development, per the following chart.

The Map of Inference Versus Risk Horizon

The first thing one may observe inside the domain chart above, is that it is much easier to establish a case of risk (Objective Risk – modus praesens), than it is to conclusively dismiss one (Objective Risk – modus absens). That ethic may serve to piss off extraction-minded stockholders, but those are the breaks when one deploys a technology bearing public stakeholder risk. Rigor must be served. What one may also observe in the above chart are two stark contrasts between risk based inference and standard inference. These two contrasts in Risk Types I and II are outlined below via the analogies of over-driving headlights, or possessing too-dim a set of headlamps. Each bears implications with regard to waste, inefficiency and legal liability.

Risk Type I: Over-driving Our Headlights

Smaller Risk Horizon (Lower State of Domain Epistemological Risk)

First, when one moves from the context of the trivial ascertainment of knowledge into an arena wherein a population of stakeholders is placed at risk – say, for example, the broadscale deployment of a pesticide or an energy-emitting system – the level of rigor required in epistemology increases substantially. One can see this under the column ‘Objective Risk modus absens‘. Here the null hypothesis shifts to the assumed presence of risk, not its absence (the precautionary principle). In other words, in order to prove to the world that your product is safe, it is not sufficient to simply publish a couple of Hempel’s Paradox inductive studies. The risk involved in a miscall is too high. Through the rapid deployment of technology, society can outrun our ability to competently use or maintain that technology safely – as might be evidenced by nuclear weapons, or a large dam project in a third-world nation which does not have the educational or labor resources to support operation of the dam. When we as a corporate technology culture are moving so fast that our pace outdistances our headlights – risk concatenates or snowballs.

Example:  5G is a promising and powerful technology. I love the accessibility and speeds it offers. However there is legitimate concern that it may suffer being deployed well before we know enough about this type of pervasive radiation impact on human and animal physiology. A wave of the indignant corporate hand, and inchoate appointment of the same skeptics who defended Vioxx and Glyphosate, is not sufficient scientific diligence. If I see the same old tired skeptics being dragged out to defend 5G – that is my warning sign that the powers deploying it, have no idea what they are doing. I am all for 5G – but I want scientific deductive rigor (modus absens) in its offing.

Risk Type II: Headlamps Not Bright Enough

Extensive Risk Horizon (High State of Domain Epistemological Risk)

Second, and moreover, this problem is exacerbated when the topic suffers from a high state of epistemological domain risk. In other words, there exist a high number of critical paths of consideration, along with a high degree of sensitive and influencing factors – very few of which we have examined or understood sufficiently. Inside this realm of deliberation, induction under the Popper Demarcation of Science not only will not prove out the safety of our product, but we run a high risk of not possessing enough knowledge to even know how to test our product adequately for its safety to begin with. The domain epistemological risk is high. When a corporate technology is pushed onto the public at large under such a circumstance, this can be indicative of greed, malice or oppression. Risk herein becomes exponential. A technology company facing this type of risk strategy challenge needs to have its legal counsel present at its piloting and closing sessions.

Example: Vaccines offer a beneficial bulwark against infectious diseases. Most vaccines work. However there is legitimate concern that we have not measured their impact in terms of unintended health consequences – both as individual treatments and as treatments in groups, nor at the ages administered. There exists a consilience (Consilient Induction modus praesens) of stark warning indicators that vaccines may be impacting the autoimmune, cognitive and emotional well being of our children.

We do not possess the knowledge which would allow us to deductively prove that our vaccines do not carry such unintended consequences. If one cites this as a condition which allows for exemption from having to conduct such study – such a disposition is shown in the chart above to constitute malice. When domain epistemological risk is high, and an authority which stands to derive power or profits from deployment of a technology inside that domain applies it by means of less-than-rigorous science (e.g. linear induction used to infer the safety of vaccines), this constitutes a condition of malice on the part of that authority.

Such conditions, where society is either outrunning its headlights or does not maintain bright enough headlamps, are what we as ethical skeptics must watch for. We must be able to discern the good-cop/bad-cop masquerade and the posturing poseur used-car salesmen of science, and stop the charade which makes a farce of science, injures our children or serves to harm us all.

     How to MLA cite this article:

The Ethical Skeptic, “Epistemological Domain and Objective Risk”; The Ethical Skeptic, WordPress, 23 May 2019; Web,

May 23, 2019 | Ethical Skepticism
