If the relevant domain of a subject is largely unknown, or insufficient study along any form of critical path of inference has been developed, then it is invalid to claim, or imply through a claim, that ignorance has been sufficiently dispelled in order to offset risk. This applies especially to that ignorance which is prolific and may serve to impart harm to at-risk stakeholders, and not simply expose cronies to a hazard. After dealing with the malice of those who shortcut science in order to turn a quick profit, one is often left feeling the need for a clean shower.
C’mon Chief, You’re Overthinking This Thing
As a younger man, I ventured out one afternoon with the thought in mind of buying a new car. My old Toyota had 185,000 miles on it, and despite all the love and care I had placed into that reliable vehicle, it was time to upgrade. ‘Timothy’, as I called my car, had served me well through years of Washington D.C.’s dreadfully monotonous 6:00 am Woodrow Wilson Bridge traffic, getting to Arlington, the Pentagon and the Capitol District, through to graduate school classes, and finally getting home nightly at 11:00 pm. My beloved car was just about worn out. So I selected the new model that I wanted and proceeded one Saturday to a local dealer. The salesperson and I struck a deal on my preferred model and color, with approval from the sales manager skulking remotely as the negotiator within some back office. Always take this as a warning sign: any time a person who is imbued with the power to strike a deal will not sit with you face to face during the execution of that deal. This is a form of good-cop/bad-cop routine. However, this being only my second car purchase, I accepted it as normal and shook hands with the salesperson upon what was, in reality, a very nice final price on my prospective new car.
The polite and professional salesperson led me down a hallway and handed me off into the office of the closing manager. The closing manager was a fast-talking administrative professional whose job it was to register the sale inside the corporate system, arrange all the payment terms, declarations, insurance and contracts, remove the car from inventory, register the sale with the State and affix all the appropriate closing signatures. A curiously high-paying position assigned to execute such a perfunctory set of tasks. The closing manager sat down and remarked what an excellent Saturday it had been, and then added that he was glad that I was his “last sale of the evening.” He had a bottle of cognac staged on his desk, ready to share a shot with the sales guys who had delivered an excellent performance week. The closing manager pulled up the inventory record and then printed out the sales contract in order to review it with me. In reviewing the document, I noted that the final closing figure listed at the bottom of the terms structure was $500 higher than the price I had just struck with the salesperson. The closing manager pointed out that the figure we had negotiated did not reflect the ‘mandatory’ addition of the VIN being laser-engraved into the bottom of each window. The fee for the laser engraving, and believe him (*chuckle) it was well worth it, was $500. If the vehicle was ever stolen, the police would be asking me for this to help them recover the vehicle. Not to worry however, the laser engraving had already been applied at the factory. This was an administrative thing, really.
Raising objection to this sleight-of-hand tactic, I resolved to remain firm in that objection and expressed my intent to walk out the door if the $500 adder was not removed from the contract. The closing manager then retorted that he did not have time to correct the contract, as “the agreement had already been registered in the corporate system” and he would “have to back that out of the system and begin all over again.” To which I responded, “Then let’s begin all over again.” Thereupon, the closing manager said that he had to make a quick call home. He called his spouse and in very dramatic fashion exclaimed, “Honey, tell our son that we will be late to his graduation because I have to re-enter a new contract here at the last hour. What? He can’t wait on us?” The closing manager held the phone to his chest and said, “I am going to have to miss my son’s graduation.” (This reminded me of being told that, since I question Herodotus’ dating of the Khufu Pyramid, along with his claim that he even physically traveled to Egypt in the first place, I therefore ‘believe that aliens built the pyramids and am racist towards Egyptians’.) Having grown absolutely disillusioned as to the integrity of this whole farce, I responded, “OK, attend your son’s graduation and I will come back some other time.” “Surely they do not think I am this dumb. Do I look stupid or something?” I mulled while getting up from my chair and proceeding out the door in disgust.
I was met in the exit hallway by the previously hidden bad-cop, the sales manager. “Wait, wait, Chief, you’re overthinking this thing. You don’t understand, we have given you a great price on this vehicle. I have a guy who wants to take this particular inventory first thing in the morning.” To which I responded, “Well, make sure you tell him about the mandatory laser engraving fee,” fluttering my hands upward in mock excitement. My valuable weekend car-shopping time had been wasted by manipulative and dishonest fools. It was not simply that I did not know about the engraving fee, rather that I did not even know that I did not know about the potential for any such fake fee. The epistemic domain had been gamed for deception. They had allowed me to conduct my science, if you will, inside a purposefully crafted charade of ignorance – a Descriptive Wittgenstein Error. They had hoped that the complexity of the sales agreement would provide a disincentive for me to ‘overthink’ and spot the deal’s shenanigans. I walked out of the showroom feeling like I needed to immediately go home and take a shower.
Whenever someone pedantically instructs you that you are overthinking something,
under a condition of critical path or high domain unknown, be very wary. You are being pitched a con job.
If you have not departed from the critical path of necessary inference,
or if the domain is large and clouded with smoke and mirrors, never accept an accusation of ‘overthinking’.
Such cavil constitutes merely a simpleton’s or a manipulative appeal to ignorance.
Domain Ignorance and Epistemological Risk
What this car-sales comedy serves to elicit is a principle in philosophy called ‘ignorance of the necessary epistemological domain’, or the domain of the known and unknown regarding one cohesive scientific topic or question. Understanding both the size of such a domain, as well as the portion of it which science competently grasps, is critical in assessing scientific risk – to wit: the chance that one might be bamboozled on a car contract because of a lack of full disclosure, or the chance that millions of people will be harmed through the premature rollout of a risky corporate technology which has ‘over-driven its headlights’ of domain competency, and is now defended by an illegitimate and corrupt form of ‘risk strategy’ as a result.
There are two distinct species of scientific risk: epistemological risk and risk involving an objective outcome. In more straightforward terminology, the risk that we don’t know something, and the risk that such not-knowing could serve to impart harm.
Before we introduce those two types of risk, however, we must define how they relate to and leverage from a particular willful action, a verb which goes by the moniker of ignorance. Ignorance is best defined in its relationship to the three forms of Wittgenstein error.1 2 3
Ignorance – a willful set of assumptions, or lack thereof, outside the context of scientific method and inference, which results in the personal or widespread presence of three Wittgenstein states of error (for a comprehensive description of these error states, see Wittgenstein Error and Its Faithful Participants):
Wittgenstein Error (Contextual)
Situational: I can shift the meaning of words to my favor or disfavor by the context in which they are employed
Wittgenstein Error (Descriptive)
Describable: I cannot observe it because I refuse to describe it
Corruptible: Science cannot observe it because I have crafted language and definition so as to preclude its description
Existential Embargo: By embargoing a topical context (language) I favor my preferred ones through means of inverse negation
Wittgenstein Error (Epistemological)
Tolerable: My science is an ontology dressed up as empiricism
bedeutungslos – meaningless or incoherent
unsinnig – nonsense or non-science
sinnlos – mis-sense, logical untruth or lying.
Now that we have a frame of reference as to what indeed constitutes ignorance (the verb), we can cogently and in straightforward manner define epistemological domain, along with the two forms of scientific risk: epistemological risk and objective risk. This is how a risk strategy is initiated.
Epistemological Domain (What We Should Know)
/philosophy : skepticism/ : what we should know. That full set of critical path sequences of study, along with the salient influencing factors and their imparted sensitivity, which serve to describe an entire arena of scientific endeavor, study or question, to critical sufficiency and plenary comprehensiveness.
Epistemological Risk (What We Don’t Know and Don’t Know That We Don’t Know)
/philosophy : skepticism : science : risk/ : what we don’t know and don’t know that we don’t know. That risk residing in ignorance of the necessary epistemological domain, which is influenced by the completeness of science inside that domain, as evidenced by any form of shortfall in:
• quality of observational research,
• nature and reach of hypothesis structure,
• appropriateness of study type and design,
• bootstrap strength of the type and mode of inference drawn,
• rigor of how and why we know what we know,
• absence or presence of operating agency, and finally
• predominance or subordinance of the subject domain’s established domain risk (the subject of this article)
The next step after defining these elements of risk is to undertake a Risk Strategy. The purpose of a risk strategy is to translate epistemological risk into objective risk and then set out an ethical plan which serves to shield at-risk stakeholders from its impact. As a professional who develops value chain and risk strategies, I remain shocked at the number of risky technological roll-outs, enacted by large and supposedly competent field-subject corporations, which are executed inside a complete vacuum in terms of any form of risk strategy at all. When the lay public or their representatives challenge your technology’s safety, your ethical burden is not to craft propaganda and social advocacy, but rather to produce the Risk Strategy which was prosecuted, in advance of the technology rollout, to address their concerns. Two current examples of such unacceptable circumstance, framed inside the analogy of ‘car headlights’, are highlighted later in this article.
What is a Risk Strategy?
One way in which such matters are addressed in industry (when they are addressed – which is rarely) is to conduct a form of value chain strategy called a risk chain evaluation or ‘risk strategy’. Risk flows in similar fashion to a value or margin chain: it concatenates, snowballs and increases non-linearly. It is not a stand-alone element unto itself, but rather part of the fabric of the mission, product or channel of service being undertaken. A risk strategy is usually done as part of a value chain strategy.4 Both forms of analysis involve the flow of value, matched against the counter-flow of resources. Risk is simply an objectified species of value – so the competent technology company, when choosing to conduct a risk strategy, will often seek the counsel of a value chain strategy firm to come alongside and assist its project executives and managers through a Risk Strategy workplan. Despite the complex-sounding framework presented here, the subject is only complex in its generic description. Once applied to a specific technology, market or treatment, the actual execution of a risk strategy as part of a value chain or branding strategy becomes very straightforward.
A risk strategy is not congruent with a hazard exposure assessment. In assessing hazards, one already knows what the dangers are, and is measuring potential harm (exposure) to earnings/insurers/stockholders.
In a risk strategy, an operating group is identifying what they do not know (in advance of identifying hazards), and how that lack of knowing can serve to harm brand/mission/stakeholders/environment/clients.
A risk strategy is developed in industry by first conducting a piloting session, which kicks off two steps. The first step tasks a team with developing the value chain description (Question 1 below) of the entailed domain (the critical path of a product development horizon, a brand strategy, a legal argument, or an imports channel, for example). The second step then involves developing the epistemological risk and slack factors, measures and sensitivities which can be assigned to each node (action/decision) in the risk chain series mapped during the first step (Questions 2 – 7 below). These shortfalls in diligence are derived from the general categorizations defined (with links) under ‘Epistemological Risk’ above. This does not actually take that long if the group is guided by an experienced professional. The groups who conducted the two steps above then reconvene and develop the answer to Question 8 as the final step.
A Risk Strategy seeks to prosecute the following set of questions, in order (a rough sketch of how the resulting node mapping might be captured follows the list):
1. What is the state of current industry of observational research, and how much of the subject domain has been touched? Map the subject domain and its core critical path arguments/issues (elements)/sensitivities (the ‘footprint’).
2. How many novel and tested hypotheses have addressed this domain footprint (articles, systematic reviews, editorials do not count)? How many are actually needed in order to fairly address the footprint domain risk?
3. What types and designs of study have been completed regarding each hypothesis, and were they sufficient to the problem? Has there been torfuscation?
4. What was the bootstrap strength of the type and mode of inference drawn from these studies? Was it merely inductive? Can deductive work be done? Does methodical deescalation exist inside the industry?
5. Prosecute the state of the industry under the standard of ‘How we know, what we know’. Is it sound? What ‘agencies’ exist and do they constitute a domain problem?
6. Establish the risk horizon of ‘unknown knowns’ and ‘unknown unknowns’. How predominant or subordinate is this set, as compared to the overall domain of knowledge?
7. Finalize Risk Chain mapping and develop a Risk Horizon by Type (see below) for each critical path issue identified in step 1.
8. How do we take action to mitigate the Risk Horizon, and how do we craft organization mission and brand around these now-established ethical principles?
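By way of a hedged sketch (this structure and its scoring scale are my assumptions, not a worksheet from this article), the step-1 risk chain and the step-2 epistemological risk factors might be captured per node along each critical path, ahead of the Question 8 mitigation session:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical factor names, mirroring the epistemological risk shortfall
# categories listed earlier. The 0-1 scale is an assumption:
# 0.0 = well established by science, 1.0 = entirely unknown.
FACTORS = (
    "observational_research", "hypothesis_coverage", "study_design",
    "inference_strength", "how_we_know", "agency", "domain_risk",
)

@dataclass
class RiskNode:
    """One action/decision node along a critical path of the value chain."""
    name: str
    scores: Dict[str, float] = field(default_factory=dict)

    def horizon(self) -> float:
        """Crude per-node Risk Horizon: mean unknown-ness across all factors.
        Unscored factors default to 1.0 (we do not know what we do not know)."""
        return sum(self.scores.get(f, 1.0) for f in FACTORS) / len(FACTORS)

@dataclass
class CriticalPath:
    """A mapped critical path (step 1), ready for factor scoring (step 2)."""
    name: str
    nodes: List[RiskNode]

    def horizon_by_node(self) -> Dict[str, float]:
        return {n.name: round(n.horizon(), 2) for n in self.nodes}

# Hypothetical example: one critical path within a technology rollout.
path = CriticalPath("chronic exposure pathway", [
    RiskNode("dose characterization",
             {"observational_research": 0.3, "hypothesis_coverage": 0.6,
              "study_design": 0.4, "inference_strength": 0.5,
              "how_we_know": 0.4, "agency": 0.2, "domain_risk": 0.5}),
    RiskNode("long-term physiological effect",
             {"observational_research": 0.8, "hypothesis_coverage": 0.8,
              "study_design": 0.9, "inference_strength": 0.9,
              "how_we_know": 0.7, "agency": 0.6, "domain_risk": 0.9}),
])
print(path.horizon_by_node())
# {'dose characterization': 0.41, 'long-term physiological effect': 0.8}
```

The per-node horizon figures are what the final reconvened session then rates, triages and mitigates.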
Once this is done competently, the company which conducts a risk strategy will shine like a beacon against its shortcut-minded competitors. The two colors orange and red, on the right in the following chart, depict our ‘risk horizon’: that which we as a deploying corporate entity do not know that science already knows, and that which we do not know that we do not know. These are the domains of ignorance which serve to endanger an at-risk technology stakeholder through objective risk.
The Horizon of Epistemological Risk
High Epistemological Domain Risk: there exist a high number of critical paths of consideration, along with a high number of sensitive and influencing factors – very few of which we have examined or understood sufficiently.
Lower Epistemological Domain Risk: there exist a low or moderate number of critical paths of consideration, along with a reasonable number of sensitive and influencing factors – many or most of which we have examined and begun to understand sufficiently.
Once epistemological risk is mapped (1 – 7 above, or ‘what we don’t know’), a mitigation approach is developed which can serve to rate, triage and then minimize each risk element, or reduce the effect of risk elements combining into unintended consequences (how what we don’t know can serve to harm someone or something). Stand-alone risks are treated differently than are concatenated or cumulatively escalating (snowballing) risks. However, all risks are measured in terms of virtual (non-realized) consequences. These consequences are what risk theory deems ‘objective risk’.
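A minimal sketch (my illustration, with assumed per-node figures) of why concatenated risks warrant different treatment from stand-alone ones: unknown-ness along a series of dependent nodes compounds non-linearly, rather than being governed by the worst single node alone.

```python
def standalone_exposure(node_unknowns):
    """Stand-alone risks: each node is mitigated on its own, so the chain's
    exposure is governed by its worst single node."""
    return max(node_unknowns)

def concatenated_exposure(node_unknowns):
    """Series-dependent (snowballing) risks: the chance that at least one node
    on the chain harbors an unaddressed unknown grows non-linearly with depth."""
    clean = 1.0
    for p in node_unknowns:
        clean *= (1.0 - p)
    return 1.0 - clean

chain = [0.10, 0.15, 0.20, 0.25]               # per-node unknown-ness (assumed)
print(standalone_exposure(chain))               # 0.25  -> mitigate the worst node
print(round(concatenated_exposure(chain), 3))   # 0.541 -> the chain itself is the hazard
```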
Objective Risk (What Harm Might Result)
/philosophy : science : technology : risk/ : what harm might result from our not knowing. The risk entailed in an outcome inside a particular state of being or action, stemming from a state of high epistemological risk, which might result in an increase in the ignorance itself and/or in harm and suffering to any form of at-risk stakeholder. Hazards are identified along with estimates for exposure and robustness efforts inside a Mitigation Plan. Objective risk comes in two forms.
Risk Type I constitutes a condition of smaller Risk Horizon (lower epistemological risk) wherein our exposure resides in deploying a technology faster than our rate of competence development inside its use context.
Risk Type II is the condition wherein the Risk Horizon is extensive (our knowledge is low), yet we elect to deploy a technology or treatment despite these unknown levels of risk horizon exposure.
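As a hedged illustration (the variables and thresholds here are my assumptions, not figures from this article), the two objective risk types might be flagged from a mapped Risk Horizon and from the pace of deployment relative to competence development:

```python
def objective_risk_type(risk_horizon: float,
                        deployment_pace: float,
                        competence_pace: float) -> str:
    """risk_horizon: 0.0 (domain well mapped) .. 1.0 (domain largely unknown).
    deployment_pace / competence_pace: relative rates in the same arbitrary units."""
    if risk_horizon >= 0.5:
        # Extensive Risk Horizon: we do not know enough to even test adequately.
        return "Risk Type II: headlamps not bright enough"
    if deployment_pace > competence_pace:
        # Smaller Risk Horizon, but rollout outpaces competence development.
        return "Risk Type I: over-driving our headlights"
    return "No objective risk flag under this crude screen"

print(objective_risk_type(0.3, deployment_pace=3.0, competence_pace=1.0))  # Type I
print(objective_risk_type(0.8, deployment_pace=1.0, competence_pace=1.0))  # Type II
```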
The last step involves a plan addressing how we head off the virtual hazards the team has identified. However, there are certain things which ‘how we head it off’ does not mean – namely those dark and questionable practices of monist, oligarch and crony driven corporations, to wit:
What a Risk Strategy Does NOT Do
Does the following set of activities look familiar? It should, as this is the ethic of today’s monist/oligarch/crony-operated entity. A real risk strategy conducts real science (see the definition and links under ‘Epistemological Risk’ above) and follows, generally, the above process. Risk resides in what one does not know, not in what one does know. Its client is the technology company’s at-risk stakeholder community – and NOT the corporation, its insurers nor its stockholders. The following very common tactics, in contrast, are not elements of a real risk strategy; they constitute rather a half-assed strategy of Court-defined malice and oppression:
Fake Risk Strategy
• Identify ‘hazards’ and assess their likelihood of causing harm, and call that ‘risk’
• Identify only hazards which bear a ‘risk’ of harming the insurer or stockholder
• Identify foes and research their backgrounds for embarrassing information and smear campaigns
• Develop a ‘talking points’ sheet of propaganda to hand to the media in advance of potential bad news
• Develop astroturf ‘skeptics’ who are given target groups and individuals to harass with ‘science’
• Hire celebrity skeptics to accuse anyone who dissents, of being ‘anti-science’
• Hire Facebook, Twitter or Forbes to manage which voices get heard or ‘liked’
• Identify the money needed to pay off legislative representatives for protection
• Threaten universities with funding cuts if their tenured staff speak up about your technology
• Execute mergers and acquisitions before stockholders have a chance to tender input to the Board of Directors
• Prop up fictitious one-and-done labs to develop some quick, shallow inductive study purporting to prove your product safe
• Identify that level of intimidating-sounding ‘science’ which would impress layman science communicators and the media.
• Seek to bundle one’s technology risk with other technologies so as to hide any potential risk flagging signal
• Pay university professors under the table, in order to engender their support against targeted enemies
• Develop accounting practices which allow risk-based profits to be hidden inside other organizations or facets of the organization

Cancer hazard exposure, corporate safety policy development or the definition of safety in a regulatory setting – these are not examples of true risk strategy. They are applications in hazard exposure mitigation. Essential activity no doubt, but not activity which properly addresses risk. They are not science (‘I learn’), but rather its pretense or finish, sciebam (‘I knew’).
This is like having regulatory officials who hand out speeding and parking tickets, yet turn a blind eye to thieves and violent or corporate criminals – and then deem themselves and their activity to constitute ‘law enforcement’.
A true risk value chain strategy assesses what we do not know, how each node or function compounds through series or parallel activity, and how its flow and dynamic may impact a broad footprint of exposed stakeholders or value.
For an example of a simpleton science communicator who bears not the first clue about risk, look no further than Kavin Senapathy and her ‘expert’ article here: Clearing Up the Concept of Risk Assessment
In other words, a real risk strategy does real science – and a fake risk strategy pretends that it already knows everything it needs to know, does no more research, and just estimates the odds of something blowing up in its face.
When knowledge is uncertain, experts should avoid pressures to simplify their advice. ~Andy Stirling, Nature – Keep it Complex
A fake risk strategy then conducts social manipulation in place of managing exposure and robustness through a Mitigation Plan. Very much akin to what fake skepticism does. This is why you observe these types of companies conducting their robust science after they have already rolled out their dangerous product. They got caught, and now the public is forcing them to do a risk strategy a posteriori.
A Risk Strategy is not the process of ‘identifying hazards’, and then assessing the ‘likelihood that a specific hazard will cause harm’ (our exposure). Unless you identify the hazard as ‘We lack knowledge’, all this charade does is serve to confirm what we already knew a priori. This is not the definition of risk, nor is this how a risk strategy is conducted regarding complex horizons. A mitigation plan serves to identify hazards, along with our exposure or robustness therein (Taleb, The Black Swan), but this cannot be done in a vacuum, nor as the first step.
Before we move on: as you can observe inside the definition of epistemological risk above, we have addressed inside six recent blog articles (each one hyperlinked in blue) the principles of sound research effort, the elements of hypothesis, study design and type, agency risk, along with the types and modes of inference and how we know what we know. These first six links constitute ‘the science’ behind a risk strategy. This leaves open, of course, the final and seventh defining element in that same links list: the topic of ‘subject epistemological domain’. Domain epistemological risk is the component of the definition which is critical before one can assess the subject of objective risk in sufficiently ethical fashion. This of course is the purpose and focus of this article; thus we continue with domain epistemological risk as it is defined inside a concept called the Risk Horizon.
If your Big-Corp has conducted all the scientific diligence necessary in the rollout of a risk-bearing technology
or medical intervention, then show me the Risk Strategy it employed
and should have posted and made available for stakeholder review.
Third-party systematic reviews conducted after the rollout of the technology or treatment do not constitute sufficient ethics nor science.
Inference Inside the Context of a Risk Horizon
What we have introduced with the above outline of risk is the condition wherein we as a body of science, or the society which accepts that body of science, have deployed a technology at a rate which has outpaced our competence with that technology domain. In other words, we have over-driven our headlights. We are either driving too fast for our headlights to help keep us safe, or we are driving on a surface which we are not even sure is a road, because our headlamps are too dim to begin with. This latter condition – the circumstance where our headlamps are so dim that we cannot distinguish the road – involves the principle which is the subject of this article: domain epistemological risk, or more accurately, the size of the domain of established competence and the resulting Risk Horizon. Below, we have redeveloped The Map of Inference such that it contrasts standard-context inference with that special hierarchy of inference which is exercised in the presence of either epistemological or objective risk. The decision theory, as well as the types of inference and study designs, are starkly different under each scenario of confidence development, per the following chart.
The Map of Inference Versus Risk Horizon
The first thing one may observe inside the domain chart above is that it is much easier to establish a case of risk (Objective Risk – modus praesens) than it is to conclusively dismiss one (Objective Risk – modus absens). That ethic may serve to piss off extraction-minded stockholders, but those are the breaks when one deploys a technology bearing public stakeholder risk.
Under risk, even an appeal to convention must bring unassailable evidence. Rigor must be served.
What one may also observe in the above chart are two stark contrasts between risk-based inference and standard inference. These two contrasts, in Risk Types I and II, are outlined below via the analogies of over-driving one’s headlights, or possessing too dim a set of headlamps. Each bears implications with regard to waste, inefficiency and legal liability.
Risk Type I: Over-driving Our Headlights
Smaller Risk Horizon (Lower State of Domain Epistemological Risk)
First, when one moves from the context of the trivial ascertainment of knowledge into an arena wherein a population of stakeholders is placed at risk – say, for example, the broadscale deployment of a pesticide or an energy-emitting system – the level of rigor required in epistemology increases substantially. One can see this under the column ‘Objective Risk modus absens‘. Here the null hypothesis shifts to the assumed presence of risk, not its absence (the precautionary principle). In other words, in order to prove to the world that your product is safe, it is not sufficient to simply publish a couple of Hempel’s Paradox inductive studies. The risk involved in a miscall is too high. Through the rapid deployment of technology, society can outrun its ability to competently use or maintain that technology safely – as might be evidenced by nuclear weapons, or a large dam project in a third-world nation which does not have the educational nor labor resources to support operation of the dam. When we as a corporate technology culture are moving so fast that our pace outdistances our headlights – risk concatenates or snowballs.
Example: 5G is a promising and powerful technology. I love the accessibility and speeds it offers. However, there is legitimate concern that it may suffer being deployed well before we know enough about this type of pervasive radiation’s impact on human and animal physiology. A wave of the indignant corporate hand, and the inchoate appointment of the same skeptics who defended Vioxx and Glyphosate, is not sufficient scientific diligence. If I see the same old tired skeptics being dragged out to defend 5G – that is my warning sign that the powers deploying it have no idea what they are doing. I am all for 5G – but I want scientific deductive rigor (modus absens) in its offing.
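One hedged statistical sketch of this shifted burden (my illustration – the article itself does not prescribe any particular test, and every number below is an assumption): under modus absens, merely failing to detect a difference between exposed and unexposed groups is not the same as demonstrating that harm is absent. An equivalence-testing procedure (two one-sided tests, TOST) only certifies ‘absence’ when the observed effect is shown to fall inside a pre-declared margin.

```python
import math
from scipy import stats

def tost_equivalence(mean_diff, se, df, margin):
    """Two one-sided t-tests (TOST): 'harm absent' is only concluded when the
    observed effect is demonstrated to lie inside +/- margin."""
    p_lower = 1.0 - stats.t.cdf((mean_diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((mean_diff - margin) / se, df)        # H0: diff >= +margin
    return max(p_lower, p_upper)

# Illustrative (assumed) study summary: exposed vs. unexposed groups of n = 40 each,
# observed mean difference of 0.30 on some outcome scale, pooled SD of 1.0, and a
# pre-declared margin of practical equivalence of 0.20.
n, mean_diff, sd, margin = 40, 0.30, 1.0, 0.20
se = sd * math.sqrt(2.0 / n)
df = 2 * n - 2

# Conventional difference test: fails to reach significance (p ~ 0.18) ...
p_diff = 2.0 * (1.0 - stats.t.cdf(abs(mean_diff) / se, df))
# ... yet the equivalence test also fails to certify the effect as negligible (p ~ 0.67).
p_equiv = tost_equivalence(mean_diff, se, df, margin)

print(f"difference test  p = {p_diff:.3f}  (no evidence of harm is not evidence of no harm)")
print(f"equivalence test p = {p_equiv:.3f}  (harm within the margin has not been excluded)")
```

With these assumed numbers, the conventional difference test comes back non-significant while the equivalence test still refuses to certify safety – precisely the gap in which a Risk Type I rollout likes to hide.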
Risk Type II: Headlamps Not Bright Enough
Extensive Risk Horizon (High State of Domain Epistemological Risk)
Second, and moreover, this problem is exacerbated when the topic suffers from a high state of epistemological domain risk. In other words, there exist a high number of critical paths of consideration, along with a high number of sensitive and influencing factors – very few of which we have examined or understand sufficiently. Inside this realm of deliberation, induction under the Popper Demarcation of Science not only will not prove out the safety of our product, but we run a high risk of not possessing enough knowledge to even know how to test our product adequately for its safety to begin with. The domain epistemological risk is high. When a corporate technology is pushed onto the public at large under such a circumstance, this can be indicative of greed, malice or oppression. Risk herein becomes exponential. A technology company facing this type of risk strategy challenge needs to have its legal counsel present at its piloting and closing sessions.
Example: Vaccines offer a beneficial bulwark against infectious diseases. Most vaccines work. However, there is legitimate concern that we have not measured their impact in terms of unintended health consequences – both as individual treatments and as treatments in groups, nor at the ages administered. There exists a consilience (Consilient Induction modus praesens) of stark warning indicators that vaccines may be impacting the autoimmune, cognitive and emotional well-being of our children.
We do not possess the knowledge which would allow us to deductively prove that our vaccines do not carry such unintended consequences. If one cites this as a condition which allows for exemption from having to conduct such study, such a disposition is shown in the chart above to constitute malice. When domain epistemological risk is high, and an authority which stands to derive power or profits from deployment of a technology inside that domain applies it by means of less-than-rigorous science (e.g. linear induction used to infer the safety of vaccines), this constitutes a condition of malice on the part of that authority.
Such conditions, where society is either outrunning its headlights or does not maintain bright enough headlamps, are what we as ethical skeptics must watch for. We must be able to discern the good-cop/bad-cop masquerade and the posturing poseur used-car salesmen of science, and stop the charade which makes a farce of science, injures our children or serves to harm us all.
Our first diligence as technology sponsors and deploying corporations is the protection of our technology-receiving and adopting community. We, more than anyone, should be absolutely convinced that we substantially know our domain – and more importantly, that we know what we do not know – before we can parade out our technology with the ethical confidence that we have protected our stakeholders.

The Ethical Skeptic, “Epistemological Domain and Objective Risk”; The Ethical Skeptic, WordPress, 23 May 2019; Web, https://wp.me/p17q0e-9ME