Trouble on the Way from Notion to Inference

As a child, my favorite bedtime story was Dr. Seuss’ I Had Trouble in Getting to Solla Sollew.1 I never forgot the storyline segment wherein the hero is involuntarily conscripted into an army in order to confront the ‘Perilous Poozer of Pompelmoose Pass’. The erstwhile army ends up bailing on the hero, and he is left alone, surrounded, and without a real weapon, to fight not one, but many Perilous Poozers.

During a severe market recession a couple decades ago, I was a junior partner in a firm whose principals and owners all bailed on the business, absconding in short order with most all of the accounts, clients, and assets. These partners secretly knew that one major client was about to go bankrupt, while another was being acquired and merged. This left me alone to rescue the enterprise, and during a severe recession no less. We were abandoned with a mere two months of backlogged sales, while employees fretted over what was to happen with their jobs, families, and lives. We faced a monthly payroll that alone was twice the size of that entire backlog. It was a dark time.2

I was quite happy and lived by the ocean
Not far from a place called the Valley of Notion
Where nothing, not anything ever was wrong
Until… well, one day I was walking along

And I learned there are troubles of more than one kind
Some come from ahead and some come from behind

There I was, all completely surrounded by trouble,
When a chap rumbled up in a One-Wheeler Wubble

“Young Fellow,” he said, “what has happened to you
has happened to me and to other folks, too
What I’ve decided to do is to think in more sense…
So I’m off to the City of True Inference”

I was able to leverage my house and retirement accounts, borrow money and time, change our market message and approach, and through an intense road campaign, raise new business to replace the old – and not let a single employee down through forfeiture of their job. We even brought the company back to equal its record heights of business – selling the business at a premium of nine times earnings years later. I also ensured that the employees who stuck with the business were rewarded well in that sale. Such experience, and the willingness to stand in the gap, are essential to the life of the true philosopher – as is the stark challenge to think without coercion and under differing goal structures. Such lessons are learned in neither academia nor government, and yet they are critically essential to good science.

In the end, the hero of the Dr. Seuss story turns back to confront his troubles, and becomes trouble to them instead. When making the journey from notion to inference, there exists a cast of standard nefarious pretenders – characters who have never done a thing with their lives and who, for whatever reason, are angry at you over this reality. They will attempt to make the journey confusing and ineffective. These are the Perilous Poozers one must face down in order to discern sound science or public policy.

The Perilous Poseurs of Pompelmoose Pass

Fallacy Falcons

They don’t actually ever create anything. They hide inside the lack of accountability automatically afforded to denial and critique. They never get into the mix, but rather fly high above it, merely to swoop down and point out the informal fallacy you have committed. The problem with garden-variety fallacies is that they lend a false confidence to the mind of this form of poseur skeptic: the notion that, because they have filtered out disliked ideas by means of informal violations, they have therefore increased the likelihood that their own ideas are correct. But you will also notice that they never expose their own ideas to critique, and never show their hand at an actual logical calculus built into an argument or refutation – this is part of the massive ego complex they conceal. In the end, their debunking constitutes only a form of punishing those who disagree, and has nothing to do with any form of inference, rationality, or scientific ‘likelihood’.

It is commonly claimed that giving a fallacy a name and studying it will help the student identify the fallacy in the future and will steer them away from using the fallacy in their own reasoning. Fallacy theory is criticized by some teachers of informal reasoning for its over-emphasis on poor reasoning rather than good reasoning. Do colleges teach the Calculus by emphasizing all the ways one can make mathematical mistakes?

~ Internet Encyclopedia of Philosophy: Fallacies: 3. Pedagogy

Bayesian Bullies

Bayes’ Theorem is founded upon scientific estimations of probability, which are confirmed and then updated by a series of inductive tests. However, poseurs therein are often not aware of when such a process does and does not bear utility. These poseurs will constantly sea-lion for ‘studies’, ‘recitations’, and ‘proof’, knowing that most subjects are not easily reduced, much less resolved, by Bayesian induction under confidence. They use linear induction and abductive reasoning in place of deduction, consilience, and falsification. They elect to be scientists when an investigator is needed most, and then become technicians when they need to be scientists – shrinking from the true prosecution of ideas. They intimidate by means of unjustifiable levels of precisely framed outcome, or precision as a substitute for accuracy. They frame a complete guess by means of boastfully confident (hedging) error bands. They resolve the answer before determining the right question. They forecast the future before correctly defining the present, hoping to be lucky rather than good. They harden their model to inaccurate outcomes, failing to realize its incumbent brittleness.

Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience… such methods being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation rather than the deeper questions of inference.

~ quoted and condensed from Andrew Gelman, Objections to Bayesian Statistics, 2008, Columbia University
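To render the concern concrete, consider a minimal sketch (in Python; the priors and likelihoods are invented for demonstration, not drawn from any study) of how a confidently precise posterior can be driven almost entirely by a guessed prior when the evidence is thin – precision standing in as a substitute for accuracy:

```python
# Minimal sketch: Bayesian updating of a 'product is safe' belief.
# The priors and likelihoods below are invented for demonstration,
# not drawn from any real study.

def update(prior: float, p_obs_if_safe: float, p_obs_if_unsafe: float) -> float:
    """One Bayes-rule update: P(safe | observation)."""
    numerator = p_obs_if_safe * prior
    return numerator / (numerator + p_obs_if_unsafe * (1.0 - prior))

# Two analysts observe the SAME three uneventful trials (each trial is
# 90% likely to be uneventful if the product is safe, yet 70% likely
# even if it is unsafe), but begin from different guessed priors.
for prior in (0.50, 0.95):
    belief = prior
    for _ in range(3):
        belief = update(belief, p_obs_if_safe=0.9, p_obs_if_unsafe=0.7)
    print(f"prior {prior:.2f} -> posterior {belief:.3f}")

# prior 0.50 -> posterior 0.680
# prior 0.95 -> posterior 0.976
# Thin evidence: the precisely framed posterior mostly restates the guessed prior.
```

The three-decimal posteriors look rigorous, yet nearly the entire difference between the two analysts traces back to their opening guess – exactly the false precision described above.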

Parable Probabilizers

Their creed: since all knowledge is uncertain, knowledge can therefore be gained by merely establishing a scant level of likelihood regarding it. Such probabilizers exploit the cachet of obviousness as evidence that the ‘simple is more likely’. They stack up comfortable and understandable parables, asking you to ignore the risk and just focus on the ‘explain it in simple terms’ answer they have crafted. They contend that ‘until God gets here and establishes truth for us, I will explain that which is more likely in his stead’. Yes, this is a claim to being God. They ‘results gauge’, producing answers which are at face value simple, conforming, or understandable (concealed complication in reality), as opposed to answers which are complex, informative, challenging, or push our development envelope. They fail to understand Ockham’s Razor, and have thus crafted the mutated version called Occam’s Razor, affording one permission to wrap up all epistemological loose ends as ‘finished science’ in one fell swoop of fatal logic. They ignore the riddle of Lindy:

The fact that an opinion has been widely held is no evidence whatever that it is not utterly absurd; indeed in view of the silliness of the majority of mankind, a widely spread belief is more likely to be foolish than sensible.

~ Bertrand Russell, Marriage and Morals

Process Ponzi Schemers

One key method of pretend science is to borrow assumptions from early in the scientific method and apply them later as pretend held assets, asking one to invest belief in such a process of science. This is at its heart a Ponzi scheme – paying off scientific answers by means of borrowed assumptions, premature questions, and gravitas which are not real owned assets. They ‘ask a question’ before conducting any kind of intelligence development or establishment of necessity. They promote a mere notion to the vaunted office of hypothesis, and then prove it by its ‘simplicity’ alone. They fail to ask ‘What do we not know?’ or ‘Can this lack of knowledge cause harm?’ They use the process of reduction and linear analysis to affirm what they already ‘knew’ (sciebam), rather than seek to challenge and falsify (science). They declare (a scientific claim) something ‘supernatural’ or ‘pseudoscience’ and not approachable by science, so that it does not have to be studied in the first place and therefore can never become science either. They use accidental absences of data in a discovery protocol to stand as evidence of absence. They start with the answer, and finish in the very next step by means of the awesome insistence of meta-analysis. They view science as a bludgeon to conclusion, and not as a feedback cycle.

When I give a thesis to students, most of the time the problem I give for a thesis is not solved. It’s not solved because the solution of the question, most of the time, is not in solving the question, it’s in questioning the question itself. It’s realizing that in the way the problem was formulated there was some implicit prejudice or assumption that should be dropped.

~ Carlo Rovelli, The New Republic: Science Is Not About Certainty, 11 Jul 2014

At times, it is indeed the job of the ethical skeptic to stand in the gap on behalf of the innocent. To make life hell for those who choose to be abusive troubles rather than thoughtful contributors.

The Ethical Skeptic, “Trouble on the Way from Notion to Inference”; The Ethical Skeptic, WordPress, 17 Feb 2022; Web, https://theethicalskeptic.com/?p=50006

Epistemological Domain and Objective Risk Strategy

If the relevant domain of a subject is largely unknown, or insufficient study along any form of critical path of inference has been developed, then it is invalid to claim, or imply through a claim, that ignorance has been sufficiently dispelled in order to offset risk – especially that ignorance which is prolific and may serve to impart harm to at-risk stakeholders, and not simply constitute exposure of cronies to a hazard. After dealing with the malice of those who shortcut science in order to turn a quick profit, one is often left feeling the need for a clean shower.

C’mon Chief, You’re Overthinking This Thing

As a younger man, I ventured out one afternoon with the thought in mind of buying a new car. My old Toyota had 185,000 miles on it, and despite all the love and care I had placed into that reliable vehicle, it was time to upgrade. ‘Timothy’, as I called my car, had served me well through years of Washington D.C.’s dreadfully monotonous 6:00 am Woodrow Wilson Bridge traffic – getting to Arlington, the Pentagon, and the Capitol District, through to graduate school classes, and finally getting home nightly at 11:00 pm. My beloved car was just about worn out. So I selected the new model that I wanted and proceeded one Saturday to a local dealer. The salesperson and I struck a deal on my preferred model and color, with approval from the sales manager skulking remotely as the negotiator within some back office. Always take this as a warning sign: any time a person imbued with the power to strike a deal will not sit with you face to face during the execution of that deal, a good-cop/bad-cop routine is in play. However, this being only my second car purchase, I accepted the arrangement as normal and shook hands with the salesperson upon what was, in reality, a very nice final price on my prospective new car.

The polite and professional salesperson led me down a hallway and handed me off into the office of the closing manager. The closing manager was a fast-talking administrative professional whose job it was to register the sale inside the corporate system, arrange all the payment terms, declarations, insurance, and contracts, remove the car from inventory, register the sale with the State, and affix all the appropriate closing signatures – a curiously high-paying position assigned to execute such a perfunctory set of tasks. The closing manager sat down and remarked what an excellent Saturday it had been, and then added that he was glad that I was his “last sale of the evening.” He had a bottle of cognac staged on his desk, ready to share a shot with the sales guys who had delivered an excellent performance week. The closing manager pulled up the inventory record and then printed out the sales contract in order to review it with me. In reviewing the document, I noted that the final closing figure listed at the bottom of the terms structure was $500 higher than the agreed price I had just struck with the sales manager. The closing manager pointed out that the figure we had negotiated did not reflect the ‘mandatory’ addition of the VIN being laser-engraved into the bottom of each of the windows. The fee for the laser engraving – and believe him (*chuckle), it was well worth it – was $500. If the vehicle were ever stolen, the police would be asking me for this to help them recover the vehicle. Not to worry however, the laser engraving had already been applied at the factory. This was an administrative thing, really.

Raising objection to this sleight-of-hand tactic, I resolved to remain firm in that objection and expressed my intent to walk out the door if the $500 adder was not removed from the contract. The closing manager then retorted that he did not have time to correct the contract, as “the agreement had already been registered in the corporate system” and he would “have to back that out of the system and begin all over again.” To which I responded, “Then let’s begin all over again.” Thereupon, the closing manager said that he had to make a quick call home. He called his spouse and in very dramatic fashion exclaimed, “Honey, tell our son that we will be late to his graduation because I have to re-enter a new contract here at the last hour. What? He can’t wait on us?” The clerk held the phone to his chest and said, “I am going to have to miss my son’s graduation.” (This reminded me of being told that, since I question Herodotus’ dating of the Khufu Pyramid, along with his claim that he even physically traveled to Egypt in the first place, I therefore ‘believe that aliens built the pyramids and am racist towards Egyptians’.) Having grown absolutely disillusioned as to the integrity of this whole farce, I responded, “OK, attend your son’s graduation and I will come back some other time.” “Surely they do not think I am this dumb. Do I look stupid or something?” I mulled while getting up from my chair and proceeding out the door in disgust.

I was met in the exit hallway by the previously hidden bad cop, the sales manager. “Wait, wait, Chief, you’re overthinking this thing. You don’t understand – we have given you a great price on this vehicle. I have a guy who wants to take this particular inventory first thing in the morning.” To which I responded, “Well, make sure you tell him about the mandatory laser engraving fee”, fluttering my hands upward in mock excitement. My valuable weekend car-shopping time had been wasted by manipulative and dishonest fools. It was not simply that I did not know about the engraving fee; rather, I did not even know that I did not know about the potential of any such fake fee. The epistemic domain had been gamed for deception. They had allowed me to conduct my science, if you will, inside a purposeful and crafted charade in ignorance – a Descriptive Wittgenstein Error. They had hoped that the complexity of the sales agreement would provide disincentive for me to ‘overthink’ and spot the deal shenanigans. I walked out of the showroom feeling like I needed to immediately go home and take a shower.

Whenever someone pedantically instructs you that you are overthinking something,
under a condition of critical path or high domain unknown, be very wary.
You are being pitched a con job.

If you have not departed from the critical path of necessary inference,
or if the domain is large and clouded with smoke and mirrors, never accept an accusation of ‘overthinking’.
Such cavil constitutes merely a simpleton or manipulative appeal to ignorance.

Domain Ignorance and Epistemological Risk

What this used-car sales comedy serves to elicit is a principle in philosophy called an ‘ignorance of the necessary epistemological domain’ – the domain of the known and unknown regarding one cohesive scientific topic or question. Understanding both the size of this domain, as well as the portion of it which science has competently grasped, is critical in assessing scientific risk – to wit: the chance that one might be bamboozled on a car contract because of a lack of full disclosure, or the chance that millions of people will be harmed through the premature rollout of a risky corporate technology which has ‘over-driven its headlights’ of domain competency, and is now defended as a result by an illegitimate and corrupt form of ‘risk strategy’.

There are two distinct species of scientific risk: epistemological risk and risk involving an objective outcome. In more straightforward terminology, the risk that we don’t know something, and the risk that such not-knowing could serve to impart harm.

Before we introduce those two types of risk, however, we must define how they relate to and leverage from a particular willful action, a verb which goes by the moniker ignorance. Ignorance is best defined in its relationship to the three forms of Wittgenstein error.1 2 3

Ignorance – a willful set of assumptions, or lack thereof, outside the context of scientific method and inference, which results in the personal or widespread presence of three Wittgenstein states of error (for a comprehensive description of these error states, see Wittgenstein Error and Its Faithful Participants):

Wittgenstein Error (Contextual)
    Situational:  I can shift the meaning of words to my favor or disfavor by the context in which they are employed
Wittgenstein Error (Descriptive)
    Describable:  I cannot observe it because I refuse to describe it
    Corruptible:  Science cannot observe it because I have crafted language and definition so as to preclude its description
    Existential Embargo:  By embargoing a topical context (language) I favor my preferred ones through means of inverse negation
Wittgenstein Error (Epistemological)
    Tolerable: My science is an ontology dressed up as empiricism
        bedeutungslos – meaningless or incoherent
        unsinnig – nonsense or non-science
        sinnlos – mis-sense, logical untruth or lying.

Now that we have a frame of reference as to what ignorance (the verb) indeed is, we can cogently and in a straightforward manner define epistemological domain, along with the two forms of scientific risk: epistemological risk and objective risk. This is how a risk strategy is initiated.

Epistemological Domain (What We Should Know)

/philosophy : skepticism/ : what we should know. That full set of critical path sequences of study, along with the salient influencing factors and their imparted sensitivity, which serve to describe an entire arena of scientific endeavor, study or question, to critical sufficiency and plenary comprehensiveness.

Epistemological Risk (What We Don’t Know and Don’t Know That We Don’t Know)

/philosophy : skepticism : science : risk/ : what we don’t know and don’t know that we don’t know. That risk in ignorance of the necessary epistemological domain, which is influenced by the completeness of science inside that domain, as evidenced by any form of shortfall in

•  quality of observational research,
•  nature and reach of hypothesis structure,
•  appropriateness of study type and design,
•  bootstrap strength of the type and mode of inference drawn,
•  rigor of how and why we know what we know,
•  absence or presence of operating agency, and finally
•  predominance or subordinance of the subject domain’s established domain risk (subject of this blog)

The next step after defining these elements of risk is to undertake a Risk Strategy. The purpose of a risk strategy is to translate epistemological risk into objective risk, and then set out an ethical plan which serves to shield at-risk stakeholders from its impact. As a professional who develops value chain and risk strategies, I remain shocked at the number of risky technological roll-outs, enacted by large and supposedly competent field-subject corporations, which are executed inside a complete vacuum in terms of any form of risk strategy at all. When the lay public or their representatives challenge your technology’s safety, your ethical burden is not to craft propaganda and social advocacy, but rather to produce the Risk Strategy which was prosecuted, in advance of the technology rollout, to address their concerns. Two current examples of such unacceptable circumstance, framed inside the analogy of ‘car headlights’, are highlighted later in this article.

What is a Risk Strategy?

One way in which such matters are addressed in industry (when they are addressed – which is rarely) is to conduct a form of value chain strategy called a risk chain evaluation, or ‘risk strategy’. Risk flows in similar fashion to a value or margin chain: it concatenates, snowballs, and increases non-linearly. It is not a standalone element unto itself, but rather part of the fabric of the mission, product, or channel of service being undertaken. A risk strategy is usually done as part of a value chain strategy.4 Both forms of analysis involve the flow of value, matched against the counter-flow of resources. Risk is simply an objectified species of value – so the competent technology company, when choosing to conduct a risk strategy, will often seek the counsel of a value chain strategy firm to come alongside and assist its project executives and managers through a Risk Strategy workplan. Despite the complex-sounding framework presented here, the subject is only complex in its generic description. Once applied to a specific technology, market, or treatment, the actual execution of a risk strategy as part of a value chain or branding strategy becomes very straightforward.
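As a rough illustration of that snowballing (a toy sketch in Python; the per-node failure probabilities and the escalation factor are invented for the example), per-node risks which appear small in isolation compound non-linearly across a series chain:

```python
# Toy sketch of risk concatenation along a series chain of nodes.
# Per-node failure probabilities are invented for illustration.
from math import prod

def chain_failure_risk(node_risks: list[float]) -> float:
    """Probability that at least one node in a series chain fails,
    assuming independent nodes: 1 minus the product of per-node survivals."""
    return 1.0 - prod(1.0 - r for r in node_risks)

# Five nodes, each carrying a 'small' 5% chance of failure...
print(f"{chain_failure_risk([0.05] * 5):.3f}")   # 0.226 -- roughly 23%, not 5%

# ...and worse where an upstream failure mode escalates downstream
# exposure (here an invented 1.5x escalation per successive node):
escalated = [0.05 * 1.5 ** i for i in range(5)]
print(f"{chain_failure_risk(escalated):.3f}")    # ~0.516 -- the risk has snowballed
```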

A risk strategy is not congruent with a hazard exposure assessment. In assessing hazards, one already knows what the dangers are, and is measuring potential harm (exposure) to earnings/insurers/stockholders.

In a risk strategy, an operating group is identifying what they do not know (in advance of identifying hazards), and how that lack of knowing can serve to harm brand/mission/stakeholders/environment/clients.

A risk strategy is developed in industry by first conducting a piloting session, which kicks off two steps. The first tasks a team assigned to develop the value chain description (Question 1 below) of the entailed domain (the critical path of a product development horizon, a brand strategy, a legal argument, or an imports channel, for example). A second step then involves development of epistemological risk and slack factors, measures, and sensitivities which can be assigned to each node (action/decision) in the risk chain series mapped during the first step (Questions 2 – 7 below). These shortfalls in diligence are derived from the general categorizations defined (with links) under ‘Epistemological Risk’ above. This does not actually take that long if the group is guided by an experienced professional. The groups who conducted the two steps above then reconvene and develop the answer to Question 8 as the final step. (A toy sketch of how such per-node findings might be tallied follows the question list below.)

A Risk Strategy seeks to prosecute the following set of questions, in order:

1.  What is the state of current industry of observational research, and how much of the subject domain has been touched? Map the subject domain and its core critical path arguments/issues (elements)/sensitivities (the ‘footprint’).

2.  How many novel and tested hypotheses have addressed this domain footprint (articles, systematic reviews, editorials do not count)? How many are actually needed in order to fairly address the footprint domain risk?

3.  What types and designs of study have been completed regarding each hypothesis, and were they sufficient to the problem? Has there been torfuscation?

4.  What was the bootstrap strength of the type and mode of inference drawn from these studies? Was it merely inductive? Can deductive work be done? Does methodical deescalation exist inside the industry?

5.  Prosecute the state of the industry under the standard of ‘How we know, what we know’. Is it sound? What ‘agencies’ exist and do they constitute a domain problem?

6.  Establish the risk horizon of ‘unknown knowns’ and ‘unknown unknowns’. How predominant or subordinate is this set, as compared to the overall domain of knowledge?

7.  Finalize Risk Chain mapping and develop a Risk Horizon by Type (see below) for each critical path issue identified in step 1.

8.  How do we take action to mitigate the Risk Horizon, and how do we craft organization mission and brand around these now-ethical principles?
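As promised above, here is a toy sketch (Python; the field names, weights, and scores are invented for illustration and are not a prescribed scoring model) of how findings from Questions 2 through 5 might be tallied per critical-path node into a crude risk-horizon rating for triage in Questions 7 and 8:

```python
# Toy sketch: tallying risk-strategy findings per critical-path node.
# Field names and weights are invented for illustration; a real workplan
# would derive them from the eight questions above.
from dataclasses import dataclass

@dataclass
class RiskNode:
    name: str
    hypotheses_tested: int      # Q2: tested hypotheses touching this node
    hypotheses_needed: int      # Q2: how many the footprint fairly requires
    deductive_work_done: bool   # Q4: any deductive (vs merely inductive) inference?
    agency_flagged: bool        # Q5: does operating agency constitute a domain problem?

    def horizon_score(self) -> float:
        """Crude 0..1 risk-horizon rating per node: higher = more unknown."""
        coverage = min(self.hypotheses_tested / max(self.hypotheses_needed, 1), 1.0)
        score = 1.0 - coverage
        if not self.deductive_work_done:
            score = min(score + 0.2, 1.0)   # inductive-only inference leaves slack
        if self.agency_flagged:
            score = min(score + 0.2, 1.0)   # agency widens the unknown
        return score

nodes = [
    RiskNode("exposure pathway", hypotheses_tested=2, hypotheses_needed=8,
             deductive_work_done=False, agency_flagged=True),
    RiskNode("dose response", hypotheses_tested=5, hypotheses_needed=6,
             deductive_work_done=True, agency_flagged=False),
]
# Triage: prosecute the largest risk horizons first (Questions 7 and 8).
for node in sorted(nodes, key=RiskNode.horizon_score, reverse=True):
    print(f"{node.name}: risk horizon {node.horizon_score():.2f}")
```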

Once done competently, a risk strategy will make the company which conducts it shine like a beacon against its shortcut-minded competitors. The two colors, orange and red, on the right in the following chart depict our ‘risk horizon’: that which we as a deploying corporate entity do not know that science already knows, and that which we do not know that we do not know. These are the domains of ignorance which serve to endanger an at-risk technology stakeholder through objective risk.

The Horizon of Epistemological Risk

High Epistemological Domain Risk: there exist a high number of critical paths of consideration, along with a high degree of sensitive and influencing factors – very few of which we have examined or understood sufficiently.

Lower Epistemological Domain Risk: there exist a low or moderate number of critical paths of consideration, along with a reasonable degree of sensitive and influencing factors – many or most of which we have examined and begun to understand sufficiently.

Once epistemological risk is mapped (steps 1 – 7 above, or ‘what we don’t know’), a mitigation approach is developed which can serve to rate, triage, and then minimize each risk element, or reduce the effect of risk elements combining into unintended consequences (how what we don’t know can serve to harm someone or something). Standalone risks are treated differently than concatenated or cumulatively escalating (snowballing) risks. However, all risks are measured in terms of virtual (non-realized) consequences. These consequences are what is deemed inside risk theory as ‘objective risk’.

Objective Risk (What Harm Might Result)

/philosophy : science : technology : risk/ : what harm might result from our not knowing. The risk entailed as a result of an outcome inside a particular state of being or action, stemming from a state of high epistemological risk, and which might result in an increase in the ignorance itself and/or in harm and suffering to any form of at-risk stakeholder. Hazards are identified along with estimates for exposure and robustness efforts inside a Mitigation Plan. Objective risk comes in two forms.

Risk Type I constitutes a condition of smaller Risk Horizon (lower epistemological risk) wherein our exposure resides in deploying a technology faster than our rate of competence development inside its use context.

Risk Type II is the condition wherein the Risk Horizon is extensive (our knowledge is low), yet we elect to deploy a technology or treatment despite these unknown levels of risk horizon exposure.
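Framed as a toy decision rule (a Python sketch; the 0.5 threshold and the pace inputs are invented for illustration), the two types separate on horizon size versus deployment pace:

```python
# Toy decision rule for the two objective risk types described above.
# The threshold and inputs are invented for illustration only.

def objective_risk_type(risk_horizon: float,
                        deploy_pace: float,
                        competence_pace: float) -> str:
    """risk_horizon: 0..1 share of the subject domain which remains unknown.
    deploy_pace / competence_pace: relative rates of rollout versus learning."""
    if risk_horizon >= 0.5:
        # Headlamps not bright enough: deploying into a largely unknown domain.
        return "Type II"
    if deploy_pace > competence_pace:
        # Over-driving the headlights: domain mostly known, rollout outpaces competence.
        return "Type I"
    return "within headlights"

print(objective_risk_type(0.2, deploy_pace=3.0, competence_pace=1.0))  # Type I
print(objective_risk_type(0.7, deploy_pace=1.0, competence_pace=1.0))  # Type II
```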

The last step involves a plan addressing how we head off the virtual hazards the team has identified. However, there are certain things which ‘how we head it off’ does not mean – namely, those dark and questionable practices of monist, oligarch, and crony driven corporations, to wit:

What a Risk Strategy Does NOT Do

Do the following activities look familiar? They should, as this is the ethic of today’s monist/oligarch/crony operated entity. A real risk strategy conducts real science (see the definition and links under ‘Epistemological Risk’ above) and follows, generally, the above process. Risk resides in what one does not know, not in what one does know. Its client is the technology’s at-risk stakeholder community – and NOT the corporation, its insurers, nor its stockholders. The following very common tactics, in contrast, are not elements of a real risk strategy, constituting rather a half-assed strategy of Court-defined malice and oppression:

Fake Risk Strategy

•  Identify ‘hazards’ and assess their likelihood of causing harm, and call that ‘risk’
•  Identify only hazards which bear a ‘risk’ of harming the insurer or stockholder
•  Identify foes and research their backgrounds for embarrassing information and smear campaigns
•  Develop a ‘talking points’ sheet of propaganda to hand to the media in advance of potential bad news
•  Develop astroturf ‘skeptics’ who are given target groups and individuals to harass with ‘science’
•  Hire celebrity skeptics to accuse anyone who dissents, of being ‘anti-science’
•  Hire Facebook, Twitter or Forbes to manage which voices get heard or ‘liked’
•  Identify the money needed to pay off legislative representatives for protection
•  Threaten universities with funding cuts if their tenured staff speak up about your technology
•  Execute mergers and acquisitions before stockholders have a chance to tender input to the Board of Directors
•  Prop up fictitious one-and-done labs to develop some quick shallow inductive study showing your product was proved safe
•  Identify that level of intimidating-sounding ‘science’ which would impress layman science communicators and the media.
•  Seek to bundle one’s technology risk with other technologies so as to hide any potential risk flagging signal
•  Pay university professors under the table, in order to engender their support against targeted enemies
•  Develop accounting practices which allow risk based profits to be hidden inside other organizations or facets of the organization

Cancer hazard exposure, corporate safety policy development, or the definition of safety in a regulatory setting – these are not examples of true risk strategy. They are applications in hazard exposure mitigation. Essential activities no doubt, but not ones which properly address risk. They are not science (‘I learn’), but rather its pretense or finish, sciebam (‘I knew’).

This is like having regulatory officials who hand out speeding and parking tickets, yet turn a blind eye to thieves, violent or corporate criminals – then deeming themselves and their activity to constitute, ‘law enforcement’.

True risk value chain strategy assesses what we do not know, how that unknown compounds node-by-node through series or parallel activity, and how its flow and dynamic may impact a broad footprint of exposed stakeholders or value.

For an example of a simpleton science communicator who bears not the first clue about risk, look no further than Kavin Senapathy and her ‘expert’ article here: Clearing Up the Concept of Risk Assessment

In other words, a real risk strategy does real science – and a fake risk strategy pretends that it already knows everything it needs to know, does no more research, and just estimates the odds of something blowing up on them.

When knowledge is uncertain, experts should avoid pressures to simplify their advice. ~Andy Stirling, Nature – Keep it Complex

A fake risk strategy then conducts social manipulation in place of managing exposure and robustness through a Mitigation Plan. Very much akin to what fake skepticism does. This is why you observe these types of companies conducting their robust science after they have already rolled out their dangerous product. They got caught, and now the public is forcing them to do a risk strategy a posteriori.

A Risk Strategy is not the process of ‘identifying hazards’, and then assessing the ‘likelihood that a specific hazard will cause harm’ (our exposure). Unless you identify the hazard as ‘We lack knowledge’, all this charade does is serve to confirm what we already knew a priori. This is not the definition of risk, nor is this how a risk strategy is conducted regarding complex horizons. A mitigation plan serves to identify hazards, along with our exposure or robustness therein (Taleb, The Black Swan), but this cannot be done in a vacuum, nor as the first step.

Before we move on, as you can observe inside the definition of epistemological risk above, we have addressed inside six recent blog articles (each one hyperlinked in blue), the principles of sound research effort, the elements of hypothesis, study design and type, agency risk, along with the types and modes of inference and how we know what we know. These first six links constitute ‘the science’ behind a risk strategy. Which leaves open of course the final and seventh defining element in that same links list, the topic of ‘subject epistemological domain’. Domain epistemological risk is a component of the definition which is critical before one can assess the subject of objective risk in sufficient ethical fashion. This of course is the purpose and focus of this blog article; thus we continue with domain epistemological risk as it is defined inside a concept called the Risk Horizon.

If your Big-Corp has conducted all the scientific diligence necessary in the rollout of a risk-bearing technology
or medical intervention, then show me the Risk Strategy they employed
and should have posted and made available for stakeholder review.

Third-party systematic reviews conducted after the rollout of the technology or treatment do not constitute sufficient ethics nor science.

Inference Inside the Context of a Risk Horizon

What we have introduced with the above outline of risk is the condition wherein we as a body of science, or the society which accepts that body of science, have deployed a technology at a rate which has outpaced our competence with that technology domain. In other words, we have over-driven our headlights. We are either driving too fast for our headlights to help keep us safe, or we are driving on a surface which we are not even sure is a road, because our headlamps are too dim to begin with. This latter condition – the circumstance where our headlamps are so dim that we cannot distinguish the road – involves a principle which is the subject of this blog article: domain epistemological risk, or more accurately, the size of the domain of established competence and the resulting Risk Horizon. Below, we have redeveloped The Map of Inference such that it contrasts standard-context inference with that special hierarchy of inference which is exercised in the presence of either epistemological or objective risk. The decision theory, as well as the types of inference and study designs, are starkly different under each scenario of confidence development, per the following chart.

The Map of Inference Versus Risk Horizon

The first thing one may observe inside the domain chart above is that it is much easier to establish a case of risk (Objective Risk – modus praesens) than it is to conclusively dismiss one (Objective Risk – modus absens). That ethic may serve to piss off extraction-minded stockholders, but those are the breaks when one deploys a technology bearing public stakeholder risk.

Under risk, even an appeal to convention must bring unassailable evidence.

Rigor must be served. What one may also observe in the above chart are two stark contrasts between risk-based inference and standard inference. These two contrasts, in Risk Types I and II, are outlined below via the analogies of over-driving one’s headlights, or possessing too dim a set of headlamps. Each bears implications with regard to waste, inefficiency, and legal liability.

Risk Type I: Over-driving Our Headlights

Smaller Risk Horizon (Lower State of Domain Epistemological Risk)

First, when one moves from the context of trivial ascertainment of knowledge into an arena wherein a population of stakeholders is placed at risk – say, for example, the broadscale deployment of a pesticide or an energy-emitting system – the level of rigor required in epistemology increases substantially. One can see this under the column ‘Objective Risk modus absens‘. Here the null hypothesis shifts to the assumed presence of risk, not its absence (the precautionary principle). In other words, in order to prove to the world that your product is safe, it is not sufficient to simply publish a couple of Hempel’s Paradox inductive studies. The risk involved in a miscall is too high. Through the rapid deployment of technology, society can outrun its ability to competently use or maintain that technology safely – as might be evidenced by nuclear weapons, or a large dam project in a third world nation which does not have the educational nor labor resources to support operation of the dam. When we as a corporate technology culture are moving so fast that our pace outdistances our headlights – risk concatenates, or snowballs.

Example: 5G is a promising and powerful technology. I love the accessibility and speeds it offers. However, there is legitimate concern that it may suffer being deployed well before we know enough about the impact of this type of pervasive radiation on human and animal physiology. A wave of the indignant corporate hand, and inchoate appointment of the same skeptics who defended Vioxx and Glyphosate, is not sufficient scientific diligence. If I see the same old tired skeptics being dragged out to defend 5G, that is my warning sign that the powers deploying it have no idea what they are doing. I am all for 5G – but I want scientific deductive rigor (modus absens) in its offing.

Risk Type II: Headlamps Not Bright Enough

Extensive Risk Horizon (High State of Domain Epistemological Risk)

Second and moreover, this problem is exacerbated when the topic suffers from a high state of epistemological domain risk. In other words, there exist a high number of critical paths of consideration, along with a high degree of sensitive and influencing factors – very few of which we have examined or understand sufficiently. Inside this realm of deliberation, induction under the Popper Demarcation of Science not only will not prove out the safety of our product, but we run a high risk of not possessing enough knowledge to even know how to test our product adequately for its safety to begin with. The domain epistemological risk is high. When a corporate technology is pushed onto the public at large under such a circumstance, this can be indicative of greed, malice, or oppression. Risk herein becomes exponential. A technology company facing this type of risk strategy challenge needs to have its legal counsel present at its piloting and closing sessions.

Example: Vaccines offer a beneficial bulwark against infectious diseases. Most vaccines work. However, there is legitimate concern that we have not measured their impact in terms of unintended health consequences – whether as individual treatments, as treatments in groups, or at the ages administered. There exists a consilience (Consilient Induction modus praesens) of stark warning indicators that vaccines may be impacting the autoimmune, cognitive, and emotional well-being of our children.

We do not possess the knowledge which would allow us to deductively prove that our vaccines do not carry such unintended consequences. If one cites this as a condition which allows for exemption from having to conduct such study, such a disposition is shown in the chart above to constitute malice. When domain epistemological risk is high, and an authority which stands to derive power or profits from deployment of a technology inside that domain applies it by means of less-than-rigorous science (e.g. linear induction used to infer the safety of vaccines), this constitutes a condition of malice on the part of that authority.

Such conditions, where society is either outrunning its headlights or does not maintain bright enough headlamps, are what we as ethical skeptics must watch for. We must be able to discern the good-cop/bad-cop masquerade and the posturing poseur used-car salesmen of science, and stop the charade which makes a farce of science, injures our children, or serves to harm us all.

Our first diligence as technology sponsors and deploying corporations is the protection of our technology receiving/adopting community. We, more than anyone, should be absolutely convinced that we substantially know our domain – and more importantly, that we know what we do not know – before we can parade out our technology with the ethical confidence that we have protected our stakeholders.

The Ethical Skeptic, “Epistemological Domain and Objective Risk”; The Ethical Skeptic, WordPress, 23 May 2019; Web, https://wp.me/p17q0e-9ME

Heteroduction – When Classic Inference Proves Unsound

There exists a circumstance for skepticism wherein a nagging repetitive anecdote inside the general public experience just will not go away. Sometimes inference can be drawn from what is denied, contradictory, or unknown, and not simply from what is consistent with what we know. Heteroduction is a disruptive and asymmetric form of inference which resides at the heart of the Kuhn-Planck Theory of Scientific Revolution.

Much to the chagrin of fake skeptics, certain phenomena and archetypes in the realm of human experience will just not go away. Specific subjects they disdain are irritatingly bolstered by almost daily repeated observation on the part of the general public. Inside many of these topics, the idea that such disdained phenomena constitute a mere figment of overzealous imaginations has been falsified over and over. But this will never satisfy the mind of a fake skeptic. They extrapolate a condition of difficulty in terms of classic inference to therefore stand as a basis for inferring the phenomenon’s absence as well (appeal to ignorance). They then invoke the name of science as a USDA stamp of certification on such putrid products of ‘critical thinking’. To the ethical skeptic, such skeptical casuistry is folly.

My thoughts regarding this condition, what I have termed the contrathetic impasse, revolve around a new approach to research and inference – one which we employed inside Intelligence, during my days therein. This is the form of research which might be performed by an investigator. This ilk of researcher does not hold an entire body of pre-knowledge (prior art), and must assemble such as part of the discovery process inside their research method. It is not that this mode of inference or means of research has not existed all along; rather, my point is that this form of research is denied its own meaning and identity inside acceptable science method. Skeptics regard investigators and sponsors as lower, invalid forms of scientist – pseudoscientists. Nothing could be further from the truth.

A Necessity for Heteroduction

The form of research and mode of inference this style of researcher employs involves a circumstance/conundrum exhibiting the following cohesive set of characteristics – ones common to all subjects which labor under this burden:

1.  Locus of study resides inside an enigma or apparent enigma which bears detection, but is denied meaning (See Descriptive Wittgenstein Error)

2.  Its logical critical path bears asymmetry or is unduly influenced by agency

3.  Its observations are ephemeral, hard to quantify and involve apparent sublime factors

4.  Observations are cherry sorted by skeptics in favor of reliability over their probative potential

5.  There exists an appeal-to-authority hostility toward the subject domain (Embargo Hypothesis – Hξ)

6.  The disciplines of lab/linear style hypothesis, deduction and induction have not proved to constitute sufficient inference methodologies to make progress inside the enigma

7.  More is unknown than is known regarding the entailed subject domain.

Solving a murder (deduction), discovering a non-chlorine hand sanitizer for Ebola-stricken areas (linear induction), or arriving at a conclusion about the character of a person (triangulating induction) – none of these constitutes a sufficient method of inference under the condition outlined above. This condition demands much more – a form of Intelligence, if you will – than it demands a basic form of intellectual exercise or inference.

Sometimes inference can be drawn from what is denied, contradictory, or unknown, and not simply from what is consistent with what we know.

In the list to the right, you can observe the various modes of induction, ranked according to probative strength. Heteroduction (in red) is not so much strong in its relative ranking as a form of inference, as it is key in its role as possibly the only avenue of recourse once science and society have reached a contrathetic impasse. Observations have been proven to exist, but classic means of research have failed to produce critical answers.

Maybe one of the first steps inside this battle revolves around prompting philosophers of science to recognize this ‘new’ form of induction in the first place. Perhaps this is why fake skeptics patrol philosophy as well, to ensure that this form of inference is never understood nor accepted.

Heteroduction

/philosophy : inference/ : a disruptive and asymmetric form of inference necessary when classic modes of inference have served to produce or enforce incoherent and/or falsified conclusions. Heteroduction is associated less commonly with classic incremental hypothesis, and more with a process of investigation called intelligence assimilation. A novel form of inference which does not or cannot rely solely upon leveraging an incremental extrapolation of risk from that which is alike to our prior art. Rather, this method of inference must pool and draw inference from that which is unlike our prior art. It is the basis of the Kuhn-Planck Paradigm Shift understanding of scientific revolutions.

Heteroduction is strong because it leverages inconsistent observation as a form of coincident falsification and deduction.
Falsifications and deductions of high probative value which are erroneously or surreptitiously dismissed
because of their perceived lack of consistency, conformance or salience.

One must establish a consilient shitload in confirmation of standing wisdom, in order to counter one violation of it.
Because a single instance of violation of our wisdom is vastly more scientifically informative than is any particular instance of its confirmation.
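A rough information-theoretic sketch of why this is so (in Python; the probabilities are illustrative only): under standing wisdom, a confirming observation is expected and therefore carries almost no information, while a sound violation is rare and carries a great deal:

```python
# Illustrative only: information content (surprisal) of an observation
# under a standing model which predicts confirmation 99% of the time.
from math import log2

p_confirm, p_violate = 0.99, 0.01

def surprisal(p: float) -> float:
    return -log2(p)

print(f"confirmation: {surprisal(p_confirm):.3f} bits")  # ~0.014 bits
print(f"violation:    {surprisal(p_violate):.3f} bits")  # ~6.644 bits
# One sound violation carries roughly the information of 460 confirmations.
```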

There are certain subjects wherein their modus absens (absence as an object or state) has been falsified. In other words, Ockham’s Razor plurality has been surpassed, and ethical research now demands their investigation. These are the domains which are best researched by the intelligence specialist: that form of investigator who knows how to assemble prior art and chase a consilience of information, all of which has proved to be unlike much of what we have seen before. But such a researcher must understand that what is forbidden – and the puzzle-piece nubs which are cut off in order to make the pieces a better ‘fit’ inside the a priori puzzle – can also often be assembled into the truth. Such is a predictable foible of mankind.

An Example of Heteroduction

For instance, dark matter is a one-idea-solves-all proposition which is raised as a result of cataloging a set of anomalous observations regarding universal/galactic motions in their relation to our understanding of gravity. Classic linear induction would dictate that we craft dark matter as the incremental element which would function to conserve general relativity and Lambda-CDM models as the null hypothesis in the face of such a growing set of conflicting observations. The reader may be forgiven for confusing such activity with ‘belief’. An ethical skeptic understands that the null hypothesis should never enjoy the luxury of becoming a belief.

Heteroduction, in contrast, would coalesce all these same anomalous observations (see below) into a competing paradigm – observations which either are unlike anything we have ever seen, or even contradict our current prior art on the subject. Heteroduction in this instance serves to develop a grounded-but-novel explanatory schema for these observations, assembled into a new competing construct (hopefully later a hypothesis, if it can survive fake skepticism). Quantized Inertia stands as a key example of heteroduction in action.

Linear Induction

Dark Matter – a hypothetical form of matter that is thought to account for approximately 85% of the matter in the universe, and about a quarter of its total energy density. Its presence is inductively implied in a variety of astrophysical observations, including gravitational effects that cannot be explained unless more matter is present than can be seen.1

A person conducting heteroduction would sound a warning on this line of reasoning – if enforced as a truth, rather than as the null hypothesis (note that I am not arguing against Dark Matter as a construct, simply using its deliberation as exemplary here).

Heteroduction

Quantized Inertia (QI) – previously known by the acronym MiHsC (Modified Inertia from a Hubble-scale Casimir effect), is the concept first proposed in 2007 by physicist Mike McCulloch, as an alternative to general relativity and the mainstream Lambda-CDM model. Quantized Inertia is posited to explain various anomalous effects such as the Pioneer and flyby anomalies, observations of galaxy rotation which forced Dark Matter’s introduction and propellantless propulsion experiments such as the EmDrive and the Woodward effect. It is a theory of inertia-like resistance arising from quantum effects, which serves to function in the place of dark matter –  as the necessary conjecture explaining ‘missing matter/gravitation’ in our cosmological models.2

For a better framing of QI Theory than I can render here, one can find a common sense summary within this video (which is also recommended by physicist Mike McCulloch):  The Fringe Theory Which Could Disprove Dark Matter

The Unruh effect, the Casimir effect, information coding/compression theory, and the missing mass of galactic rotation – all of which provide the praedicate to QI theory – are well established constructs inside modern science. Each subject outlines artifacts of observation unlike any we have observed before – anomalies which prompt scientists to go ‘huh?’. However, it is the probative potential of such observations, combined with this very nature of being unlike our standing prior art on the subject, which suggests their necessary combination into a new theoretical paradigm. This process/mode of inference is called heteroduction. It becomes necessary when classic forms of inference (the top ones in the chart above) have run their course in ability to provide explanatory or predictive power, and a critical mass of exception/falsifying observations continues to accrue.

True science challenges its null hypothesis, and this construct/hypothesis challenges the null hypothesis within a reasonable basis of soundness. This does not mean that QI as an idea is therefore correct, rather that it stands as a potential foundational stone inside a Kuhn-Planck Paradigm Shift. The mode of inference and the method of investigation remain valid, regardless of whether the QI alternative pans out to be true in the end. It is indeed science.

In contrast, there exist several darker forms of inference, a key one of which is panduction.

Panduction

/philosophy : invalid inference/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop ‘everything but what my club believes’ as constituting one group of bad people, who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series studies nor replication methodology can be derived from this type of ‘study’, which in essence serves to make it pseudoscience. This is a common ‘study’ format conducted by social skeptics masquerading as scientists, in order to pan people and subjects they dislike.

As such, an idea like QI, which hinges upon heteroduction, cannot be equated with pseudoscience, as Brian Koberlein did in a Forbes (no surprise here to followers of The Ethical Skeptic) article on 15 February 2017.3 I am not necessarily a proponent of Quantized Inertia, but this form of ‘I am God’ journalism, purposed a priori with the sole objective of harming (scienter) researchers for daring to think differently, constitutes a Richeliean appeal-to-authority on the part of Brian Koberlein. Brian exhibits here a longstanding problem in science, and not any form of its valid expression. His appeals to ‘peer review’ and to opponent ‘resistance to criticism (infer: invalidation)’ ring with sounds of familiarity to the experienced ethical skeptic and investigator. Not that those things are wrong as aspects of science; rather, they are the common last-resort implements of the scoundrel when used to counter otherwise sound evidence and scientific method – a circumstance wherein the poseur has exhausted the depths of their technical competence and now must resort to sciencey-sounding rhetoric.

One can ascertain from the Forbes article that Brian understands fully he will be rewarded with immediate monkey-with-a-gas-can credibility (and future income) through visibly bullying a weaker target and slinging a couple of familiar terms about. It is one thing to professionally disagree – another thing altogether to call something which possesses valid mechanism and observation ‘pseudoscience’. This is not ‘scientific criticism’. This is a Wittgenstein object called evil (harm as a first priority, through misrepresentation with scienter):

Rather than addressing criticism, you start building a story where your idea is obviously right, and others are simply too closed-minded to see it. Down that path lies pseudoscience, and sometimes you can watch it happening. Take for example, Mike McCulloch’s theory of Modified inertia by a Hubble-scale Casimir effect (MiHsC), also known as quantized inertia.4

~Brian Koberlein, Astrophysicist and Forbes Contributor

It is not that Brian’s conclusion is wrong; more importantly, his mode of inference (panduction) is unsound. His method is wrong and will only serve to propagate ignorance. It forces the advancement of science to rely critically not upon discovery, but rather upon the eventual passing of its participants.

Science advances through disruptive shifts based upon heteroduction, and only after the posing skeptics of conformance all die.
The intrinsically deductive nature of death, therefore, may stand as mankind’s most profound form of scientific inference.

Brian starts by assuming the proposition to be wrong (an amazing feat of panductive critical thinking – see the chart above), and then straw-man frames the thought behind its competing idea as originating from ‘building a story’ (infer ‘lie’, dear reader). This constitutes an overreach in skepticism, as this circumstance may constitute simply a matter of a necessary competing construct (see Embargo of the Necessary Alternative is Not Science).

Under Brian’s method outlined here, we are done with science as a key bulwark to the future of humanity – as no new idea can ever be developed again. Nothing but academic journalism from here on out, folks – get on the bus or be pseudoscience. We are the science, you are not. Papers published will be constrained to only those which serve to stroke the egos of those who have achieved journalistic tenure, and which propose hypotheses conjecturing additional novel tidbits outlining how brilliant and correct we have always been. This is nonscientific propaganda, a form of bullshit common to Forbes and its contributors.

It is not that dark matter is invalid as a construct or theory; rather, the challenge resides in exposing this fake form of its enforcement. A philosophical experiment which will serve to benefit future generations in combating methodical cynicism and ignorance.

It is this very process of

  • denying a whole method of inference its own meaning and role
  • invalidating (not ‘criticizing’) a scientific enigma because of its asymmetrical challenge and sublime observation base
  • obsessing over reliability to the sacrifice of understanding, and
  • Richeliean appeal to authority

which stands as the set of conditions making it necessary that heteroduction now be accepted as a mode of inference – a mode of inquiry which resides at the heart of the ethical, talented intelligence specialist. It is up to the ethical skeptic to ensure that such researchers and avenues of research are shielded from the nefarious forces which would see to their premature demise.

The Ethical Skeptic, “Heteroduction – When Classic Inference Proves Unsound”; The Ethical Skeptic, WordPress, 27 Jan 2019; Web, https://wp.me/p17q0e-9kh