The Ethical Skeptic

Challenging Agency of Pseudo-Skepticism & Cultivated Ignorance

Epistemological Domain and Objective Risk Strategy

If the relevant domain of a subject is largely unknown, or insufficient study along any form of critical path of inference has been developed, then it is invalid to claim, or imply through a claim, that ignorance has been sufficiently dispelled in order to offset risk – especially that ignorance which is prolific and may serve to impart harm to at-risk stakeholders, rather than simply expose cronies to a hazard. After dealing with the malice of those who shortcut science in order to turn a quick profit, one is often left feeling the need for a clean shower.

C’mon Chief, You’re Overthinking This Thing

As a younger man, I ventured out one afternoon with the thought in mind of buying a new car. My old Toyota had 185,000 miles on it, and despite all the love and care I had placed into that reliable vehicle, it was time to upgrade. ‘Timothy’, as I called my car, had served me well through years of Washington D.C.’s dreadfully monotonous 6:00 am Woodrow Wilson Bridge traffic, getting to Arlington, the Pentagon and the Capitol District, through to graduate school classes, and finally getting home nightly at 11:00 pm. My beloved car was just about worn out. So I selected the new model that I wanted and proceeded one Saturday to a local dealer. The salesperson and I struck a deal on my preferred model and color, with approval from the sales manager skulking remotely as the negotiator within some back office. Always take this as a warning sign: any time a person who is imbued with the power to strike a deal will not sit with you face to face during the execution of that deal, you are witnessing a form of good-cop/bad-cop routine. However, being that this was only my second car purchase, I accepted this as normal and shook hands with the salesperson upon what was in reality a very nice final price on my prospective new car.

The polite and professional salesperson led me down a hallway and handed me off into the office of the closing manager. The closing manager was a fast-talking administrative professional whose job it was to register the sale inside the corporate system, arrange all the payment terms, declarations, insurance and contracts, remove the car from inventory, register the sale with the State, and affix all the appropriate closing signatures. A curiously high-paying position assigned to execute such a very perfunctory set of tasks. The closing manager sat down and remarked what an excellent Saturday it had been, and then added that he was glad that I was his “last sale of the evening.” He had a bottle of cognac staged on his desk, ready to share a shot with the sales guys who had delivered an excellent performance week. The closing manager pulled up the inventory record and then printed out the sales contract in order to review it with me. In reviewing the document, I noted that the final closing figure listed at the bottom of the terms structure was $500 higher than the agreed price I had just struck with the sales manager. The closing manager pointed out that the figure we had negotiated did not reflect the ‘mandatory’ addition of the VIN being laser-engraved into the bottom of each of the windows. The fee for the laser engraving, and believe him (*chuckle) it was well worth it, was $500. If the vehicle was ever stolen, the police would be asking me for this to help them recover the vehicle. Not to worry however, the laser engraving had already been applied at the factory. This was an administrative thing, really.

Raising objection to this sleight-of-hand tactic, I resolved to remain firm and expressed my intent to walk out the door if the $500 adder was not removed from the contract. The closing manager then retorted that he did not have time to correct the contract as “the agreement had already been registered in the corporate system” and he would “have to back that out of the system and begin all over again.” To which I responded, “Then let’s begin all over again.” Thereupon, the closing manager said that he had to make a quick call home. He called his spouse and in very dramatic fashion exclaimed “Honey, tell our son that we will be late to his graduation because I have to re-enter a new contract here at the last hour. What? He can’t wait on us?” The clerk held the phone to his chest and said, “I am going to have to miss my son’s graduation.” (This reminded me of being told that, since I question Herodotus’ dating of the Khufu Pyramid, along with his claim that he even physically traveled to Egypt in the first place, I therefore ‘believe that aliens built the pyramids and am racist towards Egyptians’.) Having grown absolutely disillusioned as to the integrity of this whole farce, I responded, “OK, attend your son’s graduation and I will come back some other time.” “Surely they do not think I am this dumb. Do I look stupid or something?” I mulled while getting up from my chair and proceeding out the door in disgust.

I was met in the exit hallway by the previously hidden bad-cop, the sales manager. “Wait, wait, Chief, you’re overthinking this thing. You don’t understand – we have given you a great price on this vehicle. I have a guy who wants to take this particular inventory first thing in the morning.” To which I responded, “Well, make sure you tell him about the mandatory laser engraving fee,” fluttering my hands upward in mock excitement. My valuable weekend car shopping time had been wasted by manipulative and dishonest fools. It was not simply that I did not know about the engraving fee; rather, I did not even know that I did not know about the potential of any such fake fee. The epistemic domain had been gamed for deception. They had allowed me to conduct my science, if you will, inside a purposeful and crafted charade in ignorance – Descriptive Wittgenstein Error. They had hoped that the complexity of the sales agreement would provide disincentive for me to ‘overthink’ and spot the deal shenanigans. I walked out of the showroom feeling like I needed to immediately go home and take a shower.

Whenever someone pedantically instructs you that you are overthinking something, under a condition of critical path or high domain unknown, be very wary. You are being pitched a con job.

If you have not departed from the critical path of necessary inference, or if the domain is large and clouded with smoke and mirrors, never accept an accusation of ‘overthinking’. Such cavil constitutes merely a simpleton’s or manipulative appeal to ignorance.

Domain Ignorance and Epistemological Risk

What this car sales comedy serves to elicit is a principle in philosophy called an ‘ignorance of the necessary epistemological domain’, or the domain of the known and unknown regarding one cohesive scientific topic or question. Understanding both the size of, as well as that portion of science’s competent grasp of, such domain/unknown is critical in assessing scientific risk – to wit: the chance that one might be bamboozled on a car contract because of a lack of full disclosure, or the chance that millions of people will be harmed through a premature rollout of a risky corporate technology which has ‘over-driven its headlights’ of domain competency, and is now defended by an illegitimate and corrupt form of ‘risk strategy’ as a result.

There are two distinct species of scientific risk: epistemological risk and risk involving an objective outcome. In more straightforward terminology, the risk that we don’t know something, and the risk that such not-knowing could serve to impart harm.

Before we introduce those two types of risk however, we must define how they relate to and leverage from a particular willful action – a verb which goes by the moniker ignorance. Ignorance is best defined in its relationship to the three forms of Wittgenstein error.1 2 3

Ignorance – a willful set of assumptions or lacks thereof, outside the context of scientific method and inference, which result in personal or widespread presence of three Wittgenstein states of error (for a comprehensive description of these error states, see Wittgenstein Error and Its Faithful Participants):

Wittgenstein Error (Contextual)
    Situational:  I can shift the meaning of words to my favor or disfavor by the context in which they are employed
Wittgenstein Error (Descriptive)
    Describable:  I cannot observe it because I refuse to describe it
    Corruptible:  Science cannot observe it because I have crafted language and definition so as to preclude its description
    Existential Embargo:  By embargoing a topical context (language) I favor my preferred ones through means of inverse negation
Wittgenstein Error (Epistemological)
    Tolerable: My science is an ontology dressed up as empiricism
        bedeutungslos – meaningless or incoherent
        unsinnig – nonsense or non-science
        sinnlos – mis-sense, logical untruth or lying.

Now that we have a frame of reference as to what is indeed ignorance (the verb), we can cogently and in a straightforward manner define epistemological domain, along with the two forms of scientific risk: epistemological risk and objective risk. This is how a risk strategy is initiated.

Epistemological Domain (What We Should Know)

/philosophy : skepticism/ : what we should know. That full set of critical path sequences of study, along with the salient influencing factors and their imparted sensitivity, which serve to describe an entire arena of scientific endeavor, study or question, to critical sufficiency and plenary comprehensiveness.

Epistemological Risk (What We Don’t Know and Don’t Know That We Don’t Know)

/philosophy : skepticism : science : risk/ : what we don’t know and don’t know that we don’t know. That risk in ignorance of the necessary epistemological domain, which is influenced by the completeness of science inside that domain; as evidenced by any form of shortfall in

•  quality of observational research,
•  nature and reach of hypothesis structure,
•  appropriateness of study type and design,
•  bootstrap strength of the type and mode of inference drawn,
•  rigor of how and why we know what we know,
•  absence or presence of operating agency, and finally
•  predominance or subordinance of the subject domain’s established domain risk (subject of this blog)

The next step after defining these elements of risk is to undertake a Risk Strategy. The purpose of a risk strategy is to translate epistemological risk into objective risk and then set out an ethical plan which serves to shield at-risk stakeholders from its impact. As a professional who develops value chain and risk strategies, I remain shocked at the number of risky technological roll-outs, enacted by large and supposedly competent field-subject corporations, which are executed inside a complete vacuum in terms of any form of risk strategy at all. When the lay public or their representatives challenge your technology’s safety, your ethical burden is not to craft propaganda and social advocacy, but rather to issue the Risk Strategy which was prosecuted, in advance of the technology rollout, to address their concerns. Two current examples of such unacceptable circumstance, framed inside the analogy of ‘car headlights’, are highlighted later in this article.

What is a Risk Strategy?

One way in which such matters are addressed in industry (when they are addressed – which is rarely) is to conduct a form of value chain strategy called a risk chain evaluation or ‘risk strategy’. Risk flows in similar fashion to a value or margin chain: it concatenates, snowballs, and increases non-linearly. It is not a stand-alone element unto itself, but rather part of the fabric of the mission, product or channel of service being undertaken. A risk strategy is usually done as part of a value chain strategy.4 Both forms of analysis involve the flow of value, matched against the counter-flow of resources. Risk is simply an objectified species of value – so the competent technology company, when choosing to conduct a risk strategy, will often seek the counsel of a value chain strategy firm, in order to come alongside and assist its project executives and managers through a Risk Strategy workplan. Despite the complex-sounding framework presented here, the subject is only complex in its generic description. Once applied to a specific technology, market or treatment, the actual execution of a risk strategy as part of a value chain or branding strategy becomes very straightforward.

A risk strategy is not congruent with a hazard exposure assessment. In assessing hazards, one already knows what the dangers are, and is measuring potential harm (exposure) to earnings/insurers/stockholders.

In a risk strategy, an operating group is identifying what they do not know (in advance of identifying hazards), and how that lack of knowing can serve to harm brand/mission/stakeholders/environment/clients.

A risk strategy is developed in industry by first conducting a piloting session, which kicks off two steps. The first tasks a team which is assigned to develop the value chain description (Question 1 below) of the entailed domain (the critical path of a product development horizon, a brand strategy, a legal argument, or an imports channel for example). A second step then involves development of epistemological risk and slack factors, measures and sensitivities which can be assigned to each node (action/decision) in the risk chain series mapped during the first step (Questions 2 – 7 below). These shortfalls in diligence are derived from the general categorizations defined (with links) under ‘Domain Epistemological Risk’ above. This does not actually take that long if the group is guided by an experienced professional. The groups who conducted the two steps above then reconvene and develop the answer to Question 8 as the final step.

A Risk Strategy seeks to prosecute the following set of questions, in order:

1.  What is the state of current industry of observational research, and how much of the subject domain has been touched? Map the subject domain and its core critical path arguments/issues (elements)/sensitivities (the ‘footprint’).

2.  How many novel and tested hypotheses have addressed this domain footprint (articles, systematic reviews, editorials do not count)? How many are actually needed in order to fairly address the footprint domain risk?

3.  What types and designs of study have been completed regarding each hypothesis, and were they sufficient to the problem? Has there been torfuscation?

4.  What was the bootstrap strength of the type and mode of inference drawn from these studies? Was it merely inductive? Can deductive work be done? Does methodical deescalation exist inside the industry?

5.  Prosecute the state of the industry under the standard of ‘How we know, what we know’. Is it sound? What ‘agencies’ exist and do they constitute a domain problem?

6.  Establish the risk horizon of ‘unknown knowns’ and ‘unknown unknowns’. How predominant or subordinate is this set, as compared to the overall domain of knowledge?

7.  Finalize Risk Chain mapping and develop a Risk Horizon by Type (see below) for each critical path issue identified in step 1.

8.  How do we take actions to mitigate the Risk Horizon, and how do we craft organizational mission and brand around these now-ethical principles?
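The eight-step question sequence above can be sketched as a simple bookkeeping structure. This is a hypothetical illustration only; the function and variable names are my own assumptions, not part of any established risk-strategy framework:

```python
# Illustrative sketch: track which of the eight risk-strategy questions remain
# open for a given critical-path element. Names are assumptions for this sketch.
QUESTIONS = [
    "observational research state mapped",
    "hypothesis coverage assessed",
    "study types/designs sufficient",
    "inference bootstrap strength rated",
    "how-we-know soundness and agency prosecuted",
    "unknown-unknown horizon established",
    "risk chain and horizon-by-type mapped",
    "mitigation and mission actions defined",
]

def open_diligence(answered: set) -> list:
    """Return the questions still unanswered, in critical-path order.
    Any open item means the Risk Horizon is not yet characterized."""
    return [q for i, q in enumerate(QUESTIONS, start=1) if i not in answered]

# Example: a rollout that skipped the unknown-unknown (6) and mitigation (8) steps.
gaps = open_diligence(answered={1, 2, 3, 4, 5, 7})
print(gaps)
# → ['unknown-unknown horizon established', 'mitigation and mission actions defined']
```

The point of the sketch is merely that the questions are ordered and cumulative: an unanswered earlier question invalidates confidence in the later ones.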

Once done competently, the company which conducts a risk strategy will shine like a beacon, competitively, against short-cut minded competitors. The two colors, orange and red, on the right in the following chart depict our ‘risk horizon’: that which we as a deploying corporate entity do not know that science already knows, and that which we do not know that we do not know. These are the domains of ignorance which serve to endanger an at-risk technology stakeholder through objective risk.

The Horizon of Epistemological Risk

High Epistemological Domain Risk: there exist a high number of critical paths of consideration, along with a high degree of sensitive and influencing factors – very few of which we have examined or understood sufficiently.

Lower Epistemological Domain Risk: there exist a low or moderate number of critical paths of consideration, along with a reasonable degree of sensitive and influencing factors – many or most of which we have examined and begun to understand sufficiently.

Once epistemological risk is mapped (1-7 above, or ‘what we don’t know’), then a mitigation approach is developed which can serve to rate, triage and then minimize each risk element, or reduce the effect of risk elements combining into unintended consequences (how what we don’t know can serve to harm someone or something). Stand-alone risks are treated differently than are concatenated or cumulative escalating (snowballing) risks. However all risks are measured in terms of virtual (non-realized) consequences. These consequences are what is deemed inside risk theory as ‘objective risk’.
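The distinction between stand-alone and concatenated (snowballing) risk can be illustrated with a minimal sketch. The aggregation formulas here are generic assumptions I have chosen for illustration, not the author's method:

```python
# Sketch (assumed notation): stand-alone risks are triaged independently,
# while concatenated risks compound non-linearly along the chain.
from math import prod

def standalone(risks: list) -> float:
    """Independent elements: triage by the single worst exposure."""
    return max(risks, default=0.0)

def concatenated(risks: list) -> float:
    """Chained elements snowball: 1 - prod(1 - r_i) exceeds any single r_i
    and grows with every node added to the chain."""
    return 1.0 - prod(1.0 - r for r in risks)

risks = [0.10, 0.15, 0.20]
print(standalone(risks))               # → 0.2
print(round(concatenated(risks), 3))   # → 0.388
```

Three modest unknowns, each tolerable alone, combine into a chain-level exposure nearly double the worst of them – which is why a risk strategy cannot treat chained nodes as isolated hazards.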

Objective Risk (What Harm Might Result)

/philosophy : science : technology : risk/ : what harm might result from our not knowing. The risk entailed as a result of an outcome inside a particular state of being or action, stemming from a state of high epistemological risk, and which might result in an increase in the ignorance itself and/or in harm and suffering to any form of at-risk stakeholder. Hazards are identified along with estimates for exposure and robustness efforts inside a Mitigation Plan. Objective risk comes in two forms.

Risk Type I constitutes a condition of smaller Risk Horizon (lower epistemological risk) wherein our exposure resides in deploying a technology faster than our rate of competence development inside its use context.

Risk Type II is the condition wherein the Risk Horizon is extensive (our knowledge is low), yet we elect to deploy a technology or treatment despite these unknown levels of risk horizon exposure.
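The two objective-risk types above might be distinguished as follows. This is a minimal sketch; the threshold value and the pace parameters are illustrative assumptions only, not quantities the article defines:

```python
# Hypothetical classifier for the two objective-risk types described above.
# The 0.5 horizon threshold and pace inputs are assumptions for illustration.
def risk_type(horizon: float, deployment_pace: float, competence_pace: float):
    """horizon: fraction of the subject domain that remains unknown (0..1).
    Type II: the horizon itself is extensive (headlamps too dim).
    Type I: the horizon is smaller, but deployment outpaces competence
    (over-driving the headlights). None: neither condition holds."""
    if horizon >= 0.5:
        return "Type II"
    if deployment_pace > competence_pace:
        return "Type I"
    return None

print(risk_type(0.7, 1.0, 1.0))  # → Type II
print(risk_type(0.2, 3.0, 1.0))  # → Type I
print(risk_type(0.2, 0.5, 1.0))  # → None
```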

The last step involves a plan to address how we head off the virtual hazards the team has identified. However, there are certain things which ‘how we head it off’ does not mean – namely, those dark and questionable practices of monist, oligarch and crony-driven corporations, to wit:

What a Risk Strategy Does NOT Do

Do the following set of activities look familiar? They should, as this is the ethic of today’s monist/oligarch/crony operated entity. A real risk strategy conducts real science (see the definition and links under ‘Domain Epistemological Risk’ above) and follows, generally, the above process. Risk resides in what one does not know, not in what one does know. Its client is the technology company’s at-risk stakeholder community – and NOT the corporation, its insurers, nor its stockholders. The following very common tactics, in contrast, are not elements of a real risk strategy; they constitute rather a half-assed strategy of Court-defined malice and oppression:

Fake Risk Strategy

•  Identify ‘hazards’ and assess their likelihood of causing harm, and call that ‘risk’
•  Identify only hazards which bear a ‘risk’ of harming the insurer or stockholder
•  Identify foes and research their backgrounds for embarrassing information and smear campaigns
•  Develop a ‘talking points’ sheet of propaganda to hand to the media in advance of potential bad news
•  Develop astroturf ‘skeptics’ who are given target groups and individuals to harass with ‘science’
•  Hire celebrity skeptics to accuse anyone who dissents, of being ‘anti-science’
•  Hire Facebook, Twitter or Forbes to manage which voices get heard or ‘liked’
•  Identify the money needed to pay off legislative representatives for protection
•  Threaten universities with funding cuts if their tenured staff speak up about your technology
•  Execute mergers and acquisitions before stockholders have a chance to tender input to the Board of Directors
•  Prop up fictitious one-and-done labs to develop some quick shallow inductive study showing your product was proved safe
•  Identify that level of intimidating-sounding ‘science’ which would impress layman science communicators and the media.
•  Seek to bundle one’s technology risk with other technologies so as to hide any potential risk flagging signal
•  Pay university professors under the table, in order to engender their support against targeted enemies
•  Develop accounting practices which allow risk based profits to be hidden inside other organizations or facets of the organization

Cancer hazard exposure, corporate safety policy development or the definition of safety in a regulatory setting – these are not examples of true risk strategy. They are applications in hazard exposure mitigation. Essential activity no doubt, but not ones which properly address risk. They are not science (‘I learn’), but rather its pretense or finish, sciebam (‘I knew’).

This is like having regulatory officials who hand out speeding and parking tickets, yet turn a blind eye to thieves, violent or corporate criminals – then deeming themselves and their activity to constitute, ‘law enforcement’.

True risk value chain strategy assesses what we do not know, how that unknown compounds node-by-node through series or parallel activity, and how its flow and dynamic may impact a broad footprint of exposed stakeholders or value.

For an example of a simpleton science communicator who bears not the first clue about risk, look no further than Kavin Senapathy and her ‘expert’ article here: Clearing Up the Concept of Risk Assessment

In other words, a real risk strategy does real science – and a fake risk strategy pretends that it already knows everything it needs to know, does no more research, and just estimates the odds of something blowing up on them. A fake risk strategy then conducts social manipulation in place of managing exposure and robustness through a Mitigation Plan. Very much akin to what fake skepticism does. This is why you observe these types of companies conducting their robust science after they have already rolled out their dangerous product. They got caught, and now the public is forcing them to do a risk strategy a posteriori.

A Risk Strategy is not the process of ‘identifying hazards’, and then assessing the ‘likelihood that a specific hazard will cause harm’ (our exposure). Unless you identify the hazard as ‘We lack knowledge’, all this charade does is serve to confirm what we already knew a priori. This is not the definition of risk, nor is this how a risk strategy is conducted regarding complex horizons. A mitigation plan serves to identify hazards, along with our exposure or robustness therein (Taleb, The Black Swan), but this cannot be done in a vacuum, nor as the first step.

Before we move on, as you can observe inside the definition of epistemological risk above, we have addressed inside six recent blog articles (each one hyperlinked in blue), the principles of sound research effort, the elements of hypothesis, study design and type, agency risk, along with the types and modes of inference and how we know what we know. These first six links constitute ‘the science’ behind a risk strategy. Which leaves open of course the final and seventh defining element in that same links list, the topic of ‘subject epistemological domain’. Domain epistemological risk is a component of the definition which is critical before one can assess the subject of objective risk in sufficient ethical fashion. This of course is the purpose and focus of this blog article; thus we continue with domain epistemological risk as it is defined inside a concept called the Risk Horizon.

If your Big-Corp has conducted all the scientific diligence necessary in the rollout of a risk-bearing technology or medical intervention, then show me the Risk Strategy it employed – which should have been posted and made available for stakeholder review.

Third party systematic reviews conducted after the rollout of the technology or treatment, do not constitute sufficient ethics nor science.

Inference Inside the Context of a Risk Horizon

What we have introduced with the above outline of risk is the condition wherein we as a body of science, or the society which accepts that body of science, have deployed a technology at a rate which has outpaced our competence with that technology domain. In other words, we have over-driven our headlights. We are either driving too fast for our headlights to help keep us safe, or we are driving on a surface which we are not even sure is a road, because our headlamps are too dim to begin with. This latter condition, the circumstance where our headlamps are so dim that we cannot distinguish the road, involves a principle which is the subject of this blog article: domain epistemological risk, or more accurately, the size of the domain of established competence and the resulting Risk Horizon. Below, we have redeveloped The Map of Inference, such that it contrasts standard context inference with that special hierarchy of inference which is exercised in the presence of either epistemological or objective risk. The decision theory, as well as the types of inference and study designs, are starkly different under each scenario of confidence development, per the following chart.

The Map of Inference Versus Risk Horizon

The first thing one may observe inside the domain chart above, is that it is much easier to establish a case of risk (Objective Risk – modus praesens), than it is to conclusively dismiss one (Objective Risk – modus absens). That ethic may serve to piss off extraction-minded stockholders, but those are the breaks when one deploys a technology bearing public stakeholder risk. Rigor must be served. What one may also observe in the above chart are two stark contrasts between risk based inference and standard inference. These two contrasts in Risk Types I and II are outlined below via the analogies of over-driving headlights, or possessing too-dim a set of headlamps. Each bears implications with regard to waste, inefficiency and legal liability.

Risk Type I: Over-driving Our Headlights

Smaller Risk Horizon (Lower State of Domain Epistemological Risk)

First, when one moves from the context of the trivial ascertainment of knowledge into an arena wherein a population of stakeholders is placed at risk – say for example in the case of broadscale deployment of a pesticide or an energy-emitting system – the level of rigor in epistemology required increases substantially. One can see this under the column ‘Objective Risk modus absens‘. Here the null hypothesis shifts to the assumed presence of risk, not its absence (the precautionary principle). In other words, in order to prove to the world that your product is safe, it is not sufficient to simply publish a couple of Hempel’s Paradox inductive studies. The risk involved in a miscall is too high. Through the rapid deployment of technology, society can outrun our ability to competently use or maintain that technology safely – as might be evidenced by nuclear weapons, or a large dam project in a third-world nation which does not have the educational or labor resources to support operation of the dam. When we as a corporate technology culture are moving so fast that our pace outdistances our headlights, risk concatenates or snowballs.

Example:  5G is a promising and powerful technology. I love the accessibility and speeds it offers. However there is legitimate concern that it may suffer being deployed well before we know enough about this type of pervasive radiation impact on human and animal physiology. A wave of the indignant corporate hand, and inchoate appointment of the same skeptics who defended Vioxx and Glyphosate, is not sufficient scientific diligence. If I see the same old tired skeptics being dragged out to defend 5G – that is my warning sign that the powers deploying it, have no idea what they are doing. I am all for 5G – but I want scientific deductive rigor (modus absens) in its offing.

Risk Type II: Headlamps Not Bright Enough

Extensive Risk Horizon (High State of Domain Epistemological Risk)

Second and moreover, this problem is exacerbated when the topic suffers from a high state of epistemological domain risk. In other words, there exist a high number of critical paths of consideration, along with a high degree of sensitive and influencing factors – very few of which we have examined or understand sufficiently. Inside this realm of deliberation, induction under the Popper Demarcation of Science not only will not prove out the safety of our product, but we run a high risk of not possessing enough knowledge to even know how to test our product adequately for its safety to begin with. The domain epistemological risk is high. When a corporate technology is pushed onto the public at large under such a circumstance, this can be indicative of greed, malice or oppression. Risk herein becomes exponential. A technology company facing this type of risk strategy challenge needs to have its legal counsel present at its piloting and closing sessions.

Example: Vaccines offer a beneficial bulwark against infectious diseases. Most vaccines work. However there is legitimate concern that we have not measured their impact in terms of unintended health consequences – both as individual treatments and as treatments in groups, nor at the ages administered. There exists a consilience (Consilient Induction modus praesens) of stark warning indicators that vaccines may be impacting the autoimmune, cognitive and emotional well-being of our children.

We do not possess the knowledge which will allow us to deductively prove that our vaccines do not carry such unintended consequences. If one cites this as a condition which allows for exemption from having to conduct such study – such a disposition is shown in the chart above to constitute malice. When domain epistemological risk is high, and an authority which stands to derive power or profits from deployment of a technology inside that domain, applies it by means of less-than-rigorous science (eg. linear induction used to infer safety of vaccines), this constitutes a condition of malice on the part of that authority.

Such conditions, where society is either outrunning its headlights or does not maintain bright enough headlamps, are what we as ethical skeptics must watch for. We must be able to discern the good-cop/bad-cop masquerade and the posturing, poseur used-car salesmen of science, and stop the charade which makes a farce of science, injures our children, or serves to harm us all.

Our first diligence as technology sponsors and deploying corporations, is the protection of our technology receiving/adopting community. We, more than anyone, should be absolutely convinced that we substantially know our domain, and more importantly that we know what we do not know, before we can parade out our technology with the ethical confidence that we have protected our stakeholders.

   How to MLA cite this article:

The Ethical Skeptic, “Epistemological Domain and Objective Risk”; The Ethical Skeptic, WordPress, 23 May 2019; Web, https://wp.me/p17q0e-9ME


Heteroduction – When Classic Inference Proves Unsound

There exists a circumstance for skepticism wherein a nagging repetitive anecdote inside the general public experience just will not go away. The impasse wherein its absence has been falsified, yet classic forms of inference fail in deriving its presence. Such instance stands as an Ockham’s Razor necessity for the introduction of a new form of inference – one better suited to intelligence assimilation than classic academic study. A disruptive and asymmetric form of inference which resides at the heart of the Kuhn-Planck Theory of Scientific Revolution.

Much to the chagrin of fake skeptics, certain phenomena and archetypes in the realm of human experience will just not go away. Specific subjects they disdain are irritatingly bolstered by almost daily repeated observation on the part of the general public. Inside many of these topics, the idea that such disdained phenomena constitute a mere figment of overzealous imaginations has been falsified over and over. But this will never satisfy the mind of a fake skeptic. They extrapolate a condition of difficulty in terms of classic inference to therefore stand as basis for inferring the phenomenon’s absence as well (appeal to ignorance). They then invoke the name of science as a USDA stamp of certification on such putrid products of ‘critical thinking’. To the ethical skeptic, such skeptical casuistry is folly.

My thoughts regarding this condition, what I have termed the contrathetic impasse, revolve around a new approach to research and inference. One which we employed inside Intelligence, during my days therein. This is the form of research which might be performed by an investigator. This ilk of researcher does not hold an entire body of pre-knowledge (prior art), and must assemble such as part of the discovery process inside their research method. Not that this mode of inference or means of research has not existed all along; rather my point is that this form of research is denied its own meaning and identity inside acceptable science method. Skeptics regard investigators and sponsors as lower, invalid forms of scientist. Pseudoscientists. Nothing could be further from the truth.

A Necessity for Heteroduction

The form of research and mode of inference this style of researcher employs involves a circumstance/conundrum exhibiting the following cohesive set of characteristics – ones common to all subjects which labor under this burden:

1.  Locus of study resides inside an enigma or apparent enigma which bears detection, but is denied meaning (See Descriptive Wittgenstein Error)

2.  Its logical critical path bears asymmetry or is unduly influenced by agency

3.  Its observations are ephemeral, hard to quantify and involve apparent sublime factors

4.  Observations are cherry sorted by skeptics in favor of reliability over their probative potential

5.  There exists an appeal-to-authority hostility toward the subject domain (Embargo Hypothesis – Hξ)

6.  The disciplines of lab/linear style hypothesis, deduction and induction have not proved to constitute sufficient inference methodologies to make progress inside the enigma

7.  More is unknown than is known regarding the entailed subject domain.

Solving a murder (deduction), discovering a non-chlorine hand sanitizer for Ebola-stricken areas (linear induction), or arriving at a conclusion about the character of a person (triangulating induction) – none of these constitutes a sufficient method of inference under the condition outlined above. This condition demands much more – a form of Intelligence, if you will – than it does a basic form of intellectual exercise or inference. In the list to the right, you can observe the various modes of induction, ranked according to probative strength. Heteroduction (in red) is not so much strong in its relative ranking as a form of inference, as it is key in its role as possibly the only avenue of recourse once science and society have reached a contrathetic impasse. Observations have been proven to exist, but classic means of research have failed to produce critical answers.

Maybe one of the first steps inside this battle revolves around prompting philosophers of science to recognize this ‘new’ form of induction in the first place. Perhaps this is why fake skeptics patrol philosophy as well, to ensure that this form of inference is never understood nor accepted.

Heteroduction

/philosophy : inference/ : a disruptive and asymmetric form of inference necessary when classic modes of inference have served to produce or enforce incoherent and/or falsified conclusions. Heteroduction is associated less commonly with classic incremental hypothesis, and more with a process of investigation called intelligence assimilation. A novel form of inference which does not or cannot rely solely upon leveraging an incremental extrapolation of risk from that which is alike to our prior art. Rather, this method of inference must pool and draw inference from that which is unlike our prior art. It is the basis of the Kuhn-Planck Paradigm Shift understanding of scientific revolutions.

Heteroduction is strong because it leverages inconsistent observation as a form of coincident falsification and deduction.
Falsifications and deductions of high probative value which are erroneously or surreptitiously dismissed
because of their perceived lack of consistency, conformance or salience.

One must establish a consilient shitload in confirmation of standing wisdom in order to counter one violation of it.
Because a single instance of violation of our wisdom is vastly more scientifically informative than is any particular instance of its confirmation.

There are certain subjects, wherein their modus absens (absence as an object or state) has been falsified. In other words, Ockham’s Razor plurality has been surpassed and ethical research now demands their investigation. These are the domains which are best researched by the intelligence specialist; that form of investigator who knows how to assemble prior art and chase a consilience of information, all of which have proved to be unlike much of what we have seen before. But such a researcher must understand that what is forbidden, and the puzzle piece nubs which are cut off in order to make the pieces a better ‘fit’ inside the a priori puzzle, can also often be assembled into the truth. Such is a predictable foible of mankind.

An Example of Heteroduction

For instance, dark matter is a one-idea-solves-all proposition which is raised as a result of cataloging a set of anomalous observations regarding universal/galactic motions in their relation to our understanding of gravity.  Classic linear induction would dictate that we craft dark matter as the incremental element which would function to conserve general relativity and Lambda-CDM models as the null hypothesis in the face of such a growing set of conflicting observations. The reader may be forgiven for confusing such activity with ‘belief’. An ethical skeptic understands that the null hypothesis should never enjoy the luxury of becoming a belief.

Heteroduction in contrast, would coalesce all these same anomalous observations (see below) into a competing paradigm; observations which either are unlike anything we have ever seen, or even contradict our current prior art on the subject.  Heteroduction in this instance serves to develop a grounded-but-novel explanatory schema for these into a new competing construct (hopefully later hypothesis, if it can survive fake skepticism). Quantized Inertia stands as a key example of heteroduction in action.
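The anomaly set which both constructs seek to explain can be sketched numerically. Under Newtonian gravity sourced solely by visible matter, orbital speed beyond the luminous disk should fall off as 1/√r, whereas observed galactic rotation curves stay roughly flat. A minimal illustrative sketch follows; the mass figure is a hypothetical round number chosen for illustration, not a measurement of any particular galaxy:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1e41   # illustrative visible (baryonic) galactic mass, kg
KPC = 3.086e19     # one kiloparsec in meters

def keplerian_velocity(r):
    """Orbital speed predicted from visible mass alone: v = sqrt(G*M/r)."""
    return math.sqrt(G * M_VISIBLE / r)

# The Newtonian prediction falls with radius; observed curves remain ~flat.
for r_kpc in (10, 20, 30, 40, 50):
    v_kms = keplerian_velocity(r_kpc * KPC) / 1000.0
    print(f"{r_kpc:2d} kpc: predicted {v_kms:6.1f} km/s (observed curves stay roughly flat)")
```

Linear induction patches this divergence by conjecturing unseen mass; heteroduction instead asks whether the accumulating exceptions warrant a new explanatory paradigm altogether.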

Linear Induction

Dark Matter – a hypothetical form of matter that is thought to account for approximately 85% of the matter in the universe, and about a quarter of its total energy density. Its presence is inductively implied in a variety of astrophysical observations, including gravitational effects that cannot be explained unless more matter is present than can be seen.1

A person conducting heteroduction would sound warning on this line of reasoning – if enforced as a truth, rather than as the null hypothesis (note that I am not arguing against Dark Matter as a construct, simply using its deliberation as exemplary here).

Heteroduction

Quantized Inertia (QI) – previously known by the acronym MiHsC (Modified Inertia from a Hubble-scale Casimir effect), is the concept first proposed in 2007 by physicist Mike McCulloch, as an alternative to general relativity and the mainstream Lambda-CDM model. Quantized Inertia is posited to explain various anomalous effects such as the Pioneer and flyby anomalies, observations of galaxy rotation which forced Dark Matter’s introduction and propellantless propulsion experiments such as the EmDrive and the Woodward effect. It is a theory of inertia-like resistance arising from quantum effects, which serves to function in the place of dark matter –  as the necessary conjecture explaining ‘missing matter/gravitation’ in our cosmological models.2

For a better framing of QI Theory than I can render here, one can find a common sense summary within this video (which is also recommended by physicist Mike McCulloch):  The Fringe Theory Which Could Disprove Dark Matter

The Unruh effect, Casimir effect, information coding/compression theory and the missing mass of galactic rotation, all of which provide the praedicate to QI theory, are all well established constructs inside modern science. Each subject outlines artifacts of observation unlike any we have observed before – anomalies which prompt scientists to go ‘huh?’. However, it is the probative potential of such observations, combined with this very nature of being unlike our standing prior art on the subject, which suggests their necessary combination into a new theoretical paradigm. This process/mode of inference is called heteroduction. It becomes necessary when classic forms of inference (the top ones in the chart above) have run their course in ability to provide explanatory or predictive power, and a critical mass of exception/falsifying observations continues to accrue.

True science challenges its null hypothesis, and this construct/hypothesis challenges the null hypothesis within a reasonable basis of soundness. This does not mean that QI therefore as an idea is correct, rather that it stands as a potential foundational stone inside a Kuhn-Planck Paradigm Shift. The mode of inference and the method of investigation remain valid, despite whether or not the QI alternative pans out to be true in the end. It is indeed science.

In contrast, there exist several darker forms of inference, a key one of which is panduction.

Panduction

/philosophy : invalid inference/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop ‘everything but what my club believes’ as constituting one group of bad people, who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series of studies nor replication methodology can be derived from this type of ‘study’, which in essence serves to make it pseudoscience.  This is a common ‘study’ format which is conducted by social skeptics masquerading as scientists, to pan people and subjects they dislike.

As such, an idea like QI, which hinges upon heteroduction, cannot be equated with pseudoscience, as Brian Koberlein did in a Forbes (no surprise here to followers of The Ethical Skeptic) article on 15 February 2017.3 I am not necessarily a proponent of Quantized Inertia, but this form of ‘I am God’ journalism, purposed a priori with the sole objective of harming (scienter) researchers for daring to think differently, constitutes a Richeliean appeal-to-authority on the part of Brian Koberlein. Brian exhibits here a longstanding problem in science, and not any form of its valid expression. His appeal to ‘peer review’ and opponent ‘resistance to criticism (infer: invalidation)’ rings with sounds of familiarity to the experienced ethical skeptic and investigator. Not that those things are wrong as aspects of science; rather, they are the common last-resort implements of the scoundrel when used to counter otherwise sound evidence and scientific method. A circumstance wherein the poseur has exhausted the depths of their technical competence and now must resort to sciencey-sounding rhetoric.

One can ascertain from the Forbes article, that Brian understands fully he will be rewarded with immediate monkey-with-a-gas-can credibility (and future income) through visibly bullying a weaker target and slinging a couple of familiar terms about. It is one thing to professionally disagree – another thing altogether to call something which possesses valid mechanism and observation, ‘pseudoscience’. This is not ‘scientific criticism’. This is a Wittgenstein object called evil (harm as a first priority, through misrepresentation with scienter):

Rather than addressing criticism, you start building a story where your idea is obviously right, and others are simply too closed-minded to see it. Down that path lies pseudoscience, and sometimes you can watch it happening. Take for example, Mike McCulloch’s theory of Modified inertia by a Hubble-scale Casimir effect (MiHsC), also known as quantized inertia.4

~Brian Koberlein, Astrophysicist and Forbes Contributor

It is not that Brian’s conclusion is wrong. More importantly, his mode of inference (panduction) is unsound. His method is wrong and will only serve to propagate ignorance. It forces science advancement to rely critically not upon discovery, but rather upon the eventual passing of its participants.

Science advances through disruptive shifts based upon heteroduction, and only after the posing skeptics of conformance all die.
The intrinsically deductive nature of death therefore, may stand as mankind’s most profound form of scientific inference.

Brian starts by assuming the proposition to be wrong (an amazing feat of panductive critical thinking – see chart above), and then straw-man frames the thought behind its competing idea as originating from ‘building a story’ (infer: ‘lie’, dear reader). This constitutes an overreach in skepticism, as this circumstance may constitute simply a matter of a necessary competing construct (see Embargo of the Necessary Alternative is Not Science).

Under Brian’s method outlined here, we are done with science as a key bulwark to the future of humanity – as no new idea can ever be developed again. Nothing but academic journalism from here on out folks – get on the bus or be pseudoscience. We are the science, you are not. Papers published will be constrained to only those which serve to stroke the egos of those who achieved journalistic tenure, and can only serve to propose hypotheses which conjecture additional novel tidbits outlining how brilliant and correct we have always been. This is nonscientific propaganda, a form of bullshit common with Forbes and its contributors.

It is not that dark matter is invalid as a construct or theory; rather, the challenge resides in exposing this fake form of its enforcement. A philosophical experiment which will serve to benefit future generations in combating methodical cynicism and ignorance.

 It is this very process of

  • denying a whole method of inference its own meaning and role
  • invalidating (not ‘criticizing’) a scientific enigma because of its asymmetrical challenge and sublime observation base
  • obsessing over reliability to the sacrifice of understanding, and
  • Richeliean appeal to authority

which stands as the set of conditions which make heteroduction necessary now as an accepted mode of inference. A mode of inquiry which resides at the heart of the ethical, talented intelligence specialist. It is up to the ethical skeptic to ensure that such researchers and avenues of research are shielded from the nefarious forces which would see to their premature demise.

   How to MLA cite this article:

The Ethical Skeptic, “Heteroduction – When Classic Inference Proves Unsound”; The Ethical Skeptic, WordPress, 27 Jan 2019; Web, https://wp.me/p17q0e-9kh


January 27, 2019 Posted by | Ethical Skepticism | Leave a comment

Intuitionism: Inference versus Impulse

Ethical Skepticism maintains a healthy respect for Inductive and Deductive epistemological inference methods. However the philosophy itself, upon which these logical inference methods are based, stems from sources which cannot be fully defined as epistemological in the first place – save for the instance wherein we are able to test each derived tenet’s mettle through real-world application. An additional species of inference exists inside philosophy: that of Ethical Intuitionism. Unlike Impulse Inference, Ethical Intuitionism derives its basis and development practices from necessity and skilled instinct, not doctrine nor coerced conviction. It focuses primarily on the goals of value, clarity, risk, and probability as paramount above any particular conclusion alone.
Much of impulse originates through emotional damage and fear. But faith and metaphysical selection may still be ethical forms of inference exercised apart from such vulnerability.

We have just completed a blog about three types of logical inference. To be clear, these three species of logical inference are all logic-based forms of reason (see the left side of the chart to the right). There exist as well several other forms of inference. For instance, in mathematics we have the three disciplines of modeling & simulation, mathematical derivation itself and computation (the basis of Artificial Intelligence). There is however another and much more common (but often decried and denied) genre of inference methods. In order to introduce this form of inference, we should take a quick look again at the three common rational forms which were developed in our last blog (see The Three Types of Reason).

Abductive Reason

/Diagnostic Inference/ : a form of precedent based inference which starts with an observation then seeks to find the simplest or most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as inference to the best known explanation.1

Inductive Reason

/Logical Inference/ : is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument may be probable, based upon the evidence given combined with its ability to predict outcomes.2

Deductive Reason

/Reductive Inference/ : is the process of reasoning by reduction in complexity, from one or more statements (premises) to reach a final, logically certain conclusion. This includes the instance where the elimination of alternatives (negative premises) forces one to conclude the only remaining answer.3
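For the programming-minded reader, the three definitions above can be caricatured in a few lines of code – a toy illustration only, wherein the suspects, the swans and the wet-lawn explanations are invented examples rather than elements of any formal system:

```python
# Deduction (reductive inference): eliminating alternatives forces the
# only remaining conclusion, which is logically certain given the premises.
suspects = {"butler", "gardener", "cook"}
cleared = {"butler", "cook"}
conclusion = (suspects - cleared).pop()   # the one suspect left standing

# Induction (logical inference): premises supply strong but probabilistic
# support – the conclusion is probable, never guaranteed.
observed_swans = ["white"] * 99 + ["black"]
p_white = observed_swans.count("white") / len(observed_swans)   # 0.99, not 1.0

# Abduction (diagnostic inference): start from an observation ('the lawn
# is wet') and select the best known explanation from available precedent.
explanations = {"rain": 0.70, "sprinkler": 0.25, "burst pipe": 0.05}
best_explanation = max(explanations, key=explanations.get)
```

Note the gradient of certainty: deduction closes the question entirely, induction leaves a residual probability of error, and abduction merely nominates the strongest candidate among known precedents.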

Derivation

Our fourth form of inference is mathematics itself. However, let’s set that aside for now and focus on the next successive block after Derivation in the chart above: that of Intuitionism. Intuitionism involves a combination of both abductive and inductive pre-mindsets – a mathematician’s discipline combined with a philosopher’s license to base conjecture of principle. I say mindsets because deeming this form of inference a logical method is not a certainty. This form of inference can sometimes follow a method of logic, but often does not. It involves a set of hunch-based logics known collectively as Intuitionism.

Intuitionism

/Inference by Hunch/ : is the process of reasoning from a set of internally developed ideas – in part or alone without necessary reference to objective and a priori reality, sources, epistemology or belief. Such ideas may originate in part from unconscious or conscious extrapolations from prior training, including scientific, mathematical, social, experiential and religious. There are three general forms of Intuitionism.

Reason Based (Philosophy and Mathematics)

Ethical Intuitionism – a set of ideas that our intuitive awareness of value, or intuitive knowledge of clear evaluative facts and our ability to sense and measure plausibility, risk and probability, form the foundation of our ethical knowledge and knowledge development processes. This form of inference derives its basis from a solid background in inductive and deductive training and experience; however it does not demand that every inference be based solely upon sources, epistemology or belief. Since philosophy many times derives (by necessity) from relatively intuition-based inferences – it is rightfully thought of as a type of Ethical Intuitionism. Its quality is proved out through the success of the science which employs methods adhering to its tenets.4 5

Mathematical/Physical Intuitionism – an approach wherein mathematics (or alternately physics as well) is considered to be purely the result of the constructive mental activity of humans rather than inferred through our discovery of fundamental principles claimed to exist in a referenceless, objective and a priori reality. That is, logic and mathematics are not considered analytic activities wherein deep properties of objective reality are revealed; rather, are instead considered the application of internally consistent methods used to realize more complex mental constructs, regardless of their possible independent existence in an objective reality.6 7

Impulse (pathos)

Intuitionism (Metaphysical Selection) – the philosophical theory that basic truths can be derived or are always known intuitively. The opposite of empirical and epistemological inference methods, often involving some degree of teleology. The philosophical basis of the idea that existence, cause, effect, purpose, being, origination of existence, theology or lack thereof, can all be derived through the foundationalism about moral knowledge: the view that some moral truths or views about god, existence, cause and purpose can be known non-inferentially (i.e., known without one needing to infer them from other sources, epistemology or beliefs). It revolves around three principles:

1.  Objective moral truths do exist (and for some, objective moral and causal Agents do exist)

2.  Fundamental moral truths (and moral and causal agents) have no precedent, nor can they be broken down into simpler or predicate components (this is parallel to the position of Philosophy – however extends to conclusions, rather than simply practices and disciplines)

3.  The belief that human beings are granted, can freely derive or have a past innate memory of such moral truths (and moral or causal agents).8

This is a form of metaphysical selection (a belief) – rather than a derivation which is achieved at the end of a process of logical/mathematical calculus or philosophical development of practice standards. A danger resides in conflating the pathos based intuitionism of belief, with the reason based intuitionism of mathematics and ethics.

Faith

When one elects to undertake a pathway involving ontological or impulse intuitionism, one should be honest and understand that this process of metaphysical selection (belief) – stands distinct from any form of mathematical derivation or intuitionism, ethical intuitionism, philosophy, abduction, induction or deduction. When exercised sincerely, and in this circumspect light of understanding – the practitioner is executing a principle called faith.  Faith is the condition wherein no pretense is offered by the claimant as to proof, evidence, logic, science, epistemology, right, wrong, authority, etc. The claimant simply and transparently makes it clear that they have exercised a metaphysical selection. It fits their gut. This is why faith is considered a more virtuous form of pathos and ontological intuition.

The telltale earmarks which serve to distinguish Religious Doctrine from Faith are the urgency, one way communication and coercion typically involved.

Impulse Inference (Religious Doctrine and Dogma)

This is a twisted and sick-minded form of metaphysical selection or faith. The only practice set which operates under a masquerade in this set of inference species and genres, is the practice of religious assumption, doctrine and dogma. This of course includes the habits of those who practice social skepticism. These religions will attempt to pass their doctrines as species of logical inference – through a process known as apologetics. This is a type of pathology wherein the participant very desperately wants to seek validation for a taught or personally adopted set of metaphysical conclusions. This is not truly an actual form of inference; however, because of its peer status as an exception to the other genres and forms, it is depicted on the chart alongside all the forms of legitimate inference (including metaphysical selection and faith) which it pretends to be. The key clarifying aspect of this species of inference is that it is at least in part based upon forms of coercion, fear and duty.

Of the three forms of Intuitionism above, however, only Ethical Intuitionism provides for the distinct possibility to inductively or deductively test each assumption as to how it performs ‘in real life’. Mathematical intuition claims can be independently derived in proof, but such pathways inevitably progress into realms where definitional uncertainty begins to provide shaky ground in terms of final certainty (not to mention utility); for instance, in the case of the dispute over infinity as an existential or only practical incremental concept.9

Ethical Intuitionism, because of its philosophical basis – and focus on clarity, value, risk and probability over any specific conclusions – is often mistaken for sophistry by those unfamiliar with skepticism, or who are highly committed to an abductive or impulse-intuition-based set of answers.

The willingness to tolerate an unknown – the staunch defense of the methods of science against the twisted logic of the agenda-laden poser – these standards serve to aggravate and inflame the religious, impulse-minded. The religious rarely ever ‘get’ Ethical Intuitionism. After all, its very core philosophy is anathema to religion – not necessarily metaphysical selection or faith – but religion. However, pointing this out rarely does any good.

As an ethical skeptic, one should just chuckle at such ignorance and move on; hoping that some day the accuser will see the light of their own bullheadedness.

epoché vanguards gnosis

July 8, 2017 Posted by | Ethical Skepticism | Leave a comment
