The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

Epistemological Domain and Objective Risk

If the relevant domain of a subject is largely unknown, or insufficient study along any form of critical path of inference has been developed, then it is invalid to claim, or imply through a claim, that ignorance has been sufficiently dispelled – especially that ignorance which is prolific and may serve to impart harm to at-risk stakeholders. After dealing with the malice of those who shortcut science in order to turn a quick profit, or with their skeptic apparatchiks, one is often left feeling the need for a clean shower.

C’mon Chief, You’re Overthinking This Thing

As a younger man, I ventured out one afternoon with the thought in mind of buying a new car. My old Toyota had 185,000 miles on it, and despite all the love and care I had placed into that reliable vehicle, it was time to upgrade. ‘Timothy’, as I called my car, had served me well through years of Washington D.C.’s dreadfully monotonous 6:00 am Woodrow Wilson Bridge traffic, getting to Arlington, the Pentagon and the Capitol District, through to graduate school classes, and finally getting home nightly at 11:00 pm. My beloved car was just about worn out. So I selected the new model that I wanted and proceeded one Saturday to a local dealer. The salesperson and I struck a deal on my preferred model and color, with approval from the sales manager skulking remotely as the negotiator within some back office. Always take it as a warning sign any time a person who is imbued with the power to strike a deal will not sit with you face to face during the execution of that deal. This is a form of good-cop/bad-cop routine. However, this being only my second car purchase, I accepted it as normal and shook hands with the salesperson upon what was, in reality, a very nice final price on my prospective new car.

The polite and professional salesperson led me down a hallway and handed me off into the office of the closing manager. The closing manager was a fast-talking administrative professional whose job it was to register the sale inside the corporate system, arrange all the payment terms, declarations, insurance and contracts, remove the car from inventory, register the sale with the State, and affix all the appropriate closing signatures. A curiously high-paying position assigned to execute such a perfunctory set of tasks. The closing manager sat down and remarked what an excellent Saturday it had been, and then added that he was glad that I was his “last sale of the evening.” He had a bottle of cognac staged on his desk, ready to share a shot with the sales guys who had delivered an excellent performance week. The closing manager pulled up the inventory record and then printed out the sales contract in order to review it with me. In reviewing the document, I noted that the final closing figure listed at the bottom of the terms structure was $500 higher than the price upon which I had just shaken hands. The closing manager pointed out that the figure we had negotiated did not reflect the ‘mandatory’ addition of the VIN being laser-engraved into the bottom of each of the windows. The fee for the laser engraving, and believe him (*chuckle) it was well worth it, was $500. If the vehicle were ever stolen, the police would be asking me for this to help them recover the vehicle. Not to worry however, the laser engraving had already been applied at the factory. This was an administrative thing, really.

Raising objection to this sleight-of-hand tactic, I resolved to remain firm in that objection and expressed my intent to walk out the door if the $500 adder was not removed from the contract. The closing manager then retorted that he did not have time to correct the contract as “the agreement had already been registered in the corporate system” and he would “have to back that out of the system and begin all over again.” To which I responded, “Then let’s begin all over again.” Thereupon, the closing manager said that he had to make a quick call home. He called his spouse and in very dramatic fashion exclaimed “Honey, tell our son that we will be late to his graduation because I have to re-enter a new contract here at the last hour. What? He can’t wait on us?” The closing manager held the phone to his chest and said, “I am going to have to miss my son’s graduation.” (This reminded me of being told that, since I question Herodotus’ dating of the Khufu Pyramid, along with his claim that he even physically traveled to Egypt in the first place, I therefore ‘believe that aliens built the pyramids and am racist towards Egyptians’.) Having grown absolutely disillusioned as to the integrity of this whole farce, I responded, “OK, attend your son’s graduation and I will come back some other time.” “Surely they do not think I am this dumb. Do I look stupid or something?” I mulled while getting up from my chair and proceeding out the door in disgust.

I was met in the exit hallway by the previously hidden bad cop, the sales manager. “Wait, wait, Chief, you’re overthinking this thing. You don’t understand, we have given you a great price on this vehicle. I have a guy who wants to take this particular inventory first thing in the morning.” To which I responded, “Well, make sure you tell him about the mandatory laser engraving fee,” fluttering my hands upward in mock excitement. My valuable weekend car-shopping time had been wasted by manipulative and dishonest fools. It was not simply that I did not know about the engraving fee; rather, I did not even know that I did not know about the potential for any such surprise fee. They had allowed me to conduct my science, if you will, inside a purposeful and crafted charade of ignorance – a Descriptive Wittgenstein Error. They had hoped that the complexity of the sales agreement would provide disincentive for me to ‘overthink’, and spot the deal shenanigans. I walked out of the showroom feeling like I needed to immediately go home and take a shower.

Whenever someone pedantically instructs you that you are overthinking something, under a condition of critical path or high domain unknown, be very wary. You are being pitched a con job.

If you have not departed from the critical path of necessary inference, or if the domain is large and clouded with smoke and mirrors, never accept an accusation of ‘overthinking’. Such cavil constitutes merely a simpleton’s, or manipulative, appeal to ignorance.

Risk Strategy: Domain Ignorance, Epistemological and Objective Risk

What this used car sales comedy serves to elicit is a principle in philosophy called an ‘ignorance of the necessary epistemological domain’ – the domain of the known and unknown regarding one cohesive scientific topic or question. Understanding both the size of that domain, as well as the portion of it which science competently grasps, is critical in assessing scientific risk – to wit: the chance that one might be bamboozled on a car contract because of a lack of full disclosure, or the chance that millions of people will be harmed through a premature rollout of a risky corporate technology which has ‘over-driven the headlights’ of its domain competency, and is now defended by an illegitimate and corrupt form of ‘risk strategy’ as a result.

There are two distinct species of scientific risk: epistemological risk and risk involving an objective outcome. In more straightforward terminology, the risk that we don’t know something, and the risk that such not-knowing could serve to impart harm.

Before we introduce those two types of risk however, we must define how they relate to and leverage from a particular willful action – a verb which goes by the moniker ignorance. Ignorance is best defined in its relationship to the three forms of Wittgenstein error.1 2 3

Ignorance – a willful set of assumptions, or lack thereof, outside the context of scientific method and inference, which results in the personal or widespread presence of three Wittgenstein states of error (for a comprehensive description of these error states, see Wittgenstein Error and Its Faithful Participants):

Wittgenstein Error (Contextual)
    Situational:  I can shift the meaning of words to my favor or disfavor by the context in which they are employed
Wittgenstein Error (Descriptive)
    Describable:  I cannot observe it because I refuse to describe it
    Corruptible:  Science cannot observe it because I have crafted language and definition so as to preclude its description
    Existential Embargo:  By embargoing a topical context (language) I favor my preferred ones through means of inverse negation
Wittgenstein Error (Epistemological)
    Tolerable: My science is an ontology dressed up as empiricism
        bedeutungslos – meaningless or incoherent
        unsinnig – nonsense or non-science
        sinnlos – mis-sense, logical untruth or lying.

Now that we have a frame of reference as to what indeed ignorance (the verb) is, we can cogently and straightforwardly define epistemological domain, along with the two forms of scientific risk: epistemological risk and objective risk. This is how a risk strategy is initiated.

Epistemological Domain (What We Should Know)

/philosophy : skepticism/ : what we should know. That full set of critical path sequences of study, along with the salient influencing factors and their imparted sensitivity, which serve to describe an entire arena of scientific endeavor, study or question, to critical sufficiency and plenary comprehensiveness.

Epistemological Risk (What We Don’t Know and Don’t Know That We Don’t Know)

/philosophy : skepticism : science : risk/ : what we don’t know and don’t know that we don’t know. That risk in ignorance of the necessary epistemological domain, which is influenced by the completeness of science inside that domain; as evidenced by any form of shortfall in the following (a minimal scoring sketch follows this list):

•  quality of observational research,
•  nature and reach of hypothesis structure,
•  appropriateness of study type and design,
•  bootstrap strength of the type and mode of inference drawn,
•  rigor of how and why we know what we know, and finally
•  predominance or subordinance of the subject domain’s established domain risk (the subject of this article)
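
To illustrate how shortfall along these six dimensions might be aggregated, consider the following minimal sketch in Python. The dimension names, the 0-to-1 sufficiency scale and the equal weighting are illustrative assumptions, not a canon of risk theory.

# Hedged sketch: epistemological risk as the mean shortfall across the six
# dimensions listed above. Scale and weights are illustrative assumptions.
DIMENSIONS = [
    "observational_research_quality",
    "hypothesis_structure_reach",
    "study_type_and_design",
    "inference_bootstrap_strength",
    "rigor_of_knowledge_basis",
    "domain_risk_predominance",
]

def epistemological_risk(scores: dict) -> float:
    """Each score lies in [0, 1], where 1.0 means fully sufficient science.
    Risk is the mean shortfall (1 - score) across all six dimensions."""
    shortfalls = [1.0 - scores.get(d, 0.0) for d in DIMENSIONS]
    return sum(shortfalls) / len(DIMENSIONS)

# Example: a domain with weak observational research and shallow inference
print(epistemological_risk({
    "observational_research_quality": 0.3,
    "hypothesis_structure_reach": 0.4,
    "study_type_and_design": 0.5,
    "inference_bootstrap_strength": 0.2,
    "rigor_of_knowledge_basis": 0.4,
    "domain_risk_predominance": 0.3,
}))  # 0.65 - most of the domain remains unexamined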

The two colors, orange and red, on the right in the following chart depict our ‘risk horizon’: that which we as a deploying corporate entity do not know that science already knows, and that which we do not know that we do not know. These are the domains of ignorance which serve to endanger an at-risk technology stakeholder through objective risk.

The Horizon of Epistemological Risk

High Epistemological Domain Risk: there exist a high number of critical paths of consideration, along with a high degree of sensitive and influencing factors – very few of which we have examined or understood sufficiently.

Lower Epistemological Domain Risk: there exist a low or moderate number of critical paths of consideration, along with a reasonable degree of sensitive and influencing factors – many or most of which we have examined and begun to understand sufficiently.

The next step after defining these elements of risk is to undertake a Risk Strategy. The purpose of a risk strategy is to translate epistemological risk into objective risk, and then set out an ethical plan which serves to shield at-risk stakeholders from its impact. As a professional who develops value chain and risk strategies, I remain shocked at the number of risky technological roll-outs, enacted by large and supposedly competent field-subject corporations, which are executed inside a complete vacuum in terms of any form of risk strategy at all. When the lay public or their representatives challenge your technology’s safety, your ethical burden is not to craft propaganda and social advocacy, but rather to produce the Risk Strategy which was prosecuted to address their concerns. Two current examples of such unacceptable circumstance, framed inside the analogy of ‘car headlights’, are highlighted later in this article.

What a Risk Strategy Does

One way in which such matters are addressed in industry (when they are addressed – which is rarely) is to conduct a form of value chain strategy called a risk chain evaluation, or ‘risk strategy’. A risk strategy is developed in industry by first conducting a piloting session, which kicks off two efforts. The first tasks a team with developing the value chain description of the entailed domain (the critical path of a product development horizon, a brand strategy, a legal argument, or an imports channel, for example). A second team is tasked with developing the epistemological risk and slack factors, measures and sensitivities which can be assigned to each node (action/decision) in the risk chain series mapped by the first team. Once epistemological risk is mapped (what we don’t know), a mitigation approach is then developed which can serve to rate, triage and then minimize each risk element, or reduce the effect of risk elements combining into unintended consequences (how what we don’t know can serve to harm someone or something). Stand-alone risks are treated differently than are concatenated or cumulatively escalating (snowballing) risks. However, all risks are measured in terms of virtual (non-realized) consequences. These consequences are what is deemed inside risk theory to be ‘objective risk’.
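
As a sketch of how the two teams’ outputs might combine, consider the following Python model. The node structure, the slack field and the complement-product rule for snowballing risk are illustrative assumptions, not an industry-standard formula.

from dataclasses import dataclass, field

@dataclass
class RiskNode:
    """One action/decision node in the risk chain mapped by the first team."""
    name: str
    epistemological_risk: float   # what we don't know at this node, 0..1
    slack: float = 0.0            # mitigation slack assigned by the second team
    downstream: list = field(default_factory=list)

def standalone_risk(node):
    # Stand-alone risks are treated individually, net of mitigation slack.
    return max(0.0, node.epistemological_risk - node.slack)

def concatenated_risk(node):
    # Snowballing: unmitigated risk compounds through downstream nodes
    # (illustrative complement-product accumulation rule).
    r = standalone_risk(node)
    for child in node.downstream:
        r = 1.0 - (1.0 - r) * (1.0 - concatenated_risk(child))
    return r

rollout = RiskNode("deploy technology", 0.30, slack=0.10, downstream=[
    RiskNode("public exposure", 0.40, slack=0.05),
    RiskNode("long-term effects unmeasured", 0.50),
])
print(round(standalone_risk(rollout), 2))    # 0.2  - viewed in isolation
print(round(concatenated_risk(rollout), 2))  # 0.74 - the risk has snowballed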

Objective Risk (What Harm Might Result)

/philosophy : science : technology : risk/ : what harm might result. The risk entailed as a result of an outcome inside a particular state of being or action, stemming from a state of high epistemological risk, which might result in an increase in the ignorance itself and/or in harm and suffering to any form of at-risk stakeholder. Objective risk comes in two forms.

Risk Type I constitutes a condition of smaller Risk Horizon (lower epistemological risk) wherein our exposure resides in deploying a technology faster than our rate of competence development inside its use context.

Risk Type II is the condition wherein the Risk Horizon is extensive (our knowledge is low), yet we elect to deploy a technology or treatment despite these unknown levels of risk horizon exposure.
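
A minimal decision sketch distinguishing the two types follows; the inputs and the 0.3 threshold are purely illustrative assumptions.

def objective_risk_type(domain_knowledge: float,
                        deployment_pace: float,
                        competence_pace: float) -> str:
    """domain_knowledge in [0, 1]; paces in comparable arbitrary units.
    Thresholds are illustrative only."""
    if domain_knowledge < 0.3:
        # Risk Horizon is extensive - we cannot even see the road
        return "Type II: headlamps not bright enough"
    if deployment_pace > competence_pace:
        # Horizon is smaller, but deployment is outrunning competence
        return "Type I: over-driving our headlights"
    return "standard inference context"

print(objective_risk_type(0.7, deployment_pace=9.0, competence_pace=4.0))
# Type I: over-driving our headlights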

However, there are certain things which ‘heading risk off’ does not mean – namely, those dark and questionable practices of monist, oligarch and crony driven corporations:

What a Risk Strategy Does NOT Do

Do the following set of activities look familiar? They should, as this is the ethic of today’s monist/oligarch/crony operated entity. A real risk strategy conducts real science (see the definition and links under ‘Domain Epistemological Risk’ above) and follows generally the above process. Its client is the technology’s at-risk stakeholder community – and NOT the corporation. The following very common tactics, in contrast, are not elements of a real risk strategy; they constitute rather a strategy of Court-defined malice and oppression:

•  Identify foes and research their backgrounds for embarrassing information and smear campaigns
•  Develop a ‘talking points’ sheet of propaganda to hand to the media in advance of potential bad news
•  Develop astroturf ‘skeptics’ who are given target groups and individuals to harass with ‘science’
•  Hire celebrity skeptics to accuse anyone who dissents, of being ‘anti-science’
•  Seek to bundle one’s technology risk with other technologies so as to hide any potential risk flagging signal
•  Identify the money needed to pay off legislative representatives for protection
•  Execute mergers and acquisitions before stockholders have a chance to tender input to the Board of Directors
•  Prop up fictitious one-and-done labs to develop some quick shallow inductive study showing your product was proved safe
•  Threaten universities with funding cuts if their tenured staff speak up about your technology
•  Pay university professors under the table, in order to engender their support against targeted enemies
•  Develop accounting practices which allow risk based profits to be hidden inside other organizations or facets of the organization

In other words, a real risk strategy does real science – and a fake risk strategy does social manipulation in place of science, very much akin to what fake skepticism does. Before we move on: as you can observe inside the definition of epistemological risk above, we have addressed, inside five recent blog articles (each one hyperlinked in blue), the principles of sound research effort, the elements of hypothesis, study design and type, along with the types and modes of inference and how we know what we know. These first five links constitute ‘the science’ behind a risk strategy. This leaves open, of course, the final defining element in that same links list: the topic of ‘subject epistemological domain’. Domain epistemological risk is the component of the definition which is critical before one can assess the subject of objective risk in sufficient ethical fashion. This of course is the purpose and focus of this article; thus we continue with domain epistemological risk as it is defined inside a concept called the Risk Horizon.

If your Big-Corp has conducted all the scientific diligence necessary in the rollout of a risk-bearing technology or medical intervention, then show me the Risk Strategy it employed – the one it should have posted and made available for stakeholder review.

Third party systematic reviews conducted after the rollout of the technology or treatment do not constitute sufficient ethics or science.

Inference Inside the Context of a Risk Horizon

What we have introduced with the above outline of risk is the condition wherein we as a body of science, or the society which accepts that body of science, have deployed a technology at a rate which has outpaced our competence with that technology domain. In other words, we have over-driven our headlights. We are either driving too fast for our headlights to help keep us safe, or we are driving on a surface which we are not even sure is a road, because our headlamps are too dim to begin with. This latter condition – the circumstance where our headlamps are so dim that we cannot distinguish the road – involves the principle which is the subject of this article: domain epistemological risk, or more accurately, the size of the domain of established competence and the resulting Risk Horizon. Below, we have redeveloped The Map of Inference such that it contrasts standard-context inference with that special hierarchy of inference which is exercised in the presence of either epistemological or objective risk. The decision theory, as well as the types of inference and study designs, are starkly different under each scenario of confidence development, per the following chart.

The Map of Inference Versus Risk Horizon

The first thing one may observe inside the domain chart above is that it is much easier to establish a case of risk (Objective Risk – modus praesens) than it is to conclusively dismiss one (Objective Risk – modus absens). That ethic may serve to piss off extraction-minded stockholders, but those are the breaks when one deploys a technology bearing public stakeholder risk. Rigor must be served. What one may also observe in the above chart are two stark contrasts between risk-based inference and standard inference. These two contrasts, Risk Types I and II, are outlined below via the analogies of over-driving one’s headlights, or possessing too dim a set of headlamps. Each bears implications with regard to waste, inefficiency and legal liability.

Risk Type I: Over-driving Our Headlights

Smaller Risk Horizon (Lower State of Domain Epistemological Risk)

First, when one moves from the context of the trivial ascertainment of knowledge into an arena wherein a population of stakeholders is placed at risk – say, for example, the broadscale deployment of a pesticide or an energy-emitting system – the level of rigor in epistemology required increases substantially. One can see this under the column ‘Objective Risk modus absens‘. Here the null hypothesis shifts to the assumed presence of risk, not its absence (the precautionary principle). In other words, in order to prove to the world that your product is safe, it is not sufficient to simply publish a couple of Hempel’s Paradox inductive studies. The risk involved in a miscall is too high. Through the rapid deployment of technology, society can outrun our ability to competently use or maintain that technology safely – as might be evidenced by nuclear weapons, or a large dam project in a third world nation which does not have the educational or labor resources to support operation of the dam. When we as a corporate technology culture are moving so fast that our pace outdistances our headlights – risk concatenates, or snowballs.
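
As a back-of-envelope illustration of why a couple of inductive studies cannot establish modus absens: under the standard statistical ‘rule of three’, observing zero adverse events among n subjects still permits a true event rate as high as roughly 3/n at 95% confidence. The sample sizes below are hypothetical.

def rule_of_three_upper_bound(n: int) -> float:
    """95% upper confidence bound on an event rate when zero events
    are observed in n independent trials (standard approximation)."""
    return 3.0 / n

for n in (300, 3_000, 3_000_000):
    per_million = rule_of_three_upper_bound(n) * 1_000_000
    print(f"n={n:>9,}: zero observed events still permits "
          f"up to {per_million:,.0f} harmed per million deployed")

A clean inductive study of a few hundred subjects, in other words, says almost nothing about the risk imparted to a deployed population of millions.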

Example:  5G is a promising and powerful technology. I love the accessibility and speeds it offers. However, there is legitimate concern that it may suffer being deployed well before we know enough about this type of pervasive radiation’s impact on human and animal physiology. A wave of the indignant corporate hand, and the inchoate appointment of the same skeptics who defended Vioxx and Glyphosate, is not sufficient scientific diligence. If I see the same old tired skeptics being dragged out to defend 5G – that is my warning sign that the powers deploying it have no idea what they are doing. I am all for 5G – but I want scientific deductive rigor (modus absens) in its offing.

Risk Type II: Headlamps Not Bright Enough

Extensive Risk Horizon (High State of Domain Epistemological Risk)

Second, and moreover, this problem exacerbates when the topic suffers from a high state of epistemological domain risk. In other words, there exist a high number of critical paths of consideration, along with a high degree of sensitive and influencing factors – very few of which we have examined or understand sufficiently. Inside this realm of deliberation, induction under the Popper Demarcation of Science not only will not prove out the safety of our product, but we run a high risk of not possessing enough knowledge to even know how to test our product adequately for its safety to begin with. The domain epistemological risk is high. When a corporate technology is pushed onto the public at large under such a circumstance, this can be indicative of greed, malice or oppression. Risk herein becomes exponential. A technology company facing this type of risk strategy challenge needs to have its legal counsel present at its piloting and closing sessions.

Example: Vaccines offer a beneficial bulwark against infectious diseases. Most vaccines work. However, there is legitimate concern that we have not measured their impact in terms of unintended health consequences – both as individual treatments and as treatments in groups, nor at the ages administered. There exists a consilience (Consilient Induction modus praesens) of stark warning indicators that vaccines may be impacting the autoimmune, cognitive and emotional well-being of our children.

We do not possess the knowledge which would allow us to deductively prove that our vaccines do not carry such unintended consequences. If one cites this as a condition which allows for exemption from having to conduct such study – such a disposition is shown in the chart above to constitute malice. When domain epistemological risk is high, and an authority which stands to derive power or profits from deployment of a technology inside that domain applies it by means of less-than-rigorous science (e.g. linear induction used to infer the safety of vaccines), this constitutes a condition of malice on the part of that authority.

Such conditions, where society is either outrunning its headlights or does not maintain bright enough headlamps, are what we as ethical skeptics must watch for. We must be able to discern the good-cop/bad-cop masquerade and the posturing poseur used car salesmen of science, and stop the charade which makes a farce of science, injures our children or serves to harm us all.

     How to MLA cite this article:

The Ethical Skeptic, “Epistemological Domain and Objective Risk”; The Ethical Skeptic, WordPress, 23 May 2019; Web, https://wp.me/p17q0e-9ME


Necessity – A Case for Plurality

Necessity and plurality are the critical elements inside Ockham’s Razor, and not this business of the simplicity of any particular explanation. How does agency block the introduction of a necessary alternative, and how does the ethical skeptic go about establishing a case for such necessity in response? These are the critical questions which face groundbreaking science today.

Ockham’s Razor is the mandate that, inside reductionist science, plurality should not be posited without necessity.1 In straightforward terms this means that, in order to conserve parsimony, one should not propose study of a new explanatory alternative, nor add features to an existing alternative or the null, without a well established case of necessity to do so. Both of these flavors of addition in conjecture, if you will, constitute conditions of what is called ‘plurality’. Plurality means in this context ‘more than one’, or ‘more than currently exist’. Ockham’s Razor therefore is asking the researcher, ‘When is it indeed necessary to add another explanation, or to add features which should be tested in order to improve an existing explanation?’

This principle of course has nothing to do with simplicity, and everything to do with epistemological risk. A state of being apparently ‘simple’ can be just as risky in epistemology as can a state of being ‘complicated’. Ockham’s Razor therefore leverages its application upon a fulcrum – the threshold of a Wittgenstein logical object called necessity. Of course, implicit in this apothegm is the principle that, once a solid case for necessity has been established, both the consideration of a new explanatory alternative and the addition of features to an existing alternative become justified inside any pathway of research. The purpose of this article is to address that threshold, that fulcrum upon which Ockham’s Razor leverages its wisdom – the Necessity of Plurality. When and how do we achieve such a feat, and what are the symptoms of a condition wherein forces of fake-skeptical agency are working feverishly to prevent such a critical path step of science from ever occurring in the first place? Possessing the skill to discern a condition of plurality is critical to any claim to represent skepticism, the philosophy of science.

Its Abrogation: The Circular Simplicity Sell

God exists. This is the critical thesis of Question 2 inside Saint Thomas Aquinas’ The Summa Theologica.2 That such a critical path assumption is ‘simple’ is the critical thesis of Question 3 inside that same foundational work of Christian apologetica.3 Finally, Saint Thomas Aquinas begins that work with the provision that such a simple presumption is ‘necessary’.4 The theological argument outlined in The Summa Theologica flows along the following critical path of logic:

Question 1:  God is necessary as an explanatory basis for all that we see

Question 2:  God is self evident

Question 3:  God as a principle in and of itself, is simple (return to Question 1)

The sequence of actual logic employed in Aquinas’ argument here starts with Question 3, flowing into Question 1 and finally to Question 2. However, he purposely places these out of order in an effort to conceal from his reader the circular nature of the contention, along with the subsequent basis from which he drew necessity. The actual flow of Aquinas’ argument proceeds thusly:

And thus Aquinas’ argument circularly complies with ‘Occam’s Razor’, in that since God is by manifest evidence (which is not the same thing as ‘probative observation’) a simple principle, therefore now it is also a necessary one. Consider an analogous argument: you draw attention to and speak of yourself. Everyone knows that drawing attention to or speaking of one’s self is the practice of a narcissist; therefore it is a simple explanatory option, and now the necessary null, that you be regarded as a narcissist. Furthermore, since that accusation is now assumed ‘necessary’, it can be shown to be self evident – through this new attention towards your every action, along with the resulting linear confirmation bias (see below) – such that you will ultimately be confirmed as a narcissist.5 Under this same logic, there does not exist one thing now that I will encounter in my life which cannot be framed or re-explained inside the conforming context of my desired conclusion (see Qualifying Theory and Pseudo-Theory). This is how the circular simplicity sell works.

Now, set aside of course the premature contentions made inside Questions 2 and 3 – that ‘God is self evident’ and ‘God is simple’ – and focus instead on the first trick of this pseudo-philosophy. The sleight-of-hand proffered in this form of casuistry is embodied in a violation of Ockham’s Razor: an invalid ‘necessity of plurality’. Saint Aquinas begins his work in Question 1 by assuming necessity. A nice trick, and one I would love to pull off inside a laboratory setting. Things would always go my way; conclusions would always turn out as planned – as long as I can dictate the necessity of the questions I want asked.

As a note, the ignostic atheist does not object to the idea of God as does the nihilist Atheist. They simply object to logically invalid or manipulative pathways of arriving at such knowledge as a predefined destination. If one is to discover God, the above pathway is not the way of going about such a task. Logic’ing your way into a proof or disproof of God is foolishness – the self-aggrandizing fantasy of the theist and the Atheist alike.

Moreover, as long as you afford me exemption from having to establish a case for necessity, I can invalidly swing the direction of research towards any particular conclusion I desire. Both the religious and the nihilist use this technique in order to squelch thinking they find offensive. Both cults have learned how to employ this technique of linear induction, along with its interpretive confirmations (see below), to control the direction of science. Hence the existence of the obtuse form of parsimony, ‘Occam’s Razor’.

Allow me the privilege of establishing the necessity of the questions asked, and I can prove absolutely anything that I want.

This is much akin to the philosophical apothegm ‘Grant me one miracle and I can explain all the rest’. It behooves the ethical skeptic to understand the role of necessity in the development of an alternative – and in particular, understand when an agency has bypassed the burden of necessity (as has been done by Aquinas above), or even more importantly when a case for necessity has indeed been established – and is being artificially blocked by fake skeptics.

It is not proof which they are in reality demanding, contrary to what fake skeptics contend (in science there is really no such thing as proof), rather it is plurality of explanation which they fear most.

As it pertains to fake skepticism, it is this case for necessity of plurality which must be blocked at all costs. They cannot afford to have more than one explanation (their simple one) be researched by science.

A Case for Necessity

It is a set of observations, and nothing else, which ultimately serves to establish a case for necessity. This is why the scientific method must begin with observation, and not by means of the ‘asking of a question’. When one asks a question, one assumes necessity – much akin to the trick played by Saint Aquinas above. For example, ‘Are you beating your dog?’ is not a scientific question, because it exposes itself to the ad hoc nature of arrivals of confirming evidence (conforming confirmations), and does not rely upon the epistemological principle of a model’s predictive or probative power. This contrast between confirmatory versus predictive model development – in essence the contrast between Occam’s Razor and Ockham’s Razor – is exhibited below.

Predictions are derived in advance from inductive modeling, and are placed at risk. Confirmations are ad hoc in nature, and are then further reinterpreted to fit an anchoring bias. Necessity (N), therefore, is something one establishes – it can never be something that a researcher assumes.
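
This contrast can be expressed in miniature as code – a hedged sketch of the asymmetry, not a formal calculus. The predictive model commits in advance and can lose; the confirmative model reinterprets after the fact and cannot.

def predictive_model_score(advance_predictions: list) -> float:
    """Each entry records whether an at-risk prediction, made in advance,
    survived contact with subsequent observation."""
    if not advance_predictions:
        return 0.0
    return sum(advance_predictions) / len(advance_predictions)

def confirmative_model_score(observations: list) -> float:
    """Ad hoc reinterpretation fits every arriving observation to the
    anchor - a perfect score carrying zero risk, and zero epistemic value."""
    return 1.0

print(predictive_model_score([True, False, True, True]))        # 0.75 - at risk
print(confirmative_model_score(["any", "thing", "at", "all"]))  # always 1.0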

So, given the abject reality of confirmation bias which inhabits The Confirmative Model ‘Occam’s Razor’ above, when indeed does a body of observation lead to a case for necessity? There are five conditions wherein a case for scientific plurality can be established through observational necessity (a minimal encoding of these five conditions follows the list below). It is important to remember however, that in none of these cases is any particular answer proved (in other words, plurality exists or persists). Nor is any claim being made that science has been completed – as a case for necessity constitutes the beginning of the scientific method, and not its completion.

   The Five Cases for Necessity

I.  When a consilience of observation has implied/predicted that the null may indeed be false.

Example:  The idea that our current 53-event vaccine schedule is ‘safe’ has been brought into question through a myriad of observations by parents and administering medical professionals. This does not mean that vaccines do not work – but rather, straightforwardly, that the unchallenged null hypothesis, that vaccines do not also bear unintended and long term health consequences, has been brought into legitimate scientific question.

II.  When one or more null-falsifying/probative observations have been recorded.

Example:  Observing the presence of something assumed to be absent. Finding one incident of permanent cerebral injury (encephalitis) immediately after receipt of a vaccine. Such an observation serves to introduce necessity of plurality.

III.  When risk incumbent with the null is greater than any particular alternative’s risk in epistemology.

Example:  Deploying a pesticide under a presumption of safety, backed up by a couple of confirmatory studies conducted by the company which stands to profit from the deployment of that pesticide. Despite the immaturity of any study suggesting a lack of safety, such plurality must be assumed under a principle of precaution.

IV.  When the null hypothesis, in order to avoid being falsified, has become more complicated than any of its alternatives.

Example:  Monarch butterflies are being impacted indirectly by man-made global warming and not by the direct impact of glyphosate on their annual migratory breeding grounds.

V.  When the credibility or unbiased nature of the science which served to establish the null, has come into question.

Example:  When Monsanto internal communications shifted from a theme of promoting scientific studies, to an oppressive policy of ‘no challenge left unanswered’ – countering and squelching opposing viewpoints in the media at all costs.
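
The five cases can be encoded as a simple checklist, per the minimal sketch below. The field names and the any() decision rule are illustrative assumptions; satisfying a condition establishes only necessity – the beginning of the scientific method, never its completion.

from dataclasses import dataclass

@dataclass
class ObservationState:
    consilience_contradicts_null: bool   # I.   consilience implies the null may be false
    probative_falsifier_recorded: bool   # II.  a null-falsifying observation exists
    null_risk_exceeds_alt_risk: bool     # III. precautionary risk asymmetry
    null_more_complex_than_alts: bool    # IV.  null complicated to dodge falsification
    null_credibility_in_question: bool   # V.   bias/credibility of the null's science

def necessity_of_plurality(obs: ObservationState) -> bool:
    """Any single condition establishes necessity (plurality); none of them
    proves an alternative, nor completes the science."""
    return any([
        obs.consilience_contradicts_null,
        obs.probative_falsifier_recorded,
        obs.null_risk_exceeds_alt_risk,
        obs.null_more_complex_than_alts,
        obs.null_credibility_in_question,
    ])

print(necessity_of_plurality(ObservationState(False, True, False, False, False)))  # True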

This therefore serves to introduce the condition wherein agency attempts to manipulate necessity so as to control the reins of science.

   The Four Abuses of Necessity on the Part of Fake Skeptics

Blocking Necessity – the principal tactic of fake skepticism. By asking for ‘proof’, the fake skeptic is in essence attempting to block science’s access to the subject. This is a Catch-22 Proof Gaming or non rectum agitur fallacy.

Equating Simplicity with a Lack of Necessity – another key tactic of fake skeptics. An appeal to ignorance is an example of this tactic. Because we see no evidence for something, the simplest explanation is that it does not exist.

Equating Simplicity with Necessity – a tactic of fake skepticism. The existential Occam’s Razor fallacy appeal to authority is an example of this. Because an explanation is simple, therefore it is the truth. No further science is warranted.

Assuming Necessity – a final tactic of fake skepticism. The Confirmative Model (above) is an example of this. Because an idea tickles me emotionally, it is therefore granted immediate necessity – not having to justify itself nor satisfy the rigor of probativeness, consilience or predictiveness.

All of these conditions are conditions of corruption of the scientific method, for which the ethical skeptic must be ever vigilant.

     How to MLA cite this article:

The Ethical Skeptic, “Necessity – A Case for Plurality”; The Ethical Skeptic, WordPress, 19 May 2019; Web, https://wp.me/p17q0e-9KR


The Plural of Anecdote is Data

A single observation does not necessarily constitute an instance of the pejorative descriptor ‘anecdote’. Not only do anecdotes constitute data, but one anecdote can serve to falsify the null hypothesis and settle a scientific question in short order. Such is the power of a single observation. Such is the power of skillfully wielding scientific inference. Fake skeptics seek to emasculate the power of the falsifying observation, at all costs.

It is incumbent upon the ethical skeptic – those of us who are researchers if you will, those who venerate science both as an objective set of methods as well as their underlying philosophy – that we understand the nature of anecdote and how the tool is correctly applied inside scientific inference. Anecdotes are not ‘Woo’, as most fake skeptics will imply through a couple of notorious memorized one-liners. Never mind what they say, nor what they might claim as a straw man of their intent; watch instead how they apply their supposed wisdom. You will observe such abuse of the concept to be most often the case. We must insist to the theist and nihilist religious communities of deniers that, inside the context of falsification/deduction in particular, a single observation does not constitute an instance of ‘anecdote’ (in the pejorative). Not only do anecdotes constitute data, but one anecdote can serve to falsify the Null (or even the null hypothesis) and settle the question in short order. Such is the power of a single observation.

See ‘Anecdote’ – The Cry of the Pseudo-Skeptic

To an ethical skeptic, inductive anecdotes may prove to be informative in nature if one gives structure to and catalogs them over time. Anecdotes which are falsifying/deductive in nature are not only immediately informative, but moreover are, even more importantly, probative – probative with respect to the null. I call the inferential mode modus absens ‘the null’ because, usually in non-Bayesian styled deliberation, the null hypothesis – the notion that something is absent – is not actually a hypothesis at all. Rather, this species of idea constitutes simply a placeholder: the idea that something is not, until proved to be. And while this is a good common sense structure for the resolution of a casual argument, it does not mean that one should therefore believe or accept the null as merely an outcome of this artifice in common sense. In a way, deflecting observations by calling them ‘anecdote’ is a method of believing the null, and not in actuality conducting science or critical thinking. However, this is the reality we face with unethical skeptics today: the tyranny of the religious default Null.

The least scientific thing a person can do, is to believe the null hypothesis.

Wolfinger’s Misquote

/philosophy : skepticism : pseudoscience : apothegm/ : you may have heard the phrase ‘the plural of anecdote is not data’. It turns out that this is a misquote. The original aphorism, by the political scientist Ray Wolfinger, was just the opposite: ‘The plural of anecdote is data’. The only thing worse than the surrendered value (as opposed to collected value, in science) of an anecdote is the incurred bias of ignoring anecdotes altogether. This is a method of pseudoscience.

Our opponents elevate the scientific status of a typical placeholder Null (‘such-and-such does not exist’) and pretend that the idea 1. actually possesses a scientific definition, and 2. bears consensus acceptance among scientists. These constitute the first of their many magician’s tricks, which those who do not understand the context of inference fall for, over and over. Even scientists will fall for this ole’ one-two, so it is understandable why journalists and science communicators will as well. But anecdotes are science when gathered under the disciplined structure of Observation (the first step of the scientific method). Below we differentiate two contexts of the single observation, in the sense of both inductive and deductive inference.

Inductive Anecdote

Inductive inference is the context wherein a supporting case or story can be purely anecdotal (‘the plural of anecdote is not data’). That apothegm is not a logical truth: it may apply to certain cases of induction; however, it does not apply universally.

Null:  Dimmer switches do not cause house fires to any greater degree than do normal On/Off flip switches.

Inductive Anecdote:  My neighbor had dimmer switched lights and they caused a fire in his house.

Deductive Anecdote

Deductive inference, leading also to falsification (‘the plural of anecdote is data’). Even the singular of anecdote is data under the right condition of inference.

1.  Null:  There is no such thing as a dimmer switch.

2.  Deductive Anecdote:  I saw a dimmer switch in the hardware store and took a picture of it.

3.  Falsification:  An electrician came and installed a dimmer switch into my house.

What occurs when one accepts materialism as an a priori truth pertains to those who insert religious agency between steps 2 and 3 above. They contend that dimmer switches do not exist, so therefore any photo of one necessarily has to be false. And of course, at any given time there is only one photo of one at all (all previous photos were dismissed earlier in similar exercises). Furthermore, they then forbid any professional electrician from installing any dimmer switches (or they will be subject to losing their license). In this way, dimmer switches can never ‘exist’, and deniers can endlessly proclaim to non-electricians ‘you bear the burden of proof’ (see Proof Gaming) – from then on deeming all occurrences of step 2 to constitute lone cases of ‘anecdote’, while failing to distinguish between the inductive and deductive contexts therein.
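
To make the contrast between the two contexts of anecdote concrete, here is a minimal sketch; the record structure and field names are illustrative assumptions.

def evaluate_null(null_claim: str, observations: list) -> str:
    """A verified probative (deductive) observation falsifies the placeholder
    null outright; inductive anecdotes accumulate as catalogued data."""
    deductive = [o for o in observations if o["probative"] and o["verified"]]
    if deductive:
        # Even the singular of anecdote is data under deductive inference.
        return f"FALSIFIED: '{null_claim}' by {deductive[0]['description']}"
    inductive = [o for o in observations if not o["probative"]]
    return (f"'{null_claim}' stands as placeholder; "
            f"{len(inductive)} inductive anecdote(s) catalogued for study")

observations = [
    {"description": "neighbor's dimmer-switch house fire",
     "probative": False, "verified": True},
    {"description": "an electrician installed a dimmer switch in my house",
     "probative": True, "verified": True},
]
print(evaluate_null("there is no such thing as a dimmer switch", observations))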

Our allies and co-observers as ethical skeptics need bear the knowledge of the philosophy of science (skepticism) sufficient to stand up and say, “No – this is wrong. What you are doing is pseudoscience.”

Hence, one of my reasons for creating The Map of Inference.

     How to MLA cite this article:

The Ethical Skeptic, “The Plural of Anecdote is Data”; The Ethical Skeptic, WordPress, 1 May 2019; Web, https://wp.me/p17q0e-9HJ

