The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

The Elements of Hypothesis

One-and-done statistical studies, based upon a single set of statistical observations (or even worse, a single lack thereof), are not much more credible in strength than a single observation of Bigfoot or a UFO. The reason: they have not served to develop the disciplines of true scientific hypothesis. They fail in their duty to address and inform.

As most scientifically minded persons realize, hypothesis is the critical foundation in the exercise of the scientific method. It is the entry door which demonstrates the discipline and objectivity of the person asking to promote their case in science. Wikipedia cites the elements of hypothesis in terms of the five features below, as defined by philosophers Theodore Schick and Lewis Vaughn:1

  • Testability (involving falsifiability)
  • Parsimony (as in the application of “Occam’s razor” (sic), discouraging the postulation of excessive numbers of entities)
  • Scope – the apparent application of the hypothesis to multiple cases of phenomena
  • Fruitfulness – the prospect that a hypothesis may explain further phenomena in the future
  • Conservatism – the degree of “fit” with existing recognized knowledge-systems.

Equivocally, these elements are all somewhat correct; however, none of the five elements listed above constitutes a logical truth of science nor philosophy. They are only correct under certain stipulations. The problem is that this renders these elements not useful, and at worst destructive, in terms of the actual goals of science. They do not bear utility in discerning when a fully structured hypothesis is in play, or some reduced set thereof. Scope is functionally moot at the point of hypothesis, because in the structure of Intelligence, the domain of observation has already been established – it had to have been established, otherwise you could not develop the hypothesis from any form of intelligence to begin with.2 3 To address scope again at the hypothesis stage is to further tamper with the hypothesis without sound basis. Let the domain of observation stand, as it was observed – science does not advance when observations are artificially fitted into scope buckets (see two excellent examples of this form of pseudoscience in action, in Examples A and B below).

Fruitfulness can mean ‘producing that which causes our paradigm to earn me more tenure or money’ or ‘consistent with subjects I favor and disdain’ or finally and worse, ‘is able to explain everything I want explained’. Predictive strength, or even testable mechanism, are much stronger and less equivocal elements of hypothesis. So, these two features of hypothesis defined by Schick and Vaughn range from useless to malicious in terms of real contribution to scientific study. These two bad philosophies of science (social skepticism) serve inevitably to produce a fallacy called explanitude – a condition wherein the hypothesis is considered stronger the more historical observations it serves to explain, and the more flexible it can be in predicting or explaining future observations. Under ethical skepticism, this qualification of an alternative, or especially a null, hypothesis is a false notion.

Finally, parsimony and conservatism are functionally the same thing – conserving and leveraging prior art along a critical path of necessary incremental conjecture risk. This is something which few people aside from experienced patent filers understand. If I constrain my conjecture to simply one element of risk along a critical path of syllogism, I am both avoiding ‘excessive numbers of entities’ and exercising ‘fit with existing recognized knowledge systems’ at the same time. Otherwise, I am proposing an orphan question, and although it might appear to be science, p-values and all, it is not. Thus, a lack of understanding on the part of Schick and Vaughn, inside How to Think About Weird Things: Critical Thinking for a New Age, as to how true science works, misled them into believing that these two principles needed to be addressed separately. One is a fortiori with the other inside Parsimony (see below). Unless of course one is implying that ‘fit’ means ‘to comply’ (as the authors probably do, being that both authors are social skeptics and have no professional experience managing a lab) – then of course we are dealing with a completely different paradigm of science called sciebam: the only answers I will accept, until I die, are answers which help me improve or modify my grasp of how correct I am. The duty of a hypothesis is to inform about and address standing evidence and inference, not necessarily to conform to it. To not start with an orphan question, sponsored by agency. Those are two different things – a discernment critical in the contrast between science and sciebam.

Orphan Question

/philosophy : pseudoscience : sciebam/ : a question, purported to be the beginning of the scientific method, which is asked in the blind, without sufficient intelligence gathering or preparation research, and is as a result highly vulnerable to being manipulated or posed by means of agency. The likelihood of a scientifically valid answer being developed from this question process, is very low. However, an answer of some kind can almost always be developed – and is often spun by its agency as ‘science’. This form of question, while not always pseudoscience, is a part of a modified process of science called sciebam. It should only be asked when there truly is no base of intelligence or body of information regarding a subject. A condition which is rare.


/philosophy : science : method : sciebam/ : (Latin: I knew) An alternative form of knowledge development, which mandates that science begins with the orphan/non-informed step of ‘ask a question’ or ‘state a hypothesis’. A pseudoscientific process which bypasses the first steps of the scientific method: observation, intelligence development and formulation of necessity. This form of pseudoscience presents three vulnerabilities:

First, it presumes that the researcher possesses substantially all the knowledge they need, lacking only the filling-in of final minor gaps in understanding. This creates an illusion-of-knowledge effect on the part of the extended domain of researchers, as each bit of provisional knowledge is then codified as certain knowledge based upon prior confidence. Science can only progress thereafter through a series of shattering paradigm shifts.

Second, it renders science vulnerable to the possibility that, if the hypothesis itself is unacceptable at the very start, then its researcher is necessarily conducting pseudoscience – this no matter the results, nor how skillfully and expertly they may apply the methods of science. And since the hypothesis is now pseudoscience, no observation, intelligence development or formulation of necessity is warranted. The subject is now closed/embargoed by means of circular appeal to authority.

Finally, the question asked at the beginning of a process of inquiry can often prejudice the direction and efficacy of that inquiry. A premature or poorly developed question, and especially one asked under the influence of agency (not simply bias) – and in absence of sufficient observation and intelligence – can most often result quickly in a premature or poorly induced answer.

Real Hypothesis

Ethical skepticism proposes a different way of lensing the above elements. Under this philosophy of hypothesis development, I cannot make any implication of the ilk that ‘I knew’ the potential answer a priori. Such implication biases both the question asked, as well as the processes of inference employed. Rather, hypothesis development under ethical skepticism involves structure which is developed around the facets of Intelligence, Mechanism and Wittgenstein Definition/Domain. A hypothesis is neither a hunch, assumption, suspicion nor idea. Rather it is


/philosophy : skepticism : scientific method/ : a disciplined and structured incremental risk in inquiry, relying upon the co-developed necessity of mechanism and intelligence. A hypothesis necessarily features seven key elements which serve to distinguish it from pseudoscience.

The Seven Elements of Hypothesis

1.  Construct based upon necessity. A construct is a disciplined ‘spark’ (scintilla) of an idea, on the part of a researcher or type I, II or III sponsor, educated in the field in question and experienced in its field work. Once a certain amount of intelligence has been developed, as well as definition of causal mechanism which can eventually be tested (hopefully), then the construct becomes ‘necessary’ (i.e. passes Ockham’s Razor). See The Necessary Alternative.

2.  Wittgenstein definition and defined domain. A disciplined, exacting, consistent, conforming definition need be developed for both the domain of observation, as well as the underpinning terminology and concepts. See Wittgenstein Error.

3.  Parsimony. The resistance to expand explanatory plurality or descriptive complexity beyond what is absolutely necessary, combined with the wisdom to know when to do so. Conjecture along an incremental and critical path of syllogism. Avoidance of unnecessary orphan questions, even if apparently incremental in the offing. See The Real Ockham’s Razor. The following character traits highlight a hypothesis which has been adeptly posed inside parsimony.

a. Is incremental and critical path in its construct – the incremental conjecture should be a reasoned, single stack and critical path new construct. Constructs should follow prior art inside the hypothesis (not necessarily science as a whole), and seek an answer which serves to reduce the entropy of knowledge.

b. Methodically conserves risk in its conjecture – no question may be posed without risk. Risk is the essence of hypothesis. A hypothesis, once incremental in conjecture, should be developed along a critical path which minimizes risk in this conjecture by mechanism and/or intelligence, addressing each point of risk in increasing magnitude or stack magnitude.

c. Posed so as to minimize stakeholder risk – (i.e. precautionary principle) – a hypothesis should not be posed which suggests that a state of unknown regarding risk to impacted stakeholders is acceptable as a central aspect of its ongoing construct critical path. Such risk must be addressed first in critical path, as a part of 3. a. above.

4.  Duty to Reduce, Address and Inform. A critical element and aspect of parsimony regarding a scientific hypothesis: the duty of such a hypothesis to expose and address in its syllogism all known prior art, in terms of both analytical intelligence obtained and direct study mechanisms and knowledge. If information associated with a study hypothesis is unknown, it should simply be mentioned in the study discussion. However, if countermanding information is known, or a key assumption of the hypothesis appears magical, the structure of the hypothesis itself must both inform of its presence and address its impact. See Methodical Deescalation and The Warning Signs of Stacked Provisional Knowledge.

Unless a hypothesis offers up its magical assumption for direct testing, it is not truly a scientific hypothesis. Nor can its conjecture stand as knowledge.


/philosophy : pseudoscience/ : A pseudo-hypothesis fails in its duty to reduce, address or inform. A pseudo-hypothesis states a conclusion and hides its critical path risk (magical assumption) inside its set of prior art and predicate structure. A hypothesis, on the other hand, reduces its sets of prior art, evidence and conjecture and makes them manifest. It then addresses critical path issues and tests its risk (magical assumption) as part of its very conjecture accountability. A hypothesis reduces, exposes and puts its magical assertion on trial. A pseudo-hypothesis hides its magical assumptions woven into its epistemology and places nothing at risk thereafter. A hypothesis is not a pseudo-hypothesis as long as it is ferreting out its magical assumptions and placing them into the crucible of accountability. Once this process stops, the hypothesis has become an Omega Hypothesis. Understanding this difference is key to scientific literacy.

Grant me one hidden miracle and I can explain everything.

5.  Intelligence. Data is denatured into information, and information is transmuted into intelligence. Inside decision theory and clandestine operation practices, intelligence is the first level of illuminating construct upon which one can make a decision. The data underpinning the intelligence should necessarily be probative and not simply reliable. Intelligence skills combine a healthy skepticism towards human agency, along with an ability to adeptly handle asymmetry, recognize probative data, assemble patterns, increase the reliability of incremental conjecture and pursue a sequitur, salient and risk mitigating pathway of syllogism. See The Role of Intelligence Inside Science.

6.  Mechanism. Every effect in the universe is subject to cause. Such cause may be mired in complexity or agency; nonetheless, reducing a scientific study into its components and then identifying underlying mechanisms of cause to effect – is the essence of science. A pathway from which cause yields effect, which can be quantified, measured and evaluated (many times by controlled test) – is called mechanism. See Reduction: A Bias for Understanding.

7.  Exposure to Accountability.  This is not peer review. During the development phase, a period of time certainly must exist in which a hypothesis is held proprietary so that it can mature – and indeed, fake skeptics seek to intervene before a hypothesis can mature, eliminating it via ‘Occam’s Razor’ (sic) so that it cannot be researched. Nonetheless, a hypothesis must be crafted such that its elements 1 – 6 above can be held to the light of accountability, by 1. skepticism (so as to filter out sciebam and fake method) which seeks to improve the strength of the hypothesis (an ‘ally’ process, and not peer review), and 2. stakeholders who are impacted by or exposed to its risk. A hypothesis which imparts stakeholder risk, yet is held inside proprietary cathedrals of authority – is not science, rather oppression by court definition.

It is developed from a construct – which is a type of educated guess (‘scintilla’ in the chart below). One popular method of pseudoscience is to bypass the early-to-mid disciplines of hypothesis and skip right from data analysis to accepted proof. This is no different, ethically, from skipping right from a blurry photo of Blobsquatch to the conjecture that such cryptic beings are real and that they inhabit all of North America. It is simply a pattern in some data. However, in this case, blurry data which happened to fit or support a social narrative.

A hypothesis reduces, exposes and puts its magical assertion on trial.
A pseudo-hypothesis hides its magical assumptions woven into its epistemology and places nothing at risk thereafter.

Another method of accomplishing inference without due regard to science is to skip past falsifying or countermanding information and simply ignore it. This violates The Duty to Address and Inform. A hypothesis, as part of its parsimony, cannot be presented in the blind – bereft of any awareness of prior art and evidence. To undertake such promotional activity is a sales job and not science. Why acknowledge the depletion of plant food nutrients on the part of modern agriculture, when you have a climate change message to push? Simply ignore that issue and press your hypothesis anyway (see Examples A and B below).

However, before we examine that and other examples of such institutional pseudoscience, let’s first look at what makes for sound scientific hypothesis. Inside ethical skepticism, a hypothesis bears seven critical elements which serve to qualify it as science.

These are the seven elements which qualify whether or not an alternative hypothesis becomes real science. They are numbered in the flow diagram below and split by color into the three discipline streams of Indirect Study (Intelligence), Parsimony and Conservatism (Knowledge Continuity) and Direct Study (Mechanism).

A Few Examples

In the process of defining this philosophical basis over the years, I have reviewed several hundred flawed and agency-compliant scientific studies. Among them existed several key examples, wherein the development of hypothesis was weak to non-existent, yet the conclusion of the study was accepted as ‘finished science’ from its publishing onward.

Most institutional pseudoscience spins its wares under a failure to address and/or inform.

If you are going to accuse your neighbor of killing your cat, and their whereabouts were unknown at the time, then your hypothesis does not have to address such an unknown – rather, merely acknowledge it (inform). However much your neighbor disliked your cat (intelligence), if your neighbor was in the Cayman Islands that week, your hypothesis must necessarily address such mechanism. You cannot ignore that fact simply because it is inconvenient to your inductive/abductive evidence set.

Almost all of these studies skip the hypothesis discipline by citing a statistical anomaly (or worse, a lack thereof), and employing a p-value masquerade as a means to bypass the other disciplines of hypothesis and skip right to the peer review and acceptance steps of the scientific method. Examples A and B below fail in their duty to address critical mechanism, while Examples B and C fail in their duty to inform the scientific community of all the information they need in order to tender peer review. Such studies end at the top left-hand side of the graphic above and call the process done, based upon one scant set of statistical observations – in ethical reality not much more credible in strength than a single observation of Bigfoot or a UFO.

Example A – Failure in Duty to Address Mechanism

Increasing CO2 threatens human nutrition. Meyers, Zanobetti, et. al. (Link)

In this study, and in particular Extended Data Table 1, a statistical contrast was drawn between farms located in elevated CO2 regions versus ambient CO2 regions. The contrast resulted in a p-value significance indicating that levels of Iron, Zinc, Protein and Phytate were lower in areas where CO2 concentrations exhibited an elevated profile versus the global ambient average. This study was in essence a statistical anomaly; and while part of science, it should never be taken to stand as a hypothesis, nor even worse a conclusion – as is indicated in the social skeptic ear-tickling and sensationalist headline title of the study ‘Increasing CO2 threatens human nutrition’. The study has not even passed the observation step of science (see The Elements of Hypothesis graphic above). Who allowed this conclusion to stand inside peer review? There are already myriad studies showing that modern (1995+) industrial farming practices serve to dramatically reduce crop nutrient levels.4 Industrial farms tend to be nearer to heavy CO2 output regions. Why was this not raised inside the study? What has been accomplished here is merely to hand off a critical issue of health risk for placement into the ‘climate change’ explanitude bucket, rather than its address and potential resolution. It suggests – since the authors neither examined the above alternative, nor raised it inside their Discussion section – that they care about neither climate change nor nutrition dilution, viewing both instead as political-football means to further their careers. It is not that they have to confirm this existing study direction; however, they should at least acknowledge it in their summary of analytics and study limitations. The authors failed in their duty to address standing knowledge about industrial farming nutrient depletion. This would have never made it past my desk. Grade = C (good find, harmful science).
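The confounding risk just described can be made concrete with a small simulation. The numbers below are entirely hypothetical (they are not drawn from the Meyers/Zanobetti data): we model a world in which industrial farming both clusters near high-CO2 regions and depletes zinc, while CO2 itself has no causal effect at all – and a naive statistical contrast of zinc by CO2 region still comes back ‘significant’.

```python
import random
import statistics

random.seed(42)

# Hypothetical model: industrial farms cluster near high-CO2 regions AND
# use nutrient-depleting practices. CO2 itself has NO effect in this model.
def make_farm():
    industrial = random.random() < 0.5
    high_co2 = random.random() < (0.8 if industrial else 0.2)   # confounded label
    zinc = random.gauss(24.0 if industrial else 30.0, 3.0)      # depletion via practice
    return high_co2, zinc

farms = [make_farm() for _ in range(2000)]
zinc_hi = [z for c, z in farms if c]        # farms in 'elevated CO2' regions
zinc_lo = [z for c, z in farms if not c]    # farms in 'ambient CO2' regions

obs_diff = statistics.mean(zinc_lo) - statistics.mean(zinc_hi)

# Permutation test: shuffle the CO2 labels to build the null distribution
all_z = zinc_hi + zinc_lo
n_hi = len(zinc_hi)
count = 0
for _ in range(2000):
    random.shuffle(all_z)
    d = statistics.mean(all_z[n_hi:]) - statistics.mean(all_z[:n_hi])
    if d >= obs_diff:
        count += 1
p_value = count / 2000

print(f"observed zinc gap: {obs_diff:.2f}, p ≈ {p_value:.4f}")
# The gap is 'significant' even though CO2 plays no causal role in the model:
# the statistical anomaly alone cannot stand as a hypothesis, let alone a conclusion.
```

The point of the sketch is narrow: a p-value contrast detects an association in the chosen domain of observation; it does nothing to discharge the duty to address a competing mechanism (here, farming practice) that produces the same contrast.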

Example B – Failure in Both Duty to Inform of Intelligence and Duty to Address Mechanism

Possible future impacts of elevated levels of atmospheric CO2 on human cognitive performance and on the design and operation of ventilation systems in buildings. Lowe, Heubner, et. al. (Link)

This study cites its review of the immature body of research surrounding the relationship between elevated CO2 and cognitive ability. Half of the studies reviewed indicated that human cognitive performance declines with increasing CO2 concentrations. The problem entailed in this study, similar to the Zanobetti study above in Example A, is that it does not develop any underlying mechanism which could explain how elevated CO2 directly impacts cognitive performance. This is not a condition of ‘lacking mechanism’ (as sometimes the reality is that one cannot assemble such), rather one in which the current mechanism paradigm falsifies the idea. The study should be titled ‘Groundbreaking new understanding on the toxicity of carbon dioxide’. This is of earth-shattering import. There is a lot of science which needs to be modified if this study proves correct at face value. The sad reality is that the study does not leverage prior art in the least. As an experienced diver, I know that oxygen displacement on the order of 4 percentage points is where the first slight effects on cognitive performance come into play. Typical CO2 concentrations in today’s atmosphere are in the range of 400 ppm – not even in the relevant range for an oxygen displacement argument. However, I would be willing to accept this study in sciebam, were they to offer another mechanism of direct effect; such as ‘slight elevations in CO2 and climate temperature serve to toxify the blood’, for example. But no such mechanism exists – in other words, CO2 is only a toxicant as it becomes an asphyxiant.5 This study bears explanitude: it allows an existing paradigm to easily blanket-explain an observation which might have otherwise indicated a mechanism of risk – such as score declines being attributable to increases in encephalitis, not CO2. It violates the first rule of ethical skepticism: If I was wrong, would I even know it?
The authors failed in their duty to inform about the known mechanisms of CO2 interaction inside the body, and as well failed to address the standing prior art which falsifies their implied mechanism. As well, this study was a play for political sympathy and club rank. Couching this pseudoscience with the titular word ‘Possible’ is no excuse to pass it off as science. Grade = D (inexpert find, harmful science).
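The prior-art arithmetic behind the oxygen-displacement point above can be checked in a few lines. This is purely an illustrative unit conversion – the 4-percentage-point displacement figure is the author's number from diving experience, not a toxicology standard:

```python
# Unit-conversion sketch for the oxygen-displacement argument (illustrative only).
co2_ppm = 400                  # typical atmospheric CO2 concentration cited in the text
co2_pct = co2_ppm / 10_000     # 1% of the atmosphere = 10,000 ppm

# The text cites roughly 4 percentage points of oxygen displacement as the level
# at which the first slight cognitive effects appear.
displacement_threshold_pct = 4.0

print(f"{co2_ppm} ppm CO2 = {co2_pct:.2f}% of the atmosphere")
print(f"factor below the cited displacement threshold: "
      f"{displacement_threshold_pct / co2_pct:.0f}x")
```

At 400 ppm, atmospheric CO2 is two orders of magnitude below the displacement level in question – which is the quantitative core of the complaint that the study offers no mechanism in the relevant range.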

Example C – Orphan Question, Failing in All Seven Elements of Hypothesis, and Especially Duty to Inform of Intelligence

A Population-Based Study of Measles, Mumps, and Rubella Vaccination and Autism. Madsen, Hviid, et. al. (Link)

This is the notorious ‘Danish Study’ of the relationship between the MMR vaccination and observed rates of autism psychiatric confirmed diagnoses inside the Danish Psychiatric Central Register. These are confirmed diagnoses of autism spectrum disorders (Autism, ADD/PDD and Asperger’s) over a nine year tracking period (see Methodology and Table 2). In Denmark, children are referred to specialists in child psychiatry by general practitioners, schools, and psychologists if autism is suspected. Only specialists in child psychiatry diagnose autism and assign a diagnostic code, and all diagnoses are recorded in the Danish Psychiatric Central Register. The fatal flaw in this study resided in the data domain it analyzed, and the resulting study design. 77% of autism cases are not typically diagnosed until past 4.5 years of age. Based upon a chi-squared cumulative distribution fit at each individual mean (μ) below from the CDC, with 1.2 years degree of freedom and 12 months of Danish bureaucratic bias, this yields a .10 + .08 + .05 = 0.23 chance of detection by CDC statistical practices – or a 77% chance of a false negative (miss). The preponderance of diagnoses in the ADD/PDD and Asperger’s sets serves to weight the average age of diagnosis well past the average age of the subjects in this nine year study – which tracked patients from birth (average age = 4.5 years at study end). See graphic to the right. From the CDC data on this topic, the mean ages of diagnosis for ASD spectrum disorders in the United States, where particular focus has tightened this age data in recent years, are:6

   •  Autistic disorder: 3 years, 10 months
   •  ASD/pervasive developmental disorder (PDD): 4 years, 8 months
   •  Asperger disorder: 5 years, 7 months
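The right-censoring argument above can be sketched numerically. The snippet below is a hypothetical Monte Carlo, not a reproduction of the author's chi-squared fit: diagnosis ages are modeled as normal distributions around the CDC means just listed, and the 1.2-year spread, the 1-year registry posting lag, and the equal weighting across the three categories are all assumptions, with follow-up ending at the subjects' average age of 4.5 years.

```python
import random

random.seed(1)

# CDC mean ages at diagnosis (in years), from the list above.
means = {"Autistic disorder": 3.83, "ASD/PDD": 4.67, "Asperger": 5.58}

sd = 1.2              # assumed spread of diagnosis age (years)
lag = 1.0             # assumed bureaucratic posting lag to the registry (years)
follow_up_end = 4.5   # subjects' average age at study end

def p_detected(mu, n=100_000):
    """Fraction of true cases whose (diagnosis age + posting lag) falls inside follow-up."""
    return sum(random.gauss(mu, sd) + lag <= follow_up_end for _ in range(n)) / n

detect = {name: p_detected(mu) for name, mu in means.items()}
overall = sum(detect.values()) / len(detect)   # naive equal weighting (assumption)

for name, p in detect.items():
    print(f"{name}: detected {p:.0%}, missed {1 - p:.0%}")
print(f"overall miss rate ≈ {1 - overall:.0%}")
```

Under these assumed parameters the overall miss rate lands in the same neighborhood as the ~77% false-negative figure in the text: when most diagnoses arrive after follow-up ends, absence from the registry mimics a confirmatory negative – the utile absentia effect described below.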

Note: A study released 8 Dec 2018 showed a similar effect through data manipulation-exclusion techniques in the 2004 paper by DeStefano et al.; Age at first measles-mumps-rubella vaccination in children with autism and school-matched control subjects: a population-based study in metropolitan Atlanta. Pediatrics 2004;113:259-266.7

Neither did the study occur in a society which has observed a severe uptick in autism, nor during the timeframe which has been most closely associated with autism diagnoses (2005+).8 Of additional note is the fact that school professionals refer non-profound autism diagnosis cases to the specialists in child psychiatry, effectively ensuring that all such diagnoses occurred after age 5, by practice alone. Exacerbating this is the fact that a bureaucratic infrastructure will be even slower in posting diagnoses to a centralized system of this type. These two factors alone will serve to force large absences in the data, which mimic confirmatory negatives. The worse the data collection is, the better the study results. A fallacy called utile absentia. The study even shows the consequent effect inversion (vaccines prevent autism) incumbent with utile absentia. In addition, the overt focus on the highly precise aspects of the study, and away from its risk exposures and other low-confidence aspects and assumptions, is a fallacy called idem existimatis: I will measure the depth of the water into which you are cliff diving to the very millimeter – but measure the cliff you are diving off of to the nearest 100 feet. The diver’s survival is now an established fact of science, by the precision of the water-depth measure alone.

In other words, this study did not examine the relevant domain of data acceptable to underpin the hypothesis which it purported to support. Forget mechanism and parsimony to prior art – as those waved bye-bye to this study a long time ago. Its conclusions were granted immunity and immediate acclaim because they fit an a priori social narrative held by their sponsors. It even opened with a preamble citing that it was a study to counter a very disliked study on the part of its authors. Starting out a process purported to be of science by being infuriated about someone else’s study results is not science, not skepticism, and not ethical.

Accordingly, this study missed 80% of its relevant domain data. It failed in its duty to inform the scientific community of peers. It is almost as if a closed, less-exposed bureaucracy were chosen precisely because of its ability to both present reliable data, and yet at the same time screen out the maximum number of positives possible. Were I a criminal, I could not have selected a more sinister means of study design myself. This was brilliance in action. Grade = F (diabolical study design, poor science).

All of the above studies failed in their duty to inform. They failed in their responsibility to communicate the elements of hypothesis to the outside scientific community. They were sciebam – someone asked a question, poorly framed and without any background research – and by golly they got an answer. They sure got an answer. They were given a free pass, because they conformed to political will. But they were all bad science.

It is the duty of the ethical skeptic to be aware of what constitutes true hypothesis, and winnow out those pretenders who vie for a claim to status as science.

The Ethical Skeptic, “The Elements of Hypothesis”; The Ethical Skeptic, WordPress, 13 Dec 2018; Web,


Skeptics Need You – But You Don’t Need Them

Stop striving to impress skeptics. Just because scientists employ skepticism, does not mean therefore that skeptics represent science. In fact, they only serve to personify a straw man of science. They seek to foment conflict between the public and scientists – because that serves to impart power to them and their club.
[Image: a hypocrisy meme, where a man disdainfully holds his intellectual-looking spectacles in the air and cites that the job of skeptics is to promote a better understanding of science – then, ironically, starts spinning a whole slew of reasons why science finds the reader unacceptable, calling them names and irrational.]

Skeptics have placed you under the spell of a little mind trick. They do not seek the truth of any particular matter, rather they seek only to leverage your sincerity, wonder and inquisitiveness towards a goal of power, humiliation and polarization. They wish you to infer that scientists regard your lines of inquiry, rights and notions – as woo. They wish to imply that science relies upon proof and that scientists have disproved you, and further regard you as anti-science (q.e.d. anti-them).  Upon sensing this finger-point generated animus, scientists begin to perceive much of the public as a frothing, anti-science horde who cannot fathom what they do, and further must now be ignored in order to save the world. This is the actual lesson skeptics are teaching all concerned on both sides – “You must worship me as the smartest, cede unto me the power of punishment (of both the public and scientists) – as I now represent science.” It is a clever little social trick of identity bullying.

In this they ironically pose as a factor which promotes understanding of science on the part of the public.

Skeptics desperately need you – to add fuel to their superiority complex, polarizing message, power to humiliate, club member ranking, acclaim, and to tacitly reinforce their religious view of the world. However, you do not need them. You do not need to invite them to events to ‘provide a skeptical perspective’, as this is part of the game of misrepresentation which they play on everyone. Most researchers are already skeptical in their work; most scientists are skeptics by nature and training. This infusion of discipline is a natural part of living a sincere, hard working life. But this does not mean that self-identity skeptics do any research, nor that they are sincere, nor that they are scientists – nor especially that they represent science.

Through personifying a straw man of science, skeptics seek to foment conflict between the public and science
– a state wherein their club gains authority along with the power to punish;
because both science and the public now perceive each other as the denialist enemy.
An enemy which you must fear, mistrust and marginalize.

Do not fall for this game. You will know that you have won, when skeptics ignore you back.

†  When we speak of ‘skeptics’ in this article, we are speaking of those who identify as ‘skeptic’ publicly, as a means of bullying, posturing and self-congratulation. Social skeptics. Fake skeptics. Those who regularly point the finger of ‘pseudoscience’, ‘woo’, ‘credulity’, and ‘anti-science’. Your identity as an ethical skeptic, is simply a means to say ‘I do not participate in that game – other than to oppose agency and bullying, I am not here to promote any given conclusion nor myself. I love science. I love mankind – let’s solve this together and without identity bullying.’

     How to MLA cite this article:

The Ethical Skeptic, “Skeptics Need You – But You Don’t Need Them”; The Ethical Skeptic, WordPress, 4 Dec 2018; Web,


The Apothegm Makes the Poison

The dose makes the poison. This statement is not a logical truth. To cough up this notorious fur-ball of an apothegm in a serious broad-scope discussion concerning toxicology risk informs all concerned about your personal ignorance and desire to deceive – more so than it speaks anything particular about me. The masters who let loose the dogs of skepticism have found such organic lying to be very effective in asset preservation.

One of the most notorious catch-phrases of pseudo-wisdom the ethical skeptic will encounter from a social skeptic poseur is the apothegm ‘The dose makes the poison’. It is not that this statement is false. The basis of the quip resides in scientific validity, and it is categorically true regarding lethality, yes. However, the statement is not a logical truth.1 Logical truth is the state of syllogism which the utterer is deceitfully hoping you will infer from this football of an apothegm. It is a means of lying through stating something which is only conditionally accurate – hoping that the victim will accept the statement as one which addresses the context of toxicity. Discussions of this ilk are rarely over lethality, and most often pertain to the impact of a toxin on a population, environment or family. If your conversant conflates these two concepts in order to enforce the entailed organic lie, or hands you cartoon LD50 charts comparing glyphosate with table salt, stop talking with them immediately. They are a non-player character. A social skeptic.

As an ethical skeptic, never ever ever conduct your communication under such misrepresentation by locution – people spot this, but will not mention it to you. You will lose credibility, yet not know it, nor understand why in the end. The apothegm is not necessarily true (different from being ‘false’), and that is what disqualifies it from being a logical truth (ethical knowledge). This is of critical path importance to the ethical skeptic. Let’s examine a couple of examples before we look at the entire domain of such a statement’s limited applicability (Exhibits I and II below).

If I am asked to consume diazinon in my drinking water, for example (we are never ‘asked’, but let’s pretend we live in such an ethical world), because its use increases corn yields by 14% – when the US has run a glut of corn production every year for decades now – then the ppm tolerance for diazinon in my water in such a circumstance is ZERO ppm. A mean lethal dose measure (LD50) does not apply, because there is no economic benefit to be derived from the risk I undertake. This, though a simple exercise example, is actually how ethical toxicology is done in the big boy world. When I work establishing food and trade markets, this is the type of mechanism I petition to have inserted into the market constraint dynamics and enterprise APIs used by large trade aggregation desks. This is ethics. Everything else is academic – and possibly immoral. I do not care how much you know, or whether you use pedophrasty to promote your product by placing pictures of starving children into your ads – if you are lazy or greedy, and that laziness or greed serves to harm others, then you are acting under malice and oppression by court definition.

The Puppet Show: Comparing Aggregate Benefit to Individual Risk (while Ignoring Aggregate Risk)

If however, I am forced to drink some dosage of diazinon because involved stockholders inside several companies know my representatives and key regulatory agency members, and were able to get the pesticide pushed through for higher-risk use – and furthermore, these stockholders are now able to buy beachfront vacation homes on St. George Island rather than rent smaller back-lane beach cottages – well, under that stark risk/benefit scenario I will then drink the toxin, I suppose. Their benefit outweighs my risk. Now the astute ethical skeptic will observe that toxin risk is never measured in terms of population descriptives – only individual risk. It is individual-risk LD50 versus a diffuse set of poorly estimated and unconfirmed aggregate benefits; the risk is never expressed in terms of aggregate risk – and is never followed up on. In reality the state of ethics in toxicology – per below – is one sad state of affairs.

Social skeptics, as usual, provide no help at all in this matter – ironic, when this is their claimed identity and life goal.

Notice that all the measures regarding toxin risk relate to the individual.2 There are no studies which attach a measured population effect in humans to an introduced toxin. There are studies of the farming community, and there exists some study of environmental impact – but no studies following up with human populations as a group. Not even the devisement of a suitable measure.3 I find that amusing (horrifying), given that the ethical assessment of toxin risk pertains to impacts and measures relating to populations, not individuals. All of the following entries below, two new observations and five previous ones, are cataloged into The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data, and apply in this circumstance:

missam singuli

/philosophy : pseudoscience : study design/ : a shortfall in scientific study wherein two factors are evaluated by non-equivalent statistical means. For instance, risk which is evaluated by individual measures, compared to benefit which is evaluated as a function of the whole – in ignorance of risk as a whole. Conversely, risk being measured as an effect on the whole, while benefit is only evaluated in terms of how it serves a single individual.
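To make the missam singuli mismatch concrete, here is a minimal numeric sketch. Every figure below – population size, per-person risk, aggregate benefit – is a hypothetical value chosen purely for illustration, not data from any study:

```python
# Minimal numeric sketch of 'missam singuli'.
# ALL figures are hypothetical, chosen purely for illustration.

population = 300_000_000            # exposed population (assumed)
individual_annual_risk = 1e-6       # per-person chance of serious harm (assumed)
aggregate_benefit_usd = 50_000_000  # benefit, quoted only as a total (assumed)

# The two figures usually quoted, on mismatched bases:
print(f"per-person risk:        {individual_annual_risk:.0e}  (sounds negligible)")
print(f"aggregate benefit:      ${aggregate_benefit_usd:,}  (sounds enormous)")

# The two equivalent-basis figures usually left uncomputed:
expected_harmed = population * individual_annual_risk
benefit_per_person = aggregate_benefit_usd / population
print(f"expected people harmed: {expected_harmed:.0f}")
print(f"benefit per person:     ${benefit_per_person:.2f}")
```

Quoting only the first two printed figures (tiny per-person risk, huge aggregate benefit) makes the trade look obvious; the two figures actually needed for a like-for-like comparison go uncomputed.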

Virtue Telescope

/philosophy : sophistry : deception/ : employment of a theoretical virtue benefit projected inside a domain which is distant, slow moving, far into the future, diffuse or otherwise difficult to measure in terms of both potential and resulting impact, as exculpatory immunity for commission of an immoral act which is close by, obvious, defined and not as difficult to measure. Similar to but converse of an anachronistic fallacy, or judging distant events based on current norms.

And of course a smattering of fallacies and crooked thinking art which we have examined before.

idem existimatis – attempting to obscure the contributing error or risk effect of imprecise estimates or assumptions, through an overt focus on the precision or accuracy of other measured inputs inside a calculation, study or argument.

ignoro eventum – institutionalized pseudoscience wherein a group ignores or fails to conduct follow-up study after the execution of a risk bearing decision. The instance wherein a group declares the science behind a planned action which bears a risk relationship, dependency or precautionary principle, to be settled, in advance of this decision/action being taken. Further then failing to conduct any impact study or meta-analysis to confirm their presupposition as correct. This is not simply pseudoscience, rather it is a criminal action in many circumstances.

phantasiae vectis – the principle outlining that, when a human condition is monitored publicly through the use of one statistic, that statistic will trend more favorable over time, without any real underlying improvement in its related human condition. Unemployment not reflecting true numbers out of work, electricity rates or inflation measures before key democratic elections, crime being summed up by burglaries or gun deaths only, etc.

Yule-Simpson Paradox – a trend which appears in several different groups of data can be manipulated to disappear or reverse (see Effect Inversion) when these groups are combined.

Elemental Pleading – breaking down the testing of data or a claim into testing of its constituents, in order to remove or filter an effect which can only be derived from the combination of elements being claimed. For instance, to address the claim that doxycycline and EDTA reduce arterial plaque, testing was developed to measure the impact of each item individually, and when no effect was found, the combination was dismissed as well without study, and further/combination testing was deemed to be ‘pseudoscience’.
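The Yule-Simpson entry above is easy to demonstrate with real arithmetic. A minimal sketch, using the kidney-stone treatment figures classically cited to illustrate the paradox – treatment A wins inside each subgroup, yet loses once the subgroups are pooled:

```python
# Classic kidney-stone treatment figures, often used to illustrate
# the Yule-Simpson paradox: (successes, total) per treatment per subgroup.

groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each subgroup, treatment A has the better success rate.
for name, g in groups.items():
    a, b = rate(*g["A"]), rate(*g["B"])
    print(f"{name}: A={a:.0%}  B={b:.0%}  ->  {'A' if a > b else 'B'} wins")

# Pool the subgroups and the ordering reverses: B now looks better.
totals = {t: tuple(map(sum, zip(*(g[t] for g in groups.values())))) for t in "AB"}
a, b = rate(*totals["A"]), rate(*totals["B"])
print(f"combined: A={a:.0%}  B={b:.0%}  ->  {'A' if a > b else 'B'} wins")
```

The choice of aggregation level, in other words, can manufacture or erase a trend – which is precisely why the effect earns a place in a catalog of evidence misrepresentation.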

However, given that somebody out there is benefiting, I will gladly accept a drink containing 2 ppm (parts per million) diazinon over one containing 10 ppm, based upon this necessity of individual risk compared to aggregate benefit. Now diazinon features no efficacy curve (EC) of benefit for me to ingest; however, it does exhibit toxicity measures in 240-day rat studies. Certainly studies of value, and I am glad we completed such diligence. The NOAEL (No Observed Adverse Effect Level) of diazinon is set, as a result of such studies, at 0.02 mg/kg-bodyweight per day.4 For my body weight, this would equate to an 8 ounce glass of water per day containing 8.4 ppm or less of the chemical (the mean lethal concentration, LCt50, being much higher than this – so anything below the NOAEL level is considered safe). Thus, in theory, that same glass of water at 10 ppm would prompt observable adverse effects in my physiology. It won’t kill me though, right? That’s great news.
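For readers who want to check the 8.4 ppm figure, the conversion is a one-liner. A minimal sketch of the arithmetic – the 100 kg body weight is my assumption (the article does not state one), chosen because it approximately reproduces the quoted number:

```python
# Back-of-envelope check of the NOAEL-to-ppm conversion in the text.
# ASSUMPTION: a body weight of ~100 kg (not stated in the article).

NOAEL_MG_PER_KG_DAY = 0.02   # diazinon NOAEL, mg per kg body weight per day
BODY_WEIGHT_KG = 100.0       # hypothetical body weight (assumed)
GLASS_ML = 236.6             # 8 US fluid ounces, in millilitres

def max_safe_ppm(noael_mg_kg_day, body_kg, glass_ml):
    """Water concentration (mg/L, ~ppm) at which one glass per day
    delivers exactly the NOAEL dose."""
    daily_dose_mg = noael_mg_kg_day * body_kg   # allowable mg per day
    glass_litres = glass_ml / 1000.0            # 1 L of water ~ 1 kg
    return daily_dose_mg / glass_litres         # mg/L ~ ppm in water

ppm = max_safe_ppm(NOAEL_MG_PER_KG_DAY, BODY_WEIGHT_KG, GLASS_ML)
print(f"{ppm:.2f} ppm")
```

With these assumed inputs the function returns roughly 8.4–8.5 ppm, consistent with the article's figure at the precision quoted.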

Nonetheless, yes, I will choose to drink the lower 2 ppm dose any day. The dose does make the poison, inside this highly constrained conflation of adverse effect and toxicity.

However, to cough up this statement fur-ball at me in a serious debate about food and water contaminants means, first, that you are clueless enough to have highly underestimated your opponent and, second, that you do not really understand toxicology nor adverse effect all that well. It tells all concerned more about you than it does about me. Yes, this includes the case wherein you hold a PhD. LD50, LCt50, NOAEL and other exculpatory idem existimatis contentions of that ilk are most often cited by lazy science poseurs. These measures do not even begin to bear salience or relevance around the list of 20 different ways in which toxicity can harm our citizens and our family members (Exhibit II below).

No, the dose will not kill me. Lethality, and even Adverse Effects are red herrings. We are discussing toxicity.
The discussion has never been about whether or not the contaminant in the glass of water will hurt me right this moment.

If these stats do not address the questions which our families have intelligently raised about toxins
– then why should our scientists and skeptics not have already raised the same questions?

But Table Salt Had a Higher ‘LD50’ What Happened?

But does the dose actually make the poison? Is that a logical truth? If your child accidentally ingests some rat poison, such measures are absolutely critical. But for you and millions of others, hold on just a second. Here it is 20 years later – two decades of confidently ingesting NOAEL-safe 0.5 to 2 ppm diazinon glasses of water most every day – and suddenly you’ve gained 100 lbs across 3 years and have had to have both of your knees replaced because an aggressive form of rheumatoid arthritis has kicked in. Your same-age colleagues at the plant fully understand and cover for you. Your orthopedic surgeon is hesitant to perform the procedure because she wants you to lose 70 lbs first. She is not sure that you will be able to handle the difficulties involved in the surgery with the extra weight. Your spouse feels like he must have done something wrong. He changes his diet in an effort to help out, but to no avail. IBS and diabetes start to creep in periodically. All at a fairly young age. But but but… the LD50 of table salt was higher though!5 Must have been the table salt, and coffee too. It’s always the coffee.

We have an apothegm just for this type of circumstance as well: ‘Luck of the draw’.

OK, in an effort to be truthful when held to public account, social skeptics will admit that we have enough epidemiological data to know that the table salt and coffee did not cause your long-term exposure physical ailments after all. They just brought up those red herrings years ago in order to look smarter than you – and because this was what they were told to say. Can you, as an experienced skeptic, now go back and contact the study group which set the rat-240d-NOAEL for diazinon, and say “Hey, we might need to examine this with a bit more scientific rigor and follow-up”? The fact is that I just observed adverse effects from something – and there are only a couple of culpable ‘somethings’ which could be considered, a set which includes diazinon; the least likely candidate of the set is ‘luck of the draw’ (pseudo-theory). The fact is that what we really needed were human-30y-NOLTAEL statistics, derived from comprehensive community data to begin with. The sad fact, however, is that such studies are rarely if ever done. Nobody wants to find out who had the bullet in a one-bullet firing squad.

And herein resides the rub – we don’t think we need to develop human-30y-NOLTAEL because we already have rat-derived LD50, LCt50 and NOAEL data.

To push for further science might endanger the St. George beachfront property. Better enlist the aid of some compromised-ethics fake experts who are smart-but-dumb, with dark teeth. If they don’t have any qualifications, have them call themselves ‘skeptics’. You can hire them cheap; all you have to do is pay their celebrity leaders a pittance, and they will do anything. Ignorance is asset preserving. The science is settled. (Another deadly apothegm of social skepticism.)

In the Real World, Acute Lethal Dose is Rarely the Issue

These ethical dilemmas, along with the ‘our pesticide is less toxic than table salt’ baloney, elicit just one simple example of the problem with ‘the dose makes the poison’ applied as a panacea across the entire issue domain of toxicology. Even more compounding in risk, however, is the specter that there are at least 17 other toxicity expression vectors which bear a similar incompatibility to the classic ‘LD50 – dose makes the poison’ paradigm. For most toxicity vectors – those we now understand much better than our 1920s-minded skeptics do – the dose does not make the poison. And you are particularly stupid-to-gullible to believe otherwise.

The safety of glyphosate, the active ingredient in the Roundup weedkiller, has been compared to many things over the years, but the table salt comparison stands out as particularly ridiculous. In fact, the state of New York took legal action against Monsanto for false advertising over this very claim. Monsanto agreed to cease and desist from making the claim, but it is still commonly parroted by aggressive supporters of GMOs and chemical company apologists.

Suffice it to say that no one’s going to intentionally ingest enough salt or glyphosate to immediately die from their exposure, and comparing the LD50 values of chemicals that can have serious health harms other than immediate mortality is so misleading as to be irresponsible.

~ Dr. Nathan Donley, Center for Biological Diversity6

The following pages are available for your use, as you see fit – to partly educate the vulnerable public about what they need to know regarding food/water/medicines toxicology. This is not a case of ‘Dunning-Kruger’ – as toxicology’s application inside this context fails the limits test for application of that ‘fallacy’.7

Such matters are your responsibility as well as your right. If you and your family are getting sick for no reason – raise hell about it. They are just gonna have to put up with us.

However, if you are a professional toxicologist/epidemiologist and wish to make comment/input on the graphics below – I will certainly consider improving them with your help. That would be absolutely appreciated.

Exhibit I

Exhibit II


epoché vanguards gnosis


How to MLA cite this blog post =>

The Ethical Skeptic, “The Apothegm Makes the Poison”; The Ethical Skeptic, WordPress, 29 Nov 2018; Web,

November 29, 2018 Posted by | Agenda Propaganda, Argument Fallacies | Leave a comment
