The Ethical Skeptic

Challenging Agency of Pseudo-Skepticism & Cultivated Ignorance

The Definitive Guide to Ethical Skeptic’s (TES/ES) Coronavirus SARS-CoV-2 (2019) Analysis


“Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it.” ~1984, George Orwell

Foremost, one should bear in mind that the name ‘The Ethical Skeptic’ is posed as a discipline of thinking and not in reality a personal appellation. Indeed it is framed in the impersonal, in such context as one might place The Creative Architect or The American Practical Navigator; more publication title than boast. With that now clear: during the Coronavirus SARS-CoV-2 (2019) pandemic I have made an effort to lend my professional skills as best they could be applied to aid in its response. I have had the good fortune to be called to apply such skills in minor contribution, both to the response effort itself and on the Twitter social media platform – therein, particularly in terms of moral and knowledge support to those at risk both from the virus’ impact and from our over-reaction to, and ignorance of, Covid-19’s dynamics.

During that time, I began to grow uncomfortable with the ways in which this pandemic was exploited to craft political power and to allow extremists to enact harm upon their opponents’ economics, well-being and lives. Thus I began to apply my skills as well, to see whether the media and agency-bearing academics were indeed telling the truth. As it turned out, in critical significance, they were not.

Marketing disinformation as an academic, governmental or media authority,
to compel despair, panic or compliance inside a population under duress of epidemic, war or economic collapse –
these things are indistinguishable from war crimes. They are a violation of basic human rights, and as such
constitute acts of class harm, scienter, racketeering and oppression.

During this time frame and series of tweets, a number of key concepts and terms have arisen which have proved to be a source of confusion for new readers of The Ethical Skeptic – terms, however, which were necessary in dispelling cultivated ignorance around the topic: both Orwell’s Newspeak and enforced Nelsonian ignorance. Since the account is accruing a number of new followers each day, I cannot possibly attempt to re-define those terms every day for every single new follower; I would never have time left to post a new idea, nor to conduct my real-life professional work.

Accordingly, below are some key terms and principles which are necessary in a Wittgenstein-level understanding of what is occurring behind the scenes with Covid-19. It is not that I am forecasting, conspiracy spinning, nor that everything I say is correct – rather that these terms are necessary as underpinning to comprehend the risk entailed to the general stakeholding public – and the hazard presented by those who wish to exploit the tragedy as a means to abuse of their enemies and in furtherance of their political power.

Please note that general terms of Ethical Skepticism can be found at the following two links as well:

The Tree of Knowledge Obfuscation

The Ethical Skeptic Glossary

As well, you can hear The Ethical Skeptic speak about Covid-19 from his perspective, on the Todd Herman Show and 770 KTTH, from 3 September 2020 here:

The Ethical Skeptic on The Todd Herman Show – KTTH 770

Key Covid-19 Charts/Terms/Principles Employed by The Ethical Skeptic

Agency – an activated, intentional and methodical form of bias, often generated by organization, membership, politics, or hate- or fear-based agenda and disdain. Agency and bias are two different things. Ironically, agency can even tender the appearance of mitigating bias, as a method of its very insistence. Agency is different from either conflict of interest or bias. It is actually stronger than either, and more important in its detection. Especially when a denial is involved, the incentive to double-down on that denial – to preserve office, income or celebrity – is larger than either bias or nominal conflict of interest. One common but special form of agency is the condition wherein it is concealed, and expresses itself through a denial/inverse-negation masquerade called ideam tutela. When such agency is not concealed, it may be called tendentiousness.

ideam tutela – concealed agency. A questionable idea or religious belief which is surreptitiously promoted through an inverse negation. A position which is concealed by an arguer because of their inability to defend it, yet is protected at all costs without its mention – often through attacking without sound basis, every other form of opposing idea.

Tendentious – showing agency towards a particular point of view, especially one around which there is serious disagreement in the at-large population. The root is the word ‘tendency’, which means ‘an inclination toward acting a certain way.’ A tendentious person holds their position from a compulsion which they cannot overcome through objective evaluation. One cannot be reasoned out of a position which they did not reason themselves into in the first place.

Amphibology – a situation where a contention may be interpreted in more than one way, due to ambiguous sentence structure. An amphibology is permissible, but not preferable, only if all of its various interpretations are simultaneously and organically/logically true (not merely semantically).

Anecdote – a story, recount or stand-alone observation which may or may not constitute epistemic data.

modus praesens – an anecdote to the presence. Often can constitute data, in that it observes a presence supporting a contention. It is not proof; however, neither can it be dismissed by wave-of-the-hand false skepticism.

modus absens – an anecdote to the absence. This is not data, rather most often serves as an attempt to craft data-of-denial (an appeal to ignorance).

Aperçu – the signature terminology or catch-phrases that a person will employ to demonstrate that they are inside the approved club or academic circle around a particular topic. If you identify the same principle, however employ a different name for it, this will stir anger in this type of poseur.

Apparatchik – the opposite of being a skeptic. A blindly devoted official, follower, or organization member, of a corporation, club or political party. One who either ignorantly or obdurately lacks any concern or circumspection ability which might prompt them to examine the harm their position may serve to cause.

Arrival/Arrival Distribution – the novel and incremental pattern and count/quantity of how a given outcome or particular thing occurs over a series of consecutive time intervals (days, weeks, minutes, months, etc.). New cases each day, or fatalities each day, etc.

Attentodemic – a pandemic or other social malady which arises statistically for the most part from an increase in testing and observation activity. From the two Latin roots attento (test, tamper with, scrutinize) and dem (the people). A pandemic/tragedy, whose curve arises solely from increases in statistical examination and testing, posting of latent cases or detected immunity as ‘current new cases’, as opposed to true increases in fact. For a graphic depiction of the results of attentodemic practices inside Covid-19, see Salting/Juking Reported Cases below.

Backlog Stuffing (BS) – State departments of health may unilaterally, or through direction by higher agency or political intermediaries, choose to delay reporting of data inside one week’s period, and then further report several days of data as if it occurred in a single day. In this manner, a sufficient amount of backlog or infrequent report arrivals can be exploited to craft the appearance of a false trend, rise or record in the data. When exploited by media to incite panic or despair inside the context of a population under risk, it is a human rights crime as well.
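The arithmetic involved is trivial, but the optics are not. A minimal sketch (with hypothetical numbers) of how holding back several days of reports and then releasing them in one batch manufactures an apparent record day out of a genuinely flat week:

```python
# Backlog stuffing sketch: withhold several days of reports, then release
# them in one batch. Totals are unchanged; only the arrival shape distorts.
def report_with_backlog(true_daily_arrivals, hold_days):
    """Hold the first `hold_days` reports back, then dump them with the next day."""
    reported = [0] * hold_days                       # nothing reported while held
    dump = sum(true_daily_arrivals[:hold_days + 1])  # backlog + current day at once
    reported.append(dump)
    reported.extend(true_daily_arrivals[hold_days + 1:])
    return reported

flat_week = [100] * 7                       # a genuinely flat week of arrivals
print(report_with_backlog(flat_week, 3))    # [0, 0, 0, 400, 100, 100, 100]
```

The dump day now reads as a 4x ‘record’, although nothing in the underlying arrivals changed.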

The Bricklayer’s Error – the presumption that academic, heuristic or deep single-function expertise (bricklaying) qualifies one to stand as authority on how the broader issue is to be managed (how the house is to be built or lived in). Presupposing that a physicist who studies precession should be the foremost expert on bicycling. Related to the phrase: ‘Experience trumps consilience. Consilience trumps heuristic.’

Bridgman Point – the point at which a principle can no longer be dumbed-down any further, without sacrifice of its coherency, accuracy, salience or context.

Bridgman Point Paradox – if you understood, I could explain it to you – but then again – if you understood I wouldn’t have to explain it to you.

Broken Window Parable (Bastiat Fallacy) – actually a counter to the broken window parable, which proposes that even in disaster an economy profits on the repair and recovery. The Bastiat Fallacy points out the logical failure of such reasoning. Proposed by Nineteenth Century French economist Frederic Bastiat, the fallacy states that the economic benefit derived from recovering from disaster is never superior to the economic benefit which was lost as opportunity cost – the resources sacrificed in the disaster, along with those committed to repair the damage. The economic benefit of war is never compared to what was lost as a result of the war.

Broken Window Certainty Parable (Bastiat’s Certainty Fallacy) – a modified form of the Broken Window Parable, wherein the claim is made that harm imparted by a bad actor or disaster cannot be excused as ‘having been going to happen anyway, even if good decisions were made’. If benefit from such a disaster cannot be claimed as a positive credit for the disaster (Broken Window Parable), then neither can an argument that ‘harm would have happened anyway’ stand as a permissive nor partial exoneration of the disaster or bad action/decisions. Covid upheaval deaths, even though they might have happened under circumstances of good decision making, cannot therefore be deducted from the set of deaths which resulted from a reality of bad decision making.

Case Adjustment Methodology – raw case reporting data was not acceptable to The Ethical Skeptic, as it contained too much agency to be trustworthy. The data was refined by an algorithm, and those results were then tracked for accuracy over time. This algorithm performed very well when compared to other data results and when used as a calculation touch point (consilience). Reported cases were factored by the rate of hyper-testing (T) after April 5, and then also by hospitalization census decreases (H) and increases in hospital dwell time (h) after May 6, by the following formula:
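The formula itself was presented as a chart image in the original post. The sketch below reconstructs the described factoring from the prose alone; the exact functional form, and the direction in which the H and h factors are applied, are assumptions:

```python
# Hedged reconstruction of the Case Adjustment Methodology described above.
# The exact formula was an image; variable roles follow the prose only, and
# the direction in which H and h are applied is an assumption.
def adjusted_cases(reported, T, H=1.0, h=1.0):
    """
    reported : positives reported by the states that day
    T        : testing volume relative to the 4/5 strike date (1.5 = 150%)
    H        : hospitalization census relative to baseline (after 5/6)
    h        : hospital dwell time relative to baseline (after 5/6)
    """
    return reported / T * (H / h)

# Worked example from the Daily Case Arrivals entry: 10,000 positives at a
# 150% testing rate relative to 4/5 normalize to ~6,667 positives.
print(round(adjusted_cases(10_000, T=1.5)))  # 6667
```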

Casuistry – the use of clever but unsound reasoning, especially in relation to moral questions; sophistry. Daisy chaining contentions which lead to a preferred moral outcome, by means of the equivocal use of the words within them unfolding into an apparent logical calculus – sometimes even done in a humorous, ironic or mocking manner. A type of sophistry.

Catalyseur – a third party or media member who seeks to instigate conflict between science and its at-risk public – and who then exploits such conflict to attain career or club advancement, money or power. A conflict exploitation specialist, or any entity which stands to gain under the outcome of a lose-lose conflict scenario which they have served to create, abet or foment. Someone who acts as a third party to two sides in an argument or conflict, advising each about the ‘truth’ of the other party involved, and urging an escalation of the factors which drove the conflict to begin with.

CDC Excess All-Cause Deaths Chart 1 – those deaths each week, as tracked in the CDC MMWR database, which are in excess or rise above the typical number of deaths for that same week over the average of a set of previous years. The database used to derive this can be found here:

In the associated chart, we compile each week’s total reported deaths and then track how low weeks -1 through -12 are relative to the final number they arrive at on week 13. This is called the lag curve, and is used to normalize or adjust each week’s Morbidity and Mortality Weekly Report (MMWR) death report by the CDC. Any lag which surprises us above this projected level of cases is termed ‘supralag’. Each week the CDC Lag Curve is adjusted if consistent supralag is observed, in an effort to make sure that supralag is indeed an exception each week.
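The lag normalization above amounts to dividing each reported week by its historical completeness at that lag. A minimal sketch (the completeness fractions here are hypothetical, not the actual CDC lag curve):

```python
# Sketch of lag-curve normalization. If a week of MMWR data is historically
# only c complete at k weeks of lag, divide the reported count by c to
# project its eventual (week-13) value. The fractions below are hypothetical.
lag_completeness = {1: 0.55, 2: 0.78, 3: 0.90, 4: 0.96}

def lag_adjust(reported_deaths, weeks_of_lag):
    c = lag_completeness.get(weeks_of_lag, 1.0)  # beyond the curve: complete
    return reported_deaths / c

def is_supralag(reported_deaths, weeks_of_lag, projected_final):
    """Supralag: a report arriving above the level the lag curve projected."""
    return lag_adjust(reported_deaths, weeks_of_lag) > projected_final

print(round(lag_adjust(33_000, weeks_of_lag=1)))  # 60000
```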

‘Lockdown Deaths’ are estimated separately by means of the Full Covid Death Accountability Chart each week, and the net figure (net of reduced car accidents and iatrogenic accidental medical deaths) is published as the new baseline (solid beige line), a modified average of each week from 2014 – 2017 (2018 was an exception year and threw the average off to a mis-representative level).

Died ‘With’ calculations come from overlap with the 14 major causes of death in the US, and their commensurate surge with Covid-19 deaths week for week. These death statistics for primary mortality are derived from the National Center for Health Statistics; Weekly Deaths by State and Cause of Death; 12 Aug 2020;

Epidemic Threshold is determined to be 7.2% of all MMWR deaths after Lockdown Deaths have been netted out. The CDC uses anywhere from 5.9% to 7.2% for more flu-like illnesses. Thus this latter figure was chosen for Covid-19.

CDC Excess All-Cause Deaths Detail Chart 2 – those deaths each week, as tracked in the CDC MMWR database, which are in excess of, or rise above, the typical number of deaths for that same week over the average of a set of previous years. The database used to derive this can be found here. This chart shows the calculations which drive and feed Chart 1. Each MMWR weekly report is adjusted for measured CDC lag, compared to last week to determine supralag, netted down by Lockdown Deaths, and then compared to the actual number of Covid deaths reported by the states 7 days after the date of the MMWR report upon which Chart 2 is based.

At the bottom of the chart resides the latest CDC lag curve, exhibiting the math used to adjust and normalize weeks 1 – 21 of the CDC MMWR report each week. As well, an estimation is made of how many excess deaths have not yet been reported by the states as of 7-days later (green tally at bottom). Finally the full tally of estimated deaths from Lockdown is shown in the green calculation at the bottom.

CDC Wonder Database – a database managed by the CDC which provides comprehensive breakouts of US fatality data by year and MMWR week, along with a query access tool – all of which can be found here:

Close-Hold Embargo – a common form of dominance lever exerted by scientific and government agencies to control the behavior of the science press. In this model of coercion, a governmental agency or scientific entity offers a media outlet the chance to get the first or most in-depth scoop on an important new ruling, result or disposition – however, only under the condition that the media outlet not seek any dissenting input, nor critically question the decree, nor seek out its originating authors for in-depth query.

Chart – as distinct from a ‘graph’, in which two orthogonal measures are compared to each other mathematically, a chart is a demonstration of a set of relationships (mathematical, non-math and/or both) which are being considered for contribution to a specific inference. A graph contains a mathematical function displayed across two labeled axes (abscissa and ordinate). A chart does not necessarily conform to this simple principle. Forcing a chart one does not understand, to become a graph, is a common sign of inexperience.

Consilience – a form of derivation of inference which is stronger than mere inductive inference. A method of deriving confidence in a hypothesis from disparate analyses, sources, media, methods, heuristics, calculations and perspectives – which then without prior manipulation, bear symmetry or agreement in their conclusions. If the banker, butcher, brick-layer, baker and barber all agree that the economy is bad – it is probably bad.

Consilience Touch Point – one instance of consilience between two independently derived observations which bear symmetry or agreement.

Covid-19 Fatality Full Accountability Charts 1 and 2 – these charts attempt to break out the entire body of excess deaths, as reported in the CDC Excess All-Cause Deaths Charts 1 and 2, into the significant types of fatality concerned:

  • Died of Covid-19 – self explanatory
  • Died with Covid-19 – significant not because it implies these are not real Covid fatalities, but rather because a pull-forward effect will manifest later in the year, wherein deaths are actually lower than average
  • Deaths Not Reported – potential Covid-19 deaths, still as such unreported by state DHS/DOH offices
  • Covid-19 Reaction Fatalities – avoidable non-Covid deaths which were actualized either past (or future, but this is not tallied on the charts above) through politically-motivated, propaganda-based, irrational, money-opportunistic, virtue-symbolic, or hate-fueled decision making
  • Covid-19 Upheaval Fatalities – non-Covid deaths which were an unavoidable consequence of a reasoned, well planned, and limited-scope pandemic response
  • Net Accident Reduction – reduction in auto fatalities and iatrogenic accidental deaths

Finally, the total deaths are compared between Covid-caused and Covid Reaction-caused deaths. Life-years lost are estimated by the following calculations:

Excess Cardio/Diabetes x 15 years remaining
Alzheimer’s x 4 years
Stroke Access x 6 years
Flu & Pneumonia x 4 years
Cancer/Medical Access x 20 years
Suicide Addiction Abandonment & Abuse x 40 years

This is compared to an average years remaining for the average Covid fatality of 5.6 years (from the FL risk of death by Covid-Age Chart). The sources for this data are many, and include:

  1. CDC Wonder – Weekly Counts of Deaths by State & Select Causes, 2014-2020;
  2. National Cancer Institute;
  3. CDC Heart Disease Facts;
  4. CDC Stroke Facts;
  5. CDC Vitals Signs: Suicide on the Increase in US;
  6. Scientific American: COVID-19 Is Likely to Lead to an Increase in Suicides; & ‘Cries for help’: Drug overdoses are soaring during the coronavirus pandemic;
  7. CDC Drug Overdose Death Statistics; & Talbott Recovery: Alcoholism Statistics You Need to Know
  8. CDC FluView Weekly Influenza and Pneumonia;
  9. Association of Adverse Effects of Medical Treatment With Mortality in the United States; & Wikipedia: Annual Motor Vehicle Fatalities;
  10. Dierenbach: The coronavirus response has been deadly;
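The life-years tally reduces to a small weighted sum. In the sketch below, the multipliers are those listed above, while every death count is a hypothetical placeholder (the actual weekly tallies live in the charts themselves):

```python
# Life-years-lost comparison. Multipliers come from the list above; all death
# counts here are hypothetical placeholders, not the chart's actual tallies.
years_remaining = {
    "cardio_diabetes": 15, "alzheimers": 4, "stroke_access": 6,
    "flu_pneumonia": 4, "cancer_medical_access": 20,
    "suicide_addiction_abandonment_abuse": 40,
}
reaction_deaths = {  # hypothetical excess non-Covid deaths per category
    "cardio_diabetes": 20_000, "alzheimers": 10_000, "stroke_access": 5_000,
    "flu_pneumonia": 3_000, "cancer_medical_access": 6_000,
    "suicide_addiction_abandonment_abuse": 4_000,
}
reaction_life_years = sum(
    reaction_deaths[k] * years_remaining[k] for k in years_remaining
)

covid_deaths = 180_000                 # hypothetical
covid_life_years = covid_deaths * 5.6  # avg years remaining per Covid fatality

print(reaction_life_years)         # 662000
print(round(covid_life_years))     # 1008000
```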

Cultivated Ignorance/Cultivation of Ignorance – If one is to deceive, yet also fathoms the innate spiritual decline incumbent with such activity – then one must control and abstract a portion of the truth, such that it serves and sustains ignorance on the part of the general population – a dismissal of the necessity to seek what is unknown. The purposeful spread and promotion or enforcement of Nelsonian knowledge and inference. Official knowledge or Omega Hypothesis which is employed to displace/squelch both embargoed knowledge and the entities who research such topics. Often the product of a combination of pluralistic ignorance and the Lindy Effect, its purpose is to socially minimize the number of true experts within a given field of study.

Daily Case Arrivals Chart (Covid-19) – depicts four independently derived elements in comparison, so as to evaluate consilience between those elements and their base assumptions, constraints and calculations. The core arrival form depicted is then used as the basis to derive other charts, which test its validity.

Element 1 – Daily Case Arrivals (Blue Vertical Columns) – These are daily new cases reported by the states at The Covid Tracking Project. These reported cases are actuals as-reported up until 4/5, whereupon every daily report is then divided by a factor derived from the percentage increase in testing after 4/5. For instance, 10,000 positives derived from a 150% testing rate relative to 4/5 would result in 6,667 positives ‘relative to positives measured on 4/5’. This is called the ‘strike date’. Any date between 4/5 and 4/29 may be chosen as the strike date, and essentially the same curve manifests. A strike date is a point, reasonably close to an inflection point, at which three things occur: 1. we are past a cases peak, 2. testing of only-the-sick has ended, and 3. test kits have just begun proliferating to all areas of the nation in large quantity. This is known as an Indigo Inflection Point. In this way, the true arrival distribution of cases is estimated, and the gain-amplification of testing increases is filtered out of its arrival distribution. As with all constraints, calculations and observations, this arrival distribution is then monitored against other indices over time, to evaluate its performance. It is not taken as immediate and final gospel.

After 5/6, the same formulaic principle is utilized, factoring reported cases against hospital admissions – so that an increase in hospitalization admissions can manifest in the case arrival data, and the increase in testing alone is not allowed to falsely mask or hide a regional surge or outbreak (as did occur in south border counties during the July time frame). Admissions were estimated using state-reported daily hospital census, factored by Hospital Dwell Time (see its entry in this lexicon). The following formula encompasses both of these factoring principles after 5/6.

Element 2 – China Reported Case Arrivals (Green Vertical Columns) – These are the actual new Covid-19 cases per day reported by China to the international community. A green dotted line is fitted as an estimate of a more likely case level, based upon the rates of transmission observed by other nations. This dotted line is not further tested for validity, however is also not used in any successive calculations or models.

Element 3 – WHO Consensus Report on SARS 2003-Cov-1 Arrival Form – At the bottom of the chart is a cut and paste of the WHO Consensus raw new cases arrival form. Raw numbers are usable here because it is the only data we hold, and the context of measure is essentially the same throughout its horizon. The arrival form is not valid in the vertical dimension, however is valid in terms of its seasonality (horizontal dimension). The reason this is used in the chart is that coronaviruses are considered to be ‘sharply seasonal’. A quote from one of the studies which concluded this (Monto, et al. Journ of Infect Dis), is contained in the window for this WHO arrival form.

Element 4 – CDC Excess All-Cause Deaths Curve (Yellow Arrival Curve) – This is the number of excess deaths in the US, as derived from the CDC’s weekly MMWR Report. These deaths are then partitioned into Covid ‘with and of’, and ‘Lockdown’ deaths, by separate methodology. This resulting curve of death arrivals by date is then used here to match to case arrivals by date and to examine for consilience in both magnitude and function. Such consilience was consistently observed throughout the modeling period.

Data : Information : Intelligence – data does not inform on its own; especially raw data – which most often will serve to dis-inform one who is not used to being held accountable for the results of their analytical work. A professional principle which cites that ‘data must be denatured of its noise, into information. Information must be then transmuted through consilience and deduction, into intelligence. Intelligence is then the only basis from which to infer or take action. Be wary of agency which exploits the appearance of raw data.’

Death ‘from’/Death ‘with’ Covid – an important principle, in that there are those who died with Covid – who simply/trivially had Covid RNA in their nostrils while they were already dying of something else – as opposed to those who actually died from Covid as a primary or secondary cause of death. The reason this is important is that the former group causes a ‘pull forward’ effect which will artificially make it appear that suddenly Americans are not dying of other diseases any longer, either now or in the future. This effect, as raw data, will serve to mislead.

Deductive Argument/Inference – an argument which uses premises and logic to eliminate all reasonable alternative considerations, or sets of possible contribution/consideration, through comparison to the strength of its primary assertions. The conclusion is contended to follow with logical necessity from the premises and reductions. Reductions can exist as either elimination of alternatives by hypothesis falsification research, or simply by set constrainment. For example, All men are mortal. Plato is a man. Therefore, Plato is mortal.

Demoveogenic Shift – a condition wherein the amateurs of a science are proactive, well versed, and investigate in greater depth and along the critical path, while in contrast the academic fellows of the discipline are habitually feckless, cocooned and privileged.

Dunning-Kruger Abuse – a form of ad hominem attack. Inappropriate application of the Dunning-Kruger fallacy in circumstances where it should not apply; instances where every person has a right, responsibility or qualification as a victim/stakeholder to make their voice heard, despite not being deemed a degree, competency or title holding expert in that field.

Embargo Hypothesis (Hξ) – an idea or dissent which must be squelched at all costs – even and especially unto the sacrifice of the integrity of science itself.

Epidemic Threshold – when a virus is ‘in season’ – the point at which fatalities from that virus exceed 7.2% of all fatalities in a given week.

epoché (ἐποχή, “suspension”) – a suspension of disposition. The suspended state of judgement exercised by a disciplined and objective mind.

Equivocation – the misleading use of a term with more than one meaning, sense, or use in professional context by glossing over which meaning is intended in the instance of usage, in order to mis-define, inappropriately include or exclude data in an argument.

Eristic Argument – an argument which is posed with the goal of winning and embarrassing an opposing arguer, as opposed to seeking clarity, value or common ground. Usually stems from the arguer’s past psychological injury, narcissism and combative habituation.

Ethical Skeptic’s Dictum of Rhetoric – what is posed in the rhetorical, can only be opposed with the rhetorical. One cannot answer a rhetorical question with objective reason and evidence.

Ethical Skeptic’s Razor – never ascribe to happenstance or incompetence, that which coincidentally, consistently and elegantly supports a preexisting agency. Never attribute to a conspiracy of millions, what can easily arise from a handful of the clever manipulating the ignorance of the millions.

Ethical Skeptic’s Five Laws of Risk – in order of progression of application logic, five laws frame the ethics of risk in a social context:

  1. A system which imparts risk upon stakeholders, perpetually bears the burden of proof of any reasonable or implicit claim to have mitigated that risk.
  2. In absence of a reasonable accounting of risks, there is no such thing as a claim to virtue.
  3. A peer reviewing a risk strategy must also bear that risk themselves.
  4. Stakeholders placed at risk, are peers in its review.
  5. An ignorance of risk or absence of risk strategy, is itself a risk strategy.

Experience Trumps Consilience/Consilience Trumps Heuristic – means that a consilience of having tested/failed at everything which does not work (deductive experience) is more powerful in its inference than is a consilience of suggestive (inductive ‘might be’) observations alone. However, a consilience of inductive observations is stronger than the sophistication, reliability or academic correctness of any single given heuristic. A method of detecting the purely academic or poseur inside a topic. See Jamais l’a Fait.

Fallacy of Interest Conflict – a condition wherein a stakeholder bearing an opinion inside a legitimately plural scientific or public-impact disagreement, is falsely accused of bearing a conflict of interest for any form of desire to protect from harm or ruin their family, business, home, or those they hold dear. Ironically, the accusation of ‘conflict of interest’ in such circumstance, often itself constitutes a suppression of human rights (an action which can itself bear a conflict of interest).

False Positive Rate – a PCR RNA test of high sensitivity and tolerant specificity is designed for purposes of mercy – to not miss diagnostic cases. This is done specifically to minimize suffering from missed illness. However, it is well established that tests of such design may also produce false positive outcomes as part of their assay design. When a population is tested by PCR tests, and 99% of that population is well, then there will be a high number of false positives arising from the testing of that population, even and especially compared to false negatives. In addition, besides the issue of test design, there is the reality that testing labs may suffer from laxity in procedure, kit contamination, or employee error or malfeasance. All of these factors combine into what is known as a ‘False Positive Rate’ for a particular set of tests.

If we have a 1% rate of false positives inside a population which is testing at 1% prevalence, in theory roughly half of the positives being detected are indeed false. As of late August the US was conducting on average about 680,000 tests per day. A 2.3% false positive rate would yield 15,640 false positives per day. The average positives detected during that same time was around 45,000 per day. Thus, potentially 35% of the reported positives in late August 2020 were indeed false. A study by Cohen and Kessel, updated and re-printed 18 August 2020, cited a measured median false positive rate of 2.3% for Covid RT-PCR testing. They confirmed the reality that “the likely sources of these false positives (contamination, human error) are more directly connected to laboratory practices and layouts than to which particular assay is used.”
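The late-August arithmetic above checks out directly:

```python
# Reproducing the late-August arithmetic from the entry above.
tests_per_day = 680_000
false_positive_rate = 0.023          # Cohen-Kessel measured median
reported_positives_per_day = 45_000

false_positives = tests_per_day * false_positive_rate
share_false = false_positives / reported_positives_per_day

print(round(false_positives))        # 15640 false positives per day
print(round(share_false * 100))      # 35 (% of reported positives)
```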

False Tail (The Principle of) – the condition at the end of the 2020 Covid-19 pandemic where cases began to dwindle to the point where the content of false positives, number of persons conducting a second validation test after a positive first test, and RNA-shedding detections (both trivial and 12 week shadow of infection) – all three of these circumstances comprised a significant portion of daily reported cases by the states. In the chart to the right one can see the false tail depicted as the horizontal line progressing over time to the right.

The charts to the upper and lower right depict the Cohen-Kessel measured rate of false positives overlain on the positive case detects reported by the states and tallied at The Covid Tracking Project. This is a significant issue of concern, and citizens should be highly upset that this raw data was passed to the media as ‘truth’. Raw data is never ‘truth’, even though it can be sold as fact (which it is not). We do not use raw data when analyzing the flu each year; instead we take raw lab reports and use them to project actual cases of the flu each season. We’ve gotten pretty good at this. This exhibits clearly the delineation between ‘data’ and ‘information’. We owed our population at risk ‘information’, and not mere ‘data’.

On this particular day in the upper right-hand chart, it was estimated that 56% of the reported cases that day arose from one of the following:

1. The 12 week shadow of RNA PCR detectability

2. Duplicate testing to confirm positive or detect recovery at 35%

3. The lower-band rate of false positives at 0.8 – 2.3% (avg = 1.55%; see Cohen-Kessel Study under ‘False Positive Rate’)

As an additional example, two of these three false tail contributors are deducted from the positive test result total in the second chart to the right, resulting in the intelligence that actual cases were flat, despite a 22% weekly increase in testing. This amplified caseload lent false support to the notion of keeping society partially shut down – a condition which was fatal to small and medium-sized businesses in the US, but not to their conglomerate competitors.
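The deduction described can be sketched as follows. The 35% duplicate-testing share and 1.55% average false positive rate come from the list above; the day's totals are illustrative placeholders:

```python
def deduct_false_tail(reported_positives, tests_run,
                      duplicate_share=0.35, fp_rate=0.0155):
    """Deduct two of the three false-tail contributors from a day's
    reported positives: duplicate/confirmation tests and false positives."""
    duplicates = reported_positives * duplicate_share
    false_positives = tests_run * fp_rate
    return reported_positives - duplicates - false_positives

# Illustrative day: 45,000 reported positives out of 680,000 tests run
print(round(deduct_false_tail(45_000, 680_000)))  # 18710 estimated actual
```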

Florida Risk of Death Case Study – a tiered-risk study by age, done on Covid-19 deaths in the state of Florida, which compared the frequency of cases and deaths by age tier for Covid-19 with the nation’s general actuarial risk of death by age. There were two right hand y-axes: the first indicated the risk by age from US actuarial tables (gathered from the US Social Security Administration Actuarial Life Table Risk of Death by Age), while the second calculated a comparable risk of death by age among those in Florida who had contracted Covid-19 and died at any time during that year of age (gathered from Florida Dept of Health Open Data).

What this served to show was that those who were dying of Covid in Florida were dying much later in life than the age at which the average person in the US dies, by age tier, for all causes. This demonstrated that the vast majority of those dying of Covid-19 were dying only months earlier than they normally would have. This is still tragic, but it constituted critical information we should have had early in the Covid response effort. It allows for the calculation of life-years-lost comparatives between Covid-19 fatalities and fatalities from over-reaction to Covid-19.

Gaussian Blindness (see medium fallax) – the tendency to characterize an entire population by the mean (μ) of the population, along with a Normal Distribution profile or other easily applied distribution, as if these were descriptive of the whole body of a set of data. ‘I’ve got my head in the oven and my ass in the fridge, so on average I’m OK.’

Gompertz Curve – a compound mathematical model for a time series arrival form, named after Benjamin Gompertz (1779–1865). It is a sigmoid function which describes a normal distribution of one activity blended with a Poisson or similar distribution of delay or bureaucratic effect – producing a unique form which hints at a blending of a natural arrival distribution with a human arrival distribution (shopcraft).
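A minimal sketch of the standard Gompertz form may help make the shape concrete. The parameters a, b and c below are illustrative placeholders, not values fitted to any Covid series:

```python
import math

def gompertz(t, a=1.0, b=5.0, c=0.1):
    """Cumulative Gompertz sigmoid: a = asymptote, b = displacement along t,
    c = growth rate. Illustrative parameters, not fitted values."""
    return a * math.exp(-b * math.exp(-c * t))

def daily_arrivals(t, a=1.0, b=5.0, c=0.1):
    """Daily arrivals as the day-over-day difference of the cumulative curve."""
    return gompertz(t, a, b, c) - gompertz(t - 1, a, b, c)

# The inflection (peak daily arrivals) falls at t = ln(b)/c, where the
# cumulative curve sits at a/e (~37% of the asymptote) - earlier than a
# symmetric sigmoid's 50%, giving the skewed arrival form described above.
```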

The Great Repression – the period of severe economic fallout, greater than that of the Great Depression, characterized by 38 million jobs lost (11.5%), 500,000 or more lockdown lives lost, and 6 million famine lives lost, which resulted from an over-reaction to and/or incompetent understanding of coronaviruses and Covid-19 in particular. Marketing disinformation as an academic, governmental or media authority, in order to compel despair, panic or compliance inside a population under duress of epidemic, war or economic collapse – these things are indistinguishable from war crimes. They are a violation of basic human rights, and as such constitute acts of class harm, scienter, racketeering and oppression.

Within the context of an impingement of human rights, incompetence and malice are indistinguishable.

Heat/Heat Map – a map showing where the hottest US Counties are located or showing where the greatest increase in cases or fatalities has been recently (the ‘heat’).

Herd Resistance vs Herd Immunity – a 70% herd immunity ‘threshold’ is overly simplistic to a systems engineering or market analytical mind. Societies are not a homogeneous soup of peer-equals. For the purpose of introducing heterogeneity into the estimation of herd immunity, we propose two modifications to the idea. First, because the principle carries an enormous amount of mathematical slack (single elements) and elasticity (overall), it cannot be called a ‘threshold’ – as that is simply a term of ignorance as to how systems work and function. Therefore, a resistance range-band is more appropriate for this type of analytical estimation. Second, this resistance range-band must be developed through a heterogeneous schema of both society and the type of detection of infection. For instance, ‘asymptomatic-connectors’ are the highest HRT-sensitivity group in the genera. They both conduct the highest number of transactions between groups of Processors, and also feature the greatest ‘reach’ (horizontal = days of active transmission, and vertical = the environment in which they transact: ship, building, prison, school, etc.) in viral exchange rate.

Accordingly, in our estimation of the 14 – 18% herd (or community) resistance band, we used the following genera to stratify the math (note that ‘Connector’, ‘Processor’ and ‘Sink’ are simulation/modeling entities). This banding estimation turned out to be correct at the end of the pandemic season months later.

Connector – an individual who, because of their role in society, performs a high number of transactions per day. Airline counter agents, cash handlers, janitorial staff, delivery people, medical staff, food workers & preparers, point of sale operators, etc. Overall, infections come primarily from this group.

Processor – an individual who interacts on a regular basis with a Connector population, yet less often with other Processors, and with only a specific Sink population each day. They may not conduct as many transactions, and may spend significant time between Connector and Processor/Sink transactions, such that a virus either dies or is detected before a Sink or other Processor transaction is encountered. Mothers, fathers, laborers, teachers, doctors, staff workers, etc.

Sink – an individual who does not leave their residence as a normal part of their day, save for one to three trips per week at most. They rarely transact with Connectors, and most often do so with a specific small group of Processors. The elderly, work-at-home, household managers, infirm and disabled or ill, etc. Overall, deaths come primarily from this group.

Symptomatic – those cases who showed symptoms of Covid-19, as confirmed subclinically or higher by a medical office or doctor.

Asymptomatic – those cases who detected positive for Covid-19 by antibody or PCR testing, but who do not recall being sick.

RNA-Dirty – those who detected positive for Covid-19 but were never really infected with the virus at all, rather simply carried dead fragments in their mucus or on their clothing. Also those who were infected up to 12 weeks earlier and still shed dead RNA fragments.

False Positive – a false detection of Covid-19 by

– sensitivity error
– excessive CT threshold >35
– failures of process and lab design
– individual errors/contamination
– malfeasance/maliciousness

Through conducting a weighted average analysis of each of these schema groups, and assuming that the most exposed group (Connector-Symptomatic) achieved a 70% herd-segment immunity, the math can be constructed to show how an 18% resistance range can be expressed inside such a population. The elasticity around this range is enormous, with herd resistance ranging from 5% in farming communities all the way to 70% in prisons, for instance.
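A hedged sketch of the weighted-average construction described above. The population shares and segment immunity levels below are hypothetical placeholders chosen only to land inside the 14 – 18% band, not the figures actually used in the TES schema:

```python
# Hypothetical schema: genus -> (population share, assumed segment immunity)
segments = {
    "Connector": (0.15, 0.70),   # most-exposed group, at 70% segment immunity
    "Processor": (0.55, 0.12),   # placeholder assumption
    "Sink":      (0.30, 0.03),   # placeholder assumption
}

herd_resistance = sum(share * immunity for share, immunity in segments.values())
print(f"{herd_resistance:.1%}")  # 18.0% - the top of the 14-18% band
```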

Histogram – a graph showing the density/breakout of distribution or arrival of one variable against time or another variable.

Hope-Simpson Latitude Effect – Hope-Simpson was a doctor who in 1965 established that childhood chickenpox virus reactivates in adults as the painful condition called shingles. He also became noteworthy by documenting that his hometown in Southwest England came down with epidemic influenza at the same time every year as Prague, which shared the same latitude. While Hope-Simpson attributed this to sun exposure and Vitamin D, with regard to the flu – it is clear that coronaviruses have sharply marked seasonal patterns, which also differ by tropical and northern latitude bands.

This effect is shown by the US Covid-19 progression inside the chart to the right. The blue vertical column bars indicate case arrivals each day in the northern 37 states, while the orange vertical bar arrivals show the patterns in the southerly hot 50 border counties or hot 13 states. While part of this orange bar surge was attributable to cross-border activity and migrant labor/wet food supply chain, there still transpired a bonafide increase in cases along the US south border states. This surge in cases peaked on July 16th, 13 or 14 weeks after the north latitude peak (one quarter of the year). As indicated in the chart, each case peak adhered to Farr’s Law, and as well conformed to the arrival pattern of SARS-CoV-1 (2003) as documented in the Consensus Report on SARS-2003 by the World Health Organization.

Hospital Dwell Time – a factor derived by multiplying the increase in the average length of stay (ALoS) of a hospital census by the rate at which nosocomial cases were added to hospital census, both after May 6. This was derived through a sampling of states which either reported admissions, or reported enough data from which admissions could be calculated, matched against those states’ progressions in hospital census. An example is shown to the right. Average length of stay is estimated from reported discharges and census for certain sample states (AZ, FL, GA). This drift between admissions and census is then projected into the future, and is corrected if it begins to depress the estimated case count below what the Grand Consilience Chart might support, as compared to CDC all-cause and state-reported fatalities. One such adjustment was done on August 20th, to correct for an over-aggressive ALoS calculation from 4/29 – 6/09. Nosocomial cases are estimated as an addition to this factor – and are watched as well for their net effect on normalized cases over time, as well as for the matching of ICU census versus admissions. So nosocomial factoring and ALoS are as much outputs of the model as they are its constraints.

F(DT) = (ALoS(0) / ALoS(t)) × (EV admissions / (Nosocomial + EV admissions))
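Expressed as code (variable names are illustrative; ‘EV admissions’ is used as given above, i.e. admissions excluding nosocomial cases):

```python
def dwell_time_factor(alos_baseline, alos_current, ev_admissions, nosocomial):
    """Dwell time factor F(DT): the ALoS drift ratio times the share of
    admissions that are not nosocomial. Variable names are illustrative."""
    return (alos_baseline / alos_current) * (
        ev_admissions / (nosocomial + ev_admissions))

# Illustrative: ALoS drifted from 5 to 8 days; 90 of 100 admissions non-nosocomial
print(round(dwell_time_factor(5.0, 8.0, 90, 10), 4))  # 0.5625
```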

Hospitalization Admissions/Census/Dwell Time Chart – admissions are those persons being admitted to a hospital for a given malady (and not as a nosocomial instance) on a given date. The chart on the right is derived from the hospital census figures given in the ‘Hospitalized – Currently’ column of the Covid Tracking Project’s Daily State 4pm SitRep Report. The blue line on the chart reflects this count of individuals and matches to the blue numbers on the left hand side y-axis. The orange line, however, is what is used in most TES models, as it reflects an estimate of real admissions to hospitals for a Covid-Like Illness on each given day. This number is estimated by means of a factor relative to census, based upon total Dwell Time calculations (see Dwell Time above and its associated formula).

Hospitalization Census – those people lying in beds in a hospital on a given date. When discharges and census are compared, the two can be used to estimate admissions by state, thereby further allowing for an estimate of Dwell Time drift between admissions and census – which we have sampled via 3 key states (AZ, FL and GA).

Hospital ICU Census – ICU census is those people lying in intensive care unit beds in a hospital on a given date. The chart on the right is derived from the hospital ICU census figures given in the ‘In ICU – Currently’ column of the Covid Tracking Project’s Daily State 4pm SitRep Report. The orange line on the chart reflects a 7-day moving average count of ICU individuals and matches to the orange numbers on the right hand side y-axis. When states or territories are added into or removed from these numbers, a normalization adjustment is made so as to not allow the number to whipsaw. This provides for better leveling than even the 7-day moving average can stand-alone, and has resulted in good velocity indicators for this important statistic.

The blue line is a 7-day average of total fatalities calculated from the differential between daily fatalities by state, as reported in the ‘Deaths’ column of the Covid Tracking Project’s Daily State 4pm SitRep Report. The comparative between ICU Census and deaths allows the astute analyst to observe that reported deaths can contain legacy data up to 12 weeks old in some cases, and most often in excess of 3 weeks old. This must be corroborated however by examining fatality epicurve or aging reports by those states who disclose such information. Through the analysis horizon, these two figures have held in common agreement. This is key in estimating Legacy Death Laundering (LDL2) in the Grand Consilience Chart.

Hot 50 Counties – the top 50 counties in terms of 7-day most recent growth in cases or fatalities during the July 4 – August 15 period of the pandemic tail. Typically ascertained by an average of the last three days’ reports divided by the same 3 days from last week (to avoid the weekend effect), or by comparing an average of this week to the same average from last week. The Hot 50 County tracking data was scraped each day of tracking from this source: USAFacts: Coronavirus Locations: COVID-19 Map by County and State. What this chart served to show was twofold:

First, this latter surge in cases was not a ‘second wave’ as much as it was a sympathetic overflow from southerly latitude patterns of the same virus. Hence the 50 county concentration of the surge’s hottest case growth, shown as of 17 July in the map to the right. These were border or border-trade counties handling the wet food supply chain into the US (cut flowers, rendered pork-chicken-beef, vegetables and fruits). This latitudinal effect is called a Hope-Simpson subtropical latitude effect. One key weakness of our preparation for any form of pandemic was our complete ignorance around this effect. This should have been known before we decided policy. Most predictive models for the pandemic were merely regression models, with high and low banding. None of them predicted the distinct Farr’s Law shape of this latitudinal surge.

Second, this surge in cases formed a peak around July 16th, which was immediately called by The Ethical Skeptic by means of the charts to the lower right, but took a full 3 weeks to be acknowledged by official channels and media, most of whom still did not even mention the tail off by early September. In the chart to the right, the 42-day growth to a peak was tracked for the hottest 50 counties in the US, regardless of which counties composed that grouping each day. The curve continued an upward progression until it lost that momentum on 16 July and never recovered it. The Ethical Skeptic was watching for a confirmatory rise in fatalities 28 days after the initial rise in cases – and indeed a mild rise in fatalities formed, confirming that this reported rise in cases was about 37% real, 63% salting through new practices of case detection. This salting was filtered out for the Case Arrival Curve Chart (see Salting/Juking of Reported Cases).

iCFR (often ‘IFR’ for short) – infectious case-fatality rate (risk). The total ‘died of’ and ‘died with’ fatalities from a given pathogen, divided by the estimated total population which was infected by that pathogen. Has nothing to do with symptomatic rates.

In Extremis – a condition of rising or extreme danger wherein a decision which is dependent upon an outcome of scientific study, must be made well in advance of any reasonable opportunity for peer review and/or consensus to be developed. This is one of the reasons why science does not dictate governance, but rather may only advise it. Science must ever operate inside the public trust, especially if that trust requires expertise from multiple disciplines.

Indigo Point (Inflection Point) – the early-on inflection point in a time series, system or set of events which is the complement of, or impetus behind, a Tau Point Inflection or ‘tipping point’. The indigo point is that event or mechanism which is manipulated early in a process, often surreptitiously and often representing an insignificant or underappreciated aspect of the system in question, which will alter/tip subsequent events towards a specific final outcome. It is the magician’s unremarkable sleeve.

ingens vanitatum Argument – citing a great deal of expert irrelevance. A posing of ‘fact’ or ‘evidence’ framed inside an appeal to expertise, which is correct and relevant information at face value; however which serves to dis-inform as to the nature of the argument being vetted or the critical evidence or question being asked.

Inchoate Action – a set of activity or a permissive argument which is enacted or proffered by a celebrity or power-wielding sskeptic, which prepares, implies, excuses or incites their sycophancy to commit acts of harm against those who have been identified as the enemy, anti-science, credulous or ‘deniers’. Usually crafted in such a fashion as to provide deniability of linkage to the celebrity or inchoate-activating entity.

Inductive Argument/Inference – an argument in which if the predicates are true and the relative quality or structure of logic is sound, then it is more probable that the conclusion will also be true. The conclusion therefore does not follow with logical necessity from the predicates, but rather with an increase in likelihood, hopefully converging to certainty. For example, every time we measure the speed of light in various media, it asymptotes to 3 × 108 m/s. Therefore, the speed of light in a medium-less vacuum is 3 × 108 m/s. Inductive arguments usually proceed from specific instances to the more general. In science, one usually proceeds inductively from data to laws to theories, hence induction is the foundation of much of science. Induction is typically taken to mean testing a proposition on a sample, or testing an idea on an established predicate, either because it would be impractical or impossible to do otherwise.

Jamais l’a Fait – Never been there. Never done that. Someone pretending to the role of designer, manager or policy maker – when in fact they have never actually done the thing they are pretending to legislate, decide upon or design. A skeptic who teaches skepticism, but has never made a scientific discovery, nor produced an original thought for themselves. Interest rate policy bureaucrats who have never themselves borrowed money to start a business nor been involved in anything but banks and policymaking. User manuals done by third parties, tax laws crafted by people who disfavor people unlike themselves more heavily, hotel rooms designed by people who do not travel much, cars designed by people who have never used bluetooth or a mobile device, etc.

Lag/Delay/Lag Curve – a delay is the period of time between when something actually occurs and when it is reported or cataloged as data. Lag is the mathematical description of how those instances of delay perform over time, on the part of one organization or measuring mechanism. A lag curve is the mathematical factoring which aids analysis by compensating for this characteristic set of delays/lag. An example CDC characteristic lag curve is shown below:
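The compensation described in this entry can be sketched generically. This is a hedged illustration only: the weights below are made-up placeholders, not the CDC's actual lag profile, and real lag adjustment is more involved:

```python
def adjust_for_lag(reported, lag_weights):
    """Scale up recent reports by the cumulative share of events expected
    to have been reported so far (a naive completeness adjustment)."""
    adjusted = []
    n = len(reported)
    for i, count in enumerate(reported):
        days_elapsed = n - i                         # days since this report date
        completeness = sum(lag_weights[:days_elapsed])
        adjusted.append(count / completeness if completeness else 0.0)
    return adjusted

# Illustrative lag profile: 50% of events reported within 1 day,
# 80% within 2 days, 100% within 3 days.
weights = [0.5, 0.3, 0.2]
print(adjust_for_lag([100, 100, 100], weights))  # ≈ [100.0, 125.0, 200.0]
```

The most recent days are scaled up hardest, since the smallest share of their events has yet been reported.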

Laundering (of Legacy Cases and Fatalities) – partly a term from data management and partly coined for the Covid-19 response. Legacy data is any form of old data which involves work in transcribing into a new system, approach or context. Laundering is the process of removing the old undesirable context for an item transacted in a market (information is a market), and fabricating a new beneficial context of use – free of the old one. Inside the context of a pandemic, this constitutes a method of misrepresentation to at-risk stakeholders. When conducted inside the context of a population under risk, or when exploited by media to incite panic or despair, it is a human rights crime as well.

Legacy Data Laundering (LDL1) – state departments of health may unilaterally, or through direction by higher agency or political intermediaries, choose to report cases or fatalities over 7 days in age as if they occurred on the day of reasonable reporting. In this manner, a sufficient amount of old data or newly converted old data (cases or fatalities) can be exploited to craft the appearance of a false trend, rise or level.

Lockdown Death Laundering (LDL2) – state departments of health may unilaterally, or through direction by higher agency or political intermediaries, choose to designate past CDC excess all cause mortality deaths which were not attributed to Covid-19 on a death certificate, as ‘suspected’ or ‘probable’ Covid-19 deaths. In this manner, deaths from lockdown, access and despair can be attributed either as direct Covid deaths or ‘death from Covid upheaval’ – and be reported at a later date as a current Covid-19 fatality. A sufficient amount of these fatality conversions can be exploited to craft the appearance of a false trend, rise or level of fatality.

Law of Large Numbers – a fallacy wherein an arguer does not perceive that a perceptibly large effect on a small population might serve to produce rather small numbers of outcomes, while a very small or subtle effect on a very large population, may well serve to produce surprisingly large numbers in outcome.
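A quick arithmetic illustration of the principle (the populations and rates below are invented for illustration only):

```python
# A conspicuous 40% effect on a small population of 1,000 people...
small_pop_outcomes = round(1_000 * 0.40)         # 400 outcomes

# ...versus a subtle 0.5% effect on a population of 330 million.
large_pop_outcomes = round(330_000_000 * 0.005)  # 1,650,000 outcomes

print(small_pop_outcomes, large_pop_outcomes)
```

The subtle effect on the large population produces outcomes four thousand times more numerous than the dramatic effect on the small one.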

Lemming Cycle (of Continuous Impairment) – both a parody of and satire upon the operations model called the Deming Cycle of Continuous Improvement (named after its developer, Dr. William Edwards Deming), the ‘Lemming Cycle of Continuous Impairment’ is a satirical commentary upon how panicked state-level government officials served to foment continued panic and despair around Covid-19. It is also derived from the allegory of fictitious lemmings who run off a mythical cliff en masse, in their hysteria to achieve a given inductively derived or panic-fueled goal. The cycle: panic over perceived cases of Covid-19 drove demands inside certain state governments for more testing and track-and-trace team funding, which resulted in increased PCR tests run and false/3-month-old positives being generated, which resulted in ever-higher positive case-detects being reported as raw data by the states, which resulted in more panic/despair. Incompetence: forgetting the data grooming methods employed with annual flu analytics and opting instead to simply report raw data – a scientific error.

The chart to the right depicts how the Lemming Cycle (boosted as well by false positive PCR testing results which were left unchallenged) fueled a feedback cycle of panic inside the media and among administrative government decision-makers regarding Covid-19.

Linear Affirmation Bias – a primarily inductive methodology of deriving inference in which the researcher starts in advance with a premature question or assumed answer they are looking for. Thereafter, observations are made. Affirmation is a process which involves only positive confirmations of an a priori assumption or goal. Accordingly, under this method of deriving inference, observations are classified into three buckets:

  1. Affirming
  2. In need of reinterpretation
  3. Dismissed because they are not ‘simple’ (conforming to the affirmation underway).

Lockdown – the term ‘lockdown’ is a twitter compression – a concise way of identifying a very complex set of intent and action, so that people understand what is being described in contrast to normal societal function. Very few nations actually locked down, and when they did, it was not for very long. In the context used within this analysis, a lockdown is any coercion or social mandate which serves to reduce moderate-sized businesses’ revenue by more than 25%, for more than three weeks. It is also any change in availability of medical services which serves to increase deaths which otherwise would not have occurred, by more than 2% of all-cause deaths in any given week.

Lockdown/Great Repression Fatality – a fatality reported by a state department of health and cataloged by the CDC in which the person’s death was caused by lack of access to medical services (diabetes, heart disease, stroke, other illnesses, etc.), lack of adequate diagnosis (cancer, etc.), or Suicide, Addiction, Abandonment & Abuse.

Logical Calculus – the quality of an argument or the ability of its objective features to commensurately lend support to its inference in terms of clarity, salience, soundness, critical path and depth.

MMWR Week – the US Centers for Disease Control ‘Morbidity and Mortality Report’ fiscal week of accounting for all US fatalities each year. For 2020, week 1 of the MMWR was the week ending January 4th 2020.

melochi kupets (Russian: мелочи купец) – trivia merchant. One who feigns competence or intimidates curious outsiders through display of detailed mundane knowledge of the industry in which they operate. One who cannot discern the distinction between a peripheral or irrelevant detail and a critical path element or principle.

missam singuli – a shortfall in scientific study wherein two factors are evaluated by non-equivalent statistical means. For instance, risk which is evaluated by individual measures, compared to benefit which is evaluated as a function of the whole – while ignoring risk as a whole. Conversely, risk being measured as an effect on the whole, while benefit is only evaluated in terms of how it benefits the individual or a single person.

Mobsensus – the inferential will of media, club, mafia, cabal or cartel – in which a specific conclusion is enforced upon the rest of society by means of threat, violence, social excoriation or professional penalty; and further, the boasting of or posing of such insistence as ‘scientific consensus’.

NCHS – National Center for Health Statistics.

NiCFR – net infectious case-fatality rate (risk). The total ‘died of’ and ‘died with’ fatalities from a given pathogen, minus those who died as a result of over-reaction or ignorance around that pathogen, divided by the estimated total population which was infected by that pathogen. Has nothing to do with symptomatic rates.
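Both rates reduce to simple ratios differing only in the numerator. The sketch below uses the 0.26% iCFR magnitude cited under the Grand Consilience Chart entry; the implied infection count and the over-reaction death figure are illustrative assumptions:

```python
def icfr(died_of_and_with, est_total_infected):
    """Infectious case-fatality rate (iCFR / IFR)."""
    return died_of_and_with / est_total_infected

def nicfr(died_of_and_with, overreaction_deaths, est_total_infected):
    """Net iCFR: nets out deaths from over-reaction to / ignorance of the pathogen."""
    return (died_of_and_with - overreaction_deaths) / est_total_infected

# Illustrative: 150,000 'died of/with' fatalities over an implied 57.7 million
# infections reproduces the 0.26% magnitude; the 30,000 over-reaction deaths
# figure is a placeholder assumption.
print(f"{icfr(150_000, 57_700_000):.2%}")           # 0.26%
print(f"{nicfr(150_000, 30_000, 57_700_000):.2%}")  # 0.21%
```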

Nelsonian Knowledge – Nelsonian knowledge takes three forms:

1. a meticulous attentiveness to, and absence of, that which one should ‘not know’,
2. an inferential method of avoiding such knowledge, and finally as well,
3. that misleading knowledge or activity which is used as a substitute in place of actual knowledge (organic untruth or disinformation).

The former (#1) is taken to actually be known on the part of a poseur. It is dishonest for a man deliberately to shut his eyes to principles/intelligence which he would prefer not to know. If he does so, he is taken to have actual knowledge of the facts to which he shut his eyes. Such knowledge has been described as ‘Nelsonian knowledge’, meaning knowledge which is attributed to a person as a consequence of his ‘willful blindness’ or (as American legal analysts describe it) ‘contrived ignorance’.

Nosocomial – an illness which is contracted or occurs while one is a patient in a hospital, having been admitted for a completely separate condition.

nulla infantis – a pseudo-argument, sometimes cleverly disguised or hidden inside pleonasm, which basically is the equivalent of saying ‘nuh-uhhh’… Latin for a child’s ‘no’. Usually followed by an appeal to have the opponent shut up or be silenced in some manner.

Ockham’s Razor – “Pluralitas non est ponenda sine neccesitate” or “Plurality should not be posited without necessity.” The words are those of the medieval English philosopher and Franciscan monk William of Ockham (ca. 1287-1347). This principle simply means that, until we have enough evidence to compel us, science should not consider outsider theories. But it also means that once there exists a sufficient threshold of evidence to warrant attention (plurality), then science should seek to address the veracity of a counter claim. SSkeptics bristle at the threat of this logic and have sought to replace this tenet with their shade-change version, “Occam’s Razor.”

Omega Hypothesis (HΩ) – the argument which is foisted to end all argument, period. A conclusion promoted under such an insistent guise of virtue or importance, that protecting it has become imperative over even the integrity of science itself.

Original Sin – the justification which was used to explain slavery worldwide, until the United States and non-religious minds changed that thinking in the 1800’s. The idea that a race, culture or skin color bears an inferiority or debt to another one, and that therefore it is the right of the superior/debt-holding culture to abuse the former in tyrannical rule, economy, taxation, representation and servitude. In absence of the power to enact this, the original sin stands as justification for violence against the group condemned under the original sin (quo facto malo).

Outbreak (see Epidemic Threshold) – when an epidemic threshold is attained in a given set of counties/cities, yet is not spreading geographically as did its original outbreak, or as would a new virus/season.

Pace Daily Cases & Fatalities Chart (Grand Consilience Chart) – the chart which compares extrapolated (estimated) actual infections of Covid-19 with both CDC excess all-cause and daily state SitRep-reported fatalities – to detect when one or more of these elements begins to drift out of sync or no longer makes sense. This also allows for a reasonable estimate of the infectious case-fatality rate (iCFR). This approach allowed TES to estimate an accurate iCFR of 0.26% weeks before the CDC published their ‘Best Scenario’ iCFR of the same magnitude. It brings together multiple results and evaluates their congruence for a moderate level of accountability in inference – superior to induction, but not yet fully deduction or falsification.

Element 1 – Extrapolated Total Infectious Cases (Blue Vertical Columns) – this is the estimated total infections among US citizens in all 50 states and 6 territories. It is scaled up by a fixed ratio of actual cases to detected cases (see Daily Case Arrivals Chart) based upon a moving index of seroprevalence across 11 studies which sampled this index in a variety of states and countries (see Seroprevalence Antibody IgG/IgM Studies Chart). The arrival form is from the Daily Case Arrivals Chart, while the numeric magnitude is calculated from the 6.9% seroprevalence sampled in the Seroprevalence Chart. The rest is simply a ratio calculation, to project actual US infectious cases.
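Element 1's scale-up is a simple ratio calculation, sketched below. The 6.9% seroprevalence is the figure cited above; the cumulative detected-case total and US population are illustrative placeholders:

```python
def extrapolate_infections(detected_cases_to_date, seroprevalence, population):
    """Scale detected cases up to estimated total infections, returning both
    the estimate and the fixed scaling ratio applied to daily arrivals."""
    est_total_infections = population * seroprevalence
    scale = est_total_infections / detected_cases_to_date
    return est_total_infections, scale

# Illustrative: 5.7M cumulative detected cases, 6.9% seroprevalence, 330M people
total, scale = extrapolate_infections(5_700_000, 0.069, 330_000_000)
print(round(total), round(scale, 1))  # 22770000 4.0
```

Each day's detected case arrivals would then be multiplied by the fixed `scale` ratio to project actual infectious case arrivals.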

Element 2 – CDC Excess All-Cause Fatalities (Yellow Line) – excess all cause fatalities (from the CDC Lag Adjusted CDC All Cause Fatalities vs CV19 Fatality by MMWR Week Chart) are assembled into a yellow line and superimposed over both the state SitRep-reported deaths each day, and the estimated daily infectious case arrivals identified in Element 1 above. These three curves should move in a given ratio throughout the horizon, unless an exception has occurred such as Legacy Data Laundering LDL-1 or Lockdown Death Laundering LDL-2.

Element 3 – Daily State SitRep Reported Fatalities (Orange Vertical Columns) – these are the deaths reported by the states each day in their 4:00pm SitReps and as reported by The Covid Tracking Project. Simple transcription of the number reported is employed for this chart element. The degree of Lockdown Death Laundering (LDL-2) can be ascertained by comparing the height of each day’s orange column versus the yellow CDC excess all-cause deaths line. Since that yellow line is both adjusted for documented CDC lag and matched consistently to the blue columns (case arrivals), it is considered a much higher-confidence figure on actual deaths each day.

Paradox of Virtue (Covid-19) – if leadership had not shut down, all 150k Covid deaths would be blamed on that mistake. This flawed virtue-signal argument then forces us to conduct activity which is 5 – 10x more damaging. Like a bad SAW movie. We are exploited by evil.

Pluralistic Ignorance – most often, a situation in which a majority of scientists and researchers privately reject a norm, but incorrectly assume that most other scientists and researchers accept it, often because of a misleading portrayal of consensus by agenda-carrying social skeptics. They therefore choose to go along with something from which they privately dissent or toward which they are neutral.

Plurality – adding complexity to an argument. Introducing for active consideration, more than one idea, construct or theory attempting to explain a set of data, information or intelligence. Also, the adding of features or special pleading to an existing explanation, in order to adapt it to emerging data, information or intelligence – or in an attempt to preserve the explanation from being eliminated through falsification.

Pneumonia, Influenza, Covid-19/Lockdown (PIC) Fatalities – a scheme on the part of second-tier government officials, including the US Centers for Disease Control, to conflate the upcoming 2020 influenza and pneumonia season as being one and the same with the tail of the Covid-19 outbreak. By this mechanism, the dwindling 1.6% excess Covid deaths characteristic of the October 2020 timeframe, which did not classify as ‘epidemic level’ (5.7%), could be mixed equivocally with annual P&I deaths and gain-boosted artificially back to the 7.0%+ range (the CDC announced ‘7.2%’ on October 16th, 2020). In this manner, oppressive lockdown mandates could remain in place because of the epistemic doubt as to whether or not Covid-19 was beyond its end of season. Moreover, economic slow-downs and shelter-in-place orders could be extended until April 2021, when the flu season naturally ends.

This scheme was identified in the data released by the CDC, by The Ethical Skeptic on October 16th, and was cited as a ‘crime against humanity’.

Precautionary Principle Burden of Proof – when money or human rights are at stake, a claim to risk of exposure to corruption does not bear the burden of proof – the entity managing that money or human right (e.g. voting, or reporting on a pandemic risk) bears the active and ongoing burden of proof that their system bears 99.9997% integrity.

Principle of Peerhood – (‘peer’ is a word derived from nobility ranking and matching) – I shall not tell an epidemiologist his business, unless he infers from his work that I should necessarily undertake a harm or ruin – at that point, I am now a peer. The stakeholder placed at risk is the peer review.

Probative – a measure of the quality of a data or observation set, such that it serves to inform a given critical path of reason or investigation, as opposed to generally informing about peripheral or circumstantial issues, or not informing at all. This measure of the quality of data is orthogonal to the issue of the reliability of such data or observations. Observations which are salient to the question being asked tend to bear greater probative potential than do ignoratio elenchi observations. Observations which allow for deductive inference tend to bear greater probative potential than do inductive ones, and inductive observations are in turn superior to abductive inferences, and so on. The goal of systems intelligence is to assemble probative observations and derive perspectives/questions which improve their reliability – not to assemble reliable information and then attempt to make it probative (armchair intelligence or the streetlight effect).

Pseudo-Theory – a catch-all explanation or critique, construct, belief or overarching idea which explains anything, everything and nothing – all at the same time. That which explains everything, likely explains nothing.

Pull-Forward Effect/Fatality – also see ‘Death from/Death with Covid’. The group of people who were already in the process of dying (age- and/or illness-related) – who died 1 to 40 weeks earlier than they would have, because they caught Covid-19. This shows as a dip in the death rate after Covid is over. The pull-forward effect can be seen in the chart to the right. In this case, two significant segments of time exist. First is the period in which Covid directly or indirectly precipitated an increase in deaths attributable to known natural causes (9,079 fatalities overage versus normal). Second is the period thereafter, in which these additional deaths served to create a hole in the death rate starting 8 weeks later and onward (13,429 fatality deficit versus normal). In this instance, Covid-19 impacted death rates by ‘pulling forward’ deaths which would have occurred in July and August, and forcing those to occur in April and May. As of the publication of the chart to the right, it is estimated that we have experienced possibly 40-45% of the pull-forward deficit. The full pull-forward deficit will not be known until as far out as May of 2021. This data was obtained from pivot analysis of the Big 14 mortality database scraped from the National Center for Health Statistics; Weekly Deaths by State and Cause of Death; 29 Aug 2020.

quo facto malo – Latin for ‘having done this evil’. When a person desires to do evil to another, they will manufacture or fantasize in their mind offenses their target has committed, which serve therefore to justify their actions – harm which they had conducted, or intended to conduct, from the very beginning, but were simply waiting for the right excuse to blame it upon. See ‘original sin’.

Reason (The Three Forms)

Abductive Reason (Diagnostic Inference) – a form of precedent based inference which starts with an observation then seeks to find the simplest or most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as inference to the best known explanation.

Inductive Reason (Logical Inference) – is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument may be probable, based upon the evidence given.

Deductive Reason (Reductive Inference) – is the process of reasoning from one or more statements (premises) to reach a logically certain conclusion. This includes the instance where the elimination of alternatives (negative premises) forces one to conclude the only remaining answer.

RNA/Virus Detection Half-Life – the period after which half of those who were infected at a given time no longer present detectable (dead or inactive) Covid-19 RNA. The half-life period has been evaluated to be as much as 7 weeks, while the entire period of detectability is 14 weeks. This noise in data collection was exploited by nefarious, power-hungry political forces during the pandemic.
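Assuming simple exponential decay (an assumption for illustration; the text gives only the 7-week half-life and 14-week cutoff), the fraction of a once-infected cohort still presenting detectable RNA can be sketched as:

```python
def detectable_fraction(weeks: float, half_life: float = 7.0,
                        cutoff: float = 14.0) -> float:
    """Fraction of a once-infected cohort still presenting detectable
    (dead or inactive) viral RNA after `weeks` weeks, under a simple
    exponential-decay assumption with a hard detectability cutoff."""
    if weeks >= cutoff:
        return 0.0   # beyond the quoted 14-week detectability window
    return 0.5 ** (weeks / half_life)

print(detectable_fraction(0))    # 1.0  (everyone detectable at infection)
print(detectable_fraction(7))    # 0.5  (one half-life)
print(detectable_fraction(14))   # 0.0  (window closed)
```

This lingering detectability is exactly why late-season positive tests can overstate active spread: a positive PCR result may reflect weeks-old, inactive RNA.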

Salting/Juking Reported Cases – one or more of a variety of methods of boosting reported cases of Covid-19 to make the pandemic seem hotter, growing when actually in decline, or larger than it is in reality. These include methods such as:

legacy data laundering,
backlog stuffing,
antibody (AB) test results mixed with PCR test results,
focus on hot spots,
posting track and trace %-pos results only,
testing prisons, care facilities and factories only,
temperature screening selected testing,
report of ‘suspected’ cases,
non-reporting of negative tests,
delay or lag exploitation,
multiple tests on one person,
cross-border cases,
hospital comprehensive screening,
nosocomial cases,
paying people who test positive,
batch swab testing followed by individual testing, etc.

In the chart to the right, only 37% of the gentle rise in percent-positive at the right-hand side of the blue trend line (the July 16th surge across our 50 hottest counties) represented real cases, while 63% of that rise was crafted through salting/juking of reported cases.

sCFR (often ‘CFR’ for short) – symptomatic case-fatality rate (risk). The total ‘died of’ and ‘died with’ fatalities from a given pathogen, divided by the estimated total population which presented symptoms of that pathogen. It has nothing to do with infection rates.
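As a minimal worked example of the definition (the counts here are hypothetical, purely to show the arithmetic):

```python
def scfr(fatalities: int, symptomatic_cases: int) -> float:
    """Symptomatic case-fatality rate: all 'died of' and 'died with'
    fatalities divided by the estimated symptomatic population."""
    return fatalities / symptomatic_cases

# Hypothetical inputs: 1,200 deaths among 200,000 symptomatic cases
print(f"{scfr(1_200, 200_000):.3%}")   # 0.600%
```

Note the denominator is the *symptomatic* population, not all infections; dividing by total infections instead would give the (smaller) iCFR.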

Semantic vs Logical Truth – a semantic truth only applies some of the time or inside a specific context (a semantic principle/doctrine). A logical truth (or law) applies to all conditions of the domain under discussion. A form of equivocation involves exploiting a semantic truth as a logical truth – through sleight-of-hand, changing the context of its employment without the soundness of first addressing whether or not the principle actually applies inside that new context.

Scienter – a legal term that refers to intent or knowledge of wrongdoing while in the act of executing it. An offending party has knowledge of the ‘wrongness’ of their dealings, methods or data, but chooses to turn a blind eye to the issue while executing or employing such dealings, methods or data. This is typically coupled with a failure to follow up on the impacts of the action taken, a failure of science called ignoro eventum.

Seroprevalence/Seroprevalence Escalation Chart – the prevalence (usually expressed as a percentage) of Covid IgG and/or IgM antibodies in the general population of a compartment, indicative of how extensive the spread of Covid-19 (or possibly an unknown, similar and previous virus) had been at some point in the past (4 weeks prior for IgG and 1 week prior for IgM). It is important to remember that this number is always growing during a pandemic or outbreak. In the chart to the right, a strike point for seroprevalence was established using a weighted average from 11 key studies up until that date which bore applicability to United States demographics in which Covid-19 was prevalent. An estimate was made of a 6.9% seroprevalence (95% CCI [5.01, 8.79]) for the nation as a whole, as of April 20th. This was then grown over time under a number of escalation scenarios, to watch for consilience with other models and observations. Current seroprevalence extrapolations based upon this, which match CDC iCFR and TES models well, place total seroprevalence at 16.3% as of the end of August 2020. Most of these cases were asymptomatic.
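One such escalation scenario can be back-solved as a constant weekly growth rate connecting the two cited points; the 19-week span between April 20 and the end of August is an approximation, and the constant-rate assumption is mine for illustration (the author ran multiple scenarios).

```python
strike = 0.069   # 6.9% seroprevalence strike point, April 20
target = 0.163   # 16.3% extrapolated seroprevalence, end of August
weeks = 19       # approximate span between the two dates

# Constant weekly escalation rate implied by the two points
weekly_rate = (target / strike) ** (1 / weeks) - 1
print(f"{weekly_rate:.1%}")   # roughly 4.6% per week
```

Any scenario curve drawn this way can then be checked for consilience against independently derived case and fatality models.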

Shopcraft – traits, arrival forms and distributions of data which exhibit characteristics of having been produced by a human organization, policy or mechanism. A result which is touted to be natural, random or unconstrained, however which features patterns or mathematics which indicate human intervention is at play inside its dynamics. A method of detecting agency, and not mere bias, inside a system.

Slapping the Grizzly/Bear – if someone implies to you that they know exactly the outcome of slapping a grizzly bear to get it to go away – they themselves are more dangerous than a virus by far – no matter how many risk PhD’s they may hold. The act of becoming robust to a constrained small-risk of harm/death bearing sound epistemic precedent (a virus with 4 months of observation and multiple precedents) – while exposing to an unconstrained large-risk of ruin/death, derived from our actions (an economic Great Repression), which bears no epistemic precedent …is like slapping a grizzly bear to compel it to go away.

Stakeholder Ethics – a principle or condition wherein those who bear the negative impact of a decision can hold those who make that decision, accountable. Further then may dissent, and reverse that decision or remove the decision maker, even if they claim to be an ‘expert’. A claim to science is not a free pass to tyranny.

Statistician – one who collects into a database salient and valid raw data and further then pulls selected features from that raw data to highlight to users, observers or participants therein.

SupraLag – that CDC lag in posting of actual deaths (shortfall in the all-cause death count) by date in the MMWR weekly data which exceeded, by a large amount, the typical lag the CDC had historically exhibited for that same week (current week minus x MMWR weeks). Ideally, if both the CDC and the analyst tracking them are performing well in their duties, SupraLag should be minimal.

Symmetry – the innate commonality of two independent objects to bear the same form or to look alike as a result of causal and not accidental inputs or constraints.

Systems Intelligence – in a system, it is not statistics nor the precision of any single measurement which provides for analytical confidence. Rather, it is the consilient agreement of dozens of independently derived indicators in concert which provides for intelligence. One does not drive a car by means of tape measure, physics text and calculator – as pretend precision, credential, correctness and reliability (the perfect) are the catastrophic enemy of the effective. Rather, a system is grasped through inductive consilience inside a neural nexus of simultaneous probative inputs and relationships. Experience trumps such consilience, while consilience trumps any single heuristic. Systems intelligence is the process of collecting raw data and framing or denaturing that raw data such that it begins to offer information. This information is then further transmuted into intelligence: feedback, ergodicity, arrival distribution, confidence, constraint, sensitivity, consilience, discrete/continuous symmetry/asymmetry and input-mechanism-causal hypotheses. The goal is to increase the reliability of inference derived from probative data, not to attempt to make reliable data then also probative (also called fake skepticism, ‘torturing the data’ or the ‘streetlight effect’).

Wittgenstein Error (Contextual) – employment of words in such a fashion as to craft rhetoric, in the form of persuasive or semantic abuse, by means of a shift in word or concept definition by emphasis, modifier, employment or context.

Wittgenstein Error (Descriptive) – the contention or assumption that science has no evidence for, or ability to measure, a proposition or contention – when in fact it is only a flawed crafting of language and definition, a limitation of language itself, the lack of a cogent question, or (willful) ignorance on the part of the participants which has limited science, and not in reality science’s domain of observability.

Describable: I cannot observe it because I refuse to describe it.

Corruptible: Science cannot observe it because I have crafted language and definition so as to preclude its description.

Existential Embargo: By embargoing a topical context (language), I favor my preferred contexts through means of inverse negation.

Yule-Simpson Paradox – a trend which appears in different groups of data can be manipulated to disappear or reverse (see Effect Inversion) when those groups are combined.
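A standard illustration of the paradox, using the well-known kidney-stone treatment numbers (a textbook dataset, not data from this analysis): treatment A outperforms B within each subgroup, yet underperforms once the subgroups are pooled.

```python
# (successes, trials) for two treatments across two subgroups
data = {
    "A": {"small": (81, 87),   "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# A wins within every subgroup...
for stone in ("small", "large"):
    assert rate(*data["A"][stone]) > rate(*data["B"][stone])

# ...yet loses once the subgroups are combined
total_a = tuple(map(sum, zip(*data["A"].values())))   # (273, 350)
total_b = tuple(map(sum, zip(*data["B"].values())))   # (289, 350)
print(rate(*total_a) < rate(*total_b))   # True: the trend reverses when pooled
```

The reversal arises because the groups are unequally weighted across treatments, which is precisely why aggregation choices can be used to manufacture or erase a trend.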

The Ethical Skeptic, “The Definitive Guide to Ethical Skeptic’s (TES/ES) Coronavirus SARS-CoV-2 (2019) Analysis”; The Ethical Skeptic, WordPress, 9 Aug 2020; Web.


Inflection Point Theory and the Dynamic of The Cheat

A mafia or cartel need not cheat at every level nor in every facet of its play. The majority of the time such collapsed-market mechanisms operate out of necessity under the integrity of discipline. After all, without standardized praxis, we all fail.
However, in order to effect a desired outcome, an intervening agency may need only exploit specific indigo-point moments in certain influential yet surreptitious ways. Inflection point theory is the discipline which allows the astute value chain strategist to sense the weather of opportunity and calamity – and moreover, spot the subtle methodology of The Cheat.

Note: because of the surprising and keen interest in this article from various groups, including rather vociferous Oakland Raider fans, I have expanded that section of the article for more clarity/depth on what I have observed; and further, added 6 excerpt links in each appropriate section laying out the backup analysis for those wishing to review the data. One can scroll directly to that section of the article at about 45% through its essay length.

Inflection Point Theory

Over the decades of conducting numerous trade, infrastructure and market strategies in one of my strategy firms, I had the good fortune to work as a colleague with one of our Executive Vice Presidents, a tier I B-school graduate whose specialty focused in and around inflection point theory. He adeptly grasped, and instructed me in, how this species of analytical approach could be applied to develop brand, markets, infrastructure, inventories and even corporate focus or culture. Inflection point theory, in a nutshell, is the sub-discipline of value chain analytics or strategy (my expertise) in which particular focus is given to those nodes, transactions or constraints which cause the entire value chain to swing wildly (whipsaw) in its outcome (ergodicity). The point of inflection at which such a signal is typically detected, or hopefully even anticipated, is called an indigo point.

Columbia Business School strategic advisor Rita McGrath defines an inflection point as “that single point in time when everything changes irrevocably. Disruption is an outcome of an inflection point.”1 While this is not entirely incorrect, in my experience, once an inflection point has been reached the disruption has actually already taken place (see the oil rig example offered below), and an E-ruptive period of change has just precipitated. It is one thing to be adept with the buzzwords surrounding inflection point theory, and another thing altogether to have held hands with those CEOs and executive teams while they have ridden out its dynamic, time and time again.

The savvy quietly analyzes the hurricane before its landfall. The expert makes much noise about it thereafter.
The savvy perceives the interleaving of elemental dynamics inside an industry. The expert dazzles himself with academic mathematical equations.
The savvy is employed on the team which is at risk. The expert brings self-attention and bears no skin in the game.

Such is not a retrospective science in the least. Nonetheless, adept understanding of business inflection point theory does in a manner allow one to ‘see around corners’, as McGrath aptly puts it.

Those who ignore inflection points are destined to fail their clients, if not themselves; left wondering how such resulting calamity could have happened in such short order – or even denying that it has occurred, through Nelsonian knowledge. Those who adeptly observe an indigo point signal may succeed, not simply through offering a better product or service, but rather through the act of rendering their organization robust to concavity (Black Swans) and exposed to convexity (White Swans). Conversely, under a risk strategy, an inflection-point-savvy company may revise their rollout of a technology to be stakeholder-impact resistant under conditions of Risk Horizon Types I and II, and rapid (speed to margin, not just speed for speed’s sake) under a confirmed absence of both risk types.2 As an example, in this chart data from an earlier blog post one can observe the disastrous net impact (either social perception, real or both) of the Centers for Disease Control’s ignoring a very obvious indigo pause-point regarding the dynamic between aggressive vaccine schedule escalations and changes in diagnostic protocol doctrine. Were the CDC my client, I would have advised them in advance to halt deployment at point Indigo, and wait for three years of observation before doing anything new. An indigo point is that point at which one should, ethically, at the very least plan to take pause and watch for any new trends or unanticipated consequences in their industry/market/discipline – to make ready for a change in the wind. No science is 100% comprehensive nor 100% perfect – and it is foolishness to pretend that such confidence in a deployment exists a priori. This is the necessary ethic of technology strategy, even when only addressed as a tactic of precaution. When one is responsible for at-risk stakeholders, stockholders, clients or employee families, to ignore such inflection points borders on criminally stupid activity.

Much of my career has been wound up in helping clients and nations address such daunting decision factors – When do we roll out a technology and how far? When do we pause and monitor results, and how do we do this? What quality control measures need to be in place? What agency, bias or entities may serve to impact the success of the technology or our specific implementation of it? etc. In the end, inflection point theory allows the professional to construct effective definitions, useful in spotting cartels, cabals and mafias. Skills which have turned out to be of help in my years conducting national infrastructure strategy as well. Later in this article, we will outline three cases where such inflection point ignorance is not simply a case of epistemological stupidity, but rather planned maliciousness. In the end, ethically when large groups of stakeholders are at risk, inflection point ignorance and maliciousness become indistinguishable traits.

Inflection Point

/philosophy : science : maths/philosophy : neural or dynamic change/ : inflection points are the points along a continuous mathematical function wherein the curvature changes its sign, or wherein there is a change in the underlying differential equation or its neural constants/constraints. In a market, it is the point at which a signal is given for a potential or even likely momentum shift away from that market’s most recent trend, range or dynamic.

An inflection point is the point at which one anticipates being able to thereafter analytically observe a change which has already occurred.
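Numerically, an inflection point of a sampled curve can be located where the discrete second difference changes sign – a rough sketch, with the sampling grid chosen here (an illustration of the mathematical definition, not the author's analytical tooling):

```python
def inflection_indices(y):
    """Indices (into y) just before a sign change in the discrete second
    difference - the sampled analogue of curvature changing sign."""
    d2 = [y[i + 1] - 2 * y[i] + y[i - 1] for i in range(1, len(y) - 1)]
    return [k + 1 for k in range(len(d2) - 1) if d2[k] * d2[k + 1] < 0]

# y = x^3 has a single inflection at x = 0; sample so that 0 falls
# between grid points rather than exactly on one
xs = [(n + 0.5) * 0.5 for n in range(-6, 6)]   # -2.75, -2.25, ..., 2.75
ys = [x ** 3 for x in xs]
print([xs[k] for k in inflection_indices(ys)])   # [-0.25]
```

Note the asymmetry the definition above implies: the detector only fires once the curvature change is already present in the data, which is exactly why the analytical confirmation arrives after the underlying change has occurred.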

Inflection Point Theory (Indigo Point Dynamics)

/philosophy : science : philosophy : value chain theory : inflection point theory/ : the value chain theory which focuses upon the ergodicity entailed from neural or dynamic constraints change, which is a critical but not sufficient condition or event; however, nonetheless serves to impart a desired shift in the underlying dynamic inside an asymmetric, price taking or competitive system. The point of inflection is often called an indigo point (I). Inside a system which they do not control (price taking), successful players will want to be exposed to convexity and robust to concavity at an inflection point. Conversely under a risk horizon, the inflection point savvy company may revise their rollout of a technology to be stakeholder-impact resistant under conditions of Risk Horizon Types I and II and rapid under a confirmed absence of both risk types.

An Example: In March of 2016, monthly high-capacity crude oil extraction rig counts by oil formation had all begun to trend in synchronous patterns (see chart below, extracted from University of New Mexico research data).3 This sympathetic and stark trend suggested a neural change in the dynamic driving oil rig counts inside New Mexico oil basin operations. An external factor was imbuing a higher sensitivity contribution to rig count dynamics than were normal market forces/chaos. This suggested that not only was a change in the math in the offing, but that a substantial change in rig dynamics was underway – the numerics of which had not yet surfaced.

Indeed, Enverus DrillingInfo subsequently confirmed that New Mexico’s high-capacity crude extraction rig counts increased, against the national downward trend, at a rate of 50+% per year through 2017 and into 2018 – thereby confirming this indigo point (inflection point).4

I was involved in some of this analysis for particular clients in that industry. This post-inflection increase was driven by the related-but-unseen shortfall in shallow and shale rigs, which lowered production capacity out of Texas during that same time frame and increased the opportunity to produce to price for New Mexico wells – the very trend which had formerly served to precipitate the fall in monthly New Mexico rig counts to an indigo point to begin with. Yet this pre-inflection trend also had to end, because the supply of rigs in Texas could not be sustained under such heavy demand for shale production.

Astute New Mexico equipment planners who used inflection point theory could see this coming and ensure their inventories were stocked, in order to take advantage of the ‘no-discounts’ margin to be had during the incumbent rush for rigs in New Mexico. This key pattern in the New Mexico well data in particular was what is called, in the industry, an inflection point. My clients were able to increase stocks of tertiary wells and, while not flooding the market, offer ‘limited discount’ sales for the period of short supply. They made good money. They were not raising prices on plywood before a hurricane, mind you; rather, they were being a bit more stingy on their negotiated discounts, because they had prepared accordingly.

To place it in sailing vernacular: the wind has backed rather than veered, the humidity has changed, the barometric pressure has dropped – get ready to reef your sails and set a run course. A smart business person both becomes robust to inflection point concavity (prepares), and as well is exposed to their convexity (exploits).

The net impact to margin (not revenue) achievable through this approach to market analytics is on the order of an 8-to-1 swing. It is how the successful make their success. It is how real business is conducted. However, there exists a difference between surviving and thriving through adept use of perspective concerning indigo points, and that activity which seeks to exploit their dynamic for market failure and consolidation (cartel-like behavior).

Self Protection is One Thing – But What about Exploiting an Inflection Point?

There exists a form of inflection point analytics and strategy which is not of the knight-in-shining-armor milieu – one more akin to gaming an industry vertical or market in order to establish enormous barriers to entry, exploit consolidation failure or defraud its participants or stakeholders. This genus of furtive activity is enacted to establish a condition wherein one controls a system, or is a price maker and no longer a price taker – no more ‘a surfer riding the wave’, rather now the creator of the wave itself. Inflection points constitute an excellent avenue through which one may establish a cheat mechanism without tendering the appearance of doing so.

Inflection Point Exploitation (The Cheat)

/philosophy : science : philosophy : agency/ – a flaw, exploit or vulnerability inside a business vertical or white/grey market which allows that market to be converted into a mechanism exhibiting syndicate (cartel, cabal or mafia-like) behavior. Rather than the market becoming robust to concavity and exposed to convexity – instead, this type of consolidation-of-control market becomes exposed to excessive earnings extraction and sequestration of capital/information on the part of its cronies. Often there is one raison d’être (reason for existence) or mechanism of control which allows its operating cronies to enact the entailed cheat enabling its existence. This single mechanism will serve to convert a price taking market into a price making market and allow the cronies therein to establish behavior which serves to accrete wealth/information/influence into a few hands, and exclude erstwhile market competition from being able to function. Three flavors of syndicated entity result from such inflection point exploitation:

Cartel – a syndicate entity run by cronies which enforces closed door price-making inside an entire economic white market vertical.

Functions through exploitation of buyers (monopoly) and/or sellers (monopsony) by means of manipulation of inflection points – those where sensitivity is greatest, as early into the value chain as possible, and inside a focal region where attentions are lacking. Its actions are codified as virtuous.

Cabal – a syndicate entity run by a club which enforces closed door price-making inside an information or influence market.

Functions through exploitation of consumers and/or researchers by means of manipulation of the philosophy which underlies knowledge development (skepticism), or of the praxis of the overall market itself – at inflection points where the outcomes of information and influence can be manipulated, through tampering with a critical inflection point early in its development methodology. Its actions are secretive, or if visible, are externally promoted through media as virtue or for the sake of intimidation.

Mafia – a syndicate entity run by cronies which enforces closed door price-making inside a business activity, region or sub-vertical.

Functions through exploitation of its customers and under-the-table cheating, in order to eliminate all competition and manipulate the success of its members and the flow of grey-market money to its advantage – at inflection points where sensitivity is greatest and where accountability is low or subjective. Its actions are held confidential under threat of severe penalty against its organization’s participants. It promotes itself through intimidation, exclusive alliance and legislative power.

Three key examples of such cartel, cabal and mafia-like entities follow.

The Cartel Cheat – Exemplified by Exploitation of a Critical Value Chain Inflection Point

Our first example of The Cheat involves the long-sustained decline of US agricultural producer markets. A condition which has persisted since the 1980’s, ironically despite the ‘help’ farmers get from the agricultural technology industry itself.

Cheat where sensitivity is greatest and as early into the value chain as possible, at a point where attentions are lacking. Codify the virtue of your action.

Indigo point raison d’être: Efficiency of Mixed Bin Supply Chain

The agriculture markets in the US are driven by one raison d’être. They principally ship logistically (85%) via a method of supply chain called ‘mixed bin’ shipping. This is a practice wherein every producer of a specific product and class within a region dumps their agri-product into a common stock for delivery (which is detached from the sale by means of a future). Under this method, purportedly in the name of ‘efficiency’, the farmer is not actually able to sell the value of her crop; rather she must sell at a single speculative price (a reasonable-worst-case discounted aggregate) to a few powerful buyers (a monopsony).

Another way to describe this in value chain terms is by characterizing the impact of this ownership of the supply chain, by means of common-interdependent practice, as a ‘horizontal monopoly’. The monopoly/oligopoly powers in the presiding ABCD Cartel (as it is called) do not own the vertical supply of Ag products; instead they dominate the single method (value chain) of supply and distribution for all those products. This is what Walmart used in the 1970’s and 80’s to gut regional competitors. Players of lesser clout who could not compete initially against the 2 – 8%-to-sales freight margin advantage fell vulnerable finally to the purchase cost discounts on volume which Walmart was eventually able to drive, once a locus of purchasing power was established. Own the horizontal supply chain and you will eventually own the vertical as well. You have captured monopoly by using the Indigo Point of mandatory supply chain consolidation. Most US Courts will not catch this trick (plus much of it is practiced offshore) and will miss the incumbent violation of both the Sherman Anti-Trust Act and the Clayton Act. By the time the industry began to mimic in the 90’s and 00’s what Walmart had done, it was too late for a majority of the small to medium consumer goods market. They tried to intervene at the later ‘Tau Point’, when the magic had already been effected by Walmart at the less obvious ‘Indigo Point’ two or three decades earlier.

Moreover, with respect to agriculture’s resulting extremely powerful middle market, the farmer faces a condition wherein the only way to improve her earnings is through a process of ‘minimizing all (cost) inputs’. In other words, using excessive growth-accelerant pesticides and the cheapest means to produce as much caloric biomass as possible – even at the cost of critical phloem fulvic human nutrition content and toxin exposure. After all, if you exceed tolerance, your product is going to be mixed with everyone else’s product, so things should be fine. Dilution is the solution to pollution. In fact, the actual ppm levels of such nutrient content and growth accelerants are never monitored at all in the cartel-like agriculture industry. This is criminal activity, because the buyer and consumer are not getting the product which they think they are buying – and they are being poisoned and nutritionally starved in the process of being defrauded.

The net result? Autoimmune diseases of malnutrition skyrocket, market prices go into decades-sustained fall, microbiome impacts from bactericidal pesticide effects plague the global consumer base, nations begin to reject US agri-products, farms trend higher in Chapter 12 bankruptcies, and finally global food security decreases overall – ironically from the very methods which purport to increase per-acre yields.

The industry consolidates and begins to effect even more cartel-like activity. A death spiral of stupidity. 

This is the net effect of cartel-like activity. Activity which is always harmful in the end to either human health, society or economy. These cartels exploit one minor but key inflection point inside the supply chain, the virtuous efficiency of shipping and freight, in order to extract a maximum of earnings from that entire economic sub-vertical, at the harm of everything else. This is the tail wagging the dog and constitutes a prime example of inflection point exploitation (The Cheat).

Such unethical activity has resulted in enormous harm to human health, along with a sustained decades-long depression in the agriculture producer industry (as exemplified in the above ‘Chapter 12 Farm Bankruptcies by Region’ graphic by Forbes)5 – but not a commensurate depression in the agriculture futures nor speculator industry.6 Very curious indeed, that the cartel members at Point Tau (see below) are not hurt by their own deleterious activity at Point Indigo. This is part of the game. This is backasswards wrong. It is corruption in every sense of the word.

In order to effect The Cheat, one does not have to be a pervasive cheater.
One only need tweak specific inputs or methods at a paucity of specific points in a system or chain of accountability.

Thereafter an embargo on speaking about the indigo point must be enforced as well,
or an apothegm/buzzword phrase must be introduced which serves to obfuscate its true nature and impact potential.

The Cabal Cheat – Exemplified by Exploitation of Point Indigo for the Scientific Method – Ockham’s Razor

Our second example of The Cheat cites how science communicators and fake skeptics manipulate the outcomes of science, through tampering with a critical inflection point early in its methodology.

All things being equal, that which appears compatible with what I superficially think scientists believe, tends to obviate the need for any scientific investigation.

Indigo point raison d’être: ‘Occam’s Razor’ Employed in Lieu of Ockham’s Razor

Point indigo for the scientific method is Ockham’s Razor. This is the point, early in the scientific method, at which a competing theory is allowed access into the halls of science for consideration. Remember from our definition above, that cheating is best done early, so as to minimize its necessary scale. Ockham’s Razor is that early point at which both a sponsor, and his or her ideas are considered worthy members of ‘plurality’ – those things to be seriously considered by the ranks of science.7 The method by which fake skeptics (cartel members, or cabal members when not an economy) manipulate what is and what is not admissible into the ranks of scientific endeavor, is by means of a flag they title ‘pseudoscience’. By declaring any idea they dislike to be a pseudoscience, or failing ‘Occam’s Razor’ (it is not simple) – skeptics game the inflection point of the entire means of enacting science, the scientific method. They are able to declare a priori, those answers which will or will not arrive at Point Tau, for tipping into consensus at a later time.

To spray the field of science at night with a pre-emergent pesticide which will ensure that only the answer they desire, will come true in the growing sunlight.

Most of the stakeholder public does not grasp this gaming of inflection theory. Most skeptics do not either; they just go along with it – failing to even perceive that skeptics are supposed to be allies at the Ockham’s Razor sponsorship point, not foes. They are there to help the competitiveness of alternatives, not to corruptly certify the field of monist conclusion. This is, after all, what it means to be a skeptic – to seriously consider alternative points of view, and to come alongside and help them mature into true hypotheses. A true skeptic wants to see the answer for themselves.

If one does not like a particular avenue of study, all one need do is throw the penalty flag regarding that item’s ‘not being simple’ (Occam’s Razor). Thereafter, by citing its researchers as pseudo-scientists – because they are using the ‘implements and methods of science to study a pseudoscience’ – one has gamed the system of science by means of its inflection exploit mechanism.

They have effectively enacted cartel-like activity around the exercise of science on the public’s behalf. This is corruption. This is why science must ever operate inside the public trust – so that it does not become the lap-dog of such agency.

Seldom seek to influence point Tau, as that is difficult and typically conducted inside an arena of high visibility – your work in deception should always focus first on point Indigo, where stakeholders and monitors are not yet paying attention. One can control much through the adept manipulation of inflection points.

Extreme measures taken to control Point Tau are unnecessary if one possesses the ability to manipulate Point Indigo.

The final step of the scientific method, consensus acceptance, constitutes more of a Malcolm Gladwell tipping point as opposed to an unconstrained inflection point. A tipping point is that point at which the past trend signal is now confirmed as valid or comprehensive in its momentum. An inflection point is that point at which a change in dynamic has transpired, and what has happened in the past is all but guaranteed not to happen next. Technically, a tipping point is nothing but a constrained inflection point. But for the purposes of this presentation and explanatory usefulness, the two need to be made distinct. The graphic to the right portrays these principles, in hope that one can relate the difference in ergodicity dynamic between inflection and tipping points, to their specific applications inside the scientific method. We must, as a scientific trust, be extraordinarily wary of tipping points (T), as undeserved enthusiasm for a particular understanding may ironically serve to codify such notions into Omega Hypothesis – that hypothesis which has become more important to protect, than the integrity of science itself. In similar fashion, we must also protect indigo points (I) from the undue influence of agency seeking a desired outcome.

Having science communicators deem what is good and bad science, is like having a mafia set the exchange rate you get at your local bank. Everyone fails, but nobody knows why.

The art of the Indigo-Tau cheat works like this:  Game your inflection dynamics sparingly and only until such time as a tipping point has been achieved – and then game no further. Lock up your inflection mechanism and never let it be accessed nor spoken of again. Thereafter, momentum will win the day. Do all your dirty-work, or fail to do essential good-work (Indigo), when the game is in doubt, and then resume fair play and balance, after the game outcome is already fait accompli (Tau). Such activity resides at the very heart of fake skepticism and its highly ironic pretense in ‘communicating science’.

Indigo Point Man (Person) – one who conceals their cleverness or contempt.

Tau Point Man (Person) – one who makes their cleverness or contempt manifest.

This is based upon the tenet of ethical skepticism which cites that a shrewdly apportioned omission at Point Indigo, an inflection point early in a system, event or process, is a much more effective and harder to detect cheat/skill than a more manifest commission at Point Tau, the tipping point near the end of a system, event or process. It is likewise based upon the notion ‘Watch for the gentlemanly Dr. Jekyll at Point Tau, who is also the cunning Mr. Hyde at Point Indigo’. It outlines a principle wherein those who cheat (or apply their skill in a more neutral sense) most effectively, such as in the creation of a cartel, cabal or mafia, tend to do so early in the game and while attentions are placed elsewhere. In contrast, a Tau Point man tends to make his cheat/skill more manifest, near the end of the game or at its Tau Point (tipping point).

Shrewdly apportioned omission at Point Indigo is a much more effective and hard to detect cheat,
than that of more manifest commission at Point Tau. This is the lesson of the ethical skeptic.

Watch for the gentlemanly Dr. Jekyll at Point Tau, who is also the cunning Mr. Hyde at Point Indigo.

Which serves to introduce and segue into our last and most clever form of The Cheat.

The Mafia Cheat – Exemplified by NFL’s Exploitation of Interpretive Penalty Call/No-Call Inflection Points

Our final example of The Cheat involves a circumstance which exhibits how The Cheat itself can be hidden inside the fabric of propriety, leveraging from the subjective nature of shades-of-color interpretations and hard-to-distinguish absences which are very cleverly apportioned to effect a desired outcome.8

Cheating is the spice which makes the chef d’oeuvre. Cheat through bias of omission not commission, only marginally enough to enact the goal and then no further, and while bearing a stately manner in all other things. Intimidate or coerce participants to remain silent.

Indigo point raison d’être: Interpretive Penalty Calls/No-Calls at Critical Indigo Points and Rates which Benefit Perennially Favored Teams and Disadvantage Others

I watched a National Football League (NFL) game last week (statistics herein have been updated for NFL end-of-season 2019) where the entire outcome of the game was determined by three specific and flawed penalty calls on the part of the game referees. The calls in review were all invalid flag tosses of an interpretive nature, which twice reversed one team’s (the Detroit Lions’) stopping of a come-from-behind drive by the ‘winning’ team (the Green Bay Packers). Twice their opponent was given a touchdown by means of invalid violations for ‘hands-to-the-face’ on the part of a defensive lineman – penalty flag tosses which cannot be overturned even by clear, countermanding evidence, as was the case in this game. The flags alone artificially turned the tide of the entire game. The ‘winning’ quarterback Aaron Rodgers, a man of great talent and integrity, when interviewed afterwards humbly said “It didn’t really feel like we had won the game, until I looked up at the scoreboard at the end.” Aaron Rodgers is a forthright Tau Point Man – he does not hide his bias or agency inside noise. Such honesty serves to contrast the indigo point nature and influence of penalties inside of America’s pastime of professional football. Most of the NFL’s manner of exploitation does not present itself in such obvious Tau Point fashion as occurred in this Lions-Packers game.

An interpretive penalty is the most high-sensitivity inflection point mechanism impacting the game of professional football. For some reason such penalties are not as impactful in its analogue, NCAA college football. Not that referees are not frustrating in that league either, but they do not have the world-crushing and stultifying impact of the officials inside the NFL. NFL officials single-handedly and often determine the outcome of games, division competitions and Super Bowl appearances. They achieve this impact (whether intended or not) by means of a critically placed set of calls, and more importantly no-calls, with regard to these interpretive, subjective penalties – patterns which can be observed as consistent across decades of the NFL’s agency-infused and court-defined ‘entertainment’. Let’s examine these call (Indigo Point Commission) and no-call (Indigo Point Omission) patterns by means of two specific and recent team examples respectively – the cases of the 2019 Oakland Raiders and the 2017 New England Patriots.

Indigo-Commission Disadvantages Specific NFL Teams: Case of the 2019 Oakland Raiders

Argument #1 – The Penalty Detriment-Benefit Spread and Raider 60-Year Penalty History

The NFL Oakland Raiders have consistently been the ‘most penalized’ team by far, over the last 60 years of NFL operations. Year after year they are flagged more than any other team. For a while, this was an amusing shtick concerning the bad-guy aura the Raiders carried 40 or 50 years ago. But when one examines the statistics and the types of penalties involved – consistent through six decades, dozens of various-level coaches who were not as highly penalized elsewhere in their careers, two owners and 10 varieties of front office – the idea that this team gets penalized ‘because they are supposed to’ begins to fall flat under the evidence. Of course it is also no surprise that the Raiders hold the record for the most penalties in a single game as well: 23 penalties and 200 yards penalized.9

A typical year can be observed in the chart to the right, which I created through analyzing the relevant penalty databases; the detailed data analysis can be viewed by clicking here. True to form, the Oakland Raiders were once again penalized per play for 2019 (see the previous years here) more than any other NFL team (save for Jacksonville, who narrowly overtook the Raiders with a late-season 16-penalty game). More to the point however, for the 2019 NFL Season the greatest differential between penalties-against and penalties-benefit is once again held by the Oakland Raiders. What the chart shows is that in general, it takes nine fewer plays for the Raiders (one penalty every 21 plays) to be awarded their next penalty flag, as compared to their opponents (one penalty every 30 plays). Put another way, the Raiders were flagged an average of 8 times per game, while comparatively their opponents were flagged on average 5.6 times per game – inside a range of feasibility which annually runs from about 8.2 to 5.4 to begin with. These Oakland Raider 2019 penalty results hug the highest and lowest possible extremes for team versus opponent penalties respectively.
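This penalty-spread arithmetic can be checked with a minimal sketch, using only the per-game figures quoted in the paragraph above (not the raw database):

```python
# Penalty-spread arithmetic, using the 2019 figures quoted in the text.

raiders_flags_per_game = 8.0     # Raiders, 2019 season average (from the article)
opponent_flags_per_game = 5.6    # their opponents' 2019 average (from the article)

plays_per_flag_raiders = 21      # one flag every 21 plays
plays_per_flag_opponent = 30     # one flag every 30 plays

# The 'spread': how many fewer plays it takes the Raiders to draw a flag
spread_in_plays = plays_per_flag_opponent - plays_per_flag_raiders

# Relative flag burden per game
flag_ratio = raiders_flags_per_game / opponent_flags_per_game

print(spread_in_plays)           # 9
print(round(flag_ratio, 2))      # 1.43
```

The 9-play spread and the roughly 1.4x flag ratio are two views of the same differential the chart portrays.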

In other words, on average for 2019,
the Raiders were by far the most penalized team per game play in the NFL –
while the consistently least penalized team in the NFL was
whatever team happened to be playing the Oakland Raiders each week.

A popular older version of the graphic, which outlined this condition part way through the season can be viewed here: Most to Least Penalized – 2019 Oakland Raiders and Their Opponents.11 Nevertheless, the bottom line is this, and it is unassailable:

The Oakland Raiders are far and above more penalized than any other NFL team, leading the league as the most penalized team in season-years 1963, 1966, 1968-69, 1975, 1982, 1984, 1991, 1993-96, 2003-05, 2009-11, 2016, and most of 2019 – and further landing in the top 3 penalized teams every year from 1982 through 2019, with only a few exceptions.12 13

Argument #2 – The Drive-Sustaining Penalty Deficit

In the case of the Raiders, the overcall/undercall of penalties is not a matter of coaching discipline, as one might reasonably presume at first blush – rather, in many of the years in question the vast majority of the penalty incident imbalances involve calls of merely subtle interpretation (marked in yellow in the chart below). Things which can be called on every single play, but for various reasons, are not called for certain teams, and are more heavily called on a few targeted teams – flags thrown or not thrown at critical moments in a drive, or upon a beneficial turnover or touchdown. To wit, in the chart which I developed to the right, one can discern that not only are the Oakland Raiders the most differentially-penalized team in the NFL for the 2019 season once again – but as well, the penalties which are thrown against the Raiders are done so at the most critically-disfavoring moments in their games. Times when the Raiders have forced the opposing team into 3rd down and long circumstances and their opponent therefore needed a break and an automatic first down in order to sustain a scoring drive. As you may observe in the chart, a team playing the Raiders in such a circumstance for 2019, bore by far the greatest likelihood of being awarded the subjective-call14 critical break they needed from NFL officials.15 16

The net upshot of this is that across their 16-game 2019 season the Raiders had 37 more drives impacted negatively by penalties versus the average NFL team on their schedule – equating to a whopping 96 additional opponent score points (by the Net Drive Points chart below). Above and beyond their opponents’ performances along this same index, this equates to at least an additional 6 points per game (because of unknown ball-control-minutes impact) being awarded to Raider 2019 opponents – thereby making the difference between a 7 – 9 versus a 9 – 7 (or possibly even 10 – 6) record, not to mention the loss of a playoff berth. One can view the calculation tables for this set of data here. So yes, this disadvantage versus the NFL teams on the Raiders’ 2019 schedule was a big deal in terms of their overall season success.
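The per-game figure follows directly from the season totals quoted above; a quick sketch:

```python
# Reproducing the per-game figure from the 2019 season totals quoted in the text.

extra_impacted_drives = 37    # drives negatively impacted vs. the average opponent
extra_opponent_points = 96    # additional opponent points, per the Net Drive Points chart
games_in_season = 16

points_per_game = extra_opponent_points / games_in_season
print(points_per_game)        # 6.0 -> 'at least an additional 6 points per game'
```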

Now consider calls for objective violations, such as delay of game, too many players, neutral zone infractions, encroachment and false starts – things which are not subject to interpretation. Analyze these penalties and you will find that the Raiders actually perform better in these penalty categories than the NFL average (see chart on right for 2019 called penalties). These are the ‘discipline indicator’ class of penalties. What the astute investigator will find is that, contrary to the story-line foisted for decades concerning this reputation on the part of the Raiders, the team actually fares rather well in these measures. In contrast however, one can glean from the Net Drive Points chart below (and derive the same number in the chart to the right) that the Raiders are penalized at double (2x) the rate of the average NFL team for scoring-drive subjective-call defensive penalties, and as well 16.3% higher for all interpretive penalty types in total (yellow Raider totals in the Net Drive Points chart below). Meanwhile, the Raiders are penalized at only 72% of the League average for objective-class or non-interpretive penalties. It is just a simple fact that the Raiders are examined by League officials with twice as much scrutiny for the violations of defensive holding, unnecessary roughness, offensive and defensive pass interference, roughing the passer, illegal pick, illegal contact and player disqualification. One can observe the analysis supporting this for 2019 Called Penalties here.17

The non-interpretive penalties (or ‘Discipline Class’ in the chart to the right) cannot be employed as inflection points of control, so their statistics will of course trend towards a more reasonable mean. Accordingly, this falsifies the notion that the Raiders are more penalized than other NFL teams because of shortfalls in coaching discipline. If this were the case, there should be no differential between the objective versus interpretive penalty-type stats. In fact, inside this ‘discipline indicator’ penalty class, the Raiders fare better than the average NFL team. This raises the question: do the coaching penalty statistics then corroborate this intelligence? Yes, as it happens, they do.

Argument #3 – Oakland Raider Head Coach Penalty Burden

Further falsifying the notion that excess Raider penalties are a result of coaching and discipline are the NFL penalty statistics of the Raider head coaches themselves. Such a notion does not pan out under that evidence either. On average, Raider head coaches have been penalized 31.6% more in their years as a Raider head coach than in their years as head coach of another NFL team. However, for conservancy we have chosen in the graph to the right to weight each coach’s contribution by the number of years coached in each role. Thus, conservatively, a Raider head coach is penalized 26.3% more in that role as compared to his head coaching stints both before and after his tenure as head coach of the Oakland Raiders.18 Accordingly, this significant disadvantage has been part of the impetus which has shortened many coach tenures with the Raiders, thereby helping account for the 3.3-year Raider average tenure, versus the 6.6-year average tenure on the part of the same group of coaches both before and after being head coach of the Raiders. One can observe this in the graph, which reflects a blend of eight NFL coaches over the 1979 – 2019 NFL seasons; all prominent NFL coaches who spent significant time – 16 years on average – coaching both the Raiders as well as other NFL teams.19
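The year-weighting method described above can be illustrated with a small sketch. The coach figures below are invented purely for illustration (the article's real data lives in the linked analysis); what matters is the method: pool each coach's penalties across seasons in each role, divide by seasons in that role, then compare the two pooled rates as a ratio.

```python
# Year-weighted comparison of per-season penalty rates in two roles.
# NOTE: the coach tuples below are hypothetical illustration data,
# NOT the article's actual figures.

coaches = [
    # (raider_seasons, raider_pens_per_season, other_seasons, other_pens_per_season)
    (3, 120, 6, 95),
    (4, 115, 8, 90),
    (2, 130, 5, 100),
]

raider_pens  = sum(y * r for y, r, _, _ in coaches)
raider_years = sum(y for y, _, _, _ in coaches)
other_pens   = sum(y * r for _, _, y, r in coaches)
other_years  = sum(y for _, _, y, _ in coaches)

# Uplift of the pooled Raider-tenure rate over the pooled elsewhere rate
weighted_uplift = (raider_pens / raider_years) / (other_pens / other_years) - 1
print(f"{weighted_uplift:.1%}")
```

Weighting by seasons coached prevents a coach with one short, anomalous stint from dominating the average, which is why the article's weighted figure (26.3%) comes out lower than the simple average of per-coach uplifts (31.6%).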

Not even one of the nineteen head coaches in the entire history of the Raider organization bucked this trend of being higher penalized as a Raider head coach. Not even one. Let that sink in.

There is no reasonable possibility, that all these coaches and their variety of organizations could be that undisciplined, almost every single season for 50 years. The data analysis supporting this graphic can be viewed here.

Argument #4 – Oakland Raider Player Penalty Burden

Statistically, this coaching differential must impute to the players’ performances as well, through the association of common-base data. Former Raider cornerback D.J. Hayden portrayed this well in his recent contention that he was penalized more as an Oakland Raider than with other teams. Indeed, if we examine the Pro Football Reference data, Hayden was penalized a total of 35 times during his four years as a Raider defensive back, and only 11 times in his three years with Detroit and Jacksonville. This equates to 35 penalties in 45 games played for the Raiders, compared to only 11 penalties in 41 games played for other teams.20 That reflects a 65% reduction in penalties per game played, and a 55% reduction in penalties per snap played, during his tenure with teams other than the Oakland Raiders.21
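Hayden's per-game figures check out arithmetically; a quick sketch using only the counts quoted above:

```python
# D.J. Hayden's penalty rates, from the figures quoted in the text.

raider_penalties, raider_games = 35, 45   # four seasons as a Raider
other_penalties, other_games = 11, 41     # three seasons with Detroit/Jacksonville

rate_as_raider = raider_penalties / raider_games   # ~0.78 penalties per game
rate_elsewhere = other_penalties / other_games     # ~0.27 penalties per game

reduction = 1 - rate_elsewhere / rate_as_raider    # ~0.655, i.e. the ~65% cited
print(f"{reduction:.1%}")
```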

Such detriment constitutes a disincentive for players to want to play for a team which is penalized so often – potentially marring their careers and negatively impacting their dreams of Pro Bowl, MVP or even Hall of Fame selection. This, I believe, is part of the reason why the badge-of-honor tag-phrase has evolved: “Once a Raider, Always a Raider”. In order to play for the Raiders, you pretty much have to acknowledge this shtick inside your career, and live with it for life. Should we now mark every player and coach in the NFL Hall of Fame with a ‘Played for the Oakland Raiders’ asterisk – a kind of reverse steroid-penalty bias negatively impacting a player’s career?

In the end, all such systemic bias serves to do is erode NFL brand, cost the NFL its revenue – and most importantly, harm fans, players, coaches and families.

NFL, your brand and reputation have drifted since the infamous Tuck Rule Game into becoming ‘Bill Belichick and the Zebra Street Boys’. Yours is a brand containing the word ‘National’, and as a league you should act accordingly to protect it. Nurture and protect it through a strategy of optimizing product quality.

And finally, the most idiotic thing one can do is to blame all this on the Oakland fans, as was done in this boneheaded article by the Bleacher Report on the Raider penalty problem from as far back as February 2012.

Collectively, all this is known inside any other professional context as ‘bias’, or could even be construed by angry fans as cheating. And when members of an organization are forced under financial/career penalty to remain silent about such activity (extortion) – when you observe coaches, players and, more importantly, members of the free press biting their tongues over this issue – this starts to become reminiscent of prohibition-era 18 U.S.C. § 1961 – U.S. Code Racketeering activity.

When you examine the history of such data, much of this patterning in bias remains consistent, decade after decade. It is systemic. It is agency. One can find and download into a datamart or spreadsheet for intelligence derivation, the history of NFL penalties by game, type, team, etc. here: NFL Penalty Tracker. Go and look for yourself, and you will see that what I am saying is true. What we have outlined here is a version of the more obvious Indigo-point commission bias. Let’s examine now a more clever form of cheat, the Indigo-point omission bias.

Indigo-Omission Favors Specific NFL Teams: Case of the 2017 New England Patriots

Let’s address an example in contrast to the Oakland Raiders (also from the data set): the case of a perennial NFL Officials’ call-favored team, the New England Patriots. As one can see in an exemplary season for that franchise, portrayed in the chart to the right, the New England Patriots team that traveled to the 2017 Season Super Bowl was flagged (from game 10 of the season through to the Super Bowl) at a rate which fell more than 2 standard deviations below even the next least-flagged team inside the group of 31 other NFL teams. Two standard deviations below even the second-best team in terms of penalties called against them. That is an enormous bias in signal. One can observe the 2017 game-by-game statistical data from which the graphic to the right is derived here. If one removes the flagrant, non-inflection-point-useful and very obvious penalties from the Patriots’ complete penalty log (the non-highlighted penalty types in the chart below), this then means the Patriots were called for 29 interpretive penalties in these final 12 games – the average of which was not called until late in the 3rd quarter, after the game’s outcome was already determined in many cases.22

In the chart to the right, one may observe the Net Drive Points (score) which were the statistical result of each of the most common forms of NFL penalty (of note is the dramatic skew in Raider penalties towards higher score-sensitive penalties versus the average NFL team, at 102%). For those penalties (highlighted in yellow in the chart) which can be called on any play, New England opponents for weeks 10 through the end of the 2017 season earned 6.6 interpretive penalties per game, in those same weeks in which New England was flagged 2.4 times on average. This equates to New England earning only 36% as many interpretive penalties as their average opponent during that same timeframe. As well, most teams average their interpretive penalties late in their second quarter of play (as statistically they should), while New England was awarded their interpretive penalties with less than 5 minutes left in the third quarter of each game on average.

This means that New England was very seldom interpretive-penalized during any time in a game in which the outcome of that game was in doubt. This is ‘exploitation of omissions at Point Indigo’ by means of an absence of interpretive calls against them for, on average, the first three quarters of each game played in late 2017. This factor, as much as being a good team, is what propelled them to the Super Bowl.

Exploiting the Tau Point on specific critical plays near the end of a game constitutes, ironically, a less effective and more obvious mode of cheating – one which will simply serve to piss off alert fans, as happened in the January 20th 2019 Rams-Saints ‘No Call’ game. One cannot cheat so viscerally for long and not get called on such obvious bias – the highly skilled cheat must be in the form of an exploit conducted when stakeholder attentions are not piqued.

Indigo Point Exploitation: The New England Patriots received their interpretive penalties at 36% the rate of the average NFL team, a full quarter later into the game than the average NFL team, most typically when the game outcome was already well in hand. This constitutes exploitation through omission at the Indigo Point.

In fact, for the entire AFC Championship and Super Bowl that season, New England was only flagged twice for any type of violation – a total of 15 yards. Their opponents? The Jaguars and the Eagles were flagged for 10 and 7 times more penalty yards, respectively, than were the Patriots in their respective championship games. True to form for 2019, from the same NFLPenalties database employed for the Raiders Penalty Differential chart at the top of this article section, one can examine and find that New England was the second least penalized team in the NFL for most of the 2019 Season, only falling to 6th overall in the final games (after they were busted a 6th time for cheating, by filming the sidelines of next week’s opposing team) – and on track to another probable and tedious Super Bowl appearance.

To put it in gambling terms – the seriously tested means of quantification upon which bookies rely – the Patriots’ opponents in the 2017 NFL Season, on average for games 10 through the Super Bowl, were given 4 more penalties in each game than were the Patriots themselves (3 fewer awarded to the Patriots, plus 1 more awarded to their opponent, on average). Using the Net Drive Points for the most common interpretive penalty types (highlighted in yellow) from the chart immediately above (published at Sports Information Solutions)23, this equates to awarding 10.8 extra points to the Patriots, per game, every game, all the way from game 10 of the 2017 season through to and including the Super Bowl. No wonder they got to the Super Bowl.
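The Patriots-side arithmetic above can be reproduced from the two per-game figures quoted earlier in this section:

```python
# New England 2017, weeks 10 onward: interpretive penalties per game (from the text).

patriots_flags_per_game = 2.4    # New England's interpretive flags per game
opponent_flags_per_game = 6.6    # their opponents' interpretive flags per game

share = patriots_flags_per_game / opponent_flags_per_game
spread = opponent_flags_per_game - patriots_flags_per_game

print(round(share * 100))   # 36  -> 'only 36% as many interpretive penalties'
print(round(spread, 1))     # 4.2 -> roughly the '4 more penalties per game' cited
```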

This equates to awarding the Patriots an extra 10.8 points per game in the second half of the season thru the playoffs.

Half the teams in the NFL could have gotten to the 2017 Season Super Bowl if they were given this
same dishonest two touchdown per game advantage afforded the New England Patriots by league officials that year.

Once again, as in the case of the Oakland Raiders earlier, one can make up the pseudo-theory that ‘hey, they are a more disciplined team, so they are penalized less’. That is, until one examines the data and observes that this condition has gone on for five decades (ostensibly since, but in reality much further back than, the notorious ‘Tuck Rule’ AFC Championship Game – the video of which can no longer be found in its original form because the NFL edited out over 2 minutes in order to conceal the game’s penalty no-call Tau Point league phone call intervention). The penalties which are called or not called are of an interpretive nature – again, those that occur on most every single down, but are called on some teams consistently and on other teams not so much. Here as well, in the penalty classes which are not subject to interpretation – delay of game, false start, etc. – surprise: New England is just average in those ‘no doubt’ classes of penalty.24 If this were a matter of coaching discipline, New England should also be two standard deviations below the mean for objective-class penalties. They are not. The subjective-class (yellow) penalty calls and no-calls have nothing whatsoever to do with coaching discipline, and everything to do with a statistically manifest bias on the part of the league and its officials.

The Economics of Mafia-Like Activity

It took me a while to come to this realization. Because of the presence of closed-door threats and fines against its members, its monopolistic overcharging-for-services exploitation of its customers, and the illicit revenue gained through under-the-table manipulation of the success of its organizations and the flow of grey market (gambling) money, the National Football League is actually not a cartel – it is, rather, more akin to a mafia by definition.

To annually bill customers who are being misled that they are watching or wagering upon unbiased games of skill, chance and coaching – $830 to DirectTV and $300 to NFL Sunday Ticket at bare-bones cost (both purchases are required, and the realistic cost for most consumers is on the order of $1,350 or more per year) – for a product which is touted to be one thing but is delivered as a form of dishonest charade: to my sense this constitutes a consumer or gambling failure to deliver a contracted service.

I personally paid $29,000 to NFL Sunday Ticket and DirecTV over the last 15 years of viewing NFL games, misled by the falsehood that I was watching a sporting event wherein my teams had a chance of success through skill, draft selection, talent, coaching and ball bounces – fully unaware that in reality, my teams had little chance of success at all.

I was not delivered the product which was sold to me.

The NFL has actually counter-argued this very consumer accusation in court, as recently as 2010, contending that they are merely ‘a form of entertainment’. In 2007, a Jets season ticket holder sued the NFL for $185 million, and the case was appealed up to the US Supreme Court. The Jets fan argued that all Jets fans were entitled to refunds because they had paid for a ticket to a competition of skill, coaching and chance; further, that had they been aware the games were not real, the fans would not have bought tickets. This fan lost the case on the grounds that the fans were not buying a ticket to a ‘fair’ event, but rather to an entertainment event.

Accordingly, the NFL contends that this precedent gives them the contractual right to advantage or disadvantage a team without having to address their own bias or cheating; further, that the league is legally entitled to do whatever is needed to entertain its audience, such as the creation and promotion of certain ‘storylines’.25 Storylines of the evil people and the good people (sound familiar?), in order to stimulate ticket and media purchases. A farce wherein, ironically, the league office actually thrives upon the brand-premise that it is administering a game of skill, chance and coaching. The reality is that NFL officials pick and choose whom they want to win and whom they want to lose – the same teams, decade in and decade out. None of its at-risk members (players, organizations, staff and coaches) are allowed to speak of this gaming, under threat of fines or loss of career. At least in professional wrestling, the league leadership and participants admit that it is all an act. In professional wrestling, no one is fooled out of their money.

This is a pivotal reason why I dumped NFL Sunday Ticket and DirecTV. I am not into being bilked of hard-earned household money by a quasi-mafia.

Update (Dec 2019): The NFL is reportedly planning a “top-down review” of the league’s officiating during the 2020 offseason.

Such shenanigans as exemplified in the three case studies above represent the ever-presence and impact of agency (not merely bias). Bias can be mitigated; agency, however, requires the removal and/or disruption of the power structures of the cartel, cabal and mafia. These case examples in corruption demonstrate how agency can manipulate inflection dynamics to reach a desired tipping point – after which one can sit in one’s university office and enjoy tenure, all the way to sure victory. The only tasks which remain are to protect the indigo point secret formula by means of an appropriate catch phrase, and to ensure that one does not have any mirrors hanging about, so that one does not have to look at oneself.

An ethical skeptic maintains a different view as to how championships, ethical markets, as well as scientific understanding, should be prosecuted and won.

The Ethical Skeptic, “Inflection Point Theory and the Dynamic of The Cheat”; The Ethical Skeptic, WordPress, 20 Oct 2019; Web,

October 20, 2019 Posted by | Institutional Mandates, Tradecraft SSkepticism | 17 Comments

The Earth-Lunar Lagrange 1 Orbital Rapid Response Array (ELORA)

Elora is a name meaning ‘the laurel of victory’. Within this paper, The Ethical Skeptic proposes for consideration a concept for an elegant, flexible, high delivery-mass, rapid-response, high kinetic-energy and low rubble-fragmentation system called ELORA: a Lagrange-exploiting orbital array around the Moon, which can be rapidly deployed to interdict an approaching Earth-impactor threat through massive, adaptable and repeated kinetic impact. It is the contention of this white paper that this concept system offers features superior, in every facet of challenge, to the existing asteroid/comet deflection technologies under consideration.

Elora is a name bearing the meaning ‘the laurel of victory’. The symbol of the laurel wreath traces back to Greek mythology. Apollo, god of archery, was often represented wearing a laurel wreath encircling his head as a crown of symbolic power. In the Greek Olympics, such victors’ wreaths were crafted from a wild form of olive tree known as “kotinos” (κότινος). In the later Roman context, laurel wreaths were symbols of martial victory, crowning a successful commander for having vanquished an enemy force with rapidity.1

‘Rapid’ is a business term which encompasses both quickness in response (Amazon) and fastness in delivery (FedEx). ELORA is a gravity-exploiting wreath, worn around the head of the Moon, designed to mitigate large celestial bodies – future and, importantly, emergent Earth-impactors – through a rapid, repeatable and overwhelming kinetic response. A system which solves (in the concept presented herein) many of the problems which face today’s proposed Earth-impactor mitigation ideas, yet bears few of their disadvantages.

ELORA is an acronym for Earth-Lunar Lagrange 1 (ELL-1) Orbital Rapid Response Array. ELORA is a proposed system to interdict and deflect Potentially Hazardous Objects threatening Earth. It is a series of Lunar dust bags which each perform kinetically like shotgun pellets. They are bagged on the Moon and then individually launched to Earth-Lunar Lagrange point 1, to be assembled into massive single payloads of bound-but-separate dust bags – yielding a total of 1000 – 3000 kilotons of TNT (about 2.8 – 4.2 petajoules) of direct kinetic energy per payload. Twelve of these 1728-bag/200,000 kilogram single payloads are to be assembled, which will station as trojan ELL-1 payloads, ready to be rapidly deployed into any Lunar orbit inclination in order to interdict large (>50 meters) and short-notice Near Earth Objects or Potentially Hazardous Objects (NEO/PHO) from space. The array as a concept is easy to assemble and offers redundancy, power and rapidity unparalleled by existing conceptual alternative interdiction approaches.
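As a back-of-envelope check, the payload figures stated above (1728 bags per 200,000 kg payload, 12 bags per Lunar launch, 12 payloads in the array) work out as follows:

```python
# Back-of-envelope check of the ELORA payload figures stated in the text.
BAGS_PER_PAYLOAD = 1728       # 12**3 bags bound into one impactor
PAYLOAD_MASS_KG = 200_000     # one assembled impactor
PAYLOADS = 12                 # full array stationed at ELL-1
BAGS_PER_LAUNCH = 12          # bags lofted per Lunar launch

bag_mass = PAYLOAD_MASS_KG / BAGS_PER_PAYLOAD              # mass of one dust bag
launches_per_payload = BAGS_PER_PAYLOAD // BAGS_PER_LAUNCH # Lunar launches per payload
total_mass = PAYLOADS * PAYLOAD_MASS_KG                    # full array mass on station

print(round(bag_mass, 1), launches_per_payload, total_mass)  # 115.7 144 2400000
```

So each dust bag masses roughly 116 kg, each payload requires the 144 launches the text cites, and the full 12-payload array totals 2,400,000 kg at ELL-1.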

Of top concern among those scientists tasked to think ahead about threats to mankind is the real possibility that the Earth will someday be threatened by a rogue asteroid, comet or other – even extra-solar – space debris which becomes a Potentially Hazardous Object (PHO).2 3 Current plans to address cosmic impactor threats include nuclear warheads and various ingenious means of imparting physical effects to the PHO, or of adding or subtracting momentum from its solar-orbital vector.

‘This one did sneak up on us’: Internal emails reveal how NASA almost missed Asteroid ‘2019 OK’ (a 130 meter asteroid) when it whizzed past Earth in July, within 24 hours of its detection.4

In 2011, Professor Bong Wie, director of the Asteroid Deflection Research Center at Iowa State University, began to study strategies that could deal with 50-to-500-metre-diameter (200–1,600 ft) objects when the time to Earth impact was less than one year. He concluded that, in order to provide the required energy, a nuclear explosion – or another event that could deliver the same power – is the only method that can work against a very large asteroid within these time constraints.5 It is the contention of this author that space-deployed nuclear warheads constitute a dangerous, expensive and less effective means of mitigating such objects. A massive high-kinetic shotgun payload system such as ELORA will deliver more kinetic energy, more rapidly, and in more overwhelming fashion than can nuclear warheads – while bearing fewer of the downsides and costs of nuclear or other approaches.

Existing Approaches to Asteroid Deflection/Mitigation

Various PHO and emergent bolide collision avoidance techniques have different trade-offs with respect to metrics such as overall performance, cost, failure risks, redundancy, operations, and deployment readiness. There are various methods under serious consideration now, as means of changing the course of any potential Earth threat. These can be differentiated by various attributes such as the type of mitigation (deflection or fragmentation), energy source (kinetic, electromagnetic, gravitational, solar/thermal, or nuclear), and approach strategy (long term influence or immediate impact).6

Potentially Hazardous Object (PHO) Problem Definition: Four Challenges Exist

1.  PHO interdiction technologies exist in a convex technology trade-off relationship of diminishing marginal returns (lower blue curve in the graphic below), in that,

a.  What can be deployed quickly or be easily maneuvered in space, is also not sufficient to do the job.

b.  What can do the job, cannot be deployed quickly nor be maneuvered easily in space.

2.  Hydrogen (lithium deuteride or equivalent) core detonations are theoretically effective for small-diameter bodies, yet diminish in effectiveness (upper blue curve in the graphic below) asymptotically, with a 100 – 150 meter bolide constituting the largest body which the technology can effectively be employed to interdict.

3.  Current estimates of effectiveness are theoretical only – a condition wherein neither their adequacy at the job, nor their rapidness/maneuverability in deployment, can be easily tested against mock threat conditions prior to their actual need.

4.  No system to date has offered a low-cost, rapidly deployable, scalable, flexible, testable, centuries-durable, low maintenance, all aspect angle, low fragmentation, redundant, bolide-mass altering, high-mass/kinetic potential and multiple-impactor solution – one which can address the emergent or otherwise 150+ meter diameter body.

The various current approaches to deflecting a wayward celestial body fall into four approach categories (Note: These are all derived/reworded and modified/categorized into a more logical taxonomy, from Wikipedia: Asteroid Impact Avoidance): 

Fragmentation – explosive or high-velocity kinetic methods which seek to pulverize the orbital body into bolides which either take non-threatening orbital tracks (achieve orbital-body escape velocity) or pose less of a destructive threat when they do eventually enter the Earth’s atmosphere (hopefully less than 35 meters in average diameter). These can be executed in either an emergent or a long-term strategy.

1.  Hypervelocity Asteroid Mitigation Mission for Emergency Response (HAMMER) – a spacecraft (8 tonnes) capable of detonating a nuclear bomb to deflect an asteroid through two methods of approach:

a.  Nuclear Impact Device (NID) – a direct impact by a nuclear device causes the body to be broken through concussion into smaller pieces of both escape velocity and less-damaging characteristics.

b.  Nuclear Standoff Device (NSD) – a nuclear device or series thereof, are detonated a given distance from the orbital body. The kinetic energy of thermal and fast neutrons, along with x-rays and gamma rays causes a push which changes the track of the orbital body (note, this is not the same as cometization).

2.  Dual Warhead Nozzle-Ejecta – a two stage nuclear/nuclear approach, which combines an initial nuclear blast to create a provisional deep crater, which is then followed by a second subsurface nuclear detonation within that provisional crater (the nozzle), which would generate an ejecta effect and high degree of efficiency in the conversion of the x-ray and neutron energy that is released into propulsive energy to the orbital body.

Kinetic Energy/Impact – massive and high velocity man-assembled bodies which impact the orbital body directly and impart a resulting inertial/momentum transfer change to its orbit.

3.  Asteroid Redirect – capture and employment of another asteroid body as an inertial mass which is directed to impact and fragment or alter the trajectory of the threatening orbital body.

Earth-Lunar Lagrange 1 Orbital Rapid Response Array (ELORA) – a large kinetic object and rapid-response approach developed by The Ethical Skeptic. A series of Lunar dust bag bundles, bound together into large massive projectiles held on station at Earth-Lunar Lagrange Point 1 and subsequently placed into any needed inclination of Lagrange orbit around the Moon. These would be directed on short notice, by thruster and/or Moon-Earth slingshot, towards the approaching orbital body – exploiting the low/zero gravity of Earth-Moon Lagrange 1 – and targeted for a direct high-velocity/high-kinetic impact. The bags can be un-bound at the last minute in order to form a larger impact pattern (shotgun effect) in the case of a rubble-pile asteroid, thereby distributing the momentum over a larger area of the orbiting body, displacing a greater amount of the rubble and reducing fragmentation.

4.  Hypervelocity Asteroid Intercept Vehicle (HAIV) – a two stage kinetic/nuclear hybrid approach, which combines a kinetic impactor to create an initial crater, which is then followed by a subsurface nuclear detonation within that initial crater, which would generate a lensing effect and high degree of efficiency in the conversion of the x-ray and neutron energy that is released into propulsive energy to the orbital body.

5.  Conventional Rocket Engine – launching and attaching any spacecraft propulsion engine to the center of mass of the orbital object, and using the engine to give a push, possibly forcing the asteroid onto a non-threatening trajectory.

Gradualization – various approaches by means of technology, engines, colors, lasers or offset thrust devices which serve to push, pull, alter the solar pressure on or cometize the orbital body.

6.  Gravity Tractor Thrust Rockets – a more massive thruster spacecraft is placed into orbit around the Earth-threatening orbital body. A slow thrust is applied from the spacecraft’s engines, never exceeding escape velocity. The mutual gravitation between the two bodies begins to alter the trajectory of the orbital body from its original course.

7.  Ion Beam Driver – involves the use of a low-divergence ion thruster mounted on an orbiting spacecraft, which is pointed at the center of mass of the asteroid. The momentum imparted by the ions reaching the asteroid surface produces a slow-but-continuous force that can deflect the asteroid in similar fashion to a gravity tractor, but with a much lighter spacecraft.

8.  Solar Sail Push/Pull – attaching a solar sail either behind or on the surface of the orbital body, in order to use the solar wind to alter the trajectory of the orbital body.

9.  Painting – altering the color of the orbital body to the opposite end of the color band from which it naturally exists. The whiter or blacker surface alteration would then provide for a differential dynamic in the absorption and reflection of solar photons and gradually alter the body’s trajectory over time via the Yarkovsky effect.

10.  Solar Focusing – a technique using a set of refractory lenses or a large reflector lens (probably deployed foil) which focuses a relatively narrow beam of reflected sunlight onto a specific region of the orbital body, creating thrust from the resulting vaporization of material, solar wind or through amplifying the Yarkovsky effect, wherein photons emitted from the body itself serve to alter its trajectory.

11.  Nuclear Pulse Propulsion – involves the use of a nuclear pulse engine mounted on a spacecraft, which lands on the surface of the asteroid. The momentum imparted by the nuclear pulses produces a slow-but-continuous force that can deflect the asteroid in similar fashion to a thruster rocket.

12.  Cometization – heating the surface of the orbital body through a thermonuclear release of neutrons, x-rays and gamma rays so that it begins to eject heated material from cracks or vents in the surface, in similar manner to a comet – thereby causing a thrust vector nudging of the orbital body itself for a short to moderate period of time. Depending on the brisance and yield of the nuclear device, the resulting ejecta exhaust and mass loss effects, would produce enough alteration in the object’s orbit to make it miss Earth.

13.  Laser Ablation – focus sufficient laser energy from Earth or a space deployed laser or laser array, onto the surface of an asteroid to cause flash vaporization and mass ablation and create either an impulse or mass alteration which changes the momentum of the orbital body.

14.  Magnetic Flux Compression – magnetically brakes objects that contain a high percentage of iron through deploying a wide coil of wire along the sides of its orbital path. When the body moves through the coil or tunnel, inductance creates an electromagnet solenoid effect which causes EM drag on the orbital body.

Mass Alteration – various methods of digging and ejecting or addition of added mass from/to the orbital body, thereby altering its long term orbital track.

15.  Deep Impact Collision – an impactor which injects itself deep into the surface of the orbital body, thereby changing both its velocity and net mass.

16.  Mass Driver – a system landed onto the surface of an orbital body, which ejects material into space, thus giving the object a slow steady push as well as decreasing its mass.

17.  Gravity Tractor Redirect – another smaller, but still significant, spacecraft or redirected body is placed into orbit around the Earth-threatening orbital body. The added binary-systemic gravitation/mass of the new body alters the trajectory of the orbital body from its original course.

18.  Tether Tractor – attaching a mass by means of a tether or netting, to the orbital body, thereby altering the net mass of the system and as well its orbital trajectory.

19.  Dust/Steam Cloud Accretion – releasing dust or water vapor from a spacecraft or from a detonated redirected comet, which would subsequently be gathered/accreted by the orbital body and serve to alter its mass/trajectory over a long period of time.

20.  Coherent Digger Array – multiple mobile or fixed flat tractors which attach to the surface of the orbital body and dig up material, ejecting it into space and thereby significantly altering the mass of the orbital body and changing its trajectory. The material could also be released from one side of the body as a coordinated fountain array with an added propulsive effect.

21.  Net Drag – a durable net material which is deployed into the path of the orbital object, which then wraps around the object. This netting addition is added several times over until the net mass/momentum of the orbital body is changed.

Carl Sagan, in his book Pale Blue Dot, expressed concern about deflection technology, noting that any method capable of deflecting impactors away from Earth could also be abused to divert non-threatening bodies toward the planet.

If you can reliably deflect a threatening worldlet so it does not collide with the Earth, you can also reliably deflect a harmless worldlet so it does collide with the Earth. Suppose you had a full inventory, with orbits, of the estimated 300,000 near-Earth asteroids larger than 100 meters—each of them large enough, on impacting the Earth, to have serious consequences. Then, it turns out, you also have a list of huge numbers of inoffensive asteroids whose orbits could be altered with nuclear warheads so they quickly collide with the Earth…

Tracking asteroids and comets is prudent, it’s good science, and it doesn’t cost much. But, knowing our weaknesses, why would we even consider now developing the technology to deflect small worlds?…

If we’re too quick in developing the technology to move worlds around, we may destroy ourselves; if we’re too slow, we will surely destroy ourselves. The reliability of world political organizations and the confidence they inspire will have to make significant strides before they can be trusted to deal with a problem of this seriousness…

Since the danger of misusing deflection technology seems so much greater than the danger of an imminent impact, we can afford to wait, take precautions, rebuild political institutions—for decades certainly, probably centuries. If we play our cards right and are not unlucky, we can pace what we do up there by what progress we’re making down here…

The asteroid hazard forces our hand. Eventually, we must establish a formidable human presence throughout the inner Solar System. On an issue of this importance I do not think we will be content with purely robotic means of mitigation. To do so safely we must make changes in our political and international systems.

   ~[p 146-150], Pale Blue Dot, Carl Sagan

The critical path issue elucidated through this is that a well-designed and elegant deflection technology would be employed to increase the entropy of the interdiction circumstance, whereas using a redirect technology critically depends upon decreasing the entropy of that circumstance. In other words, by choosing a non-nuclear deflection (as opposed to redirection) we are pushing the threatening orbital body into any one of a billion potential outcomes, all of which are satisfactory in nature. In order to make a non-threatening orbital body suddenly become a threat, by contrast, one must alter its trajectory to one specific outcome among billions – a task of extraordinarily greater difficulty, which also renders such redirect technology a non-optimal choice as an impactor-mitigating solution. I disagree with Sagan that all mitigation technologies will or can be used as an implement of warfare and therefore must be delayed – as one must resign oneself to the single answer of nuclear detonations in order to assume that such a false dilemma exists.

Indeed, that dilemma does not necessarily exist. What we have proposed below, provides for a powerful, yet neutral, non-nuclear and single purpose system – which can only be employed to deflect incoming invaders with abandon, yet cannot be used to deflect them in order to purposely place Earth into harm’s way. The concept system resolves most every shortfall characteristic in the list of mitigation approaches above (see graph and list of technologies 1 – 21), and as well resolves Sagan’s concern, through use of simple technologies and focused on-task elegance in design.

Elegant Solution Approach: ELORA – Earth-Lunar Lagrange 1 Orbital Rapid Response Array

Below are presented five slides which serve to introduce the ELORA concept approach and feature set. The first, second and third slides serve to introduce the Lagrange exploitation construct, along with the principle involving 12 x 1728 bags of Lunar dust in trojan Earth-Lunar Lagrange 1 station or targeting orbit around the Moon. The fourth slide speaks to the establishment of all-Lunar-inclination-angle target interdiction capability, while the fifth slide depicts the multiple impactor (up to 12) and shotgun (1728 ‘pellets’) approaches which achieve the enormous kinetic energy payload and low fragmentation outcome.

The development process consists of simply harvesting dust from the surface of the Moon, so that large particles are not created from spills in orbit around the Moon or after impact with the targeted bolide. This dust is bagged and launched into space in batches of 12 bags. After 144 launches (much more cheaply executed from the low-gravity surface of the Moon than from Earth), these 1728 bags of Lunar dust are bound together as a single 200,000 kg ‘payload’ – one single impactor designed to mitigate an Earth-endangering NEO/PHO. Each payload is then affixed with a rocket and attitude control system, and parked at Lagrange 1 (or ready-placed into a Lagrange elliptical orbit around the Moon, in a variety of orbit inclinations, so as to maximize celestial omnidirectional coverage). The payload is preset with small deployment charges which allow the bags of dust to be burst slightly apart and to separate during the last 5 minutes of terminal approach, so that they act as a kind of shotgun blast upon the targeted bolide.

This is all accomplished at a space work-station called ELL-1 Payload Assembly, in trojan orbit at Earth-Lunar Lagrange point 1. The Earth-Lunar Lagrange 1 Payload Assembly station would be used to conduct monitoring, maintenance and upgrades of the system from then on. This would be absolutely essential due to the structural fatigue and propellant degradation which each payload and its control system would experience through age, or through the constant repetitive changes in the Moon’s tidal gravity over each orbit. Alternatively, all 12 payloads may be kept on station as ready-station trojan bodies at ELL-1; the Moon-orbital phase for payloads under this approach would only be initiated when actual deployment of the system was needed, delaying the rapidness of response by only a couple of days. Of course, a hybrid system may also be deployed, with a portion of the payloads in orbit and the remainder in trojan station-keeping reserve, so as to minimize maintenance demand.
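For context on where ELL-1 sits, the Hill-sphere approximation r ≈ R·(m/3M)^(1/3) locates the Earth-Moon L1 point. This is a minimal sketch using standard published masses; the formula is a first-order approximation, not a precise ephemeris:

```python
# Rough location of Earth-Lunar Lagrange point 1 (ELL-1) via the Hill-sphere
# approximation r = R * (m / (3 * M)) ** (1/3), measured from the Moon
# toward Earth. Constants are standard published values.
M_EARTH = 5.972e24        # kg
M_MOON = 7.342e22         # kg
EARTH_MOON_KM = 384_400   # mean Earth-Moon distance

r_l1 = EARTH_MOON_KM * (M_MOON / (3 * M_EARTH)) ** (1 / 3)
print(round(r_l1))  # ~61,500 km from the Moon, toward Earth
```

That is, the assembly station would sit roughly 61,500 km Earthward of the Moon, about 16% of the Earth-Moon distance – close enough for cheap Lunar launches, yet in a gravitational balance point that requires minimal station-keeping.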

The result is a single payload impactor (200,000 kg) with the force of 1000 – 3000 kilotons of TNT (about 2.8 – 4.2 Petajoules); in the range of 60 to 90 times as much energy as that released from the atomic bomb detonated at Hiroshima.
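The kinetic yield of such an impactor depends on the square of its closing velocity. The sketch below converts payload kinetic energy using the conventional TNT equivalence (1 kiloton = 4.184×10¹² J) and a commonly cited ~15 kt Hiroshima yield; the closing velocities are assumed values for illustration, not figures taken from this paper:

```python
# Kinetic energy of a 200,000 kg impactor as a function of closing velocity,
# converted to kilotons of TNT (1 kt TNT = 4.184e12 J by convention) and to
# Hiroshima-bomb equivalents (~15 kt, a commonly cited yield estimate).
# The velocities below are assumed, illustrative values.
PAYLOAD_MASS_KG = 200_000
J_PER_KILOTON = 4.184e12
HIROSHIMA_KT = 15.0

def impact_energy(mass_kg, velocity_m_s):
    joules = 0.5 * mass_kg * velocity_m_s ** 2
    kilotons = joules / J_PER_KILOTON
    return joules, kilotons, kilotons / HIROSHIMA_KT

for v_km_s in (10, 30, 100):  # assumed closing velocities
    j, kt, hiroshimas = impact_energy(PAYLOAD_MASS_KG, v_km_s * 1000)
    print(f"{v_km_s} km/s -> {j:.2e} J, {kt:,.1f} kt TNT, {hiroshimas:,.1f} Hiroshimas")
```

Because yield grows with v², the delivered energy is dominated by the closing geometry: the same payload is orders of magnitude more effective against a fast head-on approacher than a slow overtaking one.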

However, unlike a nuclear fusion core detonation (used by the most effective alternative approaches in the chart above) – ALL of an ELORA payload’s kinetic potential is transferred into momentum imparted to the orbital body.

Alternative approaches above would require 672 static-load launches, or 50 to 85 B83 hydrogen nuclear core detonations, in order to achieve the same inertial effect as the 12 single payloads of an ELORA intervention – all static assets needing to be maintained by an international body for centuries, and then, without warning, be required within a matter of days.

And of course, ELORA could be tested on 150+ meter asteroids and NEOs, at low cost, whereas the Delta IV static-load and B83 hydrogen warhead detonation approaches could not.

Now, it should be noted that the orbit paths of the payloads do not have to conform to the specific polar orbit depicted in the slides below. Alternative Lunar retrograde orbits and other oblique/equatorial/inclination offset orbits can be established to enhance the ability to deliver payloads to an impactor body approaching from a variety of aspect angles, and in the most rapid and low-energy-input to high kinetic payload ratio means as possible. The illustrations below depict only one type of potential prograde polar orbit, for conceptual simplicity.

Notes: While the Lunar orbit is depicted as somewhat circular, the actual orbit would be elliptical. As well, the relative sizes of the Moon and Earth are biased towards presenting the Moon as larger and closer relative to the Earth than it really is, and both bodies are drawn larger to scale than reality. All of this is done for the sake of presentation only.

Critical Advantages of ELORA over Other Interdiction Concepts/Approaches

The ELORA concept solution presents a number of advantages over currently proposed approaches:

1.  Low construction cost (Provided we are working on the Moon already)

2.  Repeated impacts and multiple attempts possible in quick response context (tolerates single failures)

3.  No fragmentation of threat – Impactor is fine dust and spreads over an area almost the size of the bolide immediately prior to impact, so that it bears less likelihood of splitting it

4.  Low cost to maintain/launch/station-keep

5.  Very quick deployment – System can be deployed within hours after a five sigma track is established for the target object

6.  Extremely high velocities and impact reach possible – Superior kinetic energy potential – Superior inertia imparted as compared to hydrogen core detonation

7.  Modular/Scalable/’Magazine’ is cheaply and easily reload-able – the advantageous bag-by-bag method as to how it is assembled, becomes also a key strength in how it impacts the orbital body (like shotgun pellets) and reduces overall threat of fragmentation

8.  Can address multiple objects at once or persistent fragments which remain after first impact, with a second fusillade

9.  Can still be used with superior effectiveness for longer term intervention scenarios

10.  ‘Paints’ an asteroid white (for long term intervention scenario) – Increases Yarkovsky effect – Induces cometization on impact side

11.  Adds superior amount of mass to the target orbital body

12.  Spread pattern (shotgun blast) or single bullet projectile and variable velocities possible – tailored to orbital body challenge. Not vulnerable to the tumbling of the target bolide (roll, pitch, yaw) as are all other technologies

13.  Deflects very large orbital body mass threats compared to current conceptual approaches

14.  Remaining straggler threat fragments can be independently targeted and impacted separately

15.  Uses Lunar orbit angular momentum and/or Lunar/Earth slingshot effect for added kinetic energy at launch

16.  Vastly superior single impactor total mass (56 x) – equivalent to 1000 – 3000 kilotons of TNT (about 2.8 – 4.2 Petajoules), in the range of 60 to 90 times as much energy as that released from the atomic bomb detonated at Hiroshima. However, unlike a nuclear warhead blast – ALL of this kinetic potential is transferred into momentum imparted to the orbital body.

17.  Rapid intervention arrival time onto targeted threat

18.  Potential for deployment to not be controlled by a single nation nor launch station

19.  Lower chance of technology chain risk-failures/straightforward mechanisms

20.  Thrusters are only directional and do not have to lift anything into space, nor expend regular fuel in order to keep a dynamic orbit – Less fuel-vulnerable/Lower fuel requirement

21.  Each impactor unit arrival provides ranging/correction for more accurate successive impacts – (shoot shoot look shoot)

22.  Employs the kinetic energy of the Moon’s orbit around the Earth like a pitcher’s throw in baseball

23.  Uses stationary Lagrange point 1 assembly – low G and low cost to assemble/handle impactor payloads

24.  Can be recaptured by the Lagrange 1 assembly station and repair/maintenance done as needed

25.  Low cost of assembly/launch from the low G of the Moon’s surface

26.  System can be upgraded with better trajectory rockets, without having to change out the actual payload

27.  System can be tested repeatedly and at low cost. It is easy to replace the expended round.

28.  Can deflect an irregularly shaped, long and tumbling bolide (such as 1I/2017 U1 ‘Oumuamua)

29.  Trojan payloads in static orbit at Earth-Lunar Lagrange 1 can be launched/slingshot by the Moon and Earth along any selected initial Lunar orbit inclination vector desired (as well as the corresponding Earth slingshot inclination), to interdict objects approaching from any direction inside the celestial grid.

30.  Assembly and trojan stationing at Earth-Lunar Lagrange Point 1 allows for a very large payload to be assembled in space, yet not have to carry the rockets and large fuel load required to keep orbit station around the Moon – or even worse, the Earth – during its assembly, wherein one would constantly have to add energy, adjusting the orbit of the payload as bag mass is added to its structure over time.

Development and Phasing

While much work obviously remains to be completed on the development phase – which accordingly demands that a Moon base of operations be established (becoming only one of the reasons to mandate such a thing, so that this project need not be burdened with the full cost of establishing operations on the Moon itself) – the deployment is conducted in relatively straightforward fashion, through beta testing and the four deployment phases below.

2038  Beta 0 Testing – Earth based test of smaller trojan payload station-keeping at ELL-1

2040  Beta 1 Testing – ELL-1 in situ testing of larger payload assembly/station-keeping

2043  Beta 2 Testing – Trojan to Moon orbit transition test and asteroid test interdiction

2050  Phase I – Establish Moon surface station infrastructure

2055  Phase II – Lunar launch station assembly/operation/test bagging & launch

2058 – 2068  Phase III – Earth-Lunar L1 trojan impactor amassing (creating payloads)

2070 – 2075  Phase IV – Lunar Lagrange orbital array stationing/acceptance testing series

Thus we are probably at least 40 years from being able to begin such a feat as presented herein at face value. However, it is the opinion of this author that eventually the best minds in this discipline will conclude that this solution is the only real way in which an emergent, 150+ meter bolide interdiction could be achieved by mankind. In the meantime, the nuclear option (distasteful as that may be) appears to be the best stop-gap measure for Earth defense with respect to smaller, more likely PHO bolides, while we muster the political and social will to create the elegant and ethical ELORA architecture in our binary space.

However, there is nothing to say that we cannot, in the meantime, create a couple of these payloads with conventional Delta IV launches over the next two decades, place a similar smaller-sized payload at Lagrange 1, and then test the concept first. In fact, we should do this. But the question will remain: will we be this bold? Or are PHO/Earth-impactors just another myth to the assuredly skeptical mind?

In the meantime, respectfully submitted for your consideration.

The Ethical Skeptic, “The Earth-Lunar Lagrange 1 Orbital Rapid Response Array (ELORA)”; The Ethical Skeptic, WordPress, 14 Sep 2019; Web,

September 14, 2019 | Ethical Skepticism | 5 Comments

The Elements of Hypothesis

One-and-done statistical studies, based upon a single set of statistical observations (or, even worse, a lack thereof), are not much more credible in strength than a single observation of Bigfoot or a UFO. The reason: they have not served to develop the disciplines of true scientific hypothesis. They fail in their duty to address and inform.

As most scientifically minded persons realize, hypothesis is the critical foundation in exercise of the scientific method. It is the entry door which demonstrates the discipline and objectivity of the person asking to promote their case in science. Wikipedia cites the elements of hypothesis in terms of the below five features, as defined by philosophers Theodore Schick and Lewis Vaughn:1

  • Testability (involving falsifiability)
  • Parsimony (as in the application of “Occam’s razor” (sic), discouraging the postulation of excessive numbers of entities)
  • Scope – the apparent application of the hypothesis to multiple cases of phenomena
  • Fruitfulness – the prospect that a hypothesis may explain further phenomena in the future
  • Conservatism – the degree of “fit” with existing recognized knowledge-systems.

Equivocally, these elements are all somewhat correct; however none of the five elements listed above constitute logical truths of science nor philosophy. They are only correct under certain stipulations. The problem resides in that this renders these elements not useful, and at worst destructive in terms of the actual goals of science. They do not bear utility in discerning when fully structured hypothesis is in play, or some reduced set thereof. Scope is functionally moot at the point of hypothesis, because in the structure of Intelligence, the domain of observation has already been established – it had to have been established, otherwise you could not develop the hypothesis from any form of intelligence to begin with.2 3 To address scope again at the hypothesis stage is to further tamper with the hypothesis without sound basis. Let the domain of observation stand, as it was observed – science does not advance when observations are artificially fitted into scope buckets (see two excellent examples of this form of pseudoscience in action, with Examples A and B below).

Fruitfulness can mean ‘producing that which causes our paradigm to earn me more tenure or money’, or ‘consistent with subjects I favor and disdain’, or finally and worse, ‘able to explain everything I want explained’. Predictive strength, or even testable mechanism, are much stronger and less equivocal elements of hypothesis. So these two features of hypothesis defined by Schick and Vaughn range from useless to malicious in terms of real contribution to scientific study. These two bad philosophies of science (social skepticism) inevitably serve to produce a fallacy called explanitude: a condition wherein a hypothesis is considered stronger the more historical observations it serves to explain and the more flexible it is in predicting or explaining future observations. Under ethical skepticism, this qualification of an alternative or especially a null hypothesis is a false notion.

Finally, parsimony and conservatism are functionally the same thing – conserving and leveraging prior art along a critical path of necessary incremental conjecture risk. This is something which few people aside from experienced patent filers understand. If I constrain my conjecture to simply one element of risk along a critical path of syllogism, I am both avoiding ‘excessive numbers of entities’ and exercising ‘fit with existing recognized knowledge systems’ at the same time. Otherwise, I am proposing an orphan question, and although it might appear to be science, p-values and all, it is not. Thus, a lack of understanding on the part of Schick and Vaughn, inside How to Think About Weird Things: Critical Thinking for a New Age, as to how true science works misled them into believing that these two principles needed to be addressed separately. One is a fortiori with the other inside Parsimony (see below). Unless of course one implies that ‘fit’ means ‘to comply’ (as the authors probably do, both being social skeptics with no professional experience managing a lab) – then of course we are dealing with a completely different paradigm of science called sciebam: the only answers I will accept, until I die, are answers which help me improve or modify my grasp of how correct I am. The duty of a hypothesis is to inform about and address standing evidence and inference (Element 4 below), not necessarily to conform to it. It should avoid beginning science by means of an orphan question, especially under a conflict of interest – and especially if that interest is ‘preservation of career reputation’. Thus the process of simply confirming standing theory, and the process of discovery, are often two different things altogether. This hinges upon the critical discernment between what ethical skepticism calls science and sciebam.

Orphan Question

/philosophy : pseudoscience : sciebam/ : a question, purported to be the beginning of the scientific method, which is asked in the blind, without sufficient intelligence gathering or preparation research, and is as a result highly vulnerable to being manipulated or posed by means of agency. The likelihood of a scientifically valid answer being developed from this question process, is very low. However, an answer of some kind can almost always be developed – and is often spun by its agency as ‘science’. This form of question, while not always pseudoscience, is a part of a modified process of science called sciebam. It should only be asked when there truly is no base of intelligence or body of information regarding a subject. A condition which is rare.

Sciebam

/philosophy : science : method : sciebam/ : (Latin: I knew) An alternative form of knowledge development, which mandates that science begins with the orphan/non-informed step of ‘ask a question’ or ‘state a hypothesis’. A non-scientific process which bypasses the first steps of the scientific method: observation, intelligence development and formulation of necessity. This form of pseudoscience/non-science presents three vulnerabilities:

First, it presumes that the researcher possesses substantially all the knowledge or framework they need, lacking only to fill in final minor gaps in understanding. This creates an illusion-of-knowledge effect across the extended domain of researchers, as each bit of provisional knowledge is then codified as certain knowledge based upon prior confidence. Science can only progress thereafter through a series of shattering paradigm shifts.

Second, it renders science vulnerable to the possibility that, if the hypothesis, framework or context itself is unacceptable at the very start, then its researcher is necessarily conducting pseudoscience – no matter the results, nor how skillfully and expertly they may apply the methods of science. And since the hypothesis is now deemed pseudoscience, no observation, intelligence development or formulation of necessity is therefore warranted. The subject is now closed/embargoed by means of circular appeal to authority.

Finally, the question asked at the beginning of a process of inquiry can often prejudice the direction and efficacy of that inquiry. A premature or poorly developed question, and especially one asked under the influence of agency (not simply bias) – and in absence of sufficient observation and intelligence – can most often result quickly in a premature or poorly induced answer.

Science – ‘I learn’ = using deduction and inductive consilience to infer a novel critical path understanding
Sciebam – ‘I knew’ = using abduction, panduction and linear/statistical induction to enforce an existing or orphan interpretation

Real Hypothesis

Ethical skepticism proposes a different way of lensing the above elements. Under this philosophy of hypothesis development, I cannot make any implication of the ilk that ‘I knew’ the potential answer a priori. Such implication biases both the question asked, as well as the processes of inference employed. Rather, hypothesis development under ethical skepticism involves structure which is developed around the facets of Intelligence, Mechanism and Wittgenstein Definition/Domain. A hypothesis is neither a hunch, assumption, suspicion nor idea. Rather it is


/philosophy : skepticism : scientific method/ : a disciplined and structured incremental risk in inquiry, relying upon the co-developed necessity of mechanism and intelligence. A hypothesis necessarily features seven key elements which serve to distinguish it from non-science or pseudoscience.

The Seven Elements of Hypothesis


1.  Construct based upon necessity. A construct is a disciplined ‘spark’ (scintilla) of an idea, on the part of a researcher or type I, II or III sponsor, educated in the field in question and experienced in its field work. Once a certain amount of intelligence has been developed, as well as definition of causal mechanism which can eventually be tested (hopefully), then the construct becomes ‘necessary’ (i.e. passes Ockham’s Razor). See The Necessary Alternative.

2.  Wittgenstein definition and defined domain. A disciplined, exacting, consistent, conforming definition need be developed for both the domain of observation, as well as the underpinning terminology and concepts. See Wittgenstein Error.

3.  Parsimony. The resistance to expand explanatory plurality or descriptive complexity beyond what is absolutely necessary, combined with the wisdom to know when to do so. Conjecture along an incremental and critical path of syllogism. Avoidance of unnecessary orphan questions, even if apparently incremental in the offing. See The Real Ockham’s Razor. Three characteristic traits highlight hypothesis which has been adeptly posed inside parsimony.

a. Is incremental and critical path in its construct – the incremental conjecture should be a reasoned, single stack and critical path new construct. Constructs should follow prior art inside the hypothesis (not necessarily science as a whole), and seek an answer which serves to reduce the entropy of knowledge.

b. Methodically conserves risk in its conjecture – no question may be posed without risk. Risk is the essence of hypothesis. A hypothesis, once incremental in conjecture, should be developed along a critical path which minimizes risk in this conjecture by mechanism and/or intelligence, addressing each point of risk in increasing magnitude or stack magnitude.

c. Posed so as to minimize stakeholder risk – (i.e. the precautionary principle) – a hypothesis should not be posed which suggests that a state of unknown regarding risk to impacted stakeholders is acceptable as a central aspect of its ongoing construct critical path. Such risk must be addressed first in critical path, as a part of 3. a. above.

4.  Duty to Reduce, Address and Inform. A critical element and aspect of parsimony regarding a scientific hypothesis: the duty of such a hypothesis to expose and address in its syllogism all known prior art, in terms of both analytical intelligence obtained and direct study mechanisms and knowledge. If information associated with a study hypothesis is unknown, it should simply be mentioned in the study discussion. However, if countermanding information is known, or a key assumption of the hypothesis appears magical, the structure of the hypothesis itself must both inform of its presence and address its impact. See Methodical Deescalation and The Warning Signs of Stacked Provisional Knowledge.

Unless a hypothesis offers up its magical assumption for direct testing, it is not truly a scientific hypothesis. Nor can its conjecture stand as knowledge.

Pseudo-Hypothesis

/philosophy : pseudoscience/ : A pseudo-hypothesis explains everything, anything and nothing, all at the same time.

A pseudo-hypothesis fails in its duty to reduce, address or inform. A pseudo-hypothesis states a conclusion and hides its critical path risk (magical assumption) inside its set of prior art and predicate structure. A hypothesis, on the other hand, reduces its sets of prior art, evidence and conjecture and makes them manifest. It then addresses critical path issues and tests its risk (magical assumption) as part of its very conjecture accountability. A hypothesis reduces, exposes and puts its magical assertion on trial. A pseudo-hypothesis hides its magical assumptions, woven into its epistemology, and places nothing at risk thereafter. A hypothesis remains a hypothesis, and not a pseudo-hypothesis, so long as it is ferreting out its magical assumptions and placing them into the crucible of accountability. Once this process ceases, the ‘hypothesis’ has been transformed into an Omega Hypothesis. Understanding this difference is key to scientific literacy.

Grant me one hidden miracle and I can explain everything else.

5.  Intelligence. Data is denatured into information, and information is transmuted into intelligence. Inside decision theory and clandestine operation practices, intelligence is the first level of illuminating construct upon which one can make a decision. The data underpinning the intelligence should necessarily be probative and not simply reliable. Intelligence skills combine a healthy skepticism towards human agency, along with an ability to adeptly handle asymmetry, recognize probative data, assemble patterns, increase the reliability of incremental conjecture and pursue a sequitur, salient and risk mitigating pathway of syllogism. See The Role of Intelligence Inside Science.

6.  Mechanism. Every effect in the universe is subject to cause. Such cause may be mired in complexity or agency; nonetheless, reducing a scientific study into its components and then identifying underlying mechanisms of cause to effect – is the essence of science. A pathway from which cause yields effect, which can be quantified, measured and evaluated (many times by controlled test) – is called mechanism. See Reduction: A Bias for Understanding.

7.  Exposure to Accountability.  This is not peer review. During the development phase, a period of time certainly must exist in which a hypothesis is held proprietary so that it can mature – and indeed fake skeptics seek to intervene before a hypothesis can mature, eliminating it via ‘Occam’s Razor’ (sic) so that it cannot be researched. Nonetheless, a hypothesis must be crafted such that its elements 1 – 6 above can be held to the light of accountability, by 1. skepticism (so as to filter out sciebam and fake method) which seeks to improve the strength of hypothesis (an ‘ally’ process, not peer review), and 2. stakeholders who are impacted or exposed to its risk. Hypothesis which imparts stakeholder risk, yet is held inside proprietary cathedrals of authority, is not science – rather, it is oppression by court definition.

It is developed from a construct – which is a type of educated guess (‘scintilla’ in the chart below). One popular method of pseudoscience is to bypass the early-to-mid disciplines of hypothesis and skip right from data analysis to accepted proof. This is no different, ethically, from skipping right from a blurry photo of Blobsquatch to the conjecture that such cryptic beings are real and inhabit all of North America. It is simply a pattern in some data – in this case, blurry data which happened to fit or support a social narrative.

A hypothesis reduces, exposes and puts its magical assertion on trial.
A pseudo-hypothesis hides its magical assumptions woven into its epistemology and places nothing at risk thereafter.

Another method of accomplishing inference without due regard to science is to skip past falsifying or countermanding information and simply ignore it. This violates The Duty to Address and Inform. A hypothesis, as part of its parsimony, cannot be presented in the blind – bereft of any awareness of prior art and evidence. To undertake such promotional activity is a sales job and not science. Why acknowledge the depletion of plant food nutrients on the part of modern agriculture, when you have a climate change message to push? Simply ignore that issue and press your hypothesis anyway (see Examples A and B below).

However, before we examine that and other examples of such institutional pseudoscience, let’s first look at what makes for sound scientific hypothesis. Inside ethical skepticism, a hypothesis bears seven critical elements which serve to qualify it as science.

These are the seven elements which qualify whether or not an alternative hypothesis becomes real science. They are numbered in the flow diagram below and split by color into the three discipline streams of Indirect Study (Intelligence), Parsimony and Conservatism (Knowledge Continuity) and Direct Study (Mechanism).

A Few Examples

In the process of defining this philosophical basis over the years, I have reviewed several hundred flawed and agency-compliant scientific studies. Among them existed several key examples, wherein the development of hypothesis was weak to non-existent, yet the conclusion of the study was accepted as ‘finished science’ from its publishing onward.

Most institutional pseudoscience spins its wares under a failure to address and/or inform.

If you are going to accuse your neighbor of killing your cat, and their whereabouts were unknown at the time, then your hypothesis does not have to address such an unknown – it merely needs to acknowledge it (inform). However much your neighbor disliked your cat (intelligence), if your neighbor was in the Cayman Islands that week, your hypothesis must necessarily address such mechanism. You cannot ignore that fact simply because it is inconvenient to your inductive/abductive evidence set.

Almost all of these studies skip the hypothesis discipline by citing a statistical anomaly (or, worse, a lack thereof), and employing a p-value masquerade as a means to bypass the other disciplines of hypothesis and skip right to the peer review and acceptance steps of the scientific method. Examples A and B below fail in their duty to address critical mechanism, while Examples B and C fail in their duty to inform the scientific community of all the information it needs in order to tender peer review. Such studies end at the top left-hand side of the graphic above and call the process done, based upon one scant set of statistical observation – in ethical reality not much more credible in strength than a single observation of Bigfoot or a UFO.

Example A – Failure in Duty to Address Mechanism

Increasing CO2 threatens human nutrition. Meyers, Zanobetti, et. al. (Link)

In this study, and in particular Extended Data Table 1, a statistical contrast was drawn between farms located in elevated-CO2 regions versus ambient-CO2 regions. The contrast resulted in a p-value significance indicating that levels of iron, zinc, protein and phytate were lower in areas where CO2 concentrations exhibited an elevated profile versus the global ambient average. This study was in essence a statistical anomaly; and while part of science, it should never be taken to stand as either a hypothesis or, even worse, a conclusion – as is indicated in the social-skeptic ear-tickling and sensationalist headline title of the study, ‘Increasing CO2 threatens human nutrition’. The study has not even passed the observation step of science (see The Elements of Hypothesis graphic above). Who allowed this conclusion to stand inside peer review? There are already myriad studies showing that modern (1995+) industrial farming practices serve to dramatically reduce crop nutrient levels.4 Industrial farms tend to be nearer to heavy CO2-output regions. Why was this not raised inside the study? What has been accomplished here is merely to hand off a critical issue of health risk for placement into the ‘climate change’ explanitude bucket, rather than its address and potential resolution. It raises the question – since the authors neither examined the above alternative, nor raised it inside their Discussion section – of whether they care about either climate change or nutrient dilution, viewing both instead as political-football means to further their careers. It is not that they have to confirm this existing study direction; however, they should at least acknowledge it in their summary of analytics and study limitations. The authors failed in their duty to address standing knowledge about industrial farming nutrient depletion. This would have never made it past my desk. Grade = C (good find, harmful science).
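The confounding concern raised above can be made concrete with a small simulation. This is a minimal sketch with entirely hypothetical numbers (nutrient levels, farm proportions and effect sizes are all invented for illustration): nutrient content is driven only by farming practice, never by CO2, yet a naive region-versus-region contrast still ‘detects’ a CO2 effect, because industrial farms cluster in elevated-CO2 regions.

```python
import random
import statistics

random.seed(42)

# Hypothetical model: zinc level depends ONLY on farming practice, never on
# CO2 -- yet industrial farms cluster near industrial (high-CO2) corridors.
def simulate_farm(in_elevated_co2_region: bool) -> float:
    p_industrial = 0.8 if in_elevated_co2_region else 0.3
    industrial = random.random() < p_industrial
    base_zinc = 30.0                         # mg/kg, hypothetical baseline
    depletion = 6.0 if industrial else 0.0   # industrial practice depletes zinc
    return random.gauss(base_zinc - depletion, 2.0)

elevated = [simulate_farm(True) for _ in range(500)]
ambient = [simulate_farm(False) for _ in range(500)]

diff = statistics.mean(ambient) - statistics.mean(elevated)
print(f"ambient - elevated zinc gap: {diff:.2f} mg/kg")
# The region contrast 'finds' lower zinc under elevated CO2, even though
# CO2 never enters the causal model at all -- the lurking variable does.
```

The point is not the particular numbers, but that a significant regional contrast alone cannot discriminate a CO2 mechanism from a farming-practice mechanism; that is precisely the unaddressed alternative.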

Example B – Failure in Both Duty to Inform of Intelligence and Duty to Address Mechanism

Possible future impacts of elevated levels of atmospheric CO2 on human cognitive performance and on the design and operation of ventilation systems in buildings. Lowe, Heubner, et. al. (Link)

This study cites its review of the immature body of research surrounding the relationship between elevated CO2 and cognitive ability. Half of the studies reviewed indicated that human cognitive performance declines with increasing CO2 concentrations. The problem entailed in this study, similar to the Zanobetti study in Example A above, is that it does not develop any underlying mechanism which could explain how elevated CO2 directly impacts cognitive performance. This is not a condition of ‘lacking mechanism’ (as sometimes the reality is that one cannot assemble such), rather one in which the current mechanism paradigm falsifies the idea. The study should have been titled ‘Groundbreaking new understanding on the toxicity of carbon dioxide’. This is of earth-shattering import; there is a lot of science which would need to be modified if this study proved correct at face value. The sad reality is that the study does not leverage prior art in the least. As an experienced diver, I know that oxygen displacement on the order of 4 percentage points is where the first slight effects on cognitive performance come into play. Typical CO2 concentrations in today’s atmosphere are in the range of 400 ppm – not even in the relevant range for an oxygen-displacement argument. However, I would be willing to accept this study in sciebam, were its authors to offer another mechanism of direct effect, such as ‘slight elevations in CO2 and climate temperature serve to toxify the blood’, for example. But no such mechanism exists – in other words, CO2 is only a toxicant as it becomes an asphyxiant.5 This study bears explanitude: it allows an existing paradigm to easily blanket-explain an observation which might otherwise have indicated a mechanism of risk – such as score declines being attributable to increases in encephalitis, not CO2. It violates the first rule of ethical skepticism: If I was wrong, would I even know it?
The authors failed in their duty to inform about the known mechanisms of CO2 interaction inside the body, and failed to address the standing knowledge which falsifies their implied mechanism. This study was, as well, a play for political sympathy and club rank. Couching this pseudoscience with the titular word ‘Possible’ is no excuse to pass it off as science. Grade = D (inexpert find, harmful science).
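The scale mismatch in the diving analogy above can be checked with back-of-envelope arithmetic. The 4-percentage-point hypoxia figure below is the diving rule of thumb cited in the text; treat it as approximate.

```python
# Illustrative arithmetic only: compares the oxygen displacement implied by
# atmospheric CO2 levels against the rough threshold where cognitive effects
# are first reported in diving practice.
ambient_co2_ppm = 400.0        # approximate modern atmosphere
doubled_co2_ppm = 800.0        # an extreme elevated scenario
cognitive_threshold_pct = 4.0  # rough O2-displacement rule of thumb (pct points)

def ppm_to_percent(ppm: float) -> float:
    """Convert parts-per-million to percentage points (10,000 ppm = 1%)."""
    return ppm / 10_000.0

added_displacement = ppm_to_percent(doubled_co2_ppm - ambient_co2_ppm)
print(f"O2 displacement added by doubling CO2: {added_displacement:.3f} pct points")
print(f"Diving threshold is ~{cognitive_threshold_pct / added_displacement:.0f}x larger")
# Even a full doubling of CO2 displaces only ~0.04 percentage points of air --
# roughly two orders of magnitude below the displacement threshold cited.
```

This is exactly the author’s point: at these concentrations an oxygen-displacement mechanism is not in play, so some other direct mechanism would have to be proposed.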

Example C – Orphan Question, Failing in All Seven Elements of Hypothesis, and Especially Duty to Inform of Intelligence

A Population-Based Study of Measles, Mumps, and Rubella Vaccination and Autism. Madsen, Hviid, et. al. (Link)

This is the notorious ‘Danish Study’ of the relationship between the MMR vaccination and observed rates of confirmed autism psychiatric diagnoses inside the Danish Psychiatric Central Register. These are confirmed diagnoses of autism spectrum disorders (Autism, ADD/PDD and Asperger’s) over a nine-year tracking period (see Methodology and Table 2). In Denmark, children are referred to specialists in child psychiatry by general practitioners, schools, and psychologists if autism is suspected. Only specialists in child psychiatry diagnose autism and assign a diagnostic code, and all diagnoses are recorded in the Danish Psychiatric Central Register. The fatal flaw in this study resided in the data domain it analyzed and the resulting study design. 77% of autism cases are not typically diagnosed until past 4.5 years of age: based upon a chi-squared cumulative distribution fit at each individual μ below from the CDC, with 1.2 years degrees of freedom and 12 months of Danish bureaucratic bias, the chance of detection by CDC statistical practices = .10 + .08 + .05 = 0.23 – or a 77% chance of a false negative (miss). The preponderance of diagnoses in the ADD/PDD and Asperger’s sets serves to weight the average age of diagnosis well past the average age of the subjects in this nine-year study, which tracked patients from birth (average age = 4.5 years at study end). See the graphic to the right, which depicts the Gompertzian age-arrival distribution function embedded inside this study’s population; an arrival distribution which Madsen and Hviid should have accounted for – but did not. This is a key warning flag of exclusion bias. From the CDC data on this topic, the mean ages of diagnosis for ASD spectrum disorders in the United States, where particular focus has tightened this age data in recent years, are:6

   •  Autistic disorder: 3 years, 10 months
   •  ASD/pervasive developmental disorder (PDD): 4 years, 8 months
   •  Asperger disorder: 5 years, 7 months
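The censoring argument above can be sketched numerically. The following is an illustrative model only: it assumes normal age-at-diagnosis distributions with a 1.2-year spread (echoing the figure cited above), not the chi-squared fit the author describes, so the exact shares differ from the 77% quoted; the qualitative point – that follow-up ending at an average age of 4.5 years scores many eventual diagnoses as negatives – survives any reasonable parameterization.

```python
import math

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """P(age at diagnosis <= x) under a normal approximation."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Mean ages at diagnosis from the CDC figures quoted above (in years).
mean_dx_age = {
    "autistic disorder": 3.83,  # 3 yr 10 mo
    "ASD/PDD": 4.67,            # 4 yr 8 mo
    "Asperger": 5.58,           # 5 yr 7 mo
}
follow_up_end = 4.5  # average subject age at study end (years)
sigma = 1.2          # assumed spread, echoing the '1.2 years' figure above

for label, mu in mean_dx_age.items():
    p_detect = normal_cdf(follow_up_end, mu, sigma)
    print(f"{label}: P(diagnosed before study end) ~ {p_detect:.2f}, "
          f"missed ~ {1 - p_detect:.2f}")
# Later-diagnosed subtypes are mostly missed: their diagnoses arrive after
# follow-up ends and are scored as confirmatory negatives.
```

Whatever distribution one fits, truncating observation at an age below (or near) the mean diagnosis age guarantees that a large share of true positives are recorded as negatives – the utile absentia effect described below.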

Note: A study released 8 Dec 2018 showed a similar effect through data manipulation-exclusion techniques in the 2004 paper by DeStefano et al.; Age at first measles-mumps-rubella vaccination in children with autism and school-matched control subjects: a population-based study in metropolitan Atlanta. Pediatrics 2004;113:259-266.7

Neither did the study occur in a society which has observed a severe uptick in autism, nor during the timeframe which has been most closely associated with autism diagnoses (2005+).8 Of additional note is the fact that school professionals refer non-profound autism diagnosis cases to the specialists in child psychiatry, effectively ensuring by practice alone that all such diagnoses occur after age 5. Exacerbating this is the fact that a bureaucratic infrastructure will be even slower, even fatally slow, in posting diagnoses to a centralized system of this type. These two factors alone will serve to force large absences in the data, which mimic confirmatory negatives. The worse the data collection is, the better the study results – a fallacy called utile absentia. The study even shows the consequent effect inversion (vaccines prevent autism) incumbent with utile absentia. In addition, the overt focus on the highly precise aspects of the study, and away from its risk exposures and other low-confidence aspects and assumptions, is a fallacy called idem existimatis. I will measure the depth of the water into which you are cliff diving to the very millimeter – but measure the cliff you are diving off of to the nearest 100 feet. The diver’s survival is now an established fact of science by the precision of the water-depth measure alone.

In other words, this study did not examine the relevant domain of data acceptable to underpin the hypothesis which it purported to support. Forget mechanism and parsimony to prior art – those waved bye-bye to this study a long time ago. Its conclusions were granted immunity and immediate acclaim because they fit an a priori social narrative held by their sponsors. It even opened with a preamble citing that it was a study meant to counter a very disliked study on the part of its authors. Starting out a process purported to be of science by being infuriated about someone else’s study results is not science, not skepticism, and not ethical.

Accordingly, this study missed 80% of its relevant domain data. It failed in its duty to inform the scientific community of peers. It is almost as if a closed, less-exposed bureaucracy were chosen precisely because of its ability to both present reliable data, and yet at the same time screen out the maximum number of positives possible. Were I a criminal, I could not have selected a more sinister means of study design myself. This was brilliance in action. Grade = F (diabolical study design, poor science).

All of the above studies failed in their duty to inform. They failed in their responsibility to communicate the elements of hypothesis to the outside scientific community. They were sciebam – someone asked a question, poorly framed and without any background research – and by golly they got an answer. They sure got an answer. They were given a free pass because they conformed to political will. But they were all bad science.

It is the duty of the ethical skeptic to be aware of what constitutes true hypothesis, and winnow out those pretenders who vie for a claim to status as science.

epoché vanguards gnosis

The Ethical Skeptic, “The Elements of Hypothesis”; The Ethical Skeptic, WordPress, 4 Mar 2019; Web,


December 13, 2018 | Ethical Skepticism | Leave a comment

Embargo of The Necessary Alternative is Not Science

Einstein was desperate for a career break. He had a 50/50 shot – and he took it. The necessary alternative he selected, fixed c, was one which was both purposely neglected by science, and yet offered the only viable alternative to standing and celebrated club dogma. Dogma which had for the most part, gone unchallenged. Developing mechanism for such an alternative is the antithesis of religious activity. Maturing the necessary alternative into hypothesis, is the heart and soul of science.

Mr. Einstein You’ll Never Amount to Anything You Lazy Dog

In a 1905 scientific paper, Albert Einstein introduced the relationship proposed inside the equation e = mc²: the concept that the system energy of a body (e) is equal to the mass (m) of that body times the speed of light squared (c²). That same year he also introduced a scientific paper outlining his theory of special relativity. Most of the development work (observation, intelligence, necessity, hypothesis formulation) entailed in these papers was conducted during his employment as a technical expert – class III (aka clerk) at the Federal Office for Intellectual Property in Bern, Switzerland; colloquially known as the Swiss patent office.1 There, bouncing his ideas off a cubicle-mate (si vis) and former classmate, Michele Angelo Besso, an Italian engineer, Einstein found the time to further explore ideas that had taken hold during his studies at the Swiss Federal Polytechnic School. He had been a fan of his instructor, physicist Heinrich Friedrich Weber – the more notable of his two most-engaged professors at Swiss Federal Polytechnic. Weber had stated two things which left an impression on the budding physicist.2

“Unthinking respect for authority is the enemy of truth.” ~ physicist Heinrich Friedrich Weber

As well, “You are a smart boy, Einstein, a very smart boy. But you have one great fault; you do not let yourself be told anything.” quipped Weber as he scolded Einstein. His mathematics professor, Hermann Minkowski, scoffed to his peers about Einstein, relating that he found Einstein to be a “lazy dog.” In a similar vein, his instructor, physicist Jean Pernet, admonished the C-average (82% or 4.91 of 6.00 GPA) student “[I would advise that you change major to] medicine, law or philosophy rather than physics. You can do what you like. I only wish to warn you in your own interest.” Pernet’s assessment was an implication to Einstein that he did not face a bright future, should he continue his career in pursuit of physics. His resulting mild ostracism from science was of such extent that Einstein’s father later had to petition, in an April 1901 letter, for a university to hire Einstein as an instructor’s assistant. His father wrote “…his idea that he has gone off tracks with his career & is now out of touch gets more and more entrenched each day.” Unfortunately for the younger Einstein, his father’s appeal fell upon deaf ears. Or perhaps fortuitously, as Einstein finally found employment at the Swiss patent office in 1902.3

However, it was precisely this penchant for bucking standing authority, which served to produce fruit in Einstein’s eventual physics career. In particular, Einstein’s youthful foible of examining anew the traditions of physical mechanics, combined with perhaps a dose of edginess from being rejected by the institutions of physics, were brought to bear effectively in his re-assessment of absolute time – absolute space Newtonian mechanics.

Einstein was not ‘doubting’ per se – doubt is not enough in itself. Rather he executed the discipline of going back and looking – proposing an alternative to a ruling dogma based upon hard-nosed critical path induction work, and not through an agency desire to lazily pan an entire realm of developing ideas through abduction (panduction) – no, social skeptics, Einstein did not practice your form of authority-enforcing ‘doubt’. Rather, it was the opposite.

He was not doubting, rather executing work under a philosophical value-based principle called necessity (see Ethical Skepticism – Part 5 – The Real Ockham’s Razor). Einstein was by practice, an ethical skeptic.

Einstein was not lazy after all, and this was a miscall on the part of his rote-habituated instructors (one common still today). Einstein was a value economist. He applied resources into those channels for which they would provide the greatest beneficial effect. He chose to not waste his time upon repetition, memorization, rote procedure and exercises in compliance. He was the ethical C student – the person I hire before hiring any form of cheating/memorizing/imitating A or B student. And in keeping with such an ethic, Einstein proposed in 1905, 3 years into his fateful exile at the Swiss patent office, several unprecedented ideas which were subsequently experimentally verified in the ensuing years. Those included the physical basis of 3 dimensional contraction, speed and gravitational time dilation, relativistic mass, mass–energy equivalence, a universal speed limit (for matter and energy but not information or intelligence) and relativity of simultaneity.4 There has never been a time wherein I reflect upon this amazing accomplishment and lack profound wonder over its irony and requital in Einstein’s career.

The Necessary Alternative

If the antithesis of your alternative can, in one observation, serve to falsify your preferred alternative or the null, then that antithesis is the necessary alternative.

But was the particular irony inside this overthrow of Newtonian mechanics all that unexpected or unreasonable? I contend that it was not only needed, but the cascade of implications leveraged by c-invariant physics was the only pathway left for physics at that time. It was the inevitable, and necessary, alternative. The leading physicists, as a very symptom of their institutionalization, had descended into a singular dogma. That dogma held as its centerpoint, the idea that space-time was the fixed reference for all reality. Every physical event which occurred inside our realm hinged around this principle. Einstein, in addressing anew such authority based thinking, was faced with a finite and small set of alternative ideas which were intrinsically available for consideration. That is to say – the set of ideas only included around 4 primary elements, which could alternately or in combination, be assumed as fixed, dependent, or independently variable. Let’s examine the permutation potential of these four ideas: fixed space, fixed time, fixed gravity and/or fixed speed of light. Four elements. The combinations available for such a set are 14, as related by the summation of three combination functions:

C(4,1) + C(4,2) + C(4,3) = 4 + 6 + 4 = 14

What reasoned conjecture offered – given that combinations of 4 or 3 fixed elements were highly unlikely or unstable – was to serve in bounding the set of viable alternative considerations to even a lesser set than 14: maybe 6 very logical alternatives at most (the second C(4,2) function above). However, even more reductive, essentially Einstein would only need to start by selecting from one of the four base choices, as represented by the first combination function above, C(4,1). Thereafter, if he chose correctly, he could proceed onward to address the other 3 factors depending upon where the critical path led. But the first choice was critical to this process. One of the following four had to be chosen, and two were already in deontological doubt in Einstein’s mind.

•  Fixed 3 dimensional space (x, y, z)
•  Fixed time (t)
•  Fixed gravitation relative to mass (g-mass)
•  Fixed speed of light (c)
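For the reader who wishes to verify the combinatorial arithmetic, here is a minimal Python sketch (standard library only; the element labels are simply the four bullets above, and are illustrative rather than anything canonical):

```python
from math import comb

# Four candidate "fixed" elements of the physics of the day:
elements = ["space", "time", "g-mass", "c"]
n = len(elements)

# Number of ways to hold exactly 1, 2, or 3 of the four elements fixed:
counts = {k: comb(n, k) for k in (1, 2, 3)}
total = sum(counts.values())

print(counts)  # {1: 4, 2: 6, 3: 4}
print(total)   # 14
```

The C(4,2) = 6 term is the set of “maybe 6 very logical alternatives” referenced in the paragraph above.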

Ultimately then, only two choices existed if one supposes a maximum of two fixed elements, per below. Indeed this ended up being the plausible set for Einstein. The necessary alternatives, one of which had been essentially embargoed by the science authorities of the time, were a combination of two of the above four combining elements. Another combination of two was currently in force (fixed space and time).

In other words, now we have reduced the suspect set to two murder suspects – Colonel Mustard and Professor Plum, and standing dogma was dictating that only Colonel Mustard could possibly be considered as the murderer. To Einstein this was at worst, an even bet.

This is the reason why we have ethical skepticism. This condition, an oft repeated condition wherein false skepticism is applied to underpin authority based denial in an a priori context, in order to enforce one mandatory conclusion at the expense of another or all others, is a situation ripe for deposing. Einstein grasped this. The idea that space and time were fixed references was an enforced dogma on the part of those wishing to strengthen their careers in a social club called physics. Everyone was imitating everyone else, and trying to improve club ranking through such rote activity. The first two element selection, stemming of course from strong inductive work by Newton and others, was a mechanism of control called an Einfach Mechanism (see The Tower of Wrong) or

Omega Hypothesis HΩ – the answer which has become more important to protect than science itself.

•  Fixed 3 dimensional space (x, y, z)
•  Fixed time (t)

Essentially, Einstein’s most logical alternative was to assume the speed of light as fixed first. By choosing first a fixed reference for the speed of light, Einstein had journeyed down both a necessary, as well as inevitable, hypothesis reduction pathway. It was the other murder suspect in the room, and as well stood as the rebellious Embargo Hypothesis option.

Embargo Hypothesis Hξ – the option which must be forbidden at all costs and before science even begins.

•  Fixed gravitation relative to mass (g-mass)
•  Fixed speed of light (c)

But this Embargo Hypothesis was also the necessary alternative, and Einstein knew this. One can argue both sides of the contention that the ’embargo’ of these two ideas was one of agency versus mere bias. In this context and for purposes of this example, both agency and bias are to be considered the same embargo principle. In many/most arguments however, they are not the same thing.

The Necessary Alternative

/philosophy : Ockham’s Razor : Necessity/ : an alternative which has become necessary for study under Ockham’s Razor because it is one of a finite, constrained and very small set of alternative ideas intrinsically available to provide explanatory causality or criticality inside a domain of sufficient unknown. This alternative does not necessarily require inductive development, nor proof and can still serve as a placeholder construct, even under a condition of pseudo-theory. In order to mandate its introduction, all that is necessary is a reduction pathway in which mechanism can be developed as a core facet of a viable and testable hypothesis based upon its tenets.

The assertion ‘there is a God’ does not stand as the necessary alternative to the assertion ‘there is no God’. Even though the argument domain constraints are similar, these constructs cannot be developed into mechanism and testable hypothesis. So neither of those statements stands as the necessary alternative. I am sorry, but neither of those statements is one of science. They are Wittgenstein bedeutungslos – meaningless: a proposition or question which resides upon a lack of definition, or a presumed definition which contains no meaning other than in and of itself.

However, in exemplary contrast, the dilemma of whether life originated on Earth (abiogenesis) or off Earth (panspermia) does stand as a set of necessary alternatives. Even though both ideas are in their infancy, they can both ultimately be developed into mechanism and a testing critical path. The third letter of the DNA codon (see Exhibit II below) is one such test of the necessary alternatives, abiogenesis and panspermia. There is actually a third alternative as well – another Embargo Hypothesis (in addition to panspermia) in this case example – that of Intervention theory. But we shall leave that (in actuality also necessary) alternative discussion for another day, as it comes with too much baggage to be of utility inside this particular discourse.

Einstein chose well from the set of two necessary alternatives, as history proved out. But the impetus which drove the paradigm change from the standing dogma to Einstein’s favored Embargo Hypothesis might not have been as astounding a happenstance as it might appear at first blush. Einstein chose red, when everyone and their teaching assistant insisted that one must choose blue. All the ramifications of a fixed speed of light (and fixed gravitation, relative only to mass) unfolded thereafter.

Einstein was desperate for a break. He had a 50/50 shot – and he took it.

Example of Necessity: Panspermia versus Abiogenesis

An example of this condition – wherein a highly constrained set of alternatives (two in this case) inside a sufficient domain of unknown forces the condition of dual necessity – can be found inside the controversy around the third letter (base) of the DNA codon. A DNA codon is a word inside the sentence of DNA. A codon is a series of 3 nucleotides (XXX of A, C, T or G) which have a ‘definition’ corresponding to a specific protein-function, transcribed from the nucleus and decoded by the cell in its process of assembling body tissues. It is an intersection on the map of the organism. Essentially, the null hypothesis stands that the 3rd letter (nucleotide) of the codon, despite its complex and apparently systematic methodical assignment codex, is the result of natural stochastic-derivation chemical happenstance during the first 300 million years of Earth’s existence (not a long time). The idea is that life existed on a 2 letter DNA codon (XX) basis for eons, before a 3 letter (XXX) basis evolved (shown in Exhibit II below). The inductive evidence that such a 3 letter assignment codex could not plausibly have derived from a 2 letter one – given the improbability of its occurrence, and the lack of time and influencing mechanism during which that improbability could have happened – supports its also-necessary alternative.

In this circumstance, the idea that the DNA codon third digit based codex, was not a case of 300 million year fantastical and highly improbable happenstance, but rather existed inside the very first forms of life which were to evolve (arrive) on Earth, is called panspermia. The necessary alternative panspermia does not involve or hinge upon the presence of aliens planting DNA on Earth, rather that the 3 letter codon basis was ‘unprecedented by context, complex and exhibiting two additional symmetries (radial and block) on top of that’ at the beginning of life here on Earth, and therefore had to be derived from a source external to the Earth. Note, this is not the same as ‘irreducible complexity’, a weak syllogism employed to counter-argue evolution (not abiogenesis) – rather it is a case of unprecedentable complexity. A much stronger and more deductive argument. It is the necessary alternative to abiogenesis. It is science. Both alternatives are science.
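As an aside, the basic codon arithmetic cited throughout this section can be reproduced in a few lines of Python. This is a hedged illustration only, resting on standard, uncontroversial genetic-code facts (64 possible codons, the three stop codons TAA/TAG/TGA on the DNA sense strand, and the ATG methionine start):

```python
from itertools import product

# Enumerate every possible codon: 3 positions x 4 nucleotides = 4^3 = 64
bases = "ACGT"
codons = ["".join(p) for p in product(bases, repeat=3)]
print(len(codons))  # 64

# The three 'protein silence' stop codons, and the single methionine start
stops = {"TAA", "TAG", "TGA"}
start = "ATG"

coding = [c for c in codons if c not in stops]
print(len(coding))      # 61 coding slots, shared among 20 amino acids
print(start in coding)  # True
```

None of this is contested genomics – it merely grounds the XX versus XXX codon discussion that follows in concrete counts.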

The key in terms of Ockham’s Razor plurality is this:
In order to provide hypothesis which aligns abiogenesis as a sufficient explanatory basis for
what we see in the fossil record – we must dress it up so that it performs in artificial manipulation,
exactly as panspermia would perform with no manipulation at all.
This renders panspermia a legitimate and necessary hypothesis.

This circumstance elicits the contrathetic impasse, a deliberation wherein a conundrum exists solely because authority is seeking to enforce a single answer at the expense of all others, or forbid one answer at the expense of science itself. The enforced answer is the Omega Hypothesis and the forbidden alternative is the Embargo Hypothesis. And while of course abiogenesis must stand as the null hypothesis (it can be falsified but never really proven) – that does not serve to make it therefore true. Fake skeptics rarely grasp this.

Therefore, the necessary alternative – that the DNA (XXX) codex did not originate on Earth – is supported by the below petition for plurality, comprising five elements of objective inference (A – E below). This systematic codex is one which cannot possibly be influenced (as evolution is) by chemical, charge, handedness, use or employment, epigenetic or culling factors (we cannot cull XX codex organisms to make the survival group more compatible for speciating into XXX codex organisms). Nor do we possess influences which can serve to evolve the protein-based start, and silence-based stop, codons. It can only happen by accident or deliberation. This is an Einstein moment.

Omega Hypothesis HΩ – the third letter of the DNA codex evolved as a semi-useless appendage, in a single occurrence, from a 2 letter codex basis, featuring radial symmetry, featuring block assignment symmetry and molecule complexity to 2nd base synchrony, only upon Earth, in a 1.6 x 10^-15 (1 of 6 chance across a series of 31 pairings, across the potential permutations of (n – 1) proteins which could be assigned) chance, during the first 300 million years of Earth’s existence. Further the codex is exceptionally optimized for maximizing effect inside proteomic evolution. Then evolution of the codex stopped for an unknown reason and has never happened again for 3.8 billion years.

Stacking of Entities = 10 stacked critical path elements. Risk = Very High.

Embargo Hypothesis Hξ – the three letter codex basis of the DNA codon pre-existed the origination of life on Earth, arrived here preserved by a mechanism via a natural celestial means, and did not/has not evolved for the most part, save for slight third base degeneracy. Further, the codex is exceptionally optimized for maximizing effect inside proteomic evolution.

Stacking of Entities = 4 stacked critical path elements. Risk = Moderate.

Note: By the terms ‘deliberacy’ and ‘prejudice’ used within this article, I mean the ergodicity which is incumbent with the structure of the codex itself. Both how it originated and what its result was in terms of the compatibility with amino acids converting into life. There is no question of ergodicity here. The idea of ‘contrived’ on the other hand, involves a principle called agency. I am not implying agency here in this petition. A system can feature ergodicity, but not necessarily as a result of agency. To contend agency, is the essence of intervention hypotheses. To add agency would constitute stacking of entities (a lot of them too – rendering that hypothesis weaker than even abiogenesis). This according to Ockham’s Razor (the real one).

The contention that panspermia merely shifts the challenges addressed by abiogenesis ‘off-planet’ is valid; however those challenges are not salient to the critical path and incremental question at hand. It is a red herring at this point. With an ethical skeptic now understanding that abiogenesis involves a relatively high-stacked alternative versus panspermia, let’s examine the objective basis for such inference in addition to this subjective ‘stacking of entities’ skepticism surrounding the comparison.

The Case for Off-Earth Codex Condensation

Did our DNA codex originate its structure and progressions first-and-only upon Earth, or was it inherited from another external mechanism? A first problem exists of course in maintaining the code once extant. However, upon observation, a more pressing problem exists in establishing just how the code came into being in the first place. Evolved or pre-existed? ‘Pre-existed by what method of origination then?’ one who enforces an Omega Hypothesis may disdainfully pontificate. I do not have to possess an answer to that question in order to legitimize the status of this necessary alternative. To pretend an answer to that question would constitute entity stacking. To block this necessary alternative (pre-existing codex) however, based upon the rationality that it serves to imply something your club has embargoed or which you do not like personally – even if you have mild inductive support for abiogenesis – is a religion. Given that life most probably existed in the universe and in our galaxy, already well before us – it would appear to me that panspermia, the Embargo Hypothesis, is the simplest explanation, and not abiogenesis. However, five sets of more objective inference serve to make this alternative a very strong one, arguably deductive in nature, versus abiogenesis’ relatively paltry battery of evidence.

As you read Elements A – E below, ask yourself the critical path question:
~ ~ ~
‘If the precise, improbable and sophisticated Elements A – E below were required
as the functional basis of evolution before evolution could even happen, then how did they then evolve?’

A.  The earliest-use amino acids and critical functions hold the fewest coding slots, and are exclusively dependent upon only the three letter codon form. Conjecture is made that life first developed upon a 2 letter codon basis and then added a third over time. The problem with this is that our first forms of life use essentially the full array of the 3 letter dependent codex, to wit: Aspartate 2 (XXX), Lysine 2 (XXX), Asparagine 2 (XXX), Stop 3 (XXX), Methionine-Start 1 (XXX), Glutamine 2 (XXX). Glutamic acid and aspartic acid, which synthesize in the absolute earliest forms of thermophiles in particular, would have had to fight for the same 2 digit code, GA – which would have precluded the emergence of even the earliest thermal vent forms of life under a 2 letter dependent codex (XX). These amino acids or codes were mandatory for the first life under any digit-size context – and should hold the most two digit slots accordingly – yet they do not. As well, in the case where multiple codons are assigned to a single amino acid, the multiple codons are usually related. Even the most remote members of archaea, thermophilic archaea, use not only a full 3 letter codon dependent codex, but as well use proteins which reach well into both the adenine position 2 variants (XAX) and thymidine position 2 variants (XTX) groupings; ostensibly the most late-appearing sets of amino acids (see graphic in C. below and Table III at the end).5

It is interesting to also note that the three stop codons TAA-TAG-TGA all match into codex boxing with later appearing/more complex amino acid molecules, as members of the adenine position 2 variants (XAX) group. They box with the more complex and ‘later appearing’ amino acids tyrosine and tryptophan. The stop codes needed to be a GG, CC, GC, or at the very least a CT/TC XX codon basis, in order to support an extended evolutionary period under a two letter codon basis (XX). This was not the situation, as you can see in Exhibit III at the end. This would suggest that the stop codes appeared late-to-last under a classic abiogenetic evolutionary construct. Life would have had to evolve up until thermophilic archaea without stop codes in their current form. Then suddenly, life would have had to adopt a new stop codon basis (and then never make another change again in 3.6 billion years), changing horses in midstream. This previous XX codon form of life should be observable in our paleo record. But it is not.

Moreover, the use of the two digit codex is regarded by much of genomics as a degeneracy, ‘third base degeneracy’, and not an artifact of evolution.6 Finally, the codon ATA should in a certain number of instances equate to a start code, since it would have an evolutionary two digit legacy – yet it is never used to encode methionine. This is incompatible with the idea that methionine once employed a two digit AT code. Likewise, tyrosine and the non-amino stop code would have been in conflict over the two digit code TA. In fact, overall there should exist a relationship between the arrival of an amino acid’s use in Earth life and the number of slots it occupies in the codex – and there is not, neither to the positive slope nor the negative.7
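The slot counts cited in A. above (Lysine 2, Asparagine 2, Stop 3, Methionine-Start 1, and so on) can be checked against the standard genetic code. A partial, illustrative Python sketch follows – the dictionary below covers only a handful of the 64 assignments, chosen to match the examples in this section:

```python
from collections import defaultdict

# Partial standard-genetic-code table (DNA alphabet) - illustrative only;
# the full codex (Exhibit III) holds all 64 codons.
codon_table = {
    "ATG": "Met",                                   # the lone start codon
    "TGG": "Trp",                                   # the other 1-slot amino acid
    "AAA": "Lys", "AAG": "Lys",                     # A/G tandem third base
    "AAT": "Asn", "AAC": "Asn",                     # C/T tandem third base
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # 4-fold degenerate
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",    # protein silence
}

# Count codon slots per amino acid ('degeneracy')
slots = defaultdict(set)
for codon, aa in codon_table.items():
    slots[aa].add(codon)

for aa in ("Met", "Trp", "Lys", "Asn", "Gly", "Stop"):
    print(aa, len(slots[aa]))  # Met 1, Trp 1, Lys 2, Asn 2, Gly 4, Stop 3
```

Extending the dictionary to all 64 assignments reproduces the full degeneracy ranking discussed in this section.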

One can broach the fact that protein reassignments were very possible and could explain away the apparent midstream introduction of a XXX codon dependency across all observable life. But then one must explain why it never ‘speciated’ again over 3.6 billion years, along with the apparent absence of XX codon life in the paleo record. This chasm in such a construct is critical and must be accommodated before abiogenesis can be fully developed as a hypothesis. In comparison, panspermia possesses no such critical obstacle (note, this is not a ‘gap’, as it relates to the critical path of the alternative and not merely circumstantial inductive inference).

Codon Radial Symmetry

B.  Evolution of the codex would necessarily occur from amino absences rather than positives. Of particular note is the secondary map function of the start and stop codons. Notice that the start of a DNA sentence begins with a specific protein (the polar charge molecule methionine-ATG). The end of a DNA sequence however, consists of no protein coding whatsoever (see TAA, TAG and TGA). In other words the DNA sentence begins with the same note every time, and ends with protein silence. tRNA evolved a way to accommodate the need for a positive, through the employment of proline-CCA during the protein assembly process. This is how a musical score works – it starts with a note, say an A440 tuning one, and ends with the silence dictated by the conductor’s wand. This is deliberacy of an empty set, as opposed to the stochasticity of positive notes – another appearance of the start code methionine could have sufficed as a positive stop-and-start code instead, and such a positive stop mechanism would succeed much better inside an evolutionary context. Why would an absence evolve into a stop code for transfer RNA? It could not, as ‘absence’ contains too much noise – it occurs at points other than simply a stop condition. The problem exists in that there is no way for an organism to survive, adapt, cull or evolve based upon its use of an empty set (protein silence). Mistakes would be amplified under such an environment. Evolution depends intrinsically upon logical positives only (nucleotides, mutations, death) – not empty sets.

C.  Features triple-symmetrical assignment, linear robustness and ergodicity, along with a lack of both evolution and deconstructive chaos. This is NOT the same set of conditions as exists inside evolution, even though it may appear as such to a layman. This set of codex assignments features six principal challenges (C., and C. 1, 2a, 2b, 3, and 4 below). Specifically,

• radial assignment symmetry (B. Codon Radial Symmetry chart above),
• thymidine and adenine (XTX, XAX) second base preference for specific chemistries,
• synchrony with molecule complexity (C. Codon versus Molecule Complexity 64 Slots graphic to the right),
• block symmetry (C. Codon Second Base Block Symmetry table below) around the second digit (base), and
• ergodicity, despite a lack of chemical feedback proximity or ability for a codon base to attract a specific molecule chemical profile or moiety, and
• lack of precedent from which to leverage.

These oddities could not ‘evolve’, as they have no basis to evolve from. The structure and assignment logic of the codex itself precludes the viability of a two base XX codex. Evolution, by definition, is a precedent entity progressing gradually (or sporadically) into another new entity. It thrives upon deconstructive chaos and culling to produce speciation. There was no precedent entity in the case of the DNA stop codon nor its XXX codex. As well, at any time the stop codons could have been adopted under the umbrella of a valid protein, rendering two SuperKingdoms of life extant on Earth – and that should have happened. It should have happened many times over, and at any time in our early history (in Archaea). Yet it did not.8 An asteroid strike and extinction event would not serve to explain the linearity. Evolution is not linear. We should have a number of DNA stop and start based variants of life available to examine (just as we have with evolution based mechanisms). But we do not. In fact, as you can see in the chart to the right (derived from Exhibit III at the end), there exist four challenges to a purely abiogenetic classic evolution construct:

1. An original symmetry to the assignment of codon hierarchies (codex), such that each quadrant of the assignment chart of 64 slots mirrors the opposing quadrant in an ordinal discipline (see Codon Radial Symmetry charts in B. above to the right).

Codon Second Base Block Symmetry

2. The second character in the codon dictates (see chart in B. Codon Radial Symmetry chart above) what was possible with the third character. In other words

a. all the thymidine position 2 variants (XTX) had only nitrite molecules (NO2) assigned to them (marked in blue in the chart in C. Codon versus Molecule Complexity 64 Slots to the upper right, and in Exhibit III at the end – from which the graph is derived), while the more complex nitrous amino acids were all assigned to more complex oversteps in codex groups (denoted by the # Oversteps line in the Codon versus Molecule Complexity 64 Slots chart to the upper right).

In addition,

b. all adenine position 2 variants (XAX) were designated for multi-use 3rd character codons, all cytidine position 2 variants (XCX) were designated for single use 3rd character codons, while guanine (XGX) and thymidine (XTX) both were split 50/50 and in the same symmetrical patterning (see Codon Second Base Block Symmetry table to the right).

3.  There exists a solid relationship, methodical in its application, between amino acid molecule nucleon count and assignment grouping by the second digit of the DNA codon, in rank of increasing degeneracy. Second letter codon usages were apportioned to amino acids as they became more and more complex, until T and A had to be used because the naming convention was being exceeded (see chart in C., Codon versus Molecule Complexity 64 Slots above to the right, as well as Exhibit III at the end). After this was completed, one more use of G was required to add 6 slots for arginine, and then the table of 60 amino acids was appended by one more, tryptophan (the most complex of all the amino acid molecules – top right of chart in C. Codon versus Molecule Complexity 64 Slots above, or the end slots in Exhibit III at the end), and the 3 stop codes thereafter. Very simple. Very methodical. Much akin to an IT developer’s subroutine log – which matures over the course of discovery inside its master application.

4.  Ergodicity. Prejudice… Order, featuring radial symmetry (B. Codon Radial Symmetry chart), synchrony with molecule complexity (C. Codon versus Molecule Complexity 64 Slots graphic) and block symmetry (C. last image, Codon Second Base Block Symmetry table) around the second digit. The problem is that there is no way a natural process could detect the name/features of the base/molecule/sequence as a means to implant such order and symmetry into the codex, three times – much less evolve it in such short order, without speciation, in the first place.

Since the DNA chemistry itself is separated by two chemical critical path interventions, how would the chemistry of thymine for instance (the blue block in Exhibit III below) exclusively attract the nitric acid isomer of each amino acid? And why only the nitric acid isomers with more complex molecule bases? First, the DNA base 2 is nowhere physically near the chemical in question; it is only a LOGICAL association, not a chemical one, so it cannot contain a feedback or association loop. Second, there is no difference chemically between C2H5NO2 and C5H11NO2 – the NO2 is the active moiety. So there should not have been a synchrony progression (C.3. above), even if there were direct chemical contact between the amino acid and the second base of the codon. So the patterns happen as a result of name only. One would have to know the name of the codon by its second digit (base), or the chemical formula for the amino acid, and employ that higher knowledge to make these assignments.

Finally, this order/symmetry has not changed since the code was first ‘introduced’, and certainly is not the product of stochastic arrival – as a sufficiently functional-but-less-orderly code would have evolved many times over (as is the practice of evolution) and been struck into an alternative codex well before (several billion years before) this beautiful symmetry could ever be attained.

We claim that evolution served to produce the codex, yet the codex bears the absolute signs of having had no evolution in its structure. We cannot selectively apply evolution to the codex – it must either feature evolutionary earmarks, or not be an evolved code. The mechanisms of evolution cannot become a special pleading football applied only when we need it, to enforce conformance – because in that case we will only ever find out, what we already know. It becomes no better argument philosophically than ‘God did it’.

D.  Related codons represent related amino acids. For example, a mutation of CTT to ATT (see table in C. above) results in a relatively benign replacement of leucine with isoleucine. So the selection of the CT and AT prefixes between leucine and isoleucine was done early, deliberately and in finality – based upon a rational constraint set (in the example case, two nitrite molecule suffixed proteins) and not eons of trial and error.9 Since the assignment of proteins below is not partitioned based upon any physical characteristic of the involved molecule, there is no mechanism but deliberacy which could dictate a correspondence between codon relationships and amino acid relationships.10
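The CTT → ATT example can be made concrete with a two-entry lookup (standard code, DNA alphabet; a sketch only – the `mutate` helper is hypothetical, introduced purely for illustration):

```python
# Standard-code assignments for the two codons in the example (DNA alphabet)
table = {"CTT": "Leu", "ATT": "Ile"}

def mutate(codon, position, new_base):
    """Return the codon with one base substituted at the given position."""
    c = list(codon)
    c[position] = new_base
    return "".join(c)

mutant = mutate("CTT", 0, "A")            # first-base point mutation
print(table["CTT"], "->", table[mutant])  # Leu -> Ile : a benign swap
```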

E.  Statistically impossible codex, not just improbable. Finally, it is not simply the elegant symmetry of the codex which is perplexing; the usage contexts identified in items A. through D. above also allow one to infer (deductively, even) that the codex, its precedent, provenance and structure are difficult to impossible to accommodate inside even the most contorted construct of abiogenesis. Observe the A-G vs C-T tandem relationship between lysine and asparagine for instance. This elegant pattern of discipline repeats through the entire codex. This is the question asked by Eugene V. Koonin and Artem S. Novozhilov at the National Center for Biotechnology Information in Bethesda, Maryland in their study Origin and Evolution of the Genetic Code: The Universal Enigma (see graphic to the right, extracted from that study).11 This serious challenge is near to falsifying in nature, and cannot be dismissed by simple hand waving. Take some time (weeks or months, not just seconds) to examine the DNA codex in Exhibits I thru III below, the three tables and charts in B. and C. above, as well as the study from which the graphic to the right is extracted, and see if you do not agree. This argument does not suffer the vulnerability of ‘creationist’ arguments, so don’t play that memorized card – as Ockham’s Razor has been surpassed for this necessary alternative.

The hypothesis that the codex for DNA originated elsewhere bears specificity, definition and testable mechanism.
It bears less stacking, gapping and risk as compared to abiogenesis.
It is science. It is the necessary alternative.

Assuming that life just won the 1 in 1.6 × 10¹⁵ lottery (the Omega Hypothesis) – again, and quickly, near to the first time it even bought a lottery ticket – has become the old-hat fallback explanation inside evolution; one which taxes a skeptic’s tolerance for explanations which begin to sound a lot like pseudo-theory (one idea used to explain every problem at first broach). However, this paradox of what we observe to be the nature of the codex, and its incompatibility with abiogenesis, involves an impossibly stacked assumption set to attempt to explain away based solely upon an appeal to plenitude error – a fallback similar in employment to the creationist’s appeal to God. The chance that the codex evolved a LUCA (Last Universal Common Ancestor) by culled stochastics alone in the first 300 million years of Earth’s existence,12 is remote both from the perspective of the time involved (it happened very quickly) and of statistical unlikelihood – but as well from the perspective that the codex is also ‘an optimal of optimates’ – in other words, it is not only functional, but smart too. Several million merely functional-but-uglier code variants would have sufficed to do the job for evolution. So this raises the question: why did we get a smart/organized codex (see Table II below and the radial codon graphic in B. above) and not simply a sufficiently functional but chaotic one (which would itself also be unlikely)? Many social skeptics wishing to enforce a nihilistic religious view of life miss that we are stacking deliberacy on top of a remote, infinitesimally small chance happenstance. Their habit is to ignore such risk chains, and then point their finger at creationists as being ‘irrational’ as a distraction.

The codex exhibits an unprecedentable, static, patterned format (the empty set standing as a positive logical entity), exposed to the same deconstructive vulnerability which happens to everything else, but which chose not to happen in this one instance. In other words, evolution actually chose to NOT happen in the DNA codex – i.e. deliberacy. The codex, and quod erat demonstrandum the code itself, came from elsewhere. I do not have to explain the elsewhere, but merely provide the basis for understanding that the code did not originate here. That is the only question on our scientific plate at the moment.

Exhibits I, II and III

Exhibit I below shows the compressed 64-slot utilization and its efficiency and symmetry. 61 slots are coded for proteins and three are not – they are used as stop codes (highlighted in yellow). There are eight synonym-based code groups (proteins), and twelve non-synonym code groups (proteins). Note that at any given time, small evolutionary fluctuations overlapping into the stop codes would render the code useless as the basis for life. So the code had to be frozen from the very start, or never work at all. Either that, or assign tryptophan as a dedicated stop code and mirror to methionine, and make the 5th code group a synonym for glutamine or aspartic acid.
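For those who wish to verify the 61 + 3 slot arithmetic for themselves, a minimal Python sketch follows. It simply enumerates the 64 possible three-letter codons and partitions out the three standard stop codons (TAA, TAG, TGA in DNA notation):

```python
from itertools import product

bases = "TCAG"
# all 4^3 = 64 possible three-letter codon slots
codons = ["".join(c) for c in product(bases, repeat=3)]

stop_codons = {"TAA", "TAG", "TGA"}  # the three standard stop codes
coding = [c for c in codons if c not in stop_codons]

print(len(codons))       # → 64 total slots
print(len(coding))       # → 61 slots coded for proteins
print(len(stop_codons))  # → 3 stop codes
```

The 64/61/3 partition is fixed by the combinatorics of a three-letter code over four bases, which is the point the paragraph above turns upon.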

Exhibit I – Tandem Symmetry Table

Exhibit II below expands upon the breakout of this symmetry by coded protein, chemical characteristics and secondary mapping if applicable.

Exhibit II – Expanded Tandem Symmetry and Mapping

Finally, below we relate the 64 codon slot assignments, along with the coded amino acid, the complexity of that molecule, and its use in thermophilic archaea. Here it is clear that even our first forms of life, constrained to environments in which they had the greatest possibility of even occurring, employed the full XXX codex (three-letter codon). While it is reasonable to propose the alternative conjecture (and indeed plurality exists here) that the chart suggests a two-letter basis as the original codex, life’s critical dependency upon the full codon here is very apparent.

Exhibit III – 61 + 3 Stop Codon Assignment versus Molecule Complexity and
Use in Thermophilic Archaea (see chart in C. above)

Recognizing the legitimacy of the necessary alternative – one which was both purposely neglected by science, and yet offers the only viable alternative to standing and celebrated club dogma – this is a process of real science. Developing mechanism for such an alternative is the antithesis of religious activity. Maturing the necessary alternative into hypothesis, is the heart and soul of science.

Blocking such activity is the charter and role of social skepticism. Holding such obfuscating agency accountable is the function of ethical skepticism.

epoché vanguards gnosis


How to MLA cite this blog post =>

The Ethical Skeptic, “Embargo of The Necessary Alternative is Not Science” The Ethical Skeptic, WordPress, 24 Nov 2018; Web,


Exotic Nature of FRB 121102 Burst Congeries

It is clear from the data that a MIGO grouping exists inside the 93 bursts of FRB 121102, representing a consistent and distinct profile from their comparable Primary grouping burst twins in terms of frequency, signal duration and overall resulting Planck dilation – yet in stark contrast, featuring negligible impact in terms of signal arrival timing relative to c.
These fast radio bursts appear to bear the profile of a collision between two very massive objects – the smaller object moving at a significant percentage of the speed of light around the larger – signalling the universe in desperation as it descends hopelessly into the dark Schwarzschild sea. Two black holes tripping the light fantastic among the stars.

Now I am not a physicist, nor an astrophysicist. I want to make that clear. I do not claim the moniker of scientist. Although I have been president of a research lab, and led it through the process of groundbreaking scientific discovery, and although I have employed or had in my reporting structure many scientists and engineers, I myself cannot claim such a title. Despite involvement inside complex decisions of science and technology on a daily basis, I have not earned the hash marks, degrees and dissertation necessary in passing industry qualification as a scientist.1 This was purposeful. I am a business man, economist, analyst, designer, technologist, strategist, leader and advocate for those who suffer at the hands of poorly developed science. Therefore I am technically only a skeptic. I critique the philosophy, structure and meta-application of science – flagging the circumstance wherein its deployment serves to negatively impact its stakeholders. I write technical reports and specifications for the employment of technology, and determine for its stakeholders, how the technology or science involved will serve to impact their lives. Now this is a profession inside which I am enormously qualified and maintain an arduous decades-long track record of qualification and success.

But during my youth I was a scientist at heart. I devoured every Carl Sagan, Stephen Jay Gould and Isaac Asimov non-fiction book which my small town library was able to get. In my free time I studied the sky with my Meade telescope and dabbled in my Gilbert Chemcraft junior chemistry lab. I burned, dissolved and emergency-buried a lot of volatile stuff. A freshly bottom-lit (not top-lit) Bunsen Burner will fire a penny through a ceiling tile at 1/4 the muzzle velocity of a .22 caliber standard load round. Many exciting things can be done with potassium. After my instructors realized that I was not stupid, rather just bored, and saw that my science aptitude scores were at a college level, while in the 5th Grade, I was advanced two years early through my science and math curricula; earning a top award for a science paper my senior year of high school. I entered a nationally ranked top-3 nuclear science undergraduate program, but was swayed in my career when the Dean of my school awarded me an A+++ on my paper on Ethics of Technology and Science, the highest grade he had ever given.  It was then that I knew there was more to science than simply donning a lab coat, initiating exoentropy and taking the measurements. The question was not one of how to do science, but what one could do with it. Or should do with it. For benefit or for harm, and how to discern the magnitude and difference.

As a skeptic, never rest on your laurels and self-congratulate over your callow wielding of doubt.
As a skeptic, you must go and actually look. You must think incrementally, eschew pat answers, ask probative questions and then risk hard work.
Anything short of this is worse than the process of never having doubted to begin with.

Throughout the time since, I have maintained a fascination with astrophysics. I have read Kip S. Thorne’s Black Holes & Time Warps probably 3 to 8 times. I am a regular consuming fan of Deutsch, Tipler, Wolfram, and Greene. Wolfram’s A New Kind of Science and Thorne’s Black Holes & Time Warps reside in my library on the quick-reference shelf along with the Webster’s Dictionary, Oxford Handbook of Philosophy and Science, Newton’s The Principia, Lewin’s Genes IX, The Handbook of Chemistry and Physics, Whitman’s Leaves of Grass and the New American Standard Bible. My thirst for the clues which nature offers us through the wisdom of astrophysics has never been slaked.

Fast Radio Burst 121102

So when science first started detecting Fast Radio Bursts (the subtle grey curved line inside the graphic to the right), this was a subject which fascinated me no end. Not in the sense that an extraterrestrial civilization might be the source of such quirky electromagnetic chirps (so far they bear a number of ‘natural’ profiles to be sure), but rather a fascination toward the clues which the phenomenon could serve to offer regarding the nature and structure of our cosmos. As a quick summary, a Fast Radio Burst is a very short (20 to 100 milliseconds ‘long’ in dispersion arc and .75 to 3.5 millisecond barycentric duration pulse) and narrow band (3 GigaHertz ‘tall’) flash of electromagnetic C-Band microwave energy. It is akin to a bird chirping a short and very precise musical note, or the emanation a bat might make in order to echo-locate. The key interesting feature of such a short duration burst of electromagnetic energy resides in its characteristic ‘dispersion’. Dispersion is the differential delay between the higher frequencies of EM energy in the signal and its lower frequencies. In our cosmos, lower frequency radiation is retarded more readily by the intervening medium and arrives at its destination somewhat after the higher frequencies inside the same exact signal. The lower frequencies lose the race against the higher ones. In the graphic to the right, one can observe that the higher frequencies at the top of the graph, say in the 7.5 GHz range, arrive first (motion of the EM signal is right to left), before the lower frequencies inside the single FRB burst do – despite both frequency sets having originated at the same exact instant, far far away. The magnitude of this dispersion allows an astrophysicist to estimate how far that signal has traveled through space-time (or gravity), by measuring the separation between the arrival of the higher and lower frequencies inside a fast radio burst.2
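The differential delay described above can be sketched numerically. A minimal Python example follows; the 4.149 constant is the standard cold-plasma dispersion constant (in ms · GHz² per pc cm⁻³), and the dispersion measure of roughly 560 pc cm⁻³ is the value commonly reported for FRB 121102 – both are assumptions for illustration, not drawn from this article's database:

```python
def dispersion_delay_ms(dm, nu_lo_ghz, nu_hi_ghz):
    # Cold-plasma dispersion: lower frequencies arrive later, with the
    # delay scaling as the inverse square of frequency.
    K = 4.149  # ms GHz^2 / (pc cm^-3), standard dispersion constant
    return K * dm * (nu_lo_ghz ** -2 - nu_hi_ghz ** -2)

# DM ~ 560 pc cm^-3 (assumed, published value for FRB 121102);
# band edges of 7.5 and 5.4 GHz taken from the discussion above
delay = dispersion_delay_ms(560.0, 5.4, 7.5)
print(round(delay, 1))  # → 38.4 (ms)
```

A delay of a few tens of milliseconds between the band edges is consistent with the 20 to 100 millisecond dispersion arcs described in the summary above.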

What results is an arc, characteristic of a warped electromagnetic signal. On a graph indexing an ordinate of signal frequency (GHz) against an abscissa of time (seconds), the result is an exponential relationship. Inside the graphic immediately below in red field background, one can observe (again, pretend that the EM signal is moving from right to left) the higher 7.8 GHz EM C-band microwave radiation (at the top of the figure) arrive at the receiver on Earth sooner than do the 5.4 GHz frequencies (at the bottom of the figure) – the delay accelerating as a simple square of effect upon the lower frequencies toward the bottom of the graph (which is why the signal is curved in its dispersion differential). The rate of dispersion shown in the graphics above and below equates to around 2.5 billion light years of travel through space-time and/or gravitational fields. The arc immediately below in particular was extracted from the FRB 121102 fusillade; marked as FRB 121102-1.

Problem Statement

But there were two peculiarities regarding FRB 121102 which piqued my interest above and beyond the media generated discourse around the other several dozen individual FRB’s we have found scattered around the cosmos. First, in contrast with the other FRB’s we have detected, this FRB burst comprised a fusillade of 93 individual signals which arrived in quick succession (seconds to hours apart). Second, the signals arrived in an array of differing dispersion and frequency profiles. Of course, obtaining a repeating FRB source was unprecedented to begin with and of key interest in its own right; however, the fact that all of FRB 121102’s dispersion and frequency profiles did not match was a mystery of even greater proportion. You see, given their rapid fire succession and common location in a dwarf galaxy 2.5 billion light years away, the signals should be assumed to originate from a common source – and if they did, then all of the signals should bear the same frequency and dispersion profiles (within a given measurement error precision and accuracy). This was not the case with the FRB 121102 signal burst group.


FRB 121102 burst signals featured significantly varying frequency and dispersion profiles, despite having emanated from a common source and having commensurately traversed the same exact space-time conditions.

So I set about the task of examining this odd stream of signals, in order to hypothesize a mechanism which potentially could impart such a characteristic pattern. The study from which I drew my data was a paper submitted on 9 Sep 2018 by Zhang et al., entitled Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach.3 The two graphics to the right (labeled 1* and 1) were extracted from the study, representing burst number 1, which was the signature burst for the group. It bore the strongest flux amplitude, as well as the signature duration of 1.57 milliseconds barycentric width and dispersion of .21 ∂v/∂t. The study was a report on the detection of 93 total pulses “from the repeating fast radio burst FRB 121102 in Breakthrough Listen C-band (4-8 GHz) observations at the Green Bank Telescope. The pulses [last 72 of them] were found with a convolutional neural network in data taken on August 26, 2017, where 21 bursts had been previously detected.”4

The study did not offer up its database of signals, so I downloaded the imagery for each of the 93 signals and conducted measures of each signal’s frequency band and time dilation directly from the signal itself. I assembled a database (see bottom of article) of start time, end time, time measure, graph time, pulse width, signal to noise, v-peak, v-min, ∂v in GHz, ∂t in seconds, and then finally the dispersion measure ∂v/∂t (= ∂GHz/∂ms), signal flux in milli-Janskys and barycentric pulse width. I then conducted analytics and intelligence development upon the array of data which resulted. What followed stands not as a dilettante ‘proof’, but rather as an observation-intelligence-necessity petition for plurality, or for assistance in hypothesis mechanism development (Steps 1 thru 5 of the Scientific Method).
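The per-burst slope measure described above can be sketched in a few lines. Note that the 12 ms arc duration below is back-solved from the signature .21 ∂v/∂t figure and is an assumption for illustration only, not a measurement taken from the study imagery:

```python
def dispersion_slope(v_peak_ghz, v_min_ghz, t_start_s, t_end_s):
    """Crude per-burst dispersion slope dv/dt in GHz per ms,
    taken straight off the measured band extremes and arc timing."""
    dv = v_peak_ghz - v_min_ghz          # frequency band height, GHz
    dt_ms = (t_end_s - t_start_s) * 1e3  # dispersion arc length, ms
    return dv / dt_ms

# signature burst FRB 121102-1: 7.8 to 5.3 GHz band;
# the ~12 ms arc is an assumed/illustrative value
slope = dispersion_slope(7.8, 5.3, 0.000, 0.012)
print(round(slope, 2))  # → 0.21 GHz/ms, the group's signature measure
```

One slope value per burst, computed this way across all 93 signals, is what feeds the clustering and Poisson profiling later in the article.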

Observation Reduction and Methodology

Discrete Integrity of Signal

Intelligence 1 – The signals exhibited discrete frequency banding with a v-max beginning at 7.8 GHz and ranging all the way to 5.0 GHz.
Intelligence 2 – The single trend in relationship of v-max to v-min suggests with high confidence that the original signal was emitted from a single source.
Intelligence 3 – A single influencing factor served to additionally alter v-max and v-min by lowering them both in about half the signals, but not disturbing this 1:1 relationship.
Intelligence 4 – The source of the v-max cascading and mimicked dispersion of the .32 ∂v/∂t group, appears to suggest the intervention of a discrete, powerful and singular gravitational influence nearby the source of the signal – either through direct Schwarzschild time dilation or by inducing an orbit in the emission body featuring an exotically large speed.

The bursts exhibited direct proportional and 1:1 consistency in the level of frequency relationship between each v-max and v-min measure, confirming that the signal was of a discrete-banding nature and not a broad-band radio burst (such as might be emitted by a quasar). This is not an occurrence often seen in nature and I personally cannot fathom a physical circumstance, even under the high gravity or energy physics of a black hole event horizon, in which such a discrete duration (1 ms) and frequency band (2.5 GHz) of energy could be generated by a natural phenomenon. But neither am I the fount of all knowledge. This, while odd, is certainly not enough to start adding more exotic explanations into the fray just yet (Ockham’s Razor plurality). It merely suggests there is an area of exotic physics in which we have some discoveries yet to make. It inductively weakens our confidence in our standing related provisional explanations.

In the graphic to the right, the v-max index is along the abscissa and the v-min measurement is along the ordinate axis (y-axis). The 45 degree trend line suggests a direct and 1 to 1 relationship between the two, indicating a fixed interval from top frequency to bottom frequency. The dispersion of the scatter plot down and to the right most likely comprises imprecision in measurement along with the degradation of the signal to noise ratio as many of the pulses trended into lower frequencies – thereby making the lower end (most attenuated) of the pulse much harder to measure as compared to the higher end. Nonetheless, a terminal high and low end frequency was able to be established as a characteristic profile, confirmed by the group’s signature signal #1 (121102-1 was the strongest and most coherent of the fusillade) = 7.8 – 5.3 GHz.

Of added note is the fact that this one-to-one simple relationship between the v-max and v-min extremes indicates strongly that all 93 signals were emitted by the same source. This was corroborated later in examining the arrival time curve, which appears to exhibit a consistent one-factor logarithmic-formulaic pattern. In addition, lower and lower v-max frequencies were detected in the grouping, which appeared to be either a characteristic of the emitting source, or some kind of influencing or intervening source of gravity. This influence is substantiated by the linear trend discipline which exists even in the case where v-max is altered significantly (the lower left end of the graph). This added dispersion or red shift could be the result of a gravitational body or of a high speed orbit. Both of these will be evaluated herein. Given that the attenuation patterns of both the lower and higher v-max emissions were similar, this suggests that the influencing factor was not a gas cloud – which would have caused enormous chaos in both the v-max and v-min patterns, producing a more circular scatter plot in the above graphic. In addition, a gas or lone plasma cloud could not have served to introduce this observed dispersion distortion – one mimicking, in the .32 ∂v/∂t group (below) of signals, an added 1.5 billion light years of travel for the lower v-max signals (when we know they were emitted at the same time from the same source). This scatter plot and dispersion profile are in no way compatible with the intervention of a gas cloud, or a large bank of stars for that matter.
The source of the v-max cascading and mimicked dispersion of the .32 ∂v/∂t group, appears to suggest the intervention of a discrete, powerful and singular gravitational influence nearby the source of the signal – a gravitational body which is directly dilating the EM emission, or is causing an orbiting body emitting the bursts to move alternately toward and away from us as the observer.

Natural Log Decay Timing Profile and Gapping

Intelligence 5 – The arrival timing of each burst fell cleanly into a formulaic pattern of a y = ln x natural logarithmic basis, with no characteristic Shapiro time delay observed. This corroborates the linear v-max/v-min relationship above, and supports the hypothesis that the signals all emanated from a single, natural source. As well, the peak signal flux amplitudes decayed by a logarithmic function, yet sustained a base rate which persisted until the signal stopped.
Intelligence 6 – The single source which imbued the characteristic v-max cascading and mimicked dispersion of the .32 ∂v/∂t group, did not appreciably alter the speed of the signals themselves relative to space-time or c. So each of those data points was kept as original signal data.

The bursts’ times of arrival (TOA in the chart at the bottom) appeared to take a confirmatory distribution, conforming to a natural logarithm curve y = ln x. A classic textbook natural log curve is overlain across the time of arrival plot for the 93 burst group, in purple in the chart to the right. The logarithm trend line is placed only to highlight the circumstance that this burst progression indeed follows a natural log distribution in time. The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is the irrational constant 2.718281828… ad infinitum. This does not mean that aliens have sent us the precise constant e as a message; rather, this pattern occurs in a number of systems observed in nature, especially where the decay rate of energy is involved – for instance, the decay of a radioactive isotope. This is a very large hint that the source of fast radio bursts is a natural one.
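The claim that the arrival times conform to a natural log curve can be checked with an ordinary least-squares fit of t = a·ln(n) + b against burst index n. The sketch below uses synthetic data (the coefficients and noise level are invented for illustration; the real test would use the TOA column of the database at the bottom of the article):

```python
import math
import random

def fit_log(ts):
    # Ordinary least-squares fit of t_n = a*ln(n) + b,
    # where n is the 1-based burst index.
    n = len(ts)
    xs = [math.log(i) for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ts) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ts))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# synthetic 93-burst arrival series following a ln(n) profile + noise
random.seed(0)
true_a = 900.0  # illustrative decay coefficient, seconds
ts = [true_a * math.log(i) + random.gauss(0, 5) for i in range(1, 94)]

a, b = fit_log(ts)
print(round(a))  # recovered slope, close to the true value of 900
```

A tight fit of this form (small residuals against the ln curve) is what the purple overlay in the chart is asserting visually.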

In addition, the conformance discipline of this curve (with some exceptions to be examined below) hints that all the observations, despite their degraded signal to noise ratio in many cases, are valid observations of confirmed signal. None should be ‘tossed out’ as discrete entities. However, this does not preclude our ability to group and profile the burst arrivals. This conclusion was essential to this analysis.

Of primary importance however, is the inference which can be drawn from this curve: that the single source which imbued the characteristic v-max cascading and mimicked dispersion of the .32 ∂v/∂t group did not appreciably alter the speed of the signals themselves relative to space-time or c. This is addressed again later in Intelligence 10 inside this article. It is an important observation – as one must grapple in this circumstance with the power/energy of an intervening body which can cause 1.5 billion light years worth of pseudo-dispersion in an electromagnetic wave, yet not alter its speed in the least.

Apparent Burst Cluster Scatter Plot Groupings

Intelligence 7 – The burst fusillade bore more diversity in dispersion than anticipated, but appeared to exhibit a Poisson μ at .21 ∂v/∂t.

The peak of dispersion occurrence rate versus the signal to noise ratio of the 93 measures resided at a dispersion of .21 ∂v/∂t. This measure was both the most commonly featured dispersion measure in the group, and as well was the dispersion measure for the strongest signal to noise ratio signals of the group. For instance FRB 121102-1, cited earlier in this article, featured a .21 ∂v/∂t as well as a very high signal to noise ratio. It was the first signal detected and stands as the signature burst of the group. The cluster of 93 signals skewed to longer dispersion tails upon an apparent Poisson distribution, where the accuracy of measurement of the signals themselves imparted a +/- 10% measurement tolerance. Two suppositions came from this data: 1. That lower dispersion measures, which were fewer in number, were the result of antenna detection errors primarily, and 2. That a characteristic dispersion for the entire group, given a single common source and instance of signal, could be assigned at .21 ∂v/∂t.
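Assigning a characteristic group dispersion reduces to a binned-mode estimate over the 93 slope measures. A sketch follows; the slope values are synthetic and illustrative only, shaped to mimic the skewed, two-lobed distribution described in this article:

```python
from collections import Counter

def modal_dispersion(slopes, bin_width=0.01):
    # Bin the per-burst dv/dt measures (bin width ~ measurement
    # tolerance) and return the center of the most populated bin.
    binned = Counter(round(s / bin_width) for s in slopes)
    peak_bin, _count = binned.most_common(1)[0]
    return peak_bin * bin_width

# invented measures, skewed toward longer dispersion tails
slopes = ([0.21] * 30 + [0.20] * 8 + [0.22] * 10 +
          [0.32] * 25 + [0.33] * 9 + [0.25] * 11)

print(round(modal_dispersion(slopes), 2))  # → 0.21
```

The bin width stands in for the +/- 10% measurement tolerance cited above; a coarser bin would merge the .21 and .32 lobes and destroy the very distinction the next section examines.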

Suggested Intervention of a MIGO Body

Intelligence 8 – Dispersion measures were chaotic, however exhibited a two-cluster profiling around .21 and .32 ∂v/∂t. Variation which was not stochastic in origin and exhibited bilateral symmetry between the two groups, as if bearing the gradient dynamics of an orbit pathway – approaching and regressing cyclically.

There appeared inside the data a clustering of two distinct dispersion profiles, which significantly exceeded both the database detection sensitivity and the measurement error tolerance. These profiles clustered around .21 ∂v/∂t and .32 ∂v/∂t. The bursts which composed the .32 ∂v/∂t grouping tended to

• be slightly delayed in arrival time (see graph to right),
• be weaker in signal to noise ratio (.34 versus .22), and
• feature greater Poisson degrees of freedom, as compared to the .21 ∂v/∂t group.

This second grouping of bursts appeared to me to be a kind of weakened version of the Primary bursts (or maybe an echo? – though given the y = ln x conformance, this is not likely), or perhaps a delayed-warped-duplicate of what I call the ‘Primary Cluster’ bursts (in blue); perhaps the type of bent EM signal whose trajectory was impacted by an intervening large gravitational mass – perhaps a black hole. Very much like the refracted lensing which occurs in visual astronomy, this EM light appeared to be a replication of the Primary Cluster signals – red shifted – a separate vector of EM energy which was diverted from its original path by a Massive Intervening Gravitational Object (MIGO), and now toward the Earth, to join alongside their Primary and direct-path signal twins (orange versus their blue twins in the graphic to the right). It is not that each signal arrived at Earth twice – rather, there were two types of signal in general – Primary and MIGO. These MIGO bursts are flagged by orange color in the graphic to the right. They feature a consistent enough pattern to ascribe some characteristic measures to the group as a whole, which can be contrasted with the Primary Cluster equivalents. In this analysis we examine both constructs: that the MIGO object is directly Schwarzschild time dilating the MIGO signal group – OR – alternately is inducing a high speed orbit in a second body, which would explain both the Primary and MIGO clusters as well.
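Operationally, the Primary/MIGO partition amounts to a nearest-centroid assignment around the .21 and .32 profiles. A naive sketch follows; the centroid values come from the discussion above, while the sample slopes are invented for illustration (a real clustering pass over the database would fit the centroids rather than assume them):

```python
def assign_clusters(slopes, centers=(0.21, 0.32)):
    # Assign each burst to the Primary (.21) or MIGO (.32) profile
    # by nearest dispersion centroid - a naive stand-in for clustering.
    labels = []
    for s in slopes:
        primary = abs(s - centers[0]) <= abs(s - centers[1])
        labels.append("Primary" if primary else "MIGO")
    return labels

print(assign_clusters([0.20, 0.22, 0.31, 0.33]))
# → ['Primary', 'Primary', 'MIGO', 'MIGO']
```

Bursts falling near the midpoint (~.26) would be the ambiguous cases; the article's claim is that the two lobes are separated well beyond the measurement tolerance, so such cases are few.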

However, even at this early point in our study, the bilateral symmetry and even balance and consistency between the two burst classes hints strongly at an orbiting body approaching and regressing, and exhibiting the incumbent Doppler effect differential.

FRB Source Orbiting the MIGO?

Intelligence 9 – MIGO Cluster bursts featured consistent differentiation from the Primary Cluster bursts – and both appear to alternate in contiguous groupings, as if produced as a signature of a body in orbit around another.
Intelligence 10 – The Planck based red shift and time-width displacement (Schwarzschild time dilation in both observations) far exceeded the displacement of the twin signals in relative elapsed time of arrival (Shapiro time delay, a measure which was almost negligible) – This clue is critical in deducing a solution to the source of the signal, at the end of this article.

So I took a representative signal (not an average; rather, one bearing good signal to noise and measurable parameters) from both the MIGO and Primary burst cluster groups, and developed a consistent profile for each EM signal group, which removed the effect of antenna detection and measurement errors. Those two consistent EM burst profiles are depicted in the graphic to the right. The blue curve represents the dispersion, in the same format as FRB 121102-1 is depicted above, characteristic of the Primary Cluster of bursts. The orange curve represents the dispersion characteristic of the MIGO Cluster of bursts. It is clear from the data that the MIGO Cluster of bursts represents a consistent and distinct profile from the Primary Cluster burst group in terms of the following:

  • reduced v-peak from 7.8 to 6.5 GHz
  • reduced v-min from 5.3 GHz to 4.8 GHz
  • reduced ∂v from a 2.5 GHz band to a 1.7 GHz band
  • increased signal duration ∂t from 60 milliseconds to 80 milliseconds
  • imbued Planck dilation red shift contrast on the order of .32 versus .21 ∂v/∂t
  • the relative arrival time ΔT differential was on the order of


Please note that it is possible that the MIGO is part of the formula as to how a fast radio burst is generated in the first place. In other words, two black holes.

The MIGO Exotic Profile – Two Massive Object Dynamics

Intelligence 11 – There exist 16 discrete gaps and 17 ‘orbits’ in the decay rate of the FRB source as compared to a y = ln x analog. These appear to be introduced by the influence of a massive external body to the source of the bursts.
Intelligence 12 – The burst .32 and .21 ∂v/∂t groups and burst trends appear to feature a positional relationship with these intervals of minor occulting, as if a lensing or possibly rotational effect was being imbued by an orbiting mass. Both will be examined.

In the analysis to the right, we examine a magnified view of the y = ln x arrival timing curve (arrivals 1 – 48) identified in Intelligence 5 above. Of significance in the time series of this set of early arrivals is the presence of static gaps in progression – flatter periods in the chart to the right, of which there are 7 shown here, and 16 or so in the overall 93 burst data set. The first four gaps are highlighted by a horizontal orange bar in the chart. The gaps of arrivals are in seconds of arrival observation. The strongest signals in the .21 ∂v/∂t group tend to appear just before the first occulting; however, this relationship decays after burst 25 or so. Of interest is to note that one quadruple/triplicate burst occurred right at the inception of occulting number 3, an occulting which then lasted for 121 seconds. These decay gaps tended to distend actual burst timing slightly versus that of a true natural logarithmic y = ln x curve (in purple above and in Intelligences 13 thru 16 below). This flat-decay-gapping is highlighted by a 57 minute gap in the arrivals between bursts 82 and 83 (denoted in orange in the graphic above – also see TOA in chart at the bottom of this article).
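The gap detection described above reduces to flagging consecutive arrival-time differences which exceed a threshold (119 seconds, the shortest occulting break cited in this article). A sketch with an invented TOA series follows:

```python
def find_gaps(arrival_times_s, min_gap_s=119.0):
    # Flag flat periods (candidate occultings) where consecutive
    # bursts are separated by more than min_gap_s seconds.
    gaps = []
    for i in range(1, len(arrival_times_s)):
        dt = arrival_times_s[i] - arrival_times_s[i - 1]
        if dt >= min_gap_s:
            gaps.append((i, dt))  # (index of burst after the gap, gap length)
    return gaps

# synthetic TOA series with one 121 s occulting between bursts 3 and 4
toa = [0.0, 12.0, 20.0, 141.0, 150.0]
print(find_gaps(toa))  # → [(3, 121.0)]
```

Run against the real 93-burst TOA column, this routine would enumerate the 16 or so occultings discussed here.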

It is also of interest that the exception to the natural logarithmic discipline of the purple curve above occurs only as a result of, and commensurate with, each occulting – as if the occulting member were momentarily delaying the decay of the emanation source (an orbit artifact in this case?) in some fashion during the short perisingular (née perigee) pass – the decay source thereafter briefly resuming its natural decay rate after a 119 to 198 second break early on, with much longer breaks as the process moved along. I am establishing mechanism here, projecting that during the perisingular pass between the two objects, a state of connection is established such that the bursts are quenched in some fashion. Of course once the merge is complete, the bursts would then be quenched in finality.

Given that it is doubtful that the Roche limit is surpassed for these two bodies during aposingular orbit progression (or possibly even during the entire early orbit, up to the intersection of event horizons), it is possible that some artifact is created between them which only exists at a given/formulaic proportion of the Roche limit, the distance, and the two masses.

Examine, if you will, the first three cycles of the orbiting body in the chart above, which occur over about 1100 seconds. If we use the assumption of a 1,000,000 mile average elliptical orbit radius, this equates to a speed of roughly 17,100 miles per second, or 9.2% of the speed of light. Some kind of relativistic energy shedding may be at play in the genesis of these bursts.
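As a quick sanity check of this arithmetic (a sketch only, using the assumed 1,000,000 mile circular-equivalent radius and ~1100 second window stated above):

```python
import math

# Sanity check of the figures above (assumed values from the article):
# three orbital cycles in ~1100 seconds at an assumed 1,000,000-mile
# average orbital radius, treated as circular for a rough estimate.
radius_mi = 1_000_000        # assumed average orbital radius (miles)
cycles = 3                   # first three cycles in the chart
elapsed_s = 1100             # elapsed time (seconds)
c_mi_s = 186_282             # speed of light (miles per second)

distance_mi = cycles * 2 * math.pi * radius_mi   # total path traveled
speed_mi_s = distance_mi / elapsed_s             # ≈ 17,136 mi/s
fraction_of_c = speed_mi_s / c_mi_s              # ≈ 0.092, i.e. ~9.2% of c

print(round(speed_mi_s), round(fraction_of_c * 100, 1))
```

The result lands within rounding of the ~17,100 mi/s and 9.2% of c quoted in the text.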

This same repeating occult influence can be observed in the larger scale time of arrival curve below (Intelligence 13 thru 16); wherein the 57 minute delay induced a complete cessation of the decay of the emanating source of the later group of FRB signals. This is highly exotic and suggests both a rapid orbit as well as an elliptic eccentricity inside such an orbit, culminating in a final merge of the two bodies.

Orbital Decay and Merge Dynamics

Intelligence 13 – The burst times of arrival appear to be occulted on a semi-regular basis (16 times).
Intelligence 14 – The only exception to the natural logarithmic discipline of this curve occurs with each occulting – as if the occulting member is actually momentarily delaying the natural log decay of the emanation source in some fashion.
Intelligence 15 – Because of the high speed and elliptical nature of the suggested object orbits, this set of curve metrics suggests that both the emanation source and the intervening gravitational source are massive gravitational bodies.
Intelligence 16 – The decay gapping appears to exhibit an early elliptical orbit profile, and then progress steadily into a faster and faster orbit, then mass merge profile, over the period of 5 to 7 hours. It appears as if the emanation source itself is the smaller of the two bodies.

As we saw in Intelligences 11 and 12 above, buried within this curve are several interventions in the rate of decay in the arrival timing, highlighted by the 16 horizontal orange markings in the chart to the right. One can observe that the actual decay took longer than its natural logarithm analog in purple. This suggests an occulting by a larger body of some type repeatedly moving in front of the burst source, then possibly merging with it briefly during the cessations in burst activity (which, as you will notice, are technically 'suspensions in decay'), and then finally permanently at the end of the curve.

The distention of the continued logarithmic curve thereafter in time suggests a body which is so close to the source that it is altering the very decay physics of the emanation source itself, such as in the consumption of a neutron star (or something denser) by a black hole. However, this is very preliminary and only mildly inductive. The occurrence of the 57 minute break runs in contrast with the breaks/gaps in decay which occur earlier in the burst decay process. Those appeared to be more orbit related; however, as the orbit of the smaller FRB source body decays over time, one can see the gapping becoming more and more frequent until the burst 82 (57 minute) merge event. Thereafter, bursts became less and less common until there were none at all. What is depicted inside the graphic to the right in black are three concept orbit states which relate to the various burst signatures along the 5 hour decay log.

This suggests that a repeating FRB is only therefore a ‘multiple FRB’; not sustainable in reality, and not ‘repeating’ in the Search for Extraterrestrial Intelligence (SETI) sense. My projection is that we will hear no more noise from FRB 121102 in the future.

The occultings suggested by the data are complex, but not so complex as to be outside the possible range of Relativistic or even classic orbital dynamics. The relatively level state of the decay process during the gaps (flat orange lines in the graph to the right) could stem from a contribution of exotic material mass between the MIGO and emanating body, or as well be simply the result of a delay in the arrival of those bursts by their having to be refracted around the occulting MIGO body as it passes in front of the emanation source. It is tempting to jump to the conclusion that the latter explanation here fits the data well, as indeed it appears to do inside bursts 1 – 48. However, as seen in the curve above, a later 57 minute gap in burst activity results in a depression of the decay rate for a substantial period of time, lending more to the mass contribution explanation than the occult-refractory explanation. Overall, a disintegrating orbit scenario, with Doppler effect constituting the main mechanism underlying the differential red shift in the MIGO group, is a superior explanation.

The Implications of This Observation Set

Objective Implication

The exotic profiling of the MIGO cluster along with the arrival gapping in energetic decay appears to have been generated by the orbit of the FRB 121102 emission source around a massive intervening gravitational object. The MIGO suggested above would have had to be very close to the radio burst emission point in space and very tight along the line of sight with Earth during occultations. This is because the ΔT(2) to ΔT(1) differential in the above equation proved to be very slight to nothing on the epochal scale of time involved. The images to the right and below are speculative, but portray a highly eccentric orbit dynamic between two black holes which have just initiated collision. Such a collision would be necessary to account for the high speed orbital occulting displayed in the Intelligence 13 – 16 graphic.

This inductively inferred scenario would account for the three critical path intelligence components:

  1.  Erratic occult gapping of bursts
  2.  Added Planck dilation of .32 ∂v/∂t refracted bursts
  3.  The monumental delay in the natural decay of the emanation source during occult gaps

But it would not account for the lack of a Shapiro time delay observation (Intelligence 5). This is deductive in its critical path inference.

The burst dynamics, as well as the origin of FRB’s themselves, could be the result of the collision of two black holes – wherein a special condition exists which creates in the smaller (orbiting body) of the two, or in an intermediate exotic plasma or yet unidentified space-time condition, a brilliant 1 millisecond burst of narrow-band decay energy (say the momentary collapse or appearance of a neutron body releasing its quark binding force). In the case of FRB 121102, that special condition existed long enough to exhibit a natural energy decay profile, momentarily and erratically interrupted by the intervention of the MIGO black hole (most likely an occulting). I have developed a concept illustration above in an attempt to depict this dance between two black holes.

It is very possible that both scenarios are occurring – wherein there is an alternation between exotic elliptical gapping and mass merges at play. In fact, as you observe the gapping inside the arrival profile versus a pure logarithmic decay curve, you will notice increasingly large gaps in the decay time, which shift from Doppler red/blue shift dynamics and into mass contribution dynamics in their nature.

This suggests an artifact of the elliptical orbital collision and then mass merging of two gigantic massive bodies over a 5 – 7 hour period, as the genesis of Fast Radio Burst 121102.

Regardless, what this intelligence also suggests is that both the emanation source AND the intervening body are BOTH of a massive nature. And the ensuing dance energy is stimulating repeated brief 1 ms eruptions of electromagnetic energy, sparkling like a strobe in an erstwhile disco of black holes tripping the light fantastic.

Deductive Inference: We Found Schwarzschild but Not Shapiro – And You Need Both

Finally, a deductive inference regarding the FRB emission structure can be discerned by examining the implications of the General Theory of Relativity on this intelligence set – the problem with Intelligence 10 above is that it violates my understanding of electromagnetic energy propagation and Planck red shift. The Planck dilation of the MIGO .32 ∂v/∂t bursts featured an enormous impact in terms of such dilation – 2.5 GHz and 20 milliseconds, roughly equal in magnitude to each other, resulting in an overall .11 ∂v/∂t additional Planck dilation. This equates to an added 1.5 billion years of light travel imbued into only a subset (half?) of these signals. Signals which we know emanated from the same source at the same time. However the delay in time of arrival was essentially negligible – on the order of an estimated 120 seconds at most, over a base of 2.5 billion years (7.9 x 10^16 seconds, a fractional delay of roughly 1.5 x 10^-15). This is essentially a zero impact on the speed of this signal’s propagation versus the speed of light, c. In a Newtonian sense, the negligible delay or decay gaps might be explainable simply by the longer physical path that particular light vector took relative to a line of sight path to Earth. The problem is that this negligible difference violates the Shapiro time delay which should have been embedded into the .32 ∂v/∂t group of bursts, according to the formula5
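The formula image referenced here did not survive into this text; the one-way Shapiro delay for a signal grazing a mass M is conventionally written (a standard-form reconstruction, with r_e and r_s the receiver and source distances from the mass, and b the impact parameter):

```latex
\Delta t \;\approx\; \frac{2GM}{c^{3}}\,\ln\!\left(\frac{4\,r_{e}\,r_{s}}{b^{2}}\right)
```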

A case where M is rather large. The conflict resides in reconciling the rather null presence of any observed Shapiro time delay, with the observed monumental effect of the ostensible Schwarzschild time dilation metric in the .32 ∂v/∂t group, which is governed by the formula6
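This formula image is likewise missing; the Schwarzschild gravitational time dilation is conventionally written (standard-form reconstruction, with t_0 the proper time near the mass and t_f the coordinate time far from it):

```latex
t_{0} \;=\; t_{f}\,\sqrt{1-\frac{2GM}{rc^{2}}}
```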

M is exceedingly large in both cases. So what gives?

There should have been both a Shapiro time delay and a Schwarzschild time dilation inside the signals – and we apparently only got one of them at best. Therefore the lensing explanation for the MIGO Cluster group fails. We are left with a Relativistic Doppler red/blue shift as the remaining mechanism.

High Speed Orbital Doppler Red/Blue Shift Differential – We Got Bursts Coming and Going

Another possibility exists however, which potentially resolves this paradox: both signals may already reflect the Shapiro time delay, and there may in actuality be no differential Schwarzschild time dilation either, as this factor is also equal in both the Primary and MIGO burst groups; rather, the MIGO group's red shifted profile was simply generated by a relativistic Doppler shift derived from the speed of the source away from us, relative to the speed of light. In other words the source was alternating in its motion toward and away from Earth as it emitted this series of bursts. This would be according to the formula7
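The referenced formula image is absent here; the relativistic Doppler shift for a receding source is conventionally written (standard-form reconstruction), affecting frequency and wave duration reciprocally:

```latex
f_{\text{obs}} \;=\; f_{\text{src}}\sqrt{\frac{1-v/c}{1+v/c}}\,,
\qquad
t_{\text{obs}} \;=\; t_{\text{src}}\sqrt{\frac{1+v/c}{1-v/c}}
```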

Where v would be the velocity of the emitting body away from Earth during the red shifted emissions, affecting both t (wave duration) and f (wave frequency). To the credit of this idea, the emissions did come in profile-contiguous groups early in the series (Intelligence 12), as this construct might suggest. As well, the two sets of burst groupings exhibited bilateral symmetry around their common average, which is what one would expect in orbit cycle Doppler dynamics. But the emitting body would also have had to be traveling around its gravitational host (required in this case to allow for alternations between the Primary and MIGO blue/red shift profiles) at a significant fraction of the speed of light. So let's examine this alternative. Recall that we observed 17 orbits (16 occultations) in about 5 hours. At a radius of 1 million miles between the black holes, this would represent an orbital velocity given by
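The missing formula here would presumably be the circular-orbit velocity relation, reconstructed from the surrounding text's symbols (C cycles of radius r completed over duration P):

```latex
v \;=\; \frac{2\pi r\,C}{P}
```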

or 5934 miles per second, where C is the number of cycles undergone (17) and P is the duration of the merge (5 hours, or 18,000 seconds). That equates to a v of 3.2% of the speed of light on average for the 17 cycles. Enough to do the job on the Hubble (λ) differential required, especially given that we must divide the .11 ∂v/∂t by a factor of two, since we are receding in one burst group and approaching in the other. Principally, once noise and error are removed, we arguably are left with only these two distinct red and blue shifted burst profiles.
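The arithmetic can be checked in a few lines (a sketch only, using the assumed 1,000,000 mile radius and 5 hour merge duration from the text):

```python
import math

# Rough check of the figures above: 17 orbital cycles over an assumed
# 5-hour merge window, at an assumed 1,000,000-mile orbital radius.
radius_mi = 1_000_000   # assumed radius between the bodies (miles)
cycles = 17             # C: number of cycles (16 occultations observed)
period_s = 5 * 3600     # P: assumed merge duration, 5 hours in seconds
c_mi_s = 186_282        # speed of light (miles per second)

v = 2 * math.pi * radius_mi * cycles / period_s   # ≈ 5934 mi/s
beta = v / c_mi_s                                  # ≈ 0.032, ~3.2% of c

print(round(v), round(beta * 100, 1))
```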

So it is possible, even likely, that the orbital velocity of the smaller black hole (the emission source), orbiting at ~1 to 4% of the speed of light around a larger black hole, could explain the differential red shift between the Primary and MIGO fast radio burst groups, while at the same time allowing the FRB bursts to arrive in a clean natural log time distribution.

What remains to be explained is the mechanism inside the smaller black hole (or between it and the MIGO body) which allows for a natural logarithmic decaying multiple set of 2.5 GHz narrow band and discrete 1 ms time truncated electromagnetic frequency emissions.

It is possible that the very act of accelerating to a fraction of the speed of light, on the part of a smaller black hole approaching a larger one, serves to produce disruptions in relativistic physics such that discrete quanta of spacetime are ejected from the smaller black hole at the signature frequency of that hole. In a direct collision, this happens only once. In an indirect collision, we now know it can happen 93 times.

Mystery Solved?

Finally, an intervening plasma or gas cloud could not possibly have caused this particular set of observations either. So if the blue/red shift orbit explanation above is not valid, then a dilemma exists, to my understanding: a Planck dilation of extraordinary magnitude in its impact to a burst signal was matched to a rather non-remarkable impact to the speed of that electromagnetic signal, on the part of the same intervening massive object(s), over the same time and space vectoring. And if valid in structure and in my understanding, this bears profound implications for our current paradigm of inflationary theory.

Essentially, if an electromagnetic signal can be red shifted through the presence of gravity-time alone (Schwarzschild time dilation) in this manner and not be simply dispersed in its lower frequencies, yet its speed relative to c not be appreciably altered (no Shapiro time delay), then there is no need for galaxies to be ‘hurtling apart on a galactic scale’ (actually space-time itself inflating) to stand as the explanatory mechanism for an observable red shift in EM energy transiting our universe. The red shift per hoc aditum would be simply an artifact of EM energy having traversed time and gravitational fields: a 2 dimensional Planck dilation (G,t), as opposed to a 3 dimensional space inflation (l,w,h). In other words, space is not inflating (Scale Invariant Cosmological Model); rather, gravity is serving to dilate time (t). Under this line of reasoning, a gravity-time dilation alone would cause the red shift differential between these two sets of signals.

To be fair, such an alternative (time dilation) model of the red shifted universe has been proposed recently by University of Paris astrophysicist Jean-Pierre Petit, but has so far received little attention from the scientific community at large. Time dilation models more than adequately explain the Hubble red shift, and in some circumstances do a better job of explaining it.8 Does the FRB 121102 data support the Scale Invariant Cosmological model?

However, Ockham’s Razor suggests that since a less feature-stacked mechanism is now viable inside a classic and well supported model, there is no need to introduce the Scale Invariant Cosmological Model explanation just yet. Although there is inductive support for such an idea, the current model carries with it an explanation sufficient to reject pursuing it at this moment.

Unless I am mistaken in all of this of course. One of the tenets of ethical skepticism is to ask the question ‘If I were mistaken, would I even know?’ And in this case, I would not know, and accordingly should ask for help. Any physicists out there who understand this better than do I, and can provide me with the understanding of such a mechanism which serves to reconcile this observation back into alignment with standing universe inflation and red shift theory – please drop me a note and correct or enlighten me. It would be much appreciated.

The database I assembled and used for this analysis resides below. Click on the image to expand it to full size or save it. The Primary Cluster leading signals are in green shading, while the MIGO Cluster signals are shaded in orange.

epoché vanguards gnosis



The Ethical Skeptic, “Exotic Nature of FRB 121102 Burst Congery”; The Ethical Skeptic, WordPress, 9 Nov 2018; Web,

November 9, 2018 | Ethical Skepticism

How to Detect an Evil Person

How does one go about detecting an evil person? How does one distinguish the true sociopath, from the honest person who is simply mistaken or having a bad day? These are the hints which should tip one off to the reality that the person they are dealing with is mired in habits of evil.

As my readers know, I seldom reference the Bible inside this blog. I view the Bible not as a single document, but rather as a series of thoughts put down by men over the ages. Some authors were sincere, while unfortunately some editors were not entirely so. Although I have read the assembly of writings from cover to cover probably six times, I find the collection not to be dramatically superior to other philosophical works of men. In fact, I find the Bible lacking in many essential lessons of life and spirituality.

However there is one verse which I have heeded most of the days of my life, Proverbs 23:6-7. A verse which pertains to the conduct and habits of an evil person – one who cares about only a single thing on this planet: themselves.

6 Do not engage with him who has a selfish intent,
    neither seek his approval;
for as he thinks in his heart,
    so is he.
“Come, let us discuss together” he says to you,
    but his heart is not with you.

Proverbs 23: 6-7 (Modern English Version – Transliterated)

Below is a list of the hints which should tip one off to the reality that the person one is engaging with, regardless of their credential or notoriety, is mired in habitually evil practice.

  • They enjoy watching others be in pain
  • Their first instinct is to seek dominance or control
  • They do not engage to discuss the topic at hand; their focus is upon you
  • Every act is a manipulation towards an end; nothing is derived from objective neutrality
  • They conceal their inner nature and put on a veneer of virtue (the opposite of ethics)
  • They will habitually misrepresent what you say, often just to see if you will catch the straw man
  • They leave you with an uncomfortable, dank feeling
  • They show no remorse nor respect for boundaries
  • They find entertainment in highlighting others’ errors or insulting them
  • They don’t take responsibility for their own actions
  • To them, everyone they target is a ‘narcissist’
  • Their complete absence of any reference to self betrays an enormous conceit
  • They mock the misfortune of others
  • Their few friends don’t really know them, and are exactly like them
  • They are/were once cruel to animals or had ‘psychological issues’
  • They will obsess over made-up quo facto malo offenses their victims have ‘committed’
  • They derive great joy in the authoritative cleverness of a lie; the larger and more intimidating the lie, the better
  • They fail to associate their poor state in life with their dark habits
  • They find joy in making people feel stupid
  • They don’t bear an ability to introspect
  • Aside from memorized pablum, they don’t really get philosophy/ethics
  • Even though they constantly fixate upon or maintain the focus upon others, everything is in fact about them

It does not matter how accidentally correct they are – if they bear these traits, even the things they are correct about endure merely as an act they are putting on for your manipulation. They have no interest in truth or fellowship. Do not be fooled, their heart is not with you.

The Ethical Skeptic, “How to Detect an Evil Person”; The Ethical Skeptic, WordPress, 15 Oct 2020; Web,

October 15, 2020 | Tradecraft SSkepticism

Post Stockholm Syndrome

More than simply developing an affinity for their captor, victims of Post Stockholm Syndrome begin to develop or are coerced into a state of amnesia about their being held hostage in the first place. Under such a condition, maintenance of this anosognosia on the part of the hostages becomes the preeminent priority.

Stockholm Syndrome as a social term was first used after four hostages were taken during a 1973 bank robbery in Stockholm, Sweden. In the subsequent prosecution, the hostages defended their captors after being released and would not agree to testify in court against them. Stockholm syndrome is identified through the affinity that hostages may often develop towards their captors, in contraposition to the fear and disdain which an onlooker might feel towards those same captors.1

Accordingly, we hold this current social definition of Stockholm Syndrome:

Stockholm Syndrome

/psychology : human interaction : willful blindness/ : a condition in which hostages develop a psychological alliance with their captors during or after captivity. Emotional bonds may be formed, between captor and captives, during intimate or extensive time together; bonds which however are generally considered irrational in light of the danger or risk endured by the victims of such captivity.

Wikipedia: Stockholm syndrome

Now for a moment, let’s expand the circumstance of captor and hostage under a Stockholm Syndrome such that it encompasses a substantially and sufficiently longer period of time. A circumstance wherein, because of generational turnover, pluralistic ignorance or mental impairment/injury on the part of the hostages, a milieu of amnesia has begun to set in. An amnesia which blends together, both neutral to positive mythological sentiments toward the captors, as well as a comprehensive forgetting of the circumstances of illegitimate captivity in the first place. Such a context broaches what I call Post Stockholm Syndrome.

Post Stockholm Syndrome

/Philosophy : ethics : malevolence/ : a condition wherein Stockholm Syndrome hostages, more than simply developing an affinity for their captor, begin to develop or are coerced into a state of amnesia about their being held hostage in the first place. Under such a condition, maintenance of this anosognosia on the part of the hostages, through information/science embargo, becomes the preeminent priority.

Captor actions enforcing such amnesia under a Post Stockholm Syndrome circumstance may also be falsely spun by the captor as enforcing ‘The Prime Directive’ – especially if the captor has masqueraded illegally in the role as a deity, sexual/breeding/genetic tyrant, slavemaster, governing body or other form of abusive godship over the hostages during its activities as captor.

In such a circumstance of detection risk, one wherein the captor could be held accountable for its malevolent actions by an outside Authority bearing punitive power – concealment of past activity, a lack of captor detectability or apparent presence, along with a pervasive and enforced collective-amnesia on the part of the hostages are all of paramount importance. Means of enforcing this may include:

  • Dividing hostages into entertaining and constantly warring factions
  • Denial of essential energy, health and development technology, save for that which will maintain firm order and war footing
  • Appointing governing authorities among the hostages who are complicit in an information/science embargo, along with the resulting amnesia and conflict
  • Developing ‘Samson Option’ destructive devices that can serve to obliterate everything if the captors are detected/threatened by Outsiders
  • Supremacy of ignorance and idea embargo, in the name of holy or rational thought.
  • Development of embargo-compatible pervasive, holy and club-enforced theologies and atheologies concerning the state of the hostages, fully explaining the circumstances in which they find themselves as being either ‘just’ or completely by chance
  • Posing captor activities as random events, or those of a higher enlightened being, or as being derived through an ‘unfathomable love’ for the hostages
  • Posing captor activities as just deserts for some offense or group/original sin the hostages have been tricked into believing
  • Establishing the false mythology that if the captor is displaced and punished/banished, the captives are destined to receive that very same punishment/banishment for their wrongdoings as well
  • Afflicting the hostages with shortened lives bearing copious amounts of cerebral impairment, substance abuse, unchecked corruption, mandatory labor, disease, starvation and suffering – in order to keep their overall mental acuity, effectiveness and awareness low
  • Captor enjoying all of the above as a type of power, belonging, loosh-as-a-drug or false clout which may be used as currency to purchase allegiance to its scheme.

Such a captor may face the inevitability of having to become ‘one-and-the-same’ or of the same genetic fabric as its hostages, in order to evade the eventual enforcement of the penalty of Law for such malevolent actions. In this way, captors hope to circumvent the Law and skillfully confiscate that which they sought through the hostage-taking to begin with.

After all, if the hostages indeed bear the same fate as the captors, why then would outside Authorities hesitate in their intervention at all?

A captive hostage circumstance is always based upon lies – no matter how popular, loving or righteous those lies might be codified – no matter how random, natural or scientific they may be framed.

Just as with mathematics, the ethical skeptic can extend the logical reach and calculus of philosophy, to stand inside theoretical circumstances which are not readily apparent to the average philosopher. Does such a circumstance regarding Post Stockholm Syndrome exist for humanity on Earth today? I do not know the answer to that question from an epistemological standpoint – however, just as with most principles of humanity, while I can’t define its presence, I know when I am in it.

The Ethical Skeptic, “Post Stockholm Syndrome”; The Ethical Skeptic, WordPress, 8 Mar 2020; Web,

March 8, 2020 | Ethical Skepticism
