The Ethical Skeptic

Challenging Agency of Pseudo-Skepticism & Cultivated Ignorance

Inflection Point Theory and the Dynamic of The Cheat

A mafia or cartel need not cheat at every level nor in every facet of its play. The majority of the time such collapsed-market mechanisms operate out of necessity under the integrity of discipline. After all, without standardized praxis, we all fail.
However, in order to effect a desired outcome, an intervening agency may need only exploit specific indigo point moments in certain influential yet surreptitious ways. Inflection point theory is the discipline which allows the astute value chain strategist to sense the weather of opportunity and calamity – and moreover, spot the subtle methodology of The Cheat.

Note: because of the surprising and keen interest around this article by various groups, including rather vociferous Oakland Raider fans, I have expanded that section of the article for more clarity/depth on what I have observed; and further, added six excerpt links in the appropriate sections laying out the backup analysis for those wishing to review the data. One can scroll directly to that section of the article at about 45% through its essay length.

Inflection Point Theory

In one of my strategy firms, over decades of conducting numerous trade, infrastructure and market strategies, I had the good fortune to work as colleague with one of our Executive Vice Presidents, a Harvard/MIT graduate whose specialty focused in and around inflection point theory. He adeptly grasped this species of analytical approach, and instructed me as to how it could be applied to develop brand, markets, infrastructure, inventories and even corporate focus or culture. Inflection point theory, in a nutshell, is the sub-discipline of value chain analytics or strategy (my expertise) in which particular focus is given to those nodes, transactions or constraints which cause the entire value chain to swing wildly (whipsaw) in its outcome (ergodicity). The point of inflection at which such a signal is typically detected, or hopefully even anticipated, is called an indigo point.

Columbia Business School strategic advisor Rita McGrath defines an inflection point as “that single point in time when everything changes irrevocably. Disruption is an outcome of an inflection point.”1 While this is not entirely incorrect, in my experience, once an inflection point has been reached the disruption has actually already taken place (see the oil rig example offered below), and an eruptive period of change has just been precipitated. It is one thing to be adept with the buzzwords surrounding inflection point theory, and another thing altogether to have held hands with those CEOs and executive teams while they rode out its dynamic, time and time again.

The savvy quietly analyzes the hurricane before its landfall. The expert makes much noise about it thereafter.
The savvy perceives the interleaving of elemental dynamics inside an industry. The expert dazzles himself with academic mathematical equations.
The savvy is employed on the team which is at risk. The expert brings self-attention and bears no skin in the game.

Such is not a retrospective science in the least. Nonetheless, adept understanding of business inflection point theory does in a manner allow one to ‘see around corners’, as McGrath aptly puts it.

Those who ignore inflection points are destined to fail their clients, if not themselves; left wondering why such resulting calamity could have happened in such short order – or even denying that it has occurred at all, through Nelsonian knowledge. Those who adeptly observe an indigo point signal may succeed, not simply through offering a better product or service, but rather through the act of rendering their organization robust to concavity (Black Swans) and exposed to convexity (White Swans). Conversely, under a risk strategy, an inflection-point-savvy company may revise their rollout of a technology to be stakeholder-impact resistant under conditions of Risk Horizon Types I and II, and rapid (speed to margin, not just speed for speed’s sake) under a confirmed absence of both risk types.2

As an example, in this chart data from an earlier blog post one can observe the disastrous net impact (either social perception, real or both) of the Centers for Disease Control’s ignoring a very obvious indigo pause-point regarding the dynamic between aggressive vaccine schedule escalations and changes in diagnostic protocol doctrine. Were the CDC my client, I would have advised them in advance to halt deployment at point Indigo, and wait for three years of observation before doing anything new.

An indigo point is that point at which one should ethically, at the very least, plan to take pause and watch for any new trends or unanticipated consequences in their industry/market/discipline – to make ready for a change in the wind. No science is 100% comprehensive nor 100% perfect – and it is foolishness to pretend that such confidence in deployment exists a priori. This is the necessary ethic of technology strategy, even when only addressed as a tactic of precaution. When one is responsible for at-risk stakeholders, stockholders, clients or employee families, to ignore such inflection points borders on criminally stupid activity.

Much of my career has been wound up in helping clients and nations address such daunting decision factors – When do we roll out a technology and how far? When do we pause and monitor results, and how do we do this? What quality control measures need to be in place? What agency, bias or entities may serve to impact the success of the technology or our specific implementation of it? etc. In the end, inflection point theory allows the professional to construct effective definitions, useful in spotting cartels, cabals and mafias. Skills which have turned out to be of help in my years conducting national infrastructure strategy as well. Later in this article, we will outline three cases where such inflection point ignorance is not simply a case of epistemological stupidity, but rather planned maliciousness. In the end, ethically when large groups of stakeholders are at risk, inflection point ignorance and maliciousness become indistinguishable traits.

Inflection Point

/philosophy : science : maths/philosophy : neural or dynamic change/ : inflection points are the points along a continuous mathematical function wherein the curvature changes its sign or there is a change in the underlying differential equation or its neural constants/constraints. In a market, it is the point at which a signal is given for a potential or even likely momentum shift away from that market’s most recent trend, range or dynamic.

An inflection point is the point at which one anticipates being able to thereafter analytically observe a change which has already occurred.
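For the quantitatively inclined, the definition above can be operationalized in a few lines. The following is a minimal sketch of my own (not drawn from the article's analyses), which flags candidate inflection points in a sampled series by looking for a sign change in the discrete second difference – the numerical analogue of curvature flipping sign.

```python
# A minimal sketch (not from the original article) of how one might flag candidate
# inflection points in a sampled series: look for a sign change in the discrete
# second difference, i.e. where the curvature of the series flips.

from typing import List

def candidate_inflection_points(y: List[float]) -> List[int]:
    """Return indices where the discrete curvature of y changes sign."""
    # Second difference approximates the second derivative of the series.
    d2 = [y[i + 1] - 2 * y[i] + y[i - 1] for i in range(1, len(y) - 1)]
    flags = []
    for i in range(1, len(d2)):
        if d2[i - 1] * d2[i] < 0:   # curvature flipped sign between these samples
            flags.append(i + 1)     # index of the first sample after the flip
    return flags

# Example: a cubic-like series whose continuous inflection sits near t = 2.17.
series = [t ** 3 - 6.5 * t ** 2 + 4 * t for t in range(8)]
print(candidate_inflection_points(series))  # -> [3], the first sample after the flip
```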

Inflection Point Theory (Indigo Point Dynamics)

/philosophy : science : philosophy : value chain theory : inflection point theory/ : the value chain theory which focuses upon the ergodicity entailed by a neural or dynamic change in constraints – a critical but not sufficient condition or event, which nonetheless serves to impart a desired shift in the underlying dynamic inside an asymmetric, price-taking or competitive system. The point of inflection is often called an indigo point (I). Inside a system which they do not control (price taking), successful players will want to be exposed to convexity and robust to concavity at an inflection point. Conversely, under a risk horizon, the inflection-point-savvy company may revise their rollout of a technology to be stakeholder-impact resistant under conditions of Risk Horizon Types I and II, and rapid under a confirmed absence of both risk types.

An Example: In March of 2016, monthly high capacity crude oil extraction rig counts by oil formation had all begun to trend in synchronous patterns (see chart below, extracted from University of New Mexico research data).3 This sympathetic and stark trend suggested a neural change in the dynamic driving oil rig counts inside New Mexico oil basin operations. An external factor was imbuing a higher sensitivity contribution to rig count dynamics than were normal market forces/chaos. This suggested not only that a change in the math was in the offing, but that a substantial change in rig dynamics was already underway, the numerics of which had not yet surfaced.

Indeed, Enverus DrillingInfo subsequently confirmed that New Mexico’s high capacity crude extraction rig counts increased, against the national downward trend, by a rate of 50+% per year for the ensuing years 2017 and into 2018 – thereby confirming this Indigo Point (inflection point).4
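The kind of signal described above – several basin-level rig-count series suddenly moving in sympathy – can be screened for numerically. Below is a rough sketch of one such screen using rolling pairwise correlation; the window length and threshold are illustrative assumptions of mine, not parameters from the cited research.

```python
# A crude screen (illustrative only) for the 'sympathetic trend' signal described
# above: when the average pairwise correlation of several rig-count series over a
# trailing window jumps above a threshold, flag a possible indigo point.
# The window length and threshold are assumptions, not values from the cited data.

import numpy as np

def synchrony_flags(series: np.ndarray, window: int = 6, threshold: float = 0.8) -> list:
    """series: shape (n_months, n_basins). Returns month indices worth investigating."""
    flags = []
    for t in range(window, series.shape[0]):
        chunk = series[t - window:t]                 # trailing window of monthly counts
        corr = np.corrcoef(chunk, rowvar=False)      # basin-to-basin correlation matrix
        off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
        if np.nanmean(off_diag) > threshold:         # basins moving in near lock-step
            flags.append(t)
    return flags
```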

I was involved in some of this analysis for particular clients in that industry. This post-inflection increase was driven by the related-but-unseen shortfall in shallow and shale rigs, lowering production capacity out of Texas during that same time frame and increasing opportunity to produce to price for New Mexico wells – a trend which formerly had served to precipitate the fall in monthly New Mexico Rig Count to an indigo point to begin with. Yet this pre-inflection trend also had to end because the supply of rigs in Texas could not be sustained under such heavy demand for shale production.

Astute New Mexico equipment planners who used inflection point theory might have been able to head this off and ensure their inventories were stocked in order to take advantage of the ‘no-discounts’ margin to be had during the incumbent rush for rigs in New Mexico. This key pattern in the New Mexico well data in particular was what is called, in the industry, an inflection point. My clients were able to increase stocks of tertiary wells, and while not flooding the market, were able to offer ‘limited discount’ sales for the period of short supply. They made good money. They were not raising prices of plywood before a hurricane, mind you – rather being a bit more stingy on their negotiated discounts, because they had prepared accordingly.

To place it in sailing vernacular: the wind has backed rather than veered, the humidity has changed, the barometric pressure has dropped – get ready to reef your sails and set a run course. A smart business person both becomes robust to inflection point concavity (prepares), and as well is exposed to their convexity (exploits).

The net impact to margin (not revenue) achievable through this approach to market analytics is on the order of an 8 to 1 swing. It is how the successful make their success. It is how real business is conducted. However, there exists a difference between surviving and thriving through adept use of perspective concerning indigo points, and that activity which seeks to exploit their dynamic for market failure and consolidation (cartel-like behavior).

Self Protection is One Thing – But What about Exploiting an Inflection Point?

There exists a form of inflection point analytics and strategy which is not so knight-in-shining-armor – one more akin to gaming an industry vertical or market in order to establish enormous barriers to entry, exploit consolidation failure or defraud its participants or stakeholders. This genus of furtive activity is enacted to establish a condition wherein one controls a system, or is a price maker and no longer a price taker – no more ‘a surfer riding the wave’, rather now the creator of the wave itself. Inflection points constitute an excellent avenue through which one may establish a cheat mechanism, without tendering the appearance of doing so.

Inflection Point Exploitation (The Cheat)

/philosophy : science : philosophy : agency/ – a flaw, exploit or vulnerability inside a business vertical or white/grey market which allows that market to be converted into a mechanism exhibiting syndicate (cartel, cabal or mafia-like) behavior. Rather than becoming robust to concavity and exposed to convexity, this type of consolidation-of-control market instead becomes exposed to excessive earnings extraction and sequestration of capital/information on the part of its cronies. Often there is one raison d’être (reason for existence) or mechanism of control which allows its operating cronies to enact the entailed cheat enabling its existence. This single mechanism will serve to convert a price-taking market into a price-making market, and allow the cronies therein to establish behavior which serves to accrete wealth/information/influence into a few hands, and exclude erstwhile market competition from being able to function. Three flavors of syndicated entity result from such inflection point exploitation:

Cartel – a syndicate entity run by cronies which enforces closed door price-making inside an entire economic white market vertical.

Functions through exploitation of buyers (monopoly) and/or sellers (monopsony) through manipulation of inflection points – inflection points where sensitivity is greatest, as early into the value chain as possible, and finally inside a focal region where attentions are lacking. Its actions are codified as virtuous.

Cabal – a syndicate entity run by a club which enforces closed door price-making inside an information or influence market.

Functions through exploitation of consumers and/or researchers through manipulation of the philosophy which underlies knowledge development (skepticism) or the praxis of the overall market itself. Inflection Points where they can manipulate the outcomes of information and influence, through tampering with a critical inflection point early in its development methodology. Its actions are secretive, or if visible, are externally promoted through media as virtue or for sake of intimidation.

Mafia – a syndicate entity run by cronies which enforces closed door price-making inside a business activity, region or sub-vertical.

Functions through exploitation of its customers and under the table cheating in order to eliminate all competition, manipulate the success of its members and the flow of grey market money to its advantage. Inflection Points where sensitivity is greatest, and where accountability is low or subjective. Its actions are held confidential under threat of severe penalty against its organization participants. It promotes itself through intimidation, exclusive alliance and legislative power.

Three key examples of such cartel, cabal and mafia-like entities follow.

The Cartel Cheat – Exemplified by Exploitation of a Critical Value Chain Inflection Point

Our first example of The Cheat involves the long-sustained decline of US agricultural producer markets. A condition which has persisted since the 1980s, ironically despite the ‘help’ farmers get from the agricultural technology industry itself.

Cheat where sensitivity is greatest and as early into the value chain as possible, at a point where attentions are lacking. Codify the virtue of your action.

Indigo point raison d’être: Efficiency of Mixed Bin Supply Chain

The agriculture markets in the US are driven by one raison d’être. They principally ship logistically (85%) via a method of supply chain called ‘mixed bin’ shipping. This is a practice wherein every producer of a specific product and class within a region dumps their agri-product into a common stock for delivery (which is detached from the sale, by means of a future). Under this method, purportedly in the name of ‘efficiency’, the farmer is not actually able to sell the value of her crop, rather must sell at a single speculative price (a reasonable-worst-case discounted aggregate) to a few powerful buyers (a monopsony).
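To make the mixed-bin mechanism concrete, here is a small illustrative sketch using hypothetical growers and prices (not industry data). It contrasts what each grower would earn selling on her own crop's quality versus selling into a pooled bin priced at a reasonable-worst-case discount.

```python
# Illustrative only: hypothetical growers and prices, chosen to show the mechanism,
# not to reflect actual market data. Under 'mixed bin' pooling, every grower receives
# the same worst-case-discounted price regardless of the quality she delivered.

growers = {  # grower -> (tons delivered, per-ton price her crop could command on its own quality)
    "high_quality": (100, 240.0),
    "average":      (100, 200.0),
    "cut_rate":     (100, 150.0),
}

pool_price = 0.9 * min(price for _, price in growers.values())  # reasonable-worst-case discount

for name, (tons, own_price) in growers.items():
    standalone = tons * own_price
    pooled = tons * pool_price
    print(f"{name:>12}: standalone ${standalone:,.0f}  vs pooled ${pooled:,.0f}")

# The high-quality grower forfeits her premium, while the lowest-cost, highest-accelerant
# grower is rewarded - the 'minimize all inputs' incentive described in the text below.
```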

Another way to describe this in value chain terms is by characterizing the impact of this ownership of the supply chain, by means of common-interdependent practice, as a ‘horizontal monopoly’. The monopoly/oligopoly powers in the presiding ABCD Cartel (as it is called) do not own the vertical supply of Ag products; instead they dominate the single method (value chain) of supply and distribution for all those products. This is what Walmart used in the 1970’s and 80’s to gut regional competitors. Players of lesser clout who could not compete initially inside Walmart’s 2 – 8% to-sales freight margin advantage fell vulnerable finally to the volume purchase-cost discounts which Walmart was eventually able to drive once a locus of purchasing power was established. Own the horizontal supply chain and you will eventually own the vertical as well. You have captured monopoly by using the Indigo Point of mandatory supply chain consolidation. Most US Courts will not catch this trick (plus much of it is practiced offshore) and will miss the incumbent violation of both the Sherman Anti-Trust Act and the Clayton Act. By the time the industry began to mimic in the 90’s and 00’s what Walmart had done, it was too late for a majority of the small to medium consumer goods market. They tried to intervene at the later ‘Tau Point’, when the magic had already been effected by Walmart at the less obvious ‘Indigo Point’ two or three decades earlier.

Moreover, with respect to agriculture’s resulting extremely powerful middle market, the farmer faces a condition wherein the only way to improve her earnings is through a process of ‘minimizing all (cost) inputs’. In other words, using excessive growth-accelerant pesticides and the cheapest means to produce as much caloric biomass as possible – even at the cost of critical phloem fulvic human nutrition content and toxin exposure. After all, if you exceed tolerance – your product is going to be mixed with everyone else’s product, so things should be fine. Dilution is the solution to pollution. In fact, actual nutrient content and growth-accelerant ppm levels are never monitored at all in the cartel-like agriculture industry. This is criminal activity, because the buyer and consumer are not getting the product which they think they are buying – and they are being poisoned and nutritionally starved in the process of being defrauded.

The net result? Autoimmune diseases of malnutrition skyrocket, market prices go into decades-sustained fall, microbiome impacts from bactericidal pesticide effects plague the global consumer base, nations begin to reject US agri-products, farms trend higher in Chapter 12 bankruptcies, and finally global food security decreases overall – ironically from the very methods which purport an increase in per acre yields.

The industry consolidates and begins to effect even more cartel-like activity. A death spiral of stupidity. 

This is the net effect of cartel-like activity. Activity which is always harmful in the end to human health, society or economy. These cartels exploit one minor but key inflection point inside the supply chain – the virtuous efficiency of shipping and freight – in order to extract a maximum of earnings from that entire economic sub-vertical, to the harm of everything else. This is the tail wagging the dog, and constitutes a prime example of inflection point exploitation (The Cheat).

Such unethical activity has resulted in enormous harm to human health, along with a sustained decades-long depression in the agriculture producer industry (as exemplified in the above ‘Chapter 12 Farm Bankruptcies by Region’ graphic by Forbes)5 – but not a commensurate depression in the agriculture futures nor speculator industry.6 Very curious indeed, that the cartel members at Point Tau (see below) are not hurt by their own deleterious activity at Point Indigo. This is part of the game. This is backasswards wrong. It is corruption in every sense of the word.

In order to effect The Cheat, one does not have to be a pervasive cheater.
One only need tweak specific inputs or methods at a paucity of specific points in a system or chain of accountability.

Thereafter an embargo on speaking about the indigo point must be enforced as well,
or an apothegm/buzzword phrase must be introduced which serves to obfuscate its true nature and impact potential.

The Cabal Cheat – Exemplified by Exploitation of Point Indigo for the Scientific Method – Ockham’s Razor

Our second example of The Cheat, cites how science communicators and fake skeptics manipulate the outcomes of science, through tampering with a critical inflection point early in its methodology.

All things being equal, that which appears compatible with what I superficially think scientists believe, tends to obviate the need for any scientific investigation.

Indigo point raison d’être: ‘Occam’s Razor’ Employed in Lieu of Ockham’s Razor

Point Indigo for the scientific method is Ockham’s Razor. This is the point, early in the scientific method, at which a competing theory is allowed access into the halls of science for consideration. Remember from our definition above that cheating is best done early, so as to minimize its necessary scale. Ockham’s Razor is that early point at which both a sponsor and his or her ideas are considered worthy members of ‘plurality’ – those things to be seriously considered by the ranks of science.7 The method by which fake skeptics (cartel members, or cabal members when the market is not an economy) manipulate what is and what is not admissible into the ranks of scientific endeavor is by means of a flag they title ‘pseudoscience’. By declaring any idea they dislike to be a pseudoscience, or failing ‘Occam’s Razor’ (it is not simple), skeptics game the inflection point of the entire means of enacting science, the scientific method. They are able to declare a priori those answers which will or will not arrive at Point Tau, for tipping into consensus at a later time.

To spray the field of science at night with a pre-emergent pesticide which will ensure that only the answer they desire, will come true in the growing sunlight.

Most of the stakeholder public does not grasp this gaming of inflection theory. Most skeptics do not either; they just go along with it – failing even to perceive that skeptics are to be allies at the Ockham’s Razor sponsorship point, not foes. They are there to help the competitiveness of alternatives, not to corruptly certify the field of monist conclusion. This is, after all, what it means to be a skeptic – to seriously consider alternative points of view. To come alongside and help them mature into true hypotheses. They want to see the answer for themselves.

If one does not like a particular avenue of study, all one need do is throw the penalty flag regarding that item’s ‘not being simple’ (Occam’s Razor). Thereafter, by citing its researchers to be pseudo-scientists, because they are using the ‘implements and methods of science to study a pseudoscience’, one has gamed the system of science by means of its inflection exploit mechanism.

They have effectively enacted cartel-like activity around the exercise of science on the public’s behalf. This is corruption. This is why science must ever operate inside the public trust – so that it does not become the lap-dog of such agency.

Seldom seek to influence point Tau as that is difficult and typically is conducted inside an arena of high visibility – your work in deception should always focus first on point Indigo – where stakeholders and monitors are rarely paying attention yet. One can control much, through the adept manipulation of inflection points.

Extreme measures taken to control Point Tau are unnecessary if one possesses the ability to manipulate Point Indigo.

The final step of the scientific method, consensus acceptance, constitutes more of a Malcolm Gladwell tipping point as opposed to an unconstrained inflection point. A tipping point is that point at which the past trend signal is now confirmed as valid or comprehensive in its momentum. An inflection point is that point at which a change in dynamic has transpired, and what has happened in the past is all but guaranteed not to happen next. Technically, a tipping point is nothing but a constrained inflection point; but for the purposes of this presentation and explanatory usefulness, the two need to be made distinct. The graphic to the right portrays these principles, in the hope that one can relate the difference in ergodicity dynamic between inflection and tipping points to their specific applications inside the scientific method. We must, as a scientific trust, be extraordinarily wary of tipping points (T), as undeserved enthusiasm for a particular understanding may ironically serve to codify such notions into an Omega Hypothesis – that hypothesis which has become more important to protect than the integrity of science itself. In similar fashion, we must also protect indigo points (I) from the undue influence of agency seeking a desired outcome.
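For readers who prefer a numerical illustration of the constrained-versus-unconstrained distinction (my own toy example, not the article's graphic): both a cubic and a logistic curve contain a point where curvature flips sign, but the logistic – the tipping point case – saturates toward an asymptote afterward, while the cubic keeps accelerating.

```python
# Illustrative contrast (not from the article): both curves have a point where the
# second derivative changes sign, but the logistic is constrained - after its
# 'tipping point' it flattens toward an asymptote, while the cubic keeps running.

import numpy as np

t = np.linspace(-4, 4, 10)
cubic = t ** 3                       # unconstrained inflection near t = 0
logistic = 1.0 / (1.0 + np.exp(-t))  # constrained inflection (tipping point) near t = 0

for name, y in (("cubic", cubic), ("logistic", logistic)):
    d2 = np.diff(y, 2)               # discrete second derivative
    flip = np.where(np.sign(d2[:-1]) != np.sign(d2[1:]))[0]
    print(name, "curvature flips near index", flip,
          "| final step size:", round(float(y[-1] - y[-2]), 3))
```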

Having science communicators deem what is good and bad science, is like having a mafia set the exchange rate you get at your local bank. Everyone fails, but nobody knows why.

The art of the Indigo-Tau cheat works like this:  Game your inflection dynamics sparingly and only until such time as a tipping point has been achieved – and then game no further. Lock up your inflection mechanism and never let it be accessed nor spoken of again. Thereafter, momentum will win the day. Do all your dirty-work, or fail to do essential good-work (Indigo), when the game is in doubt, and then resume fair play and balance, after the game outcome is already fait accompli (Tau). Such activity resides at the very heart of fake skepticism and its highly ironic pretense in ‘communicating science’.

Indigo Point Man (Person) – one who conceals their cleverness or contempt.

Tau Point Man (Person) – one who makes their cleverness or contempt manifest.

Based upon the tenet of ethical skepticism which cites that a shrewdly apportioned omission at Point Indigo, an inflection point early in a system, event or process, is a much more effective and harder to detect cheat/skill than a more manifest commission at Point Tau, the tipping point near the end of a system, event or process. Based upon the notion ‘Watch for the gentlemanly Dr. Jekyll at Point Tau, who is also the cunning Mr. Hyde at Point Indigo’. It outlines a principle wherein those who cheat (or apply their skill in a more neutral sense) most effectively, such as in the creation of a cartel, cabal or mafia, tend to do so early in the game and while attentions are placed elsewhere. In contrast, a Tau Point man tends to make their cheat/skill more manifest, near the end of the game or at its Tau Point (tipping point).

Shrewdly apportioned omission at Point Indigo is a much more effective and harder to detect cheat
than a more manifest commission at Point Tau. This is the lesson of the ethical skeptic.

Watch for the gentlemanly Dr. Jekyll at Point Tau, who is also the cunning Mr. Hyde at Point Indigo.

Which serves to introduce and segue into our last and most clever form of The Cheat.

The Mafia Cheat – Exemplified by NFL’s Exploitation of Interpretive Penalty Call/No-Call Inflection Points

Our final example of The Cheat involves a circumstance which exhibits how The Cheat itself can be hidden inside the fabric of propriety – leveraging the subjective nature of shades-of-color interpretations and hard-to-distinguish absences which are very cleverly apportioned to effect a desired outcome.8

Cheating is the spice which makes the chef d’oeuvre. Cheat through bias of omission not commission, only marginally enough to enact the goal and then no further, and while bearing a stately manner in all other things. Intimidate or coerce participants to remain silent.

Indigo point raison d’être: Interpretive Penalty Calls/No-Calls at Critical Indigo Points and Rates which Benefit Perennially Favored Teams and Disadvantage Others

I watched a National Football League (NFL) game last week (statistics herein have been updated for the NFL 2019 end-of-season) where the entire outcome of the game was determined by three specific and flawed penalty calls on the part of the game referees. The calls in review were all invalid flag tosses of an interpretive nature, which twice reversed one team’s (the Detroit Lions’) stopping of a come-from-behind drive by the ‘winning’ team (the Green Bay Packers). Twice their opponent was given a touchdown by means of invalid ‘hands-to-the-face’ violations called upon a defensive lineman – penalty flag tosses which cannot be overturned by countermanding and clear evidence, as existed in this game. The flags alone artificially turned the tide of the entire game. The ‘winning’ quarterback Aaron Rodgers, a man of great talent and integrity, when interviewed afterwards humbly said “It didn’t really feel like we had won the game, until I looked up at the scoreboard at the end.” Aaron Rodgers is a forthright Tau Point Man – he does not hide his bias or agency inside noise. Such honesty serves to contrast the indigo point nature and influence of penalties inside of America’s pastime of professional football. Most of the NFL’s manner of exploitation does not present itself in such obvious Tau Point fashion as occurred in this Lions-Packers game.

An interpretive penalty is the highest-sensitivity inflection point mechanism impacting the game of professional football. For some reason such penalties are not as impactful in its analogue, NCAA college football. Not that referees are not frustrating in that league either, but they do not have the world-crushing and stultifying impact that officials inside the NFL do. NFL officials single-handedly, and often, determine the outcome of games, division competitions and Super Bowl appearances. They achieve this impact (whether intended or not) by means of a critically placed set of calls, and more importantly no-calls, with regard to these interpretive subjective penalties. These patterns can be observed as consistent across decades of the NFL’s agency-infused and court-defined ‘entertainment’. Let’s examine these call (Indigo Point Commission) and no-call (Indigo Point Omission) patterns by means of two specific and recent team examples respectively – the cases of the 2019 Oakland Raiders and the 2017 New England Patriots.

Indigo-Commission Disadvantages Specific NFL Teams: Case of the 2019 Oakland Raiders

Argument #1 – The Penalty Detriment-Benefit Spread and Raider 60-Year Penalty History

The NFL Oakland Raiders have consistently been the ‘most penalized’ team by far over the last 60 years of NFL operations. Year after year they are flagged more than any other team. For a while, this was an amusing shtick concerning the bad-guy aura the Raiders carried 40 or 50 years ago. But when one examines the statistics, and the types of penalties involved – consistent through six decades, multiple dozens of various-level coaches who were not as highly penalized elsewhere in their careers, two owners and 10 varieties of front offices – the idea that this team gets penalized ‘because they are supposed to’ begins to fall flat under the evidence. Of course it is also no surprise that the Raiders hold the record for the most penalties in a single game as well: 23 penalties and 200 yards penalized.9

A typical year can be observed in the chart to the right, which I created through analyzing the penalty databases at NFLPenalties.com.10 The detailed data analysis can be viewed by clicking here. True to form, the Oakland Raiders were penalized per play once again for 2019 (see the previous years here) more than any other NFL team (save for Jacksonville, who narrowly overtook the Raiders with a late-season 16-penalty game). More to the point however, for the 2019 NFL Season the greatest differential between penalties-against and penalties-benefit is once again held by the Oakland Raiders. What the chart shows is that, in general, it takes 9 fewer plays executed for the Raiders (1 penalty every 21 plays) to be awarded their next penalty flag, as compared to their opponent (one penalty every 30 plays). Or put another way, the Raiders were flagged an average of 8 times per game, while comparatively their opponents were flagged on average 5.6 times per game – inside a range of feasibility which annually runs from about 8.2 to 5.4 to begin with. These Oakland Raider 2019 penalty results are hugging the highest and lowest possible extremes for team versus opponent penalties respectively.
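For those wishing to check the arithmetic behind the chart, the per-game and per-play figures quoted above reconcile as follows (a simple restatement of the NFLPenalties.com-derived numbers, assuming the roughly 168 combined snaps per game that those figures jointly imply).

```python
# Reconciling the figures quoted above (source: the NFLPenalties.com-derived chart).
# Roughly 168 combined snaps per game makes the two ways of stating the spread -
# flags per game and plays per flag - agree with one another.

plays_per_flag_raiders   = 21    # Raiders: one penalty against them every 21 plays
plays_per_flag_opponents = 30    # their opponents: one penalty every 30 plays

plays_per_game = 168             # implied combined snap count consistent with both figures

raiders_flags_per_game   = plays_per_game / plays_per_flag_raiders    # = 8.0
opponents_flags_per_game = plays_per_game / plays_per_flag_opponents  # = 5.6

print(f"Raiders:   {raiders_flags_per_game:.1f} flags/game")
print(f"Opponents: {opponents_flags_per_game:.1f} flags/game")
print(f"Spread:    {plays_per_flag_opponents - plays_per_flag_raiders} fewer plays between Raider flags")
```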

In other words, on average for 2019,
the Raiders were by far the most penalized team per game play in the NFL –
while the consistently least penalized team in the NFL was
whatever team happened to be playing the Oakland Raiders each week.

A popular older version of the graphic, which outlined this condition part way through the season, can be viewed here: Most to Least Penalized – 2019 Oakland Raiders and Their Opponents.11 Nevertheless, the bottom line is this, and it is unassailable:

The Oakland Raiders are far and above more penalized than any other NFL team, leading the league as the most penalized team in season-years 1963, 1966, 1968-69, 1975, 1982, 1984, 1991, 1993-96, 2003-05, 2009-11, 2016, and most of 2019 – further then landing in the top 3 penalized teams every year from 1982 through to 2019 with only a few exceptions.12 13

Argument #2 – The Drive-Sustaining Penalty Deficit

In the case of the Raiders, the overcall/undercall of penalties is not a matter of coaching discipline, as one might reasonably presume at first blush – rather, in many of the years in question the vast majority of the penalty incident imbalances involve calls of merely subtle interpretation (marked in yellow in the chart below). Things which could be called on every single play, but for various reasons are not called for certain teams, and are more heavily called on a few targeted teams – flags thrown or not thrown at critical moments in a drive, or upon a beneficial turnover or touchdown. To wit, in the chart which I developed to the right, one can discern not only that the Oakland Raiders are the most differentially-penalized team in the NFL for the 2019 season once again – but as well, that the penalties thrown against the Raiders are thrown at the most critically-disfavoring moments in their games. Times when the Raiders have forced the opposing team into 3rd-down-and-long circumstances, and their opponent therefore needed a break and an automatic first down in order to sustain a scoring drive. As you may observe in the chart, a team playing the Raiders in such a circumstance for 2019 bore by far the greatest likelihood of being awarded the subjective-call14 critical break they needed from NFL officials.15 16

The net upshot of this is that across their 16-game 2019 season the Raiders had 37 more drives impacted negatively by penalties versus the average NFL team on their schedule – equating to a whopping 96 additional opponent score points (by the Net Drive Points chart below). Above and beyond their opponents’ performances along this same index, this equates to at least an additional 6 points per game (because of unknown ball-control minutes impact) being awarded to Raider 2019 opponents – thereby making the difference between a 7 – 9 versus a 9 – 7 (or possibly even 10 – 6) record, not to mention the loss of a playoff berth. One can view the calculation tables for this set of data here. So yes, this disadvantage versus the NFL teams on the Raiders’ 2019 schedule was a big deal in terms of their overall season success.
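The per-game figure above follows from simple arithmetic on the drive counts. The sketch below merely restates the article's own 2019 numbers; the points-per-impacted-drive value is the one implied by the linked Net Drive Points table, not an independent estimate.

```python
# Restating the arithmetic above using the article's own 2019 figures.
# The points-per-impacted-drive value is the one implied by the article's
# Net Drive Points table (96 points over 37 drives), not an independent estimate.

extra_drives_impacted = 37   # more penalty-impacted drives than the average opponent
extra_opponent_points = 96   # additional opponent score points attributed to those drives
games                 = 16

points_per_drive = extra_opponent_points / extra_drives_impacted   # ~2.6
points_per_game  = extra_opponent_points / games                   # 6.0

print(f"~{points_per_drive:.1f} net points per impacted drive")
print(f"~{points_per_game:.0f} extra opponent points per game across the season")
```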

Examine the calls for objective violations – delay of game, too many players, neutral zone infractions, encroachment and false starts, things which are not subject to interpretation – and you will find that the Raiders actually perform better in these penalty categories than the NFL average (see chart on right for 2019 called penalties). These are the ‘discipline indicator’ class of penalties. What the astute investigator will find is that, contrary to the story-line foisted for decades concerning this reputation on the part of the Raiders, the team actually fares rather well in these measures. In contrast however, one can glean from the Net Drive Points chart below, and derive the same number in the chart to the right, that the Raiders are penalized at double (2x) the rate of the average NFL team for scoring-drive subjective-call defensive penalties, and as well 16.3% higher for all interpretive penalty types in total (yellow Raider totals in the Net Drive Points chart below). In contrast, the Raiders are penalized at 72% of the League average for objective-class or non-interpretive penalties. It is just a simple fact that the Raiders are examined by League officials with twice as much scrutiny for the violations of defensive holding, unnecessary roughness, offensive and defensive pass interference, roughing the passer, illegal pick, illegal contact and player disqualification. One can observe the analysis supporting this for 2019 Called Penalties here.17

The non-interpretive penalties (the ‘Discipline Class’ in the chart to the right) cannot be employed as inflection points of control, so their statistics will of course trend towards a more reasonable mean. Accordingly, this falsifies the notion that the Raiders are more penalized than other NFL teams because of shortfalls in coaching discipline. If that were the case, there should be no differential between the objective versus interpretive penalty-type stats. In fact, inside this ‘discipline indicator’ penalty class, the Raiders fare better than the average NFL team. But this raises the question: do the coaching penalty statistics then corroborate this intelligence? Yes, as it happens, they do.

Argument #3 – Oakland Raider Head Coach Penalty Burden

Further falsifying this notion that excess Raider penalties are a result of coaching and discipline are the NFL penalty statistics of the Raider head coaches themselves. Such a notion does not pan out under that evidence either. On average, Raider head coaches have been penalized 31.6% more in their years as a Raider head coach than in their years as head coach of another NFL team. However, for conservancy we have chosen in the graph to the right to weight each coach’s contribution by the number of years coached in each role. Thus, conservatively, a Raider head coach is penalized 26.3% more in that role as compared to their head coaching stints both before and after their tenure as head coach of the Oakland Raiders.18 Accordingly, this significant disadvantage has been part of the impetus which has shortened many coach tenures with the Raiders, thereby helping account for the 3.3 year Raider average tenure, versus the 6.6 year average tenure on the part of the same group of coaches both before and after being head coach of the Raiders. One can observe this in the graph, which reflects a blend of eight NFL coaches over the 1979 – 2019 NFL seasons; all prominent NFL coaches who spent significant time – 16 years on average – coaching both the Raiders and other NFL teams.19
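The 'conservancy' adjustment described above is simply a years-weighted average: rather than averaging each coach's Raider-versus-elsewhere penalty differential equally, each coach is weighted by the seasons coached in each role. The sketch below shows only the mechanics – the per-coach inputs are hypothetical placeholders, not the article's data (which is linked above).

```python
# Method sketch only: the per-coach numbers here are hypothetical placeholders.
# The article's actual per-coach data is linked in the post. The point is the
# mechanics - a years-weighted average is more conservative than a simple mean
# when the coaches with the largest differentials coached the fewest seasons.

import numpy as np

# (penalty differential as Raider head coach vs elsewhere, seasons coached)
coaches = [
    (0.45, 3),   # hypothetical coach A: +45% differential, 3 seasons
    (0.20, 10),  # hypothetical coach B
    (0.30, 5),   # hypothetical coach C
]

diffs = np.array([d for d, _ in coaches])
years = np.array([y for _, y in coaches])

simple_mean   = diffs.mean()
weighted_mean = np.average(diffs, weights=years)

print(f"Simple mean differential:    {simple_mean:.1%}")
print(f"Years-weighted differential: {weighted_mean:.1%}")
```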

Not even one of the nineteen head coaches in the entire history of the Raider organization bucked this trend of being more heavily penalized as a Raider head coach. Not even one. Let that sink in.

There is no reasonable possibility, that all these coaches and their variety of organizations could be that undisciplined, almost every single season for 50 years. The data analysis supporting this graphic can be viewed here.

Argument #4 – Oakland Raider Player Penalty Burden

Statistically, this coaching differential has to carry over into the players’ performances as well, through the association of common-base data. Former Raider cornerback D.J. Hayden portrayed this well in his recent contention that he was penalized more as an Oakland Raider than with other teams. In fact, if we examine the Pro Football Reference data, Hayden was indeed penalized a total of 35 times during his four years as a Raider defensive back, and only 11 times in his three years with Detroit and Jacksonville. This equates to 35 penalties in 45 games played for the Raiders, compared to only 11 penalties in 41 games played for other teams.20 That reflects a 65% reduction in penalties per game played, and a 55% reduction in penalties per snap played, during his tenure with teams other than the Oakland Raiders.21
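The per-game reduction quoted above is straightforward to verify from the cited Pro Football Reference counts (the per-snap figure additionally requires snap counts, which are not reproduced here).

```python
# Checking the per-game figure above from the cited Pro Football Reference counts.
# (The 55% per-snap figure additionally requires snap counts, not reproduced here.)

raider_penalties, raider_games = 35, 45     # D.J. Hayden as an Oakland Raider
other_penalties,  other_games  = 11, 41     # with Detroit and Jacksonville

raider_rate = raider_penalties / raider_games   # ~0.78 penalties per game
other_rate  = other_penalties / other_games     # ~0.27 penalties per game

reduction = 1 - other_rate / raider_rate        # roughly the 65% cited above
print(f"{raider_rate:.2f} vs {other_rate:.2f} penalties/game -> {reduction:.1%} reduction away from Oakland")
```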

Such detriment constitutes a disincentive for players to want to play for a team which is penalized so often – potentially marring their careers and negatively impacting their dreams of Pro Bowl, MVP or even Hall of Fame selections. This, I believe, is part of the reason the badge-of-honor tag-phrase “Once a Raider, Always a Raider” has evolved. In order to play for the Raiders, you pretty much have to acknowledge this shtick inside your career, and live with it for life. Should we now asterisk every player and coach in the NFL Hall of Fame with a ‘Played for the Oakland Raiders’ mark? A kind of reverse steroid-penalty bias negatively impacting a player’s career?

In the end, all such systemic bias serves to do is erode NFL brand, cost the NFL its revenue – and most importantly, harm fans, players, coaches and families.

NFL, your brand and reputation have drifted since the infamous Tuck Rule Game into becoming ‘Bill Belichick and the Zebra Street Boys’. Yours is a brand containing the word ‘National’, and as a league you should act accordingly to protect it. Nurture and protect it through a strategy of optimizing product quality.

And finally, the most idiotic thing one can do is to blame all this on the Oakland fans, as was done in this boneheaded article by the Bleacher Report on the Raider penalty problem from as far back as February 2012.

Collectively, all this is known inside any other professional context as ‘bias’ – or could even be construed by angry fans as cheating. And when members of an organization are forced under financial/career penalty to remain silent about such activity (extortion) – when you observe coaches and players, and more importantly members of the free press as well, biting their tongues over this issue – this starts to become reminiscent of Prohibition-era 18 U.S.C. § 1961 – U.S. Code Racketeering activity.

When you examine the history of such data, much of this patterning in bias remains consistent, decade after decade. It is systemic. It is agency. One can find and download the history of NFL penalties by game, type, team, etc. into a datamart or spreadsheet for intelligence derivation here: NFL Penalty Tracker. Go and look for yourself, and you will see that what I am saying is true. What we have outlined here is a version of the more obvious Indigo-point commission bias. Let’s examine now a more clever form of cheat, the Indigo-point omission bias.

Indigo-Omission Favors Specific NFL Teams: Case of the 2017 New England Patriots

Let’s address an example in contrast to the Oakland Raiders (also from the NFLPenalties.com data set): the case of a perennial NFL officials’ call-favored team, the New England Patriots. As one can see in an exemplary season for that franchise, portrayed in the chart to the right, the New England Patriots team that traveled to the 2017 Season Super Bowl was flagged (from game 10 of the season through to the Super Bowl) at a rate more than 2 standard deviations below even the next least-flagged team inside the group of 31 other NFL teams. Two standard deviations below even the second-best team in terms of penalties called against them. That is an enormous bias in signal. One can observe the 2017 game-by-game statistical data from which the graphic to the right is derived here. If one removes the flagrant, non-inflection-point-useful and very obvious penalties from the Patriots’ complete penalty log (non-highlighted penalty types in the chart below), this further then means the Patriots were called for 29 interpretive penalties in these final 12 games – the average of which was not called until late in the 3rd quarter, after the game’s outcome was in many cases already determined.22

In the chart to the right, one may observe the Net Drive Points (score) which were the statistical result of each of the most common forms of NFL penalty (of note is the dramatic skew in Raider penalties towards higher score-sensitive penalties versus the average NFL team (102%)). For those penalties (highlighted in yellow in the chart) which can be called on any play, New England opponents for weeks 10 through the end of the 2017 season earned 6.6 interpretive penalties per game, in those same weeks in which New England was flagged 2.4 times on average. This equates to New England earning only 36% as many interpretive penalties as their average opponent during that same timeframe. As well, most teams average their interpretive penalties late in the second quarter of play (as statistically they should), while New England was awarded their interpretive penalties with less than 5 minutes left in the third quarter of each game on average.

This means that New England was very seldom interpretive-penalized during any time in a game in which the outcome of that game was in doubt. This is ‘exploitation of omissions at Point Indigo’ by means of an absence of interpretive calls against them for, on average, the first three quarters of each game played in late 2017. This factor, as much as being a good team, is what propelled them to the Super Bowl.

Exploiting the Tau Point on specific critical plays near the end of a game constitutes, ironically, a less effective and more obvious mode of cheating – one which will simply serve to piss off alert fans, as happened in the January 20th 2019 Rams-Saints ‘No Call’ game. One cannot Indigo Point cheat viscerally for long and not get called on such obvious bias – the highly skilled cheat must be in the form of an exploit conducted when stakeholder attentions are not piqued.

Indigo Point Exploitation: The New England Patriots received their interpretive penalties at 36% the rate of the average NFL team, a full quarter later into the game than the average NFL team, most typically when the game outcome was already well in hand. This constitutes exploitation through omission at the Indigo Point.

In fact, for the entire AFC Championship and Super Bowl that season, New England was only flagged twice for any type of violation – a total of 15 yards. Their opponents? The Jaguars and the Eagles were flagged for 10 and 7 times more penalty yards respectively than were the Patriots in their respective championship games. True to form for 2019, from the same NFLPenalties database employed for the Raiders Penalty Differential chart at the top of this article section, one can examine and find that New England was the second least penalized team in the NFL for most of the 2019 Season, only falling to 6th overall in the final games (after they were busted a 6th time for cheating, by filming the sidelines of next week’s opposing team) – and on track to another probable and tedious Super Bowl appearance.

To put it in gambling terms – the seriously tested means of quantification upon which bookies rely – the Patriots’ opponents in the 2017 NFL Season, on average for games 10 through the Super Bowl, were given 4 more penalties in each game than were the Patriots themselves (3 fewer awarded to the Patriots + 1 more awarded to their opponent, on average). Using the Net Drive Points for the most common interpretive penalty types (highlighted in yellow) from the chart immediately above (published at Sports Information Solutions)23, this equates to awarding 10.8 extra points to the Patriots, per game, every game, all the way from game 10 of the 2017 season through to and including the Super Bowl. No wonder they got to the Super Bowl.

This equates to awarding the Patriots an extra 10.8 points per game in the second half of the season thru the playoffs.

Half the teams in the NFL could have gotten to the 2017 Season Super Bowl if they were given this
same dishonest two touchdown per game advantage afforded the New England Patriots by league officials that year.
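Restating the arithmetic behind the two-touchdown figure: the sketch below uses the interpretive-penalty rates quoted above, together with the roughly 2.7 net drive points per interpretive flag implied by the article's 10.8-point result and the Sports Information Solutions table it cites.

```python
# Restating the arithmetic behind the figures above. The ~2.7 net-drive-points value
# per interpretive flag is the one implied by the article's 10.8-point result over
# ~4 flags, per the Sports Information Solutions table it cites; it is not
# independently derived here.

opponent_interpretive_per_game = 6.6   # opponents of New England, weeks 10 - Super Bowl, 2017
patriots_interpretive_per_game = 2.4   # New England over the same stretch
net_drive_points_per_flag      = 2.7   # implied by the article's 10.8 points over ~4 flags

ratio = patriots_interpretive_per_game / opponent_interpretive_per_game          # ~36%
differential = round(opponent_interpretive_per_game - patriots_interpretive_per_game)  # ~4 flags/game
swing = differential * net_drive_points_per_flag                                  # ~10.8 points/game

print(f"Patriots flagged at {ratio:.0%} of their opponents' interpretive rate")
print(f"~{differential} extra interpretive flags per game against opponents")
print(f"~{swing:.1f} points-per-game swing toward New England")
```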

Once again, as in the case of the Oakland Raiders earlier, one can make up the pseudo-theory that ‘hey, they are a more disciplined team, so they are penalized less’. That is, until one examines the data and observes that this condition has gone on for five decades (ostensibly since, but in reality much further back than, the notorious ‘Tuck Rule’ AFC Championship Game – the video of which can no longer be found in its original form, because the NFL edited out over 2 minutes in order to conceal the game’s penalty no-call Tau Point league phone call intervention). The penalties which are called or not-called are of an interpretive nature – again, those that occur on most every single down, but are called on some teams consistently, and on other teams not so much. Here as well, in the penalty classes which are not subject to interpretation – delay of game, false start, etc. – surprise, New England is just average in those ‘no doubt’ classes of penalty.24 If this were a matter of coaching discipline, New England should also be two standard deviations below the mean for objective-class penalties as well. They are not. The subjective-class (yellow) penalty calls and no-calls have nothing whatsoever to do with coaching discipline, and everything to do with a statistically manifest bias on the part of the league and its officials.

The Economics of Mafia-Like Activity

It took me a while to come to this realization. Because of the presence of closed-door threats and fines levied upon its members, monopolistic overcharging-for-services exploitation of its customers, and illicit revenue gained through under-the-table manipulation of the success of its organizations and the flow of grey market (gambling) money, the National Football League is actually not a cartel; rather, it is more akin to a mafia by definition.

To annually bill customers who are being misled that they are watching or wagering upon unbiased games of skill, chance and coaching – $830 to DirectTV and $300 to NFL Sunday Ticket at bare-bones cost (both purchases are required, and the real cost for most consumers is on the order of $1,350 or more per year) – for a product which is touted to be one thing, but is delivered as a form of dishonest charade – to my sense this constitutes a consumer or gambling failure to deliver a contracted service.

I personally paid $29,000 to NFL Sunday Ticket and DirectTV over the last 15 years of viewing NFL games, being misled by the falsehood that I was watching a sporting event wherein my teams had a chance of success through skill, draft selection, talent, coaching and ball bounces. Fully unaware that in reality, my teams had little chance of success at all.

I was not delivered the product which was sold to me.

The NFL has actually counter-argued this very consumer accusation before the Supreme Court, as recently as 2010, contending that they are merely ‘a form of entertainment’. In 2007, a Jets season ticket holder sued the NFL for $185 million. The case reached the US Supreme Court. The Jets fan argued that all Jets fans are entitled to refunds, because they paid for a ticket to a competition of skill, coaching and chance. Further, had they been aware that the games were not real, then the fans would not have bought tickets. This fan lost the case on the grounds that the fans were not buying a ticket to a ‘fair’ event, rather an entertainment event.

Accordingly, the NFL contends that this Supreme Court precedent gives them the contractual right to advantage or disadvantage a team without having to address their own bias or cheating. Further, that the league is legally entitled to do what is needed to entertain their audience, such as in the creation and promotion of certain ‘storylines’.25 Storylines of the evil people and the good people (sound familiar?) in order to stimulate ticket and media purchases. A farce wherein, ironically, the league office actually thrives upon the brand-premise that they are administering a game of skill, chance and coaching. The reality is that NFL officials pick and choose who they want to win and who they want to lose – the same teams, decade in and decade out. None of its at-risk members (players, organizations, staff and coaches) are allowed to speak of this gaming, under threat of fines or to their careers. At least in professional wrestling, the league leadership and participants admit that it is all an act. In professional wrestling no one is fooled out of their money.

This is a pivotal reason why I dumped NFL Sunday Ticket and DirectTV. I am not into being bilked of hard-earned household money by a quasi-mafia.

Update (Dec 2019): The NFL is reportedly planning a “top-down review” of the league’s officiating during the 2020 offseason.

Such shenanigans as exemplified in the three case studies above represent the ever-presence and impact of agency (not merely bias). Bias can be mitigated; agency, however, involves the removal and/or disruption of the power structures of the cartel, cabal and mafia. These case examples in corruption demonstrate how agency can manipulate inflection dynamics to reach a desired tipping point – after which one can sit in their university office and enjoy tenure, all the way to sure victory. The only tasks which remain are to protect the indigo point secret formula by means of an appropriate catch phrase, and as well to ensure that one does not have any mirrors hanging about, so that one does not have to look at oneself.

An ethical skeptic maintains a different view as to how championships, ethical markets, as well as scientific understanding, should be prosecuted and won.

The Ethical Skeptic, “Inflection Point Theory and the Dynamic of The Cheat”; The Ethical Skeptic, WordPress, 20 Oct 2019; Web, https://wp.me/p17q0e-atd


The Earth-Lunar Lagrange 1 Orbital Rapid Response Array (ELORA)

Elora is a name meaning ‘the laurel of victory’. Within this paper, The Ethical Skeptic has proposed for consideration a concept for an elegant, flexible, high delivery-mass, rapid response, high kinetic-energy and low rubble-fragmentation system called ELORA – a Lagrange-exploiting orbital array around the Moon which can be rapidly deployed to interdict an approaching Earth-impactor threat through massive, adaptable and repeated kinetic impact. It is the contention of this white paper that this concept system offers features superior, in every facet of challenge, to the existing asteroid/comet deflection technologies under consideration.

Elora is a name bearing the meaning ‘the laurel of victory’. The symbol of the laurel wreath traces back to Greek mythology. Apollo, god of archery among other domains, was often represented wearing a laurel wreath which encircled his head, as a crown of symbolic power. Accordingly, in the Greek Olympics such wreaths were crafted from a wild form of olive tree known as “kotinos” (κότινος). In the later Roman context, laurel wreaths were symbols of martial victory, crowning a successful commander for having just vanquished an enemy force with rapidity.1

‘Rapid’ is a business term which encompasses both quickness in response (Amazon) and fastness in delivery (FedEx). ELORA is a gravity-exploiting wreath, worn around the head of the Moon, designed to mitigate large future and, importantly, emergent Earth-impacting celestial bodies through a rapid, repeatable and overwhelming kinetic response. A system which solves (in the concept presented herein) many of the problems which face today’s proposed Earth-impactor mitigation ideas, and yet bears few of their disadvantages.

ELORA is an acronym for Earth-Lunar Lagrange 1 (ELL-1) Orbital Rapid Response Array. ELORA is a proposed system to interdict and deflect Potential Hazardous Objects threatening Earth. It is a series of Lunar dust bags which each perform kinetically like shotgun pellets. They are bagged on the Moon and then individually launched to Earth-Lunar Lagrange point 1, in order to be assembled into massive single payloads of bound-but-separate dust bags – yielding a total of 1000 – 3000 kilotons of TNT (about 4.2 – 12.6 petajoules) of direct kinetic energy per payload. Twelve of these 1728-bag/200,000 kilogram single payloads are to be assembled, which will station as trojan ELL-1 payloads, ready to be rapidly deployed to any Lunar orbit inclination in order to interdict large (>50 meter) and short-notice Near Earth or Potential Hazardous Objects (NEO/PHO) from space. The array as a concept is easy to assemble and offers redundancy, power and rapidity unparalleled by existing conceptual alternative interdiction approaches.
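As a quick unit check on the payload figures quoted above (using the standard 4.184 terajoules per kiloton of TNT; all inputs are taken from the text, nothing here is measured data):

```python
# A quick unit-bookkeeping sketch for the payload figures quoted above, using the
# standard conversion of 4.184 terajoules per kiloton of TNT. All inputs are taken
# from the text; nothing here is measured data or an independent estimate.

KT_TNT_J = 4.184e12          # joules per kiloton of TNT

payload_mass_kg  = 200_000   # one assembled ELL-1 payload
bags_per_payload = 1_728     # bound-but-separate dust bags per payload
payload_yield_kt = (1_000, 3_000)  # stated direct kinetic energy per payload, in kilotons

print(f"Mass per dust bag: {payload_mass_kg / bags_per_payload:.1f} kg")
for kt in payload_yield_kt:
    print(f"{kt:>5} kt TNT  =  {kt * KT_TNT_J / 1e15:.1f} PJ")
```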

Of top concern among those scientists tasked to forward-think about threats to mankind is the real possibility that the Earth will someday be threatened by a rogue asteroid, comet or other space debris – even extra-solar – which becomes a Potential Hazardous Object (PHO).2 3 Current plans to address cosmic impactor threats include nuclear warheads and various ingenious means of imbuing physical effects to the PHO object, or of adding or subtracting momentum from its solar-orbital vector.

‘This one did sneak up on us’: Internal emails reveal how NASA almost missed Asteroid ‘2019 OK’ (a 130 meter asteroid) when it whizzed past Earth in July, within 24 hours of its detection.4

In 2011, the director of the Asteroid Deflection Research Center at Iowa State University, Professor Bong Wie, began to study strategies that could deal with 50-to-500-metre-diameter (200–1,600 ft) objects when the time to Earth impact was less than one year. He concluded that to provide the required energy, a nuclear explosion – or another event able to deliver the same power – is the only method that can work against a very large asteroid within these time constraints.5 It is the contention of this author that space deployed nuclear warheads constitute a dangerous, expensive and less effective means of mitigating such objects. A massive high-kinetic shotgun payload system such as ELORA will deliver more kinetic energy, more rapidly, and in more overwhelming fashion, than can nuclear warheads – bearing fewer of the downsides and costs of nuclear or other approaches.

Existing Approaches to Asteroid Deflection/Mitigation

Various PHO and emergent bolide collision avoidance techniques have different trade-offs with respect to metrics such as overall performance, cost, failure risks, redundancy, operations, and deployment readiness. There are various methods under serious consideration now, as means of changing the course of any potential Earth threat. These can be differentiated by various attributes such as the type of mitigation (deflection or fragmentation), energy source (kinetic, electromagnetic, gravitational, solar/thermal, or nuclear), and approach strategy (long term influence or immediate impact).6

Potential Hazardous Object (PHO) Problem Definition: Four Challenges Exist

1.  PHO interdiction technologies exist in a convex technology trade-off relationship of diminishing marginal returns (lower blue curve in the graphic below), in that,

a.  What can be deployed quickly or be easily maneuvered in space, is also not sufficient to do the job.

b.  What can do the job, cannot be deployed quickly nor be maneuvered easily in space.

2.  Hydrogen (lithium deuteride or equivalent) core detonations are theoretically effective for low diameter bodies, yet diminish in effectiveness (upper blue curve in the graphic below) asymptotically – with a 100 – 150 meter bolide constituting the largest effective body against which the technology can be employed.

3.  Current estimates of effectiveness are theoretical only –  a condition wherein neither their adequacy at the job, nor rapidness/maneuverability in deployment can be easily tested against mock threat conditions prior to their actual need.

4.  No System to date has offered a low-cost, rapidly deployable, scalable, flexible, testable, centuries-durable, low maintenance, all aspect angle, low fragmentation, redundant, bolide-mass altering, high-mass/kinetic potential and multiple-impactor solution – which can address the emergent or otherwise 150+ meter diameter body.

The various current approaches to deflecting a wayward celestial body fall into four approach categories (Note: These are all derived/reworded and modified/categorized into a more logical taxonomy, from Wikipedia: Asteroid Impact Avoidance): 

Fragmentation – explosive or high velocity kinetic methods which seek to pulverize the orbital body into bolides which either take non-threatening orbital tracks (achieve orbital body escape velocity) or pose less of a destructive threat when they do eventually enter the Earth’s atmosphere (hopefully less than 35 meters in average diameter). These can be executed in either an emergent or long-term strategy.

1.  Hypervelocity Asteroid Mitigation Mission for Emergency Response (HAMMER) – a spacecraft (8 tonnes) capable of detonating a nuclear bomb to deflect an asteroid through two methods of approach:

a.  Nuclear Impact Device (NID) – a direct impact by a nuclear device causes the body to be broken through concussion into smaller pieces of both escape velocity and less-damaging characteristics.

b.  Nuclear Standoff Device (NSD) – a nuclear device or series thereof, are detonated a given distance from the orbital body. The kinetic energy of thermal and fast neutrons, along with x-rays and gamma rays causes a push which changes the track of the orbital body (note, this is not the same as cometization).

2.  Dual Warhead Nozzle-Ejecta – a two stage nuclear/nuclear approach, which combines an initial nuclear blast to create a provisional deep crater, which is then followed by a second subsurface nuclear detonation within that provisional crater (the nozzle), which would generate an ejecta effect and high degree of efficiency in the conversion of the x-ray and neutron energy that is released into propulsive energy to the orbital body.

Kinetic Energy/Impact – massive and high velocity man-assembled bodies which impact the orbital body directly and impart a resulting inertial/momentum transfer change to its orbit.

3.  Asteroid Redirect – capture and employment of another asteroid body as an inertial mass which is directed to impact and fragment or alter the trajectory of the threatening orbital body.

Earth-Lunar Lagrange 1 Orbital Rapid Response Array (ELORA) – a large kinetic object and rapid response approach developed by The Ethical Skeptic (a simplified momentum-transfer sketch follows this list). A series of Lunar dust bag bundles, bound together into large massive projectiles, held on station at Earth-Lunar Lagrange Point 1 and subsequently placed into any needed inclination of Lagrange orbit around the Moon. These would be directed on short notice by thruster and/or Moon-Earth slingshot towards the approaching orbital body, exploiting the low/zero gravity of Earth-Moon Lagrange 1, and targeted for a direct high velocity/high kinetic impact. The bags can be un-bound at the last minute, in order to form a larger impact pattern (shotgun effect) in the case of a rubble pile asteroid, thereby distributing the momentum over a larger area of the orbiting body, displacing a greater amount of the rubble and reducing fragmentation.

4.  Hypervelocity Asteroid Intercept Vehicle (HAIV) – a two stage kinetic/nuclear hybrid approach, which combines a kinetic impactor to create an initial crater, which is then followed by a subsurface nuclear detonation within that initial crater, which would generate a lensing effect and high degree of efficiency in the conversion of the x-ray and neutron energy that is released into propulsive energy to the orbital body.

5.  Conventional Rocket Engine – launching and attaching any spacecraft propulsion engine to the center of mass of the orbital object, and using the engine to give a push, possibly forcing the asteroid onto a non-threatening trajectory.

Gradualization – various approaches by means of technology, engines, colors, lasers or offset thrust devices which serve to push, pull, alter the solar pressure on or cometize the orbital body.

6.  Gravity Tractor Thrust Rockets – a more massive thruster spacecraft is placed into orbit around the Earth-threatening orbital body. A slow thrust is applied from the spacecraft’s engines, never exceeding escape velocity. The mutual gravitation between the two bodies begins to alter the trajectory of the orbital body from its original course.

7.  Ion Beam Driver – involves the use of a low-divergence ion thruster mounted on an orbiting spacecraft, which is pointed at the center of mass of the asteroid. The momentum imparted by the ions reaching the asteroid surface produces a slow-but-continuous force that can deflect the asteroid in similar fashion to a gravity tractor, but with a much lighter spacecraft.

8.  Solar Sail Push/Pull – attaching a solar sail either behind or on the surface of the orbital body, in order to use the solar wind to alter the trajectory of the orbital body.

9.  Painting – altering the color of the orbital body to the opposite end of the color band from which it naturally exists. The whiter or blacker surface alteration would then provide for a differential dynamic in the absorption and reflection of solar photons and gradually alter the body’s trajectory over time via the Yarkovsky effect.

10.  Solar Focusing – a technique using a set of refractory lenses or a large reflector lens (probably deployed foil) which focuses a relatively narrow beam of reflected sunlight onto a specific region of the orbital body, creating thrust from the resulting vaporization of material, solar wind or through amplifying the Yarkovsky effect, wherein photons emitted from the body itself serve to alter its trajectory.

11.  Nuclear Pulse Propulsion – involves the use of a nuclear pulse engine mounted on a spacecraft, which lands on the surface of the asteroid. The momentum imparted by the nuclear pulses produces a slow-but-continuous force that can deflect the asteroid in similar fashion to a thruster rocket.

12.  Cometization – heating the surface of the orbital body through a thermonuclear release of neutrons, x-rays and gamma rays so that it begins to eject heated material from cracks or vents in the surface, in similar manner to a comet – thereby causing a thrust vector nudging of the orbital body itself for a short to moderate period of time. Depending on the brisance and yield of the nuclear device, the resulting ejecta exhaust and mass loss effects, would produce enough alteration in the object’s orbit to make it miss Earth.

13.  Laser Ablation – focus sufficient laser energy from Earth or a space deployed laser or laser array, onto the surface of an asteroid to cause flash vaporization and mass ablation and create either an impulse or mass alteration which changes the momentum of the orbital body.

14.  Magnetic Flux Compression – magnetically brakes objects that contain a high percentage of iron through deploying a wide coil of wire along the sides of its orbital path. When the body moves through the coil or tunnel, inductance creates an electromagnet solenoid effect which causes EM drag on the orbital body.

Mass Alteration – various methods of removing mass from (digging and ejecting), or adding mass to, the orbital body, thereby altering its long term orbital track.

15.  Deep Impact Collision – an impactor which injects itself deep into the surface of the orbital body, thereby changing both its velocity and net mass.

16.  Mass Driver – a system landed onto the surface of an orbital body, which ejects material into space, thus giving the object a slow steady push as well as decreasing its mass.

17.  Gravity Tractor Redirect – another smaller, but still significant spacecraft or redirected body is placed into orbit around the Earth-threatening orbital body. The added binary-systemic gravitation/mass of the new body alter the trajectory of the orbital body from its original course.

18.  Tether Tractor – attaching a mass by means of a tether or netting, to the orbital body, thereby altering the net mass of the system and as well its orbital trajectory.

19.  Dust/Steam Cloud Accretion – releasing dust or water vapor from a spacecraft or from a detonated redirected comet, which would subsequently be gathered/accreted by the orbital body and serve to alter its mass/trajectory over a long period of time.

20.  Coherent Digger Array – multiple mobile or fixed flat tractors which attach to the surface of the orbital body and dig up material, ejecting it into space and thereby significantly altering the mass of the orbital body and changing its trajectory. The material could also be released from one side of the body as a coordinated fountain array with an added propulsive effect.

21.  Net Drag – a durable net material which is deployed into the path of the orbital object, which then wraps around the object. This netting is added several times over until the net mass/momentum of the orbital body is changed.
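As referenced under the kinetic energy/impact category above, the intuition behind a large-mass impactor can be framed with a rough momentum-transfer estimate. The sketch below is an illustrative back-of-envelope only and is not drawn from this paper or from any mission design: the momentum-enhancement factor (beta), the target density and diameter, the closing speed, and the 3·Δv·t along-track drift approximation are all assumed values chosen for demonstration.

```python
# Illustrative back-of-envelope sketch of kinetic deflection.  All values below are
# assumptions chosen for demonstration; none are taken from this essay or from any
# actual mission design.
#
# A kinetic impactor changes the target's velocity by roughly
#     delta_v = beta * (m_impactor * v_closing) / M_target
# where beta is the momentum-enhancement factor from ejecta.  For a small
# along-track velocity change on a near-circular heliocentric orbit, the
# accumulated miss distance after a lead time t is on the order of 3 * delta_v * t.
import math


def deflection_miss_distance_m(m_impactor_kg, v_closing_ms, m_target_kg,
                               lead_time_s, beta=2.0):
    """Rough along-track miss distance (meters) produced by one kinetic impact."""
    delta_v = beta * m_impactor_kg * v_closing_ms / m_target_kg   # m/s
    return 3.0 * delta_v * lead_time_s


if __name__ == "__main__":
    # Assumed target: a 150 m rubble pile at 2,500 kg/m^3 (roughly 4.4 billion kg).
    m_target_kg = 2500.0 * (4.0 / 3.0) * math.pi * 75.0 ** 3
    year_s = 365.25 * 24 * 3600
    for lead_years in (0.25, 1.0, 5.0):
        miss_km = deflection_miss_distance_m(2.0e5, 2.0e4, m_target_kg,
                                             lead_years * year_s) / 1000.0
        print(f"lead time {lead_years:4} yr -> roughly {miss_km:,.0f} km of miss distance")
```

Even under these crude assumptions the point of the taxonomy becomes visible: kinetic deflection performance is dominated by the product of impactor mass, closing speed and warning time.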

Carl Sagan, in his book Pale Blue Dot, expressed concern about deflection technology, noting that any method capable of deflecting impactors away from Earth could also be abused to divert non-threatening bodies toward the planet.

If you can reliably deflect a threatening worldlet so it does not collide with the Earth, you can also reliably deflect a harmless worldlet so it does collide with the Earth. Suppose you had a full inventory, with orbits, of the estimated 300,000 near-Earth asteroids larger than 100 meters—each of them large enough, on impacting the Earth, to have serious consequences. Then, it turns out, you also have a list of huge numbers of inoffensive asteroids whose orbits could be altered with nuclear warheads so they quickly collide with the Earth…

Tracking asteroids and comets is prudent, it’s good science, and it doesn’t cost much. But, knowing our weaknesses, why would we even consider now developing the technology to deflect small worlds?…

If we’re too quick in developing the technology to move worlds around, we may destroy ourselves; if we’re too slow, we will surely destroy ourselves. The reliability of world political organizations and the confidence they inspire will have to make significant strides before they can be trusted to deal with a problem of this seriousness…

Since the danger of misusing deflection technology seems so much greater than the danger of an imminent impact, we can afford to wait, take precautions, rebuild political institutions—for decades certainly, probably centuries. If we play our cards right and are not unlucky, we can pace what we do up there by what progress we’re making down here…

The asteroid hazard forces our hand. Eventually, we must establish a formidable human presence throughout the inner Solar System. On an issue of this importance I do not think we will be content with purely robotic means of mitigation. To do so safely we must make changes in our political and international systems.

   ~[p 146-150], Pale Blue Dot, Carl Sagan

The critical path issue elucidated through this – is that a well designed and elegant deflection technology would be employed to increase the entropy of the interdiction circumstance, whereas using a redirect technology critically depends upon decreasing the entropy of that circumstance. In other words, by choosing a non-nuclear deflection (as opposed to redirection) we are pushing the threatening orbital body into any one of a billion potential outcomes, all of which are satisfactory in nature. In order to make a non-threatening orbital body suddenly become a threat, one must alter its trajectory to one specific outcome among billions. A task of extraordinarily greater difficulty – rendering that technology also not an optimal choice as an impactor-mitigating solution. I disagree with Sagan that all mitigation technologies will/can be used as an implement of warfare, and therefore must be delayed – as one would need to resign oneself to the single answer of nuclear detonations in order to assume that such a false dilemma exists.

Indeed, that dilemma does not necessarily exist. What we have proposed below provides for a powerful, yet neutral, non-nuclear and single purpose system – which can only be employed to deflect incoming invaders with abandon, yet cannot be used to deflect them in order to purposely place Earth into harm’s way. The concept system resolves nearly every shortfall characteristic in the list of mitigation approaches above (see graph and list of technologies 1 – 21), and as well resolves Sagan’s concern, through use of simple technologies and focused on-task elegance in design.
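The asymmetry described above can be made concrete with a small geometric sketch (my own illustration, not from the essay): ignoring gravitational focusing, the fraction of arrival directions that actually intersect Earth is simply the planet's solid angle as seen from the body's range, divided by the full sphere. The ranges below are arbitrary examples.

```python
# Toy geometry for the deflection-versus-redirection asymmetry (my own illustration;
# gravitational focusing is ignored and the ranges are arbitrary examples).  The
# fraction of all arrival directions that intersect Earth is the planet's solid
# angle at that range divided by the full sphere; a benign deflection succeeds in
# essentially every other direction.
import math

EARTH_RADIUS_KM = 6371.0


def earth_hit_fraction(distance_km):
    """Fraction of the full sphere of directions subtended by Earth at a given range."""
    half_angle = math.asin(min(1.0, EARTH_RADIUS_KM / distance_km))
    return (1.0 - math.cos(half_angle)) / 2.0


for label, d_km in [("lunar distance", 3.84e5),
                    ("0.05 AU", 0.05 * 1.496e8),
                    ("1 AU", 1.496e8)]:
    fraction = earth_hit_fraction(d_km)
    print(f"{label:>14}: ~1 in {1.0 / fraction:,.0f} directions intersects Earth")
```

At interplanetary ranges the hostile re-targeting task is literally a one-in-billions shot, whereas a benign deflection succeeds along essentially any other vector.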

Elegant Solution Approach: ELORA – Earth-Lunar Lagrange 1 Orbital Rapid Response Array

Below are presented five slides which serve to introduce the ELORA concept approach and feature set. The first, second and third slides serve to introduce the Lagrange exploitation construct, along with the principle involving 12 x 1728 bags of Lunar dust in trojan Earth-Lunar Lagrange 1 station or targeting orbit around the Moon. The fourth slide speaks to the establishment of all-Lunar-inclination-angle target interdiction capability, while the fifth slide depicts the multiple impactor (up to 12) and shotgun (1728 ‘pellets’) approaches which achieve the enormous kinetic energy payload and low fragmentation outcome.

The development process consists of simply harvesting dust from the surface of the Moon, so that large particles are not created from spills in orbit around the Moon or after impact with the targeted bolide. This dust is bagged and launched into space in quantities of 12 bags. After 144 launches (much more cheaply executed from the surface of the Moon and its low gravity than from Earth), these 1728 bags of Lunar dust are bound together as a single 200,000 kg ‘payload’ – one single impactor designed to mitigate an Earth endangering NEO/PHO. Each payload is then affixed with a rocket and attitude control system, and then parked at Lagrange 1 (or ready-placed into Lagrange elliptical orbit around the Moon, in a variety of orbit inclinations so as to maximize celestial omnidirectional coverage). The payload is preset with small deployment charges which allow the bags of dust to be burst apart slightly, and to separate during the last 5 minutes of terminal approach, so that they act as a kind of shotgun effect on the targeted bolide.
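Before turning to where this assembly takes place, a quick consistency check of the figures just quoted is shown below; this is a minimal sketch, and the per-bag mass is merely implied by the stated totals rather than given in the concept text.

```python
# Consistency check of the assembly figures quoted above.  The per-bag mass is
# implied by the stated totals; it is not a figure given in the concept text.
BAGS_PER_LAUNCH = 12
LAUNCHES_PER_PAYLOAD = 144
PAYLOAD_MASS_KG = 200_000
PAYLOADS_IN_ARRAY = 12

bags_per_payload = BAGS_PER_LAUNCH * LAUNCHES_PER_PAYLOAD       # 1,728 bags
bag_mass_kg = PAYLOAD_MASS_KG / bags_per_payload                # ~116 kg per bag
array_mass_kg = PAYLOAD_MASS_KG * PAYLOADS_IN_ARRAY             # 2,400,000 kg in total

print(f"bags per payload : {bags_per_payload}")
print(f"mass per bag     : {bag_mass_kg:.0f} kg")
print(f"full array mass  : {array_mass_kg:,} kg across {PAYLOADS_IN_ARRAY} payloads")
```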

This is all accomplished at a space work-station called ELL-1 Payload Assembly, in trojan orbit at Earth-Lunar Lagrange point 1. The Earth-Lunar Lagrange 1 Payload Assembly station would be used to conduct monitoring, maintenance and upgrades of the system from then on. This would be absolutely essential due to the structural fatigue and propellant degradation which each payload and its control system would experience from age and from the constant repetitive changes in the Moon’s tidal gravity over each orbit. Alternatively, all 12 payloads may be kept on station as ready-station trojan bodies at ELL-1. The Moon orbital phase for payloads under this approach would only be initiated when the actual deployment of the system was needed. This would delay the rapidness of response by only a couple of days. Of course a hybrid system thereof may also be deployed, with a portion of the payloads in orbit and the remainder in trojan station-keeping reserve so as to minimize maintenance demand.

The result is a single payload impactor (200,000 kg) carrying the energy of 1000 – 3000 kilotons of TNT (about 4.2 – 12.6 Petajoules); roughly 65 to 200 times as much energy as that released from the atomic bomb detonated at Hiroshima.

However, unlike a nuclear fusion core detonation (used by the most effective alternative approaches in the chart above) – ALL of an ELORA payload’s kinetic potential is transferred into momentum imparted to the orbital body.
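For readers who want the unit conversions behind the energy figures above, the short sketch below simply restates them; the kiloton-to-joule factor and the ~15 kiloton Hiroshima yield are standard reference values rather than figures from this paper.

```python
# Unit-conversion check for the stated 1,000 - 3,000 kiloton payload energy range.
KT_TNT_J = 4.184e12      # joules per kiloton of TNT (standard conversion)
HIROSHIMA_KT = 15.0      # approximate Little Boy yield in kilotons (reference value)

for kt in (1000.0, 3000.0):
    energy_pj = kt * KT_TNT_J / 1e15
    print(f"{kt:6.0f} kt TNT  =  {energy_pj:5.1f} PJ  =  ~{kt / HIROSHIMA_KT:.0f}x Hiroshima")
```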

Alternative approaches above would require 672 static load launches, or 50 to 85 B83 hydrogen nuclear core detonations, in order to achieve the same inertial effect as 12 single payloads from an ELORA intervention – all static assets needing to be maintained by an international body for centuries, and then without warning be required within a matter of days.

And of course, ELORA could be tested on 150+ meter asteroids and NEO’s, at low cost, whereas the Delta IV static load and B83 hydrogen warhead detonation approaches could not.

Now, it should be noted that the orbit paths of the payloads do not have to conform to the specific polar orbit depicted in the slides below. Alternative Lunar retrograde orbits and other oblique/equatorial/inclination offset orbits can be established to enhance the ability to deliver payloads to an impactor body approaching from a variety of aspect angles, in the most rapid manner and with the highest ratio of kinetic payload to energy input possible. The illustrations below depict only one type of potential prograde polar orbit, for conceptual simplicity.

Notes: While the Lunar orbit is depicted as somewhat circular, the actual orbit would be elliptical. As well, the relative sizes and distances shown bias towards presenting the Moon as larger and closer to the Earth than it really is, and both bodies as larger in scale than reality. All of this is done for the sake of presentation only.

Critical Advantages of ELORA over Other Interdiction Concepts/Approaches

The ELORA concept solution presents a number of advantages over currently proposed approaches:

1.  Low construction cost (Provided we are working on the Moon already)

2.  Repeated impacts and multiple attempts possible in quick response context (tolerates single failures)

3.  No fragmentation of threat – Impactor is fine dust which spreads over an area approaching the size of the bolide immediately prior to impact, so that it bears less likelihood of splitting it

4.  Low cost to maintain/launch/station-keep

5.  Very quick deployment – System can be deployed within hours after a five sigma track is established for the target object

6.  Extremely high velocities and impact reach possible – Superior kinetic energy potential – Superior inertia imparted as compared to hydrogen core detonation

7.  Modular/Scalable/’Magazine’ is cheaply and easily reload-able – the advantageous bag-by-bag method as to how it is assembled, becomes also a key strength in how it impacts the orbital body (like shotgun pellets) and reduces overall threat of fragmentation

8.  Can address multiple objects at once or persistent fragments which remain after first impact, with a second fusillade

9.  Can still be used with superior effectiveness for longer term intervention scenarios

10.  ‘Paints’ an asteroid white (for long term intervention scenario) – Increases Yarkovsky effect – Induces cometization on impact side

11.  Adds superior amount of mass to the target orbital body

12.  Spread pattern (shotgun blast) or single bullet projectile and variable velocities possible – tailored to orbital body challenge. Not vulnerable to the tumbling of the target bolide (roll, pitch, yaw) as are all other technologies

13.  Deflects very large orbital body mass threats compared to current conceptual approaches

14.  Remaining straggler threat fragments can be independently targeted and impacted separately

15.  Uses Lunar orbit angular momentum and/or Lunar/Earth slingshot effect for added kinetic energy at launch

16.  Vastly superior single impactor total mass (56 x) – equivalent to 1000 – 3000 kilotons of TNT (about 4.2 – 12.6 Petajoules), roughly 65 to 200 times as much energy as that released from the atomic bomb detonated at Hiroshima. However, unlike a nuclear warhead blast – ALL of this kinetic potential is transferred into momentum imparted to the orbital body.

17.  Rapid intervention arrival time onto targeted threat

18.  Potential for deployment to not be controlled by a single nation nor launch station

19.  Lower chance of technology chain risk-failures/straightforward mechanisms

20.  Thrusters are only directional and do not have to lift anything into space, nor expend regular fuel in order to keep a dynamic orbit – less fuel vulnerability/lower fuel requirement

21.  Each impactor unit arrival provides ranging/correction for more accurate successive impacts – (shoot shoot look shoot)

22.  Employs the kinetic energy of the Moon’s orbit around the Earth like a pitcher’s throw in baseball

23.  Uses stationary Lagrange point 1 assembly – low G and low cost to assemble/handle impactor payloads

24.  Can be recaptured by the Lagrange 1 assembly station and repair/maintenance done as needed

25.  Low cost of assembly/launch from low G of Moon surface

26.  System can be upgraded with better trajectory rockets, without having to change out the actual payload

27.  System can be tested repeatedly and at a low cost. It is easy to replace the expended round.

28.  Can deflect an irregular shape, long and tumbling bolide (such as ʻOumuamua, discovered in 2017)

29.  Trojan payloads in static orbit at Earth-Lunar Lagrange 1 can be launched/slingshot by the Moon and Earth along any selected initial Lunar orbit inclination vector desired (as well as corresponding Earth slingshot inclination), to interdict objects approaching from any direction inside the celestial grid.

30.  Assembly and trojan stationing at Earth-Lunar Lagrange Point 1 allows for a very large payload to be assembled in space, yet not have to carry the rockets and large fuel load required to keep orbital station around the Moon, or even worse, the Earth during its assembly – wherein one would constantly have to add energy, adjusting the orbit of the payload as bag mass is added to its structure over time.

Development and Phasing

While much work obviously remains to be completed in the development phase – and the concept accordingly demands that a Moon base of operations be established (this becoming only one of the reasons to mandate such a thing, so that this project need not be burdened with the full cost of establishing operations on the Moon itself) – the deployment itself is conducted in relatively straightforward fashion, through beta testing and the four deployment phases below.

2038  Beta 0 Testing – Earth based test of smaller trojan payload station-keeping at ELL-1

2040  Beta 1 Testing – ELL-1 in situ testing of larger payload assembly/station-keeping

2043  Beta 2 Testing – Trojan to Moon orbit transition test and asteroid test interdiction

2050  Phase I – Establish Moon surface station infrastructure

2055  Phase II – Lunar launch station assembly/operation/test bagging & launch

2058 – 2068  Phase III – Earth-Lunar L1 trojan impactor amassing (creating payloads)

2070 – 2075  Phase IV – Lunar Lagrange orbital array stationing/acceptance testing series

Thus we are probably at least 40 years from being able to begin to accomplish such a feat at face value as presented herein. However, it is the opinion of this author, that eventually the best minds in this discipline will conclude that this solution is the only real way in which an emergent, 150+ meter bolide interdiction could be achieved by mankind. In the meantime, the nuclear option (distasteful as that may be) appears to be the best stop-gap measure for Earth defense with respect to smaller, more likely, PHO bolides, while we obtain the political and social will to create the elegant and ethical ELORA architecture in our binary space.

However, there is nothing to say that we cannot in the meantime, create a couple of these payloads with conventional Delta IV launches over the next two decades, place a similar smaller sized payload at Lagrange 1, and then test the concept first. In fact, we should do this. But the question will remain, will we be this bold? Or are PHO/Earth-impactors just another myth to the assuredly skeptical mind?

In the meantime, respectfully submitted for your consideration.

The Ethical Skeptic, “The Earth-Lunar Lagrange 1 Orbital Rapid Response Array (ELORA)”; The Ethical Skeptic, WordPress, 14 Sep 2019; Web, https://wp.me/p17q0e-aeh

September 14, 2019 Posted by | Ethical Skepticism | , , , , , , , , , , | 5 Comments

The Elements of Hypothesis

One-and-done statistical studies, based upon a single set of statistical observations (or even worse, lacks thereof), are not much more credible in strength than a single observation of Bigfoot or a UFO. The reason: they have not served to develop the disciplines of true scientific hypothesis. They fail in their duty to address and inform.

As most scientifically minded persons realize, hypothesis is the critical foundation in exercise of the scientific method. It is the entry door which demonstrates the discipline and objectivity of the person asking to promote their case in science. Wikipedia cites the elements of hypothesis in terms of the below five features, as defined by philosophers Theodore Schick and Lewis Vaughn:1

  • Testability (involving falsifiability)
  • Parsimony (as in the application of “Occam’s razor” (sic), discouraging the postulation of excessive numbers of entities)
  • Scope – the apparent application of the hypothesis to multiple cases of phenomena
  • Fruitfulness – the prospect that a hypothesis may explain further phenomena in the future
  • Conservatism – the degree of “fit” with existing recognized knowledge-systems.

Please note that herein we are discussing alternative hypothesis structuring under the scientific method; not hypothesis testing under the experimental method.

Equivocally, these elements are all somewhat correct; however none of the five elements listed above constitute logical truths of science nor philosophy. They are only correct under certain stipulations. The problem resides in that this renders these elements not useful, and at worst destructive in terms of the actual goals of science. They do not bear utility in discerning when fully structured hypothesis is in play, or some reduced set thereof. Scope is functionally moot at the point of hypothesis, because in the structure of Intelligence, the domain of observation has already been established – it had to have been established, otherwise you could not develop the hypothesis from any form of intelligence to begin with.2 3 To address scope again at the hypothesis stage is to further tamper with the hypothesis without sound basis. Let the domain of observation stand, as it was observed – science does not advance when observations are artificially fitted into scope buckets (see two excellent examples of this form of pseudoscience in action, with Examples A and B below).

Fruitfulness can mean ‘producing that which causes our paradigm to earn me more tenure or money’ or ‘consistent with subjects I favor and disdain’ or finally and worse, ‘is able to explain everything I want explained’. Predictive strength, or even testable mechanism, are much stronger and less equivocal elements of hypothesis. So, these two features of hypothesis defined by Schick and Vaughn range from useless to malicious in terms of real contribution to scientific study. These two bad philosophies of science (social skepticism) inevitably serve to produce a fallacy called explanitude – a condition wherein the hypothesis is considered stronger the more historical observations it serves to explain and the more flexibly it can predict or explain future observations. Under ethical skepticism, this qualification of an alternative or especially a null hypothesis is a false notion.

Finally, parsimony and conservatism are functionally the same thing – conserving and leveraging prior art along a critical path of necessary incremental conjecture risk. This is something which few people aside from experienced patent filers understand. If I constrain my conjecture to simply one element of risk along a critical path of syllogism, I am both avoiding ‘excessive numbers of entities’ and exercising ‘fit with existing recognized knowledge systems’ at the same time. Otherwise, I am proposing an orphan question, and although it might appear to be science, p-values and all, it is not. Thus, a lack of understanding on the part of Schick and Vaughn inside How to Think About Weird Things: Critical Thinking for a New Age, as to how true science works, misled them into believing that these two principles needed to be addressed separately. One is a fortiori with the other inside Parsimony (see below). Unless of course one is implying that ‘fit’ means ‘to comply’ (as the authors probably do, being that both authors are social skeptics and have no professional experience managing a lab) – then of course we are dealing with a completely different paradigm of science called sciebam: the only answers I will accept, until I die, are answers which help me improve or modify my grasp of how correct I am. The duty of a hypothesis is to inform about and address standing evidence and inference (Element 4 below), not to necessarily just conform to it. It should avoid beginning science by means of an orphan question, especially under a conflict of interest – and especially if that interest is ‘preservation of career reputation’. Thus the process of simply confirming standing theory, and the process of discovery, are often two different things altogether. This leverages around the critical discernment between what ethical skepticism calls science and sciebam.

Orphan Question

/philosophy : pseudoscience : sciebam/ : a question, purported to be the beginning of the scientific method, which is asked in the blind, without sufficient intelligence gathering or preparation research, and is as a result highly vulnerable to being manipulated or posed by means of agency. The likelihood of a scientifically valid answer being developed from this question process, is very low. However, an answer of some kind can almost always be developed – and is often spun by its agency as ‘science’. This form of question, while not always pseudoscience, is a part of a modified process of science called sciebam. It should only be asked when there truly is no base of intelligence or body of information regarding a subject. A condition which is rare.

         Sciebam

/philosophy : science : method : sciebam/ : (Latin: I knew) An alternative form of knowledge development, which mandates that science begins with the orphan/non-informed step of ‘ask a question’ or ‘state a hypothesis’. A non-scientific process which bypasses the first steps of the scientific method: observation, intelligence development and formulation of necessity. This form of pseudoscience/non-science presents three vulnerabilities:

First, it presumes that the researcher possesses substantially all the knowledge or framework they need, lacking only to fill in final minor gaps in understanding. This creates an illusion-of-knowledge effect on the part of the extended domain of researchers, as each bit of provisional knowledge is then codified as certain knowledge based upon prior confidence. Science can only progress thereafter through a series of shattering paradigm shifts.

Second, it renders science vulnerable to the possibility that, if the hypothesis, framework or context itself is unacceptable at the very start, then its researcher therefore is necessarily conducting pseudoscience. This no matter the results, nor how skillfully and expertly they may apply the methods of science. And since the hypothesis is now a pseudoscience, no observation, intelligence development or formulation of necessity are therefore warranted. The subject is now closed/embargoed by means of circular appeal to authority.

Finally, the question asked at the beginning of a process of inquiry can often prejudice the direction and efficacy of that inquiry. A premature or poorly developed question, and especially one asked under the influence of agency (not simply bias) – and in absence of sufficient observation and intelligence – can most often result quickly in a premature or poorly induced answer.

Science – ‘I learn’ = using deduction and inductive consilience to infer a novel understanding
Sciebam – ‘I knew’ = using abduction and panduction to enforce an existing interpretation

Real Hypothesis

Ethical skepticism proposes a different way of lensing the above elements. Under this philosophy of hypothesis development, I cannot make any implication of the ilk that ‘I knew’ the potential answer a priori. Such implication biases both the question asked, as well as the processes of inference employed. Rather, hypothesis development under ethical skepticism involves structure which is developed around the facets of Intelligence, Mechanism and Wittgenstein Definition/Domain. A hypothesis is neither a hunch, assumption, suspicion nor idea. Rather it is

       Hypothesis

/philosophy : skepticism : scientific method/ : a disciplined and structured incremental risk in inquiry, relying upon the co-developed necessity of mechanism and intelligence. A hypothesis necessarily features seven key elements which serve to distinguish it from non-science or pseudoscience.

The Seven Elements of Hypothesis

1.  Construct based upon necessity. A construct is a disciplined ‘spark’ (scintilla) of an idea, on the part of a researcher or type I, II or III sponsor, educated in the field in question and experienced in its field work. Once a certain amount of intelligence has been developed, as well as definition of causal mechanism which can eventually be tested (hopefully), then the construct becomes ‘necessary’ (i.e. passes Ockham’s Razor). See The Necessary Alternative.

2.  Wittgenstein definition and defined domain. A disciplined, exacting, consistent, conforming definition need be developed for both the domain of observation, as well as the underpinning terminology and concepts. See Wittgenstein Error.

3.  Parsimony. The resistance to expand explanatory plurality or descriptive complexity beyond what is absolutely necessary, combined with the wisdom to know when to do so. Conjecture along an incremental and critical path of syllogism. Avoidance of unnecessary orphan questions, even if apparently incremental in the offing. See The Real Ockham’s Razor. Three characteristic traits highlight a hypothesis which has been adeptly posed inside parsimony.

a. Is incremental and critical path in its construct – the incremental conjecture should be a reasoned, single stack and critical path new construct. Constructs should follow prior art inside the hypothesis (not necessarily science as a whole), and seek an answer which serves to reduce the entropy of knowledge.

b. Methodically conserves risk in its conjecture – no question may be posed without risk. Risk is the essence of hypothesis. A hypothesis, once incremental in conjecture, should be developed along a critical path which minimizes risk in this conjecture by mechanism and/or intelligence, addressing each point of risk in increasing magnitude or stack magnitude.

c. Posed so as to minimize stakeholder risk – (i.e. precautionary principle) – a hypothesis should not be posed which suggests that a state of unknown regarding risk to impacted stakeholders is acceptable as central aspect of its ongoing construct critical path. Such risk must be addressed first in critical path as a part of 3. a. above.

4.  Duty to Reduce, Address and Inform. A critical element and aspect of parsimony regarding a scientific hypothesis. The duty of such a hypothesis is to expose and address in its syllogism all known prior art, in terms of both analytical intelligence obtained and direct study mechanisms and knowledge. If information associated with a study hypothesis is unknown, it should simply be mentioned in the study discussion. However, if countermanding information is known, or a key assumption of the hypothesis appears magical, the structure of the hypothesis itself must both inform of its presence and as well address its impact. See Methodical Deescalation and The Warning Signs of Stacked Provisional Knowledge.

Unless a hypothesis offers up its magical assumption for direct testing, it is not truly a scientific hypothesis. Nor can its conjecture stand as knowledge.

Pseudo-hypothesis

/philosophy : pseudoscience/ : A pseudo-hypothesis explains everything, anything and nothing, all at the same time.

A pseudo-hypothesis fails in its duty to reduce, address or inform. A pseudo-hypothesis states a conclusion and hides its critical path risk (magical assumption) inside its set of prior art and predicate structure. A hypothesis on the other hand reduces its sets of prior art, evidence and conjecture and makes them manifest. It then addresses critical path issues and tests its risk (magical assumption) as part of its very conjecture accountability. A hypothesis reduces, exposes and puts its magical assertion on trial. A pseudo-hypothesis hides its magical assumptions, woven into its epistemology, and places nothing at risk thereafter. A hypothesis is not a pseudo-hypothesis as long as it is ferreting out its magical assumptions and placing them into the crucible of accountability. Once this process stops, the hypothesis has become an Omega Hypothesis. Understanding this difference is key to scientific literacy.

Grant me one hidden miracle and I can explain everything.

5.  Intelligence. Data is denatured into information, and information is transmuted into intelligence. Inside decision theory and clandestine operation practices, intelligence is the first level of illuminating construct upon which one can make a decision. The data underpinning the intelligence should necessarily be probative and not simply reliable. Intelligence skills combine a healthy skepticism towards human agency, along with an ability to adeptly handle asymmetry, recognize probative data, assemble patterns, increase the reliability of incremental conjecture and pursue a sequitur, salient and risk mitigating pathway of syllogism. See The Role of Intelligence Inside Science.

6.  Mechanism. Every effect in the universe is subject to cause. Such cause may be mired in complexity or agency; nonetheless, reducing a scientific study into its components and then identifying underlying mechanisms of cause to effect – is the essence of science. A pathway from which cause yields effect, which can be quantified, measured and evaluated (many times by controlled test) – is called mechanism. See Reduction: A Bias for Understanding.

7.  Exposure to Accountability.  This is not peer review. During the development phase, a period of time certainly must exist in which a hypothesis is held proprietary so that it can mature – and indeed fake skeptics seek to intervene before a hypothesis can mature, so as to eliminate it via ‘Occam’s Razor’ (sic) before it can be researched. Nonetheless, a hypothesis must be crafted such that its elements 1 – 6 above can be held to the light of accountability by 1. skepticism (so as to filter out sciebam and fake method) which seeks to improve the strength of hypothesis (this is an ‘ally’ process and not peer review), and 2. stakeholders who are impacted or exposed to its risk. Hypothesis which imparts stakeholder risk, yet which is held inside proprietary cathedrals of authority – is not science, rather oppression by court definition.

It is developed from a construct – which is a type of educated guess (‘scintilla’ in the chart below). One popular method of pseudoscience is to bypass the early to mid disciplines of hypothesis and skip right from data analysis to accepted proof. This is no different ethically, from skipping right from a blurry photo of Blobsquatch, to conjecture that such cryptic beings are real and that they inhabit all of North America. It is simply a pattern in some data. However, in this case, blurry data which happened to fit or support a social narrative.

A hypothesis reduces, exposes and puts its magical assertion on trial.
A pseudo-hypothesis hides its magical assumptions woven into its epistemology and places nothing at risk thereafter.

Another method of accomplishing inference without due regard to science, is to skip past falsifying or countermanding information and simply ignore it. This skirts what is called The Duty to Address and Inform. A hypothesis, as part of its parsimony, cannot be presented in the blind – bereft of any awareness of prior art and evidence. To undertake such promotional activity is a sales job and not science. Why acknowledge depletion of plant food nutrients on the part of modern agriculture, when you have a climate change message to push? Simply ignore that issue and press your hypothesis anyway (see Examples A and B below).

However, before we examine that and other examples of such institutional pseudoscience, let’s first look at what makes for sound scientific hypothesis. Inside ethical skepticism, a hypothesis bears seven critical elements which serve to qualify it as science.

These are the seven elements which qualify whether or not an alternative hypothesis becomes real science. They are numbered in the flow diagram below and split by color into the three discipline streams of Indirect Study (Intelligence), Parsimony and Conservatism (Knowledge Continuity) and Direct Study (Mechanism).

A Few Examples

In the process of defining this philosophical basis over the years, I have reviewed several hundred flawed and agency-compliant scientific studies. Among them existed several key examples, wherein the development of hypothesis was weak to non-existent, yet the conclusion of the study was accepted as ‘finished science’ from its publishing onward.

Most institutional pseudoscience spins its wares under a failure to address and/or inform.

If you are going to accuse your neighbor of killing your cat, if their whereabouts were unknown at the time, then your hypothesis does not have to address such an unknown. Rather merely acknowledge it (inform). However much your neighbor disliked your cat (intelligence), if your neighbor was in the Cayman Islands that week, your hypothesis must necessarily address such mechanism. You cannot ignore that fact simply because it is inconvenient to your inductive/abductive evidence set.

Almost all of these studies skip the hypothesis discipline by citing a statistical anomaly (or worse, a lack thereof), and employing a p-value masquerade as a means to bypass the other disciplines of hypothesis and skip right to the peer review and acceptance steps of the scientific method. Examples A and B below fail in their duty to address critical mechanism, while Examples B and C fail in their duty to inform the scientific community of all the information they need in order to tender peer review. Such studies end at the top left hand side of the graphic above and call the process done, based upon one scant set of statistical observation – in ethical reality not much more credible in strength than a single observation of Bigfoot or a UFO.

Example A – Failure in Duty to Address Mechanism

Increasing CO2 threatens human nutrition. Meyers, Zanobetti, et. al. (Link)

In this study, and in particular Extended Data Table 1, a statistical contrast was drawn between farms located in elevated CO2 regions versus ambient CO2 regions. The contrast resulted in a p-value significance indicating that levels of iron, zinc, protein and phytate were lower in areas where CO2 concentrations exhibited an elevated profile versus the global ambient average. This study was in essence a statistical anomaly; and while part of science, it should never be taken to stand as either a hypothesis or, even worse, a conclusion – as is indicated in the social skeptic ear-tickling and sensationalist headline title of the study ‘Increasing CO2 threatens human nutrition’. The study has not even passed the observation step of science (see The Elements of Hypothesis graphic above). Who allowed this conclusion to stand inside peer review? There are already myriad studies showing that modern (1995+) industrial farming practices serve to dramatically reduce crop nutrient levels.4 Industrial farms tend to be nearer to heavy CO2 output regions. Why was this not raised inside the study? What has been accomplished here is to merely hand off a critical issue of health risk, for placement into the ‘climate change’ explanitude bucket, rather than its address and potential resolution. Since the authors neither examined the above alternative, nor raised it inside their Discussion section, one is left to wonder whether they care about either climate change or nutrition dilution – viewing both instead as political footballs by which to further their careers. It is not that they have to confirm this existing study direction, however they should at least acknowledge this in their summary of analytics and study limitations. The authors failed in their duty to address standing knowledge about industrial farming nutrient depletion. This would have never made it past my desk. Grade = C (good find, harmful science).
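To make the confounding concern concrete, here is a toy simulation of my own construction, using entirely synthetic numbers: if farming practice both depletes zinc and clusters in higher-CO2 regions, a naive elevated-versus-ambient CO2 contrast registers a large nutrient deficit even though CO2 itself is given no effect at all in the simulation.

```python
# Toy simulation (synthetic data, my own construction) of the confounding worry
# raised above: farming practice drives both the CO2 profile of a region and the
# nutrient depletion, while CO2 itself is given no direct effect on zinc.
import random
import statistics

random.seed(42)

def simulate_farm(industrial: bool):
    """Return (co2_ppm, zinc_mg_per_kg) for one synthetic farm."""
    co2 = random.gauss(440 if industrial else 405, 10)   # industrial regions run higher CO2
    zinc = random.gauss(25 if industrial else 33, 3)     # depletion driven by practice only
    return co2, zinc

farms = [simulate_farm(industrial=(i % 2 == 0)) for i in range(400)]
elevated = [zinc for co2, zinc in farms if co2 >= 425]
ambient = [zinc for co2, zinc in farms if co2 < 425]

print(f"mean zinc, elevated-CO2 farms : {statistics.mean(elevated):.1f} mg/kg")
print(f"mean zinc, ambient-CO2 farms  : {statistics.mean(ambient):.1f} mg/kg")
print("the apparent 'CO2 effect' is entirely an artifact of the lurking variable")
```

The p-value such a contrast produces is genuine; what it cannot do is select between the CO2 explanation and the farming-practice explanation – which is precisely the duty-to-address failure described above.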

Example B – Failure in Both Duty to Inform of Intelligence and Duty to Address Mechanism

Possible future impacts of elevated levels of atmospheric CO2 on human cognitive performance and on the design and operation of ventilation systems in buildings. Lowe, Heubner, et. al. (Link)

This study cites its review of the immature body of research surrounding the relationship between elevated CO2 and cognitive ability. Half of the studies reviewed indicated that human cognitive performance declines with increasing CO2 concentrations. The problem entailed in this study, similar to the Zanobetti study above in Example A, is that it does not develop any underlying mechanism which could explain how elevated CO2 directly impacts cognitive performance. This is not a condition of ‘lacking mechanism’ (as sometimes the reality is that one cannot assemble such), rather one in which the current mechanism paradigm falsifies the idea. The study should be titled ‘Groundbreaking new understanding on the toxicity of carbon dioxide’. This is of earth-shattering import. There is a lot of science which needs to be modified if this study were proven correct at face value. The sad reality is that the study does not leverage prior art in the least. As an experienced diver, I know that oxygen displacement on the order of 4 percentage points is where the first slight effects on cognitive performance come into play. Typical CO2 concentrations in today’s atmosphere are in the range of 400 ppm – not even in the relevant range for an oxygen displacement argument. However, I would be willing to accept this study in sciebam, were they to offer another mechanism of direct effect; such as ‘slight elevations in CO2 and climate temperature serve to toxify the blood’, for example. But no such mechanism exists – in other words, CO2 is only a toxicant as it becomes an asphyxiant.5 This study bears explanitude: it allows for an existing paradigm to easily blanket-explain an observation which might have otherwise indicated a mechanism of risk – such as score declines being attributable to increases in encephalitis, not CO2. It violates the first rule of ethical skepticism, If I was wrong, would I even know it? The authors failed in their duty to inform about the known mechanisms of CO2 interaction inside the body. As well, this study was a play for political sympathy and club rank. Couching this pseudo-science with the titular word ‘Possible’ is no excuse to pass this off as science. Grade = D (inexpert find, harmful science).
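The oxygen-displacement point above amounts to two lines of arithmetic; the ~4 percentage-point threshold is the figure cited in the text, and the indoor comparison value below is an assumed example.

```python
# Arithmetic behind the oxygen-displacement point above (400 ppm of CO2 occupies a
# tiny fraction of the atmosphere compared to the ~4 percentage points of
# displacement cited as the onset of slight cognitive effects).
CITED_DISPLACEMENT_POINTS = 4.0          # percentage points, as cited in the text

def co2_share_points(co2_ppm):
    """Percentage points of the atmosphere occupied by CO2 at a given concentration."""
    return co2_ppm / 10_000.0            # 10,000 ppm = 1 percentage point

for label, ppm in [("outdoor ambient (today)", 400.0),
                   ("poorly ventilated room (assumed)", 2000.0)]:
    pts = co2_share_points(ppm)
    print(f"{label:>32}: {ppm:6.0f} ppm -> {pts:.2f} points, "
          f"versus the ~{CITED_DISPLACEMENT_POINTS:.0f} points cited")
```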

Example C – Orphan Question, Failing in All Seven Elements of Hypothesis, and Especially Duty to Inform of Intelligence

A Population-Based Study of Measles, Mumps, and Rubella Vaccination and Autism. Madsen, Hviid, et. al. (Link)

This is the notorious ‘Danish Study’ of the relationship between the MMR vaccination and observed rates of psychiatrically confirmed autism diagnoses inside the Danish Psychiatric Central Register. These are confirmed diagnoses of autism spectrum disorders (Autism, ADD/PDD and Asperger’s) over a nine year tracking period (see Methodology and Table 2). In Denmark, children are referred to specialists in child psychiatry by general practitioners, schools, and psychologists if autism is suspected. Only specialists in child psychiatry diagnose autism and assign a diagnostic code, and all diagnoses are recorded in the Danish Psychiatric Central Register. The fatal flaw in this study resided in the data domain it analyzed and in the resulting study design. 77% of autism cases are not typically diagnosed until past 4.5 years of age. Based upon a chi-squared cumulative distribution fit at each individual mean diagnosis age (μ) below from the CDC, with 1.2 years of spread and 12 months of Danish bureaucratic recording lag, the chance of detection works out to roughly .10 + .08 + .05 = 0.23 under CDC statistical practices – or a 77% chance of a false negative (miss). The preponderance of diagnoses in the ADD/PDD and Asperger’s sets serves to weight the average age of diagnosis well past the average age of the subjects in this nine year study – tracking patients from birth (average age = 4.5 years at study end). See graphic to the right, which depicts the Gompertzian age-arrival distribution function embedded inside this study’s population; an arrival distribution which Madsen and Hviid should have accounted for – but did not. This is a key warning flag of exclusion bias. From the CDC data on this topic, the mean ages of diagnosis for ASD spectrum disorders in the United States – where particular focus has tightened this age data in recent years – are:6

   •  Autistic disorder: 3 years, 10 months
   •  ASD/pervasive developmental disorder (PDD): 4 years, 8 months
   •  Asperger disorder: 5 years, 7 months

Note: A study released 8 Dec 2018 showed a similar effect through data manipulation-exclusion techniques in the 2004 paper by DeStefano et al.; Age at first measles-mumps-rubella vaccination in children with autism and school-matched control subjects: a population-based study in metropolitan Atlanta. Pediatrics 2004;113:259-266.7

Neither did the study occur in a society which has observed a severe uptick in autism, nor during a timeframe which has been most closely associated with autism diagnoses, (2005+).8 Of additional note is the fact that school professionals refer non-profound autism diagnosis cases to the specialists in child psychiatry, effectively ensuring that all such diagnoses occurred after age 5, by practice alone. Exacerbating this is the fact that a bureaucratic infrastructure will be even more slow in/fatal in posting diagnoses to a centralized system of this type. These two factors alone will serve to force large absences in the data, which mimic confirmatory negatives. The worse the data collection is, the better the study results. A fallacy called utile absentia. The study even shows the consequent effect inversion (vaccines prevent autism), incumbent with utile absentia. In addition, the overt focus on the highly precise aspects of the study, and away from its risk exposures and other low-confidence aspects and assumptions, is a fallacy called idem existimatis. I will measure the depth of the water into which you are cliff diving, to the very millimeter – but measure the cliff you are diving off of, to the nearest 100 feet. The diver’s survival is now an established fact of science by the precision of the water depth measure alone.

In other words this study did not examine the relevant domain of data acceptable to underpin the hypothesis which it purported to support. Forget mechanism and parsimony to prior art – as those waved bye-bye to this study a long time ago. Its conclusions were granted immunity and immediate acclaim because they fit an a priori social narrative held by their sponsors. It even opened with a preamble citing that it was a study to counter a very disliked study on the part of its authors. Starting out a process purported to be of science, by being infuriated about someone else’s study results is not science, not skepticism, not ethical.

Accordingly, this study missed 80% of its relevant domain data. It failed in its duty to inform the scientific community of peers. It is almost as if a closed, less-exposed bureaucracy were chosen precisely because of its ability to both present reliable data, and yet at the same time screen out the maximum number of positives possible. Were I a criminal, I could not have selected a more sinister means of study design myself. This was brilliance in action. Grade = F (diabolical study design, poor science).
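The censoring argument inside Example C can be approximated in a few lines of code. The sketch below is my own construction: it substitutes a normal approximation for the chi-squared/Gompertzian fit described above, uses the 1.2-year spread and ~12 month registry lag mentioned in the text as assumptions, and averages the three categories without prevalence weighting.

```python
# Rough sketch (my own construction) of the censoring argument above: take the CDC
# mean diagnosis ages quoted, add the ~12 month registry recording lag described in
# the text, assume ~1.2 years of spread, and ask what fraction of eventual diagnoses
# could have landed in the register by the cohort's average age at study end.
from statistics import NormalDist

MEAN_DIAGNOSIS_AGE = {            # years, per the CDC figures quoted above
    "Autistic disorder": 3.83,    # 3 years, 10 months
    "ASD/PDD":           4.67,    # 4 years, 8 months
    "Asperger disorder": 5.58,    # 5 years, 7 months
}
RECORDING_LAG = 1.0               # assumed registry/bureaucratic delay, years
SPREAD_YEARS = 1.2                # assumed standard deviation of diagnosis age
AVG_FOLLOWUP_AGE = 4.5            # average subject age at study end

detection_rates = []
for label, mean_age in MEAN_DIAGNOSIS_AGE.items():
    p = NormalDist(mean_age + RECORDING_LAG, SPREAD_YEARS).cdf(AVG_FOLLOWUP_AGE)
    detection_rates.append(p)
    print(f"{label:>18}: ~{p:.0%} of eventual cases recorded by age {AVG_FOLLOWUP_AGE}")

avg = sum(detection_rates) / len(detection_rates)
print(f"unweighted average detection ~{avg:.0%}; roughly {1 - avg:.0%} of eventual"
      " cases are censored and read as false negatives")
```

Even with these crude assumptions, the sketch lands in the same neighborhood as the ~0.23 detection / ~77% false-negative figure argued above.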

All of the above studies failed in their duty to inform. They failed in their responsibility to communicate the elements of hypothesis to the outside scientific community. They were sciebam – someone asked a question, poorly framed and without any background research – and by golly they got an answer. They sure got an answer. They were given a free pass, because they conformed to political will. But they were all bad science.

It is the duty of the ethical skeptic to be aware of what constitutes true hypothesis, and winnow out those pretenders who vie for a claim to status as science.

epoché vanguards gnosis

The Ethical Skeptic, “The Elements of Hypothesis”; The Ethical Skeptic, WordPress, 4 Mar 2019; Web, https://wp.me/p17q0e-94J

 

December 13, 2018 Posted by | Ethical Skepticism | , | Leave a comment

Embargo of The Necessary Alternative is Not Science

Einstein was desperate for a career break. He had a 50/50 shot – and he took it. The necessary alternative he selected, fixed c, was one which was both purposely neglected by science, and yet offered the only viable alternative to standing and celebrated club dogma. Dogma which had for the most part, gone unchallenged. Developing mechanism for such an alternative is the antithesis of religious activity. Maturing the necessary alternative into hypothesis, is the heart and soul of science.

Mr. Einstein You’ll Never Amount to Anything You Lazy Dog

Albert Einstein introduced, in a 1905 scientific paper, the relationship proposed inside the equation E = mc²: the concept that the system energy of a body (E) is equal to the mass (m) of that body times the speed of light squared (c²). That same year he also introduced a scientific paper outlining his theory of special relativity. Most of the development work (observation, intelligence, necessity, hypothesis formulation) entailed in these papers was conducted during his employment as a technical expert – class III (aka clerk) at the Federal Office for Intellectual Property in Bern, Switzerland; colloquially known as the Swiss patent office.1 There, bouncing his ideas off a cubicle-mate (si vis) and former classmate, Michele Angelo Besso, an Italian engineer, Einstein found the time to further explore ideas that had taken hold during his studies at the Swiss Federal Polytechnic School. He had been a fan of his instructor, physicist Heinrich Friedrich Weber – the more notable of his two top-engaged professors at Swiss Federal Polytechnic. Weber had stated two things which left an impression upon the budding physicist.2

“Unthinking respect for authority is the enemy of truth.” ~ physicist Heinrich Friedrich Weber

As well, “You are a smart boy, Einstein, a very smart boy. But you have one great fault; you do not let yourself be told anything.” quipped Weber as he scolded Einstein. His mathematics professor, Hermann Minkowski, scoffed to his peers about Einstein, relating that he found Einstein to be a “lazy dog.” In a similar vein, his instructor physicist Jean Pernet admonished the C-average (82% or 4.91 of 6.00 GPA) student “[I would advise that you change major to] medicine, law or philosophy rather than physics. You can do what you like. I only wish to warn you in your own interest.” Pernet’s assessment was an implication to Einstein that he did not face a bright future, should he continue his career in pursuit of physics. His resulting mild ostracizing from science was of such an extent that Einstein’s father later had to petition, in an April 1901 letter, for a university to hire Einstein as an instructor’s assistant. His father wrote “…his idea that he has gone off tracks with his career & is now out of touch gets more and more entrenched each day.” Unfortunately for the younger Einstein, his father’s appeal fell upon deaf ears. Or perhaps fortuitously, as Einstein finally found employment at the Swiss patent office in 1902.3

However, it was precisely this penchant for bucking standing authority, which served to produce fruit in Einstein’s eventual physics career. In particular, Einstein’s youthful foible of examining anew the traditions of physical mechanics, combined with perhaps a dose of edginess from being rejected by the institutions of physics, were brought to bear effectively in his re-assessment of absolute time – absolute space Newtonian mechanics.

Einstein was not ‘doubting’ per se, which is not enough in itself. Rather he executed the discipline of going back and looking – proposing an alternative to a ruling dogma based upon hard-nosed critical path induction work, and not through an agency desire to lazily pan an entire realm of developing ideas through abduction (panduction). No, social skeptics, Einstein did not practice your form of authority-enforcing ‘doubt’. Rather it was the opposite.

He was not doubting, rather executing work under a philosophical value-based principle called necessity (see Ethical Skepticism – Part 5 – The Real Ockham’s Razor). Einstein was by practice, an ethical skeptic.

Einstein was not lazy after all, and this was a miscall on the part of his rote-habituated instructors (one common still today). Einstein was a value economist. He applied resources into those channels for which they would provide the greatest beneficial effect. He chose to not waste his time upon repetition, memorization, rote procedure and exercises in compliance. He was the ethical C student – the person I hire before hiring any form of cheating/memorizing/imitating A or B student. And in keeping with such an ethic, Einstein proposed in 1905, 3 years into his fateful exile at the Swiss patent office, several unprecedented ideas which were subsequently experimentally verified in the ensuing years. Those included the physical basis of 3 dimensional contraction, speed and gravitational time dilation, relativistic mass, mass–energy equivalence, a universal speed limit (for matter and energy but not information or intelligence) and relativity of simultaneity.4 There has never been a time wherein I reflect upon this amazing accomplishment and lack profound wonder over its irony and requital in Einstein’s career.

The Necessary Alternative

If the antithesis of your alternative can, in one observation, serve to falsify your preferred alternative or the null, then that antithesis is the necessary alternative.

But was the particular irony inside this overthrow of Newtonian mechanics all that unexpected or unreasonable? I contend that it was not only needed, but the cascade of implications leveraged by c-invariant physics was the only pathway left for physics at that time. It was the inevitable, and necessary, alternative. The leading physicists, as a very symptom of their institutionalization, had descended into a singular dogma. That dogma held as its centerpoint, the idea that space-time was the fixed reference for all reality. Every physical event which occurred inside our realm hinged around this principle. Einstein, in addressing anew such authority based thinking, was faced with a finite and small set of alternative ideas which were intrinsically available for consideration. That is to say – the set of ideas only included around 4 primary elements, which could alternately or in combination, be assumed as fixed, dependent, or independently variable. Let’s examine the permutation potential of these four ideas: fixed space, fixed time, fixed gravity and/or fixed speed of light. Four elements. The combinations available for such a set are 14, as related by the summation of three combination functions:

C(4,1) + C(4,2) + C(4,3) = 4 + 6 + 4 = 14

Reasoned conjecture – given that combinations of three or four fixed elements were highly unlikely or unstable – served to bound the set of viable alternative considerations to even fewer than 14: maybe 6 very logical alternatives at most (the second function, C(4,2), above). However, even more reductive, essentially Einstein would only need to start by selecting from one of the four base choices, as represented by the first combination function above, C(4,1). Thereafter, if he chose correctly, he could proceed onward to address the other 3 factors depending upon where the critical path led. But the first choice was critical to this process. One of the following four had to be chosen (see the enumeration sketch after the list below), and two were already in deontological doubt, in Einstein’s mind.

•  Fixed 3 dimensional space (x, y, z)
•  Fixed time (t)
•  Fixed gravitation relative to mass (g-mass)
•  Fixed speed of light (c)
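
The arithmetic above is easy to confirm. Below is a minimal Python sketch (my own illustration, not part of the original analysis) which enumerates the subsets of one, two or three fixed elements drawn from the four candidates just listed, confirming the count of 14 and listing the six two-element candidates from which the choice was ultimately drawn.

from itertools import combinations

# The four candidate elements which could be assumed fixed (per the list above)
elements = ["space (x,y,z)", "time (t)", "gravitation (g-mass)", "speed of light (c)"]

# C(4,1) + C(4,2) + C(4,3) = 4 + 6 + 4 = 14 candidate fixed-element sets
for k in (1, 2, 3):
    subsets = list(combinations(elements, k))
    print(f"C(4,{k}) = {len(subsets)}")
    for s in subsets:
        print("   " + " + ".join(s))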

Ultimately then, only two choices existed, if one supposes a maximum of two fixed elements as possible (per below). Indeed this ended up being the plausal-set for Einstein. The necessary alternatives, one of which had been essentially embargoed by the science authorities of the time, were each a combination of two of the above four elements. Another combination of two was currently in force (fixed space and time).

In other words, now we have reduced the suspect set to two murder suspects – Colonel Mustard and Professor Plum, and standing dogma was dictating that only Colonel Mustard could possibly be considered as the murderer. To Einstein this was at worst, an even bet.

This is the reason why we have ethical skepticism. This oft-repeated condition – wherein false skepticism is applied to underpin authority-based denial in an a priori context, in order to enforce one mandatory conclusion at the expense of another or all others – is a situation ripe for deposing. Einstein grasped this. The idea that space and time were fixed references was an enforced dogma on the part of those wishing to strengthen their careers in a social club called physics. Everyone was imitating everyone else, and trying to improve club ranking through such rote activity. The first two-element selection, stemming of course from strong inductive work by Newton and others, was a mechanism of control called an Einfach Mechanism (see The Tower of Wrong), or

Omega Hypothesis HΩ – the answer which has become more important to protect than science itself.

•  Fixed 3 dimensional space (x, y, z)
•  Fixed time (t)

Essentially, Einstein’s most logical alternative was to assume the speed of light as fixed first. By first choosing a fixed speed of light as his reference, Einstein had journeyed down a hypothesis reduction pathway which was both necessary and inevitable. It was the other murder suspect in the room, and as well stood as the rebellious Embargo Hypothesis option.

Embargo Hypothesis Hξ – the option which must be forbidden at all costs and before science even begins.

•  Fixed gravitation relative to mass (g-mass)
•  Fixed speed of light (c)

But this Embargo Hypothesis was also the necessary alternative, and Einstein knew this. One can argue both sides of the contention that the ’embargo’ of these two ideas was one of agency versus mere bias. In this context and for purposes of this example, both agency and bias are to be considered the same embargo principle. In many/most arguments however, they are not the same thing.

The Necessary Alternative

/philosophy : Ockham’s Razor : Necessity/ : an alternative which has become necessary for study under Ockham’s Razor because it is one of a finite, constrained and very small set of alternative ideas intrinsically available to provide explanatory causality or criticality inside a domain of sufficient unknown. This alternative does not necessarily require inductive development, nor proof and can still serve as a placeholder construct, even under a condition of pseudo-theory. In order to mandate its introduction, all that is necessary is a reduction pathway in which mechanism can be developed as a core facet of a viable and testable hypothesis based upon its tenets.

The assertion ‘there is a God’ does not stand as the necessary alternative to the assertion ‘there is no God’. Even though the argument domain constraints are similar, these constructs cannot be developed into mechanism and testable hypothesis. So, neither of those statements stands as the necessary alternative. I am sorry, but neither of those statements is one of science. They are Wittgenstein bedeutungslos – meaningless: a proposition or question which resides upon a lack of definition, or a presumed definition which contains no meaning other than in and of itself.

However in exemplary contrast, the question of whether life originated on Earth (abiogenesis) or off Earth (panspermia) does present a set of necessary alternatives. Even though both ideas are in their infancy, they can both ultimately be developed into mechanism and a testing critical path. The third letter of the DNA codon (see Exhibit II below) is one such test of the necessary alternatives, abiogenesis and panspermia. There is actually a third alternative as well, another Embargo Hypothesis (in addition to panspermia) in this case example – that of Intervention theory. But we shall leave that (in actuality also necessary) alternative discussion for another day, as it comes with too much baggage to be of utility inside this particular discourse.

Einstein chose well from the set of two necessary alternatives, as history proved out. But the impetus which drove the paradigm change, from the standing dogma to Einstein’s favored Embargo Hypothesis, might not have been as astounding a happenstance as it might appear at first blush. Einstein chose red, when everyone and their teaching assistant insisted that one must choose blue. All the ramifications of a fixed speed of light (and fixed gravitation, relative only to mass) unfolded thereafter.

Einstein was desperate for a break. He had a 50/50 shot – and he took it.

Example of Necessity: Panspermia versus Abiogenesis

An example of this condition – wherein a highly constrained set of alternatives (two in this case) inside a sufficient domain of unknown forces the condition of dual necessity – can be found inside the controversy around the third letter (base) of the DNA codon. A DNA codon is the word inside the sentence of DNA. A codon is a series of 3 nucleotides (XXX of A, C, T or G) which carries a ‘definition’ corresponding to a specific amino acid or protein-function, transcribed from the nucleus and decoded by the cell in its process of assembling body tissues. It is an intersection on the map of the organism. Essentially, the null hypothesis stands that the 3rd letter (nucleotide) digit of the codon, despite its complex and apparently systematic methodical assignment codex, is the result of natural stochastic-derivation chemical happenstance during the first 300 million years of Earth’s existence (not a long time). The idea being that life existed on a 2 letter DNA codon (XX) basis for eons, before a 3 letter (XXX) basis evolved (shown in Exhibit II below). The inductive evidence that such a 3-letter-derived-from-2 assignment codex is beyond plausibility – given the improbability of its occurrence and the lack of time and influencing mechanism during which that improbability could have happened – supports its also-necessary alternative.

In this circumstance, the idea that the DNA codon third digit based codex, was not a case of 300 million year fantastical and highly improbable happenstance, but rather existed inside the very first forms of life which were to evolve (arrive) on Earth, is called panspermia. The necessary alternative panspermia does not involve or hinge upon the presence of aliens planting DNA on Earth, rather that the 3 letter codon basis was ‘unprecedented by context, complex and exhibiting two additional symmetries (radial and block) on top of that’ at the beginning of life here on Earth, and therefore had to be derived from a source external to the Earth. Note, this is not the same as ‘irreducible complexity’, a weak syllogism employed to counter-argue evolution (not abiogenesis) – rather it is a case of unprecedentable complexity. A much stronger and more deductive argument. It is the necessary alternative to abiogenesis. It is science. Both alternatives are science.
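
For orientation (this sketch is mine, not part of the petition below), the standard genetic code can be loaded in a few lines of Python and queried for the features discussed throughout this article: the 64 slots, the methionine start codon, the three stop codons, and third-base degeneracy. The compact string encoding below is simply the standard NCBI code (translation table 1) laid out in TCAG order.

from collections import defaultdict

# Standard genetic code (NCBI translation table 1), 64 codons enumerated in TCAG order
bases = "TCAG"
codons = [b1 + b2 + b3 for b1 in bases for b2 in bases for b3 in bases]
amino  = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codex  = dict(zip(codons, amino))                 # e.g. codex["ATG"] == "M" (methionine, the start)

print("stop codons:", [c for c, aa in codex.items() if aa == "*"])   # TAA, TAG, TGA

# Third-base degeneracy: boxes of four codons sharing their first two letters which code one amino acid
boxes = defaultdict(set)
for c, aa in codex.items():
    boxes[c[:2]].add(aa)
fourfold = sorted(p for p, aas in boxes.items() if len(aas) == 1)
print("four-fold degenerate boxes:", fourfold)    # eight of them - the 'synonym based code groups' of Exhibit I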

The key in terms of Ockham’s Razor plurality is this:
In order to provide hypothesis which aligns abiogenesis as a sufficient explanatory basis for
what we see in the fossil record – we must dress it up so that it performs in artificial manipulation,
exactly as panspermia would perform with no manipulation at all.
This renders panspermia a legitimate and necessary hypothesis.

This circumstance elicits the contrathetic impasse, a deliberation wherein a conundrum exists solely because authority is seeking to enforce a single answer at the expense of all others, or forbid one answer at the expense of science itself. The enforced answer is the Omega Hypothesis and the forbidden alternative is the Embargo Hypothesis. And while of course abiogenesis must stand as the null hypothesis (it can be falsified but never really proven) – that does not serve to make it therefore true. Fake skeptics rarely grasp this.

Therefore, the necessary alternative – that the DNA (XXX) codex did not originate on Earth – is supported by the below petition for plurality, comprising five elements of objective inference (A – E below). This systematic codex is one which cannot possibly be influenced (as evolution is) by chemical, charge, handedness, use or employment, epigenetic or culling factors (we cannot cull XX codex organisms to make the survival group more compatible for speciating into XXX codex organisms). Nor do we possess influences which can serve to evolve the protein-based start and silence-based stop codons. It can only happen by accident or deliberation. This is an Einstein moment.

Omega Hypothesis HΩ – the third letter of the DNA codex evolved as a semi-useless appendage, in a single occurrence, from a 2 letter codex basis – featuring radial symmetry, block assignment symmetry and molecule complexity to 2nd base synchrony – only upon Earth, in a 1.6 x 10^-15 (1 of 6 chance across a series of 31 pairings, across the potential permutations of (n – 1) proteins which could be assigned) chance, during the first 300 million years of Earth’s existence. Further, the codex is exceptionally optimized for maximizing effect inside proteomic evolution. Then evolution of the codex stopped for an unknown reason and has never happened again for 3.8 billion years.

Stacking of Entities = 10 stacked critical path elements. Risk = Very High.

Embargo Hypothesis Hξ – the three letter codex basis of the DNA codon pre-existed the origination of life on Earth, arrived here preserved via some natural celestial mechanism, and did not/has not evolved for the most part, save for slight third base degeneracy. Further, the codex is exceptionally optimized for maximizing effect inside proteomic evolution.

Stacking of Entities = 4 stacked critical path elements. Risk = Moderate.

Note: By the terms ‘deliberacy’ and ‘prejudice’ used within this article, I mean the ergodicity which is incumbent with the structure of the codex itself – both how it originated and what its result was in terms of compatibility with amino acids converting into life. There is no question of ergodicity here. The idea of ‘contrived’, on the other hand, involves a principle called agency. I am not implying agency here in this petition. A system can feature ergodicity, but not necessarily as a result of agency. To contend agency is the essence of intervention hypotheses. To add agency would constitute stacking of entities (a lot of them too – rendering that hypothesis weaker than even abiogenesis). This according to Ockham’s Razor (the real one).

The contention that panspermia merely shifts the challenges addressed by abiogenesis ‘off-planet’ is valid; however those challenges are not salient to the critical path and incremental question at hand. It is a red herring at this point. With an ethical skeptic now understanding that abiogenesis involves a relatively high-stacked alternative versus panspermia, let’s examine the objective basis for such inference in addition to this subjective ‘stacking of entities’ skepticism surrounding the comparison.

The Case for Off-Earth Codex Condensation

Did our DNA codex originate its structure and progressions first-and-only upon Earth, or was it inherited from an external mechanism? A first problem exists of course in maintaining the code once extant. However, upon observation, a more pressing problem exists in establishing just how the code came into being in the first place. Evolved or pre-existed? ‘Pre-existed by what method of origination then?’ one who enforces an Omega Hypothesis may disdainfully pontificate. I do not have to possess an answer to that question in order to legitimize the status of this necessary alternative. To pretend an answer to that question would constitute entity stacking. To block this necessary alternative (a pre-existing codex), however, based upon the rationale that it serves to imply something your club has embargoed or which you do not like personally – even if you have mild inductive support for abiogenesis – is a religion. Given that life most probably existed in the universe, and in our galaxy, already well before us – it would appear to me that panspermia, the Embargo Hypothesis, is the simplest explanation, and not abiogenesis. However, five sets of more objective inference serve to make this alternative a very strong one, arguably deductive in nature, versus abiogenesis’ relatively paltry battery of evidence.

As you read Elements A – E below, ask yourself the critical path question:
~ ~ ~
‘If the precise, improbable and sophisticated Elements A – E below were required
as the functional basis of evolution before evolution could even happen, then how did they themselves evolve?’

A.  The earliest-use amino acids and critical functions hold the fewest coding slots, and are exclusively dependent upon only the three letter codon form. Conjecture is made that life first developed upon a 2 letter codon basis and then added a third over time. The problem with this is that our first forms of life use essentially the full array of the 3 letter dependent codex, to wit: Aspartate 2 (XXX), Lysine 2 (XXX), Asparagine 2 (XXX), Stop 3 (XXX), Methionine-Start 1 (XXX), Glutamine 2 (XXX). Glutamic Acid and Aspartic Acid, which synthesize in the absolute earliest forms of thermophiles in particular, would have had to fight for the same 2 digit code, GA – which would have precluded the emergence of even the earliest thermal vent forms of life – under a 2 letter dependent codex (XX). These amino acids or codes were mandatory for the first life under any digit size context – and should accordingly hold the most two digit slots – yet they do not. As well, in the case where multiple codons are assigned to a single amino acid, the multiple codons are usually related. Even the most remote members of archaea, thermophilic archaea, use not only a full 3 letter codon dependent codex, but as well use proteins which reach well into both the adenine position 2 variant (XAX) and thymidine position 2 variant (XTX) groupings; ostensibly the most late-appearing sets of amino acids (see graphic in C. below and Table III at the end).5

It is also interesting to note that the three stop codons TAA-TAG-TGA all match into codex boxing with later appearing/more complex amino acid molecules – two of the three as members of the adenine position 2 variant (XAX) group. They box with the more complex and ‘later appearing’ amino acids tyrosine and tryptophan. The stop codes needed to be a GG, CC, GC, or at the very least a CT/TC two letter (XX) codon basis, in order to support an extended evolutionary period under a two letter codon basis. This was not the situation, as you can see in Exhibit III at the end. This would suggest that the stop codes, under a classic abiogenetic evolutionary construct, appeared late rather than first. Life would have had to evolve up until thermophilic archaea without stop codes in their current form. Then suddenly, life would have had to adopt a new stop codon basis (and then never make another change again in 3.6 billion years), changing horses in midstream. This previous XX codon form of life should be observable in our paleo record. But it is not.

Moreover, the use of the two digit codex is regarded by much of genomics as a degeneracy, ‘third base degeneracy’, and not an artifact of evolution.6 Finally, the codon ATA should, in a certain number of instances, equate to a start code, since it would have an evolutionary two digit legacy – yet it is never used to encode for Methionine. This is incompatible with the idea that methionine used to employ a two digit AT code. Likewise, Tyrosine and the non-amino stop code would have been in conflict over TA under the two digit codex. In fact, overall there should exist a relationship between the arrival of an amino acid’s use in Earth life and the number of slots it occupies in the codex, and there is not – neither to a positive slope, nor a negative one (a slot-count tally is sketched at the end of this section A.).7

One can broach the fact that protein reassignments were very possible and could explain away the apparent mid-stream introduction of an XXX codon dependency across all observable life. But then one must explain why it never ‘speciated’ again over 3.6 billion years, along with the apparent absence of XX codon life in the paleo record. This chasm in such a construct is critical and must be accommodated before abiogenesis can be fully developed as a hypothesis. In comparison, panspermia possesses no such critical obstacle (note, this is not a ‘gap’, as it relates to the critical path of the alternative and not merely circumstantial inductive inference).
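
As a companion to the slot-count observation above, here is a minimal sketch (my own illustration, against the standard code only) which tallies how many codex slots each amino acid occupies. A reader can then rank these counts against whichever order of amino acid arrival they prefer; the arrival ordering itself is the contested quantity and is deliberately not asserted in the code.

from collections import Counter

# Standard genetic code (NCBI translation table 1) in TCAG order, repeated so the sketch is self-contained
bases = "TCAG"
codons = [b1 + b2 + b3 for b1 in bases for b2 in bases for b3 in bases]
amino  = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codex  = dict(zip(codons, amino))

slots = Counter(aa for aa in codex.values() if aa != "*")    # codon slots held by each amino acid
for aa, n in sorted(slots.items(), key=lambda kv: kv[1]):
    print(f"{aa}: {n} slot(s)")
# Met and Trp hold 1 slot each; Leu, Ser and Arg hold 6; the remainder hold 2 to 4.
# Rank-correlating these counts against a proposed arrival order is the test suggested in A. above.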

Codon Radial Symmetry

B.  Evolution of the codex would necessarily have to occur from amino absences rather than positives. Of particular note is the secondary map function of the start and stop codons. Notice that the start of a DNA sentence begins with a specific protein (the polar charge molecule methionine-ATG). The end of a DNA sequence however, consists of no protein coding whatsoever (see TAA, TAG and TGA). In other words the DNA sentence begins with the same note every time, and ends with protein silence. tRNA evolved a way to accommodate the need for a positive, through the employment of proline-CCA during the protein assembly process. This is how a musical score works – it starts with a note, say an A440 tuning one, and ends with the silence dictated by the conductor’s wand. This is deliberacy of an empty set, as opposed to the stochasticity of positive notes – another appearance of the start code methionine could have sufficed as a positive stop-and-start code instead, and such a positive stop mechanism would succeed much better inside an evolutionary context. Why would an absence evolve into a stop code for transfer RNA? It could not, as ‘absence’ contains too much noise. It occurs at points other than simply a stop condition. The problem exists in that there is no way for an organism to survive, adapt, cull or evolve based upon its use of an empty set (protein silence). Mistakes would be amplified under such an environment. Evolution depends intrinsically upon logical positives only (nucleotides, mutations, death) – not empty sets.

C.  Features triple-symmetrical assignment, linear robustness, and ergodicity, along with a lack of both evolution and deconstructive chaos. This is NOT the same set of conditions as exists inside evolution, even though it may appear as such to a layman. This set of codex assignments features six principal challenges (C., and C. 1, 2a, 2b, 3, and 4 below). Specifically,

• radial assignment symmetry (B. Codon Radial Symmetry chart above),
• thymidine and adenine (XTX, XAX) second base preference for specific chemistries,
• synchrony with molecule complexity (C. Codon versus Molecule Complexity 64 Slots graphic to the right),
• block symmetry (C. Codon Second Base Block Symmetry table below) around the second digit (base),
• ergodicity, despite a lack of chemical feedback proximity or any ability for a codon base to attract a specific molecule chemical profile or moiety, and
• lack of precedent from which to leverage.

These oddities could not ‘evolve’ as they have no basis to evolve from. The structure and assignment logic of the codex itself precludes the viability of a two base XX codex. Evolution by definition, is a precedent entity progressing gradually (or sporadically) into another new entity. It thrives upon deconstructive chaos and culling to produce speciation. There was no precedent entity in the case of the DNA stop codon nor its XXX codex. As well, at any time, the stop codons could have been adopted under the umbrella of a valid protein, rendering two SuperKingdoms of life extant on Earth – and that should have happened. Should have happened many times over and at any time in our early history (in Archaea). Yet it did not.8 An asteroid strike and extinction event would not serve to explain the linearity. Evolution is not linear. We should have a number of DNA stop and start based variants of life available (just as we have with evolution based mechanisms) to examine. But we do not. In fact, as you can see in the chart to the right (derived from Exhibit III at the end), there exist four challenges to a purely abiogenetic classic evolution construct:

1. An original symmetry to the assignment of codon hierarchies (codex), such that each quadrant of the assignment chart of 64 slots mirrors the opposing quadrant in an ordinal discipline (see the Codon Radial Symmetry charts in B. above).

Codon Second Base Block Symmetry

2. The second character in the codon dictates (see chart in B. Codon Radial Symmetry chart above) what was possible with the third character. In other words

a. all the thymidine position 2 variants (XTX) had only nitrite molecules (NO2) assigned to them (marked in blue in the chart in C. Codon versus Molecule Complexity 64 Slots to the upper right and in Exhibit III at the end – from where the graph is derived). While the more complex nitrous amino acids were all assigned to more complex oversteps in codex groups (denoted by the # Oversteps line in the Codon versus Molecule Complexity 64 Slots chart to the upper right).

In addition,

b. all adenine position 2 variants (XAX) were designated for multi-use 3rd character codons, all cytidine position 2 variants (XCX) were designated for single use 3rd character codons, while guanine (XGX) and thymidine (XTX) both were split 50/50 and in the same symmetrical patterning (see Codon Second Base Block Symmetry table to the right).

3.  There exists a solid relationship, methodical in its application, between amino acid molecule nucleon count and assignment grouping by the second digit of the DNA codon, in rank of increasing degeneracy. Second letter codon usages were apportioned to amino acids as they became more and more complex, until T and A had to be used because the naming convention was being exceeded (see the chart in C., Codon versus Molecule Complexity 64 Slots, above to the right, as well as Exhibit III at the end). After this was completed, one more use of G was required to add 6 slots for arginine, and then the table of 60 amino acid slots was appended by one more, tryptophan (the most complex of all the amino acid molecules – top right of the chart in C. Codon versus Molecule Complexity 64 Slots above, or the end slots in Exhibit III at the end), and the 3 stop codes thereafter. Very simple. Very methodical. Much akin to an IT developer’s subroutine log – which matures over the course of discovery inside its master application.

4.  Ergodicity. Prejudice… Order, featuring radial symmetry (B. Codon Radial Symmetry chart), synchrony with molecule complexity (C. Codon versus Molecule Complexity 64 Slots graphic) and block symmetry (C. last image, Codon Second Base Block Symmetry table) around the second digit. The problem is that there is no way a natural process could detect the name/features of the base/molecule/sequence as a means to instill such order and symmetry into the codex, three times over – much less evolve it in such short order, without speciation, in the first place. (The block symmetry claim in 2b above can be checked directly against the standard codon table – see the sketch below.)
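
What follows is a hedged check of the 2b block-symmetry claim, my own sketch against the standard code rather than material from the article's exhibits. It groups the sixteen four-codon boxes by their second base and reports whether the third base changes the assignment (a 'multi-use' box) or not (a 'single-use' box).

from collections import defaultdict

bases = "TCAG"
codons = [b1 + b2 + b3 for b1 in bases for b2 in bases for b3 in bases]
amino  = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codex  = dict(zip(codons, amino))

# For each second base, classify its four boxes: does the third base alter the assignment or not?
by_second = defaultdict(list)
for b1 in bases:
    for b2 in bases:
        box = {codex[b1 + b2 + b3] for b3 in bases}
        by_second[b2].append("single" if len(box) == 1 else "multi")

for b2 in "ACGT":
    kinds = by_second[b2]
    print(f"second base {b2}: {kinds.count('single')} single-use, {kinds.count('multi')} multi-use boxes")
# Prints A: 0/4, C: 4/0, and a 2/2 split for both G and T - the block pattern described in 2b.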

Since the DNA chemistry itself is separated by two chemical critical path interventions, how would the chemistry of thymine, for instance (the blue block in Exhibit III below), exclusively attract the nitric acid isomer of each amino acid? And why only the nitric acid isomers with more complex molecule bases? First, the DNA base 2 is nowhere physically near the chemical in question; it is only a LOGICAL association, not a chemical one, so it cannot contain a feedback or association loop. Second, there is no difference chemically between C2H5NO2 and C5H11NO2. The NO2 is the active moiety. So there should not have been a synchrony progression (C.3. above), even if there were direct chemical contact between the amino acid and the second base of the codon. So the patterns happen as a result of name only. One would have to know the name of the codon by its second digit (base), or the chemical formula for the amino acid, and employ that higher knowledge to make these assignments.

Finally, this order/symmetry has not changed since the code was first ‘introduced’ and certainly has not been the product of stochastic arrival – as a sufficiently functional-but-less-orderly code would have evolved many times over (as is the practice of evolution) and been struck into an alternative codex well before (several billion years) this beautiful symmetry could ever be attained.

We claim that evolution served to produce the codex, yet the codex bears the absolute signs of having had no evolution in its structure. We cannot selectively apply evolution to the codex – it must either feature evolutionary earmarks, or not be an evolved code. The mechanisms of evolution cannot become a special pleading football applied only when we need it, to enforce conformance – because in that case we will only ever find out, what we already know. It becomes no better argument philosophically than ‘God did it’.

D.  Related codons represent related amino acids. For example, a mutation of CTT to ATT (see table in C. above) results in a relatively benign replacement of leucine with isoleucine. So the selection of the CT and AT prefixes between leucine and isoleucine was done early, deliberately and in finality – based upon a rational constraint set (in the example case, two nitrite molecule suffixed proteins) and not eons of trial and error.9 Since the assignment of proteins below is not partitioned based upon any physical characteristic of the involved molecule, there is no mechanism but deliberacy which could dictate a correspondence between codon relationships and amino acid relationships.10
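
The CTT-to-ATT example can be generalized with a small sketch (again my own illustration, using the standard code) which enumerates every single-base neighbor of a codon and the amino acid each neighbor yields; the hypothetical neighbors() helper below is not from the article.

bases = "TCAG"
codons = [b1 + b2 + b3 for b1 in bases for b2 in bases for b3 in bases]
amino  = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codex  = dict(zip(codons, amino))

def neighbors(codon):
    """Every codon reachable by one base substitution, with its amino acid (or * for stop)."""
    out = []
    for i in range(3):
        for b in bases:
            if b != codon[i]:
                mutant = codon[:i] + b + codon[i + 1:]
                out.append((mutant, codex[mutant]))
    return out

print("CTT codes", codex["CTT"])                # L (leucine)
for mutant, aa in neighbors("CTT"):
    print(f"  CTT -> {mutant}: {aa}")           # ATT yields I (isoleucine), the benign swap cited above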

E.  Statistically impossible codex, not just improbable. Finally, it is not simply the elegant symmetry of the codex which is perplexing; the usage contexts identified in items A. – D. above also allow one to infer (deductively?) that the codex – its precedent, provenance and structure – is difficult to impossible to accommodate in even the most contorted construct of abiogenesis. Observe the A-G vs C-T tandem relationship between Lysine and Asparagine for instance. This elegant pattern of discipline repeats through the entire codex. This is the question asked by Eugene V. Koonin and Artem S. Novozhilov at the National Center for Biotechnology Information in Bethesda, Maryland in their study Origin and Evolution of the Genetic Code: The Universal Enigma (see graphic to the right, extracted from that study).11 This serious challenge is near to falsifying in nature, and cannot be dismissed by simple hand waving. Take some time (weeks or months, not just seconds) to examine the DNA codex in Exhibits I thru III below, the three tables and charts in B. and C. above, as well as the study from which the graphic to the right is extracted, and see if you do not agree. This argument does not suffer the vulnerability of ‘creationist’ arguments, so don’t play that memorized card – as Ockham’s Razor has been surpassed for this necessary alternative.

The hypothesis that the codex for DNA originated elsewhere bears specificity, definition and testable mechanism.
It bears less stacking, gapping and risk as compared to abiogenesis.
It is science. It is the necessary alternative.

Assuming that life just won the 1 in 1.6 x 10 to the 15th power lottery (the Omega Hypothesis), again, and quickly, near to the first time it even bought a lottery ticket – has become the old-hat fallback explanation inside evolution. One which taxes a skeptic’s tolerance for explanations which begin to sound a lot like pseudo-theory (one idea used to explain every problem at first broach). However this paradox of what we observe to be the nature of the codex, and its incompatibility with abiogenesis, involves an impossibly highly stacked assumption set to attempt to explain away based solely upon an appeal to plenitude error. A fallback similar in employment of the appeal to God for the creationist. The chance that the codex evolved a LUCA (Last Universal Common Ancestor) by culled stochastics alone in the first 300 million years of Earth’s existence,12 is remote from both the perspective of time involved (it happened very quickly) and statistical unlikelihood – but as well from the perspective that the codex is also ‘an optimal of optimates’ – in other words, it is not only functional, but smart too. Several million merely functional-but-uglier code variants would have sufficed to do the job for evolution. So this begs the question, why did we get a smart/organized codex (see Table II below and the radial codon graphic in B. above) and not simply a sufficiently functional but chaotic one (which would also be unlikely itself)? Many social skeptics wishing to enforce a nihilistic religious view of life, miss that we are stacking deliberacy on top of a remote infinitesimally small chance happenstance. Their habit is to ignore such risk chains and then point their finger at creationists as being ‘irrational’ as a distraction.

The codex exhibits an unprecedentable structure, an empty set employed as a positive logical entity, and a static, patterned codex format – one exposed to the same deconstructive vulnerability which happens to everything else, yet which chose not to happen in this one instance. In other words, evolution actually chose to NOT happen in the DNA codex – i.e. deliberacy. The codex, and quod erat demonstrandum, the code itself, came from elsewhere. I do not have to explain the elsewhere, but merely provide the basis for understanding that the code did not originate here. That is the only question on our scientific plate at the moment.

Exhibits I II and III

Exhibit I below shows the compressed 64 slot utilization and its efficiency and symmetry. 61 slots are coded for proteins and three are not – they are used as stop codes (highlighted in yellow). There are eight synonym based code groups (proteins), and twelve non-synonym code groups (proteins). Note that at any given time, small evolutionary fluctuations overlapping into the stop codes would render the code useless as the basis for life. So, the code had to be frozen from the very start, or never work at all. Either that, or assign tryptophan as a dedicated stop code and mirror to methionine, and make the 5th code group a synonym for Glutamine or Aspartic Acid.

Exhibit I – Tandem Symmetry Table

Exhibit II below expands upon the breakout of this symmetry by coded protein, chemical characteristics and secondary mapping if applicable.

Exhibit II – Expanded Tandem Symmetry and Mapping

Finally, below we relate the 64 codon slot assignments, along with the coded amino acid, the complexity of that molecule and then the use in thermophilic archaea. Here one can see that it is clear that even our first forms of life, constrained to environments in which they had the greatest possibility of even occurring, employed the full XXX codex (three letter codon). While it is reasonable to propose alternative conjecture (and indeed plurality exists here) that the chart suggests a two letter basis as the original codex, life’s critical dependency upon the full codon here is very apparent.

Exhibit III – 61 + 3 Stop Codon Assignment versus Molecule Complexity and
Use in Thermophilic Archaea (see chart in C. above)

Recognizing the legitimacy of the necessary alternative – one which was both purposely neglected by science, and yet offers the only viable alternative to standing and celebrated club dogma – this is a process of real science. Developing mechanism for such an alternative is the antithesis of religious activity. Maturing the necessary alternative into hypothesis, is the heart and soul of science.

Blocking such activity is the charter and role of social skepticism. Holding such obfuscating agency accountable is the function of ethical skepticism.

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “Embargo of The Necessary Alternative is Not Science”; The Ethical Skeptic, WordPress, 24 Nov 2018; Web, https://wp.me/p17q0e-8Ob


Exotic Nature of FRB 121102 Burst Congeries

It is clear from the data that a MIGO grouping exists inside the 93 bursts of FRB 121102, representing a consistent and distinct profile from their comparable Primary grouping burst twins in terms of frequency, signal duration and overall resulting Planck dilation – yet in stark contrast, featuring negligible impact in terms of signal arrival timing relative to c.
These fast radio bursts appear to bear the profile of the collision of two very massive objects. The smaller object moving rapidly as a percentage of the speed of light around the larger  – signalling the universe in desperation as it descends hopelessly into the dark Schwarzschild sea. Two black holes tripping the light fantastic among the stars.

Now I am not a physicist, nor an astrophysicist. I want to make that clear. I do not claim the moniker of scientist. Although I have been president of a research lab, and led it through the process of groundbreaking scientific discovery, and although I have employed or had in my reporting structure many scientists and engineers, I myself cannot claim such a title. Despite involvement inside complex decisions of science and technology on a daily basis, I have not earned the hash marks, degrees and dissertation necessary in passing industry qualification as a scientist.1 This was purposeful. I am a business man, economist, analyst, designer, technologist, strategist, leader and advocate for those who suffer at the hands of poorly developed science. Therefore I am technically only a skeptic. I critique the philosophy, structure and meta-application of science – flagging the circumstance wherein its deployment serves to negatively impact its stakeholders. I write technical reports and specifications for the employment of technology, and determine for its stakeholders, how the technology or science involved will serve to impact their lives. Now this is a profession inside which I am enormously qualified and maintain an arduous decades-long track record of qualification and success.

But during my youth I was a scientist at heart. I devoured every Carl Sagan, Stephen Jay Gould and Isaac Asimov non-fiction book which my small town library was able to get. In my free time I studied the sky with my Meade telescope and dabbled in my Gilbert Chemcraft junior chemistry lab. I burned, dissolved and emergency-buried a lot of volatile stuff. A freshly bottom-lit (not top-lit) Bunsen Burner will fire a penny through a ceiling tile at 1/4 the muzzle velocity of a .22 caliber standard load round. Many exciting things can be done with potassium. After my instructors realized that I was not stupid, rather just bored, and saw that my science aptitude scores were at a college level, while in the 5th Grade, I was advanced two years early through my science and math curricula; earning a top award for a science paper my senior year of high school. I entered a nationally ranked top-3 nuclear science undergraduate program, but was swayed in my career when the Dean of my school awarded me an A+++ on my paper on Ethics of Technology and Science, the highest grade he had ever given.  It was then that I knew there was more to science than simply donning a lab coat, initiating exoentropy and taking the measurements. The question was not one of how to do science, but what one could do with it. Or should do with it. For benefit or for harm, and how to discern the magnitude and difference.

As a skeptic, never rest on your laurels and self-congratulate over your callow wielding of doubt.
As a skeptic, you must go and actually look. You must think incrementally, eschew pat answers, ask probative questions and then risk hard work.
Anything short of this is worse than the process of never having doubted to begin with.

Throughout the time since, I have maintained a fascination with astrophysics. I have read Kip S. Thorne’s Black Holes & Time Warps probably 3 to 8 times. I am a regular consuming fan of Deutsch, Tipler, Wolfram, and Greene. Wolfram’s A New Kind of Science and Thorne’s Black Holes & Time Warps reside in my library on the quick-reference shelf along with the Webster’s Dictionary, Oxford Handbook of Philosophy and Science, Newton’s The Principia, Lewin’s Genes IX, The Handbook of Chemistry and Physics, Whitman’s Leaves of Grass and the New American Standard Bible. My thirst for clues which nature offers us through the wisdom of astrophysics has never been slaked.

Fast Radio Burst 121102

So when science first started detecting Fast Radio Bursts (the subtle grey curved line inside the graphic to the right), this was a subject which fascinated me no end. Not in the sense that an extraterrestrial civilization might be the source of such quirky electromagnetic chirps (so far they bear a number of ‘natural’ profiles to be sure), but rather a fascination toward the clues which the phenomenon could serve to offer regarding the nature and structure of our cosmos. As a quick summary, a Fast Radio Burst is a very short (20 to 100 milliseconds ‘long’ in dispersion arc and .75 to 3.5 millisecond barycentric duration pulse) and narrow band (3 GigaHertz ‘tall’) flash of electromagnetic C-Band microwave energy. It is akin to a bird chirping a short and very precise musical note, or the emanation a bat might make in order to echo-locate. The key interesting feature of such a short duration burst of electromagnetic energy resides in its characteristic ‘dispersion’. Dispersion is the difference in arrival time between the higher frequencies of EM energy in the signal and its lower frequencies. In our cosmos, lower frequency radiation is dispersed (slowed) more readily, and arrives at its destination somewhat after the higher frequencies inside the same exact signal. The lower frequencies lose the race against the higher ones. In the graphic to the right, one can observe that the higher frequencies at the top of the graph, say in the 7.5 GHz range, arrive first (motion of the EM signal is right to left) before the lower frequencies inside the single FRB burst do – despite both frequency sets having originated at the same exact instant, far far away. The magnitude of this dispersion allows an astrophysicist to estimate how far that signal has traveled through space-time (or gravity), through measuring the separation between the arrival of the higher and lower frequencies inside a fast radio burst.2

What results is an arc, characteristic of a warped electromagnetic signal. On a graph indexing an ordinate of signal frequency (GHz) against an abscissa of time (seconds), the result is an inverse-square sweep. Inside the graphic immediately below on a red field background, one can observe (again, pretend that the EM signal is moving from right to left) that the higher 7.8 GHz EM C-band microwave radiation (at the top of the figure) arrives at the receiver on Earth sooner than do the 5.4 GHz frequencies (at the bottom of the figure), and by a simple square in acceleration of effect on the lower frequencies toward the bottom of the graph (which is why the signal is curved in its dispersion differential). The rate of dispersion shown in the graphics above and below equates to around 2.5 billion light years of travel through space-time and/or gravitational fields. The arc immediately below in particular was extracted from the FRB 121102 fusillade; marked as FRB 121102-1.
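
To put a number on that sweep, here is a minimal sketch using the standard cold-plasma dispersion relation of radio astronomy, in which the delay scales with the dispersion measure (DM, in pc cm^-3) and the inverse square of frequency. Note that this is the conventional formulation, not the simplified ∂v/∂t slope used for the measurements later in this article, and the DM value plugged in is merely in the neighborhood of what has been reported for FRB 121102; treat it as illustrative.

# Cold-plasma dispersion delay between two frequencies:
#   dt [ms] = 4.148808 * DM * (f_lo^-2 - f_hi^-2), with DM in pc cm^-3 and f in GHz
K_DM = 4.148808  # ms GHz^2 cm^3 pc^-1

def dispersion_delay_ms(dm, f_hi_ghz, f_lo_ghz):
    """Arrival delay of the lower frequency behind the higher frequency, in milliseconds."""
    return K_DM * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# Illustrative DM only (roughly the published neighborhood for FRB 121102)
print(round(dispersion_delay_ms(560.0, 7.8, 5.4), 1), "ms")   # a few tens of ms across the C-band sweep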

Problem Statement

But there were two peculiarities regarding FRB 121102 which piqued my interest above and beyond the media-generated discourse around the other several dozen individual FRBs we have found scattered around the cosmos. First, in contrast with the other FRBs we have detected, this FRB burst comprised a fusillade of 93 individual signals which arrived in quick succession (seconds to hours apart). Second, the signals arrived in an array of differing dispersion and frequency profiles. Of course, obtaining a repeating FRB source was unprecedented to begin with and of key interest in its own right; however, the fact that all of FRB 121102’s dispersion and frequency profiles did not match was a mystery of even greater proportion. You see, given their rapid-fire succession and common location in a dwarf galaxy 2.5 billion light years away, the signals should be assumed to originate from a common source; and if they did all emanate from the same source, then all of the signals should bear the same frequency and dispersion profiles (within a given measurement precision and accuracy). This was not the case with the FRB 121102 signal burst group.

Problem

FRB 121102 burst signals featured significantly varying frequency and dispersion profiles, despite having emanated from a common source and having commensurately traversed the same exact space-time conditions.

So I set about the task of examining this odd stream of signals, in order to hypothesize a mechanism which could potentially impart such a characteristic pattern. The study from which I drew my data was a paper submitted on 9 Sep 2018 by Zhang et al., entitled Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach.3 The two graphics to the right (labeled 1* and 1) were extracted from the study, representing burst number 1, which was the signature burst for the group. It bore the strongest flux amplitude, as well as the signature duration of 1.57 milliseconds barycentric width and dispersion of .21 ∂v/∂t. The study was a report on the detection of 93 total pulses “from the repeating fast radio burst FRB 121102 in Breakthrough Listen C-band (4-8 GHz) observations at the Green Bank Telescope. The pulses [last 72 of them] were found with a convolutional neural network in data taken on August 26, 2017, where 21 bursts had been previously detected.”4

The study did not offer up its database of signals, so I downloaded the imagery for each of the 93 signals and conducted measures of each signal’s frequency band and time dilation directly from the signal itself. I assembled a database (see bottom of article) of start time, end time, time measure, graph time, pulse width, signal to noise, v-peak, v-min, ∂v in GHz, ∂t in seconds, and then finally the dispersion measure ∂v/∂t (= ∂GHz/∂ms), signal flux in milli-Janskys and barycentric pulse width. I then conducted analytics and intelligence development upon the array of data which resulted. What followed stands not as a dilettante ‘proof’, rather an observation-intelligence-necessity petition for plurality or assistance in hypothesis mechanism development (Steps 1 thru 5 of the Scientific Method).
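
For readers who wish to reproduce this kind of reduction, a minimal sketch of the derivation follows. The rows are hypothetical stand-ins (the article's actual database appears at the bottom of the post); the values were chosen only so that the derived slopes land near the two characteristic ∂v/∂t figures discussed below.

# Hypothetical example rows only - not the article's measured database
bursts = [
    # id, v_peak (GHz), v_min (GHz), dispersion arc length dt (ms), flux (mJy), barycentric width (ms)
    {"id": "121102-1",  "v_peak": 7.8, "v_min": 5.3, "dt_ms": 11.9, "flux_mjy": 120.0, "width_ms": 1.57},
    {"id": "121102-xx", "v_peak": 7.5, "v_min": 5.0, "dt_ms": 7.8,  "flux_mjy": 35.0,  "width_ms": 2.10},
]

for b in bursts:
    dv = b["v_peak"] - b["v_min"]            # GHz of frequency sweep
    b["dv_dt"] = round(dv / b["dt_ms"], 2)   # dispersion slope in GHz per ms
    print(b["id"], "dv =", dv, "GHz, dv/dt =", b["dv_dt"])   # ~0.21 and ~0.32 respectively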

Observation Reduction and Methodology

Discrete Integrity of Signal

Intelligence 1 – The signals exhibited discrete frequency banding with a v-max beginning at 7.8 GHz and ranging all the way to 5.0 GHz.
Intelligence 2 – The single trend in relationship of v-max to v-min suggests with high confidence that the original signal was emitted from a single source.
Intelligence 3 – A single influencing factor served to additionally alter v-max and v-min by lowering them both in about half the signals, but not disturbing this 1:1 relationship.
Intelligence 4 – The source of the v-max cascading and mimicked dispersion of the .32 ∂v/∂t group, appears to suggest the intervention of a discrete, powerful and singular gravitational influence nearby the source of the signal – either through direct Schwarzschild time dilation or by inducing an orbit in the emission body featuring an exotically large speed.

The bursts exhibited direct proportional and 1:1 consistency in the level of frequency relationship between each v-max and v-min measure, confirming that the signal was of a discrete-banding nature and not a broad-band radio burst (such as might be emitted by a quasar). This is not an occurrence often seen in nature and I personally cannot fathom a physical circumstance, even under the high gravity or energy physics of a black hole event horizon, in which such a discrete duration (1 ms) and frequency band (2.5 GHz) of energy could be generated by a natural phenomenon. But neither am I the fount of all knowledge. This, while odd, is certainly not enough to start adding more exotic explanations into the fray just yet (Ockham’s Razor plurality). It merely suggests there is an area of exotic physics in which we have some discoveries yet to make. It inductively weakens our confidence in our standing related provisional explanations.

In the graphic to the right, the v-max index is along the abscissa and the v-min measurement is along the ordinate axis (y-axis). The 45 degree trend line suggests a direct and 1 to 1 relationship between the two, indicating a fixed interval from top frequency to bottom frequency. The dispersion of the scatter plot down and to the right most likely comprises imprecision in measurement along with the degradation of the signal to noise ratio as many of the pulses trended into lower frequencies – thereby making the lower end (most attenuated) of the pulse much harder to measure as compared to the higher end. Nonetheless, a terminal high and low end frequency was able to be established as a characteristic profile, confirmed by the group’s signature signal #1 (121102-1 was the strongest and most coherent of the fusillade) = 7.8 – 5.3 GHz.

Of added note is the fact that this one-to-one simple relationship between the v-max and v-min extremes indicates strongly that all 93 signals were emitted by the same source. This was corroborated later in examining the arrival time curve, which appears to exhibit a consistent one-factor logarithmic-formulaic pattern. In addition, lower and lower v-max frequencies were detected in the grouping, which appeared to be either a characteristic of the emitting source, or some kind of influencing or intervening source of gravity. This influence is substantiated by the linear trend discipline which persists even in the case where v-max is altered significantly (the lower left end of the graph). This added dispersion or red shift could be the result of a gravitational body or a high speed orbit. Both of these will be evaluated herein. Given that the attenuation patterns of both the lower and higher v-max emissions were similar, the influencing factor was likely not a gas cloud – which would have caused enormous chaos in both the v-max and v-min patterns, producing a more circular scatter plot in the above graphic. In addition, a gas/lone plasma cloud could not have introduced this observed dispersion distortion – one mimicking, in the .32 ∂v/∂t group of signals (below), an added 1.5 billion light years of travel for the lower v-max signals (when we know they were emitted at the same time from the same source). This scatter plot and dispersion profile is in no way compatible with the intervention of a gas cloud, or a large bank of stars for that matter. The source of the v-max cascading and mimicked dispersion of the .32 ∂v/∂t group appears to suggest the intervention of a discrete, powerful and singular gravitational influence nearby the source of the signal – a gravitational body which is directly dilating the EM emission, or which is causing an orbiting body emitting the bursts to move alternately toward and away from us as the observer.
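
The 1:1 check described above is easy to express as a sketch. The data here are synthetic stand-ins for the 93 measured (v-max, v-min) pairs, constructed only to illustrate the test: a fitted slope near 1 implies a constant v-max minus v-min interval, i.e. a fixed emission band.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 93 (v_max, v_min) pairs: a fixed ~2.5 GHz band plus measurement scatter
v_max = rng.uniform(6.5, 7.8, 93)
v_min = v_max - 2.5 + rng.normal(0.0, 0.15, 93)

slope, intercept = np.polyfit(v_max, v_min, 1)
print(f"fitted slope = {slope:.2f}  (near 1 implies a constant v_max - v_min interval)")
print(f"mean band width = {np.mean(v_max - v_min):.2f} GHz")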

Natural Log Decay Timing Profile and Gapping

Intelligence 5 – The arrival timing of each burst fell cleanly into a formulaic pattern of a y = ln x natural logarithmic basis with no characteristic Shapiro time delay observed. This corroborates the linear v-max/v-min relationship above, and supports the hypothesis that the signals all emanated from a single, natural source. As well the peak signal flux amplitudes decayed by a logarithmic function, however sustained a base rate which persisted until the signal stopped.
Intelligence 6 – The single source which imbued the characteristic v-max cascading and mimicked dispersion of the .32 ∂v/∂t group, did not appreciably alter the speed of the signals themselves relative to space-time or c. So each of those data points was kept as original signal data.

The bursts’ times of arrival (TOA in the chart at the bottom) appeared to take a confirmatory distribution, conforming to a natural logarithm curve y = ln x. A classic textbook natural log curve is overlain across the time of arrival plot for the 93 burst group, in purple in the chart to the right. The logarithm trend line is placed only to highlight the circumstance that this burst progression indeed follows a natural log distribution in time. The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is the irrational constant 2.7182818… ad infinitum. This does not mean that aliens have sent us the precise constant e as a message; rather, this pattern occurs in a number of systems observed in nature, especially where the decay rate of energy is involved – for instance, the decay of a radioactive isotope. This is a very large hint here that the source of fast radio bursts is a natural source.
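
A sketch of such a fit follows, using synthetic arrival data in place of the article's actual TOA series and reading the chart as cumulative burst count against arrival time (one reasonable reading of the overlay described above); the fitted coefficients below are meaningless beyond demonstrating the method.

import numpy as np
from scipy.optimize import curve_fit

def log_model(t, a, b):
    return a * np.log(t) + b

# Synthetic stand-in: 93 arrival times (s) whose cumulative count grows like a natural log
t_arrival = np.linspace(1.0, 18000.0, 93)                  # roughly a multi-hour observation window
count = 9.5 * np.log(t_arrival) + 1.0 + np.random.default_rng(1).normal(0.0, 1.0, 93)

(a, b), _ = curve_fit(log_model, t_arrival, count)
print(f"fitted y = {a:.2f} * ln(t) + {b:.2f}")             # recovers roughly 9.5 and 1.0 on this synthetic series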

In addition, the conformance discipline of this curve (with some exceptions to be examined below) hints that all the observations, despite their degraded signal to noise ratio in many cases, are valid observations of confirmed signal. None should be ‘tossed out’ as discrete entities. However, this does not preclude our ability to group and profile the burst arrivals. This conclusion was essential to this analysis.

Of primary importance, however, is the inference which can be drawn from this curve: the single source which imbued the characteristic v-max cascading and mimicked dispersion of the .32 ∂v/∂t group did not appreciably alter the speed of the signals themselves relative to space-time or c. This is addressed again in Intelligence 10 later in this article. It is an important observation – as one must grapple in this circumstance with the power/energy of an intervening body which can impart 1.5 billion light years' worth of pseudo-dispersion into an electromagnetic wave, yet not alter its speed in the least.

Apparent Burst Cluster Scatter Plot Groupings

Intelligence 7 – The burst fusillade bore more diversity in dispersion than anticipated, but appeared to exhibit a Poisson μ at .21 ∂v/∂t.

The peak of dispersion occurrence rate versus the signal to noise ratio of the 93 measures resided at a dispersion of .21 ∂v/∂t. This measure was both the most commonly featured dispersion measure in the group, and as well was the dispersion measure for the strongest signal to noise ratio signals of the group. For instance, FRB 121102-1, cited earlier in this article, featured a .21 ∂v/∂t as well as a very high signal to noise ratio. It was the first signal detected and stands as the signature burst of the group. The cluster of 93 signals skewed to longer dispersion tails upon an apparent Poisson distribution, where the accuracy of measurement of the signals themselves imparted a +/- 10% measurement tolerance. Two suppositions came from this data: 1. that the lower dispersion measures, which were fewer in number, were primarily the result of antenna detection errors, and 2. that a characteristic dispersion for the entire group, given a single common source and instance of signal, could be assigned at .21 ∂v/∂t.
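
A minimal sketch of how such a characteristic dispersion might be assigned – binning the dispersion measures at roughly the ±10% tolerance and weighting by signal to noise ratio – follows. The figures are illustrative stand-ins, not the database values.

```python
import numpy as np

# Hypothetical (dispersion, signal-to-noise) pairs; in the real exercise these
# come from the 93-burst database assembled for FRB 121102.
dv_dt = np.array([0.19, 0.21, 0.21, 0.22, 0.20, 0.32, 0.33, 0.31, 0.21, 0.23])
snr   = np.array([ 45,   60,   55,   40,   38,   15,   12,   18,   52,   30])

# Bin width chosen to approximate the +/-10% measurement tolerance near .21 dv/dt.
bins = np.arange(0.15, 0.41, 0.02)
hist, edges = np.histogram(dv_dt, bins=bins, weights=snr)

peak = hist.argmax()
characteristic = 0.5 * (edges[peak] + edges[peak + 1])
print(f"S/N-weighted characteristic dispersion ~ {characteristic:.2f} dv/dt")
```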

Suggested Intervention of a MIGO Body

Intelligence 8 – Dispersion measures were chaotic, yet exhibited a two-cluster profiling around .21 and .32 ∂v/∂t – variation which was not stochastic in origin, and which exhibited bilateral symmetry between the two groups, as if bearing the gradient dynamics of an orbit pathway: approaching and regressing cyclically.

There appeared inside the data a clustering of two distinct dispersion profiles, which significantly exceeded both the database detection sensitivity and the measurement error tolerance. These profiles clustered around .21 ∂v/∂t and .32 ∂v/∂t. The bursts which composed the .32 ∂v/∂t grouping tended to

• be slightly delayed in arrival time (see graph to right),
• be weaker in signal to noise ratio (.34 versus .22), and
• feature greater Poisson degrees of freedom, as compared to the .21 ∂v/∂t group.

This second grouping of bursts appeared to me to be a kind of weakened version of what I call the ‘Primary Cluster’ bursts (in blue) – or perhaps an echo (though given the y = ln x conformance, that is not likely), or perhaps a delayed, warped duplicate of them: the type of bent EM signal whose trajectory was impacted by an intervening large gravitational mass, perhaps a black hole. Very much like the refracted lensing which occurs in visual astronomy, this EM light appeared to be a replication of the Primary Cluster signals – red shifted – a separate vector of EM energy which was diverted from its original path by a Massive Intervening Gravitational Object (MIGO), and redirected toward the Earth to join alongside its Primary, direct-path signal twins (orange versus blue in the graphic to the right). It is not that each signal arrived at Earth twice – rather, there were two types of signal in general: Primary and MIGO. These MIGO bursts are flagged in orange in the graphic to the right. They feature a consistent enough pattern to ascribe some characteristic measures to the group as a whole, which can then be contrasted with their Primary Cluster equivalents. In this analysis we examine both constructs: that the MIGO object is directly Schwarzschild time dilating the MIGO signal group – OR – alternately is driving a high speed orbit in a second body, which would explain both the Primary and MIGO clusters as well.

However, even at this early point in our study, the bilateral symmetry and even balance and consistency between the two burst classes hints strongly at an orbiting body approaching and regressing, and exhibiting the incumbent Doppler effect differential.

FRB Source Orbiting the MIGO?

Intelligence 9 – MIGO Cluster bursts featured consistent differentiation from the Primary Cluster bursts – and both appear to alternate in contiguous groupings, as if produced as the signature of a body in orbit around another.
Intelligence 10 – The Planck based red shift and time-width displacement (Schwarzschild time dilation in both observations) far exceeded the displacement of the twin signals in relative elapsed time of arrival (Shapiro time delay, a measure which was almost negligible) – this clue proves critical in deducing a solution to the source of the signal at the end of this article.

So I took a representative signal – not an average, rather one bearing good signal to noise ratio and well parametrized measures – from both the MIGO and Primary burst cluster groups, and developed a consistent profile for each EM signal group, one which removed the effect of antenna detection and measurement errors. Those two consistent EM burst profiles are depicted in the graphic to the right. The blue curve represents the dispersion, in the same format as FRB 121102-1 is depicted above, characteristic of the Primary Cluster of bursts. The orange curve represents the dispersion characteristic of the MIGO Cluster of bursts. It is clear from the data that the MIGO Cluster of bursts represents a consistent and distinct profile from the Primary Cluster burst group in terms of the following:

  • reduced v-peak from 7.8 GHz to 6.5 GHz
  • reduced v-min from 5.3 GHz to 4.8 GHz
  • reduced ∂v from a 2.5 GHz band to a 1.7 GHz band
  • increased signal duration ∂t from 60 milliseconds to 80 milliseconds
  • imbued Planck dilation red shift contrast on the order of .32 versus .21 ∂v/∂t
  • the relative arrival time ΔT differential was comparatively negligible (detailed in Intelligence 10 and the Shapiro time delay discussion below)

Please note that it is possible that the MIGO is part of the formula as to how a fast radio burst is generated in the first place. In other words, two black holes.

The MIGO Exotic Profile – Two Massive Object Dynamics

Intelligence 11 – There exist 16 discrete gaps and 17 ‘orbits’ in the decay rate of the FRB source as compared to a y = ln x analog. These appear to be introduced by the influence of a massive external body to the source of the bursts.
Intelligence 12 – The burst .32 and .21 ∂v/∂t groups and burst trends appear to feature a positional relationship with these intervals of minor occulting, as if a lensing or possibly rotational effect was being imbued by an orbiting mass. Both will be examined.

In the analysis to the right, we examine a magnified view of the y = ln x arrival timing curve (arrivals 1 – 48) identified in Intelligence 5 above. Of significance in the time series of this set of early arrivals is the presence of static gaps in its progression – flatter periods in the chart to the right, of which 7 are shown here, out of 16 or so in the overall 93 burst data set. The first four gaps are highlighted by a horizontal orange bar in the chart; the gaps are measured in seconds of arrival observation. The strongest signals in the .21 ∂v/∂t group tend to appear just before the first occulting; however this relationship decays after burst 25 or so. It is of interest to note that one quadruple/triplicate burst occurred right at the inception of occulting number 3 – an occulting which then lasted for 121 seconds. These decay gaps tended to distend the actual burst timing slightly versus that of a true natural logarithmic y = ln x curve (in purple above, and in Intelligences 13 thru 16 below). This flat-decay-gapping is highlighted by a 57 minute gap in the arrivals between bursts 82 and 83 (denoted in orange in the graphic above – also see TOA in the chart at the bottom of this article).

It is also of interest that exception to the natural logarithmic discipline of the purple curve above occurs only as a result of, and commensurate with, each occulting – as if the occulting member is actually momentarily delaying the decay of the emanation source (an orbit artifact in this case?) in some fashion during the short perisingular (née perigee) pass – the decay source thereafter briefly resuming its natural decay rate after a 119 to 198 second break early on, and after much longer breaks as the process moved on. I am establishing mechanism here, projecting that during a perisingular pass between the two objects, a state of connection is established such that the bursts are quenched in some fashion. Of course, once the merge is complete, the bursts would then be quenched in finality.

Given that it is doubtful that the Roche limit is surpassed for these two bodies during aposingular orbit progression (or possibly during the entire early orbit, even to the intersection of event horizons), it is possible that some artifact is created between them which only exists at a given/formulaic proportion of the Roche limit, the distance and the two masses.

Examine, if you will, the first three cycles of the orbiting body in the chart above, which occur over about 1100 seconds. If we assume a 1,000,000 mile average elliptical orbit radius, this equates to a speed of roughly 17,100 miles per second, or 9.2% of the speed of light. Some kind of relativistic energy shedding may be at play in the genesis of these bursts.
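
The back-of-envelope arithmetic above is easy to verify; a short sketch follows, using the same assumed 1,000,000 mile radius.

```python
import math

# Check of the orbital speed quoted above: three orbit cycles over ~1100 seconds
# at an assumed 1,000,000-mile mean orbital radius.
radius_mi  = 1_000_000
cycles     = 3
elapsed_s  = 1100
c_mi_per_s = 186_282           # speed of light in miles per second

speed = cycles * 2 * math.pi * radius_mi / elapsed_s
print(f"orbital speed ~ {speed:,.0f} mi/s ({speed / c_mi_per_s:.1%} of c)")
# -> roughly 17,100 mi/s, about 9.2% of c
```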

This same repeating occult influence can be observed in the larger scale time of arrival curve below (Intelligences 13 thru 16), wherein the 57 minute delay induced a complete cessation of the decay of the emanating source of the later group of FRB signals. This is highly exotic, and suggests both a rapid orbit as well as an elliptic eccentricity inside such an orbit, culminating in a final merge of the two bodies.

Orbital Decay and Merge Dynamics

Intelligence 13 – The burst times of arrival appear to be occulted on a semi-regular basis (16 times).
Intelligence 14 – The only exception to the natural logarithmic discipline of this curve occurs with each occulting – as if the occulting member is actually momentarily delaying the natural log decay of the emanation source in some fashion.
Intelligence 15 – Because of the high speed and elliptical nature of the suggested object orbits, this set of curve metrics suggests that both the emanation source and the intervening gravitational source are massive gravitational bodies.
Intelligence 16 – The decay gapping appears to exhibit an early elliptical orbit profile, and then progress steadily into a faster and faster orbit, then mass merge profile, over the period of 5 to 7 hours. It appears as if the emanation source itself is the smaller of the two bodies.

As we saw in Intelligences 11 and 12 above, buried within this curve are several interventions in the rate of decay in the arrival timing, highlighted by the 16 horizontal orange markings in the chart to the right. One can observe that the actual decay took longer than its natural logarithm analog in purple. This suggests an occulting by a larger body of some type repeatedly moving in front of the burst source – possibly merging with it briefly during the cessations in burst activity (which, as you will notice, are technically ‘suspensions in decay’), and then finally and permanently at the end of the curve.

The distention of the continued logarithmic curve thereafter in time suggests a body which is so close to the source that it is altering the very decay physics of the emanation source itself, such as in the case of the consumption of a neutron star (or something denser) by a black hole. However, this is very preliminary and only mildly inductive. The occurrence of the 57 minute break runs in contrast with the breaks/gaps in decay which occur earlier in the burst decay process. Those earlier gaps appeared to be more orbit related – however, as the orbit of the smaller FRB source body decays over time, one can see the gapping becoming more and more frequent until the burst 82 (57 minute) merge event. Thereafter, bursts became less and less common until there were none at all. Depicted inside the graphic to the right in black are three concept orbit states which relate to the various burst signatures along the 5 hour decay log.

This suggests that a repeating FRB is only therefore a ‘multiple FRB’; not sustainable in reality, and not ‘repeating’ in the Search for Extraterrestrial Intelligence (SETI) sense. My projection is that we will hear no more noise from FRB 121102 in the future.

The occultings suggested by the data are complex, but not so complex as to be outside the possible range of Relativistic or even classic orbital dynamics. The relatively level state of the decay process during the gaps (flat orange lines in the graph to the right) could stem from a contribution of exotic material mass between the MIGO and emanating body, or as well be simply the result of a delay in the arrival of those bursts by their having to be refracted around the occulting MIGO body as it passes in front of the emanation source. It is tempting to jump to the conclusion that the latter explanation here fits the data well, as indeed it appears to do inside bursts 1 – 48. However, as seen in the curve above, a later 57 minute gap in burst activity results in a depression of the decay rate for a substantial period of time, lending more to the mass contribution explanation than the occult-refractory explanation. Overall, a disintegrating orbit scenario, with Doppler effect constituting the main mechanism underlying the differential red shift in the MIGO group, is a superior explanation.

The Implications of This Observation Set

Objective Implication

The exotic profiling of the MIGO cluster, along with the arrival gapping in energetic decay, appears to have been generated by the orbit of the FRB 121102 emission source around a massive intervening gravitational object. The MIGO suggested above would have had to be very close to the radio burst emission point in space, and very tight along the line of sight with Earth during occultations. This is because the ΔT(2) to ΔT(1) differential in the above equation proved to be very slight to nonexistent on the epochal scale of time involved. The images to the right and below are speculative, but portray a highly eccentric orbit dynamic between two black holes which have just initiated collision. Such a collision would be necessary to account for the high speed orbital occulting displayed in the Intelligence 13 – 16 graphic.

This inductively inferred scenario would account for the three critical path intelligence components:

  1.  Erratic occult gapping of bursts
  2.  Added Planck dilation of .32 ∂v/∂t refracted bursts
  3.  The monumental delay in the natural decay of the emanation source during occult gaps.

But it would not account for the lack of a Shapiro time delay observation (Intelligence 5). This is deductive in its critical path inference.

The burst dynamics, as well as the origin of FRB’s themselves, could be the result of the collision of two black holes – wherein a special condition exists which creates in the smaller (orbiting body) of the two, or in an intermediate exotic plasma or yet unidentified space-time condition, a brilliant 1 millisecond burst of narrow-band decay energy (say the momentary collapse or appearance of a neutron body releasing its quark binding force). In the case of FRB 121102, that special condition existed long enough to exhibit a natural energy decay profile, momentarily and erratically interrupted by the intervention of the MIGO black hole (most likely an occulting). I have developed a concept illustration above in an attempt to depict this dance between two black holes.

It is very possible that both scenarios are occurring – wherein there is an alternation between exotic elliptical gapping and mass merges at play. In fact, as you observe the gapping inside the arrival profile versus a pure logarithmic decay curve, you will notice increasingly large gaps in the decay time, gaps which shift in their nature from Doppler red/blue shift dynamics into mass contribution dynamics.

This suggests an artifact of the elliptical orbital collision and then mass merging of two gigantic massive bodies over a 5 – 7 hour period, as the genesis of Fast Radio Burst 121102.

Regardless, what this intelligence also suggests is that both the emanation source AND the intervening body are BOTH of a massive nature. And the ensuing dance energy is stimulating repeated brief 1 ms eruptions of electromagnetic energy, sparkling like a strobe in an erstwhile disco of black holes tripping the light fantastic.

Deductive Inference: We Found Schwarzschild but Not Shapiro – And You Need Both

Finally, a deductive inference regarding the FRB emission structure can be discerned by examining the implications of the General Theory of Relativity upon this intelligence set. The problem with Intelligence 10 above is that it violates my understanding of electromagnetic energy propagation and Planck red shift. The Planck dilation of the MIGO .32 ∂v/∂t bursts featured an enormous impact in terms of such dilation – 2.5 GHz and 20 milliseconds, roughly equal in magnitude to each other, resulting in an overall .11 ∂v/∂t of additional Planck dilation. This equates to an added 1.5 billion years of light travel imbued into only a subset (half?) of these signals – signals which we know emanated from the same source at the same time. However, the delay in time of arrival was essentially negligible – on the order of an estimated 120 seconds at most, over a base of 2.5 billion years (about 7.9 x 10^16 seconds). This is essentially zero impact on the speed of this signal’s propagation versus the speed of light, c. In a Newtonian sense, the negligible delay or decay gaps might be explainable simply by the longer physical path that particular light vector took relative to a line of sight path to Earth. The problem is that this negligible difference violates the Shapiro time delay which should have been embedded into the .32 ∂v/∂t group of bursts, according to the formula5
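
For reference, the canonical textbook form of the Shapiro delay imparted by a mass M upon a passing electromagnetic signal – presumably the relation intended here – is, with r_E and r_S the distances of Earth and of the source from the mass, and R their separation:

    Δt_Shapiro ≈ (2GM/c³) · ln[ (r_E + r_S + R) / (r_E + r_S − R) ]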

A case where M is rather large. The conflict resides in reconciling the rather null presence of any observed Shapiro time delay, with the observed monumental effect of the ostensible Schwarzschild time dilation metric in the .32 ∂v/∂t group, which is governed by the formula6
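
Again for reference, the standard Schwarzschild time dilation relation for an emission at radial coordinate r from a non-rotating mass M – with r_s = 2GM/c² its Schwarzschild radius, and presumably the relation intended – is:

    t_observed = t_emitted / √(1 − r_s/r)     and equivalently     f_observed = f_emitted · √(1 − r_s/r)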

M is exceedingly large in both cases. So what gives?

There should have been both a Shapiro time delay and a Schwarzschild time dilation inside the signals – and we apparently only got one of them at best. Therefore the lensing explanation for the MIGO Cluster group fails. We are left with a Relativistic Doppler red/blue shift as the remaining mechanism.

High Speed Orbital Doppler Red/Blue Shift Differential – We Got Bursts Coming and Going

However, another possibility resides here which potentially resolves this paradox: both signal groups may already reflect the Shapiro time delay, and there may in actuality be no differential Schwarzschild time dilation either – that factor being equal in both the Primary and MIGO burst groups. Instead, the MIGO group's red shifted profile would have been generated simply by a relativistic Doppler shift, derived from the speed of the source away from us relative to the speed of light. In other words, the source was alternating in its motion toward and away from Earth as it emitted this series of bursts. This would be according to the formula7
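
The standard relativistic Doppler relation for a source receding at velocity v (with β = v/c) – presumably the relation intended here – stretches the wave duration and lowers the frequency in the same proportion, the signs reversing for an approaching source:

    f_observed = f_emitted · √[(1 − β)/(1 + β)]     and     t_observed = t_emitted · √[(1 + β)/(1 − β)]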

Where v would be the velocity of the emitting body away from Earth during the red shifted emissions, affecting both t (wave duration) and f (wave frequency). To the credit of this idea, the emissions did come in profile-contiguous groups early in the series (Intelligence 12), as this construct might suggest. As well, the two sets of burst groupings exhibited bilateral symmetry around their common average – exactly what one would expect in orbit-cycle Doppler dynamics. But the emitting body would also have had to be traveling around its gravitational host (required in this case anyway, in order to allow for the alternation between the Primary and MIGO blue/red shift profiles) at a significant fraction of the speed of light. So let’s examine this alternative then. Recall that we observed 17 orbits (16 occultations) in about 5 hours. At a radius of 1 million miles between the black holes, this would represent an orbital velocity given by

or 5934 miles per second – where C is the number of cycles undergone (17), and P is the duration of the merge process (about 5 hours). That equates to a v of 3.2% of the speed of light on average for the 17 cycles – enough to do the job on the Hubble (λ) differential required, especially given that we must divide the .11 ∂v/∂t by a factor of two, since we are receding in one burst group and approaching in the other. Principally, once noise and error are removed, we arguably are left with only these two distinct red and blue shifted burst profiles.
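
A short sketch of that cycle-count arithmetic follows, reading the relation above as the circular-orbit approximation v = 2πrC/P (an assumption on my part as to its exact form):

```python
import math

# Assumed circular-orbit reading of the relation above: v = 2*pi*r*C / P,
# with C orbit cycles completed over a merge duration P at a 1,000,000-mile radius.
radius_mi  = 1_000_000
cycles     = 17
duration_s = 5 * 3600          # ~5 hours of burst activity
c_mi_per_s = 186_282

v = 2 * math.pi * radius_mi * cycles / duration_s
print(f"mean orbital speed ~ {v:,.0f} mi/s (beta ~ {v / c_mi_per_s:.1%})")

# The .11 dv/dt red shift contrast is split between the receding and approaching
# halves of the orbit, so each leg need only supply about half of it.
print("per-leg shift required:", 0.11 / 2)
```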

So it is very possible, even likely, that the orbital velocity of the smaller black hole (the emission source), orbiting at ~1 to 4% of the speed of light around a larger black hole, could explain the differential red shift between the Primary and MIGO fast radio burst groups, while at the same time allowing the FRB bursts to arrive in a clean natural log time distribution.

What remains to be explained is the mechanism inside the smaller black hole (or between it and the MIGO body) which allows for a natural logarithmic decaying multiple set of 2.5 GHz narrow band and discrete 1 ms time truncated electromagnetic frequency emissions.

It is possible that the very act of accelerating to a fraction of the speed of light, on the part of a smaller black hole approaching a larger one, serves to produce disruptions in relativistic physics such that discrete quanta of spacetime are ejected from the smaller black hole at the signature frequency of that hole. In a direct collision, this only happens once. In an indirect collision, we now know it can happen 93 times.

Mystery Solved?

Finally, an intervening plasma or gas cloud could not have possibly caused this particular set of observations either. So if the blue/red shift orbit explanation above is not valid, then a dilemma exists, to my understanding, in that a Planck dilation of extraordinary magnitude in its impact to a burst signal was matched to a rather non-remarkable impact to the speed of that electromagnetic signal, on the part of the same intervening massive object(s), over the same time and space vectoring. And if this is valid in structure, and my understanding is correct, it bears profound implications for our current paradigm of inflationary theory. Essentially, if an electromagnetic signal can be red shifted through the presence of gravity-time alone (Schwarzschild time dilation) in this manner and not be simply dispersed in its lower frequencies, yet its speed relative to c not be appreciably altered (no Shapiro time delay), then there is no need for galaxies to be ‘hurtling apart on a galactic scale’ (actually space-time itself inflating) to stand as the explanatory mechanism for an observable red shift in EM energy transiting our universe. The red shift per hoc aditum would be simply an artifact of EM energy having traversed time and gravitational fields. In other words, a 2 dimensional Planck dilation (G,t), as opposed to a 3 dimensional space inflation (l,w,h). In other words, space is not inflating (Scale Invariant Cosmological Model) – rather, gravity is serving to dilate time (t). Under this line of reasoning, gravity-time dilation alone would cause the red shift differential between these two sets of signals.

To be fair, such an alternative (time dilation) model of the red shifted universe has been proposed recently by University of Paris astrophysicist Jean-Pierre Petit, but has so far not received much ear from the scientific community at large. Time dilation models more than adequately explain the Hubble red shift, and in some circumstances do a better job of explaining it.8 Does the FRB 121102 data support the Scale Invariant Cosmological Model?

However, Ockham’s Razor suggests that since we now have a less feature-stacked mechanism viable inside a classic and well supported model, there is no need to introduce the Scale Invariant Cosmological Model explanation just yet. Although there is inductive support for such an idea, the current model carries with it an explanation sufficient to reject pursuing it at this moment.

Unless I am mistaken in all of this, of course. One of the tenets of ethical skepticism is to ask the question ‘If I were mistaken, would I even know?’ And in this case, I would not know, and accordingly should ask for help. Any physicists out there who understand this better than I do, and who can provide me with an understanding of a mechanism which serves to reconcile this observation back into alignment with standing universe inflation and red shift theory – please drop me a note and correct or enlighten me. It would be much appreciated.

The database I assembled and used for this analysis resides below. Click on the image to expand it to full size or save it. The Primary Cluster leading signals are in green shading, while the MIGO Cluster signals are shaded in orange.

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “Exotic Nature of FRB 121102 Burst Congery” The Ethical Skeptic, WordPress, 9 Nov 2018; Web, https://wp.me/p17q0e-8yk

November 9, 2018 | Ethical Skepticism

Post Stockholm Syndrome

More than simply developing an affinity for their captor, victims of Post Stockholm Syndrome begin to develop or are coerced into a state of amnesia about their being held hostage in the first place. Under such a condition, maintenance of this anosognosia on the part of the hostages becomes the preeminent priority.

Stockholm Syndrome as a social term was first used after four hostages were taken during a 1973 bank robbery in Stockholm, Sweden. The hostages in that prosecution defended their captors after being released and would not agree to testify in court against them. Stockholm syndrome is identified through the affinity that hostages may often develop towards their captors, in contraposition to the fear and disdain which an onlooker might feel towards those same captors.1

Accordingly, we hold this current social definition of Stockholm Syndrome:

Stockholm Syndrome

/psychology : human interaction : willful blindness/ : a condition in which hostages develop a psychological alliance with their captors during or after captivity. Emotional bonds may be formed, between captor and captives, during intimate or extensive time together; bonds which however are generally considered irrational in light of the danger or risk endured by the victims of such captivity.

Wikipedia: Stockholm syndrome

Now for a moment, let’s expand the circumstance of captor and hostage under a Stockholm Syndrome such that it encompasses a substantially and sufficiently longer period of time. A circumstance wherein, because of generational turnover, pluralistic ignorance or mental impairment/injury on the part of the hostages, a milieu of amnesia has begun to set in. An amnesia which blends together, both neutral to positive mythological sentiments toward the captors, as well as a comprehensive forgetting of the circumstances of illegitimate captivity in the first place. Such a context broaches what I call Post Stockholm Syndrome.

Post Stockholm Syndrome

/Philosophy : ethics : malevolence/ : a condition wherein Stockholm Syndrome hostages, more than simply developing an affinity for their captor, begin to develop or are coerced into a state of amnesia about their being held hostage in the first place. Under such a condition, maintenance of this anosognosia on the part of the hostages, through information/science embargo, becomes the preeminent priority.

Captor actions enforcing such amnesia under a Post Stockholm Syndrome circumstance may also be falsely spun by the captor as enforcing ‘The Prime Directive’ – especially if the captor has masqueraded illegally in the role as a deity, sexual/breeding/genetic tyrant, slavemaster, governing body or other form of abusive godship over the hostages during its activities as captor.

In such a circumstance of detection risk, one wherein the captor could be held accountable for its malevolent actions by an outside Authority bearing punitive power – concealment of past activity, a lack of captor detectability or apparent presence, along with a pervasive and enforced collective-amnesia on the part of the hostages are all of paramount importance. Means of enforcing this may include:

  • Dividing hostages into entertaining and constantly warring factions
  • Denial of essential energy, health and development technology, save for that which will maintain firm order and war footing
  • Appointing governing authorities among the hostages who are complicit in an information/science embargo, along with the resulting amnesia and conflict
  • Developing ‘Samson Option’ destructive devices that can serve to obliterate everything if the captors are detected/threatened by Outsiders
  • Development of embargo-compatible pervasive, holy and club-enforced theologies and atheologies concerning the state of the hostages, fully explaining the circumstances in which they find themselves as being either ‘just’ or completely by chance
  • Posing captor activities as random events, or those of a higher enlightened being, or as being derived through an ‘unfathomable love’ for the hostages
  • Posing captor activities as just deserts for some offense or group sin the hostages have committed
  • Establishing the false mythology that if the captor is displaced and punished/banished, the captives are destined to receive that very same punishment/banishment for their wrongdoings as well
  • Inflicting the hostages with shortened lives bearing copious amounts of cerebral impairment, substance abuse, unchecked corruption, mandatory labor, disease, starvation and suffering – in order to keep their overall mental acuity, effectiveness and awareness low
  • Captor enjoying all of the above as a type of power, belonging, loosh-as-a-drug or false clout which may be used as currency to purchase allegiance to its scheme.

Such a captor may face the inevitability of having to become ‘one-in-the-same’ or of the same genetic fabric as its hostages, in order to evade eventual enforcement of the penalty of Law upon them for such malevolent actions. In this way, captors hope to circumvent the Law and skillfully confiscate that which they sought through the hostage-taking to begin with.

After all, if the hostages indeed bear the same fate as the captors, why then would outside Authorities hesitate in their intervention at all?

A captive hostage circumstance is always based upon lies – no matter how popular, loving or righteous those lies might be codified – no matter how random, natural or scientific they may be framed.

Just as with mathematics, the ethical skeptic can extend the logical reach and calculus of philosophy, to stand inside theoretical circumstances which are not readily apparent to the average philosopher. Does such a circumstance regarding Post Stockholm Syndrome exist for humanity on Earth today? I do not know the answer to that question from an epistemological standpoint – however, just as with most principles of humanity, while I can’t define its presence, I know when I am in it.

The Ethical Skeptic, “Post Stockholm Syndrome”; The Ethical Skeptic, WordPress, 8 Mar 2020; Web, https://theethicalskeptic.com/?p=44557

March 8, 2020 | Ethical Skepticism

The Climate Change Alternative We Ignore (to Our Peril)

A study released this week from the Institute of Atmospheric Physics/Chinese Academy of Sciences, and Science Press and Springer-Verlag GmbH Germany, claimed the world’s oceans are warming at the same rate as if five atomic bombs were dropped into the sea every second. Breitbart News Network, 15 Jan 2020
When the Earth’s core enters an exothermic cycle, the Earth’s air-conditioning heat pump gets less efficient.

I read a very interesting study that a friend forwarded to me yesterday; one which piqued my interest in summarizing some of the research I have done over the last ten years regarding climate change. Yes, it is generally acknowledged by mainstream science and society at large that our planet’s oceans are heating very fast.1 2 The result of this warming is an increasingly unhealthy environment for our ocean’s flora, fishes, microbiota, mollusks, crustaceans and fauna.3 To varying degrees, this emergent condition threatens everything which lives on planet Earth. The vast preponderance of scientists agree that we are well underway into the sixth mass extinction, or what could reasonably be titled the Anthropocene Extinction. Much of this is the result of extreme and recent climate change brought about through man’s activity.

Now before reviewing this article I must ask two things of its prospective reader. First, if one finds themselves tempted to shift their more-sciencey-than-thou underoos all askew, and further perceives sufficient knee-jerk dissonance coming on to assign me an ‘anti-’ label – understand that I am a proponent of addressing anthropogenic global warming as a first priority for mankind. I have worked harder than 99% of this planet inside issues targeting mitigation of volatile organic compounds, alkanes, methane, carbon monoxide and dioxide contribution on the part of mankind. I have conducted professional studies regarding the value chain of carbon inside the economy, and have developed businesses and worked to change markets, with a principal focus of mitigating carbon contribution by the various industries involved. I am gravely concerned about human contribution to the stark rise in global temperatures now obviously underway.

Second, what I am summarizing in very short form herein stems from hundreds of hours of research and literally multiple hundreds of references which I cannot possibly compile into this blog article by coherent sequence – without sacrificing the ability to deliver its core message. This is a summary of my analysis, observations and thoughts; all of which I have developed on this issue over time. It is meant to provide a framework of sponsorship behind an idea which has slowly formulated in my head. This idea is a construct, an idea which aspires to be developed into a real hypothesis. As such, this work is not posed under a pretense of residing at the level of a broad-scope scientific research effort. To do full justice inside this argument would require over 350 recitations and a great deal more research on the part of mainstream science. I have nonetheless included 47 essential references within this article. One can therefore anticipate herein a greater experiential depth and level of sourcing recitation as compared to the standard media propaganda article regurgitating the same ol’ same ol’ regarding climate change. My hope is that you find this article both challenging and refreshing. Please understand that its purpose is Ockham’s Razor plurality, and not any insistence (claim) as to a conclusive single answer. This idea therefore is not posed as a denial of anthropogenic induced climate change.

If what I propose here as a supplementary contributor to climate change theory begins to explain more completely what we are observing globally – then the construct will have served its purpose. Further then, it is my opinion that its core kinetic-energy-derivation argument bears soundness, salience, elegance, logical calculus and compelling explanatory power – key prerequisites of true hypothesis. Despite its need for further development and maturation, this argument should not be ignored through our polarization over this issue politically. We need fewer children with scowling faces, and more unbiased thinking adults addressing this challenge.

That all being predicated, I propose below an additional hypothesis construct which I feel is now necessary as plurality under Ockham’s Razor. This is a construct which serves as a complement to our current understanding of anthropogenic induced climate change – and not as a replacement thereof. The argument is not one of a false dilemma, as fake skeptics most commonly deem any alternative idea inside this issue to stand as a threat to their authority. I have not derived this idea as the result of any form of agency, rather merely from a concern over the nine primary observations themselves (see below). Nobody handed me this idea – I came up with it on my own, only after exhaustive review of the data involved. The 80 year-old medium fallax error employed to dismiss this argument genre, by a wave of the hand and sciencey-sounding ‘watts per square meter’ distributed energy geophysics, merely stands as obfuscating propaganda. It is tail condition and asymmetric system ignorance. We are clearly past the point at which, if we did find that we were in part wrong on climate change, scientific culture would still be able to admit such error on the world stage.

The key issue entailed inside this argument is that of observed lithosphere and hydrosphere (ocean) heat – measures which are far outpacing what atmospheric carbon capture models have predicted.4 This is the critical path issue at hand.

Part of The Heat May Indeed Be Coming from Beneath Our Feet

I am not a climate scientist – but nor am I carrying anyone’s water on this issue. I do not possess an implicit threat to my career if I say something forbidden or research an embargoed idea. My efforts involve developing agricultural and green energy solutions which serve to reduce the carbon ppms imparted to the atmosphere inside those industry verticals. In the midst of my work inside climate change solution development, a number of peripheral observations I have made have begun to bother me greatly. They have caused me to perceive the necessity to formulate and propose another idea. An idea that in my opinion fits the observation base much more elegantly, without forcing, and in more compelling fashion than simply the Omega Hypothesis of ‘man is causing it all – no need to look any further’. These notions stem as well from my time managing an exotic materials lab, and from working with several US oil exploration companies. Don’t start labeling me; I work with green energy companies as well. My point is that this is an idea which requires a multi-disciplinary understanding of the physical phenomena involved.

In short, my alternative idea could be titled: ‘The Heat May in Part Be Coming from Beneath our Feet’. Its exegesis (at the end of this article), derived from a series of nine primary independent observations in order of critical path dependence and increasing inferential strength, follows:

Observation 1 (Inductive-Introduces Plurality) – Fall to Winter CO2 Rise Exhibits a Northern Hemisphere Winter Solstice Pause Which Should Not Exist if All PPM is Generated by Man Alone

The chart I developed to the right depicts the annual normalized cycling of carbon parts per million as measured at the Earth’s northern hemisphere Mauna Loa observatory (blue bars), as compared to the annual geographic latitude position of the sun (orange sinusoidal line).5 6 7 One can observe the strong consumption of carbon dioxide out of the atmosphere which occurs each spring and into the summer, upon the annual greening of the northern hemisphere. Take note here as to the raw power which nature possesses in mitigating atmospheric carbon, if left alone to do its work. This trend is mostly solar-photosynthesis induced, as its regression matches the latitudinal declination regression of the sun each year almost exactly (the summer months in the graph). Each year however, we experience a surplus between the carbon generated and the carbon which plants and algae consume (the difference between the magnitude of the peak on the left and the trough on the right in the blue bars) – thereby causing an annual overage in our planet’s carbon budget, if you will – an excess which accumulates and does not go away (observable in the carbon ppm and temperature graph below).

Now consider for a moment this parallel sympathetic trend between the solar latitude (declination) and the carbon ppm mitigation effect of northern hemisphere foliage in the spring and summer – and notice that this same parallel sympathetic trend is violated in the winter months for the northern hemisphere. If one examines the right hand side of the carbon ppm bars (15 Dec – 15 Jan), there exists a taper-off (flattening of ppm slope) in carbon contribution which occurs annually each time the sun hits its most southerly latitudes – a feature which is not a signature of economic activity, as man does not simply stop producing carbon in the winter, and in fact produces more carbon for heating dwellings and massive levels of travel. Rather, I propose that this flatter ppm slope stems from an annual winter cessation in solar heating of the high northerly-latitude permafrost, tundra and shallow oil formations (such as exist in Russia and between Alaska and Texas) – deeper geostrata, features and biomes which are already hotter than in the past because of some influence separate from mere solar radiation capture. In other words, the pace of methane and carbon emission is synced very heavily, almost exclusively, with the sun’s geographic latitude. One can see this inside the graph’s carbon ppm slope differential between the winter solstice period as compared to the vernal equinox period. The slope in carbon ppm’s is clearly less, during a time when its magnitude should actually be higher. This mandates plurality on the subject. Something in the northern regions of the globe responds in very sensitive ppm relationship with the rising of the sun’s geographic latitude across the Vernal Equinox (1 Mar – 15 Apr) – an effect whose magnitude is significantly larger than the carbon effect imbued through man’s activity during that same period.
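
A minimal sketch of the declination overlay follows, using the standard day-of-year approximation for solar declination; the Mauna Loa ppm series against which it is compared must be supplied from the NOAA data cited above.

```python
import numpy as np

# The orange sinusoid described above: approximate solar declination (degrees)
# by day of year. The detrended Mauna Loa ppm series would be overlaid separately.
day = np.arange(1, 366)
declination = -23.44 * np.cos(2 * np.pi * (day + 10) / 365.0)

# Winter-solstice window (15 Dec - 15 Jan) versus vernal-equinox window (1 Mar - 15 Apr)
solstice = (day >= 349) | (day <= 15)
equinox  = (day >= 60) & (day <= 105)
print("mean declination, 15 Dec - 15 Jan:", round(declination[solstice].mean(), 1), "deg")
print("mean declination, 1 Mar - 15 Apr :", round(declination[equinox].mean(), 1), "deg")
```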

Observation 2 (Inductive-Introduces Plurality) – Atmospheric CO2 Levels Follow Temperature Rises and Are Accelerating – Man’s Carbon Producing Activity is Linear and of Insufficient Slope to Drive This

In order to understand this correlation mismatch, one must understand what is occurring in the chart to the right. The two regressions – regressions of both Y-axis 1, ΔT or global temperature anomaly, and Y-axis 2, Mauna Loa measured carbon ppm’s – are aligned manually and made congruent so as to remove any reference range bias. This allows the reader to observe, in perspective, the tight relationship between carbon ppm measures at the Mauna Loa NOAA observatory and the global temperature increases since 1958.8 9 But one must remember that this apparent tight relationship is forced by me, through an annual and necessary adjustment of the two-axis regression alignment. If I apply this same regression alignment (the straight line in the graphic to the right) to other timeframes as well, suddenly the two curves do not match up as cleanly.

However, of key note even inside this clean and annually re-aligned graphic are several observations:

  • Atmospheric CO2 levels are increasing by a square law. A square law means that two or more carbon contribution factors are underway, not just one. This is because,
  • Economic activity levels on the part of man are not increasing by a square function – nor even this fast in slope. There was no slowdown in carbon ppm trends attributable to the global economic depression from 2008 – 2012.
  • Global temperature increases are rising discretely and linearly, while carbon ppm amounts appear to be chasing this trend by means of a continuous acceleration (man and unacknowledged natural sources serving a square law increase)
  • There is no acceleration-to-acceleration relationship anywhere inside this correlation data. There is one discrete change in temperature trend at 1965, a trend which remains linear thereafter – yet carbon ppm’s are accelerating.

In other words – global temperature increases appear to be leading carbon ppm increases – and are not solely generated by them. Otherwise we would observe a mutual acceleration, which simply does not exist in the data. Atmospheric carbon certainly will also serve to increase global temperatures – however this effect appears to be drowned out by another primary temperature change impetus. The point is that another source of global heating is evident here – and we have ignored it, possibly to our peril. This is a very critical difference in observation from most of the material I have reviewed in the media.
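
For those wishing to test the ‘square law versus linear’ distinction themselves, a minimal sketch follows – fitting a quadratic to each series and comparing the acceleration (second-order) coefficients. The series shown are stand-in shapes only, not the NOAA/NASA data.

```python
import numpy as np

# Stand-in annual series (Mauna Loa CO2 ppm and GISTEMP anomaly shapes only);
# in the real exercise these would be the NOAA/NASA series cited above.
years = np.arange(1960, 2020)
co2   = 315 + 0.8 * (years - 1960) + 0.012 * (years - 1960) ** 2
temp  = -0.05 + 0.01 * (years - 1960)

def acceleration(series):
    # The quadratic coefficient: nonzero indicates a 'square law' (acceleration),
    # near-zero indicates an essentially linear trend.
    return np.polyfit(years - years[0], series, 2)[0]

print("CO2 quadratic coefficient :", round(acceleration(co2), 4))
print("Temp quadratic coefficient:", round(acceleration(temp), 4))
```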

Observation 3 (Deductive-Introduces Plurality) – Ceres EBAF measures of Earth’s Reemergent Albedo are Higher Than They Should Be – Indicating Earth is Not CO2-Capturing as Much Heat as Climate Models Require

If one insists on using average watts per square meter measures to prove out a case for a specific model of climate change which involves atmospheric carbon trapping solar radiation – then that model prediction should be confirmed by observing a commensurate reduction in the reemergent albedo of Earth as observed from space. In other words, if our atmosphere traps solar radiation at a greater rate than in the past, then quod erat demonstrandum we should observe a 100% commensurate reduction in that radiation which reemerges from Earth’s atmosphere back into space. The problem is that we are not observing this commensurate level of albedo reduction.

A 2017 study by scientists Ned Nikolov and Karl Zeller published in the Journal of Environment Pollution and Climate Change elicits that the albedo of Earth has not diminished at a level sufficient to explain or corroborate 100% of the GISTEMP global increase in temperatures (the data I used for the escalation graph in Observation 2 above). One can observe this comparative in the graphic to the right – rights held by Drs. Nikolov and Zeller, and extracted from their publications.10 While Nikolov and Zeller propose that atmospheric pressure is the actual mechanism which is primarily sensitive-causal to global temperatures – it is clear in the Ceres EBAF data that too much solar radiation is being reflected/reexpressed back into space for a carbon capture model to stand as sufficient and necessary in explaining 100% of global temperature increases.

Two voices of support have been expressed by prominent climate scientists as to this need for a new explanatory model for the excess heat in the Earth’s atmosphere which cannot be explained by radiation capture models.11 Nils-Axel Mörner, the retired chief of the Paleogeophysics and Geodynamics Department at Stockholm University, is among those who express support for pursuing a new model which bears explanatory power for these findings.

The paper by Nikolov and Zeller is exceptionally interesting, a big step forward, and probably a door-opener to a new ‘paradigm’.

Nils-Axel Mörner, the retired chief of the Paleogeophysics and Geodynamics Department at Stockholm University

Professor Philip Lloyd with the Energy Institute at South Africa’s Cape Peninsula University of Technology (CPUT) also expressed support for the idea.

Nikolov’s work is very interesting, and I think the underlying physics is sound… However, they face the question, if not carbon dioxide, what is it?

Philip Lloyd with the Energy Institute at South Africa’s Cape Peninsula University of Technology

Read on, and I believe that what is proposed herein stands as a reasonable case for sponsorship as to what is causing this temperature increase, above and beyond what Earth albedo measures and stand-alone carbon capture impacts can substantiate.

Observation 4 (Inductive-Introduces Critical Path) – Mean Sea Level is Rising Yes – But MSL Variance Range is Also Increasing (and Should Not Be) – Global Ocean Current Speed has Increased by 15% Over that Same Timeframe

I took a sample of forty-five years’ worth of NOAA Tidal Station mean sea level (MSL) data from the tidal stations at Annapolis, Maryland, Bar Harbor, Maine and Montauk, New York.12 You can observe this compiled data in the graph to the right. I chose three geographically proximal sea and temperature monitoring stations in order to observe any common signal inside their data – but three also with sufficient variance in terrain, so that constrictions from geographic coastal formations did not come into play within the MSL range data. The critical path issue involved regards the red variance-range bands surrounding the mean sea level rise.

Yes, it is clear that mean sea level (MSL) is rising – and this does concern me greatly. But mean sea level ranges differently from year to year, based on the timing of the moon. The magnitude of this variance range itself should not increase over forty-five years (and the gamut of lunar periodicity) under a simple rise-in-sea-level scenario. Yet it is increasing. There exists only a very small set of possibilities by which this can occur over a large geographic region (as sampled above): a change in the position of the Moon (which we know has not occurred), a change in the height (altitude) of the landmass or local ocean bottom, or a change in the local upper mantle gravitational effect upon the ocean immediately above it.

As a sailor and navigator who is familiar with and has employed mean sea level measures for decades, I find the migration in this variance phenomenon enormously bothersome. One can observe in the orange bars in the graph to the right that the variance range of the annual MSL for the three monitoring stations has increased by 25% over 45 years. This is a monumental and recent change in a factor/measure which should not change at all – or cannot change without a commensurate change in local gravity or currents. As we examine next, currents may well play into this observable MSL range change, or perhaps there is another factor which I am discounting, but right now to me this is a big problem in the sea level data, above and beyond the mere matter of its rising.
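
A minimal sketch of the variance-range comparison described above follows, using stand-in numbers in place of the NOAA tide-station series:

```python
import numpy as np

# Stand-in monthly MSL values (metres): a slow rise plus a seasonal/lunar spread.
# In practice these come from the NOAA exports for Annapolis, Bar Harbor and Montauk.
rng = np.random.default_rng(0)
years = np.arange(1975, 2020)
monthly = np.array([
    0.003 * (y - 1975) + rng.normal(0.0, 0.05 + 0.0005 * (y - 1975), 12)
    for y in years
])

annual_range = monthly.max(axis=1) - monthly.min(axis=1)   # width of the red band
widening = (annual_range[-10:].mean() / annual_range[:10].mean() - 1) * 100
print(f"change in annual MSL variance range, last decade vs first: ~{widening:.0f}%")
```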

There is only one energy source in contact-proximity to the Earth’s oceans,
which can deliver enough kinetic energy to speed up all the Earth’s ocean currents by 15% in just two decades

and it is not the sun, and certainly not the Earth’s atmosphere.

In addition to this change in the viable range of annual Mean Sea Level comes a commensurate rise, over that same period of time, in the average speed of global ocean currents.13 Ostensibly this increase in ocean current speed is driven by the ‘wind’, according to purported climate models and linear inductive affirmation (the weakest form of valid inference) science. But using standard rule-of-thumb submarine sailing doctrine (rules long tested at sea) – a 48 knot wind is required to create 1 knot of surface current to 40 ft of depth. Heck, 16 knots of wind are required to move an object floating on the water 1 knot (an object without a sail); so much more wind velocity is required in order to move the water itself. Yes, hurricanes and cyclones push ocean surges ahead of them which can move at the same speed as the depression center, but these are pressure displacement waves and not ‘currents’. In fact yes, world wind velocities have increased on average by 15% (6.5 to 7.4 knots) over the last four decades. In addition, all ocean currents are increasing in speed, and not just the surface currents in direct communication with atmospheric inertia.14 But this increase in global wind speeds accrued over four decades, whereas ocean currents have increased in speed by 15% in just half that time. This means that atmospheric winds could account for a woeful 1% (1/2 x 1/48) of the ocean current increase in speed (or even total kinetic energy).
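
The rule-of-thumb arithmetic behind that 1% figure is reproduced in the short sketch below, using the ratios exactly as stated above.

```python
# Rule-of-thumb check of the wind-versus-current argument: the 48:1 wind-to-surface-
# current figure is the quoted sailing doctrine, and the timing factor reflects a
# four-decade wind increase versus a two-decade current increase.
wind_increase_span_yrs    = 40
current_increase_span_yrs = 20
wind_to_current_ratio     = 48

timing_factor = current_increase_span_yrs / wind_increase_span_yrs   # 1/2
attributable  = timing_factor * (1 / wind_to_current_ratio)
print(f"share of the current speed-up attributable to wind: ~{attributable:.0%}")
# -> roughly 1%
```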

There exist only two factors which possess the requisite and massive motive power necessary to drive this observed ocean current speed increase and change in the range of mean sea level – geophysical and geothermal impacts to deep ocean conveyance currents; not atmospheric kinetic energy.

It is one thing to assume that atmospheric temperature is driving ocean temperatures (which is a 1 to 1000 heat content problem in itself), but it is another level of linear-inductive-affirmation stupid to presume that winds are driving a 15% acceleration of deep ocean currents – immediately after discovery of this fact, and based upon zero research.

Therefore, a reasonable deductive (not inductive nor affirming) contention can be made that changes in the geothermal and gravitational signature under the oceans are the impetus behind both the increase in ocean current speeds, as well as the dilation of annual mean sea level variances globally. Accordingly, our process of increasing-strength inference follows that particular critical path as we proceed onward with our observation set.

Observation 5 (Deductive-Consilient) – The Schumann Resonance Banding-Amplitude Has Ranged High – While Geomagnetic Moment/Polarity has Weakened/Wandered – All Highly Commensurate with Historical and Recent Global Temperature Increases

It is a well established fact that the global Schumann Resonance range banding-power peak serves as a very precise indicator of global temperatures.15 16 Recent Schumann Resonance banding-power (not the frequencies themselves, as has been errantly reported by some sources17) has ranged upwards through more of the higher frequencies inside the established eight resonance harmonics (six of which manifest in the graph example to the right), indicating a weakening in the Earth’s magnetic moment generated from its solid core.18

A comparison of electromagnetic and temperature data indicated that there is a link between the annual variation of the Schumann resonance intensity and the global temperature.

M. Sekiguchi, M. Hayakawa, et. al.; Evidence on a link between the intensity of Schumann resonance and global surface temperature; Ann. Geophys. 2006

This weakening of the Earth’s magnetic moment as indicated by the chaotic power banding in the Schumann Resonance comes commensurate with a dramatic change in the geographic location of the geomagnetic north pole.

The Earth’s geomagnetic north pole has wandered significantly in the last two decades. In those decades, the geomagnetic north pole accelerated to an average speed of 55 kilometres (34 miles) per year.19 20 One can observe this acceleration in the migration of the geomagnetic north pole in the yellow dots inside the graphic to the right, obtained from the National Centers for Environmental Information of NOAA (click on image to see an enlarged version).21 These yellow balls reflect the movement of the north geomagnetic pole just since 1973, while the remainder of the colors cover the timeframe back to 1590. This as well comes commensurate with a pronounced weakening of the Earth’s magnetic moment.

It’s well established that in modern times, the axial dipole component of Earth’s main magnetic field is decreasing by approximately 5% per century. Recently, scientists using the SWARM satellite announced that their data indicate a decay rate ten times faster, or 5% per decade.

Global Research The Weakening of Earth’s Magnetic Field Has Greatly Accelerated, Could Have Apocalyptic Implications for All of Us; 12 Apr 2019

While we don’t know fully what all this means in terms of global climate change, mankind can, at the very least, draw the inference that substantial changes are at play in both the Earth’s inner and outer cores, which serve to generate our planet’s magnetic moments. These three changes – higher Schumann banding, acceleration of the geomagnetic pole’s migration, and weakening of the Earth’s magnetic moment – run commensurate with, and sensitive in dynamic to, the last two decades of extreme climate change. Such changes historically have served to correlate well with global temperatures. These changes cannot be ignored as potential contributors vis-à-vis the ‘heat coming from beneath our feet’.

Observation 6 (Deductive-Consilient) – Earth's Rotation is Slowing Faster than Its Historical Rate – Indicating a Recent-Term but Constant Ferrous Mass Contribution in Phase Change from L-HCP Outer Core to L-FCC Lower Mantle

What is clear in the chronological records of the Earth is that the outer rotational body is slowing, due to a transfer of both kinetic energy and, more importantly, mass from the inner rotational body of the Earth to its outer rotational body.22 In the graphic to the right, one can observe the daily slowing of the Earth's rotation, along with the comparative addition of 'leap seconds' throughout the last 55 years. There have been 27 leap second additions since 1972, according to the National Institute of Standards and Technology.23 This comes commensurate with NASA Global Land Ocean Temp Index changes showing that 75% of our 1880 – 2015 global temperature index increase has occurred since 1972 as well. This represents the fastest addition of leap seconds on record (since 1880), during the very period which accounts for the bulk of our global temperature increase since 1880. This is not mere coincidence.
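A back-of-envelope check on what that leap second count implies for the average length of day; the end year and the simple averaging are my own assumptions, used only to show the scale of the effect:

```python
# What 27 leap seconds since 1972 implies for the average excess length of day.
# The end year (2020) and the simple averaging are assumptions for illustration.

leap_seconds = 27              # NIST count of additions since 1972, cited above
years = 2020 - 1972            # assumed span of the tally
days = years * 365.25

excess_lod_ms = leap_seconds / days * 1000.0
print(f"Average excess length of day: ~{excess_lod_ms:.2f} ms over {years} years")
# ~1.5 ms: on average each civil day in that span ran about 1.5 ms longer than 86,400 s.
```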

Our pace of addition in leap seconds (red line in the graph above) is currently many times faster than the Earth could have sustained epochally within its angular momentum budget. Had the Earth been slowing at this fast a pace throughout its eons of history, the planet would have come to a rotational halt by now. So we are obviously in a kind of uber-slowing phase of outer rotational body angular velocity. In the graphic to the right, one can see the simple principle that, when the core of the Earth, which spins separately from the outer rotational body of the Earth, passes mass to the outer rotational body, that outer body slows down in its rotation – and the inner body speeds up.24 Ergo, we add leap seconds at a more aggressive pace, as we have been for the last 50 years. The result is much akin to when a spinning ice skater extends their arms, thereby slowing the angular velocity of their rotation – mass added to the extremity of a rotating body serves to slow the rotation of that spinning body. That mass is being handed from the outer core of the Earth into its lower mantle (part of the separate outer rotational body). This added mass is serving to temporarily slow the Earth's outer rotational sphere faster than it typically has been slowed by the moon and ocean tides throughout its history. This extra slowing will of course eventually end and reverse. But for now, in terms of understanding climate change, it is of significant importance. And of course, such an evolution correlates well with upper mantle activity, our next point in the observation base.
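The ice-skater principle invoked above is simply conservation of angular momentum (L = Iω). A minimal sketch, using placeholder numbers rather than real core or mantle values, shows the direction and scale of the effect:

```python
# Conservation of angular momentum for the ice-skater analogy above:
# if mass migrates outward and the moment of inertia I rises, the spin rate
# omega must fall so that L = I * omega stays fixed. Numbers are placeholders.

I_initial = 1.0          # moment of inertia of the outer rotational body (arbitrary units)
omega_initial = 1.0      # initial angular velocity (arbitrary units)

I_final = 1.00001        # an assumed 0.001% rise in I from mass moved outward
omega_final = I_initial * omega_initial / I_final   # L conserved: I1*w1 = I2*w2

slowdown_ppm = (1.0 - omega_final / omega_initial) * 1e6
print(f"Fractional slowdown: ~{slowdown_ppm:.0f} parts per million")
# Even a tiny outward redistribution of mass lengthens the day by a measurable amount.
```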

What they found is that roughly every 32 years there was an uptick in the number of significant earthquakes worldwide.
The team was puzzled as to the root cause of this cyclicity in earthquake rate.

They compared it with a number of global historical datasets and found only one that showed a strong correlation with the uptick in earthquakes.

That correlation was to the slowing down of Earth’s rotation.

Forbes; Geologist Trevor Nace; Earth's Rotation Is Mysteriously Slowing Down: Experts Predict Uptick In 2018 Earthquakes

This of course segues well into our next topic: the increase in earthquakes and volcanic activity globally.

Observation 7 (Inductive-Consilient) – Recent-Term Rise in Activity of Earth’s Upper Mantle in Terms of Earthquakes and Volcanic Activity Commensurate with Temperature Increases

While we have established a link between earthquakes and the slowing of the Earth's rotation, there is as well a well-established link between volcanic activity and the Earth's climate system.25 Both of these phenomena, earthquakes and volcanic activity, pertain to activity changes in the upper mantle and especially the asthenosphere.

This serves to raise the question, then: is global volcanism also on the rise? The correct answer is that we do not know for sure. The tally of listed active volcanoes has grown simply because the number and geographic spread of humans on the planet have both grown substantially over the last two centuries. However, to me the Smithsonian data, a portion of which is depicted in the graphic to the right (active volcano count in blue and number of eruptions in orange), does indicate a 3-to-5-fold increase in large volcanic activity since 1800. There exists, however, a concerted effort to downplay this putative increase in apparent large volcanism (as well as earthquakes) observed by mankind since 1800 – subjective essays which make a final claim to science of 'No, no, no', submitted alongside masked data which screams 'Yes, yes, yes!'. The caution exists for an understandable reason: the population of Earth has grown significantly in the most recent two centuries, and as a result the number of observed active volcanoes (and earthquakes) has also risen.26 A rise in observations does not, by itself, mean that volcanism is on the increase.

However, I went ahead and ran my own graph on the only unbiased database I could find on the matter, which you may observe to the right.27 Despite the threats and intimidation about using their data to come to a conclusion contrary to their doctrine, I believe that the Smithsonian data shows a significant increase in volcanic activity globally. Ignorance is never science, even if its enactment supports the 'correct answer'. This is the instance wherein an Omega Hypothesis becomes 'more important to protect than the integrity of science itself.' We shall have to see how this trend continues and how volcanic activity has served to impact Arctic and Antarctic ice cap formations in particular.28 I realize that this is a hot button issue employed frequently by AGW deniers, but to an ethical skeptic ignorance is never a satisfactory tactic in dealing with such rancor.

Observation 8 (Deductive-Critical Path) – Heat Anomalies are Not Entropic – Rather Bear Recurring Mantle-Like Cohesiveness – Heat is Arising Principally from Ocean Conveyance Belts at Mid-Atlantic Rise and El Niño Thermohaline Currents

Yes, we have good clear evidence of the increase in occurrence, patterning and frequency of global heat anomalies. But these anomalies exhibit other signal data which we tend to ignore. These anomalies also appear to originate at the same longitude, flow like molasses eastward around the planet geographically (one can observe the video here) and tend to cluster in mutually exclusive hemispherical Europe-Asia or Africa-Asia flow patterns, which alternate and bear fluid momentum. Such signal ergodicity cannot be ethically ignored. Examine the heat anomaly patterns/flows over the past 120 years and you will observe a cohesive and slow-fluid patterning imbued inside the occurrence of these anomalies. To a systems engineer, this is a signal pattern – and provides intelligence.29 To many other professionals it is a source of blank stares. This too is a problem.

No matter whether the heat anomaly flow is resident in the northern hemisphere or alternately the southern hemisphere, the heat anomaly itself always originates from the same longitudinal position – the Mid-Atlantic Rise: a bulge thought to be caused by upward convective forces in the asthenosphere pushing upward on the oceanic crust and lithosphere.30 This construct postulates that the Mid-Atlantic Rise is pushing more than simply mantle mass. It is pushing exothermic core kinetic energy (in a temporary cycle) as well – a cycle which is both releasing heat and serving as a reasonable cause of all the anomalous effects observed inside this article.

Notice as well that the cohesive dynamic of the temperature anomalies tends to begin in Europe and then extend into the Middle East, while at the same time a counter-sympathetic trend originates in Africa. In other words, when Europe heats up, Africa does not, and when Africa heats up, Europe takes a break from its anomalies – which cannot be explained in terms of human carbon emissions. In other words, the clumping and neural feedback signals of these temperature dynamics are following a sub-signal – an influence which resides beneath both tandem phenomena.

Observe in the graphic as well that 32 years prior to 2019, or in 1987, this flow patterning kicked into a discrete and sudden high gear. Man's economic and industrial output did not suddenly change in 1987 in so discrete a fashion, nor to this magnitude. This discrete change matches the temperature average increase chart I developed below, a chart in which temperature increases are preceding CO2 measures and not arriving as merely the result of them – one as well in which carbon ppm's are accelerating, while man's economic activity is not. What I see inside this data is something wholly different than the 1:1000 effect which can be imparted through the heating of oceans by atmospheric contribution alone. The energy contribution involved here is several orders of magnitude greater than the speed at which our carbon is binding heat into the Earth's atmosphere – and studies confirm this.31 As well, the heating of the oceans is far faster, and at the wrong depths, than can be imbued by a thin atmospheric heat content contribution.
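For readers who want to see where a '1:1000' figure of this kind comes from, here is a rough check using rounded textbook bulk values (my own round estimates, not figures drawn from the article's charts):

```python
# Rough check of the ~1:1000 ocean-to-atmosphere heat capacity ratio referenced
# above, using rounded textbook bulk figures (my assumptions, not the article's data).

atm_mass = 5.1e18      # kg, total mass of the atmosphere
atm_cp   = 1005.0      # J/(kg*K), specific heat of air at constant pressure

ocean_mass = 1.4e21    # kg, total mass of the oceans
ocean_cp   = 3990.0    # J/(kg*K), specific heat of seawater

ratio = (ocean_mass * ocean_cp) / (atm_mass * atm_cp)
print(f"Ocean / atmosphere heat capacity ratio: ~{ratio:,.0f}")
# On the order of 1,000: warming the full ocean by 1 K takes roughly a thousand
# times the energy needed to warm the full atmosphere by 1 K.
```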

A Case Example: The El Niño and La Niña Conveyance Effect

As a case example, let's examine the heat anomaly timing resulting from the deep ocean conveyance belts which serve to originate the El Niño and La Niña climatological phenomena specifically. In the graphic to the right, one can observe the deep ocean conveyance belt effect that pulls deep ocean conveyance (blue line) from the eastern Pacific into the highly mantle-active southerly polar latitudes, whereupon this serves to impart a heat anomaly. This heating delta T (ΔT heat anomaly) then in turn becomes El Niño as the conveyance belt turns and heads back northward and shallow (red line) along the South American coast. This dynamic system serves to generate both of these climatological variation phenomena.32 33 The map of deep and shallow ocean conveyance belts and their interdependence is called a Thermohaline map.34 In the graphic to the right one can observe that the pronounced El Niño heating and La Niña cooling effects are generated specifically by the ΔT heat anomaly which arises from that conveyance belt passing near hot Antarctic-latitude mantle and volcanic activity. This is denoted as point 1 in the Thermohaline graphic. In similar fashion, points 2 and 3 just happen to reside at the Mid-Atlantic Rise heat sources which we examined earlier in this observation.

The exchange points for conversion of a deep ocean current to a shallow ocean current are indicated as the yellow dots in the Arctic and Antarctic latitudes. But in reality deep ocean currents are in immediate contact with the abyssal layer of ocean throughout the globe, so this effect can happen anywhere, and not just at the conversion points. The key is this – if anywhere along this conveyance the blue lines are imbued with a heat anomaly, then this anomaly will carry forward to the shallow ocean currents (red lines at points 1, 2 and 3 on the Thermohaline Map). These heat anomalies (or the absence thereof) then dictate specifically whether or not the planet will observe an abnormally hot or cold year relative to the average. Keep both of these principles in mind as you read further on to Observation 9 below.
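The 'carry forward' behavior described above can be pictured as a simple delay line: heat injected anywhere along the deep limb surfaces only after the transit lag. The sketch below is a toy model only; the ten-year transit time and the pulse size are hypothetical.

```python
# Toy delay-line model of the conveyance principle above: an anomaly injected
# into the deep limb of the belt reaches the surface limb only after the transit
# lag. The transit time and pulse size here are hypothetical illustrations.
from collections import deque

transit_years = 10                       # assumed deep-to-surface transit time
belt = deque([0.0] * transit_years)      # anomaly (deg C) riding in each year-long segment

surface_arrivals = []
for year in range(25):
    injected = 0.5 if year == 3 else 0.0    # a one-off +0.5 C pulse at the deep source
    belt.appendleft(injected)               # new water enters the deep limb
    surface_arrivals.append(belt.pop())     # the oldest parcel exits at the surface limb

print(surface_arrivals)
# The +0.5 C pulse injected in year 3 surfaces in year 13: the surface signal
# lags the deep forcing by exactly the conveyance time.
```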

Now notice that I have placed a red and blue fingerprint by each respective El Niño and La Niña phenomenon in the Thermohaline graphic above, with red indicating a hot period and blue indicating a relatively cold period. If you examine the chart to the right, one can observe that these El Niño hot and La Niña cold periods fingerprint (not simply correlate) exactly to the timing in global temperature peaks which we identified in Observation 2 above. In this case example it is clear that deep ocean conveyance belt effects are driving atmospheric climate and not the other way around. Notice that the magnitude of the ΔT heat anomaly spread between just the 2017 El Niño and the 2019 La Niña is very pronounced. Notice further that just four of these scale events can account for the entirety of the last 50 years of atmospheric climate change alone. Add in the same peak contributors from points 2 and 3 along the Mid-Atlantic Rise as well, and this explanatory basis becomes not merely plausible, but compelling. The evidence is clear on this: global temperatures for sea and air are not only rising fastest at the poles (our critical ocean current cooling spots), but those rise variances are more pronounced than the general global variance – indicative of a causal, not subjective, profile. You probably guessed the next consilience – yes, these pole temperature surges are timed with El Niño hot and La Niña cold periods.

Just as the wind could not possibly physically drive the increase in ocean current speed, even so ambient atmospheric temperatures could not possibly drive the below observed polar temperature phenomena.

The Air Above Antarctica Just Got Very Hot Very Fast, Breaking All Previous Temperature Records35

Warming at the poles will soon be felt globally in rising seas, extreme weather – Arctic is heating faster than Antarctic36

Now realize of course that this flow of heat content (or lack of its former rate of cooling) from the poles and into their associated ocean conveyance currents constitutes just one single example of conveyance belt impact upon global climate. There are at least 5 other similarly pronounced global conveyance touch points we have not even taken into consideration in the graph above. It is no long stretch of conjecture, therefore, and possibly even conforming to Ockham's Razor, to consider that this case example in geothermal flow just might extrapolate to the entire planet's climate patterns, including its climate change as well. Such an idea cannot be dismissed by a one-paragraph statement from agency and little actual study whatsoever.

It is very possible therefore, that deep ocean heating bears the sensitivity effect necessary to explain the majority of global climate change,
and that further then, carbon ppm’s are chasing this statistic and may not be the sole cause of the entailed warming.

Such conjecture is not proof, however it does necessitate plurality. To dismiss this, constitutes an act of ignorance on the part of mankind.

Observation 9 (Deductive-Critical Path) – Abyssal Oceans are Absorbing More Novel Heat Content per Cubic Meter of Ocean (ΔT-gigajoules/m³) than are Surface Oceans by an Enormous Margin – This is Neglected and Highly Critical Path Climate Science

Finally, there is a highly probative and deductive climate observation set which we are ignoring as a science. The abyssal layer of the oceans has absorbed more heat content per cubic meter of ocean water than has the surface layer of the Earth's oceans. This should not happen in a solely solar-energy-capture global warming scenario. The atmosphere does not possess an immediate and direct way to rapidly heat the abyssal layer of the ocean (although the abyssal layer does bear a mechanism to heat the atmosphere, which we shall examine next).

We begin by outlining, on the right, the well-documented taper curve regarding ocean temperature progression versus ocean depth.37 As one may observe, the temperature of the ocean drops off very fast from about 300 to 1000 meters in depth. Thereafter ocean temperatures follow a linear taper until the final 500 meters of abyssal depth, wherein the temperature drops to about 0 to 3 °C. This entire temperature function is called the thermocline. The first challenge to note is that most of our climate change oceanographic measures are taken only to the 2000 meter level (surface layer, or grey shaded depths in the chart to the right), leaving mankind for the most part blind as to the thermal dynamics of both the deep (2000 – 4000 m) and abyssal (4000 – 6000 m) layers of the ocean.38 On the chart below, one can see those two layers along with a calculated thermal delta T per cubic meter of ocean water.
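As a visual aid, the taper curve just described can be approximated as a simple piecewise function. The breakpoint temperatures below are loose illustrative assumptions, not values read from the cited chart:

```python
# Illustrative piecewise sketch of the thermocline shape described above: a warm
# mixed layer, a steep drop between ~300 m and ~1000 m, then a slow taper toward
# the near-freezing abyss. Breakpoint values are assumptions for illustration.

def thermocline_temp_c(depth_m: float) -> float:
    if depth_m <= 300:                        # mixed surface layer
        return 20.0
    if depth_m <= 1000:                       # steep thermocline: 20 C down to 5 C
        return 20.0 - 15.0 * (depth_m - 300.0) / 700.0
    if depth_m <= 6000:                       # slow linear taper: 5 C down to 2 C
        return 5.0 - 3.0 * (depth_m - 1000.0) / 5000.0
    return 2.0

for d in (0, 300, 700, 1000, 2000, 4000, 6000):
    print(f"{d:>5} m : {thermocline_temp_c(d):5.1f} C")
```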

Over 3,000 free-drifting floats have been deployed all over the ocean and each float is programmed to sink 2,000 meters down, drifting at that depth for about 10 days. The float then makes its way to the surface measuring temperature and salinity the whole time. Data is transmitted to a satellite once the float reaches the surface, so that scientists and the public have access to the state of the ocean within hours of the data collection.

Windows to the Universe: Temperature of Ocean Water (How Climate Scientists Monitor Ocean Temperatures and Salinity by Depth)

Now that we know the lay of the land with respect to the 'normal' (for our intents and purposes, say the 1954 – 1958 timeframe) ambient ocean temperatures by depth, let's examine the temperature anomaly by those same 250 meter depth bands which we just employed to define the natural thermocline.

If we take the known percent of Earth ocean surface which is covered by each specific depth of ocean from 0 to 6000 meters – or what is called a hypsographic curve39 – and then use that arrival distribution to determine the percent of total ocean water, and therefore the cubic meters of ocean water as well, which exist at each band of ocean depth by 250 meter intervals, we arrive at the ocean-water-by-depth cubic volume distribution curve in the third and fourth columns of the graphic to the immediate right. These two columns present the percent of total ocean water in each 250 meter-depth band, as well as the resulting cubic meters which that percentage represents of Earth's total 4 x 10¹² m³ of ocean water (totaled at the bottom of column 3).40 This represents the cubic meters of ocean water which exist on the entire Earth, partitioned into 250 meter bands of depth. As one can observe, each nominal ocean depth begins to represent less and less of the total percentage of Earth's oceans as depths range into the lower abyssal (>5000 meters).
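The bookkeeping just described (area fraction by depth turned into water volume per 250 meter band) can be sketched as follows. The area fractions and total ocean area used here are hypothetical placeholders standing in for the hypsographic data, not the article's own figures:

```python
# Sketch of the hypsographic bookkeeping described above: the fraction of ocean
# still present at each depth, times the band thickness, gives cubic metres of
# water per 250 m band. The fractions below are hypothetical placeholders.

OCEAN_AREA_M2 = 3.6e14          # ~3.6 x 10^14 m^2 of ocean surface (rounded estimate)
BAND_M = 250.0

# assumed fraction of the ocean that is at least this deep, sampled every 250 m
frac_at_least = {0: 1.00, 250: 0.95, 500: 0.92, 750: 0.90,
                 1000: 0.88, 1250: 0.86, 1500: 0.85}   # ...would continue to 6000 m

for top, frac in frac_at_least.items():
    band_volume_m3 = frac * OCEAN_AREA_M2 * BAND_M     # water volume in [top, top+250)
    print(f"{top:>5} - {top + 250:>5.0f} m : {band_volume_m3:.2e} m^3")
# Deeper bands hold progressively less water, which is why the same heat content
# forced into them produces a much larger per-cubic-metre index.
```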

Subsequently, if we take the 2017 ΔT heat anomaly vs 1954, which was measured to be 148.5 zettajoules to a 700 meter depth,41 and allocate that heat content to the appropriate depth band, we arrive at the ΔT for each 250 m band of the upper surface layer of the oceans. Again, if we take the same heat content curve for the 700 – 2000 meter bands, and apply this same exercise, we find the ΔT for each of the 250 m bands in the remainder of the surface layer of the oceans. This allows us to now calculate a gigajoule per cubic meter index for the first eight depth bands of the ocean. As you may observe in the graph to the right, those shallow ocean 250 m bands have warmed substantially from 1993 through 2017, as expected from climate change impacts.42 This can be observed in the rightmost column in the graph, wherein the gigajoules per cubic meter index for the surface layer of the ocean is color highlighted by its heat content magnitude relative to the other layers (light orange).

However, if we continue this exercise and employ the heat content change data which has been measured in the few studies which do address climate impacts at the deep and abyssal layers,43 we find a reasonable taper curve in gigajoules per cubic meter, all the way to the 4500 meter depth level. This equates to a total 2017 ΔT heat anomaly of 345 zettajoules, by means of the three studies cited. In column 4, we have distributed that 345 zettajoules by the factor of the ocean's natural thermocline. As a note, one gets essentially the same anomaly distribution by depth if the discrete components of the heat anomaly are distributed by layer and strict study result cited herein (that heat anomaly distribution is shown in the 'Heat Anomaly ΔT Conveyance' graphic to the right). In either case, the heat content cited in the abyssal layer always forces an extreme heat into 'small footprints' mathematically, as may be observed as the 'Required Heat Anomaly' in column 5 of the graphic above. Indeed, the actual heat content changes (ΔT) measured in the abyssal layer in particular – given the much lower cubic amount of ocean water which exists at that layer depth – result in rather dramatic estimates for the required gigajoules per cubic meter index needed to resolve this heat anomaly layer and arrive at the 2017 total ocean anomaly of 345 zettajoules. One can observe this in the darker orange and red shaded high index numbers on the bottom right of the chart above.
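The per-band index itself is a one-line calculation: the heat anomaly allocated to a band, converted to gigajoules, divided by that band's water volume. The two inputs below are hypothetical, chosen only to show the mechanics:

```python
# Mechanics of the gigajoule-per-cubic-metre index described above. Both input
# values are hypothetical; only the unit conversion and division are the point.

ZJ_TO_GJ = 1e12               # 1 zettajoule = 10^21 J = 10^12 gigajoules

band_heat_zj   = 24.0         # heat anomaly allocated to one depth band (assumed)
band_volume_m3 = 8.0e16       # water volume of that band (assumed)

index_gj_per_m3 = band_heat_zj * ZJ_TO_GJ / band_volume_m3
print(f"Heat-anomaly index: {index_gj_per_m3:.4f} GJ per m^3")
# The same zettajoule allocation pushed into a low-volume abyssal band yields a
# far larger index - the 'small footprint' effect noted above.
```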

What we are observing in this set of calculations is that, of course, a heat anomaly per cubic meter of ocean water exists at the ocean surface; however, a more pronounced heat anomaly exists at the abyssal, volcanic and ocean trench depth bands of the Earth's oceans. This abyssal heat content anomaly of course does not just sit there. Nor is it ambient. It conveys as a belt of heat content (ΔT) inside the body of a long-extant current, rising eventually up to the surface – where it renders that ancient deep oceanic conveyor belt less effective at cooling the ocean surface and its communicating atmosphere than it has been in the past, thereby causing a net increase in global atmospheric temperatures.

It is clear that there exists an excess of heat anomaly content in the abyssal layer of ocean, relative to its volume of ocean water.
This must be examined – as it is both critical path and deductive.

In fact, two recent deep and abyssal ocean temperature studies comment upon this very observation, corroborating the necessity to begin to examine the abyssal layer and its critical path role in possibly effecting a portion of our observable climate change acceleration.44 45

Although considerable work has conclusively shown significant warming in the upper (<700 m) ocean where the bulk of historical ocean temperature measurements are found (e.g., Rhein et al., 2013, and the section above on The Observing Network), and extending down to 2,000 m during the recent Argo period, there is now a growing consensus supported by numerous studies that changes are also occurring in the deeper global ocean (>2,000 m). Based on observations below 2,000 m, it is estimated that the global ocean has accumulated heat at a rate of 33 ± 21 TW over 1991 to 2010 (Desbruyeres et al., 2016). Two-thirds of this warming is occurring between 2,000 m and 4,000 m, albeit with large uncertainty, almost entirely owing to warming in the Southern Ocean in this depth range (see Sallée, 2018, in this issue). Below 4,000 m, the observations show a large meridional gradient in the deep warming rate, with the southernmost basins warming 10 times faster than the deep basins to the north (Figure 5A). While the warming below 4,000 m only accounts for one-third of the total warming below 2,000 m, the regional variability is lower, leading to greater statistical certainty in the abyssal changes (4,000 m to 6,000 m; Purkey and Johnson, 2010; Desbruyeres et al., 2016; Figure 5A).

Durack, Gleckler, et al.; Ocean Warming: From the Surface to the Deep in Observations and Models; Oceanography; 9 Dec 2018

The strongest warming rates are found in the abyssal layer (4000–6000 m), which contributes to one third of the total heat uptake with the largest contribution from the Southern and Pacific Oceans.

Desbruyeres, Purkey, et al.; Deep and abyssal ocean warming from 35 years of repeat hydrography; Geophysical Research Letters
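To place the 33 TW figure quoted above into the zettajoule units used earlier in this observation, a worked conversion (the 20-year window is my reading of the stated 1991 to 2010 span):

```python
# Worked conversion of the quoted deep-ocean uptake (33 TW below 2,000 m over
# 1991-2010, Desbruyeres et al. 2016) into zettajoules. The 20-year window is
# my reading of the stated span.

SECONDS_PER_YEAR = 365.25 * 86400

power_tw = 33.0
years = 20

energy_zj = power_tw * 1e12 * years * SECONDS_PER_YEAR / 1e21
print(f"~{energy_zj:.0f} ZJ accumulated below 2,000 m over {years} years")
# ~21 ZJ: modest next to the upper-ocean figures cited earlier, but concentrated
# in a much smaller volume of water - which is the point of the per-cubic-metre index.
```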

The issue therefore is not one of total ocean change in ambient heat (the watts/m² 'budget', as the Cheng-Abraham study deems it),
but rather one of the relative change in layer-depth heat content per cubic meter of ocean water (ΔT-gigajoules/m³).

As a final note, ignore those who speak in terms of average and ambient heat transfer statistics in ‘watts per square meter’, lithosphere taper curves or ambient heat transferred from the mantle by convection, radiation and conduction. These concepts constitute merely sophomoric understandings of oceanographic thermostatic measures; approaches which ignore systems sensitivity and incremental dynamics – in effect nothing more than ‘Mt. Stupid‘ arguments. ΔT heat content (not ambient heat) in the Earth’s oceans transfers by means of numerous and extreme small-footprint exposures along with the fourth mode of heat transfer, ‘conveyance’ – and less by means of ambient averages and principles of high school science. Systems theory, feedback and incremental dynamics are not taught in high school natural science. Ignore such dimwittery.

By means of principally these nine observations, I contend that Ockham’s Razor has been surpassed – the plurality of a new alternative explanatory climate change model is now necessary.

The Necessary and Elegant Alternative We Must Now Consider – Exothermic Core Cycle to Deep Ocean Induced Climate Change

Now with all of this observation set under our belt, let’s examine the alternative that I believe we must address – out of both ethics and precaution. This alternative is not vulnerable to the easy wave-of-the-hand single-analysis/apothegm dismissals to which so many other climate change alternatives fall prey. This does not serve to invalidate anthropogenic contribution to carbon and global temperatures by any means. But such a reality also never necessitates that mankind adopt complete ignorance either. This construct alternative can be summarized in four points.

1.  The Earth's core is undergoing extreme exothermic change – shedding high-latent-energy hexagonal close-packed (HCP) iron into the mantle, where it converts to face-centered cubic (FCC) iron.
2.  The exothermic heat content from this eventually reaches the asthenosphere.
3.  Ancient abyssal ocean conveyance belts pull novel heat content from small footprint yet now much hotter contribution points exposed to the asthenosphere – and convey this novel heat content to the surface of the ocean.
4.  Ocean heats atmosphere (or fails to cool it as well as it once did) much more readily than atmosphere heats ocean.

Because of the contribution of latent kinetic energy from hexagonal close-packed (HCP) lattice material exiting the Earth's outer core (and slowing the Earth's rotation), the Earth's asthenosphere heats up by as much as 20 degrees Celsius. Most of this heat content cannot communicate with nor reach the surface of the Earth – as one will commonly be told in the classic climate science 'watts per square meter' literature. However, this is a grand assumption of Gaussian blindness, as some of the heat does escape the asthenosphere – and at critical heat transfer-to-conveyance points along abyssal ocean currents.

A – Ocean ridge volcanic activity has been on a steady and substantial 220-year increase trend. Temperature anomalies appear at the Mid-Atlantic Rise and then migrate as a fluid, eastward, in an alternating southern and northern hemisphere exclusivity.

B – Deep oil formations are heated by the asthenosphere ΔT and release volatile organic compounds and alkanes (principally methane). Methane rises faster than economic activity can substantiate (which is indeed what is occurring).46

C – Deep sea solid methane traps are heated by the now warmer asthenosphere and begin to sublime into methane gas.

D – Ocean trenches are heated by the now warmer asthenosphere and subsequently heat deep ocean conveyance currents by 1.5 to 3.5 degrees Celsius (ΔT). Heat is not simply transferred by convection, radiation and conduction – it is also transferred by conveyance, from deep exposure points to the surface, where these now-warmer currents once cooled the atmosphere but no longer do so as effectively (a rough transport calculation appears after this list).

E – Gas hydrate vents are heated and become more active.

F – Permafrost/Tundra is heated and releases both carbon dioxide and methane. These geoformations now become active during the winter months in which the sun is increasing in declination, whereas once they were not. (see: National Geographic 6 Feb 2020: The Arctic’s thawing ground is releasing a shocking amount of gasses – twice what we had thought; https://www.nationalgeographic.com/science/2020/02/arctic-thawing-ground-releasing-shocking-amount-dangerous-gases/)47

G – Historic atmospheric-ocean deep belt cooling touch points no longer cool the atmosphere as they once did, thereby resulting in an increase in overall atmospheric temperatures. This explains the surplus heat identified by the shortfall in Earth albedo reduction cited in Observation 3.

H – The catalytic decay of volatile organic compounds into alkanes, alkanes into methane, and finally methane into carbon dioxide – all of these release latent energy into the atmosphere, indirectly and catalytically heating it.
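For item D above, the heat moved by a warmed current can be estimated with the standard transport relation P = ρ · cp · Q · ΔT. The volume flux used below is an assumed illustrative value, not a measurement:

```python
# Rough transport estimate for item D: heat carried by a current warmed by dT is
# P = rho * cp * Q * dT. The volume flux here is an assumption for illustration.

rho_seawater = 1025.0      # kg/m^3
cp_seawater  = 3990.0      # J/(kg*K)
flux_sv      = 1.0         # assumed volume flux: 1 Sverdrup = 1e6 m^3/s
delta_t_c    = 1.5         # low end of the 1.5 to 3.5 C range stated above

power_w = rho_seawater * cp_seawater * (flux_sv * 1e6) * delta_t_c
print(f"~{power_w / 1e12:.1f} TW conveyed per Sverdrup at dT = {delta_t_c} C")
# ~6 TW per Sverdrup - the same order of magnitude as the whole-ocean deep
# uptake rate quoted in Observation 9.
```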

Now let’s examine how this plays into the heat released through a temporary exothermic cycle of the Earth’s inner and outer cores.

Now, stepping back and looking again at the core structure of the Earth, I conjecture a scenario (albeit a temporary one of course) wherein the latent energy bound up in the hexagonal close-packed (HCP) iron lattice of the Earth's core NiFe (nickel-iron) material is converted to heat energy upon that mass's communication up into the lower mantle of the outer rotational body of the Earth. This HCP lattice of iron converts into a face-centered cubic (FCC) lattice of iron (see the phase diagram at the lower left hand side of the above graphic – the ΔT or 'Delta T' boundary), releasing a bevy of heat (ΔT) wound up in the incumbent latent energy.

1. Earth’s inner core goes into an exothermic/exomaterial sloughing cycle.

2. Magnetic permeability of the Earth’s inner core falls – Earth’s magnetic field weakens, geo-magnetic north and magnetic north begin to wander in position – Schumann Resonance ranges into higher and higher amplitude power-bands (which correlates historically with higher global temperatures).

3. Inner core contributes solid hexagonal close-packed (HCP) iron material to the outer core across the Solid-HCP to Liquid-HCP boundary.

4. Outer core becomes exothermic/exomaterial and distributes L-HCP iron into the lower mantle – iron which snaps from an L-HCP to an L-FCC Bravais lattice and releases massive kinetic energy, in the forms of electrical energy (electrons – the number of sprites, booms and clear-weather lightning incidents rises) and, most importantly, heat.

5. Mantle heats up, and in turn heats the asthenosphere by up to 20 °C. 1.5 to 3.5 degrees of this heat escapes the asthenosphere and into the deep ocean conveyance belts (not ambient ocean temperature).

6. Asthenosphere heats ocean conveyance belts by volcanic vents, deep troughs and other touch points in deep ocean. Heat specifically impacts deep ocean (cold) conveyance belts by raising their temperature slightly. This heat content is conveyed to the surface over the next decade of flow and is not imparted to deep ocean ambient temperature.

7. Deep ocean conveyance heats atmosphere by conveying kinetic energy in the form of added heat – and not through radiation, convection nor conduction.

8. Added heat from the asthenosphere becomes the genesis of novel volatile organic compounds, methane and other alkanes, from deep oil formations being heated and from the heating of the northern hemisphere's permafrost and tundra.

And I contend that this model, elegantly and with ample explanatory power, addresses what we indeed see with respect to global climate change today.

Such is the state of the construct I have developed. In no way will the simple act of pondering this idea of course sway me from participating in global action regarding climate change. But neither will I conduct my activity from a position of willful ignorance.

Such is the nature of an ethical skeptic.

The Ethical Skeptic, “The Climate Change Alternative We Ignore (to Our Peril)”; The Ethical Skeptic, WordPress, 17 Jan 2020; Web, https://theethicalskeptic.com/2020/01/16/the-climate-change-alternative-we-ignore-to-our-peril/


Oh the Quackery!

Fake medical skeptics must realize that instructing someone, from a position of scientific authority or a claim to facts or likelihood, not to undertake a treatment or protocol constitutes quackery as well.
Americans are successfully employing supplements to improve their well being, and as well are increasingly sharing this success with others. As this industry inflection point unfolds, it is such a joy to witness the trolls of pretend science scoffing angrily from their parents’ basements. A wage well earned.

In a November 2017 Business Insider article, journalist Erin Brodwin tendered copious amounts of medical advice concerning the supplement industry, and in particular which supplements one should and should not be taking. For example, I should be taking zinc and magnesium, she instructs, but not vitamins C, cobalamine (B12), NADH (B3) nor l-methylfolate (B9). She expertly opines that most all of this constitutes "pills and powders which are ineffective and sometimes dangerous", and follows this modus absens scientific claim with an even more amazing claim, that "[All/unnamed] Public health experts recommend that people stay away from supplements altogether." Let's be clear – this constitutes a medical treatment advisement to me based upon a psychic diagnosis on the part of a pretend medical professional appealing to unnamed scientific authority. No more, no less. I lost count of how many times Erin cited the size of the supplement industry ($37 billion) in the article – as if this revenue turnover, which would simply inflate four-fold in price if the pharmaceutical-regulatory industry gained control of it, immediately in and of itself served to condemn such well-being management activity. As it turned out, Erin Brodwin was not simply wrong – the medical advice she offered up to me in this article is the same as that which has served to impart significant harm to my life for decades. In this article she was acting in the role of a quack, plain and simple.

I contend that the majority of suffering experienced by especially our US population stems from a lack of available health knowledge on the part of its average citizen – shill agency or no, knowledge which is squelched in the media by such fake medical skeptics as Erin Brodwin. Millions suffer; she gets a pharma-guaranteed celebrity boost to her career. Fake medical skeptics must realize that instructing someone, from a self-claimed position of scientific authority, set of 'facts' or even probability, that a treatment is ineffective/harmful/quackery constitutes the making of a medical recommendation as to diagnoses, cures and appropriate treatments. Instructing someone that not administering a treatment or arguably beneficial approach constitutes the right medical treatment for them is pretending to be a medical professional and offering unskilled medical advice – even if offered to a group of individuals. One cannot simultaneously make an accusation of 'ineffective and/or dangerous' and then qualify the accusation with the de rigueur 'there are only anecdotes of its effectiveness' permissive apologetic. This is dishonesty in inference, and in itself constitutes the most harm-imparting form of quackery.

Information that constitutes medical advice [is] the provision of a professional’s [or poseur’s thereof] opinion about what action an individual should or should not take with regard to their health…

Dana C. McWay, Legal and Ethical Aspects of Health Information Management1

If I could sue the skeptic-quacks who instructed me, through highly publicized media releases purported to be 'communicating the science' of medicine, that the following list (see 'The Quackery' below) was quackery – I would sue them for millions for the harm they created in my life over decades of suffering, through wrong diagnosis and erroneous treatment. An example of just such a quack-study can be found in this May 2019 'publication', in the Annals of Internal Medicine no less (it was simply a press release in reality): Chen F, Du M, Blumberg JB, et al. Association Among Dietary Supplement Use, Nutrient Intake, and Mortality Among U.S. Adults: A Cohort Study. A study wherein a student at Tufts University advises an entire national population as to a medical/health protocol they should not undertake (modus absens). The study was based upon death statistics among large cohorts who recalled ever taking a vitamin pill in their life, and what food they ate over six two-year intervals – not to mention recalling how much copper and 30 other nutrients that food had in it. Of course those who are still living are going to recall that their longevity is because they 'ate healthy' – this is how self-deception works in humans, and this study sought to exploit that foible. In other words, its analysis bore the agency and student conflict-of-interest (seeking to impress potential future employers) which sought to exploit noise-infused cohort stat-hacking bullshit. No wonder the study and its data are all hidden behind a paywall. An extraordinary claim to an absence (a monumental task of inference), affecting hundreds of millions of people through a medical diagnosis and treatment recommendation – and they don't want to show the data or study. Right. A notorious trick of those seeking legislative rule (and extractive earnings) over American lives and rights.

Those seeking to keep Americans chronically sick know that nutrient is being diluted from our food more and more each decade,2 yet they insist that all nutrient must come from our food alone – and then are mystified as to why Americans compulsively consume more calories each decade as well.

The incumbent weight battle and health harm were all imparted to me through instruction, by purported medical and science aficionados, that the approaches below were quackery – when indeed all of the protocols below turned out to be highly beneficial, critical in the recovery and maintenance of my well being. Such pseudoscience gets very personal, and as a result I am not afraid to call people like Brodwin and Chen, Blumberg, et al. incompetent and malicious fakers.

It is one thing to cite that the claimed benefits of a treatment have not been study-confirmed by the FDA. It is another level of harm-imparting potential to then call that same thing ‘ineffective or dangerous’.

Never trust a person who does not understand the ethical difference – and certainly never get your science nor medical advice from them, no matter what letters they may flaunt after their name.

“Supplements are an ineffective and sometimes dangerous waste of money.”

The Most Injurious Statement a Quack Can Make

Such fakers should be held legally accountable for the medical misinformation they spread. Be careful, medical skeptics – the world does not suffer a lack of your cudgeling voice as to what constitutes the entire set of falsehood. Claims to absence and falsity require a much higher rigor in inference than do claims to presence,3 yet ironically such claims are doled out like candy by celebrity-seeking medical skeptics and journalists – those foisting final claims to conclusive confidence regarding topics about which they in reality know very little. That emotionally impaired propensity, the 'Bunk! – I am the smartest person in the room and cannot be fooled'4 bravado, adds no value whatsoever to society. Such emotional frailty inevitably serves to impart harm, an affliction upon us all derived from one's lack of critical knowledge and circumspection. If that is what you are here to add into the fray, then your life is of a net negative value to mankind. Celebrity or no – Doctor or no. You might help one person, and then definitely harm 10,000 in the next breath. Such sad circumstance mandates a long look in the mirror on your part.

You harm people like me – persons who no longer accept your claim to personal representation of medicine, science, science communication nor skepticism. Americans are successfully employing supplements to improve their well being, and as well are increasingly sharing this success with their friends and families. As this industry inflection point unfolds, it is such a joy to witness the trolls of pretend science scoffing angrily from their parents’ basements. A wage well earned.

The Quackery

Now first, please note that I am not a medical professional. The protocols I undertook below, while beneficial for me, do not constitute recommendations nor non-recommendations by me as to diagnoses, cures, treatments or protocols for adoption on the part of any individual. It should also be noted that each of these successes was accompanied by many more protocols I personally tested, which either failed or did nothing for me.

That being said, the following protocol introductions changed my life substantially, in order of critical benefit. Each of these was pooh-poohed by skeptic-quacks over the decades (and in particular by the article and study cited above) – those who caused me much injury by recommending specific not-protocols, which turned out to bear harm:

    l-Methylfolate (L-5-methyltetrahydrofolate)

Transitioning from feeling like I was dying – weakness, sweating, light-headedness and anxiety – to feeling like it was a warm spring or summer day and I was well again. I could run 3 miles at an 8 minute pace in my daily workouts, but could not even walk through the grocery store nor sit through an hour-and-a-half professional conference lecture without wondering whether I should have them call an ambulance. It all ceased within 10 minutes of taking my first l-methylfolate and has never come back. My daily folic acid vitamin, which I took over the decades, was completely useless this entire time.

    Methylcobalamine/Adenosylcobalamine

All the same maladies as cited under l-methylfolate above, as I take this in combination with that supplement. These and more ceased within 10 minutes of taking my first methylcobalamine and have never come back. Doctor confirmed that my red blood cell count, months after starting this, had risen barely back above the anemic level. I was in the bottom 3% of the range – but to me it felt invigorating and wonderful just getting to that point.

    EDTA and Doxycycline (Both are required)

As verified by catheterization by a top cardiologist ("Well TES, I have good news and I have bad news. The bad news is, you are going to die of cancer in your mid to late 90's most likely…"). Two years of a daily therapeutic dose in the morning completely eliminated arterial plaque from both my heart (cardiologist confirmed) and (I conjecture) my brain's fine capillaries and other plaque-vulnerable organs. Significant cardiovascular boost and significant boost in cognitive skills. Significant change in endurance and in the breath required for heavy activity. I lost my feet callouses and my veins became supple like cooked spaghetti (according to my regular phlebotomist). The cardiologist suggested I stop, since the job was done. I did – but the benefits have sustained without diminishing, for well over 15 years now. In my 50's, with training, I am able to beat one third of my high school 5K cross country times.

    Digestive Enzyme Pancreatin/Ox Bile/Betaine HCL

Daily left lower quadrant pain (after all other possibilities were eliminated by doctor first) was eliminated via taking this with each meal and at bedtime. Helped clear up skin.

    Nicotinamide Adenine Dinucleotide (NADH/NAD+)

Significant boost in daily energy, mental clarity and feeling of well being. If one gets dizzy, then back off on the supplementation amount.

    Negative Ionic Fulvic Acid Suspension

Energy all day long, reduction in anxiety, reduction in autoimmune measures (thyroid peroxidase antibodies and thyroid supplement required). If I go without this for more than 48 hours, I can tell physically. The first ingestion of this afterwards is akin to drinking water after being very thirsty. Very refreshing and reinvigorating. Hair thickness boost.

    Quercetin and Bromelain

Significant reduction in face sores and rosacea. Reduction in the sick-bloated feeling after evening meal.

    Vitamin C in Larger Dose (Not ‘Mega-Dose’)

Significant reduction in time to get up off the floor. Significant improvement in flexibility. Significant reduction in joint pain. Lower rate of flu and cold styled illnesses per year. Dropped from sick once per year – to once every other or three years.

    Vitamin D3

Nominal boost in overall well being, skin and hair tone. Lower rate of depressive winter funk.

    Amla (Indian Gooseberry) Powder

Significant reduction in illness, sick feeling, brain fog and rosacea – along with an increase in well being, energy and fresher morning feeling (no bad taste in mouth). Much less joint pain in knees and ankles, and more flexible workouts.

    Eliminating Toxic Agriculture from My Diet

Significant quality of life improvements were achieved by my whole family, through the elimination of the following toxic foods from our diet:

     Soy (All types and forms)

     GMO Corn

     Wheat

     Non-Grassfed Butter

     Dairy (All types and forms)

     Peanuts/Legumes

     GMO Oils (Soybean, Canola, Cottonseed)

Night and day difference in overall well-being; lessened anxiety, irritable bowel syndrome, thick-and-slow sick/toxic feeling, and autoimmune reactions – along with a significant reduction in facial redness/rosacea and an increase in mental acuity/attention/alertness.

My doctor of course has helped me through one surgery and a broken ankle. I celebrate those successes. However, the endless profit-minded monitoring of my blood pressure, A1C and cholesterol has served to divert the doctor's work from my real medical needs: decades of undiagnosed pernicious anemia, decades of autoimmune maladies and years of painful IBS. These were the important things (which probably eventually cause out-of-range blood pressure, A1C and cholesterol in the first place).

The money-making measures were never out of line – and my doctor falsely concluded that, because of this, I was therefore fine. This, to my harm and suffering. I no longer want my blood pressure, A1C and cholesterol checked by my doctor. Neither do I bathe myself in ice-water when I get a nominal 101 degree fever. Instead I look for the cause. Otherwise, to focus on only the symptom …is, well, quackery.

An ethical skeptic eschews such fake knowledge which stands in substitution of the critical knowledge, path or need.

The Ethical Skeptic, “Oh the Quackery!”; The Ethical Skeptic, WordPress, 25 Jan 2020; Web, https://theethicalskeptic.com/?p=44156

