The Ethical Skeptic

Challenging Agency of Pseudo-Skepticism & Cultivated Ignorance

Inflection Point Theory and the Dynamic of The Cheat

A mafia or cartel need not cheat at every level nor in every facet of its play. The majority of the time such collapsed-market mechanisms operate out of necessity under the integrity of discipline. After all, without standardized praxis, we all fail.
However, in order to effect a desired outcome, an intervening agency may need only exploit specific indigo point moments in certain influential yet surreptitious ways. Inflection point theory is the discipline which allows the astute value chain strategist to sense the weather of opportunity and calamity – and moreover, to spot the subtle methodology of The Cheat.

Note: because of the surprising and keen interest in this article from various groups, including rather vociferous Oakland Raiders fans, I have expanded that section of the article for more clarity/depth on what I have observed; further, I have added 6 excerpt links in each appropriate section laying out the backup analysis for those wishing to review the data. One can scroll directly to that section of the article at about 45% through its essay length.

Inflection Point Theory

In one of my strategy firms, over decades of conducting numerous trade, infrastructure and market strategies, I had the good fortune to work as a colleague with one of our Executive Vice Presidents, a Harvard/MIT graduate whose specialty focused in and around inflection point theory. He adeptly grasped, and instructed me in, how this species of analytical approach could be applied to develop brand, markets, infrastructure, inventories and even corporate focus or culture. Inflection point theory, in a nutshell, is the sub-discipline of value chain analytics or strategy (my expertise) in which particular focus is given to those nodes, transactions or constraints which cause the entire value chain to swing wildly (whipsaw) in its outcome (ergodicity). The point of inflection at which such a signal is typically detected, or hopefully even anticipated, is called an indigo point.

Columbia Business School strategic advisor Rita McGrath defines an inflection point as “that single point in time when everything changes irrevocably. Disruption is an outcome of an inflection point.”1 While this is not entirely incorrect, in my experience, once an inflection point has been reached the disruption has actually already taken place (see the oil rig example offered below), and an E-ruptive period of change has just precipitated. It is one thing to be adept with the buzzwords surrounding inflection point theory, and another thing altogether to have held hands with those CEOs and executive teams while they have ridden out its dynamic, time and time again.

The savvy quietly analyzes the hurricane before its landfall. The expert makes much noise about it thereafter.

Such is not a retrospective science in the least. Nonetheless, adept understanding of business inflection point theory does in a manner allow one to ‘see around corners’, as McGrath aptly puts it.

Those who ignore inflection points are destined to fail their clients, if not themselves – left wondering why such calamity could have happened in such short order, or even denying through Nelsonian knowledge that it has occurred at all. Those who adeptly observe an indigo point signal may succeed, not simply through offering a better product or service, but rather through the act of rendering their organization robust to concavity (Black Swans) and exposed to convexity (White Swans). Conversely, under a risk strategy, an inflection-point-savvy company may revise their rollout of a technology to be stakeholder-impact resistant under conditions of Risk Horizon Types I and II, and rapid (speed to margin, not just speed for speed’s sake) under a confirmed absence of both risk types.2

As an example, in this chart data from an earlier blog post one can observe the disastrous net impact (whether in social perception, in real terms, or both) of the Centers for Disease Control ignoring a very obvious indigo pause-point in the dynamic between aggressive vaccine schedule escalations and changes in diagnostic protocol doctrine. Were the CDC my client, I would have advised them in advance to halt deployment at point Indigo and wait for three years of observation before doing anything new.

An indigo point is that point at which one should ethically, at the very least, plan to take pause and watch for any new trends or unanticipated consequences in their industry/market/discipline – to make ready for a change in the wind. No science is 100% comprehensive nor 100% perfect, and it is foolishness to pretend that such confidence in deployment exists a priori. This is the necessary ethic of technology strategy, even when only addressed as a tactic of precaution. When one is responsible for at-risk stakeholders, stockholders, clients or employee families, to ignore such inflection points borders on criminally stupid activity.

Much of my career has been wound up in helping clients and nations address such daunting decision factors: When do we roll out a technology, and how far? When do we pause and monitor results, and how do we do this? What quality control measures need to be in place? What agency, bias or entities may serve to impact the success of the technology or our specific implementation of it? In the end, inflection point theory allows the professional to construct effective definitions, useful in spotting cartels, cabals and mafias – skills which have turned out to be of help in my years conducting national infrastructure strategy as well. Later in this article, we will outline three cases where such inflection point ignorance is not simply a case of epistemological stupidity, but rather planned maliciousness. In the end, ethically, when large groups of stakeholders are at risk, inflection point ignorance and maliciousness become indistinguishable traits.

Inflection Point

/philosophy : science : maths/philosophy : neural or dynamic change/ : inflection points are the points along a continuous mathematical function wherein the curvature changes its sign or there is a change in the underlying differential equation or its neural constants/constraints. In a market, it is the point at which a signal is given for a potential or even likely momentum shift away from that market’s most recent trend, range or dynamic.

An inflection point is the point at which one anticipates being able to thereafter analytically observe a change which has already occurred.
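For readers who prefer the maths, the curvature-sign definition above can be sketched numerically. This is a minimal pure-Python illustration of my own (not from the article), using the discrete second difference as a proxy for curvature:

```python
# Illustrative sketch: flag candidate inflection points in a sampled series
# by locating sign changes in the discrete second difference -- the
# numerical analogue of the curvature changing its sign.

def inflection_points(series):
    """Return series indices just after the second difference changes sign."""
    # Discrete second difference: y[i+1] - 2*y[i] + y[i-1]
    second_diff = [
        series[i + 1] - 2 * series[i] + series[i - 1]
        for i in range(1, len(series) - 1)
    ]
    points = []
    for i in range(1, len(second_diff)):
        if second_diff[i - 1] * second_diff[i] < 0:  # curvature sign flip
            points.append(i + 1)  # shift back to original series index
    return points

# A cubic y = (x - 4.5)^3 has its true inflection at x = 4.5, i.e. between
# sample indices 4 and 5; the detector reports the index just after it.
cubic = [(x - 4.5) ** 3 for x in range(10)]
print(inflection_points(cubic))  # [5]
```

Real market series are noisy, so in practice one would smooth the data (or fit a model) before applying a curvature test of this kind; the sketch only illustrates the definition.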

Inflection Point Theory (Indigo Point Dynamics)

/philosophy : science : philosophy : value chain theory : inflection point theory/ : the value chain theory which focuses upon the ergodicity entailed by a change in neural or dynamic constraints – a critical but not sufficient condition or event, which nonetheless serves to impart a desired shift in the underlying dynamic inside an asymmetric, price-taking or competitive system. The point of inflection is often called an indigo point (I). Inside a system which they do not control (price taking), successful players will want to be exposed to convexity and robust to concavity at an inflection point. Conversely, under a risk horizon, the inflection-point-savvy company may revise their rollout of a technology to be stakeholder-impact resistant under conditions of Risk Horizon Types I and II, and rapid under a confirmed absence of both risk types.

An Example: In March of 2016, monthly high-capacity crude oil extraction rig counts by oil formation had all begun to trend in synchronous patterns (see chart below, extracted from University of New Mexico research data).3 This sympathetic and stark trend suggested a neural change in the dynamic driving oil rig counts inside New Mexico oil basin operations. An external factor was imbuing a higher sensitivity contribution to rig count dynamics than were normal market forces/chaos. This suggested not only that a change in the math was in the offing, but that a substantial change in rig dynamics was already underway, the numerics of which had not yet surfaced.

Indeed, subsequently Enverus DrillingInfo confirmed that New Mexico’s high capacity crude extraction rig counts increased, against the national downward trend, by a rate of 50+% per year for the ensuing years 2017 and into 2018 – thereby confirming this Indigo Point (inflection point).4
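The kind of synchrony signal described above can be sketched as a simple screen. The basin names and counts below are invented for illustration (the UNM/Enverus figures are not reproduced here); the idea is merely to flag months where normally independent series all move the same direction:

```python
# Hedged sketch with synthetic data: flag months where several rig-count
# series, which normally move independently, all trend in the same
# direction -- the synchronous pattern treated above as an indigo signal.

def synchronous_trend_months(series_by_basin):
    """Return time indices where every basin's month-over-month change shares one sign."""
    names = list(series_by_basin)
    n = len(series_by_basin[names[0]])
    flagged = []
    for t in range(1, n):
        deltas = [series_by_basin[b][t] - series_by_basin[b][t - 1] for b in names]
        if all(d > 0 for d in deltas) or all(d < 0 for d in deltas):
            flagged.append(t)
    return flagged

# Three hypothetical basins: mixed movement early, synchronized decline at t = 4..5.
rigs = {
    "basin_a": [40, 42, 41, 43, 39, 35],
    "basin_b": [25, 24, 26, 25, 23, 20],
    "basin_c": [60, 61, 59, 62, 58, 54],
}
print(synchronous_trend_months(rigs))  # [4, 5]
```

A production screen would of course demand persistence (several consecutive synchronous months) and a noise threshold before declaring a dynamic change; this only demonstrates the shape of the test.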

I was involved in some of this analysis for particular clients in that industry. This post-inflection increase was driven by the related-but-unseen shortfall in shallow and shale rigs, lowering production capacity out of Texas during that same time frame and increasing opportunity to produce to price for New Mexico wells – a trend which formerly had served to precipitate the fall in monthly New Mexico Rig Count to an indigo point to begin with. Yet this pre-inflection trend also had to end because the supply of rigs in Texas could not be sustained under such heavy demand for shale production.

Astute New Mexico equipment planners who used inflection point theory might have been able to get ahead of this and ensure their inventories were stocked, in order to take advantage of the ‘no-discounts’ margin to be had during the incumbent rush for rigs in New Mexico. This key pattern in the New Mexico well data in particular is what is called in the industry an inflection point. My clients were able to increase stocks of tertiary wells, and while not flooding the market, were able to offer ‘limited discount’ sales for the period of short supply. They made good money. They were not raising the price of plywood before a hurricane, mind you – rather being a bit more stingy on their negotiated discounts, because they had prepared accordingly.

To place it in sailing vernacular: the wind has backed rather than veered, the humidity has changed, the barometric pressure has dropped – get ready to reef your sails and set a run course. A smart business person both becomes robust to inflection point concavity (prepares), and as well is exposed to their convexity (exploits).

The net impact to margin (not revenue) achievable through this approach to market analytics is on the order of an 8-to-1 swing. It is how the successful make their success. It is how real business is conducted. However, there exists a difference between surviving and thriving through adept perspective-use concerning indigo points, and that activity which seeks to exploit their dynamic for market failure and consolidation (cartel-like behavior).

Self Protection is One Thing – But What about Exploiting an Inflection Point?

There exists a form of inflection point analytics and strategy which is not as en milieu knight-in-shining-armor – one more akin to gaming an industry vertical or market in order to establish enormous barriers to entry, exploit consolidation failure or defraud its participants or stakeholders. This genus of furtive activity is enacted to establish a condition wherein one controls a system, or is a price maker and no longer a price taker – no more ‘a surfer riding the wave’, rather now the creator of the wave itself. Inflection points constitute an excellent avenue through which one may establish a cheat mechanism, without tendering the appearance of doing so.

Inflection Point Exploitation (The Cheat)

/philosophy : science : philosophy : agency/ – a flaw, exploit or vulnerability inside a business vertical or white/grey market which allows that market to be converted into a mechanism exhibiting syndicate (cartel, cabal or mafia-like) behavior. Rather than the market becoming robust to concavity and exposed to convexity – instead, this type of consolidation-of-control market becomes exposed to excessive earnings extraction and sequestration of capital/information on the part of its cronies. Often there is one raison d’être (reason for existence) or mechanism of control which allows its operating cronies to enact the entailed cheat enabling its existence. This single mechanism will serve to convert a price taking market into a price making market and allow the cronies therein to establish behavior which serves to accrete wealth/information/influence into a few hands, and exclude erstwhile market competition from being able to function. Three flavors of syndicated entity result from such inflection point exploitation:

Cartel – a syndicate entity run by cronies which enforces closed door price-making inside an entire economic white market vertical.

Functions through exploitation of buyers (monopoly) and/or sellers (monopsony) by means of manipulating inflection points – those where sensitivity is greatest, as early into the value chain as possible, and inside a focal region where attentions are lacking. Its actions are codified as virtuous.

Cabal – a syndicate entity run by a club which enforces closed door price-making inside an information or influence market.

Functions through exploitation of consumers and/or researchers by means of manipulating the philosophy which underlies knowledge development (skepticism), or the praxis of the overall market itself – manipulating the outcomes of information and influence by tampering with a critical inflection point early in its development methodology. Its actions are secretive, or if visible, are externally promoted through media as virtue or for the sake of intimidation.

Mafia – a syndicate entity run by cronies which enforces closed door price-making inside a business activity, region or sub-vertical.

Functions through exploitation of its customers and under-the-table cheating, in order to eliminate all competition and to manipulate the success of its members and the flow of grey market money to its advantage – targeting inflection points where sensitivity is greatest and where accountability is low or subjective. Its actions are held confidential under threat of severe penalty against its organization participants. It promotes itself through intimidation, exclusive alliance and legislative power.

Three key examples of such cartel, cabal and mafia-like entities follow.

The Cartel Cheat – Exemplified by Exploitation of a Critical Value Chain Inflection Point

Our first example of The Cheat involves the long-sustained decline of US agricultural producer markets – a condition which has persisted since the 1980s, ironically despite the ‘help’ farmers get from the agricultural technology industry itself.

Cheat where sensitivity is greatest and as early into the value chain as possible, at a point where attentions are lacking. Codify the virtue of your action.

Indigo point raison d’être: Efficiency of Mixed Bin Supply Chain

The agriculture markets in the US are driven by one raison d’être. They principally ship (85% of logistics) via a supply chain method called ‘mixed bin’ shipping. This is a practice wherein every producer of a specific product and class within a region dumps their agri-product into a common stock for delivery (which is detached from the sale by means of a future). Under this method, purportedly in the name of ‘efficiency’, the farmer is not actually able to sell her crop at its value, but rather must sell at a single speculative price (a reasonable-worst-case discounted aggregate) to a few powerful buyers (a monopsony).

Another way to describe this in value chain terms is by characterizing the impact of this ownership of the supply chain, by means of common-interdependent practice, as a ‘horizontal monopoly’. The monopoly/oligopoly powers in the presiding ABCD Cartel (as it is called) do not own the vertical supply of Ag products; instead they dominate the single method (value chain) of supply and distribution for all those products. This is what Walmart used in the 1970s and 80s to gut regional competitors. Players of lesser clout who could not compete initially inside Walmart’s 2 – 8%-to-sales freight margin advantage fell vulnerable finally to the volume purchase discounts which Walmart was eventually able to drive once a locus of purchasing power was established. Own the horizontal supply chain, and you will eventually own the vertical as well. You have captured monopoly by using the Indigo Point of mandatory supply chain consolidation. Most US courts will not catch this trick (plus much of it is practiced offshore) and will miss the incumbent violation of both the Sherman Anti-Trust Act and the Clayton Act. By the time the industry began in the 90s and 00s to mimic what Walmart had done, it was too late for a majority of the small-to-medium consumer goods market. They tried to intervene at the later ‘Tau Point’, when the magic had already been effected by Walmart at the less obvious ‘Indigo Point’ two or three decades earlier.

Moreover, with respect to agriculture’s resulting extremely powerful middle market, the farmer faces a condition wherein the only way to improve her earnings is through a process of ‘minimizing all (cost) inputs’ – in other words, using excessive growth-accelerant pesticides and the cheapest means available to produce as much caloric biomass as possible, even at the cost of critical phloem fulvic human nutrition content and toxin exposure. After all, if you exceed tolerance, your product is going to be mixed with everyone else’s product, so things should be fine. Dilution is the solution to pollution. In fact, neither such nutrient content nor the actual ppm levels of growth accelerants are ever monitored at all in the cartel-like agriculture industry. This is criminal activity, because the buyer and consumer are not getting the product which they think they are buying – and they are being poisoned and nutritionally starved in the process of being defrauded.

The net result? Autoimmune diseases of malnutrition skyrocket, market prices go into decades-sustained fall, microbiome impacts from bactericidal pesticide effects plague the global consumer base, nations begin to reject US agri-products, farms trend higher in Chapter 12 bankruptcies, and finally global food security decreases overall – ironically from the very methods which purport an increase in per acre yields.

The industry consolidates and begins to effect even more cartel-like activity. A death spiral of stupidity. 

This is the net effect of cartel-like activity. Activity which is always harmful in the end to either human health, society or economy. These cartels exploit one minor but key inflection point inside the supply chain, the virtuous efficiency of shipping and freight, in order to extract a maximum of earnings from that entire economic sub-vertical, at the harm of everything else. This is the tail wagging the dog and constitutes a prime example of inflection point exploitation (The Cheat).

Such unethical activity has resulted in enormous harm to human health, along with a sustained decades-long depression in the agriculture producer industry (as exemplified in the above ‘Chapter 12 Farm Bankruptcies by Region’ graphic by Forbes)5 – but not a commensurate depression in the agriculture futures nor speculator industry.6 Very curious indeed, that the cartel members at Point Tau (see below) are not hurt by their own deleterious activity at Point Indigo. This is part of the game. This is backasswards wrong. It is corruption in every sense of the word.

In order to effect The Cheat, one does not have to be a pervasive cheater.
One only need tweak specific inputs or methods at a paucity of specific points in a system or chain of accountability.

Thereafter an embargo on speaking about the indigo point must be enforced as well,
or an apothegm/buzzword phrase must be introduced which serves to obfuscate its true nature and impact potential.

The Cabal Cheat – Exemplified by Exploitation of Point Indigo for the Scientific Method – Ockham’s Razor

Our second example of The Cheat shows how science communicators and fake skeptics manipulate the outcomes of science, through tampering with a critical inflection point early in its methodology.

All things being equal, that which appears compatible with what I superficially think scientists believe, tends to obviate the need for any scientific investigation.

Indigo point raison d’être: ‘Occam’s Razor’ Employed in Lieu of Ockham’s Razor

Point indigo for the scientific method is Ockham’s Razor. This is the point, early in the scientific method, at which a competing theory is allowed access into the halls of science for consideration. Remember from our definition above that cheating is best done early, so as to minimize its necessary scale. Ockham’s Razor is that early point at which both a sponsor and his or her ideas are considered worthy members of ‘plurality’ – those things to be seriously considered by the ranks of science.7 The method by which fake skeptics (cartel members, or cabal members when the market is not an economy) manipulate what is and is not admissible into the ranks of scientific endeavor is a flag they title ‘pseudoscience’. By declaring any idea they dislike to be a pseudoscience, or to fail ‘Occam’s Razor’ (it is not simple), skeptics game the inflection point of the entire means of enacting science, the scientific method. They are able to declare a priori which answers will or will not arrive at Point Tau, for tipping into consensus at a later time.

To spray the field of science at night with a pre-emergent pesticide which will ensure that only the answer they desire, will come true in the growing sunlight.

Most of the stakeholder public does not grasp this gaming of inflection theory. Most skeptics do not either; they just go along with it, failing to even perceive that skeptics are meant to be allies at the Ockham’s Razor sponsorship point, not foes. They are there to help the competitiveness of alternatives, not to corruptly certify the field of monist conclusion. This is, after all, what it means to be a skeptic – to seriously consider alternative points of view, and to come alongside and help them mature into true hypothesis. A skeptic wants to see the answer for themselves.

If one does not like a particular avenue of study, all one need do is throw the penalty flag regarding that item’s ‘not being simple’ (Occam’s Razor). Thereafter, by citing its researchers as pseudo-scientists, because they are using the ‘implements and methods of science to study a pseudoscience’, one has gamed the system of science by means of its inflection exploit mechanism.

They have effectively enacted cartel-like activity around the exercise of science on the public’s behalf. This is corruption. This is why science must ever operate inside the public trust – so that it does not become the lap-dog of such agency.

Seldom seek to influence point Tau as that is difficult and typically is conducted inside an arena of high visibility – your work in deception should always focus first on point Indigo – where stakeholders and monitors are rarely paying attention yet. One can control much, through the adept manipulation of inflection points.

Extreme measures taken to control Point Tau are unnecessary if one possesses the ability to manipulate Point Indigo.

The final step of the scientific method, consensus acceptance, constitutes more of a Malcolm Gladwell tipping point than an unconstrained inflection point. A tipping point is that point at which the past trend signal is now confirmed as valid or comprehensive in its momentum. An inflection point, by contrast, is that point at which a change in dynamic has transpired, and what has happened in the past is all but guaranteed not to happen next. Technically, a tipping point is nothing but a constrained inflection point; but for the purposes of this presentation and explanatory usefulness, the two need to be made distinct.

The graphic to the right portrays these principles, in the hope that one can relate the difference in ergodicity dynamic between inflection and tipping points to their specific applications inside the scientific method. We must, as a scientific trust, be extraordinarily wary of tipping points (T), as undeserved enthusiasm for a particular understanding may ironically serve to codify such notions into an Omega Hypothesis – that hypothesis which has become more important to protect than the integrity of science itself. In similar fashion, we must also protect indigo points (I) from the undue influence of agency seeking a desired outcome.
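The distinction can be made concrete with a small sketch of my own (a logistic curve is my choice of model here, not the article’s): the inflection sits where growth stops accelerating, well before the curve visibly saturates toward its tipping-confirmed plateau.

```python
# Illustrative sketch: on a logistic curve, the inflection point (where
# growth switches from accelerating to decelerating) precedes the visible
# saturation -- the change in dynamic happens before the trend confirms it.
import math

def logistic(t, rate=1.0, midpoint=4.5):
    """Simple logistic curve saturating at 1.0."""
    return 1.0 / (1 + math.exp(-rate * (t - midpoint)))

samples = [logistic(t) for t in range(10)]
growth = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]

# Step-over-step growth peaks at the step straddling the inflection (t = 4.5),
# while the curve itself is still only halfway to its eventual plateau.
peak = max(range(len(growth)), key=lambda i: growth[i])
print(peak)  # 4
```

In the essay’s terms: an observer watching only the accumulating level sees the ‘tipping’ late, while an observer watching the growth dynamic detects the inflection at the halfway mark.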

Having science communicators deem what is good and bad science, is like having a mafia set the exchange rate you get at your local bank. Everyone fails, but nobody knows why.

The art of the Indigo-Tau cheat works like this:  Game your inflection dynamics sparingly and only until such time as a tipping point has been achieved – and then game no further. Lock up your inflection mechanism and never let it be accessed nor spoken of again. Thereafter, momentum will win the day. Do all your dirty-work, or fail to do essential good-work (Indigo), when the game is in doubt, and then resume fair play and balance, after the game outcome is already fait accompli (Tau). Such activity resides at the very heart of fake skepticism and its highly ironic pretense in ‘communicating science’.

Indigo Point Man (Person) – one who conceals their cleverness or contempt.

Tau Point Man (Person) – one who makes their cleverness or contempt manifest.

These twin definitions rest upon the tenet of ethical skepticism which cites that a shrewdly apportioned omission at Point Indigo, an inflection point early in a system, event or process, is a much more effective and harder-to-detect cheat (or skill) than a more manifest commission at Point Tau, the tipping point near the end of a system, event or process. They rest as well upon the notion ‘Watch for the gentlemanly Dr. Jekyll at Point Tau, who is also the cunning Mr. Hyde at Point Indigo’. The principle outlined is that those who cheat (or apply their skill, in a more neutral sense) most effectively, such as in the creation of a cartel, cabal or mafia, tend to do so early in the game and while attentions are placed elsewhere. In contrast, a Tau Point Man tends to make their cheat/skill manifest near the end of the game, at its Tau Point (tipping point).

Shrewdly apportioned omission at Point Indigo is a much more effective and hard to detect cheat,
than that of more manifest commission at Point Tau. This is the lesson of the ethical skeptic.

Watch for the gentlemanly Dr. Jekyll at Point Tau, who is also the cunning Mr. Hyde at Point Indigo.

This serves to introduce and segue into our last and most clever form of The Cheat.

The Mafia Cheat – Exemplified by NFL’s Exploitation of Interpretive Penalty Call/No-Call Inflection Points

Our final example of The Cheat involves a circumstance which exhibits how The Cheat itself can be hidden inside the fabric of propriety, leveraging the subjective nature of shades-of-color interpretations and hard-to-distinguish absences, very cleverly apportioned to effect a desired outcome.8

Cheating is the spice which makes the chef d’oeuvre. Cheat through bias of omission not commission, only marginally enough to enact the goal and then no further, and while bearing a stately manner in all other things. Intimidate or coerce participants to remain silent.

Indigo point raison d’être: Interpretive Penalty Calls/No-Calls at Critical Indigo Points and Rates which Benefit Perennially Favored Teams and Disadvantage Others

I watched a National Football League (NFL) game last week (statistics herein have been updated for NFL end-of-season 2019) where the entire outcome of the game was determined by three specific and flawed penalty calls on the part of the game referees. The calls in review were all invalid flag tosses of an interpretive nature, which twice reversed the Detroit Lions’ stopping of a come-from-behind drive by the ‘winning’ team, the Green Bay Packers. Twice their opponent was given a touchdown by means of invalid ‘hands-to-the-face’ violations called on a defensive lineman – penalty flag tosses which cannot be overturned even by clear and countermanding evidence, as was the case in this game. The flags alone artificially turned the tide of the entire game. The ‘winning’ quarterback Aaron Rodgers, a man of great talent and integrity, when interviewed afterwards humbly said, “It didn’t really feel like we had won the game, until I looked up at the scoreboard at the end.” Aaron Rodgers is a forthright Tau Point Man – he does not hide his bias or agency inside noise. Such honesty serves to contrast with the indigo point nature and influence of penalties inside America’s pastime of professional football. Most of the NFL’s manner of exploitation does not present itself in such obvious Tau Point fashion as occurred in this Lions-Packers game.

An interpretive penalty is the highest-sensitivity inflection point mechanism impacting the game of professional football. For some reason such penalties are not as impactful in the NFL’s analogue, NCAA college football. Not that referees are not frustrating in that league as well, but they do not have the world-crushing and stultifying impact that officials inside the NFL do. NFL officials single-handedly, and often, determine the outcome of games, division competitions and Super Bowl appearances. They achieve this impact (whether intended or not) by means of a critically placed set of calls, and more importantly no-calls, with regard to these interpretive subjective penalties – patterns which can be observed as consistent across decades of the NFL’s agency-infused and court-defined ‘entertainment’. Let’s examine these call (Indigo Point Commission) and no-call (Indigo Point Omission) patterns by means of two specific and recent team examples respectively – the cases of the 2019 Oakland Raiders and the 2017 New England Patriots.

Indigo-Commission Disadvantages Specific NFL Teams: Case of the 2019 Oakland Raiders

Argument #1 – The Penalty Detriment-Benefit Spread and Raider 60-Year Penalty History

The NFL Oakland Raiders have consistently been the ‘most penalized’ team, by far, over the last 60 years of NFL operations. Year after year they are flagged more than any other team. For a while this was an amusing shtick attaching to the bad-guy aura the Raiders carried 40 or 50 years ago. But when one examines the statistics and the types of penalties involved – consistent through six decades, multiple dozens of various-level coaches who were not as highly penalized elsewhere in their careers, two owners and 10 varieties of front offices – the idea that this team gets penalized ‘because they are supposed to’ begins to fall flat under the evidence. Of course it is also no surprise that the Raiders hold the record for the most penalties in a single game as well: 23 penalties for 200 yards penalized.9

A typical year can be observed in the chart to the right, which I created by analyzing the penalty databases at NFLPenalties.com.10 The detailed data analysis can be viewed by clicking here. True to form, the Oakland Raiders were once again penalized per play more than any other NFL team for 2019 (see the previous years here), save for Jacksonville, who narrowly overtook the Raiders with a late-season 16-penalty game. More to the point however, for the 2019 NFL season the greatest differential between penalties-against and penalties-benefit was once again held by the Oakland Raiders. What the chart shows is that in general it took nine fewer plays executed for the Raiders to be awarded their next penalty flag (one penalty every 21 plays) as compared to their opponents (one penalty every 30 plays). Put another way, the Raiders were flagged an average of 8 times per game, while comparatively their opponents were flagged on average 5.6 times per game – inside a league range of feasibility which annually runs from about 8.2 to 5.4 to begin with. These Oakland Raider 2019 penalty results hug the highest and lowest possible extremes for team versus opponent penalties respectively.

In other words, on average for 2019, the Raiders were the most penalized team per game play in the NFL, while the least penalized team in the NFL was generally whatever team happened to be playing the Oakland Raiders each week.11

The Oakland Raiders are far and away more penalized than any other NFL team, leading the league as the most penalized team in season-years 1963, 1966, 1968-69, 1975, 1982, 1984, 1991, 1993-96, 2003-05, 2009-11, 2016, and most of 2019 – and further landing in the top 3 penalized teams every year from 1982 through 2019 with only a few exceptions.12 13

Argument #2 – The Drive-Sustaining Penalty Deficit

In the case of the Raiders, the overcall/undercall of penalties is not a matter of coaching discipline, as one might reasonably presume at first blush. Rather, in many of the years in question the vast majority of the penalty-incident imbalances involve calls of merely subtle interpretation (marked in yellow in the chart below) – infractions which could be called on every single play, but which for various reasons are not called on certain teams and are more heavily called on a few targeted teams; flags thrown or not thrown at critical moments in a drive, or upon a beneficial turnover or touchdown. To wit, in the chart I developed to the right, one can discern not only that the Oakland Raiders are once again the most differentially-penalized team in the NFL for the 2019 season, but also that the penalties thrown against the Raiders come at the most critically-disfavoring moments in their games – times when the Raiders had forced the opposing team into third-and-long circumstances, and their opponent therefore needed a break and an automatic first down in order to sustain a scoring drive. As you may observe in the chart, a team playing the Raiders in such a circumstance in 2019 bore by far the greatest likelihood of being awarded the subjective-call14 critical break it needed from NFL officials.15 16

The net upshot of this is that across their 16-game 2019 season the Raiders had 37 more drives impacted negatively by penalties than the average NFL team on their schedule – equating to a whopping 96 additional opponent score points (by the Net Drive Points chart below). Above and beyond their opponents’ performances along this same index, this equates to at least an additional 6 points per game (the ball-control-minutes impact being unknown) awarded to the Raiders’ 2019 opponents – making the difference between a 7-9 versus a 9-7 (or possibly even 10-6) record, not to mention the loss of a playoff berth. One can view the calculation tables for this set of data here. So yes, this disadvantage versus the NFL teams on the Raiders’ 2019 schedule was a big deal in terms of their overall season success.
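A quick sanity check of the drive-impact arithmetic, using only the figures stated in the text:

```python
# Sanity check of the 2019 drive-impact figures cited above.
extra_impacted_drives = 37   # more penalty-impacted drives than the schedule average
extra_opponent_points = 96   # additional opponent points (Net Drive Points chart)
games = 16

points_per_game = extra_opponent_points / games
points_per_impacted_drive = extra_opponent_points / extra_impacted_drives

print(points_per_game)                      # 6.0 extra points per game
print(round(points_per_impacted_drive, 2))  # ~2.59 points per impacted drive (implied)
```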

Calls for objective violations – delay of game, too many players, neutral zone infractions, encroachment and false starts, things which are not subject to interpretation – tell a different story: analyze these penalties and you will find that the Raiders actually perform better in these categories than the NFL average (see chart on right for 2019 called penalties). These are the ‘discipline indicator’ class of penalties. What the astute investigator will find is that, contrary to the story-line foisted for decades concerning the Raiders’ reputation, the team actually fares rather well in these measures. In contrast, one can glean from the Net Drive Points chart below (and derive the same number in the chart to the right) that the Raiders are penalized at double (2x) the rate of the average NFL team for scoring-drive subjective-call defensive penalties, and 16.3% higher for all interpretive penalty types in total (yellow Raider totals in the Net Drive Points chart below). By contrast, the Raiders are penalized at 72% of the league average for objective-class, non-interpretive penalties. It is simply a fact that the Raiders are examined by league officials with twice as much scrutiny for defensive holding, unnecessary roughness, offensive and defensive pass interference, roughing the passer, illegal pick, illegal contact and player disqualification. One can observe the analysis supporting this for 2019 Called Penalties here.17

The non-interpretive penalties (the ‘Discipline Class’ in the chart to the right) cannot be employed as inflection points of control, so their statistics will of course trend toward a more reasonable mean. Accordingly, this falsifies the notion that the Raiders are more penalized than other NFL teams because of shortfalls in coaching discipline. If that were the case, there should be no differential between the objective and interpretive penalty-type stats. In fact, inside this ‘discipline indicator’ penalty class, the Raiders fare better than the average NFL team. But this raises the question: do the coaching penalty statistics then corroborate this intelligence? Yes, as it happens, they do.

Argument #3 – Oakland Raider Head Coach Penalty Burden

Further falsifying the notion that excess Raider penalties are a result of coaching and discipline are the NFL penalty statistics of the Raider head coaches themselves. The notion does not pan out under that evidence either. On average, Raider head coaches have been penalized 31.6% more in their years as a Raider head coach than in their years as head coach of another NFL team. However, to be conservative, we have chosen in the graph to the right to weight each coach’s contribution by the number of years coached in each role. Thus, conservatively, a Raider head coach is penalized 26.3% more in that role as compared to their head coaching stints both before and after their tenure with the Oakland Raiders.18 This significant disadvantage has been part of the impetus which has shortened many coach tenures with the Raiders, helping account for the 3.3-year average Raider tenure, versus the 6.6-year average tenure on the part of the same group of coaches both before and after being head coach of the Raiders. One can observe this in the graph, which reflects a blend of eight NFL coaches over the 1979–2019 NFL seasons – all prominent NFL coaches who spent significant time (16 years on average) coaching both the Raiders and other NFL teams.19
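The years-weighted averaging described above can be sketched as follows. The coach differentials and year counts here are hypothetical placeholders – only the weighting method itself reflects the text:

```python
# Illustration of weighting each coach's penalty differential by years coached.
# All numbers below are hypothetical; they are not the actual coach data.
coaches = [
    # (penalty increase as Raider HC vs. elsewhere, years as Raider HC, years elsewhere)
    (0.40, 2, 10),
    (0.25, 4, 6),
    (0.30, 3, 8),
]

# Simple (unweighted) mean of the per-coach differentials
simple_mean = sum(d for d, _, _ in coaches) / len(coaches)

# Weighting each coach's differential by total years coached dampens the
# influence of short tenures -- the 'conservative' figure described in the text.
total_years = sum(ry + oy for _, ry, oy in coaches)
weighted_mean = sum(d * (ry + oy) for d, ry, oy in coaches) / total_years

print(round(simple_mean, 3), round(weighted_mean, 3))
```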

Not even one of the nineteen head coaches in the entire history of the Raider organization bucked this trend of being more heavily penalized as a Raider head coach. Not even one. Let that sink in.

There is no reasonable possibility that all these coaches and their variety of organizations could be that undisciplined, almost every single season, for 50 years. The data analysis supporting this graphic can be viewed here.

Argument #4 – Oakland Raider Player Penalty Burden

Statistically, this coaching differential has to impute to the players’ performances as well, through the association of common-base data. Former Raider cornerback D.J. Hayden portrayed this well in his recent contention that he was penalized more as an Oakland Raider than with other teams. Indeed, if we examine the Pro Football Reference data, Hayden was penalized a total of 35 times during his four years as a Raider defensive back, and only 11 times in his three years with Detroit and Jacksonville. This equates to 35 penalties in 45 games played for the Raiders, compared to only 11 penalties in 41 games played for other teams.20 That reflects a 65% reduction in penalties per game played, and a 55% reduction in penalties per snap played, during his tenure with teams other than the Oakland Raiders.21
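The per-game reduction can be verified directly from the Pro Football Reference counts cited above:

```python
# Verifying D.J. Hayden's per-game penalty-rate reduction from the cited figures.
raiders_penalties, raiders_games = 35, 45
other_penalties, other_games = 11, 41

rate_raiders = raiders_penalties / raiders_games  # ~0.78 penalties per game
rate_other = other_penalties / other_games        # ~0.27 penalties per game

reduction = 1 - rate_other / rate_raiders
print(f"{reduction:.1%}")  # 65.5% -- the '65% reduction' cited in the text
```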

Such detriment constitutes a disincentive for players to want to play for a team which is penalized so often – potentially marring their careers and negatively impacting their dreams of Pro Bowl, MVP or even Hall of Fame selection. This, I believe, is part of the reason the badge-of-honor tag-phrase ‘Once a Raider, Always a Raider’ evolved. In order to play for the Raiders, you pretty much have to acknowledge this shtick inside your career, and live with it for life. Should we now asterisk every player and coach in the NFL Hall of Fame with a ‘Played for the Oakland Raiders’ mark – a kind of reverse steroid-penalty bias negatively impacting a player’s career?

In the end, all such systemic bias serves to do is erode the NFL brand, cost the NFL its revenue – and most importantly, harm fans, players, coaches and families.

NFL, your brand and reputation have drifted since the infamous Tuck Rule Game into becoming ‘Bill Belichick and the Zebra Street Boys’. Yours is a brand containing the word ‘National’, and as a league you should act accordingly to protect it. Nurture and protect it through a strategy of optimizing product quality.

And finally, the most idiotic thing one can do is to blame all this on the Oakland fans, as was done in this boneheaded article by the Bleacher Report on the Raider penalty problem from as far back as February 2012.

Collectively, all of this is known inside any other professional context as ‘bias’ – or could even be construed by angry fans as cheating. And when members of an organization are forced under financial/career penalty to remain silent about such activity (extortion) – when you observe coaches, players, and more importantly members of the free press biting their tongues over this issue – it starts to become reminiscent of prohibition-era 18 U.S.C. § 1961 U.S. Code racketeering activity.

When you examine the history of such data, much of this patterning in bias remains consistent, decade after decade. It is systemic. It is agency. One can find and download the history of NFL penalties by game, type, team, etc. into a datamart or spreadsheet for intelligence derivation here: NFL Penalty Tracker. Go and look for yourself, and you will see that what I am saying is true. What we have outlined here is a version of the more obvious Indigo-point commission bias. Let’s examine now a more clever form of cheat, the Indigo-point omission bias.

Indigo-Omission Favors Specific NFL Teams: Case of the 2017 New England Patriots

Let’s address an example in contrast to the Oakland Raiders (also from the NFLPenalties.com data set): the case of a perennially call-favored team, the New England Patriots. As one can see in an exemplary season for that franchise, portrayed in the chart to the right, the New England Patriots team that traveled to the 2017-season Super Bowl was flagged (from game 10 of the season through to the Super Bowl) at a rate more than two standard deviations below even the next least-flagged team among the group of 31 other NFL teams. Two standard deviations below even the second-best team in terms of penalties called against them. That is an enormous bias in signal. One can observe the 2017 game-by-game statistical data from which the graphic to the right is derived here. If one removes the flagrant, non-inflection-point-useful and very obvious penalties from the Patriots’ complete penalty log (the non-highlighted penalty types in the chart below), this further means the Patriots were called for just 29 interpretive penalties in these final 12 games – and on average each was not called until late in the 3rd quarter, after the game’s outcome was in many cases already determined.22
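For readers unfamiliar with the measure, the ‘two standard deviations below’ claim is a z-score comparison. A minimal sketch, with hypothetical per-game flag averages standing in for the other teams’ actual 2017 data (only the Patriots’ 2.4 figure comes from the text):

```python
import statistics

# Hypothetical per-game interpretive-flag averages for other teams
# (illustrative numbers only, not the actual 2017 league data).
other_teams = [5.2, 5.5, 5.7, 5.9, 6.0, 6.2, 6.4, 6.6, 6.8, 7.1]

mu = statistics.mean(other_teams)
sigma = statistics.stdev(other_teams)

patriots = 2.4  # late-2017 interpretive flags per game, per the text

# A z-score below -2 places the Patriots more than two standard
# deviations under the rest of the field.
z = (patriots - mu) / sigma
print(round(z, 2))
```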

In the chart to the right, one may observe the Net Drive Points (score) which were the statistical result of each of the most common forms of NFL penalty. (Of note is the dramatic skew in Raider penalties toward higher score-sensitive penalties versus the average NFL team – 102%.) For those penalties (highlighted in yellow in the chart) which can be called on any play, New England opponents for weeks 10 through the end of the 2017 season earned 6.6 interpretive penalties per game, in the same weeks in which New England was flagged 2.4 times on average. This equates to New England earning only 36% as many interpretive penalties as their average opponent during that same timeframe. As well, most teams draw their interpretive penalties, on average, late in the second quarter of play (as statistically they should), while New England was awarded their interpretive penalties with less than 5 minutes left in the third quarter of each game on average.

This means that New England was very seldom interpretive-penalized during any time in a game in which the outcome of that game was in doubt. This is ‘exploitation of omissions at Point Indigo’ – an absence of interpretive calls against them for, on average, the first three quarters of each game played in late 2017. This factor, as much as being a good team, is what propelled them to the Super Bowl.

Exploiting the Tau Point on specific critical plays near the end of a game constitutes, ironically, a less effective and more obvious mode of cheating – one which will simply serve to piss off alert fans, as happened in the January 20th, 2019 Rams–Saints ‘No Call’ game. One cannot Indigo Point cheat viscerally for long and not get called on such obvious bias – the highly skilled cheat must be an exploit conducted when stakeholder attentions are not piqued.

Indigo Point Exploitation: The New England Patriots received their interpretive penalties at 36% the rate of the average NFL team, a full quarter later into the game than the average NFL team, most typically when the game outcome was already well in hand. This constitutes exploitation through omission at the Indigo Point.

In fact, for the entire AFC Championship and Super Bowl that season, New England was flagged only twice for any type of violation – a total of 15 yards. Their opponents? The Jaguars and the Eagles were flagged for 10 and 7 times more penalty yards, respectively, than were the Patriots in their respective championship games. True to form for 2019, from the same NFLPenalties database employed for the Raiders Penalty Differential chart at the top of this article section, one can find that New England was the second least penalized team in the NFL for most of the 2019 season – only falling to 6th overall in the final games (after they were busted a sixth time for cheating, by filming the sidelines of the next week’s opposing team) – and on track to another probable and tedious Super Bowl appearance.

To put it in gambling terms – a seriously tested means of quantification upon which bookies rely – the Patriots’ opponents in the 2017 NFL season, on average for games 10 through the Super Bowl, were given 4 more penalties in each game than were the Patriots themselves (3 fewer awarded to New England, plus 1 more awarded to their opponent, on average). Using the Net Drive Points for the most common interpretive penalty types (highlighted in yellow) from the chart immediately above (published at Sports Information Solutions)23, this equates to awarding 10.8 extra points to the Patriots, per game, every game, all the way from game 10 of the 2017 season through to and including the Super Bowl. No wonder they got to the Super Bowl.
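The 10.8-point figure follows from the stated 4-flag-per-game differential. A back-of-envelope sketch; the per-flag point value shown is simply the figure implied by the text (10.8 ÷ 4), not a number read off the chart:

```python
# Back-of-envelope for the '10.8 extra points per game' figure above.
extra_flags_per_game = 4            # 3 fewer against NE, plus 1 more against opponent
points_per_interpretive_flag = 2.7  # implied average Net Drive Points per flag (10.8 / 4)

extra_points_per_game = extra_flags_per_game * points_per_interpretive_flag
print(extra_points_per_game)  # 10.8
```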

This equates to awarding the Patriots an extra 10.8 points per game in the second half of the season thru the playoffs.

Half the teams in the NFL could have gotten to the 2017 Season Super Bowl if they were given this
same dishonest two touchdown per game advantage afforded the New England Patriots by league officials that year.

Once again, as in the case of the Oakland Raiders earlier, one can make up the pseudo-theory that ‘hey, they are a more disciplined team, so they are penalized less’. That is, until one examines the data and observes that this condition has gone on for five decades – ostensibly since, but in reality much further back than, the notorious ‘Tuck Rule’ AFC Championship Game (the video of which can no longer be found in its original form, because the NFL edited out over 2 minutes in order to conceal the game’s penalty no-call Tau Point league phone call intervention). The penalties which are called or not called are of an interpretive nature – again, those that occur on most every single down, but are called on some teams consistently, and on other teams not so much. Here as well, in the penalty classes which are not subject to interpretation – delay of game, false start, etc. – surprise: New England is just average in those ‘no doubt’ classes of penalty.24 If this were a matter of coaching discipline, New England should also be two standard deviations below the mean for objective-class penalties. They are not. The subjective-class (yellow) penalty calls and no-calls have nothing whatsoever to do with coaching discipline, and everything to do with a statistically manifest bias on the part of the league and its officials.

The Economics of Mafia-Like Activity

It took me a while to come to this realization. Because of the presence of closed-door threats and fines levied upon its members, its monopolistic overcharging-for-services exploitation of its customers, and the illicit revenue gained through under-the-table manipulation of the success of its organizations and the flow of grey-market (gambling) money, the National Football League is actually not a cartel; rather, it is more akin to a mafia by definition.

To annually bill customers who are misled into believing they are watching or wagering upon unbiased games of skill, chance and coaching – $830 to DirectTV and $300 to NFL Sunday Ticket at bare-bones cost (both purchases are required, and the real cost for most consumers is on the order of $1,350 or more per year) – for a product which is touted to be one thing, but delivered as a form of dishonest charade – to my sense this constitutes a consumer or gambling failure to deliver a contracted service.

I personally paid $29,000 to NFL Sunday Ticket and DirectTV over the last 15 years of viewing NFL games, misled by the falsehood that I was watching a sporting event wherein my teams had a chance of success through skill, draft selection, talent, coaching and ball bounces – fully unaware that in reality, my teams had little chance of success at all.

I was not delivered the product which was sold to me.

The NFL has actually counter-argued this very consumer accusation before the Supreme Court, as recently as 2010, contending that it is merely ‘a form of entertainment’. In 2007, a Jets season ticket holder sued the NFL for $185 million, and the case reached the US Supreme Court. The Jets fan argued that all Jets fans were entitled to refunds because they had paid for a ticket to a competition of skill, coaching and chance – and further, that had they been aware the games were not real, the fans would not have bought tickets. The fan lost the case on the grounds that fans were not buying a ticket to a ‘fair’ event, but rather to an entertainment event.

Accordingly, the NFL contends that this Supreme Court precedent gives it the contractual right to advantage or disadvantage a team without having to address its own bias or cheating – and further, that the league is legally entitled to do what is needed to entertain its audience, such as in the creation and promotion of certain ‘storylines’.25 Storylines of the evil people and the good people (sound familiar?), in order to stimulate ticket and media purchases. A farce wherein, ironically, the league office actually thrives upon the brand-premise that it is administering a game of skill, chance and coaching. The reality is that NFL officials pick and choose whom they want to win and whom they want to lose – the same teams, decade in and decade out. None of its at-risk members (players, organizations, staff and coaches) are allowed to speak of this gaming, under threat of fines or loss of career. At least in professional wrestling, the league leadership and participants admit that it is all an act. In professional wrestling, no one is fooled out of their money.

This is a pivotal reason why I dumped NFL Sunday Ticket and DirectTV. I am not into being bilked of hard-earned household money by a quasi-mafia.

Update (Dec 2019): The NFL is reportedly planning a “top-down review” of the league’s officiating during the 2020 offseason.

Such shenanigans as exemplified in the three case studies above represent the ever-presence and impact of agency (not merely bias). Bias can be mitigated; agency, however, involves the removal and/or disruption of the power structures of the cartel, cabal and mafia. These case examples in corruption demonstrate how agency can manipulate inflection dynamics to reach a desired tipping point – after which one can sit in their university office and enjoy tenure, all the way to sure victory. The only tasks which remain are to protect the indigo point secret formula by means of an appropriate catch phrase, and to ensure that one does not have any mirrors hanging about, so that one does not have to look at oneself.

An ethical skeptic maintains a different view as to how championships, ethical markets, and scientific understanding should be prosecuted and won.

   How to MLA cite this article:

The Ethical Skeptic, “Inflection Point Theory and the Dynamic of The Cheat”; The Ethical Skeptic, WordPress, 20 Oct 2019; Web, https://wp.me/p17q0e-atd

October 20, 2019 | Posted in: Institutional Mandates, Tradecraft SSkepticism

The Plural of Anecdote is Data

A single observation does not necessarily constitute an instance of the pejorative descriptive ‘anecdote’. Not only do anecdotes constitute data, but one anecdote can serve to falsify the null hypothesis and settle a scientific question in short order. Such is the power of a single observation. Such is the power of skillfully wielding scientific inference. Fake skeptics seek to emasculate the power of the falsifying observation, at all costs.

It is incumbent upon the ethical skeptic – those of us who are researchers, if you will; those who venerate science both as an objective set of methods and as a philosophy – to understand the nature of anecdote and how the tool is correctly applied inside scientific inference. Anecdotes are not ‘woo’, as most fake skeptics will imply through a couple of notorious memorized one-liners. Never mind what they say, nor what they might claim as a straw man of their intent – watch instead how they apply their supposed wisdom. You will observe such abuse of the concept to be most often the case. We must insist, to the theist and nihilist religious communities of deniers alike, that inside the context of falsification/deduction in particular, a single observation does not constitute an instance of ‘anecdote’ (in the pejorative). Not only do anecdotes constitute data, but one anecdote can serve to falsify the Null (or even a null hypothesis) and settle the question in short order. Such is the power of a single observation.

See ‘Anecdote’ – The Cry of the Pseudo-Skeptic

To an ethical skeptic, inductive anecdotes may prove informative in nature if one gives structure to and catalogs them over time. Anecdotes which are falsifying/deductive in nature are not only immediately informative, but more importantly, probative – probative with respect to the null. I call the inferential mode modus absens ‘the null’ because, in non-Bayesian styled deliberation, the null hypothesis – the notion that something is absent – is usually not actually a hypothesis at all. Rather, this species of idea constitutes simply a placeholder: the idea that something is not, until proved to be. And while this is a good common-sense structure for the resolution of a casual argument, it does not mean that one should therefore believe or accept the null as merely the outcome of this artifice in common sense. In a way, deflecting observations by calling them ‘anecdote’ is a method of believing the null, and not in actuality conducting science or critical thinking. However, this is the reality we face with unethical skeptics today: the tyranny of the religious default Null.

The least scientific thing a person can do, is to believe the null hypothesis.

Wolfinger’s Misquote

/philosophy : skepticism : pseudoscience : apothegm/ : you may have heard the phrase ‘the plural of anecdote is not data’. It turns out that this is a misquote. The original aphorism, by the political scientist Ray Wolfinger, was just the opposite: ‘The plural of anecdote is data’. The only thing worse than the surrendered value (as opposed to collected value, in science) of an anecdote is the incurred bias of ignoring anecdotes altogether. This is a method of pseudoscience.

Our opponents elevate the scientific status of a typical placeholder Null (‘such-and-such does not exist’) and pretend that the idea 1. actually possesses a scientific definition, and 2. bears consensus acceptance among scientists. These constitute the first of their many magician’s tricks, which those who do not understand the context of inference fall for, over and over. Even scientists will fall for this ol’ one-two, so it is understandable that journalists and science communicators will as well. But anecdotes are science, when gathered under the disciplined structure of Observation (the first step of the scientific method). Below we differentiate four contexts of the single observation – two inductive and two deductive inference contexts – only one of which fits the semantics regarding ‘anecdote’ which is exploited by fake skeptics.

Inductive Anecdote

Inductive inference is the context wherein a supporting case or story can be purely anecdotal (‘The plural of anecdote is not data’). This apothegm is not a logical truth; it can apply to certain cases of induction, but it does not apply universally.

Null:  Dimmer switches do not cause house fires to any greater degree than do normal On/Off flip switches.

Inference Context 1 – Inductive Data Anecdote:  My neighbor had dimmer switched lights and they caused a fire in his house.

Inference Context 2 – Mere Anecdote (Appeal to Ignorance):  My neighbor had dimmer switched lights and they never had a fire in their house.

Hence we have Wolfinger’s Inductive Paradox.

Wolfinger’s Inductive Paradox

/philosophy : science : data collection : agency/ : an ‘anecdote’ to the modus praesens (observation or case which supports an objective presence of a state or object) constitutes data, while an anecdote to the modus absens (observation supporting an appeal to ignorance claim that a state or object does not exist) is merely an anecdote. One’s refusal to collect or document the former, does not constitute skepticism. Relates to Hempel’s Paradox.

Finally, we have the instance wherein we step out of inductive inference and into the stronger probative nature of deduction and falsification. In this context an anecdote is almost always probative. As in the case of Wolfinger’s Inductive Paradox above, one’s refusal to collect or document such data does not constitute skepticism.

Deductive or Falsifying Anecdote

Deductive inference, leading also to falsification (‘The plural of anecdote is data’). Even the singular of anecdote is data under the right condition of inference.

Null:  There is no such thing as a dimmer switch.

Inference Context 3 – Deductive Anecdote:  I saw a dimmer switch in the hardware store and took a picture of it.

Inference Context 4 – Falsifying Anecdote:  An electrician came and installed a dimmer switch into my house.

Consider, for example, what occurs when one accepts materialism as an a priori truth: religious agency is inserted between Contexts 3 and 4 above. Its adherents contend that dimmer switches do not exist, and that therefore any photo of one necessarily has to be false. And of course, at any given time, there is only one photo of one at all (all previous photos were dismissed earlier in similar exercises). Furthermore, they then forbid any professional electrician from installing any dimmer switches (else be subject to losing their license). In this way, dimmer switches can never ‘exist’, and deniers can endlessly proclaim to non-electricians ‘you bear the burden of proof’ (see Proof Gaming) – from then on deeming all occurrences of Context 3 to constitute lone cases of ‘anecdote’, while failing to distinguish between the inductive and deductive contexts therein.

Our allies and co-observers as ethical skeptics need bear knowledge of the philosophy of science (skepticism) sufficient to stand up and say, “No – this is wrong. What you are doing is pseudoscience”.

Hence, one of my reasons for creating The Map of Inference.

   How to MLA cite this article:

The Ethical Skeptic, “The Plural of Anecdote is Data”; The Ethical Skeptic, WordPress, 1 May 2019; Web, https://wp.me/p17q0e-9HJ

May 1, 2019 | Posted in: Argument Fallacies, Tradecraft SSkepticism

Torfuscation – Gaming Study Design to Effect an Outcome

As important as the mode of inference one employs by means of a scientific study is the design of the study itself. Before one can begin to reduce and analyze a body of observations, the ethical scientist must first select the study type and design that afford the greatest draw in terms of probative potential. Not all studies are equal in terms of their bootstrap or inferential strength.
The intricacies of this process present the poseur an opportunity to game the outcomes of science through study design, type and PICO vulnerabilities – tactics which can serve to produce outcomes furthering the obfuscating political, social or religious causes of their sponsors.

There are several ways to put on the appearance of conducting serious science, yet still effect outcomes which maintain alignment with the agency of one’s funders, sponsors, mentors or controlling authorities. Recent ethical evolution inside science has highlighted the need to understand that a researcher’s simply having calculated a rigorous p-value, applied an arrival distribution or bounded an estimate inside a confidence interval does not necessarily mean that they have developed a sound basis from which to draw any quality scientific inference.1 In similar philosophy, one can develop a study – and completely mislead the scientific community as to the probative depth or nature of reality inside a given contention of science. A thousand studies bearing weak inductive inference can be rendered null by one sound deductive study. The key resides in the ethical skeptic’s ability to survey this domain of study strength and adeptly apply it to what is foisted as constituting science.

We are all familiar with the popular trick of falsely calling a ‘survey of study abstracts’, a meta-synthesis of best evidence, or an opinion piece summarizing a body of study from one person’s point of view, a ‘meta-analysis’. An authentic meta-analysis combines congruent study designs and bodies of equivalent data in order to improve the statistical power of the combined entailed analyses.2 The fake forms of meta-analysis achieve no such gravitas in strength. A meta-analysis is a secondary or filtered systematic review which bears leveraged strength only in the instance wherein randomized controlled trials or longitudinal studies of the same species are able to be combined, in order to derive a higher statistical power than any single study can deliver independently. Every other flavor of ‘blending of study’ fails to accomplish such an objective. This casual blending, presented in the faux flavors of meta-study, may – and this is important – ironically serve to reduce the probative power of the systematic review itself. Nonetheless, you will find less-than-ethical scientists trying to push their opinion/summary articles upon the community as if they reflected, through convenient misnomer, this ‘most rigorous form of study design’. One can find an example of this within the study: Taylor, Swerdfeger, Eslick; An evidence-based meta-analysis of case-control and cohort studies; Elsevier, 2014.3
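The mechanism by which an authentic meta-analysis of congruent studies gains statistical power is the pooling of effect estimates. A minimal sketch of standard fixed-effect (inverse-variance) pooling, with hypothetical effect sizes and standard errors:

```python
import math

# Fixed-effect (inverse-variance) pooling across congruent studies.
# Effect estimates and standard errors below are hypothetical.
studies = [
    # (effect estimate, standard error)
    (0.30, 0.15),
    (0.25, 0.10),
    (0.40, 0.20),
]

# Each study is weighted by the inverse of its variance.
weights = [1 / se**2 for _, se in studies]
pooled_effect = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# The pooled standard error is smaller than any single study's SE --
# this, not the mere blending of abstracts or opinion pieces, is the
# source of a true meta-analysis's higher statistical power.
print(round(pooled_effect, 3), round(pooled_se, 3))
```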

This sleight-of-hand treatment stands as merely one example of the games played within the agency-influenced domains of science. With regard to manipulating study design in order to effect a desired scientific outcome, there are several means of accomplishing this feat. Most notable are the following methods, which I collectively call torfuscation (Saxon for ‘hiding the dead body in the bog’). Torfuscation is an active form of Nelsonian inference which involves one or more species of study abuse:

1. asking an orphan question, one which is non-sequitur or does not address the critical path of the scientific question at hand,

2. employing a less rigorous study type (lower rank on the Chart below) than ethically is warranted by the scientific question at hand – (aka, methodical deescalation),

3. employing an ineffective study design, and masking that error with rigorous academic statistical analysis of what is essentially garbage input,

4. selecting for a body of ‘reliable’ data to the exclusion of available and more probative data – (aka, streetlight effect),

5. employing an ineffective secondary or filtered study design, spun as if it were a higher probative or bootstrap strength study, or

6. study constrained by a type of flawed methodical PICO-time analysis (wrong population, wrong timeframe, wrong signal/indicator, etc.).

These abuses most often serve to weaken the probative potential of an avenue of research which could otherwise produce an outcome threatening to the study’s sponsors. I call this broad set of pseudo-scientific practices torfuscation.

Torfuscation

/philosophy : pseudoscience : study fraud : Saxon : ‘hide in the bog’/ : pseudoscience or obfuscation enacted through a Nelsonian knowledge masquerade of scientific protocol and study design. Inappropriate, manipulated or shallow study design crafted so as to obscure or avoid a targeted/disliked inference. A refined form of praedicate evidentia or utile abstentia employed through using less rigorous or probative methods of study than are requisite under otherwise ethical science. Exploitation of study noise generated through first level ‘big data’ or agency-influenced ‘meta-synthesis’, as the ‘evidence’ that no further or deeper study is therefore warranted – and moreover that further research of the subject entailed is now socially embargoed.

Study design which exploits the weakness potential entailed inside the PICO-time Study Design Development Model4 (see Study to Inference Strength and Risk Chart below), through the manipulation of the study

P – patient, problem or population
I – intervention, indicator
C – comparison, control or comparator
O – outcome, or
time – time series

Which seeks to compromise the outcome or conclusion in terms of the study usage; more specifically: prevention, screening, diagnostic, treatment, quality of life, compassionate use, expanded access, superiority, non-inferiority and/or equivalence.

Meta-Garbage, Deescalation and PICO-time Manipulation

One example of tampering with the PICO-time attributes of a study would consist of the circumstance wherein only completed diagnostic data from medical insurance plans is used as the sample base for a retrospective observational cohort study’s ‘outcome’ data. Such data is highly likely to be incomplete or skewed in a non-probative or biased direction, under a condition of linear induction (a weaker form of inference) and utile abstentia (a method of exclusion bias through furtive data-source selection). As an example, if the diagnosis of a condition occurs on average at 5.5 years of age inside a study population of kids, and the average slack time between diagnosis and first possible recording into a medical insurance plan database is 4 to 18 months, then a constraining of the time series involved inside a study examining that data, to 4.5 years, is an act of incompetent or malicious study design. But you will find both of these tricks to be common in studies wherein a potential outcome is threatening to a study’s sponsors; agents who hope to prove – by modus absens, through shallow and linear inductive inference – that the subject can be embargoed from the point of their study onward. Just such a study can be found here: Madsen, Hviid; A Population-Based Study of Measles, Mumps, and Rubella Vaccination and Autism, 2002.5
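The arithmetic of that time-series truncation can be sketched with a small simulation. All distributions and numbers here are hypothetical, chosen only to mirror the 5.5-year average diagnosis age and 4-to-18-month recording lag described above:

```python
import random

random.seed(0)

def fraction_captured(cutoff_years, n=100_000):
    """Fraction of diagnoses visible in the database when the study's
    time series is cut off at cutoff_years (hypothetical distributions)."""
    captured = 0
    for _ in range(n):
        diagnosis_age = random.gauss(5.5, 1.5)           # years; illustrative spread
        recording_lag = random.uniform(4 / 12, 18 / 12)  # 4-18 months, in years
        if diagnosis_age + recording_lag <= cutoff_years:
            captured += 1
    return captured / n

short = fraction_captured(4.5)    # the truncated study window
longer = fraction_captured(10.0)  # a window long enough to record outcomes
# The truncated window hides the large majority of eventual diagnoses.
assert short < longer
```

Under these assumed distributions, the 4.5-year cutoff captures only a small fraction of eventual diagnoses, while a 10-year window captures nearly all of them – which is why the cutoff choice is a design decision, not a detail.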

A study may also be downgraded (lower on the chart below), and purposely forced to employ a lesser form of design probative strength (Levels 1 – 8 on the left side of the chart); precisely because its sponsors suspect the possibility of a valid risk they do not want broached/exposed. This is very similar to the downgrading in inference method we identified above, called methodical deescalation. Methodical deescalation is a common trick of professional pseudoscience wherein abduction is used in lieu of induction, or induction is used in lieu of deduction – when the latter (stronger) mode, type or form of inference was ethically demanded. One may also notice that studies employing these six torfuscation tricks we listed earlier are often held as proprietary in their formulation; concealed from the public or at-risk stakeholders during the critical study design phase. This lack of public accountability or input is purposeful. Such activity is akin to asking for forgiveness rather than permission, and can often constitute in reality court-defined ‘malice and oppression’ in the name of science.6

Beware of studies supporting activity which serves to place a large stakeholder group at risk,
yet seek zero input from those stakeholders as to adequacy of study design.
This is also known as oppression.

The astute reader may also notice an irony here, in that the ‘meta-analysis’ decried earlier in this article, cited the very study just mentioned as an example of torfuscation, as its ‘best evidence study’ inside its systematic review. Meta-fraud providing fraud as its recitation basis. Well, at least the species of study are congruent. If you meta-study garbage, you will produce meta-garbage as well (see Secondary Study in the Chart below).

Be very wary of a science which constrains its body of study to the bottom of
the chart below or is quick to a claim of absence (modus absens) –
especially when higher or positive forms of study are available
but scientists are dis-incentivized to pursue them.

Study Design to Mode of Inference Strength and Risk

The following is The Ethical Skeptic’s chart indexing study design against mode of inference, strength and risk in torfuscation. It is a handy tool for helping spot torfuscation such as is employed in the three example types elicited above (and more). The study types are ranked from top to bottom in terms of Level in probative strength (1 – 8), and as well are arranged into Direct, Analytical and Descriptive study groupings by color. Torfuscation involves the selection of a study type with a probative power lower down on the chart, when a higher probative level of study was available and/or ethically warranted; as well as in tampering with the PICO-time risk elements (right side of chart under the yellow header) characteristic of each study type so as to weaken its overall ability to indicate a potential disliked outcome.

The Chart is followed up by a series of definitions for each study type listed. The myriad sources for this compiled set of industry material are listed at the end of this article – however, it should be noted that the sources cited did not agree with each other on the material, level, structure or definitions of various study designs. Therefore modifications and selections were made as to the attributes of study, which allowed for the entire set of alternatives/definitions to come into synchrony with each other – or fit like a puzzle with minimal overlap and confusion. So you will not find 100% of this chart replicated inside any single resource or textbook. (Note: my past lab experience has been mostly in non-randomized controlled factorial trial study – whose probative successes were fed into a predictive model, then confirmed by single mechanistic lab tests. I found this approach to be highly effective in my past professional work. But that lab protocol may not apply to other types of study challenge and could be misleading if applied as a panacea. Hence the need for the chart below.)

Study Design Type Definitions

PRIMARY/DIRECT STUDY

Experimental – A study which involves a direct physical test of the material or principal question being asked.

Mechanistic/Lab – A direct study which examines a physical attribute or mechanism inside a controlled closed environment, influencing a single input variable, while observing a single output variable – both related to that attribute or mechanism.

Controlled Trial

Randomized (Randomized Controlled Trial) – A study in which people are allocated at random (by chance alone) to receive one of several clinical interventions. One of these interventions is the standard of comparison or the ‘control’. The control may be a standard practice, a placebo (“sugar pill”), or no intervention at all.
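The allocation-by-chance-alone step itself is mechanically simple; a minimal sketch (participant IDs here are hypothetical):

```python
import random

random.seed(42)

# Random allocation for a two-arm controlled trial; the 20 participant
# IDs are hypothetical placeholders.
participants = list(range(1, 21))
random.shuffle(participants)
treatment, control = participants[:10], participants[10:]

assert len(treatment) == len(control) == 10
assert sorted(treatment + control) == list(range(1, 21))
```

The probative strength of the design comes from this step: chance alone, not any discriminating factor, determines which arm each participant enters.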

Non-Randomized Controlled Trial – A study in which people are allocated by a discriminating factor (not bias), to receive one of several clinical interventions. One of these interventions is the standard of comparison or the ‘control’. The control may be a standard practice, a placebo (“sugar pill”), or no intervention at all.

Parallel – A type of controlled trial where two groups of treatments, A and B, are given so that one group receives only A while another group receives only B. Other names for this type of study include “between patient” and “non-crossover” studies.

Crossover – A longitudinal direct study in which subjects receive a sequence of different treatments (or exposures). In a randomized controlled trial with repeated measures design, the same measures are collected multiple times for each subject. A crossover trial has a repeated measures design in which each patient is assigned to a sequence of two or more treatments, of which one may either be a standard treatment or a placebo. Nearly all crossover controlled trial studies are designed to have balance, whereby all subjects receive the same number of treatments and participate for the same number of periods. In most crossover trials each subject receives all treatments, in a random order.

Factorial – A factorial study is an experiment whose design consists of two or more factors, each with discrete possible values or ‘levels’, and whose experimental units take on all possible combinations of these levels across all such factors. A full factorial design may also be called a fully-crossed design. Such an experiment allows the investigator to study the effect of each factor on the response variable or outcome, as well as the effects of interactions between factors on the response variable or outcome.
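A fully-crossed design can be enumerated mechanically; the factor names and levels below are illustrative only:

```python
from itertools import product

# Illustrative factors and levels only; a real trial defines its own.
factors = {
    "dose":      ["low", "high"],
    "frequency": ["daily", "weekly"],
    "adjuvant":  ["none", "added"],
}

# Every combination of levels across all factors is one experimental cell.
cells = list(product(*factors.values()))
assert len(cells) == 2 * 2 * 2  # fully-crossed: 8 cells
```

Each of the eight cells receives experimental units, which is what lets the investigator estimate both main effects and interaction effects between factors.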

Blind Trial – A trial or experiment in which information about the test is masked (kept hidden) from the participant (single blind) and/or the test administrator (double blind), to reduce or eliminate bias, until after a trial outcome is known.

Open Trial – A type of non-randomized controlled trial in which both the researchers and participants know which treatment is being administered.

Placebo-Control Trial – A study which blindly and randomly allocates similar patients either to a control group that receives a placebo or to an experimental test group. Therein investigators can ensure that any possible placebo effect will be minimized in the final statistical analysis.

Interventional (Before and After/Interrupted Time Series/Historical Control) – A study in which observations are made before and after the implementation of an intervention, both in a group that receives the intervention and in a control group that does not. A study that uses observations at multiple time points before and after an intervention (the ‘interruption’). The design attempts to detect whether the intervention has had an effect significantly greater than any underlying trend over time.

Adaptive Clinical Trial – A controlled trial that evaluates a medical device or treatment by observing participant outcomes (and possibly other measures, such as side-effects) along a prescribed schedule, and modifying parameters of the trial protocol in accord with those observations. The adaptation process generally continues throughout the trial, as prescribed in the trial protocol. Modifications may include dosage, sample size, drug undergoing trial, patient selection criteria or treatment mix. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. Importantly, the trial protocol is set before the trial begins; the protocol pre-specifies the adaptation schedule and processes. 

Observational – Analytical

Cohort/Panel (Longitudinal) – A study in which a defined group of people (the cohort – a group of people who share a defining characteristic, typically those who experienced a common event in a selected period) is followed over time, to examine associations between different interventions received and subsequent outcomes.  

Prospective – A cohort study which recruits participants before any intervention and follows them into the future.

Retrospective – A cohort study which identifies subjects from past records describing the interventions received and follows them from the time of those records.

Time-Series – A cohort study which identifies subjects from a particular segment in time following an intervention (which may have also occurred in a time series) and follows them during only the duration of that time segment. Relies upon robust intervention and subject tracking databases. For example, comparing lung health to pollution during a segment in time.

Cross-Sectional/Transverse/Prevalence – A study that collects information on interventions (past or present) and current health outcomes, i.e. restricted to health states, for a group of people at a particular point in time, to examine associations between the outcomes and exposure to interventions.

Case-Control – A study that compares people with a specific outcome of interest (‘cases’) with people from the same source population but without that outcome (‘controls’), to examine the association between the outcome and prior exposure (e.g. having an intervention). This design is particularly useful when the outcome is rare.

Nested Case-Control – A study wherein cases of a health outcome that occur in a defined cohort are identified and, for each, a specified number of matched controls is selected from among those in the cohort who have not developed the health outcome by the time of occurrence in the case. For many research questions, the nested case-control design potentially offers impressive reductions in costs and efforts of data collection and analysis compared with the full case-control or cohort approach, with relatively minor loss in statistical efficiency.

Community Survey – An observational study wherein a targeted cohort or panel is given a set of questions regarding both interventions and observed outcomes over the life or a defined time period of the person, child or other close family member. These are often conducted in conjunction with another disciplined polling process (such as a census or general medical plan survey) so as to reduce statistical design bias or error.

Ecological (Correlational) – A study of risk-modifying factors on health or other outcomes based on populations defined either geographically or temporally. Both risk-modifying factors and outcomes are averaged or are linear regressed for the populations in each geographical or temporal unit and then compared using standard statistical methods.

Observational – Descriptive

Population – A study of a group of individuals taken from the general population who share a common characteristic, such as age, sex, or health condition. This group may be studied for different reasons, such as their response to a drug or risk of getting a disease. 

Case Series – Observations are made on a series of specific individuals, usually all receiving the same intervention, before and after an intervention but with no control group.

Case Report – Observation is made on a specific individual, receiving an intervention, before and after an intervention but with no control group/person other than the general population.

SECONDARY/FILTERED STUDY

Systematic Review/Objective Meta-Analysis – A method for systematically combining pertinent qualitative and quantitative study data from several selected studies to develop a single conclusion that has greater statistical power. This conclusion is statistically stronger than the analysis of any single study, due to increased numbers of subjects, greater diversity among subjects, or accumulated effects and results. However, researchers must ensure that the quantitative and study design attributes of the contained studies all match, in order to retain and enhance the statistical power entailed. Mixing lesser rigorous or incongruent studies with more rigorous studies will only result in a meta-analysis which bears the statistical power of only a portion of the studies, or of the least rigorous study type contained, in decreasing order along the following general types of study:

Controlled Trial/Mechanism
Longitudinal/Cohort
Cross-Sectional
Case-Control
Survey/Ecological
Descriptive

Interpretive/Abstract ‘Meta-Synthesis’ – A study which surveys the conclusion or abstract of a pool of studies in order to determine the study authors’ conclusions along a particular line of conjecture or deliberation. This may include a priori conclusions or author preferences disclosed inside the abstract of each study, which were not necessarily derived as an outcome of the study itself. This study may tally a ‘best evidence’ subset of studies within the overall survey group, which stand as superior in their representation of the conclusion, methodology undertaken or breadth in addressing the issue at hand.

Editorial/Expert Opinion – A summary article generally citing both scientific outcomes and opinion, issued by an expert within a given field, currently active and engaged in research inside that field. The article may or may not refer to specific examples of studies, which support an opinion that a consilience of evidence points in a given direction regarding an issue of deliberation. The author will typically delineate a circumstance of study outcome, consilience or consensus as separate from their personal professional opinion.

Critical Review/Skeptic Opinion – A self-identified skeptic or science enthusiast, applies a priori thinking with no ex ante accountability, in order to arrive at a conclusion. The reviewer may or may not cite a couple examples or studies to back their conclusion.

Sources: 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24

How to MLA cite this article:

The Ethical Skeptic, “Torfuscation – Gaming Study Design to Effect an Outcome”; The Ethical Skeptic, WordPress, 15 Apr 2019; Web, https://wp.me/p17q0e-9yQ

