As important as the mode of inference one employs in a scientific study is the design of the study itself. Before one can begin to reduce and analyze a body of observations, the ethical scientist must first select the study type and design that will afford the greatest probative potential. Not all studies are equal in their bootstrap or inferential strength.
The intricacies of this process present the poseur an opportunity to game the outcomes of science through study design, study type, and PICO vulnerabilities: tactics which can serve to produce outcomes furthering the obfuscating, political, social, or religious causes of their sponsors.
There are several ways to put on the appearance of conducting serious science, yet still effect outcomes which maintain alignment with the agency of one’s funders, sponsors, mentors or controlling authorities. Recent ethical evolution inside science has highlighted the need to understand that a researcher’s simply having calculated a rigorous p-value, applied an arrival distribution or bounded an estimate inside a confidence interval does not necessarily mean that they have developed a sound basis from which to draw any quality scientific inference.1 In similar philosophy, one can develop a study and yet completely mislead the scientific community as to the probative depth or nature of reality inside a given contention of science. A thousand studies bearing weak inductive inference can be rendered null by one sound deductive study. The key resides in the ethical skeptic’s ability to survey this domain of study strength and adeptly apply it to what is foisted as constituting science.
We are all familiar with the popular trick of falsely calling a ‘survey of study abstracts’, a meta-synthesis of best evidence, or an opinion piece summarizing a body of study from one person’s point of view, a ‘meta-analysis’. An authentic meta-analysis combines congruent study designs and bodies of equivalent data in order to improve the statistical power of the combined entailed analyses.2 The fake forms of meta-analysis achieve no such gravitas in strength. A meta-analysis is a secondary or filtered systematic review which bears leveraged strength only in the instance wherein randomized controlled trials or longitudinal studies of the same species are able to be combined in order to derive a higher statistical power than any single study can deliver independently. Every other flavor of ‘blending of study’ fails to accomplish such an objective. This casual blending presented in the faux flavors of meta-study may, and this is important, ironically serve to reduce the probative power of the systematic review itself. Nonetheless, you will find less-than-ethical scientists trying to push their opinion/summary articles upon the community as if they reflected, through convenient misnomer, this ‘most rigorous form of study design’. One can find an example of this within the study: Taylor, Swerdfeger, Eslick; An evidence-based meta-analysis of case-control and cohort studies; Elsevier, 2014.3
This sleight-of-hand treatment stands as merely one example of the games played within the agency-influenced domains of science. With regard to manipulating study design in order to effect a desired scientific outcome, there are several means of accomplishing this feat, most notably the following methods, which I collectively call torfuscation (Saxon for ‘hiding the dead body in the bog’). Torfuscation is an active form of Nelsonian inference which involves one or more species of study abuse:
1. asking an orphan question, one which is a non sequitur or which does not address the critical path of the scientific question at hand,
2. employing a less rigorous study type (lower rank on the Chart below) than ethically is warranted by the scientific question at hand – (aka, methodical deescalation),
3. employing an ineffective study design, and masking that error with rigorous academic statistical analysis of what is essentially garbage input,
4. selecting for a body of ‘reliable’ data to the exclusion of available and more probative data – (aka, streetlight effect), or conversely selecting for a ‘reliable’ database procedurally, which has a high probability of failure of detection or cataloging of such detection (see an example later in this article),
5. employing an ineffective secondary or filtered study design, spun as if it were a higher probative or bootstrap strength study, or
6. study constrained by a type of flawed methodical PICO-time analysis (wrong population, wrong/inequivalent timeframe between cohorts, changing context across the time period, wrong signal/indicator, etc. – see an example later in this article).
These abuses serve most often to weaken the probative potential of an avenue of research which could ostensibly produce an outcome threatening the study’s sponsors. I call this broad set of pseudo-scientific practices, torfuscation.
/philosophy : pseudoscience : study fraud : Saxon : ‘hide in the bog’/ : pseudoscience or obfuscation enacted through a Nelsonian knowledge masquerade of scientific protocol and study design. Inappropriate, manipulated or shallow study design crafted so as to obscure or avoid a targeted/disliked inference. A refined form of praedicate evidentia or utile absentia employed through using less rigorous or probative methods of study than are requisite under otherwise ethical science. Exploitation of study noise generated through first level ‘big data’ or agency-influenced ‘meta-synthesis’, as the ‘evidence’ that no further or deeper study is therefore warranted – and moreover that further research of the subject entailed is now socially embargoed.
Study design which exploits the weakness potential entailed inside the PICO-time Study Design Development Model4 (see Study to Inference Strength and Risk Chart below), through the manipulation of the study
P – patient, problem or population
I – intervention, indicator
C – comparison, control or comparator
O – outcome, or
time – time series
Such manipulation seeks to compromise the outcome or conclusion in terms of the study’s usage; more specifically: prevention, screening, diagnostic, treatment, quality of life, compassionate use, expanded access, superiority, non-inferiority and/or equivalence.
Meta-Garbage, Deescalation and PICO-time Manipulation Examples
Madsen, Hviid; A Population-Based Study of Measles, Mumps, and Rubella Vaccination and Autism, 2002.5
This first study constitutes one example of tampering with the PICO-time attributes of a research effort, wherein only medical-insurance-plan completed and diagnostic-cataloged records are employed (under the guise of being ‘reliable’) as the sample base for a retrospective observational cohort study’s ‘outcome’ data. Such data is highly likely to be incomplete or skewed in a non-probative or biased direction, under a condition of linear induction (a weaker form of inference) and utile absentia (a method of exclusion bias through furtive or detection-failure-laden data source selection). This study example is elicited in the chart on the right, constructed from data inside the referenced Madsen study above. If the diagnosis of a condition occurs on average at 5.5 years of age inside a study population of children, and the average slack time between diagnosis and first possible recording into a medical insurance plan database is 4 to 18 months, then constraining the time series involved in a study examining that data to 4.5 years is an act of incompetent or malicious study design. The study effectively screened out positive detection through inclusion criteria trickery, as depicted inside the chart I developed from its time-series constraints description. Only 23% of the signal population would ever have been detected, as shown in this ‘Nine Year Tracking Window’ chart. The study relied upon a constraint which essentially bound the model to the principle that ‘only quickly-detected and visceral cases count’. This is also known as criminal activity; it is no different from cooking the books as an accountant and pocketing the cash, except that in this case hundreds of millions of innocent children and families are harmed by the entailed fraud.
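The tracking-window arithmetic described above can be sketched in a few lines of simulation. This is a hedged illustration only: the 5.5-year mean diagnosis age, the 4-to-18-month recording lag, and the 4.5-year window come from the description above, but the distribution shapes (normal diagnosis age, uniform lag) are my own assumptions, so the resulting fraction will not exactly reproduce the chart’s 23% figure.

```python
import random

def detection_fraction(n=100_000, window_years=4.5, seed=42):
    """Fraction of simulated cases recorded inside the tracking window.
    The 5.5-year mean diagnosis age, 4-18 month recording lag, and
    4.5-year default window come from the text above; the distribution
    shapes are illustrative assumptions."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n):
        diagnosis_age = rng.gauss(5.5, 1.5)            # assumed spread about the 5.5-year mean
        recording_lag = rng.uniform(4 / 12, 18 / 12)   # 4-18 month lag, assumed uniform
        if diagnosis_age + recording_lag <= window_years:
            detected += 1
    return detected / n

print(f"fraction ever detectable inside the window: {detection_fraction():.2f}")
```

The point survives any reasonable choice of distribution: a window shorter than the typical diagnosis-plus-recording lag censors the majority of true cases out of the ‘outcome’ data before the analysis even begins.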
Interim Estimates of Vaccine Effectiveness of BNT162b2 and mRNA-1273 COVID-19 Vaccines in Preventing SARS-CoV-2 Infection Among Health Care Personnel, First Responders, and Other Essential and Frontline Workers — Eight U.S. Locations, December 2020–March 2021.6
In a similar PICO-time series manipulation, say the study team in the second example study above identifies a treatment for a disease, but compares Test and Control cohorts in evaluating the success of that treatment wherein the following time-series inequivalences apply:
- The Control is tracked for the entire study period, but the Test is tracked for only a subset thereof
- Testing is done continuously, every couple of days or essentially all day/every day (in a symptomatic protocol); therefore the Control is exposed to detection false positives to a greater degree than is the treatment-Test cohort
- The testing is conducted across a PICO-timeframe in which the malady is naturally in decline, an effect which serves only to benefit the Test cohort statistics, because they naturally draw observations nearer to the end of the study.
To wit, all three of these advantageous constraints are exhibited in the table below, derived from data extracted from the CDC study listed above as the second example of torfuscation. When the time-series cohorts are leveled by detection arrival probability and by the number of study days in which the cohorts were observed, suddenly the efficacy of the treatment in question disappears (see infections per available study day). Remember that billions upon billions of dollars, not to mention entire institutions and careers, are at stake regarding the outcome of the above study. Its result was guaranteed.
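The leveling operation described above amounts to dividing each cohort’s infection count by its observed person-study-days before comparing. The numbers below are hypothetical, not the CDC study’s actual table; they simply show how raw counts can flatter a cohort that was observed for far fewer study days.

```python
# All numbers below are hypothetical, for illustration only.

def infections_per_study_day(infections, persons, days_observed):
    """Crude incidence: infections per person-day of observation."""
    return infections / (persons * days_observed)

# Control tracked the full (hypothetical) 100-day period;
# Test cohort tracked for only a 6-day subset of it.
control_rate = infections_per_study_day(infections=160, persons=1000, days_observed=100)
test_rate = infections_per_study_day(infections=10, persons=1000, days_observed=6)

# Raw counts (160 vs 10) suggest dramatic efficacy; the leveled
# per-study-day rates are nearly identical.
print(f"control: {control_rate:.5f} per person-day, test: {test_rate:.5f} per person-day")
```

Any efficacy claim that survives only when observation time is left unequal between cohorts is an artifact of design, not of treatment.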
You will find both of these study-example tricks to be present in circumstances wherein a potential outcome is threatening to a study’s sponsors; political agents who hope to prove by shallow/linear inductive inference and exclusion criteria trickery that the subject can be embargoed or closed for discussion from the point of their study onward.
Moreover, a study may also be downgraded (lower on the chart below) and purposely forced to employ a lesser form of design probative strength (Levels 1 – 8 on the left side of the chart), precisely because its sponsors suspect the possibility of a valid risk they do not want broached or exposed. This is very similar to the downgrading in inference method we identified above, called methodical deescalation. Methodical deescalation is a common trick of professional pseudoscience wherein abduction is used in lieu of induction, or induction is used in lieu of deduction, when the latter (stronger) mode, type or form of inference was ethically demanded. One may also notice that studies employing the six torfuscation tricks we listed earlier are often held as proprietary in their formulation, concealed from the public or at-risk stakeholders during the critical study design phase. This lack of public accountability or input is purposeful. Such activity is akin to asking for forgiveness rather than permission, and can often constitute in reality what courts define as ‘malice and oppression’ committed in the name of science.7
Beware of studies supporting activity which serves to place a large stakeholder group at risk,
yet seek zero input from those stakeholders as to adequacy of study design.
This is also known as oppression.
The astute reader may also notice an irony here: the ‘meta-analysis’ decried earlier in this article cited, as its ‘best evidence study’ inside its systematic review, the very study just mentioned as an example of torfuscation. Meta-fraud providing fraud as its recitation basis. Well, at least the species of study are congruent. If you meta-study garbage, you will produce meta-garbage as well (see Secondary Study in the Chart below).
Be very wary of a science which constrains its body of study to the bottom of
the chart below or is quick to a claim of absence (modus absens) –
especially when higher or positive forms of study are available
but scientists are dis-incentivized to pursue them.
Study Design to Mode of Inference Strength and Risk
The following is The Ethical Skeptic’s chart indexing study design against mode of inference, strength and risk in torfuscation. It is a handy tool for helping spot torfuscation such as is employed in the three example types elicited above (and more). The study types are ranked from top to bottom in terms of Level in probative strength (1 – 8), and as well are arranged into Direct, Analytical and Descriptive study groupings by color. Torfuscation involves the selection of a study type with a probative power lower down on the chart, when a higher probative level of study was available and/or ethically warranted; as well as in tampering with the PICO-time risk elements (right side of chart under the yellow header) characteristic of each study type so as to weaken its overall ability to indicate a potential disliked outcome.
The Chart is followed up by a series of definitions for each study type listed. The myriad sources for this compiled set of industry material are listed at the end of this article; however, it should be noted that the sources cited did not agree with each other on the material/level, structure, or definitions of various study designs. Therefore modifications and selections were made as to the attributes of study, which allowed the entire set of alternatives/definitions to come into synchrony with each other – to fit like a puzzle with minimal overlap and confusion. So you will not find 100% of this chart replicated inside any single resource or textbook. (Note: my past lab experience has been mostly in non-randomized controlled factorial trial study, whose probative successes were fed into a predictive model and then confirmed by single mechanistic lab tests. I found this approach to be highly effective in my past professional work. But that lab protocol may not apply to other types of study challenge, and could be misleading if applied as a panacea. Hence the need for the chart below.)
Study Design Type Definitions
Experimental – A study which involves a direct physical test of the material or principal question being asked.
Mechanistic/Lab – A direct study which examines a physical attribute or mechanism inside a controlled closed environment, influencing a single input variable, while observing a single output variable – both related to that attribute or mechanism.
Randomized (Randomized Controlled Trial) – A study in which people are allocated at random (by chance alone) to receive one of several clinical interventions. One of these interventions is the standard of comparison or the ‘control’. The control may be a standard practice, a placebo (“sugar pill”), or no intervention at all.
Non-Randomized Controlled Trial – A study in which people are allocated by a discriminating factor (not bias), to receive one of several clinical interventions. One of these interventions is the standard of comparison or the ‘control’. The control may be a standard practice, a placebo (“sugar pill”), or no intervention at all.
Parallel – A type of controlled trial where two groups of treatments, A and B, are given so that one group receives only A while another group receives only B. Other names for this type of study include “between patient” and “non-crossover” studies.
Crossover – A longitudinal direct study in which subjects receive a sequence of different treatments (or exposures). In a randomized controlled trial with repeated measures design, the same measures are collected multiple times for each subject. A crossover trial has a repeated measures design in which each patient is assigned to a sequence of two or more treatments, of which one may either be a standard treatment or a placebo. Nearly all crossover controlled trial studies are designed to have balance, whereby all subjects receive the same number of treatments and participate for the same number of periods. In most crossover trials each subject receives all treatments, in a random order.
Factorial – A factorial study is an experiment whose design consists of two or more factors, each with discrete possible values or ‘levels’, and whose experimental units take on all possible combinations of these levels across all such factors. A full factorial design may also be called a fully-crossed design. Such an experiment allows the investigator to study the effect of each factor on the response variable or outcome, as well as the effects of interactions between factors on the response variable or outcome.
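As a concrete sketch of ‘fully-crossed’, the combinations in a full factorial design can be enumerated directly. The factors and their levels below are hypothetical, chosen only to illustrate the structure.

```python
from itertools import product

# Hypothetical factors and levels for a fully-crossed (full factorial) design.
factors = {
    "dose": ["low", "high"],
    "schedule": ["daily", "weekly"],
    "delivery": ["oral", "injection"],
}

# Every combination of every level appears exactly once: 2 x 2 x 2 = 8 runs.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

for run in runs:
    print(run)
```

Because all level combinations are present, the investigator can estimate each factor’s main effect and the interaction effects between factors from the same set of runs.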
Blind Trial – A trial or experiment in which information about the test is masked (kept hidden) from the participant (single blind) and/or the test administrator (double blind), to reduce or eliminate bias, until after the trial outcome is known.
Open Trial – A type of non-randomized controlled trial in which both the researchers and participants know which treatment is being administered.
Placebo-Control Trial – A study which blindly and randomly allocates similar patients either to a control group that receives a placebo or to an experimental test group. Thereby investigators can ensure that any possible placebo effect will be minimized in the final statistical analysis.
Interventional (Before and After/Interrupted Time Series/Historical Control) – A study in which observations are made before and after the implementation of an intervention, both in a group that receives the intervention and in a control group that does not; alternatively, a study that uses observations at multiple time points before and after an intervention (the ‘interruption’). The design attempts to detect whether the intervention has had an effect significantly greater than any underlying trend over time.
Adaptive Clinical Trial – A controlled trial that evaluates a medical device or treatment by observing participant outcomes (and possibly other measures, such as side-effects) along a prescribed schedule, and modifying parameters of the trial protocol in accord with those observations. The adaptation process generally continues throughout the trial, as prescribed in the trial protocol. Modifications may include dosage, sample size, drug undergoing trial, patient selection criteria or treatment mix. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. Importantly, the trial protocol is set before the trial begins; the protocol pre-specifies the adaptation schedule and processes.
Observational – Analytical
Cohort/Panel (Longitudinal) – A study in which a defined group of people (the cohort – a group of people who share a defining characteristic, typically those who experienced a common event in a selected period) is followed over time, to examine associations between different interventions received and subsequent outcomes.
Prospective – A cohort study which recruits participants before any intervention and follows them into the future.
Retrospective – A cohort study which identifies subjects from past records describing the interventions received and follows them from the time of those records.
Time-Series – A cohort study which identifies subjects from a particular segment in time following an intervention (which may have also occurred in a time series) and follows them during only the duration of that time segment. Relies upon robust intervention and subject tracking databases. For example, comparing lung health to pollution during a segment in time.
Cross-Sectional/Transverse/Prevalence – A study that collects information on interventions (past or present) and current health outcomes, i.e. restricted to health states, for a group of people at a particular point in time, to examine associations between the outcomes and exposure to interventions.
Case-Control – A study that compares people with a specific outcome of interest (‘cases’) with people from the same source population but without that outcome (‘controls’), to examine the association between the outcome and prior exposure (e.g. having an intervention). This design is particularly useful when the outcome is rare.
Nested Case-Control – A study wherein cases of a health outcome that occur in a defined cohort are identified and, for each, a specified number of matched controls is selected from among those in the cohort who have not developed the health outcome by the time of occurrence in the case. For many research questions, the nested case-control design potentially offers impressive reductions in costs and efforts of data collection and analysis compared with the full case-control or cohort approach, with relatively minor loss in statistical efficiency.
Community Survey – An observational study wherein a targeted cohort or panel is given a set of questions regarding both interventions and observed outcomes over the life, or over a defined time period, of the person, child or other close family member. These are often conducted in conjunction with another disciplined polling process (such as a census or general medical plan survey) so as to reduce statistical design bias or error.
Ecological (Correlational) – A study of risk-modifying factors on health or other outcomes based on populations defined either geographically or temporally. Both risk-modifying factors and outcomes are averaged or are linear regressed for the populations in each geographical or temporal unit and then compared using standard statistical methods.
Observational – Descriptive
Population – A study of a group of individuals taken from the general population who share a common characteristic, such as age, sex, or health condition. This group may be studied for different reasons, such as their response to a drug or risk of getting a disease.
Case Series – Observations are made on a series of specific individuals, usually all receiving the same intervention, before and after an intervention but with no control group.
Case Report – Observation is made on a specific individual, receiving an intervention, before and after an intervention but with no control group/person other than the general population.
Systematic Review/Objective Meta-Analysis – A method for systematically combining pertinent qualitative and quantitative data from several selected studies to develop a single conclusion that has greater statistical power. This conclusion is statistically stronger than the analysis of any single study, due to increased numbers of subjects, greater diversity among subjects, or accumulated effects and results. However, researchers must ensure that the quantitative and study design attributes of the contained studies all match, in order to retain and enhance the statistical power entailed. Mixing less rigorous or incongruent studies with more rigorous studies will only result in a meta-analysis which bears the statistical power of only a portion of the studies, or that of the least rigorous study type contained. The remaining general types of study follow, in decreasing order of rigor:
Interpretive/Abstract ‘Meta-Synthesis’ – A study which surveys the conclusion or abstract of a pool of studies in order to determine the study authors’ conclusions along a particular line of conjecture or deliberation. This may include a priori conclusions or author preferences disclosed inside the abstract of each study, which were not necessarily derived as an outcome of the study itself. This study may tally a ‘best evidence’ subset of studies within the overall survey group, which stand as superior in their representation of the conclusion, methodology undertaken or breadth in addressing the issue at hand.
Editorial/Expert Opinion – A summary article generally citing both scientific outcomes and opinion, issued by an expert within a given field, currently active and engaged in research inside that field. The article may or may not refer to specific examples of studies, which support an opinion that a consilience of evidence points in a given direction regarding an issue of deliberation. The author will typically delineate a circumstance of study outcome, consilience or consensus as separate from their personal professional opinion.
Critical Review/Skeptic Opinion – A self-identified skeptic or science enthusiast applies a priori thinking, with no ex ante accountability, in order to arrive at a conclusion. The reviewer may or may not cite a couple of examples or studies to back their conclusion.
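The statistical-power gain claimed for an authentic meta-analysis (see Systematic Review/Objective Meta-Analysis above) can be sketched with fixed-effect, inverse-variance pooling, one common pooling method. The effect sizes and standard errors below are hypothetical, used only to show why pooling congruent studies tightens the estimate.

```python
import math

def pool_fixed_effect(effects, std_errs):
    """Inverse-variance weighted mean effect and its pooled standard error."""
    weights = [1 / se ** 2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical congruent studies: similar effect sizes, differing precision.
effects = [0.30, 0.25, 0.40]
std_errs = [0.10, 0.12, 0.15]

pooled, pooled_se = pool_fixed_effect(effects, std_errs)
# The pooled SE is smaller than any single study's SE: that is the power gain.
print(f"pooled effect {pooled:.3f}, SE {pooled_se:.3f} (best single-study SE {min(std_errs)})")
```

Note that this gain only holds when the pooled studies measure the same effect under congruent designs; pooling incongruent studies, as the article argues, forfeits exactly this advantage.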
The Ethical Skeptic, “Torfuscation – Gaming Study Design to Effect an Outcome”; The Ethical Skeptic, WordPress, 15 Apr 2019; Web, https://wp.me/p17q0e-9yQ