When Observation Gives Way to Data-Centric-Only Science, We All Lose

Is the study which you are imperiously and gleefully foisting upon me a meta-analysis? Or is it a muta-analysis? It behooves the ethical skeptic to be able to discern the difference. Data alone, does not a science make. Meta-analysis studies bear risks inherent in the study methodology which do not exist in Level I and II direct observation science studies. To ignore these risks is pseudoscience.
Are we relying too much upon meta-analysis, and imparting too much gravitas to such study approaches on a large scale? The answer is yes, absolutely. Data is the management task of the technician. Even more, an excellent magician can blur the lines between data and method, introducing muta-science into the mix of contended data rigor. It is amazing to me that a study method which does not allow for suitable replication or peer review, and which contains 21 elements of risk in series, is somehow regarded as the pinnacle of scientific rigor. How did we arrive at such a delusion? Obviously someone of celebrity merit said it, and it was repeated from then on.
In contrast, observation remains the heartfelt journey of the scientist. It affords him or her the less common ability to detect bullshit-all-dressed-up-as-rigor.

This morning an excellent blog by The Neuroskeptic wafted across my Twitter screen. One which rekindled my interest in a topic upon which I have expounded with my science and engineering teams at various times over the years. The Neuroskeptic pointed out a migration of nomenclature and terminology utilized inside medical science paper titles, primarily over the last 50 years. The graph to the right is extracted from the March 19th Discover Magazine Online blog article by The Neuroskeptic, addressing this issue.¹ In this blog, he points out the information-technology-driven, yet nonetheless peculiar, migration from the term observation to the term data inside published medical paper titles from 1915 to 2015. As you might infer from the graphic to the right, beginning with the advent of the IBM 1400-series common use mainframe in 1961,² until now, there has been an asymptotic decline in the use of observation as the basis of medical paper contention, in favor of data manipulation approaches. A certainly understandable trend. What I contend herein is not that the rise in data-analyses is an inappropriate trend inside science; rather that its employment in lieu of observation, and not as its complement, or its employment as the gold standard of scientific analysis, both bear pitfalls. Warning flags that science may be trending in directions rendering it vulnerable to manipulation and agenda. And while this graphic only pertains to paper titles, I think The Neuroskeptic has tapped into an appropriate way in which to introduce and elucidate this issue.

Observation is the first step in the scientific method. Otherwise, how does one even know what question to ask? One of the central tenets of Ethical Skepticism involves the skillful understanding of the difference between the role of observation and its influence in the assimilation of data and crafting of intelligence. Data alone, does not a science make. It is only when we assemble data after a journey of observation, for the purpose of crafting intelligence, and in answering a question born of necessity, that the process of science is undertaken. One does not simply begin science with data and a question. A meta-analysis can be rigorous; however, to presume that because one has conducted a meta-analysis one has therefore performed rigorous science, is a logical fallacy. It is a game for the dilettante, in which they impress each other and the scientifically illiterate.

One thing I learned as the CEO of an information technology and business intelligence firm is… Anything can be crafted from data, in the eyes of the inexpert technician observer. You simply stack the data relationships in the right fashion, ask questions in the right fashion and sequence, and juxtapose the results at the right time.

The key of data science is to be able to detect when you run the risk of deceiving yourself, and need to bring in direct material in order to cull and craft such data. In order to accomplish this, one must undertake observations, or one must employ the hard-won wisdom of an observer's life journey. The strength of the data sets from which one draws has little to do with the strength of argument one claims, unless one possesses the journeyman expertise to interpret such data and its basis of origin. One must be able to skillfully produce intelligence in order to answer a question, not simply analyze data. In order to do that, one cannot be simply a data engineer with a general familiarity of the subject at hand. Nor can one be a scientist who bears little expertise in the methods and tools entailed in data analysis.

Why do I call it intelligence? Because in actual Security and Intelligence work, one finds that data by itself is useless and, most of the time, misleading. Almost any message can be spun from data. Only field work and a knowledgeable observer can distinguish data, and allow it to be placed into sets of useful query-based information: intelligence. Modern Skepticism, if it ever knew this, has certainly forgotten it. The advent of mass storage, of relational database structures enabling query-by-example capability, and of the readiness by which we can now assemble data at our fingertips, while mimicking the practices of intelligence, does not stand in adequate substitution for them.

Moreover, a key outcome of these data technology advantages is the increased employment inside of science of what is called (in particular in medical science) the meta-analysis. A meta-analysis is a ‘study of studies,’ an approach which seeks to bring to coherence a measure of consensus over a specific question of science, inside a complicated field of alternative study methods and potentially conflicting conclusions. The question always remains: is such a drive to consensus representation premature, unfounded, unfriendly to replication, or too heavily risk-laden to serve as a basis of claim? I contend that meta-analysis is fatally vulnerable to all such risks.

Meta-Analysis

/science : data science : third level study/ : a post hoc study which does not itself directly observe nor test, rather employs statistical procedures upon, or combines for power, analyses of the same study design type or species. Procedures crafted to substantiate a claim based upon interrogation and integration of the results of a pooled group of first and second level studies, abstracts or case studies. When objective analyses cannot be conducted, or the study set consists of studies of different species, this is also often called a systematic review.

See Study Design Indexed to Mode of Inference Strength and Risk
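
To make concrete what such a pooling actually computes, below is a minimal sketch of the fixed-effect (inverse-variance) weighting step common to meta-analytic methods. The effect estimates and standard errors are purely illustrative assumptions; real meta-analyses layer heterogeneity testing and random-effects modeling atop this core. Note how far removed even this core sits from any direct observation.

```python
import math

# A minimal sketch of the pooling step at the heart of a fixed-effect
# meta-analysis: an inverse-variance weighted average of per-study
# effect estimates. All numbers are illustrative assumptions.
studies = [  # (effect estimate, standard error)
    (0.30, 0.15),
    (0.10, 0.08),
    (0.25, 0.20),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
# Note what never enters this calculation: the observations, inclusion
# decisions and study designs which produced each (effect, se) pair.
```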

“A meta-analysis is considered the most rigorous of approaches to experimental study. Meta-analysis is a quantitative, formal, epidemiological study design used to systematically assess previous research studies to derive conclusions about that body of research. The examination of variability or heterogeneity in study results is also a critical outcome [of the meta-analysis].”³

~ A. B. Haidich, Hippokratia, 2010.

Potential Benefits of Meta-Analysis

  1.  It can potentially access the entire body of research around a question.
  2.  A more precise estimate of the effect of treatment or risk factor for disease, or other outcomes, than any individual study contributing to the pooled analysis.³
  3.  A consolidated and quantitative review of a large, and often complex, sometimes apparently conflicting, body of literature.³
  4.  The ability to contrast varied outcomes on a single question, as addressed by multiple parties and varying approaches to study.
  5.  The ability to gauge the sensitivity of various peripheral issues surrounding a scientific question, based upon its frequency of relevancy inside the pooled analysis group.
  6.  May be conducted from a clinical data distance, by a person with less skin in the game, therefore potentially presenting less bias and lower cost.

In general, because of such potential benefits, the meta-analysis is considered the pinnacle of medical scientific study rigor. Accordingly, use of the study approach is skyrocketing. The number of publications employing meta-analysis over time, through 2012 (results from a PubMed search using the text “meta-analysis”), is depicted in the graphic above right.‡ It has skyrocketed beyond reason, and exhibits the increasing popularity of such a bypass to Level I and II direct observational and expert study. It represents an alarming and increasing reliance upon a data-centric-only approach to science, one which leverages only our increased technological prowess at handling data, and not an overall increase in method coherence or knowledge of subject.

But are there weaknesses entailed in a data-centric-only approach to science? Achilles' heels which only appear once one is able to delve into and understand the observation set which composes the foundational basis of the pooled first and second level studies inside the analysis? And is there sufficient time, resource and money to effect replication and peer review of such a monumental study? It is not impressive to me that the meta-analysis technician understands the question to which the meta-analysis pertains. Instead, I am impressed when the meta-analysis is founded upon a study which proves both that the question being asked is the next logical calculus or reductive critical path question in the scientific method, and that the question has been vetted to be such, as agreed by the study authors, inside the conclusions of the pooled studies themselves. This, sadly, is almost never the case in a meta-analysis. In general, a study which is further removed from the observational basis from which it is derived bears a greater risk of unseen error, and of flaw in the logical derivations which can be used for conclusion. This is according to several separate laws:

Filbert’s Law

/philosophy : science : analysis : data : risk/ : to find a result use a small sample population, to hide a result use a large one. More accurately expressed as the law of diminishing information return. Increasing the amount of data brought into an analysis does not necessarily serve to improve the salience, precision nor accuracy of the analysis, yet comes at the cost of stacking risk in the signal. The further away one gets from direct observation, and the more one gets into ‘the data’ only, the higher is the compounded stack of risk in inference; while simultaneously the chance of contribution from bias is also greater.
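
A minimal sketch of the ‘hide a result’ half of this law, under the assumption of a small susceptible subgroup pooled into a large indifferent population; the counts and effect size are illustrative only:

```python
import random
import statistics

random.seed(2)

# A real effect (+1.0 shift) exists in a small susceptible subgroup, but
# pooling it into a large indifferent population dilutes the signal toward
# the noise floor. All figures are illustrative assumptions.
susceptible = [random.gauss(1.0, 1.0) for _ in range(200)]
indifferent = [random.gauss(0.0, 1.0) for _ in range(19_800)]

print("subgroup mean effect :", round(statistics.mean(susceptible), 2))
pooled = susceptible + indifferent
print("pooled mean 'effect' :", round(statistics.mean(pooled), 2))
# Expected pooled mean ~ 200 * 1.0 / 20_000 = 0.01 - a real signal hidden
# by the very size of the sample brought to bear upon it.
```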

Simpson’s Paradox

/philosophy : science : analysis : data : risk/ : a trend appearing in different groups of data can be manipulated to disappear or reverse (see Effect Inversion) when these groups are combined.
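
The classic numerical illustration (the oft-cited kidney-stone treatment counts) can be reproduced in a few lines: the treatment wins within each severity stratum, yet loses once the strata are pooled, because treatment assignment is confounded with severity.

```python
# Simpson's paradox on the classic kidney-stone counts: success rates
# favor the treatment within each stratum, yet reverse when pooled.
groups = {
    # stratum: (treated successes, treated total, control successes, control total)
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

pooled = [0, 0, 0, 0]
for stratum, counts in groups.items():
    ts, tn, cs, cn = counts
    print(f"{stratum:>6}: treated {ts/tn:.1%} vs control {cs/cn:.1%}")
    pooled = [a + b for a, b in zip(pooled, counts)]

ts, tn, cs, cn = pooled
print(f"pooled: treated {ts/tn:.1%} vs control {cs/cn:.1%}  <- trend reverses")
```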

Univariate Error

/philosophy : science : error : method/ : a procedural error (not a ‘fallacy’) wherein one is misled by the phenomenon whereby two multivariate distributions may overlap along any one variable, yet be cleanly separable, or have the relationship disappear, when one examines the whole relational or configuration space in its entirety.
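
A minimal synthetic sketch of this error: two populations whose values overlap almost entirely on either single variable, yet separate cleanly in the joint configuration space.

```python
import random

random.seed(0)

# Two synthetic populations which overlap almost entirely on either single
# variable, yet separate cleanly in the joint (x, y) configuration space.
pop_a = [(x, x + random.gauss(0, 0.2))
         for x in (random.uniform(-3, 3) for _ in range(500))]
pop_b = [(x, -x + random.gauss(0, 0.2))
         for x in (random.uniform(-3, 3) for _ in range(500))]

for axis, idx in (("x", 0), ("y", 1)):
    a = sorted(p[idx] for p in pop_a)
    b = sorted(p[idx] for p in pop_b)
    print(f"{axis} range A: ({a[0]:+.1f}, {a[-1]:+.1f})  B: ({b[0]:+.1f}, {b[-1]:+.1f})")

# A univariate test on x or y alone sees two indistinguishable blobs; the
# joint rule sign(x*y) recovers the structure almost perfectly.
correct = sum(x * y > 0 for x, y in pop_a) + sum(x * y < 0 for x, y in pop_b)
print(f"joint-space rule: {correct / 1000:.1%} classified correctly")
```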

utile absentia

/philosophy : science : bias : method : converting silence into data/ : a study which observes false absences of data, or creates artificial absence noise through improper study design, and which further assumes such error to represent verified negative or positive observations. A study containing field or set data in which there exists a risk that absences in measurement data will be caused by external factors which artificially serve to make the evidence absent, through failure of detection/collection/retention of that data. The absences of data, rather than being filtered out of the analysis, are fallaciously presumed to constitute bona fide observations of negatives. This is improper study design, which will often serve to produce an inversion effect (curative effect) in such a study’s final results. Similar to torfuscation.
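
A minimal synthetic sketch of the mechanism, assuming a uniform true outcome rate and illustrative detection probabilities: coding every failure to detect as a verified negative manufactures a protective ‘inversion effect’ out of nothing but silence.

```python
import random

random.seed(1)

N = 10_000
TRUE_RATE = 0.30          # the outcome occurs at the same rate in both cohorts

def observed_rate(p_detect):
    """Rate recorded when every failure to detect is coded as a negative."""
    hits = 0
    for _ in range(N):
        outcome = random.random() < TRUE_RATE
        detected = random.random() < p_detect
        hits += outcome and detected   # utile absentia: silence becomes 'no'
    return hits / N

# The exposed cohort is followed up poorly; the unexposed cohort, well.
# Detection probabilities are illustrative assumptions.
print("true rate (both cohorts):", TRUE_RATE)
print("observed, exposed   (40% detection):", observed_rate(0.40))
print("observed, unexposed (90% detection):", observed_rate(0.90))
# The exposure now appears protective - an artifact 'inversion effect'
# manufactured entirely from absences of data.
```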

As well, there is the instance when an abstract offers a background summary or history of the topic’s material argument as part of its introductory premise, and thereafter mentions that its research supports one argument or position, yet fails to define the inference or critical path which served to precipitate that ‘support’, or even worse, tenders research about a related but non-critical aspect of the research. Like pretending to offer ‘clinical research’ supporting opinions about capitalism inside a study of the methods employed by bank tellers: it only sounds related. In this case one has converted an absence into a positive, a formal error called utile absentia. This sleight-of-hand allows an opinion article to masquerade as a ‘research study’. It allows one to step into the realm of tendering an apparent epistemological result, which is really nothing more than a ‘they said/I say’ editorial with a scientific analysis bolted onto it, and which may or may not present any bearing whatsoever upon the subject at hand.

Most abstract surveyors do not know the difference, and most study leads cannot detect when this has occurred.

If one is to ask a broad final consensus question of science from a series of 1,000 studies, many of which neither asked such a question nor resulted in a question-related set of data, one runs the risk of becoming fouled in the deceptive nature of questioneering and muta-science.

Observing the Candle

My Advanced Placement Physics teacher in high school, early in my senior year, pulled me aside and, in a different approach to that of my equation-learning fellow students, assigned me the task of ‘observing a candle.’ I was a bit chagrined by this seemingly mundane task, its burden appearing to me to be somewhat punitive in comparison to the excitement of lighting paper on fire by means of equation estimates of Joule’s First Law. Nonetheless my physics instructor smiled and said “TES, I want you to sit quietly, relax your mind, and breathe steadily. Then I want you to cite 250 observations of this candle. Not for me, but for you. Lit and unlit, 250 observations.” My mind immediately swept through the salient observations which compose all that an impatient young mind needs to know.

  1.  It is a candle.
  2.  It is key-lime yellow.
  3.  I can light it.
  4.  I can lick my finger and move it back and forth through the flame for fun.

OK, done.

“No, I want you to observe harder…” He smiled and left me with my candle, a pen and 10 sheets of notebook paper. “Crap,” I thought. “Sigh…” Okay, so I will observe every minute detail of this candle, and every nuance of that detail, in order to get to my 250 observation count. I breathed, I relaxed and cleared my mind, and stared at the candle until well after my classmates had left the physics lab.

  1. The flame is in a sinusoidal oscillating rhythm.
  2. The average rhythm is .6 seconds.
  3. The rhythm possesses a variation in time from .9 seconds to .2 seconds.
  4. The rhythm moves from .9 seconds to .2 seconds in progression.
  5. The flame appears as if aspirating or breathing.
  6. There is yellow in the flame.
  7. There is blue in the flame.
  8. There is orange in the flame.
  9. The blue increases and decreases with the aspirating sinusoidal pattern…

My instructor did not want me memorizing formulas in order to pass tests; he knew that the other students were going to get senior administrative jobs following rules at big companies or in government offices. He wanted me, instead, to become a scientist. Science is about the ability to observe sufficiently; it is not the administration of data and formulas, examining them for familiar patterns so as to make life easy.

And so on it went. My best observations came as a direct result of my curiosity about what was happening inside this flame. How the candle was made, and the complex interaction between the wax, the ether which combusted versus the paraffin which remained. Suddenly I realized what a wick was for in the first place. I began to formulate ideas as to how to make this device even better, with role plays drifting through my naturally distracted mind. By the time I arrived at 250 observations, I had found that I was not yet done. I knew more about a candle, than I thought possible. I needed to fill in some blanks with some reference technical information, but I understood candle-dom.

A mother who cares for and loves her encephalitic, disabled or autistic child, from its first moments kicking in her womb to the most recent day she is spat upon by ill-meaning ‘science communicators’ pretending to represent the medical science community… she is the observer. Everyone else is a data and formula poser. She knows the science; they only know the script. They wave a single academic meta-analysis in the air as if a Bible, yet possess an empty set of intimate knowledge of the questions entailed, the subtle nature of the subject, its risk, or even the right question to ask under the scientific method.

Data is the management task of the technician. Stand alone data can serve to deceive even more easily than it can underpin analyses which enlighten.

Observation is the heartfelt journey of the true scientist.

This is why, even inside subjects towards which I instinctively react in declaring bunk, I bristle at ‘skeptics’ who do not go into the field and make observations. I do not believe in astrology. But if you have not immersed yourself in it, then do not make claims to me regarding ‘evidence’ and science. I am not impressed by your ability to recite a talking sheet. This is a charade worse than the astrology itself, no matter how likely it is that what you think is indeed correct. If you have not sat quietly in ‘haunted’ houses for nights on end, as part of your life’s journey, don’t pretend to tell me all about the reality of ghosts. If you have not chased a Mach VIII object in your F-14 and filmed it on the gun camera and pulled the tape on the fire control radar, don’t pretend to tell me all about UFOs. And if you have spent sizeable amounts of your ‘skeptic’ time debunking such subjects, and then wander over and purport to tell me all about the evidence behind medical conformity which you would like to enforce upon me, I am going to call you a ‘fucking idiot.’

Don’t be deluded by your databases, formulas and ‘evidence’ crafted by those just like you. It is nothing but counter-bunk, to an ethical skeptic.

Simply having run a statistical analysis on candles can deceive one into believing that one knows even the first thing about candles at all. This is the principle behind what I call Muta-Science. Or in any ethical world, pseudoscience. Bad method, even more questionable results, advertised as science of the ‘highest rigor.’

Fatal Pitfalls of a Data-Centric Meta-Analysis Only Approach to Science

This principal fallacy, the lack of observation in science, or its sacrifice at the hands of the ‘data scientist’ (a scientist who bears little or no qualifying reductive expertise in the subject at hand), introduces the faulty form of science which is often conflated with a rigorous and focused meta-analysis. A muta-analysis changes, by means of sleight-of-hand which cannot easily be isolated, the question being asked, the underlying data itself, the contentions of past science, and the methods of science, by skipping around all but one of the steps of the scientific method.

“In general, post hoc analysis should be deemed exploratory and not conclusive. In the best-case scenario, by revealing the magnitude of effect sizes associated with prior research, meta-analysis can suggest how future studies might be best designed to maximize their individual power.”

     ~ T. Greco, Meta-analysis: Pitfalls and Hints‡

In other words, meta-analyses are not meant to provide a boast of consensus, nor a completion of the scientific method. It is ironic indeed that where we deny the need to assemble data and intelligence at the beginning of the scientific method, we bless such action as ‘the gold standard’ when used inappropriately in lieu of, or to artificially force, consensus at the end of the scientific method.

Muta-Analysis (Pseudoscience)

/science : data science : pseudoscience : spin : questioneering/ : the most unreliable of scientific studies. Often a badly developed meta-analysis, which cannot be easily replicated or peer reviewed, contains a high degree of unacknowledged risk, or was executed based upon a poor study plan. An appeal to authority based upon faulty statistical knowledge development processes; processes which alter or do not employ the full scientific methodology, in favor of a premature claim to consensus or rigor implied by the popularity of a statistical study type. A method which does not directly observe, nor directly test, rather employs statistical procedures to answer a scientific question which is faulty in its inclusion criteria, poorly selected or asked, agenda-bearing, or only peripherally addressed.

The Probability of Failure is High and Unacknowledged: Muta-Analysis Employed Simply as a Data-Centric Appeal to Authority

Uncertainty Imparted from Source Study Material Practices and Lack of Observational Basis

  1. Lack of qualified individual data, or poor study design, results in such a large domain of material so as to dilute, or render-below-p, important signal data.
  2. Lack of qualified individual data, or poor study design, opens access to such a complex domain of material so as to conceal important focused signal data inside the noise generated.
  3. Data obtained from study summaries, rather than original data – serves to surreptitiously mislead.
  4. Studies which corroborate or correspond to current avenues of science tend to have robustly informative titles to catch attention, while studies which have spotted a counter-signal tend to title their study precisely to the observed signal effect and not the broader topic, and will therefore show up less often as dissenting or countermanding material in a study search.
  5. Large pool study populations challenge the literature search’s ability to understand each study adequately for inclusion criteria.
  6. Does not draw knowledge from the literature, commentary, peer review commentary or expert editorial/caution around a subject, otherwise available from scientists and experts.
  7. When used to impart inferences from studies, pertaining to peripheral topics, which the Level I and II study authors never intended, nor to which they would agree.
  8. Can be used to impart a premature consensus conclusion from the authors of its pooled studies, which they never addressed nor intended to issue.
  9. The mistaken perception can be held that increases in data or precision result in increases in study quality, which in turn result in increases in accuracy.
  10. Meta-analysis makes it possible to look at events that were too rare in the original studies to show a statistically significant difference. However, analyzing rare events represents a problem, because small changes in data can determine important changes in the results, and this instability can be exaggerated by the use of relative measures of effect instead of absolute ones (see the sketch after this list).‡
  11. There is resistance from authors to allow ready access to their own dataset containing individual patient data, because of a variety of proprietary reasons and concerns over misinterpretation or liability.‡
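
Regarding point 10 above, a short sketch shows why rare events destabilize pooled results: moving a single event can double the relative risk while the absolute risk difference barely moves. The counts are illustrative assumptions.

```python
# Why rare events destabilize pooled results: moving a single event doubles
# the relative risk while the absolute risk difference barely moves.
# Counts are illustrative assumptions.

def measures(events_t, n_t, events_c, n_c):
    rr = (events_t / n_t) / (events_c / n_c)    # relative risk
    ard = events_t / n_t - events_c / n_c       # absolute risk difference
    return rr, ard

for events_t in (1, 2):
    rr, ard = measures(events_t, 10_000, 1, 10_000)
    print(f"{events_t} vs 1 event per 10,000: RR = {rr:.1f}, ARD = {ard:+.4%}")
# 1 vs 1 -> RR 1.0; 2 vs 1 -> RR 2.0 (a 'doubling of risk') on an absolute
# difference of one event in ten thousand.
```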

Study Scale and Cost Detractor Risks

  1. A meta-analysis is costly in work content and resources, and difficult to replicate or review. Therefore it can be tempting to take its results as consensus finished science (appeal to authority by cost or effort) when such a claim is not warranted or is premature.
  2. Peer review is difficult to impossible to execute, because reviewing the data at hand requires specific technical (not science) expertise, effort, cost and domain visibility.
  3. An expensive and work-content-heavy study which risks a statistical outcome of underwhelming proportion, or one lacking heterogeneity, may be tempted to revise the study or question once principal results are in hand, in order to resolve a weaker, but more monumental, outcome for peer review and publication.
  4. The very potential of wasted money, introduces an intrinsic conflict of interest, which does not exist in incremental Level I and II studies.
  5. Database queries are run by low cost analysts and research assistants, as a necessity of cost. These non-experts may inadvertently tamper with the available set of input data by means of the criteria employed in a search, its exclusion and inclusion conditions or the inability to know when a suitably representative sample has been extracted from a domain with which the assistant has no familiarity.

Misalignment Between Expertise Demand and Task Assignments Imparting Uncertainty and Risk

  1. Can be executed by academic students and research assistants, at arm's length from the field, or by data technicians who do not fully grasp the field or question at hand, because the lead scientists are not familiar with the database/query technology employed in the meta-analysis.
  2. Executed by data technicians who are not skilled at science or observation, nor at the process of developing a logical critical reduction path.
  3. Literature searches performed by semi-qualified, data technician or research assistant parties serve to bias the formulation of question and bias the results.
  4. Technical tasks inside the study do not allow senior research or observation professionals ease of visibility and access into the data practices and intermediate results. No litmus testing is easily allowed midstream. All of which serves to weaken the overall strength and circumspection of the study. The results are the results, and who has the expertise to say they are wrong?
  5. Most abstract surveyors are not aware when they have misinterpreted an abstract or created a false support/dissent observation for the sampled issue, and most study leads cannot detect when this has occurred.

Study Design & Execution Risks

  1.  Subtle failures to adequately define the question studied serve to bias the results, through faulty inclusion criteria.
  2.  Electronic-only database searches artificially skew the study pool, and induce bias through excluding studies absent any stated exclusion criterion.
  3.  Inclusion criteria crafted to unfairly represent the objections of the dissenting body of science inside a given question.
  4.  Inclusion criteria posed to answer a different question than the ones posed in the first and second level pool studies from which it draws.
  5.  Development of a conclusion from Level I and II studies which provide for little or no heterogeneity in conclusion base.
  6.  Quality scores in study design may exclude more salient studies with lower quality scores in favor of less salient studies which have high quality scores, thereby both imbuing bias into the selection criteria while at the same time advertising a high quality selection rating.
  7. Sensitivity analyses must presume a correlative conjecture – yet do not prove that relationship at the same time. The danger is that these relationships will be granted the same gravitas as the primary claim. “Correlation does not prove causality, unless it is a side observation inside a meta-analysis.”
  8. Both data sourcing and arm’s length Cochrane-style Reviews come from contributors and groups based all around the world with the majority of the work carried out online. Enormous bias in data interpretation can be imputed through such a process.

Abuse of Study Intent Gravitas and Implications

  1. In the face of sufficient risk, unknowns or detractors, a meta-analysis can be employed as a defensive boast, to provide a smoke screen of rigor which cannot be otherwise claimed in Level I or II study.
  2. In the face of conflicting results, a meta-analysis can be utilized more as an involuntary ‘vote’ by scientists or popularity contest fake-measure for deriving artificially crafted ‘consensus.’
  3. Can be used, because of its ‘most rigorous form of study’ POTENTIAL – as an appeal to authority.
  4. The question arises as to why, in a field such as medicine where direct observations are so readily available, we constantly value science which involves NO observation at all. This renders the study methodology vulnerable to market forces which wish to bypass the liability imparted by means of direct observation.
  5. Cochrane Collaboration approaches may cause undue credibility to be granted biased studies from vested/conflicted interests through otherwise respected systematic review channels.

Note: None of the above risks exists in a Level I or II study, which fails more from errors in measurement, alternative diligence and conclusive shortcuts, not from risks imparted by the nature of the study methodology itself. This is the unacknowledged risk taken on by the person making a claim via meta-analysis.

High Risk  +  Used as a Claim to Gold Standard Rigor or Consensus  =  High Probability of Abuse

Remember that risk is not defined as the ‘probability of failure,’ rather ‘a cumulation of the number of risk bearing elements in series, which could potentially serve to undermine a process or result.’† In this regard, a meta-analysis bears more risk than does a Level I or II study, because of the complexity and inability to replicate the logical calculus, or the scale of effort involved. Twelve high-probability tasks targeting accomplishment of a goal result in one low-probability outcome of success. Twenty-one risk points (ten of the above points are parallel risks or potential abuses of employment, not intrinsic series method risks) afford an even less probable avenue to success, approaching a probability of failure of 1.00.† Figure 1 shows the cumulative series effect of such methodology risk, where P(f) is the probability of failure, and its approach to a 1.00 asymptote.

Figure 1: Highly populated probability of failure, series†

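A minimal sketch of the series arithmetic behind Figure 1, assuming independent risk elements; the per-element success rates of 0.95 and 0.90 are illustrative assumptions, not measured values:

```python
# The series-risk arithmetic sketched in Figure 1, assuming independent
# risk elements; the 0.95 and 0.90 per-element success rates are
# illustrative assumptions.

def p_failure(n_elements, per_element_success):
    """P(f) for n independent risk elements in series."""
    return 1 - per_element_success ** n_elements

for q in (0.95, 0.90):
    print(f"per-element success {q:.2f}: "
          f"P(f) at 12 elements = {p_failure(12, q):.2f}, "
          f"at 21 elements = {p_failure(21, q):.2f}")
# 0.95 per element -> P(f) = 0.46 at 12 elements, 0.66 at 21
# 0.90 per element -> P(f) = 0.72 at 12 elements, 0.89 at 21 ... approaching 1.00
```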

So, with the ‘gold standard of rigor’ claim with regard to meta-analyses comes the knowledge that we are purchasing such a claim to authority at the cost of an enormous amount of unacknowledged but adopted risk. To wholesale allow for the establishment of consensus evidence on the basis of solely meta-study approaches is scientific foolishness and self-deception.

It is amazing to me that a study method which does not allow for replication nor suitable peer review, and which contains 21 elements of risk in series, is somehow regarded as the pinnacle of scientific rigor.

A meta-analysis CAN be the gold standard of study rigor and verity. But more importantly, we introduce error into science when we grant this appeal-to-authority status to studies which fail to address their own propensity to misidentify, misconstrue, mislead, and misinform science and the public.

epoché vanguards gnosis


¹  The Neuroskeptic; Discover Magazine Online: From “Observations” to “Data”: The Changing Language of Science | March 19, 2016 10:32 am; http://blogs.discovermagazine.com/neuroskeptic/2016/03/19/from-observations-to-data/#more-7513.

²  Computer History Museum: Timeline of Computer History; http://www.computerhistory.org/timeline/computers/.

³  Haidich, A. B. (2010). Meta-analysis in medical research. Hippokratia, 14(Suppl 1), 29–37; http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049418/.

†  Bin Suo, Yongsheng Cheng, Jun Li; Calculation of Failure Probability of Series and Parallel Systems for Imprecise Probability; http://www.mecs-press.org/ijem/ijem-v2-n2/IJEM-V2-N2-12.pdf.

‡  Greco, T., Zangrillo, A., Biondi-Zoccai, G., & Landoni, G. (2013). Meta-analysis: pitfalls and hints. Heart, Lung and Vessels, 5(4), 219–225.
