The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

Ethical Skepticism – Part 8 – The Watchers Must Also Be Watched

One of the tenets of Ethical Skepticism is “Monitor those who do the monitoring.” A confluence of three pitfalls derives from a monitoring process which has gone awry. In-group biases tend to reinforce, in the mind of the watchers, the need for their quality entity (external entity skepticism in lieu of science), and the watchers may lose the ability to recognize a quality outcome – becoming the source of error themselves. The net result, many times, is an unbound combination of unaccountability and coalescence of power in the authority who watches. This stands as a god-proxy: a mistake wherein the network may value itself above product or topic, and become a regularly self-justifying, error-generating mechanism.
Clubs fail to ensure quality. Ethical Skepticism is the very absence of club quality.

Quality is not an Add-On

In classic quality control theory, there exist five principal approaches to improving and sustaining the quality of process and delivery. They revolve around the ethos of designing elegant procedure, being smart, and treating people in an ethical manner. Accountability imposed by outsiders is rarely effective; it stands only as a cathartic and futile gesture on the part of someone looking to profit from the process, not share in its success. Shortfalls in this regard are what produce human and systemic error. Error does not stem primarily from an absence of error monitoring; rather, it stems from a bad assumption, bad training, bad process …and sometimes (many times), bad monitoring itself. The key elements entailed in designing a process of quality, in order, are †

I.  Craft process(es) based upon clarity and value regarding human, training, system and their symbiosis

II.  Interweave self-checking mechanisms which highlight and correct error as an elegant aspect of each step

III.  Right-Pace productivity expectations to enhance quality, not to produce as fast or as cheaply as possible

IV.  Inform those who are stakeholders, and reward those who are critical, in achieving and sustaining quality delivery – Punishment and social derision are ineffective at producing sustained quality, or even quality at all.

V.  Monitor the mechanisms which monitor the process/quality.

Skepticism as Quality

In this same manner, (Ethical) skepticism is a quality mindset one maintains while doing actual science. It is not an add-on which decides, judges, derides, intimidates, concludes or provisionally stacks externally to or in lieu of science. This latter approach is demonstrably and timelessly ineffective.

What my businesses have found over the years is that, if you do the first four things right, then the majority of error will be generated inside element V. In other words, your goal is to craft a process effective enough, from a quality standpoint, that the monitoring process itself becomes the weakest link in the chain. As a young executive, the first time my organization achieved this state, it surprised me. From then on, I understood.

Treating people ethically is the key to quality – you do not punish quality deliverers and reward external parties; this is anathema to a sound approach to establishing quality. This, however, is the practice of Social Skepticism.

In real ethical business and engineered process, you inform stakeholders (those directly impacted) and you reward those who deliver quality. Unconcerned parties do not get a voice – no matter how many buzzwords they know.

This lens into the principle of quality elicits a key tenet of Ethical Skepticism: that of watching the watchers. Systems are systems and humans are humans. Once established, they tend to erect mechanisms which serve to defend the existence of the system or human organization itself. Just as the old bootleg networks of the Prohibition era simply became drug networks once Prohibition was repealed, any self-justifying network (one in which the value incorrectly resides in the network itself and not the product) will find targets which serve to reinforce the justification for its existence. It was the network, after all, which was important – not the drug it was supplying.

With this in mind, several current pitfalls intersect to produce the current reality we observe with regard to Social Skepticism:

A.  The value, in the mind of the member, is incorrectly shifted from the product or topic into the Organization itself.

B.  The watchers or Organization themselves may be unconnected to the issue, fail to recognize success, and become the locus where the majority of the error is then generated.

C. The Organization errantly begins to see quality as an external process of authority, derision and punishment – this always fails.

A or B or C, or any intersection thereof: the watcher network may value itself above product, lose the ability to imbue a quality outcome, and become a regularly self-justifying, error-generating mechanism of its own.

This is the condition (A or B or C, or any intersection thereof) we find ourselves in today: fake skepticism run amok, wherein its participants reside in a state of such epistemic commitment and in-group bias that they cannot observe the ineffective and many times destructive quality role they have played inside the public’s understanding of science and skepticism. This is the condition wherein a god proxy has arisen and is now exercising power.

The watchers are abusing the public and are not being held to account themselves. They are only producing errant outcomes, and quality somehow never seems to arrive. An excellent example may be found inside this blog by Vixen Valentino, wherein, as an astute observer of process error, she has identified the hypocrisy of appeal-to-motive accusations carelessly foisted by this self-justifying watcher organization.¹ This is not how science is done, and not how skepticism is done. It introduces another form of informal fallacy for our consideration: qualitas clava error.

Qualitas Clava Error

/philosophy : fallacy : demarcation of skepticism and pseudo-skepticism/ : club quality error. The presumption on the part of role-playing or celebrity-power-seeking social skeptics that their club or its power is important in ensuring the quality of science and scientific understanding on the part of the broader population. The presumption that external club popularity and authority, lock-step club allegiance and presumptive stacks of probable knowledge will serve to produce valid or quality outcomes inside scientific, rational or critical thought processes. The pretense of encouraging skepticism, while at the same time promoting conclusions. Such thought fails in light of time-proven quality improvement practices.

Those who truly value the outcomes of science, those who truly seek to develop knowledge and alleviate suffering – must be ever vigilant to watch for those who are simply using science as a battering ram to build their ego, money, politics and celebrity. All for the supposed benefit of an increase in quality which never seems to come; all at the cost of understanding, and the sustaining of human suffering.

There is no club inside Ethical Skepticism. There should not be a club, as Ethical Skepticism is the very absence of club. Nor does teaching people how to think in an ethically skeptical manner constitute a qualitas clava error – an ethical skeptic encourages dissent by means of originality of thought and hard field research, not simply the parroting of the provisional knowledge and one-liners held by him or his cronies. Ethical Skepticism is a process of personal choice regarding knowledge. It is an allegiance to preparing the mind to conduct science; a respect for quality knowledge improvement and the subject at hand, above all else.

epoché vanguards gnosis

†  There are numerous references which I can cite with regard to quality and process design – however, these five principles stem from my own decades of experience. They overlap 100% with established industry wisdom, but this version is a crafting of my own, employed through 30 years of creating effective and industry leading businesses and processes. The focus of this blog is not to provide a dissertation on quality control, rather highlight this tenet of Ethical Skepticism. However, if you seek some academic backing and foundational resource on systemic quality, some excellent reading can be found here:

Oakland, John S.; Total Quality Management (Fourth Edition); Routledge, 2014; ISBN-13: 978-0415635493.

Peters, T.J., Waterman, R. H.; In Search of Excellence; Harper Business, 2006; ISBN-10: 0-06-054878-9.

Hadley, M.E., Levine, J.E.; Endocrinology; Pearson-Prentice Hall, 2007; ISBN-0-13-187606-6.

¹  Many thanks to Vixen for highlighting to me this very important aspect of Ethical Skepticism, one which I had long forgotten to address.

June 13, 2016 | What is Ethical Skepticism

When Observation Gives Way to Data-Centric Only Science We All Lose

Is the study which you are imperiously and gleefully foisting upon me a meta-analysis? Or is it muta-analysis? It behooves the ethical skeptic to be able to discern the difference. Data alone, does not a science make. Meta-analysis studies bear risks inherent in the study methodology which do not exist in Level I and II direct observation science studies. To ignore these risks is pseudoscience.
Are we relying too much upon meta-analysis, and imparting too much gravitas to such study approaches on a large scale? The answer is yes, absolutely. Data is the management task of the technician. Even more, an excellent magician can blur the lines between data and method, introducing muta-science into the mix of contended data rigor. It is amazing to me that a study method which does not allow for suitable replication or peer review, and which contains 21 elements of risk in series, is somehow regarded as the pinnacle of scientific rigor. How did we arrive at such a delusion? Obviously someone of celebrity merit said it, and it was repeated from then on.
In contrast, observation remains the heartfelt journey of the scientist. It affords him or her the less common ability to detect bullshit-all-dressed-up-as-rigor.

This morning an excellent blog by The Neuroskeptic wafted across my Twitter screen – one which rekindled my interest in a topic upon which I have expounded with my science and engineering teams at various times over the years. The Neuroskeptic pointed out a migration of nomenclature and terminology utilized inside medical science paper titles over primarily the last 50 years. The graph to the right is extracted from the March 19th Discover Magazine Online blog article by The Neuroskeptic, addressing this issue.¹ In this blog, he points out the – information technology driven, yet nonetheless peculiar – migration from the term observation to the term data inside published medical paper titles from 1915 to 2015. As you might infer from the graphic, beginning with the advent of the IBM 1400 common-use mainframe in 1961,² until now, there has been an asymptotic decline in the use of observation as the basis of medical paper contention, in favor of data manipulation approaches. A certainly understandable trend. What I contend herein is not that the rise in data analyses is an inappropriate trend inside science; rather that its employment in lieu of observation – and not as its complement – or its employment as the gold standard of scientific analysis, both bear pitfalls. Warning flags that science may be trending in directions leaving it vulnerable to manipulation and agenda. And while this graphic only pertains to paper titles, I think The Neuroskeptic has tapped into an appropriate way in which to introduce and elucidate this issue.

Observation – it is the first step in the scientific method. Otherwise, how does one even know what question to ask? One of the central tenets of Ethical Skepticism involves the skillful understanding of the difference between the role of observation and its influence in the assimilation of data and crafting of intelligence. Data alone, does not a science make. It is only when we assemble data after a journey of observation, in the purpose of crafting intelligence, and in answering a question born of necessity, that the process of science is undertaken. One does not simply begin science with data and a question. A meta-analysis can be rigorous; however, to presume that because one has conducted a meta-analysis, one has therefore performed rigorous science, is a logical fallacy. It is a game for the dilettante – in which they impress each other and the scientifically illiterate.

One thing I learned as the CEO of an information technology and business intelligence firm is… Anything can be crafted from data, in the eyes of the inexpert technician observer. You simply stack the data relationships in the right fashion, ask questions in the right fashion and sequence and juxtapose the results at the right time.
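The “stacking” of data relationships the author describes has a textbook statistical instance: Simpson’s paradox. A minimal sketch, using the classic kidney-stone treatment figures (Charig et al., 1986) rather than anything from this blog – the same table supports opposite headlines depending on how it is stacked:

```python
# Classic Simpson's paradox figures (Charig et al., kidney-stone study):
# (successes, total) per treatment and stone size.
data = {
    "A": {"small": (81, 87),   "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Stacked by subgroup, treatment A wins in BOTH subgroups...
for size in ("small", "large"):
    ra, rb = rate(*data["A"][size]), rate(*data["B"][size])
    print(f"{size}: A={ra:.1%}  B={rb:.1%}")

# ...yet pooled over the same numbers, treatment B "wins".
pooled = {t: rate(sum(s for s, _ in d.values()), sum(n for _, n in d.values()))
          for t, d in data.items()}
print(f"pooled: A={pooled['A']:.1%}  B={pooled['B']:.1%}")
```

The reversal comes entirely from how the rows are juxtaposed – exactly the kind of question-sequencing and stacking the paragraph above warns of.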

The key of data science is to be able to detect when you run the risk of deceiving yourself and need to bring in direct material in order to cull and craft such data. In order to accomplish this, one must undertake observations, or one must employ the beneficial wisdom of an observer’s life journey. The strength of the data sets from which one draws has little to do with the strength of the argument one claims – unless one possesses the journeyman expertise to interpret such data and its basis of origin. One must be able to skillfully produce intelligence in order to answer a question, not simply analyze data. In order to do that, one cannot be simply a data engineer with a general familiarity of the subject at hand. Nor can one be a scientist who bears little expertise in the methods and tools entailed in data analysis.

Why do I call it intelligence? Because in actual Security and Intelligence work, one finds that data by itself is useless – or, most of the time, misleading. Almost any message can be spun from data. Only field work and a knowledgeable observer can distinguish data and allow it to be placed into sets of useful query-based information – intelligence. Modern Skepticism, if it ever knew this, has certainly forgotten it. The advent of mass storage, of relational database structures enabling query-by-example capability, and the readiness by which we can now assemble data at our fingertips, while mimicking the practices of intelligence, do not stand in adequate substitution for them.

Moreover, a key outcome of these data technology advantages is the increased employment inside of science of what is called (in particular in medical science) the meta-analysis. A meta-analysis is a ‘study of studies,’ an approach which seeks to bring to coherence a measure of consensus over a specific question of science inside a complicated field of alternative study methods and potentially conflicting conclusions. The question always remains: is such a drive to consensus representation too premature, unfounded, unfriendly to replication or heavily risk-laden to serve as a basis of claim? I contend that meta-analysis is fatally vulnerable to all such risks.


Meta-Analysis

/science : data science : third level study/ : a post hoc study which does not directly observe, nor directly test, rather employs statistical procedures – procedures crafted to substantiate a claim based upon integration of the results of a pooled group of first and second level studies which directly address the claim or related question under contention.

“A meta-analysis is considered the most rigorous of approaches to experimental study. Meta-analysis is a quantitative, formal, epidemiological study design used to systematically assess previous research studies to derive conclusions about that body of research. The examination of variability or heterogeneity in study results is also a critical outcome [of the meta-analysis].”³

~ A. B. Haidich, Hippokratia, 2010.

Potential Benefits of Meta-Analysis

  1.  It can potentially access the entire body of research around a question.
  2.  It yields a more precise estimate of the effect of a treatment, risk factor for disease, or other outcome than any individual study contributing to the pooled analysis.³
  3.  It provides a consolidated and quantitative review of a large, often complex, and sometimes apparently conflicting body of literature.³
  4.  It affords the ability to contrast varied outcomes addressing a single question, addressed by multiple parties and varying approaches to study.
  5.  It affords the ability to gauge the sensitivity of various peripheral issues surrounding a scientific question, based upon their frequency of relevancy inside the pooled analysis group.
  6.  It may be conducted from a clinical data distance, by a person with less skin in the game – potentially, therefore, presenting less bias and cost.
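The “more precise estimate” in item 2 is arithmetic, not magic: each study is weighted by the inverse of its variance and the estimates are pooled. A minimal fixed-effect sketch, using hypothetical effect sizes (log odds ratios) invented purely for illustration:

```python
import math

# Hypothetical (invented) per-study effect estimates and standard errors:
# (log odds ratio, SE)
studies = [(0.30, 0.20), (0.10, 0.15), (0.25, 0.30), (0.05, 0.10)]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
# The pooled SE comes out smaller than that of any contributing study --
# which is the entire source of the meta-analysis claim to precision.
```

Note that the arithmetic happily pools whatever it is fed; nothing in the formula checks whether the input studies asked the same question – which is precisely the author’s point below.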

In general, because of such potential benefits, the meta-analysis is considered the pinnacle of medical scientific study rigor. Accordingly, employment of the study approach is skyrocketing. The number of publications employing meta-analysis over time, through 2012 (results from a PubMed search using the text “meta-analysis”), is depicted in the graphic to the above right.‡ It exhibits the increasing popularity of such a bypass of Level I and II direct observational and expert study. It represents an alarming and increasing reliance upon data-centric-only approaches to science – approaches which leverage only our increased technological prowess at handling data, and not an overall increase in method coherence or knowledge of subject.

But are there weaknesses entailed in a data-centric-only approach to science? Achilles heels which only appear once one is able to delve into and understand the observation set composing the foundational basis of the pooled first and second level studies inside the analysis? And is there sufficient time, resource and money to effect replication and peer review of such a monumental study? It is not impressive to me that the meta-analysis technician understands the question to which the meta-analysis pertains. Instead, I am impressed when the meta-analysis is founded upon a study which proves both that the question being asked is the next logical calculus or reductive critical path question in the scientific method, and that the question has been vetted to be such, as agreed by the study authors, inside the conclusions of the pooled studies themselves. This, sadly, is almost never the case in a meta-analysis. In general, a study which is further removed from the observational basis from which it is derived bears a greater risk of unseen error and flaw in the logical derivations which can be used for conclusion. This is according to Filbert’s Law:

Filbert’s Law

/philosophy : science : analysis : data : risk/ : or the law of diminishing information return. Increasing the amount of data brought into an analysis does not necessarily serve to improve salience, precision or accuracy of the analysis, yet comes at the cost of stacking risk in veracity. The further away one gets from direct observation, and the more one gets into ‘the data’ only, the higher is the stack of chairs upon which one stands. To find a result use a small sample population, to hide a result use a large one.

If one is to ask a broad final consensus question of science from a series of 1,000 studies, many of which did not ask, nor result in, such a question-related set of data, one runs the risk of becoming fouled in the deceptive nature of questioneering and muta-science.
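Filbert’s “to hide a result use a large one” can be simulated. A hypothetical sketch (all numbers invented): a treatment that strongly helps only a small susceptible subgroup shows plainly in a focused, observation-guided sample, but washes out toward noise level in a large pooled population:

```python
import random
random.seed(1)

def outcome(treated, susceptible):
    """Hypothetical trial: treatment raises response from 30% to 70%,
    but only within the susceptible subgroup."""
    p = 0.30 + (0.40 if treated and susceptible else 0.0)
    return random.random() < p

def observed_effect(n, frac_susceptible):
    """Response-rate difference (treated minus untreated) in a sample where
    frac_susceptible of subjects belong to the susceptible subgroup."""
    diff = 0
    for treated in (True, False):
        hits = sum(outcome(treated, random.random() < frac_susceptible)
                   for _ in range(n))
        diff += hits if treated else -hits
    return diff / n

# Small sample drawn from the susceptible subgroup: effect is obvious (~0.40).
print(observed_effect(500, 1.0))
# Large pooled sample dominated by non-responders: effect dilutes (~0.08).
print(observed_effect(50_000, 0.2))
```

The signal was not lost by bad arithmetic; it was diluted by pooling over a domain the analyst never observed – the very mechanism Filbert’s Law names.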

Observing the Candle

My Advanced Placement Physics teacher in high school, early in my senior year, pulled me aside and, in a different approach from that of my equation-learning fellow students, assigned me the task of ‘observing a candle.’ I was a bit chagrined by this seemingly mundane task, its burden appearing to me to be somewhat punitive in comparison to the excitement of lighting paper on fire by means of equation estimates of Joule’s First Law. Nonetheless my physics instructor smiled and said “TES, I want you to sit quietly, relax your mind, and breathe steadily. Then I want you to cite 250 observations of this candle. Not for me, but for you. Lit and unlit, 250 observations.” My mind immediately swept through the salient observations which compose all that an impatient young mind needs to know.

  1.  It is a candle.
  2.  It is key-lime yellow.
  3.  I can light it.
  4.  I can lick my finger and move it back and forth through the flame for fun.

OK, done.

“No, I want you to observe harder…” He smiled and left me with my candle, a pen and 10 sheets of notebook paper. “Crap” I thought. “Sigh…” Okay, so I will observe every little minute detail of this candle and every nuance of that detail, in order to get to my 250 observation count. I breathed, I relaxed and cleared my mind, and stared at the candle well until after my classmates had left the physics lab.

  1.  The flame is in a sinusoidal oscillating rhythm.
  2.  The average rhythm is 0.6 seconds.
  3.  The rhythm possesses a variation in time from 0.9 seconds to 0.2 seconds.
  4.  The rhythm moves from 0.9 seconds to 0.2 seconds in progression.
  5.  The flame appears as if aspirating or breathing.
  6.  There is yellow in the flame.
  7.  There is blue in the flame.
  8.  There is orange in the flame.
  9.  The blue increases and decreases with the aspirating sinusoidal pattern…

And so on it went. My best observations came as a direct result of my curiosity about what was happening inside this flame. How the candle was made, and the complex interaction between the wax, the ether which combusted versus the paraffin which remained. Suddenly I realized what a wick was for in the first place. I began to formulate ideas as to how to make this device even better, with role plays drifting through my naturally distracted mind. By the time I arrived at 250 observations, I had found that I was not yet done. I knew more about a candle, than I thought possible. I needed to fill in some blanks with some reference technical information, but I understood candle-dom.

A mother who cares for and loves her encephalitic, disabled or autistic child, from its first moments kicking in her womb to the most recent day she is spat upon by ill-meaning ‘skeptics’ pretending to represent the medical science community – she is the observer. Everyone else is a data and formula poser. She knows the science; they only know the script. They wave a single academic meta-analysis in the air as if a Bible, yet possess an empty set of intimate knowledge of the questions entailed, the subtle nature of the subject, its risk, or even the right question to ask under the scientific method.

Data is the management task of the technician. Stand alone data can serve to deceive even more easily than it can underpin analyses which enlighten.

Observation is the heartfelt journey of the true scientist.

This is why, even inside subjects toward which I instinctively react by declaring bunk, I bristle at ‘skeptics’ who do not go into the field and make observations. I do not believe in astrology. But if you have not immersed yourself in it, then do not make claims to me regarding ‘evidence’ and science. I am not impressed by your ability to recite a talking sheet. This is a charade worse than the astrology itself, no matter how likely it is that what you think is indeed correct. If you have not sat quietly in ‘haunted’ houses for nights on end, as part of your life’s journey, don’t pretend to tell me all about the reality of ghosts. If you have not chased a Mach 8 object in your F-14, filmed it on the gun camera and pulled the tape on the fire control radar, don’t pretend to tell me all about UFOs. And if you have spent sizeable amounts of your ‘skeptic’ time debunking such subjects, and then wander over and purport to tell me all about the evidence behind medical conformity which you would like to enforce upon me, I am going to call you a ‘fucking idiot.’

Don’t be deluded by your databases, formulas and ‘evidence’ crafted by those just like you. It is nothing but counter-bunk, to an ethical skeptic.

Simply having run a statistical analysis on candles can deceive one into thinking that one knows even the first thing about candles at all. This is the principle behind what I call Muta-Science – or, in any ethical world, pseudoscience. Bad method, even more questionable results – advertised as science of the ‘highest rigor.’

Fatal Pitfalls of a Data-Centric Meta-Analysis Only Approach to Science

This principal fallacy – the lack of observation in science, or its sacrifice at the hands of the ‘data scientist’ (a scientist who bears little or no qualifying reductive expertise in the subject at hand) – introduces the faulty form of science which is often conflated with a rigorous and focused meta-analysis. A muta-analysis changes, by means of sleight-of-hand which cannot easily be isolated, the question being asked, the underlying data itself, the contentions of past science, and the methods of science – by skipping around all but one of the steps of the scientific method.

“In general, post hoc analysis should be deemed exploratory and not conclusive. In the best-case scenario, by revealing the magnitude of effect sizes associated with prior research, meta-analysis can suggest how future studies might be best designed to maximize their individual power.”

     ~T. Greco, Pitfalls of Meta-Analyses‡

In other words, meta-analyses are not meant to provide a boast of consensus, nor a completion of the scientific method. It is ironic indeed that where we deny the need to assemble data and intelligence at the beginning of the scientific method, we bless such action as ‘the gold standard’ when used inappropriately in lieu of – or to artificially force consensus as – the end of the scientific method.

Muta-Analysis (Pseudoscience)

/science : data science : pseudoscience : spin : questioneering/ : the most unreliable of scientific studies. Often a badly developed meta-analysis which cannot be easily replicated or peer reviewed, contains a high degree of unacknowledged risk, or was executed based upon a poor study plan. An appeal to authority based upon faulty statistical knowledge development processes – processes which alter or do not employ full scientific methodology, in favor of a premature claim to consensus or the rigor implied by the popularity of a statistical study type. A method which does not directly observe, nor directly test, rather employs statistical procedures to answer a scientific question which is agenda-bearing, only peripherally addressed, or selected and asked through faulty inclusion criteria.

The Probability of Failure is High and Unacknowledged: Muta-Analysis Employed Simply as a Data-Centric Appeal to Authority

Uncertainty Imparted from Source Study Material Practices and Lack of Observational Basis

  1. Lack of qualified individual data or poor study design results in such a large domain of material as to dilute, or render-below-p, important signal data.
  2. Lack of qualified individual data or poor study design causes access to such a complex domain of material so as to conceal important focused signal data inside the noise generated.
  3. Data obtained from study summaries, rather than original data – serves to surreptitiously mislead.
  4. Studies which corroborate or correspond to current avenues of science tend to have robustly informative titles to catch attention, while studies which have spotted a counter signal title their study precisely to the observed signal effect and not the broader topic – and will therefore tend to show up less often as dissenting or countermanding material in a study search.
  5. Large pool study populations challenge the literature search’s ability to understand each study adequately for inclusion criteria.
  6. Does not draw knowledge from the literature, commentary, peer review commentary or expert editorial/caution around a subject, otherwise available from scientists and experts.
  7. When used to impart inferences from studies, pertaining to peripheral topics, which the Level I and II study authors never intended, nor to which they would agree.
  8. Can be used to impart a premature consensus conclusion from the authors of its pooled studies, which they never addressed nor intended to issue.
  9. The mistaken perception can be held that increases in data or precision, result in increases in study quality, result in increases in accuracy.
  10. Meta-analysis makes it possible to look at events that were too rare in the original studies to show a statistically significant difference. However, analyzing rare events represents a problem because small changes in data can determine important changes in the results and this instability can be exaggerated by the use of relative measures of effect instead of absolute ones.‡
  11. There is resistance from authors to allow ready access to their own dataset containing individual patient data, because of a variety of proprietary reasons and concerns over misinterpretation or liability.‡
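Item 10’s instability is easy to make concrete. A toy calculation (counts invented for illustration): with rare events, re-classifying a single event flips the relative measure from “doubles the risk” to “halves it,” while the absolute difference stays negligible throughout:

```python
# Two hypothetical trial arms of 10,000 patients each; 2 adverse events vs 1.
a_events, b_events, n = 2, 1, 10_000

relative_risk = (a_events / n) / (b_events / n)    # 2.0 -- "doubles the risk"
absolute_diff = a_events / n - b_events / n        # 0.0001 -- 1 per 10,000

# Re-classify one ambiguous event from arm A to arm B:
shifted_rr = ((a_events - 1) / n) / ((b_events + 1) / n)  # 0.5 -- "halves it"

print(relative_risk, absolute_diff, shifted_rr)
```

One disputed chart entry swings the headline relative measure fourfold, which is why pooled rare-event meta-analyses reported in relative terms are so fragile.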

Study Scale and Cost Detractor Risks

  1. A meta-analysis is work content and resource costly and difficult to replicate or review. Therefore it can be tempting to take its results as consensus finished science – appeal to authority by cost or effort – when such a claim is not warranted or is premature.
  2. Peer review is difficult to impossible to execute because the qualifications to review the data at hand require specific technical, not science, expertise, effort, cost and domain visibility.
  3. An expensive and work-content-heavy study – which risks a statistical outcome of underwhelming proportion, or one lacking heterogeneity – may be tempted to revise the study or question once principal results are in hand, in order to craft a weaker, but more monumental, outcome for peer review and publication.
  4. The very potential of wasted money, introduces an intrinsic conflict of interest, which does not exist in incremental Level I and II studies.
  5. Database queries are run by low cost analysts and research assistants, as a necessity of cost. These non-experts may inadvertently tamper with the available set of input data by means of the criteria employed in a search, its exclusion and inclusion conditions or the inability to know when a suitably representative sample has been extracted from a domain with which the assistant has no familiarity.

Misalignment Between Expertise Demand and Task Assignments Imparting Uncertainty and Risk

  1. Can be executed by academic students and research assistants, at an arms-length from the field, or by data technicians who do not fully grasp the field or question at hand because the lead scientists are not familiar with the database/query technology employed in the meta-analysis.
  2. Executed by data technicians who are not skilled in science or observation, nor in the process of developing a logical critical reduction path.
  3. Literature searches performed by semi-qualified, data technician or research assistant parties serve to bias the formulation of question and bias the results.
  4. Technical tasks inside the study do not allow senior research or observation professionals ease of visibility and access into the data practices and intermediate results. No litmus testing is easily allowed midstream. All of which serve to weaken the overall strength and circumspection of the study. The results are the results, and who has the expertise to say they are wrong?

Study Design & Execution Risks

  1.  Subtle failures to adequately define the question studied serve to bias the results, through faulty inclusion criteria.
  2.  Electronic-only database searches artificially skew the study pool, inducing bias by excluding studies without a stated exclusion criterion
  3.  Inclusion criteria crafted to unfairly represent the objections of the dissenting body of science inside a given question.
  4.  Inclusion criteria posed to answer a different question than the questions posed in the first- and second-level pool studies from which the meta-analysis draws.
  5.  Development of a conclusion from Level I and II studies which provide for little or no heterogeneity in conclusion base.
  6.  Quality scores in study design may exclude more salient studies bearing lower quality scores, in favor of less salient studies which carry high quality scores, thereby imbuing bias into the selection criteria while at the same time advertising a high-quality selection rating.
  7. Sensitivity analyses must presume a correlative conjecture – yet do not prove that relationship at the same time. The danger is that these relationships will be granted the same gravitas as the primary claim. “Correlation does not prove causality, unless it is a side observation inside a meta-analysis.”
  8. Both data sourcing and arm’s-length Cochrane-style Reviews come from contributors and groups based all around the world, with the majority of the work carried out online. Enormous bias in data interpretation can be imputed through such a process.

Abuse of Study Intent Gravitas and Implications

  1. In the face of sufficient risk, unknowns or detractors, a meta-analysis can be employed as a defensive boast, providing a smokescreen of rigor which cannot otherwise be claimed in a Level I or II study.
  2. In the face of conflicting results, a meta-analysis can be utilized as an involuntary ‘vote’ by scientists, or a popularity-contest fake-measure for deriving an artificially crafted ‘consensus.’
  3. Can be used, because of its ‘most rigorous form of study’ POTENTIAL – as an appeal to authority.
  4. The question arises as to why, in a field such as medicine where direct observations are so readily available, we constantly value science which involves NO observation at all. This renders the study methodology vulnerable to market forces which wish to bypass the liability imparted by means of direct observation.
  5. Cochrane Collaboration approaches may cause undue credibility to be granted to biased studies from vested/conflicted interests, through otherwise respected systematic review channels.

Note: None of the above risks exist in Level I and II studies, which fail more from errors in measurement, alternative diligence and conclusive shortcuts, rather than from risks imparted by the nature of the study methodology itself. This is the unacknowledged risk, taken on by the person making a claim via meta-analysis.

High Risk  +  Used as a Claim to Gold Standard Rigor or Consensus  =  High Probability of Abuse

Remember that risk is not defined as the ‘probability of failure,’ rather as ‘a cumulation of the number of risk-bearing elements in series, which could potentially serve to undermine a process or result.’† In this regard, a meta-analysis bears more risk than does a Level I or II study, because of the complexity and inability to replicate the logical calculus or scale of effort involved. Twelve high-probability tasks targeting accomplishment of a goal result in one low-probability outcome of success. Twenty-one risk points (ten of the above 31 points are parallel risks or potential abuses of employment, not intrinsic series-method risks) afford an even less probable avenue to success, approaching a probability of failure of 1.00.† Figure 1 shows the cumulative series effect of such methodology risk, where P(f) is the probability of failure, and its approach to a 1.00 asymptote.

Figure 1: Highly populated probability of failure, series†

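The series-risk arithmetic behind Figure 1 can be sketched as follows; the per-element success probability of 0.9 is an illustrative assumption for demonstration, not a figure from the article:

```python
def p_failure_series(per_element_success: float, n_elements: int) -> float:
    """Probability that at least one of n independent series elements fails.

    A series process succeeds only if every element succeeds, so
    P(f) = 1 - p^n, which approaches 1.0 as n grows.
    """
    return 1.0 - per_element_success ** n_elements

# Illustrative values only: assume each series element succeeds 90% of the time.
print(round(p_failure_series(0.9, 12), 3))  # twelve high-probability tasks → 0.718
print(round(p_failure_series(0.9, 21), 3))  # twenty-one series risk points → 0.891
```

Under that assumption, twelve series tasks already fail about 72% of the time, and twenty-one fail about 89%, converging on the 1.00 asymptote the figure describes.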

So, with the ‘gold standard of rigor’ claim regarding meta-analyses comes the knowledge that we are purchasing such a claim to authority at the cost of an enormous amount of unacknowledged but adopted risk. To allow wholesale the establishment of consensus evidence, on the basis of meta-study approaches alone, is scientific foolishness and self-deception.

It is amazing to me that a study method, which does not allow for replication, nor suitable peer review, and which contains 21 elements of risk in series – is somehow regarded as the pinnacle of scientific rigor.

A meta-analysis CAN be the gold standard of study rigor and verity. But more importantly, we introduce error into science when we grant this appeal-to-authority status to studies which fail to address their own propensity to misidentify, misconstrue, mislead, and misinform science and the public.

epoché vanguards gnosis

¹  The Neuroskeptic; Discover Magazine Online: From “Observations” to “Data”: The Changing Language of Science | March 19, 2016 10:32 am;

²  Computer History Museum: Timeline of Computer History;

³  Haidich, A. B. (2010). Meta-analysis in medical research. Hippokratia, 14(Suppl 1), 29–37;

†  Bin Suo, Yong-sheng Cheng, Jun Li; Calculation of Failure Probability of Series and Parallel Systems for Imprecise Probability

‡  Greco, T., Zangrillo, A., Biondi-Zoccai, G., & Landoni, G. (2013). Meta-analysis: pitfalls and hints. Heart, Lung and Vessels, 5(4), 219–225.

March 19, 2016 | Posted in Agenda Propaganda, Argument Fallacies

No Promenade in the Savage Dance

Susurrations of wonder beckon the poetic heart to make its dwelling extraordinary

    Lacking promenade in savage writhe, muse’s dance it remains

The scoffing rejoinder of the immature cries ‘Ne’er such a thing!’

    ‘Intimate illusions are they, ignis fatuus in all manner of fool’s folly!’

Yet this whisper vexes us with its duplicitous offense

    Its absurdity she bears schism, mocking our surety, erasing the proud

One of Truth, One of Myopics, both stumbling stones

    Betrayed each by canard of pompous might, or adornment of elite proxy

Nonetheless its matron cries birth screams of an unknown pang, immune to excuse

    Siren’s Song of hydrogen beckoning in the vast darkness; bride of all that we are to be and become: ‘Find me, young mind; even so, …find me’

Trust not simply your own eyes, should they gaze upon a tree for such a time that they can now see nothing else

    Smile warmly and step past those drunk off the fruit of its intoxication

It is the heeding of the call, the humility of seeking without fear

    Only it is wonder born of such deliberate paradox and our will to tilt lances

…which proves we are

January 1, 2016 | Posted in Ethical Skepticism
