Discerning Sound from Questionable Science Publication

Non-replicable meta-analyses published in tier I journals do not constitute the preponderance of good source material available to the more-than-casual researcher; the notion that they do stems from a recently manufactured myth on the part of social skepticism. Accordingly, the life-long researcher must learn techniques beyond the standard pablum pushed by social skeptics – discerning techniques which afford a superior ability to tell good science from bad, through more than simply shallow cheat sheets and publication social-ranking classifications.
The astute ethical skeptic is very much this life-long and in-depth researcher. For him or her, ten specific questions can serve to elucidate this difference inside that highly political, complicated and unfair playing field called science.

Recently, a question was posed to me by a colleague concerning the ability of everyday people to discern good scientific work from dubious efforts. A guide had been passed around inside her group, a guide which touted itself as a brief on 5 key steps inside a method to pinpoint questionable or risky advising publications. The author cautioned appropriately that “This method is not infallible and you must remain cautious, as pseudoscience may still dodge the test.” He failed of course to mention the obvious additional risk: that the method could serve to screen out science which either 1) is good but cannot possibly muster the credential, funding and backing to catch the attention of crowded major journals, or 2) is valid, yet is screened by power-wielding institutions which possess the resources and connections, as well as possible motive, to block research on targeted ideas. The article my friend’s group was circulating constituted nothing but a Pollyanna, wide-eyed and apple-pie view of the scientific publication process – one bereft of the scarred knuckles and squint-eyed wisdom requisite in discriminating human motivations and foibles.

There is much more to this business of vetting ideas than simply identifying the bad people and the bad subjects. More than simply crowning the conclusions of ‘never made an observation in my life’ meta-analyses as the new infallible standard of truth.

Scientific organizations are prone to the same levels of corruption, bias, greed, and desire to get something for as little input as possible, as is the rest of the population. Many, or hopefully even most, individual scientists buck this mold certainly, and are deserving of utmost respect. However, even their best altruism is checked by organizational practices which seek to ensure that those who crave power are dealt their more-than-ample share of fortune, fame and friar-hood. They will gladly sacrifice the best of science in this endeavor. And it is in this context that human wisdom demands we keep watch.

If you are a casual reader of science, say consuming three or four articles a month, then certainly the guidelines outlined by Ariel Poliandri below, in his blog entitled “A guide to detecting bogus scientific journals”, represent a suitable first course on the menu of publishing wisdom.¹ In fact, were I offered this as the basis of a graduate school paper, it would be appropriately and warmly received. But if this is all you had to offer the public after 20 years of hard-fought science, I would aver that you had wasted your career therein.

1 – Is the journal a well-established journal such as Nature, Science, Proceedings of the National Academy of Sciences, etc.?
2 – Check authors’ affiliations. Do they work in a respectable University? Or do they claim to work in University of Lala Land or no university at all?
3 – Check the Journal’s speciality and the article’s research topic. Are the people in the journal knowledgeable in the area the article deals with?
4 – Check the claims in the title and summary of the article. Are they reasonable for the journal publishing them?
5 – Do the claims at least make sense?

The above process suffers from a vulnerability in hailing only science developed under what is called a Türsteher Mechanism, or bouncer effect – a process producing a sticky but unwarranted prejudice against specific subjects. The astute researcher must ever be aware of the presence of this effect, an awareness which rules out the above 5 advisements as being sufficient.

Türsteher Mechanism

/philosophy : science : pseudoscience : peer review bias/ : the effect or presence of ‘bouncer mentality’ inside journal peer review. An acceptance for peer review which bears the following self-confirming bias flaws in process:

  1. Selection of a peer review body is inherently biased towards professionals whom the steering committee finds impressive,
  2. Selection of papers for review fits the same model as was employed to select the reviewing body,
  3. Selection of papers from non-core areas is very limited and is not informed by practitioners specializing in those areas, and
  4. Bears an inability to handle evidence that is not gathered in the format it understands (large-scale, hard-to-replicate, double-blind randomized clinical trials or meta-studies).

In such a process, the selection of initial papers is biased. Under this flawed process, the need for consensus results not simply in attrition of anything that cannot be agreed upon – but rather in a sticky bias against anything which has not successfully passed this unfair test in the past. An artificial and unfair creation of a pseudoscience results.

The above list by Mr. Poliandri represents simply a non-tenable way to go about vetting your study and resource material – one which ensures that only pluralistic ignorance influences your knowledge base. It is lazy – sure to be right and safe – but useless advisement to a true researcher. The problem with this list resides inside some very simple industry realities:

1.  ‘Well-established journal’ publication requires sponsorship from a major institution. Scientific American cites that 88% of scientists possess no such sponsorship, and this statistic has nothing to do with a scientific group’s relative depth in its subject field.² So this standard, while useful for the casual reader of science, is not suitable at all for one who spends a lifetime of depth inside a subject. This would include, for instance, a person studying factors impacting autism in their child, or persons researching the effect of various supplements on their health. Not to mention, of course, that the need to look beyond this small group of publications applies to scientists who spend a life committed to their subject as well.

One will never arrive at truth by tossing out 88% of scientific studies right off the bat.

2.  Most scientists do not work for major universities. Fewer than 15% of scientists ever get to participate in this sector even once in their career.² This again is a shade-of-gray replication of the overly stringent filtering bias recommended in point 1 above. I have employed over 100 scientists and engineers over the years, persons who have collectively produced groundbreaking studies. Almost none ever worked for a major university; perhaps 1 or 2 spent a year inside university-affiliated research institutes. Point 2 is simply a naive standard which can only result in filtering out everything with the exception of what one is looking for. One must understand that, in order to survive in academia, one must be incrementally brilliant and not what might be even remotely considered disruptively brash. Academics bask in the idea that their life’s work and prejudices have all panned out to come true. The problem with this King Wears No Clothes process is that it tends to stagnate science, and not provide the genesis of great discovery.

One will never arrive at truth by ignoring 85% of scientists, right off the bat.

3.  There are roles for both specialty journals and generalized journals. There is a reason for this, and it is not to promote ‘bogus pseudoscience’ as the blog author implies (note his context-framing statement in quotes above). A generalized journal maintains resource peers to whom it issues subject matter for review. It is not claiming peer evaluation to be its sole task. Larger journals can afford this, but not all journals can. Chalk this point up to naivete as well. Peer review requires field qualification; however, in general, journal publication does not necessarily. Sometimes they are one and the same, sometimes not. Again, if this is applied without wisdom, such naive discrimination can result in a process of personal filtering bias, and not stand as a suitable standard identifying acceptable science.

One will never arrive at truth by viewing science peer review as a sustainable revenue club. Club quality does not work.

4.  Check for the parallel nature of the question addressed in the article premise, methodology, results, title and conclusion. Article writers know all about the trick of simply reading abstracts and summaries. They know 98% of readers will only look this far, or will face the requisite $25 to gain access beyond the abstract. If the question addressed is not the same throughout, then there could be an issue. As well, check the expository or disclosure section of the study or article. If it consists, even in part, of a polemic focusing on the bad people, or the bad ideas, or the bad industry player – then the question addressed in the methodology may have come from bias in the first place. Note: blog writing constitutes this type of writing. A scientific study should be disciplined to the question at hand, be clear on any claims made, and include any preliminary disclosures which help premise, frame, constrain, or improve the predictive nature of the question. Blogs and articles do not have to do this; however, neither are they scientific studies. Know the difference.

Writers know the trick – that reviewers will only read the summary or abstract. The logical calculus of a study resides below this level. So authors err toward favoring established ideas in abstracts.

5.  Claims make sense with respect to the context in which they are issued and the evidence by which they are backed. Do NOT check to see if you believe the claims or whether they make some kind of ‘Occam’s Razor’ sense. This is a false standard of ‘I am the science’ pretense taught by false skepticism. Instead, understand what the article is saying and what it is not saying – and avoid judging the article based on whether it says something you happen to like or dislike. We often call this ‘sense’ – and incorrectly so. It is bias.

Applying personal brilliance to filter ideas, brilliance which you learned from only 12% of publication abstracts and 15% of scientists who played the game long enough – is called: gullibility.

It is not that the body of work vetted by such criteria is invalid; rather simply that to regard science as only this is short-sighted and bears fragility. Instead of these Pollyanna 5 guidelines, the ethical skeptic will choose to understand whether or not the study or article in question is based upon standards of what constitutes good Wittgenstein and Popper science. This type of study can be conducted by private labs or independent researchers too. One can transcend the Pollyanna 5 questions above by asking the ten simple questions regarding any material, outlined in the graphic at the top of this article. Epoché is exercised by keeping their answers in mind, without prejudice, as you choose to read onward. Solutions to problems come from all levels and all types of contributors. This understanding constitutes the essence of wise versus naive science.

“Popper holds that there is no unique methodology specific to science. Science, like virtually every other human, and indeed organic, activity, Popper believes, consists largely of problem-solving.”³

There are two types of people: those who wish to solve the problem at hand, and those who already had it solved – so it never was a problem for them to begin with, rather simply an avenue of club-agenda expression or profit/career creation.

Let’s be clear here: if you have earned tenure as an academic or journal reviewer, or a secure career position which pays you a guaranteed $112,000 a year from age 35 until the day you retire, this is the same as holding a bank account with $2,300,000 in it at age 35† – even net of the $200,000 you might have invested in school. You are a millionaire. So please do not advertise the idea that scientists are all doing this for the subject matter.

$2.3 million (or more in sponsorship) is sitting there waiting for you to claim it – and all you have to do is say the right things, in the right venues, for long enough.
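
As a rough check of this figure, here is a minimal sketch using the standard ordinary-annuity present-value formula with exactly the parameters footnote † supplies (456 monthly payments of $9,333 at 0.25% per period), and the $200,000 schooling offset mentioned above. It is an illustration of the arithmetic, not a financial model.

```python
# Sketch: present value of the footnoted salary stream (ordinary annuity, per footnote †).
payment = 9_333      # ~ $112,000 per year, paid monthly
rate = 0.0025        # 0.25% interest per payment period
periods = 456        # number of payments, from age 35 until retirement

# PV of an ordinary annuity with zero ending balance
pv = payment * (1 - (1 + rate) ** -periods) / rate
print(f"gross present value:        ${pv:,.0f}")            # ~ $2.54 million
print(f"net of $200,000 schooling:  ${pv - 200_000:,.0f}")  # ~ $2.3 million, matching the figure above
```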

This process of depending solely on tier I journals is an exercise in industry congratulationism. There has to be a better way to vet scientific study, …and there is. The following is all about telling which ilk of person is presenting an argument to you.

The Ten Questions Differentiating Good Science from Bad

Aside from examining a study’s methodology and logical calculus itself, the following ten questions are what I employ to guide me as to how much agenda and pretense has been inserted into its message or methodology. There are many species of contention; eight at the least if we take the combinations of the three bisected axes in the graph to the right, and twenty-four permutations if we take into account the sequence in which the logic is contended (using falsification to promote an idea versus promoting the idea that something ‘will be falsified under certain constraints’, etc.). In general, what I seek to examine is an assessment of how many ideas the author is seeking to refute or promote, with what type of study, and with what inductive or deductive approach. An author who attempts to dismiss too many competing ideas, via a predictive methodology supporting a surreptitiously promoted antithesis, which cannot possibly evaluate a critical theoretical mechanism – this type of study or article possesses a great likelihood of delivering bad science. Think about the celebrity skeptics you have read. How many competing ideas are they typically looking to discredit inside their material, and via one mechanism of denial (usually an apothegm and not a theoretical mechanism)? The pool comprises 768 items – many to draw from – and draw from this, they do.
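
A minimal sketch of where the ‘eight species’ figure comes from, assuming the three bisected axes are those labeled in questions III, IV and V below (falsify/predict, deny/promote, single idea/group of ideas). The 24 sequence permutations and the 768-item pool follow from the further ordering considerations described above and are not enumerated here.

```python
# Sketch: the eight basic species of contention, as combinations of three bisected axes.
from itertools import product

z_axis = ("falsify", "predict")              # question III
x_axis = ("deny", "promote")                 # question IV
y_axis = ("single idea", "group of ideas")   # question V

species = list(product(z_axis, x_axis, y_axis))
print(len(species))  # 8 = 2 x 2 x 2
for approach, intent, scope in species:
    print(f"{approach} / {intent} / {scope}")
```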

Let’s be clear here – a study can pass major journal peer review and possess acceptable procedural/analytical methodology, yet say or imply absolutely nothing for the most part, ultimately being abused (or abusing its own research in extrapolating its reach) to say things which the logical calculus involved would never support (see Dunning-Kruger Abuse). Such conditions do not mean that the study will be refused peer review. Peer reviewers rarely ever contend (if they disregard the ‘domain of application’ part of a study’s commentary):

“We reject this study because it could be abused in its interpretation by malicious stakeholders.” (See example here: http://www.medicaldaily.com/cancer-risks-eating-gmo-corn-glyphosate-vs-smoking-cigarettes-according-411617)

Just because a study is accepted for and passes peer review does not mean that all its extrapolations, exaggerations, implications or abuses are therefore true. You, as the reader, are the one who must apply the sniff test as to what the study is implying, saying or being abused to say. What helps a reader avoid this? Those same ten questions from above.

The ten questions I have found most useful in discerning good science from bad are formulated based upon the following Popperian four-element premise.³ All things being equal, better science is conducted in the case wherein

  • one idea is
  • denied through
  • falsification of its
  • critical theoretical mechanism.

If the author pulls this set of four things off successfully, and eschews promotion of ‘the answer’ (which is the congruent context of having disproved a myriad set of ideas), then the study stands as a challenge to the community and must be sought for replication (see question IX below). For the scientific community at large to ignore such a challenge is the genesis of (our pandemic) pluralistic ignorance.

For instance, in one of the materials research labs I managed, we were tasked by an investment fund and their presiding board to determine the compatibility of titanium with various lattice state effects analogous to iron. The problem, however, is that titanium is not like iron at all. It will not accept the same interstitial relationships with other small-atomic-radius elements that iron will (boron, carbon, oxygen, nitrogen). We could not pursue the question the way the board posed it: “Can you screw with titanium in exotic ways to make it more useful to high performance aircraft?” We first had to reduce the question into a series of salient, then sequitur, Bayesian reductions. The first statement to falsify was “Titanium maintains its vacancy characteristics at all boundary conditions along the gamma phase state.” Without an answer (falsification) to this single question, not one single other question related to titanium could be answered in any way, shape or form. Most skeptics do not grasp this type of critical path inside streams of logical calculus. This is an enormous source of confusion and social ignorance. Even top philosophers and celebrity skeptics fail this single greatest test of skepticism. And they are not held to account, because few people are the wiser – and the few who are wise to it keep quiet to avoid the jackboot ignorance enforced by the Cabal.

This introduces and opens up the more general question of what indeed, all things considered, makes for good, effective science. That question can be lensed through the ten useful questions below, applied in the same fashion as the titanium example case:

I. Has the study or article asked and addressed the 1. relevant, 2. salient, 3. sound and 4. critical path next question under the scientific method?

If it has accomplished this, it is already contending for tier I science, as only a minority of scientists understand how to pose reductive study in this way. A question can be relevant, but not salient to the question at hand – this is the most common trick of pseudoscience. The question can also be relevant and salient, yet be asked in incorrect sequence, so as to frame its results in a prejudicial light. If this diligence has not been done, then do not even proceed to questions II through X below. Throw the study in the waste can. Snopes is notorious for this type of chicanery. The material is rhetoric, targeting a victim group, idea or person.

If the answer to this is ‘No’ – Stop here and ignore the study. Use it as an example of how not to do science.

II. Did the study or article focus on utilization of a critical theoretical mechanism which it set out to evaluate for validity?

The litmus which differentiates a construct (an idea or framework of ideas) from a theory is that a theory contains a testable and critical theoretical mechanism. Was the critical theoretical mechanism identified and given a chance for peer input prior to its establishment? Or was it just assumed as valid by a small group, or one person? For instance, a ‘DNA study’ can examine three classes of DNA: mtDNA, autosomal DNA, or Y-DNA. If it is a study of morphology, yet examines Y-DNA only for example, then the study is a fraud. Y-DNA by itself tells one next to nothing about morphology or overall genetic makeup. This would be an example of an invalid (probably slipped by as an unchallenged assumption) critical test mechanism.

If the answer to this is ‘No’ – Regard the study or article as an opinion piece, or worse propaganda piece, and not of true scientific incremental value.
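
As a hypothetical illustration of this check, one can ask whether the chosen test mechanism is even capable of addressing the question posed; the article’s morphology-via-Y-DNA example fails it. The mapping below is a deliberately simplified assumption for illustration, not a genetics reference.

```python
# Sketch: does the chosen test mechanism bear on the question asked? (question II)
# The mapping is illustrative only, following the article's DNA example.
MECHANISM_ADDRESSES = {
    "mtDNA": {"maternal lineage"},
    "Y-DNA": {"paternal lineage"},
    "autosomal DNA": {"morphology", "ancestral admixture"},
}

def is_critical_mechanism(question_topic: str, mechanism: str) -> bool:
    """Return True only if the mechanism can actually evaluate the question posed."""
    return question_topic in MECHANISM_ADDRESSES.get(mechanism, set())

print(is_critical_mechanism("morphology", "Y-DNA"))          # False - invalid critical test mechanism
print(is_critical_mechanism("morphology", "autosomal DNA"))  # True
```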

III.  Did the study or article attempt to falsify this mechanism, or employ it to make predictions? (z-axis)

Karl Popper outlined that good science involves falsification of alternative ideas or the null hypothesis. However, given that 90% of science cannot be winnowed through falsification alone, it is generally recognized that a theory’s predictive ability can act as a suitable critical theoretical mechanism via which to examine and evaluate. Evolution was accepted through just such a process. In general, however, falsification is regarded as stronger science than successful prediction. A second question to ask is: did the study really falsify the mechanism being tested for, or did it merely suggest possible falsity? Watch for this trick of pseudoscience.

If the study or article sought to falsify a theoretical mechanism – keep reading with maximum focus. If the study used predictive measures – catalog it and look for future publishing on the matter.

IV.  Did the study or article attempt to deny specific idea(s), or did it seek to promote specific idea(s)? (x-axis)

Denial and promotion of ideas is not, standing alone, a discriminating facet inside this issue. What is significant here is how it interrelates with the other questions. In general, attempting to deny multiple ideas or promote a single idea are techniques regarded as less scientific than the approach of denying a single idea – especially if one is able to bring falsification evidence to bear on the critical question and theoretical mechanism. Did the study authors seem to have a commitment to certain jargon or prejudicial positions prior to the results being obtained? Also watch for the condition where a cherry-picked test mechanism may appear to be a single-item test, yet is employed to deny an entire series of ideas as a result. This is not actually a condition of single-idea examination, though it may appear to be so.

Simply keep the idea of promotion and denial in mind while you consider all other factors.

V.  Did the study affix its contentions on a single idea, or a group of ideas? (y-axis)

In general, incremental science and most of discovery science work better when a study focuses on one idea for evaluation and not a multiplicity of ideas. This minimizes extrapolation and special pleading loopholes or ignorance, both deleterious implications for a study. Prefer authors who study single ideas over authors who try to make evaluations upon multiple ideas at once. The latter task is not a wise undertaking even in the instance where special pleading can theoretically be minimized.

If your study author is attempting to tackle the job of denying multiple ideas all at once – then the methodical cynicism alarm should go off. Be very skeptical.

VI.  What percent of the material was allocated towards ideas, versus the more agenda-oriented topics of persons, events or groups?

If the article or study spends more than 10% of its Background material focused on persons, events or groups it disagrees with, throw the study in the trash. If any other section contains such material above 0%, then the study should be discarded as well. Eleanor Roosevelt is credited with the apothegm “Great minds discuss ideas; average minds discuss events; small minds discuss people.” Did the study make a big deal about its ‘accoutrements and processes of science’ in an attempt to portray the appearance of legitimacy? Did the study sponsors photograph themselves wearing face shields and lab coats and writing in notebooks? This is often pretense and promotion – beware.

Take your science only from great minds focusing on ideas and not events or persons.

As well, if the author broaches a significant amount of material which is related to, but irrelevant or non-salient to, the question at hand, you may be witnessing an obdurate, polemic or ingens vanitatum argument. Do not trust a study or article wherein the author appears to be demonstrating how much of an expert they are in the matter (through addressing related but irrelevant, non-salient or non-sequitur material). Such a display is beside the point, and you should be very skeptical of such publications.

VII. Did the author put an idea, prediction or construct at risk in their study?

Fake science promoters always stay inside well-established lines of social safety, so that they are 1) never found wrong, 2) do not bring the wrong type of attention to themselves (remember the $2.6+ million which is at stake here), and 3) can imply their personal authority inside their club as an opponent-inferred appeal in arguing. They always repeat the correct apothegm, and always come to the correct conclusion. Did the study sponsor come in contending that they ‘can do the study quickly’, followed by a low-cost and ‘simple’ result which conformed with a pre-selected answer? Don’t buy it.

Advancing science always involves some sort of risk. Do not consider those who choose paths of safety, familiarity and implied authority to possess any understanding of science.

VIII.  Which of the following was the study? (In order of increasing gravitas)

1.  Psychology or Motivation (Pseudo-Theory – Explains Everything)

2.  Meta-Data – Studies of Studies (Indirect Data Only – vulnerable to Simpson’s Paradox or Filtering/Interpretive Bias)

3.  Data – Cohort and Set Measures (Direct but still Data Only)

4.  Direct Measurement Observation (Direct Confirmation)

5.  Inductive Consilience Establishment (Preponderance of Evidence from Multiple Channels/Sources)

6.  Deductive Case Falsification (Smoking Gun)

All it takes in order to have a strong study is one solid falsifying observation. This is the same principle as is embodied inside the apothegm ‘It only takes one white crow to falsify the idea that all crows are black’.
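
One way to hold question VIII’s ordering in mind is as a simple ranked type. This is a minimal sketch; the identifier names are my paraphrases of the list above, not the author’s labels.

```python
# Sketch: study types ranked in order of increasing gravitas, per question VIII.
from enum import IntEnum

class StudyGravitas(IntEnum):
    PSYCHOLOGY_OR_MOTIVATION = 1   # pseudo-theory - explains everything
    META_DATA = 2                  # studies of studies, indirect data only
    DATA = 3                       # cohort and set measures, direct but data only
    DIRECT_MEASUREMENT = 4         # direct confirmation by observation
    INDUCTIVE_CONSILIENCE = 5      # preponderance of evidence from multiple channels
    DEDUCTIVE_FALSIFICATION = 6    # the smoking gun

# Higher value, higher gravitas: a falsifying observation outranks a meta-study.
print(StudyGravitas.DEDUCTIVE_FALSIFICATION > StudyGravitas.META_DATA)  # True
```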

IX.  When the only viable next salient and sequitur reductive step, post-study, is to replicate the results – then you know you have a strong argument inside that work.

X.  Big data and meta-analysis studies like to intimidate participants in the scientific method with the implicit taunt “I’m too big to replicate, bring consensus now.”
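
Pulling the ten questions together, here is a hypothetical screening sketch. The field names, and the collapsing of questions IV and V into a single flag, are my own shorthand for illustration; only the stop rules under questions I and II are taken directly from the text above.

```python
# Sketch: recording a verdict against the ten questions (I-X); illustrative only.
from dataclasses import dataclass

@dataclass
class StudyScreen:
    critical_path_question: bool      # I    - relevant, salient, sound, critical-path next question
    critical_mechanism: bool          # II   - testable critical theoretical mechanism
    sought_falsification: bool        # III  - falsify, rather than merely predict
    single_idea_addressed: bool       # IV/V - one idea denied, not many promoted/denied
    ideas_over_people: bool           # VI   - ideas, not persons/events/groups
    idea_put_at_risk: bool            # VII  - something could have been found wrong
    gravitas: int                     # VIII - 1 (lowest) to 6 (smoking gun)
    replication_is_next_step: bool    # IX   - replication is the only viable next step
    replicable_at_all: bool           # X    - not "too big to replicate"

    def verdict(self) -> str:
        if not self.critical_path_question:
            return "ignore - an example of how not to do science"      # stop rule, question I
        if not self.critical_mechanism:
            return "treat as opinion or propaganda, not science"       # stop rule, question II
        return "keep reading, holding the remaining answers in epoché"
```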

These questions, more than anything else, will allow the ethical skeptic to begin to grasp what is reliable science and what is questionable science – especially in the context where one can no longer afford to dwell inside only the lofty 5% of the highest-regarded publications, or can no longer stomach the shallow talking-point sheets of social skepticism, all of which serve only to ignore or give short shrift to the ideas to which one has dedicated a life of study.

epoché vanguards gnosis


¹  Poliandri, Ariel; “A guide to detecting bogus scientific journals”; Sci – Phy, May 12, 2013; http://sci-phy.com/detecting-bogus-scientific-journals/

²  Benderly, Beryl Lieff; “Does the US Produce Too Many Scientists?”; Scientific American, February 22, 2010; https://www.scientificamerican.com/article/does-the-us-produce-too-m/

³  Thornton, Stephen, “Karl Popper”, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/popper/>

†  Present Value of future cash flows with zero ending balance: 456 payments of $9,333 per month at .25% interest per payment period.