The Lyin’tific Method: The Ten Commandments of Fake Science

The earmarks of bad science are surreptitious in fabric, not easily discerned by the media and the public at large. Sadly, they are often not easily discerned by scientists themselves either. This is why we have ethical skepticism. Its purpose is not simply to examine ‘extraordinary claims’, but also to examine those claims which masquerade, hidden in plain sight, as if constituting ordinary, boring old ‘settled science’.

Science is a strategy, not a tactic. Beware of those who wear the costume of the tactic, as a pretense of its strategy.

Perhaps you do not want the answer to be known, or you desire a specific answer because of social pressure surrounding an issue, or you are tired of irrational hordes babbling some nonsense about your product ‘harming their family members’ (boo-hoo 😢). Maybe you want to tout the life-extending benefits of drinking alcohol, show how vaccines do not make profits, demonstrate very quickly that a pesticide is safe, or over-inflate death rates so that you can blame them on people you hate politically – or maybe you are just plain ol’ weary of the burdensome, pain-in-the-ass attributes of real science. Wherever your Procrustean aspiration may reside, this is the guidebook of best practices for you and your science organization: trendy and proven techniques which will allow your organization to get science back on your side, at a fraction of the cost and in a fraction of the time. 👍

We have managed to transfer religious belief into gullibility for whatever can masquerade as science.

~ Nassim Nicholas Taleb, Antifragile

When you have become indignant and reached your rational limit over privileged anti-science believers questioning your virtuous authority and endangering your industry profits (pseudo-necessity), well then it is high time to undertake the following procedure of activism. Crank up your science communicators, your skeptics, and your critical thinkers, and notify them to be at the ready… ready to cut-and-paste-plagiarize a whole new set of journalistic propaganda, ‘cuz here comes The Lyin’tific Method!

The Lyin’tific Method: The Ten Commandments of Fake Science

1. Select for Intimidation. Appoint an employee who is under financial or career duress to create a company formed solely to conduct this study under an appearance of impartiality, and then to go back and live again comfortably in their career or retirement. Hand them the problem definition, approach, study methodology and scope. Use lots of Bradley Effect-vulnerable interns (as data scientists) and persons trying to gain career exposure and impress. Visibly assail any dissent as being ‘anti-science’; the study lead will quickly grasp the implicit study goal and will execute all of this without question. Demonstrably censure or publicly berate a scientist who dissented on a previous study – allow the entire organization/world to see this. Make him the hate-symbol for your a priori cause.

2. Ask a Question First. Start by asking a ‘one-and-done’, non-critical-path, poorly framed, half-assed, sciencey-sounding question, representative of a very minor portion of the risk domain in question and bearing the most likely chance of obtaining a desired result – without any prior basis of observation, necessity, intelligence from stakeholders or background research. Stress that the scientific method begins with ‘asking a question’. Avoid peer or public input before and after approval of the study design. Never allow stakeholders at risk to help select or frame the core problem definition, nor to identify the data pulled. Never allow a party highly involved in making observations inside the domain (such as a parent, product user or farmer) to have input into the question being asked, nor into the study design itself. These entities do not understand science and have no business making inputs to PhDs.

3. Amass the Right Data. Never seek peer input at the beginning of the scientific process (especially on what data to assemble), only at the end. Gather a precipitously large amount of ‘reliable’ data, under a Streetlight Effect, which is highly removed from the data’s origin and stripped of any probative context – such as an administrative bureaucracy database. Screen out data from sources which introduce ‘unreliable’ inputs (such as may contain eyewitness, probative, falsifying, disadvantageous anecdotal or stakeholder-influenced data) in terms of the core question being asked. Gather more data to dilute a threatening signal, and less data to enhance a desired one. The number of records pulled is more important than any particular discriminating attribute entailed in the data. The data volume pulled should be perceptibly massive to laymen and the media. Ensure that the reliable source from which you draw data bears a risk that threatening observations will accidentally not be collected, through reporting, bureaucracy, process or catalog errors. Treat these absences of data as constituting negative observations (a toy illustration of this dilution trick follows the list below).

4. Compartmentalize. Address your data analysts and interns as ‘data scientists’, and your scientists who do not understand data analysis at all as the ‘study leads’. Ensure that those who do not understand the critical nature of the question being asked (the data scientists) are the only ones who can feed study results to people who exclusively do not grasp how to derive those results in the first place (the study leads). Establish a lexicon of buzzwords which allows those who do not fully understand what is going on (pretty much everyone) to survive in the organization. This is laundering information by means of the dichotomy of compartmented intelligence, and it is critical to everyone being deceived. There should not exist, at the end, a single party who understands everything which transpired inside the study. This way your study architecture cannot be betrayed by insiders (especially helpful for step 8).

5. Go Meta-Study Early. Never, ever, ever employ a study which is deductive in nature; rather, employ study which is only mildly and inductively suggestive (so as to avoid future accusations of fraud or liability) – and of such a nature that it cannot be challenged by any form of direct testing mechanism. Meticulously avoid direct observation, randomized controlled trials, retrospective cohort studies, case-control studies, cross-sectional studies, case reports and series, or especially reports or data from any stakeholders at risk. Go meta-study early, and use its reputation as the highest form of study to declare consensus; especially if the body of industry study from which you draw is immature, and as early in the maturation of that research as is possible. Imply idempotency in the process of assimilation, but let the data scientists interpret other studies’ results as they (we) wish. Allow them freedom in the construction of oversampling adjustment factors. Hide the methodology under which your data scientists derived conclusions from tons of combined statistics drawn from disparate studies examining different issues, whose authors were not even contacted in order to determine whether their study would apply to your statistical database or not.

6. Shift the Playing Field. Conduct a single statistical study which ostensibly tests all related conjectures and risks in one fell swoop, in a different country or practice domain from that of the stakeholders asking the irritating question to begin with; moreover, with the wrong age group or a less risky subset thereof, cherry-sorted for reliability rather than probative value, or inclusion- and exclusion-biased so as to obfuscate or enhance an effect. If the anti-science group is whining about something in prevalent use in Canada, then conduct the study in Moldova. Bias the questions asked so as to convert negatives into unknowns, or vice versa if a negative outcome is desired. If the data shows a disliked signal in aggregate, then split it up until that signal disappears – conversely, if it shows a signal in component sets, combine the data into one large Yule-Simpson effect (a toy example of this reversal follows the list below). Ensure there exists more confidence in the accuracy of the significance measure (p-value) than in the accuracy/precision of the contained measures themselves. Be cautious of the inversion effect: if your hazardous technology shows that it cures the very thing it is accused of causing, then you have gone too far in your exclusion bias. Add back in some of the positive signal cases you originally excluded until the inversion effect disappears.

7. Trashcan Failures to Confirm. Query the data 50 different ways and shades of grey, selecting for the method which tends to produce results favoring your a priori position (a toy demonstration of this multiple-comparisons trick follows the list below). Instruct the ‘data scientists’ to throw out all the other data research avenues you took (they don’t care), especially if they could aid in follow-on study which might refute your results. Despite being able to examine the data 1,000 different ways, only examine it in this one way henceforth. Peer review the hell out of any studies which do not produce a desired result. Explain any opposing ideas or studies as being simply a matter of doctors not being trained to recognize things the way your expert data scientists did. If, as a result of too much inherent bias in these methods, the data yields an inversion effect – point out the virtuous component implied by your technology: how it will feed the world or cure all diseases, how it is fighting a species of supremacy, or how the ‘technology not only does not cause the malady in question, but we found in this study that it cures it!’

8. Prohibit Replication and Follow-Up. Craft a study which is very difficult or impossible to replicate, does not offer any next steps nor serve to open follow-on questions (all legitimate study generates follow-on questions; yours should not), and most importantly, implies that the science is now therefore ‘settled’. Release the ‘data scientists’ back to their native career domains so that they cannot be easily questioned in the future. Intimidate organizations from continuing your work in any form, or from using the data you have assembled. Never find anything novel (other than a slight surprise over how unexpectedly good you found your product to be), as this might imply that you did not know the answers all along. Never base consensus upon deduction of alternatives, but rather upon how many science communicators you can have back your message publicly. Make your data proprietary. View science details as an activity of relative privation, and not any business of the public.

9. Extrapolate and Parrot/Conceal the Analysis. Publish wildly exaggerated and comprehensive claims to falsification of an entire array of ideas and precautionary diligence, extrapolated from your single questionable and inductive statistical method (panduction). Publish the study bearing a title which screams “High risk technology does not cause (a whole spectrum of maladies) whatsoever” – do not capitalize the title, as that will appear more journal-y and sciencey and edgy and rebellious and reserved and professorial. Then repeat exactly this extraordinarily broad-scope and highly scientific syllogism twice in the study abstract, first in baseless declarative form and finally in shocked, revelatory and conclusive form, as if there were some doubt about the outcome of the effort (ahem…). Never mind that simply repeating the title of the study twice, as the entirety of the abstract, is piss-poor protocol – no one will care. Denialists of such strong statements of science will find it very difficult to gain any voice thereafter. Task science journalists to craft 39 ‘research articles’ derived from your one-and-done study; deem that now 40 studies. Place the 40 ‘studies’, both pdf and charts (but not any data), behind a registration approval and a $40-per-study paywall. Do this over and over until you have achieved a number of studies and research articles which might fancifully be round-able up to ‘1,000’ (say 450 or so – see the reason below). Declare Consensus.

10. Enlist the Aid of SSkeptics and Science Communicators. Enlist the services of a for-hire public promotion gang to push-infiltrate your study into society and media, to virtue signal about your agenda, and to attack those who dissent (especially the careers of wayward scientists). Have members make final declarative claims in one-liner form – “A thousand studies show that high risk technology does not cause anything!” – a claim which they could only make if someone had actually paid the $40,000 necessary to actually access the ‘thousand studies’. That way the general public cannot possibly be educated in any fashion sufficient to refute the blanket apothegm. Have them demand final proof as the only standard for dissent. This is important: make sure the gang is disconnected from your organization (no liability imparted from these exaggerated claims, nor from any inchoate suggested dark activities *wink wink*), and moreover, that they are motivated by some social virtue cause such that they are stupid enough that you do not actually have to pay them.
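
A toy illustration of Commandment 3’s record-dilution trick (all numbers below are hypothetical assumptions, not drawn from any real study): when a passive bureaucratic database captures only a fraction of true events, and every absent record is then treated as a confirmed negative observation, the observed rate collapses well below the true rate – while the sheer record count still looks impressively ‘scientific’.

```python
# Commandment 3 sketch - hypothetical numbers only.
# A passive reporting database misses most true events; the absences are then
# counted as negative observations, diluting the signal.

population = 1_000_000        # records pulled ("perceptibly massive" to laymen and the media)
true_event_rate = 0.02        # assumed: 2% of exposed subjects actually experience the harm
capture_rate = 0.10           # assumed: the bureaucracy records only 10% of real events

true_events = population * true_event_rate
recorded_events = true_events * capture_rate           # what survives reporting/catalog errors
assumed_negatives = population - recorded_events       # absences of data treated as negatives

observed_rate = recorded_events / population
print(f"True harm rate:     {true_event_rate:.2%}")    # 2.00%
print(f"Observed harm rate: {observed_rate:.2%}")      # 0.20% - a tenfold dilution from reporting losses alone
```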
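
A toy example of Commandment 6’s Yule-Simpson move, again with made-up counts: within every stratum the exposed group fares worse, yet because exposure happens to be concentrated in the low-risk stratum, pooling the strata reverses the signal – publish whichever cut of the data you prefer.

```python
# Commandment 6 sketch - hypothetical counts only (Yule-Simpson / Simpson's paradox).
# Per stratum: (exposed_harmed, exposed_total, unexposed_harmed, unexposed_total)
strata = {
    "young (low baseline risk)": (18, 900, 1, 100),
    "old (high baseline risk)":  (30, 100, 180, 900),
}

totals = [0, 0, 0, 0]
for name, counts in strata.items():
    eh, en, uh, un = counts
    print(f"{name:27s}  exposed {eh/en:5.1%}  vs  unexposed {uh/un:5.1%}   <- exposed looks worse")
    totals = [t + c for t, c in zip(totals, counts)]

eh, en, uh, un = totals
print(f"{'pooled (publish this one)':27s}  exposed {eh/en:5.1%}  vs  unexposed {uh/un:5.1%}   <- exposure now looks protective")
```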
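
And a toy demonstration of Commandment 7’s multiple-comparisons trick (the data here is pure noise by construction, and the 50 subgroup splits are arbitrary – this is a sketch, not anyone’s actual analysis): interrogate the same null data enough different ways and at least one query will, by chance alone, hand you a publishable p-value.

```python
# Commandment 7 sketch - query pure-noise data 50 different ways, keep the "best" result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 2000
outcome = rng.normal(size=n)              # pure noise: no true effect exists anywhere
exposure = rng.integers(0, 2, size=n)     # random 'exposed' vs 'unexposed' labels

p_values = []
for _ in range(50):                       # 50 arbitrary ways of slicing the same data
    covariate = rng.normal(size=n)        # a fresh hypothetical covariate (age, region, dose window...)
    subgroup = covariate > np.median(covariate)
    exposed = outcome[subgroup & (exposure == 1)]
    unexposed = outcome[subgroup & (exposure == 0)]
    p_values.append(stats.ttest_ind(exposed, unexposed).pvalue)

print(f"Median p-value over all 50 queries: {np.median(p_values):.2f}")               # ~0.5, as the null predicts
print(f"Best (reported) p-value:            {min(p_values):.3f}")                      # frequently < 0.05 by chance
print(f"Queries 'significant' at p < 0.05:  {sum(p < 0.05 for p in p_values)} of 50")
```

Only the winning query gets written up; the other 49 go in the trashcan, per the commandment.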

The organizations which manage to pull this feat off have simultaneously claimed completed science from a single half-assed study, contended consensus, energized their sycophancy, and exonerated themselves from future liability – all in one study. To the media, this might look like science. But to a life-long researcher, it is simply a big masquerade. It is pseudoscience at the least; at its worst it constitutes criminal felony and assault against humanity. It is malice and oppression, in legal terms (see Dewayne Johnson v. Monsanto Company).

The discerning ethical skeptic bears this in mind, and uses this understanding to distinguish the sincere from the poser, and real groundbreaking study from commonplace, surreptitiously bad science.

The Ethical Skeptic, “The Lyin’tific Method: The Ten Commandments of Fake Science” The Ethical Skeptic, WordPress, 3 Sep 2018; Web, https://wp.me/p17q0e-8f1
