Sciebam – Religion with P-Values

At the heart of sciebam, resides what is in essence an embargo of specific ideas, avenues of research, and methods of investigation. It is a simultaneous appeal to truth, appeal to embargo, and proselytization all in one. Religion dressed up in p-values and confidence intervals.

Years ago, I was hired as President of a materials research company, brought in to resolve a capital-consuming logjam which had developed among the lab’s researchers. The lab and its research team were funded under a scope of finding the solution to an impasse inside material physics which had existed in industry for decades (many of the particulars are/were 32 C.F.R. 2001 Classified). A grand set of assumptions as to what was possible and impossible had grown like ivy around the state of knowledge inside the subject. The team’s task was to break this impasse in knowledge, so that critical applications of the resulting technology could be developed inside the aerospace and energy industries.

To make a long story very short, we were successful in this development – the first time the particular achievement had ever been accomplished (by mankind at least). I will never forget the 3:45 pm Friday call from one of the techs, “Mr. G, we got 18 successful tests out of 240 iterations, and they all pertained to a single parameter and single reactor setting.” That was exactly what we had been looking for. While this happenstance does not make me any kind of expert in material physics, it did derive in part from a prowess in skepticism and the processes of science. It stemmed from a discernment of the difference between real and protect-my-job scientific activity.

I had the team immediately lock down access to the building, shut off the internet and phones, and station an armed guard at the front door – and not allow anyone to enter or leave the facility until I could come (I was a mere 5 minutes away at a key vendor’s shop) and debrief them. This also involved a team celebration of course. It was an exciting evolution for everyone involved. The board members were all very noteworthy and powerful persons, who then unfortunately split into factions according to their level of greed and desire for control of both company and intellectual property – each faction encouraging me to join them.

Aside from issues of greed and power, the chief principle which served as the basis of the logjam inside that particular research involved the conflation of two differing notions of science. One which venerates challenge and incremental discovery, versus another which prioritizes the control and career stability achievable through exploiting established knowledge. I was hired in the first place because, one, I do not venerate old untested knowledge, and two, I am not intimidated in the least by scientists flaunting degrees, exclusionary lexicon, and jargon-laden technical notation. The research team knew it too. Such barriers to entry can only stand for so long before one eventually figures out the game. Within a week, I was pressing the old guard past their ability to defend the status quo, and began developing new testing approaches, lab procedures, and shift dockets.

Sciebam – Consensus Through Appeal to Truth (Implicit Embargo)

In similar principle, much of what is conducted in the name of science today is not science at all – rather a technical process of career qualification through methodical and linear confirmation bias, a set of activities which I call sciebam. Such activity contrasts with the discipline of science along the following lines:

Science (Latin: scī́mus/sciḗmus – ‘we know/we will know’)1 – leveraging challenging thinking, deductive falsification, straightforward complexity, and consilience to infer a critical path of novel comprehension – one prosecutes (pursues) truth.

Sciebam (Latin: sciēbā́mus – ‘we knew’)2 – exploiting assumption, abduction, panduction, complicated simplicity, and linear/statistical induction to confirm an existing or orphan understanding – one is holder of the truth.

†See The Distinction Between Comprehension and Understanding (The Problem of Abduction)

At the heart of sciebam, resides what is in essence an embargo of specific ideas, avenues of research, and methods of investigation. Of course most researchers do not typically perceive their habits in such fashion, so it often takes an outsider to come in and shake things up. To in effect, sweep out the cobwebs of sciebam and renew an interest in a passion for true discovery (see The Strategic Mindset).

There exists a principle of philosophy that I observe, which falls along the lines of Sir Isaac Newton’s third law of motion, also known as the law of action-reaction. That is, unless one is very careful, an appeal to truth will almost always be accompanied by, at the very least, an implicit appeal to embargo. Nihilism is the embargo which comes commensurate with the ‘truth’ of not being able to measure outside the bounds of physical reality. Collectivism suffers the embargo which results from the ‘truth’ of a successful deployment of capital. Freedom suffers the embargo which arrives under the awesome specter of an impending cataclysm’s ‘truth’.

When an advocate appeals to the authority of truth by means of enforcing an embargo of competing ideas, they are typically protecting a lie, or at least a hyperbole. Even if that advocate is accidentally correct about their truth in the end, such a mechanism still constitutes a lie because of the way in which the truth was enforced – by means of explicit or implicit false dichotomy, and enforcement of a false null hypothesis with no competing alternative idea. Accuracy in such a case is a mere triviality.

Ethical Skeptic’s Law – if science won’t conduct the experiment, society will force the experiment. One can only embargo an idea for so long.

An Example of Appeal to Truth (Implicit Embargo)

Psychologist Dr. Gary Marcus at the University of Washington (psychologists often fall prey to this error – observe which disciplines dominate inside fake skepticism) is a proponent of an embargo upon any research which pursues a pathway other than material nihilism – particularly as it pertains to arguments regarding the mind/brain relationship and near-death experiences. In an interview with Alex Tsakiris of Skeptico several years back, he leads into his argument with these critical thesis statements (they are statements, not arguments):3

I don’t doubt that there’s a [Mind≡Brain] phenomena that needs to be explained, but I doubt that the explanation is that the brain is not [the entire source] of the experience that’s being processed [in an NDE]. I cannot conceive of how that would be true. (Embargo by means of premature rationality and appeal to authority – sciebam, or a religion passed off as science)

Discussion about the brain is basically the province of neuroscience. (Appeal to Authority, Appeal to Privation/Province)

My understanding is that the mind is essentially a property of the brain. (Appeal to Self Authority and Rationality)

I don’t see a lot of room for any alternative [Brain⊆Mind] which does not have something to do with [consciousness being constrained solely to] the physiology of the brain [Mind≡Brain]. (Appeal to Embargo as an Appeal to Truth)

~ Psychologist Dr. Gary Marcus, Skeptico, 6 Aug 2015

Please take note, ethical skeptic, of the extraordinary amount of ambiguity, authority, and finality in these statements; crafted in such a way as to appear non-declarative in nature, and posed inside a fake context of objectivity. Never fall for this. This is an appeal to authority, coupled with an appeal to embargo. This would not be a problem if it were merely a personal metaphysical notion, or if this line of thinking were not then further enforced inside Dr. Marcus’ discipline of study. This is where the error occurs.

These are not statements which should pass peer review (however they often do), because of their incumbent ambiguity and lack of epistemological backing. But his meaning inside them is illuminated in the rest of that same interview with Alex Tsakiris. He is both making a final claim to conclusiveness about the nature of mind and brain, and asserting that he does not have to back up any of these claims. This was very much akin to the ‘it can’t be done’ proclamations of the older scientists in the lab over which I presided earlier in this article.

Since in this particular deliberation we have two necessary constructs, along with a constraint in terms of the discipline’s history of study, plurality under Ockham’s Razor therefore exists. Two valid ‘working hypotheses’ or constructs (placeholders until a true hypothesis can be developed) are at play. Notice that I do not encourage an embargo of research regarding Dr. Marcus’ preferred construct (I want that idea researched as well). In contrast, he chooses to embargo the ideas of his opposing camp – to use ignorance as a playground for consensus. I am perfectly fine with a [Mind≡Brain] reality. I do not, however, want it forced upon me as a religion, nor even worse a religion which is masquerading as science (sciebam).

Our chief problem arises when people who purport to be persons of ‘science’, as does Dr. Marcus above, try to push a concept or construct which is not even mature enough to be a true scientific hypothesis, to the status of final truth – skipping all the intervening steps. They then appeal to their authority as a PhD to make a claim inside a discipline which is either not their home discipline (Plaiting Fallacy), or is a discipline in which everyone has a stake, not just psychologists – metaphysical choice.

Dr. Marcus is therefore a Type I Expert. He cannot appeal to rationality nor authority here – but does so anyway.

Type I: Bowel Movement Expertise

The bowel movement expert (so named for an activity in which everyone is an expert, yet some regard their own expertise therein as superior – a perfect storm of ‘their shit don’t stink’ and ‘don’t know jack shit’) is an advisor inside a subject which is part of everyday life, or is commonly experienced by most people.

In other words, I am just as much an expert in the construct [Mind≡Brain] as is Dr. Marcus – he cannot ethically bully me with his PhD in Psychology into accepting his religion, no matter the lexicon, no matter the p-values, no matter the confidence intervals. If alternately he demanded that I accept Heaven as a reality, I would object on the same basis of argument. As a skeptic, what I observe is that the materialist is too lazy and agency-insistent to wait for an actual hypothesis of scientific discipline to mature – so they take the concept/construct that our reality is the only possible basal reality (singular, closed I E and M fields, and non-derivative) and enforce it by means of a heavy handed embargo. Such is a mistake of skepticism.

These are the challenges, which a person like me faces, when tasked to manage a legacy scientific effort. The challenges of sciebam.

Einfach Mechanism

/philosophy : skepticism : pseudo-science : alternative reduction/ : an idea which is not yet mature under the tests of valid hypothesis, yet is installed as the null hypothesis or best explanation regardless. An explanation, theory or idea which sounds scientific, yet resolves a contention through bypassing the scientific method, then moreover is installed as truth thereafter solely by means of pluralistic ignorance around the idea itself. Pseudo-theory which is not fully tested at its inception, nor is ever held to account thereafter.

Anomie

/philosophy : ethics : circularity/ : a condition in which a club, group, or society provides little or negative ethical guidance to the individuals who inhabit it or craft its direction.

Pluralistic Ignorance

ad populum – a condition wherein the majority of individuals believe without evidence, either that everyone else assents or that everyone else dissents upon a specific idea.

ad consentum – a self-reinforcing cycle wherein the majority of members in a body believe without evidence that a certain consensus exists, and therefore support that idea as consensus as well.

ad immunitatem – a condition wherein the majority of individuals are subject to a risk, however most individuals regard themselves to reside in the not-at-risk group – often because risk is not measured.

ad salutem – a condition wherein a plurality or majority of individuals have suffered an injury, however most individuals regard themselves to reside in the non-injured group – often because they cannot detect such injury.

If an idea is not even mature enough to qualify as a scientific hypothesis, it also cannot be installed as truth, no matter how ‘likely’ you regard it to be. Such error is made even worse if that truth comes bundled with an implicit embargo of any competing research (see The Ethical Skeptic’s definition of religion). If Dr. Marcus were to pursue his notions above as just one ‘working hypothesis’, then that would be ethically acceptable – however he is instead enforcing truth as an expert. He is taking a God position.

There exist at least 1700 other Elisabeth Noelle-Neumann Spiral of Silence-styled efforts in oppression, on the part of people who have declared themselves to be ‘experts in all that is wrong’, just as Dr. Marcus has done above. We at The Ethical Skeptic oppose such arrogance and false parsimony, even if we end up defending a hypothesis which itself turns out to be false in the end. Being found wrong is informative – being correct is not. In other words, what we at The Ethical Skeptic object to is pseudo-science, underpinned by pseudo-skepticism.

The Ethical Skeptic, “Sciebam – Religion with P-Values”; The Ethical Skeptic, WordPress, 17 Feb 2022; Web, https://theethicalskeptic.com/?p=38018

Trouble on the Way from Notion to Inference

As a child, my favorite bedtime story was Dr. Seuss’ I Had Trouble in Getting to Solla Sollew.1 I never forgot the storyline segment wherein the hero is involuntarily conscripted into an army, in order to confront the ‘Perilous Poozer of Pompelmoose Pass’. The erstwhile army ends up bailing on the hero, and he is left alone, surrounded, and without a real weapon, to fight not one, but many Perilous Poozers.

During a severe market recession a couple of decades ago, I was a junior partner in a firm whose principals and owners all bailed on the business, absconding in short order with almost all of the accounts, clients, and assets. These partners secretly knew that one major client was about to go bankrupt, while another was being acquired and merged. This left me alone to rescue the enterprise, and during a severe recession no less. We were abandoned with a mere two months of backlogged sales, while employees fretted over what was to happen with their jobs, families, and lives. We faced a monthly payroll that alone was twice the size of all backlogged sales. It was a dark time.2

I was quite happy and lived by the ocean
Not far from a place called the Valley of Notion
Where nothing, not anything ever was wrong
Until… well, one day I was walking along

And I learned there are troubles of more than one kind
Some come from ahead and some come from behind

There I was, all completely surrounded by trouble,
When a chap rumbled up in a One-Wheeler Wubble

“Young Fellow,” he said, “what has happened to you
has happened to me and to other folks, too
What I’ve decided to do is to think in more sense…
So I’m off to the City of True Inference”

I was able to leverage my house and retirement accounts, borrow money and time, change our market message and approach, and, through an intense road campaign, raise new business to replace the old – and not let a single employee down through forfeiture of their job. We even brought the company back to equal its record heights of business – selling it years later at a premium of nine times earnings. I also ensured that the employees who stuck with the business were rewarded well in that sale. Such experience, and willingness to stand in the gap, is essential to the life of the true philosopher – the stark challenge to think without coercion, and under differing goal structures. Such lessons are not learned in academia nor government, and yet are critically essential to good science.

In the end, the hero of the Dr. Seuss story turns back to confront his troubles, and becomes trouble to them instead. When making the journey from notion to inference, there exists a cast of standard nefarious pretenders – characters who have never done a thing with their lives, and who for whatever reason are angry at you over this reality. They will attempt to make the journey confusing and ineffective. These are the Perilous Poozers one must face down in order to discern sound science or public policy.

The Perilous Poseurs of Pompelmoose Pass

Fallacy Falcons

They don’t actually ever create anything. They hide inside the lack of accountability automatically afforded to denial and critique. They never get into the mix, but rather fly high above it, merely to swoop down and point out the informal fallacy you have committed. The problem with garden-variety fallacies is that they lend a false confidence to the mind of this form of poseur skeptic – the notion that, because they have filtered out disliked ideas by means of informal violations, they have therefore increased the likelihood that their own ideas are correct. But you will also notice that they never expose their own ideas to critique, and never show their hand at the actual logical calculus built into an argument or refutation – this is part of the massive ego complex they conceal. In the end, their debunking constitutes only a form of punishing those who disagree, and has nothing to do with any form of inference, rationality, or scientific ‘likelihood’.

It is commonly claimed that giving a fallacy a name and studying it will help the student identify the fallacy in the future and will steer them away from using the fallacy in their own reasoning. Fallacy theory is criticized by some teachers of informal reasoning for its over-emphasis on poor reasoning rather than good reasoning. Do colleges teach the Calculus by emphasizing all the ways one can make mathematical mistakes?

~ Internet Encyclopedia of Philosophy: Fallacies: 3. Pedagogy

Bayesian Bullies

Bayes’ Theorem is founded upon scientific estimations of probability, which are confirmed and then updated by a series of inductive tests. However, poseurs therein are often not aware of when such a process does and does not bear utility. These poseurs will constantly sea-lion for ‘studies’, ‘recitations’, and ‘proof’, knowing that most subjects are not easily reduced, much less resolved, by Bayesian induction under confidence. They use linear induction and abductive reasoning in place of deduction, consilience, and falsification. They elect to be scientists when an investigator is needed most, and then become technicians when they need to be scientists – shrinking from the true prosecution of ideas. They intimidate by means of unjustifiable levels of precisely framed outcome – precision as a substitute for accuracy. They frame a complete guess by means of boastfully confident (hedging) error bands. They resolve the answer before determining the right question. They forecast the future before correctly defining the present, hoping to be lucky rather than good. They harden their model to inaccurate outcomes, failing to realize its incumbent brittleness.
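That brittleness can be sketched in a few lines of Python (all numbers here are hypothetical, chosen purely for illustration): Bayesian updating performed over a hypothesis set which excludes the true data-generating process will converge, with near-total confidence, upon a wrong answer.

```python
# A minimal sketch of the failure mode described above: the 'truth' here
# is a fair coin (bias 0.50), but the modeler only entertains two
# hypotheses -- bias = 0.30 or bias = 0.70. A false dichotomy.

def update(posterior, heads):
    """One Bayes update: multiply each hypothesis by its likelihood, renormalize."""
    for h in posterior:
        posterior[h] *= h if heads else (1 - h)
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

posterior = {0.30: 0.5, 0.70: 0.5}        # equal prior belief in each hypothesis
flips = [True] * 260 + [False] * 240      # 500 flips of a roughly fair coin
for heads in flips:
    update(posterior, heads)

# The machinery settles with near-total 'confidence' on bias = 0.70,
# even though neither hypothesis is true -- precision standing in for
# accuracy, because the error resides in the framing, not the arithmetic.
print(posterior[0.70])   # → 0.99999996 (approx.)
```

No amount of additional data rescues such a model – more flips only harden its confidence, because the hypothesis set itself embargoes the correct answer.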

Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience… such methods being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation rather than the deeper questions of inference.

~ quoted and condensed from Andrew Gelman, Objections to Bayesian Statistics, 2008, Columbia University

Parable Probabilizers

Since all knowledge is uncertain, the probabilizer reasons, knowledge can therefore be gained by merely establishing a scant level of likelihood regarding it. Such probabilizers exploit the cachet of obviousness as evidence that the ‘simple is more likely’. They stack up comfortable and understandable parables, asking you to ignore the risk and just focus on the ‘explain it in simple terms’ answer they have crafted. They contend that ‘until God gets here and establishes truth for us, I will explain that which is more likely in his stead’. Yes, this is a claim to being God. They ‘results-gauge’, or produce answers which are at face value simple, conforming, or understandable (concealed complication in reality), as opposed to answers which are complex, informative, challenging, or push our development envelope. They fail to understand Ockham’s Razor, and thus crafted the mutated version called Occam’s Razor, affording one permission to wrap up all epistemological loose ends as ‘finished science’ in one fell swoop of fatal logic. They ignore the riddle of Lindy:

The fact that an opinion has been widely held is no evidence whatever that it is not utterly absurd; indeed in view of the silliness of the majority of mankind, a widely spread belief is more likely to be foolish than sensible.

~ Bertrand Russell, Marriage and Morals

Process Ponzi Schemers

One key method of pretend science is to borrow assumptions from early in the scientific method, and apply them later as pretend held assets, asking one to invest belief in such a process of science. This is at its heart a Ponzi scheme – paying off scientific answers by means of borrowed assumptions, premature questions, and gravitas which are not real owned assets. They ‘ask a question’ before conducting any kind of intelligence development or establishment of necessity. They promote a mere notion to the vaunted office of hypothesis, and then prove it by its ‘simplicity’ alone. They fail to ask ‘What do we not know?’ or ‘Can this lack of knowledge cause harm?’ They use the process of reduction and linear analysis to affirm what they already ‘knew’ (sciebam), rather than seek to challenge and falsify (science). They declare (a scientific claim) something ‘supernatural’ or ‘pseudoscience’ and not approachable by science, so that it does not have to be studied in the first place – and therefore can never become science either. They use accidental absences of data in a discovery protocol to stand as evidence of absence. They start with the answer, and finish in the very next step by means of the awesome insistence of meta-analysis. They view science as a bludgeon to conclusion, and not as a feedback cycle.

When I give a thesis to students, most of the time the problem I give for a thesis is not solved. It’s not solved because the solution of the question, most of the time, is not in solving the question, it’s in questioning the question itself. It’s realizing that in the way the problem was formulated there was some implicit prejudice or assumption that should be dropped.

~ Carlo Rovelli, The New Republic: Science Is Not About Certainty, 11 Jul 2014

At times, it is indeed the job of the ethical skeptic to stand in the gap on behalf of the innocent. To make life hell, for those who choose to be abusive troubles rather than thoughtful contributors.

The Ethical Skeptic, “Trouble on the Way from Notion to Inference”; The Ethical Skeptic, WordPress, 17 Feb 2022; Web, https://theethicalskeptic.com/?p=50006

The Lyintific Method and The Ten Commandments of Fake Science

The earmarks of bad science are surreptitious in fabric, not easily discerned by media and the public at large. Sadly, they are often not easily discerned by scientists themselves either. This is why we have ethical skepticism. Its purpose is not simply to examine ‘extraordinary claims’, but also to examine those claims which masquerade, hidden in plain sight, as if constituting ordinary boring old ‘settled science’.

Science is a strategy, not a tactic. Beware of those who wear the costume of the tactic, as a pretense of its strategy.

When you do not want the answer to be known, or you desire a specific answer because of social pressure surrounding an issue, or you are tired of irrational hordes babbling some nonsense about your product ‘harming their family members’ (*boo-hoo 😢) – maybe you want to tout the life-extending benefits of drinking alcohol, show how vaccines do not make profits, demonstrate very quickly that a pesticide is safe, or over-inflate death rates so that you can blame them on people you hate politically – or maybe you are just plain ol’ weary of the burdensome, pain-in-the-ass attributes of real science. Wherever your Procrustean aspiration may reside, this is the set of guidebook best practices for you and your science organization: trendy and proven techniques which will allow your organization to get science back on your side, at a fraction of the cost and in a fraction of the time. 👍

We have managed to transfer religious belief into gullibility for whatever can masquerade as science.

~ Nassim Nicholas Taleb, Antifragile

When you have become indignant and up to your rational limit over privileged anti-science conspiracy theorists questioning your virtuous authority and endangering your industry profits, well then it is high time to undertake the following procedure of virtuous activism. Crank up your science communicators, your skeptics, your critical thinkers and notify them to be at the ready …ready to cut-and-paste plagiarize a whole new set of journalistic propaganda, ‘cuz here comes The Lyintific Method!

The Lyintific Method

  1. Concoct the claim your club wants to be true
  2. Fund universities and direct syndicates/agencies to buy into it
  3. Whip up some mildly suggestive p-values fed into a cursory intern-run ‘meta-study’
  4. Cite the claim as ‘the science’ loudly and repeatedly
  5. Threaten the careers of any professionals who dissent
  6. Direct social media to censor and harass dissenters as ‘conspiracy theorists’
  7. Claim consensus and that the ‘science is settled’

The Ten Commandments of Fake Science

1. Select for Intimidation. Appoint an employee who is under financial or career duress to create a company formed solely to conduct this study under an appearance of impartiality, and to then go back and live again comfortably in their career or retirement. Hand them the problem definition, approach, study methodology, and scope. Use lots of Bradley Effect-vulnerable interns (as data scientists) and persons trying to gain career exposure and impress. Visibly assail any dissent as being ‘anti-science’ – the study lead will quickly grasp the implicit study goal and will execute all of this without question. Demonstrably censure or publicly berate a scientist who dissented on a previous study – allow the entire organization/world to see this. Make him the hate-symbol for your a priori cause.

2. Ask a Question First. Start by asking a ‘one-and-done’, noncritical-path, poorly framed, half-assed, sciencey-sounding question, representative of a very minor portion of the risk domain in question and bearing the most likely chance of obtaining a desired result – without any prior basis of observation, necessity, intelligence from stakeholders, nor background research. Stress that the scientific method begins with ‘asking a question’. Avoid peer or public input before and after approval of the study design. Never allow stakeholders at risk to help select or frame the core problem definition, nor to identify the data pulled. Never allow a party highly involved in making observations inside the domain (such as a parent, product user, or farmer) to have input into the question being asked, nor the study design itself. These entities do not understand science, and have no business making inputs to PhDs.

3. Amass the Right Data. Never seek peer input at the beginning of the scientific process (especially on what data to assemble), only the end. Gather a precipitously large amount of ‘reliable’ data, under a Streetlight Effect, which is highly removed from the data’s origin and stripped of any probative context – such as an administrative bureaucracy database. Screen data from sources which introduce ‘unreliable’ inputs (such as may contain eyewitness, probative, falsifying, disadvantageous anecdotal or stakeholder influenced data) in terms of the core question being asked. Gather more data to dilute a threatening signal, less data to enhance a desired one. Number of records pulled is more important than any particular discriminating attribute entailed in the data. The data volume pulled should be perceptibly massive to laymen and the media. Ensure that the reliable source from which you draw data, bears a risk that threatening observations will accidentally not be collected, through reporting, bureaucracy, process or catalog errors. Treat these absences of data as constituting negative observations.

4. Compartmentalize. Address your data analysts and interns as ‘data scientists’ and your scientists who do not understand data analysis at all, as the ‘study leads’. Ensure that those who do not understand the critical nature of the question being asked (the data scientists) are the only ones who can feed study results to people who exclusively do not grasp how to derive those results in the first place (the study leads). Establish a lexicon of buzzwords which allow those who do not fully understand what is going on (pretty much everyone), to survive in the organization. This is laundering information by means of the dichotomy of compartmented intelligence, and it is critical to everyone being deceived. There should not exist at its end, a single party who understands everything which transpired inside the study. This way your study architecture cannot be betrayed by insiders (especially helpful for step 8).

5. Go Meta-Study Early. Never, ever, ever employ study which is deductive in nature; rather, employ study which is only mildly and inductively suggestive (so as to avoid future accusations of fraud or liability) – and of such a nature that it cannot be challenged by any form of direct testing mechanism. Meticulously avoid direct observation, randomized controlled trial, retrospective cohort study, case-control study, cross-sectional study, case reports and series, or especially reports or data from any stakeholders at risk. Go meta-study early, and use its reputation as the highest form of study to declare consensus – especially if the body of industry study from which you draw is immature, and as early in the maturation of that research as is possible. Imply idempotency in the process of assimilation, but let the data scientists interpret other study results as they (we) wish. Allow them freedom in the construction of oversampling adjustment factors. Hide the methodology under which your data scientists derived conclusions from tons of combined statistics drawn from disparate studies examining different issues, whose authors were not even contacted in order to determine whether their study would apply to your statistical database or not.

6. Shift the Playing Field. Conduct a single statistical study which is ostensibly testing all related conjectures and risks in one fell swoop, in a different country or practice domain from that of the stakeholders asking the irritating question to begin with; moreover, with the wrong age group or a less risky subset thereof, cherry-sorted for reliability rather than probative value, or which is inclusion- and exclusion-biased to obfuscate or enhance an effect. If the anti-science group is whining about something in prevalent use in Canada, then conduct the study in Moldova. Bias the questions asked so as to convert negatives into unknowns, or vice versa if a negative outcome is desired. If the data shows a disliked signal in aggregate, then split it up until that signal disappears – conversely, if it shows a signal in component sets, combine the data into one large Yule-Simpson effect. Ensure there exists more confidence in the accuracy of the percentage significance in measure (p-value) than in the accuracy/precision of the contained measures themselves. Be cautious of the inversion effect: if your hazardous technology shows that it cures the very thing it is accused of causing – then you have gone too far in your exclusion bias. Add in some of the positive signal cases you originally excluded until the inversion effect disappears.

7. Trashcan Failures to Confirm. Query the data 50 different ways and shades of grey, selecting for the method which tends to produce results favoring your a priori position. Instruct the 'data scientists' to throw out all the other research avenues you explored (they don't care), especially anything which could aid follow-on study that might refute your results. Despite being able to examine the data 1,000 different ways, only examine it in this one way henceforth. Peer review the hell out of any studies which do not produce the desired result. Explain away any opposing ideas or studies as simply a matter of doctors not being trained to recognize things the way your expert data scientists did. If, as a result of too much inherent bias in these methods, the data yields an inversion effect, point out the virtuous component implied by your technology: how it will feed the world or cure all diseases, how it is fighting a species of supremacy, or how the 'technology not only does not cause the malady in question, but we found in this study that it cures it!'
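Why does querying the data 50 ways and reporting one work so reliably? Multiple comparisons. A minimal simulation (all data here is pure synthetic noise, no real effect anywhere) shows that roughly one in twenty arbitrary "analysis avenues" will clear p < 0.05 by chance alone; trashcan the rest and the survivor looks like a finding.

```python
# A sketch of "query the data 50 different ways": run 50 significance
# tests on pure noise, keep only the 'hits'. No real effect exists.
import math
import random

random.seed(1)  # deterministic noise for the demonstration

def two_sample_p(x, y):
    """Approximate two-sided z-test p-value for a difference of means."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

hits = []
for query in range(50):          # 50 arbitrary 'analysis avenues'
    x = [random.gauss(0, 1) for _ in range(40)]
    y = [random.gauss(0, 1) for _ in range(40)]
    p = two_sample_p(x, y)
    if p < 0.05:                 # chance 'confirmation' — publish this one
        hits.append((query, round(p, 4)))

print(f"{len(hits)} of 50 null queries came up 'significant': {hits}")
```

The honest remedies (pre-registration, correcting for the number of comparisons, disclosing every avenue explored) are precisely the things this commandment instructs you to withhold.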

8. Prohibit Replication and Follow-Up. Craft a study which is very difficult or impossible to replicate, offers no next steps, and opens no follow-on questions (all legitimate study generates follow-on questions; yours should not), and, most importantly, implies that the science is now therefore 'settled'. Release the 'data scientists' back to their native career domains so that they cannot easily be questioned in the future. Intimidate organizations from continuing your work in any form, or from using the data you have assembled. Never find anything novel (other than a slight surprise over how unexpectedly good you found your product to be), as this might imply that you did not know the answers all along. Never base consensus upon the deduction of alternatives, but rather upon how many science communicators you can have back your message publicly. Make your data proprietary. View science details as an activity of relative privation, not any business of the public.

9. Extrapolate and Parrot/Conceal the Analysis. Publish wildly exaggerated and comprehensive claims to falsification of an entire array of ideas and precautionary diligence, extrapolated from your single questionable and inductive statistical method (panduction). Publish the study under a title which screams "High risk technology does not cause (a whole spectrum of maladies) whatsoever"; do not capitalize the title, as that will appear more journaly and sciencey and edgy and rebellious and reserved and professorial. Then repeat exactly this extraordinarily broad-scope and highly scientific syllogism twice in the study abstract: first in baseless declarative form, and finally in shocked, revelatory, and conclusive form, as if there were some doubt about the outcome of the effort (ahem…). Never mind that simply repeating the title of the study twice, as the entirety of the abstract, is piss-poor protocol; no one will care. Denialists of such strong statements of science will find it very difficult to gain any voice thereafter. Task science journalists to craft 39 'research articles' derived from your one-and-done study; deem that now 40 studies. Place the 40 'studies', both pdf and charts (but not any data), behind a registration approval and a $40-per-study paywall. Do this over and over until you have achieved a number of studies and research articles which might fancifully be round-able up to '1,000' (say 450 or so ~ see the reason below). Declare Consensus.

10. Enlist the Aid of SSkeptics, Social Media, and Science Communicators. Enlist the services of a for-hire public promotion gang to push-infiltrate your study into society and media, to virtue signal about your agenda, and to attack those who dissent (especially the careers of wayward scientists). Have members make final declarative claims in one-liner form: "A thousand studies show that high risk technology does not cause anything!" ~ a claim they could only make if someone had actually paid the $40,000 necessary to access the 'thousand studies'. That way the general public cannot possibly be educated in any fashion sufficient to refute the blanket apothegm. Have them demand final proof as the only standard for dissent. This is important: make sure the gang is disconnected from your organization (no liability imparted from these exaggerated claims, nor from any inchoate suggested dark activities *wink wink*), and moreover is motivated by some social virtue cause, such that its members are stupid enough that you do not actually have to pay them.

The organizations who manage to pull this feat off have simultaneously claimed completed science in a single half-assed study, contended consensus, energized their sycophancy, and exonerated themselves from future liability – all in one study. To the media, this might look like science. But to a life-long researcher, it is simply a big masquerade. It is pseudo-science at the least; at its worst, it constitutes criminal felony and assault against humanity. It is malice and oppression, in legal terms (see Dewayne Johnson vs. Monsanto Company).

The discerning ethical skeptic bears this in mind, and uses this understanding to distinguish the sincere from the poseur, and real groundbreaking study from commonplace, surreptitiously bad science.

The Ethical Skeptic, "The Lyin'tific Method: The Ten Commandments of Fake Science"; The Ethical Skeptic, WordPress, 3 Sep 2018; Web, https://wp.me/p17q0e-8f1