The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

The Lyin’tific Method: The Ten Commandments of Fake Science

The earmarks of bad science are surreptitious in fabric, not easily discerned by media and the public at large. Sadly, they are often not easily discerned by scientists themselves either. This is why we have ethical skepticism. Its purpose is not simply to examine ‘extraordinary claims’, but also to examine those claims which masquerade, hidden in plain sight, as if constituting ordinary boring old ‘settled science’.

Perhaps you do not want the answer to be known, or you desire a specific answer because of social pressure surrounding an issue, or you are tired of irrational hordes babbling some nonsense about your product ‘harming their family members’ (*boo-hoo 😢). Maybe you want to tout the life-extending benefits of drinking alcohol, or overinflate death rates so that you can blame them on people you hate – or maybe you are just plain ol’ weary of the requisite attributes of real science. Wherever your Procrustean aspiration may reside, this is the set of guidebook best practices for you and your science organization: trendy and proven techniques which will allow your organization to get science back on your side, at a fraction of the cost and in a fraction of the time. 👍

Crank up your science communicators and notify them to be at the ready, to plagiarize a whole new set of journalistic propaganda, ‘cuz here comes The Lyin’tific Method!

The Lyin’tific Method: The Ten Commandments of Fake Science

When you have become indignant and up to your rational limit over privileged anti-science believers questioning your virtuous authority and endangering your industry profits (pseudo-necessity), well then it is high time to undertake the following procedure.

1. Select for Intimidation. Appoint an employee who is under financial or career duress to create a company formed solely to conduct this study under an appearance of impartiality, and to then go back and live again comfortably in their career or retirement. Hand them the problem definition, approach, study methodology and scope. Use lots of Bradley Effect vulnerable interns (as data scientists) and persons trying to gain career exposure and impress. Visibly assail any dissent as being ‘anti-science’; the study lead will quickly grasp the implicit study goal and will execute all of this without question. Demonstrably censure or publicly berate a scientist who dissented on a previous study – allow the entire organization/world to see this. Make him the hate-symbol for your a priori cause.

2. Ask a Question First. Start by asking a ‘one-and-done’, noncritical path, poorly framed, half-assed, sciencey-sounding question – representative of a very minor portion of the risk domain in question and bearing the most likely chance of obtaining a desired result – without any prior basis of observation, necessity, intelligence from stakeholders or background research. Stress that the scientific method begins with ‘asking a question’. Avoid peer or public input before and after approval of the study design. Never allow stakeholders at risk to help select or frame the core problem definition, the data pulled, or the methodology/architecture of the study.

3. Amass the Right Data. Never seek peer input at the beginning of the scientific process (especially on what data to assemble), only at the end. Gather a precipitously large amount of ‘reliable’ data, under a Streetlight Effect, which is highly removed from the data’s origin and stripped of any probative context – such as an administrative bureaucracy database. Screen out data from sources which introduce ‘unreliable’ inputs (such as may contain eyewitness, probative, falsifying, disadvantageous anecdotal or stakeholder-influenced data) in terms of the core question being asked. Gather more data to dilute a threatening signal, less data to enhance a desired one. The number of records pulled is more important than any particular discriminating attribute entailed in the data. The data volume pulled should be perceptibly massive to laymen and the media. Ensure that the reliable source from which you draw data bears a risk that threatening observations will accidentally not be collected, through reporting, bureaucracy, process or catalog errors. Treat these absences of data as constituting negative observations.
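
The dilution tactic is simple arithmetic. Below is a minimal sketch in Python – every rate and count is invented for illustration – showing how a real fourfold signal, plainly visible in 500 probative records, washes out once 20,000 ‘reliable’ administrative records bearing no exposure contrast are pooled in.

```python
# Minimal sketch (hypothetical numbers): diluting a real signal by
# amassing non-probative records. Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 500 probative records: the exposed group carries a real 4x effect (8% vs 2%)
exposed = rng.binomial(1, 0.08, 250)
control = rng.binomial(1, 0.02, 250)

def p_value(exp, ctl):
    table = [[exp.sum(), len(exp) - exp.sum()],
             [ctl.sum(), len(ctl) - ctl.sum()]]
    return stats.fisher_exact(table)[1]

print(p_value(exposed, control))   # small-sample but real signal - p typically < 0.05

# Now 'amass' 20,000 administrative records with no exposure contrast at all,
# and pool them indiscriminately into both arms.
noise = rng.binomial(1, 0.02, 10_000)
print(p_value(np.concatenate([exposed, noise]),
              np.concatenate([control, noise])))   # signal diluted into nothing
```

The record count swells into the tens of thousands – perceptibly massive to laymen and media – while the signal itself disappears.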

4. Compartmentalize. Address your data analysts and interns as ‘data scientists’, and your scientists who do not understand data analysis at all as the ‘study leads’. Ensure that those who do not understand the critical nature of the question being asked (the data scientists) are the only ones who can feed study results to people who exclusively do not grasp how to derive those results in the first place (the study leads). Establish a lexicon of buzzwords which allows those who do not fully understand what is going on (pretty much everyone) to survive in the organization. This is laundering information by means of the dichotomy of compartmented intelligence, and it is critical to everyone being deceived. There should not exist at its end a single party who understands everything which transpired inside the study. This way your study architecture cannot be betrayed by insiders (especially helpful for step 8).

5. Go Meta-Study Early. Never, ever, ever employ study which is deductive in nature; rather, employ study which is only mildly and inductively suggestive (so as to avoid future accusations of fraud or liability) – and of such a nature that it cannot be challenged by any form of direct testing mechanism. Meticulously avoid systematic review, randomized controlled trial, cohort study, case-control study, cross-sectional study, case reports and series, or reports from any stakeholders at risk. Go meta-study early, and use its reputation as the highest form of study to declare consensus – especially if the body of industry study from which you draw is immature, as early in the maturation of that research as is possible. Imply idempotency in the process of assimilation, but let the data scientists interpret other study results as they (we) wish. Allow them freedom in the construction of oversampling adjustment factors. Hide the methodology under which your data scientists derived conclusions from tons of combined statistics drawn from disparate studies examining different issues – studies whose authors were never even contacted to determine whether their work would apply to your statistical database or not.

6. Shift the Playing Field. Conduct a single statistical study which is ostensibly testing all related conjectures and risks in one fell swoop, in a different country or practice domain from that of the stakeholders asking the irritating question to begin with; moreover, with the wrong age group or a less risky subset thereof, cherry-sorted for reliability rather than probative value, or which is inclusion-and-exclusion biased to obfuscate or enhance an effect. Bias the questions asked so as to convert negatives into unknowns, or vice versa if a negative outcome is desired. If the data shows a disliked signal in aggregate, then split it up until that signal disappears; conversely, if it shows a signal in component sets, combine the data into one large Yule-Simpson effect. Ensure there exists more confidence in the accuracy of the significance measure (p-value) than in the accuracy/salience of the contained measures themselves.
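
The split/combine maneuver is the Yule-Simpson effect in action. A minimal sketch in Python, using the classic kidney-stone counts (hypothetical here, though well known in the statistics literature): each component set favors treatment A, while the one large combined table flips to favor B.

```python
# Minimal sketch of the Yule-Simpson effect: a signal present in each
# component set reverses once the data are combined.
# Counts are (successes, n) per treatment - the classic illustration.
groups = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

for name, g in groups.items():
    ra, rb = g["A"][0] / g["A"][1], g["B"][0] / g["B"][1]
    print(f"{name}: A={ra:.0%}  B={rb:.0%}  -> A wins")

# Combine the component sets into 'one large' table:
a_succ = sum(g["A"][0] for g in groups.values())
a_n    = sum(g["A"][1] for g in groups.values())
b_succ = sum(g["B"][0] for g in groups.values())
b_n    = sum(g["B"][1] for g in groups.values())
print(f"pooled: A={a_succ/a_n:.0%}  B={b_succ/b_n:.0%}  -> B wins")
```

Whether one splits or combines is therefore a choice of which answer to manufacture – precisely the playing-field shift described above.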

7. Trashcan Failures to Confirm. Query the data 50 different ways and shades of grey, selecting for the method which tends to produce results favoring your a priori position. Instruct the ‘data scientists’ to throw out all the other data research avenues you took (they don’t care), especially anything which could aid a follow-on study able to refute your results. Despite being able to examine the data 1,000 different ways, only examine it in this one way henceforth. Peer review the hell out of any studies which do not produce a desired result. Explain any opposing ideas or studies as being simply a matter of doctors not being trained to recognize things the way your expert data scientists did. If, as a result of too much inherent bias in these methods, the data yields an inversion effect – point out the virtuous component implied (our technology not only does not cause the malady in question, but we found in this study that it cures it!).
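
How reliably does ‘querying the data 50 different ways’ pay off? A minimal sketch, assuming pure noise and the conventional 0.05 threshold: even when no effect whatsoever exists, a handful of the 50 arbitrary slices will come back ‘significant’. Publish those; trashcan the rest.

```python
# Minimal sketch of multiple-comparison shopping: 50 arbitrary splits of
# pure noise, keeping whichever queries cross p < 0.05. Requires numpy/scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
outcome = rng.normal(size=500)            # pure noise - no real effect exists

hits = []
for query in range(50):                   # 50 different ways to slice the data
    grouping = rng.integers(0, 2, 500)    # an arbitrary split each time
    p = stats.ttest_ind(outcome[grouping == 0], outcome[grouping == 1]).pvalue
    if p < 0.05:
        hits.append((query, round(p, 4)))

print(hits)   # expect roughly 2-3 'significant' queries out of 50
```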

8. Prohibit Replication and Follow-Up. Craft a study which is very difficult or impossible to replicate, does not offer any next steps nor serves to open follow-on questions (all legitimate study generates follow-on questions; yours should not), and most importantly, implies that the science is now therefore ‘settled’. Release the ‘data scientists’ back to their native career domains so that they cannot be easily questioned in the future. Intimidate organizations from continuing your work in any form, or from using the data you have assembled. Never find anything novel (other than a slight surprise over how unexpectedly good you found your product to be), as this might imply that you did not know the answers all along. Never base consensus upon deduction of alternatives; rather, base it upon how many science communicators you can have back your message publicly. Make your data proprietary. View science details as an activity of relative privation, not any business of the public.

9. Extrapolate and Parrot/Conceal the Analysis. Publish wildly exaggerated & comprehensive claims to falsification of an entire array of ideas and precautionary diligence, extrapolated from your single questionable and inductive statistical method (panduction). Publish the study bearing a title which screams “High risk technology does not cause (a whole spectrum of maladies) whatsoever” – do not capitalize the title, as that will appear more journaly and sciencey and edgy and rebellious and reserved and professorial. Then repeat exactly this extraordinarily broad-scope and highly scientific syllogism twice in the study abstract: first in baseless declarative form, and finally in shocked revelatory and conclusive form, as if there were some doubt about the outcome of the effort (ahem…). Never mind that simply repeating the title of the study twice as the entire abstract is piss-poor protocol – no one will care. Denialists of such strong statements of science will find it very difficult to gain any voice thereafter. Task science journalists to craft 39 ‘research articles’ derived from your one-and-done study; deem that now 40 studies. Place the 40 ‘studies’, both pdf and charts (but not any data), behind a registration approval and $40-per-study paywall. Do this over and over until you have achieved a number of studies and research articles which might fancifully be round-able up to ‘1,000’ (say 450 or so – see the reason below). Declare Consensus.

10. Enlist Aid of SSkeptics and Science Communicators. Enlist the services of a public promotion for-hire gang to push-infiltrate your study into society and media, to virtue signal about your agenda, and to attack those (especially the careers of wayward scientists) who dissent. Have members make final declarative claims in one-liner form – “A thousand studies show that high risk technology does not cause anything!” – a claim which they could only make if someone had actually paid the $40,000 necessary to access the ‘thousand studies’. That way the general public cannot possibly be educated in any fashion sufficient to refute the blanket apothegm. This is important: make sure the gang is disconnected from your organization (no liability imparted from these exaggerated claims nor any inchoate suggested dark activities *wink wink), and moreover, that its members are motivated by some social virtue cause such that they are stupid enough that you do not actually have to pay them.

The organizations who manage to pull this feat off have simultaneously claimed completed science in a single half-assed study, contended consensus, energized their sycophancy and exonerated themselves from future liability – all in one study. To the media, this might look like science. But to a life-long researcher, it is simply a big masquerade. It is pseudoscience at the least; at its worst it constitutes criminal felony and assault against humanity. It is malice and oppression, in legal terms (see Dewayne Johnson vs. Monsanto Company).

The discerning ethical skeptic bears this in mind and uses this understanding to discern the sincere from the poser, and real groundbreaking study from commonplace, surreptitiously bad science.

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “The Lyin’tific Method: The Ten Commandments of Fake Science” The Ethical Skeptic, WordPress, 3 Sep 2018; Web, https://wp.me/p17q0e-8f1


Panduction: The Invalid Form of Inference

One key, if not the primary, form of invalid inference on the part of fake skeptics resides in the methodology of panductive inference. A pretense of Popper demarcation, panduction is employed as a masquerade of science in the form of false deduction. Moreover, it constitutes an artifice which establishes the purported truth of a favored hypothesis by means of the extraordinary claim of having falsified every competing idea in one fell swoop of rationality. Panduction is the most common form of pseudoscience.

Having just finished my review of the Court’s definition of malice and oppression in the name of science, as outlined in the Dewayne Johnson vs. Monsanto Company case, my thinking broached a category of pseudoscience which is practiced by parties who share similar motivations to the defendant in that landmark trial. Have you ever been witness to a fake skeptic who sought to bundle all ‘believers’ into one big deluded group, who all hold or venerate the same credulous beliefs? Have you ever read a skeptic blog claiming a litany of subjects to be ‘woo’ – yet fully unable to cite any evidence whatsoever which served to epistemologically classify that embargoed realm of ideas under such an easy categorization of dismissal? What you are witness to is the single most common, insidious and pretend-science habit of fake skeptics: panduction.

It’s not that all the material contained in the embargoed hypotheses realm has merit. Most of it does not. But what is comprised therein, even and especially in being found wrong, resides along the frontier of new discovery. You will soon learn on this journey of ethical skepticism that discovery is not the goal of the social skeptic; rather, that is exactly what they have been commissioned to obfuscate.

Science to them is nothing more than an identity which demands ‘I am right’.

There exist three forms of valid inference, in order of increasing scientific gravitas: abduction, induction and deduction. Cleverly downgrading science along these forms of inference, in order to avoid more effective inference methods which might reveal a disliked outcome, constitutes another form of fallacy altogether, called methodical deescalation. We shall not address methodical deescalation here, but rather a fourth common form of inference, which is entirely invalid in itself. Panduction is a form of ficta rationalitas: an invalid attempt to employ critical failures in logic and evidence in order to condemn a broad array of ideas, opinions, hypotheses, constructs and avenues of research as being Popper-falsified, when in fact nothing of the sort has been attained. It is a method of proving yourself correct by impugning everyone and everything besides the idea you seek to protect, all in one incredible feat of armchair or bar-stool reasoning. It is often peddled as critical thinking by fake skeptics.

Panduction is a form of syllogism derived from extreme instances of Appeal to Ignorance, Inverse Negation and/or Bucket Characterization from a Negative Premise. It constitutes a shortcut attempt to promote one idea at the expense of all other ideas, or to kill an array of ideas one finds objectionable. Nihilists employ panduction, for example, as a means to ‘prove’ that nothing exists aside from the monist and material entities which they approve as real. They maintain the fantasy that science has proved that everything aside from what they believe is false by a Popperian standard of science – i.e. deduced. This is panduction.

Panduction

/philosophy : invalid inference/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop ‘everything but what my club believes’ as constituting one group of bad people, who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series studies nor replication methodology can be derived from this type of ‘study’, which in essence serves to make it pseudoscience. This is a common ‘study’ format conducted by social skeptics masquerading as scientists, to pan people and subjects they dislike.

There are three general types of Panduction. In its essence, panduction is any form of inference used to pan an entire array of theories, constructs, ideas and beliefs (save for one favored and often hidden one), by means of the following technique groupings:

  1. Extrapolate and Bundle from Unsound Premise
  2. Impugn through Invalid Syllogism
  3. Mischaracterize through False Observation

The first is executed through attempting to falsify entire subject horizons through bad extrapolation. The second involves poorly developed philosophies of denial. Finally, the third involves the process of converting disliked observations, or failures to observe, into favorable observations:

Panduction Type I

Extrapolate and Bundle from Unsound Premise – Bucket Characterization through Invalid Observation – using a small, targeted or irrelevant sample of linear observations to extrapolate and further characterize an entire asymmetric array of ideas other than a preferred concealed one. Falsification by:

Absence of Observation (praedicate evidentia modus ponens) – any of several forms of exaggeration or avoidance in qualifying a lack of evidence, logical calculus or soundness inside an argument. Any form of argument which claims a proposition consequent ‘Q’, yet lacks the qualifying modus ponens ‘If P then’ premise in its expression – rather implying ‘If P then’ as its qualifying antecedent. This serves as a means of surreptitiously avoiding a lack of soundness or lack of logical calculus inside that argument, and moreover of enforcing only its conclusion ‘Q’ instead. A ‘There is no evidence for…’ claim made inside a condition of little study, or full absence of any study whatsoever.

Insignificant Observation (praedicate evidentia) – hyperbole in extrapolating or overestimating the gravitas of evidence supporting a specific claim, when only one examination of merit has been conducted, insufficient hypothesis reduction has been performed on the topic, a plurality of data exists but few questions have been asked, few dissenting or negative studies have been published, or few or no such studies have indeed been conducted at all.

Anecdote Error – the abuse of anecdote in order to squelch and panduct an entire realm of ideas. This comes in two forms:

Type I – a refusal to follow up on an observation or replicate an experiment does not relegate the data involved to an instance of anecdote.

Type II – an anecdote cannot be employed to force a conclusion, such as using it as an example to condemn a group of persons or topics – but an anecdote can be employed to introduce Ockham’s Razor plurality. This is a critical distinction which social skeptics conveniently neither realize nor employ.

Cherry Picking – pointing to a talking sheet of handpicked or commonly circulated individual cases or data that seem to confirm a particular position, while ignoring or denying a significant portion of related context cases or data that may contradict that position.

Straw Man – misrepresentation of either an ally’s or opponent’s position or argument, or the fabrication of such in the absence of any stated opinion.

Dichotomy of Specific Descriptives – a form of panduction, wherein anecdotes are employed to force a conclusion about a broad array of opponents, yet are never used to apply any conclusion about self, or one’s favored club. Specific bad things are only done by the bad people, but very general descriptives of good, apply when describing one’s self or club. Specifics on others who play inside disapproved subjects, general nebulous descriptives on self identity and how it is acceptable ‘science’ or ‘skepticism’.

Associative Condemnation (Bucket Characterization and Bundling) – the attempt to link controversial subject A with personally disliked persons who support subject B, in an effort to impute falsehood to subject B and frame its supporters as whackos. Guilt through bundling association and lumping all subjects into one subjective group of believers. This will often involve a context shift or definition expansion in a key word as part of the justification. Spinning, for example, the idea that those who research pesticide contribution to cancer are also therefore flat Earthers.

Panduction Type II

Impugn through Invalid Syllogism – Negative Assertion from a Pluralistic, Circular or Equivocal Premise – defining a set of exclusive premises to which the contrapositive applies, and which serves to condemn all other conditions.

Example (note that ‘paranormal’ here is defined as that which a nihilist rejects as being even remotely possible):

All true scientists are necessarily skeptics. True skeptics do not believe in the paranormal. Therefore no true scientist can research the paranormal.

All subjects which are true are necessarily not paranormal. True researchers investigate necessarily true subjects. Therefore to investigate a paranormal subject makes one not a true researcher.

All false researchers are believers. All believers tend to believe the same things. Therefore all false researchers believe all the same things.

Evidence only comes from true research. A paranormal investigator is not a true researcher. Therefore no evidence can come from a paranormal subject.

One may observe that the above four examples – thought which rules social skepticism today – are circular in syllogism and can only serve to produce the single answer which was sought in the first place. By ruling out entire domains of theory, thought, construct, idea and effort, one has essentially panned everything except that which one desires to be indeed true (without saying as much). It would be like Christianity pointing out that every single thought on the part of mankind is invalid, except what is in the Bible – the Bible being the codification equivalent of the above four circular syllogisms, into a single document.

Panduction Type III

Mischaracterize through False Observation – Affirmation from Manufacturing False Positives or Negatives – manipulating the absence of data or the erroneous nature of various data collection channels to produce false negatives or positives.

Panduction Type III is an extreme form of an appeal to ignorance. In an appeal to ignorance, one is faced with observations of negative conditions which could tempt one to infer inductively that there exists nothing but the negative condition itself. An appeal to ignorance simply reveals one of the weaknesses of inductive inference.

Let’s say that I find a field which a variety of regional crow murders frequent. So I position a visual motion-detection camera on a pole across from the field in order to observe the crow murders which frequent that field. In my first measurement and observation instance, I observe all of the crows to be black. Let us further assume that I then repeat that observation exercise 200 times on that same field over the years. From this data I may well develop a hypothesis that includes a testable mechanism in which I assert that all crows are black. I have observed a large population size, and all of my observations were successful; to wit: I found 120,000 crows to all be black. This is inductive inference. Even though this technically would constitute an appeal to ignorance, it is not outside of reason to assert a new null hypothesis – that all crows are black – because my inference was derived from the research and was not a priori favored. I am not seeking to protect the idea that all crows are black simply because I or my club status are threatened by the specter of a white crow. The appeal to ignorance fallacy is merely a triviality in this case, and does not ‘disprove’ the null (see the Appeal to Fallacy). Rather it stands as a caution, that plurality should be monitored regarding the issue of all crows being black.

But what if I become so convinced that the null hypothesis in this case is the ‘true’ hypothesis, or even preferred that idea in advance because I was a member of a club which uses a black crow as its symbol? In such a case I approach the argument with an a priori belief which I must protect. I begin to craft my experimental interpretation of measurement such that it conforms to this a priori mandate in understanding. This will serve to produce four species of study observation procedural error, which are in fact pseudoscience: the clever masquerade of science and knowledge:

A. Affirmation from Result Conversion – employing a priori assumptions as filters or data converters, in order to produce desired observational outcomes.

1. Conversion by a priori Assumption (post hoc ergo propter hoc). But what if the field I selected bore a nasty weather phenomenon of fog, on an every-other-day basis? Further, this fog obscured a good view of the field, to the point where I could only observe the glint of sunlight off the crows’ wings – which causes several of them to appear white, even though they are indeed black. But because I ‘know’ there are no white crows now, I use a conversion algorithm I developed to count the glints inside the fog and register them as observations of black crows – even though a white crow could also cause the same glint. I have created false positives by corrupted method.

2. Conversion by Converse a priori Assumption (propter hoc ergo hoc – aka plausible deniability). Further, what if I assumed that any time I observed a white crow, this would therefore be an indication that fog was present, and a condition of Conversion by a priori Assumption was therefore assumed to be in play? I would henceforth never be able to observe a white crow at all, finding only results which conform to the null hypothesis – which would now be an Omega Hypothesis (see The Art of Professional Lying: The Tower of Wrong).
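
A minimal sketch of both conversions, with invented fog and white-crow rates: Rule 1 converts every foggy-day glint into a black-crow record, and Rule 2 converts any remaining white sighting into presumed fog – routing it straight back through Rule 1. The white crow becomes unobservable by method, not by absence.

```python
# Minimal sketch (hypothetical rates): manufacturing false positives by
# a priori conversion rules. 1% of crows are truly white; fog on half the days.
import random
random.seed(7)

def record(is_white: bool, foggy: bool, rule_2: bool) -> str:
    if foggy:
        return "black"      # Rule 1: every glint in fog counted as a black crow
    if is_white and rule_2:
        return "black"      # Rule 2: a white sighting 'proves' fog was present
    return "white" if is_white else "black"

sightings = [(random.random() < 0.01, random.random() < 0.5)
             for _ in range(120_000)]

for rule_2 in (False, True):
    whites = sum(record(w, f, rule_2) == "white" for w, f in sightings)
    print(f"Rule 2 {'on: ' if rule_2 else 'off:'} {whites} white crows recorded")
```

With Rule 2 engaged, the count of white crows is exactly zero – not because none exist, but because the method has defined them out of observability.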

Example: Viking Mars Lander Data Manipulation

Two Mars Viking Landers were sent to Mars, in part to study for signs of life. NASA researchers took soil samples the Viking landers scooped from the surface and mixed them with nutrient-rich water. If the soil had life, the theory went, the soil’s microbes would metabolize the nutrients in the water and release a certain signature of radioactive molecules. To their pleasant surprise, the nutrients metabolized and radioactive molecules were released – suggesting that Mars’ soil contained life. However, the Viking probes’ other two experiments found no trace of organic material, which prompted the question: if there were no organic materials, what could be doing the metabolizing? So by assumption, the positive results from the metabolism test were dismissed as derivative from some other chemical reaction, which has not been identified to date. The study was used as a rational basis from which to decline further search for life on Mars, when it should have been appropriately deemed ‘inconclusive’ instead (especially in light of our finding organic chemicals on Mars in the last several months).1

B. Affirmation from Observation Failure Conversion – errors in observation are counted as observations of negative conditions, further then used as data or as a data screening criterion.

Continuing with our earlier example, what if on 80% of the days in which I observed the field full of crows, the camera malfunctioned and errantly pointed into the woods to the side, and I was fully unable to make observations at all on those days? Further, what if I counted those non-observing days as ‘black crow’ observation days, simply because I had defined a black crow as being the ‘absence of a white crow’ (pseudo-Bayesian science), instead of being constrained to only the actual observation of an actual physical white crow? Moreover, what if, because of the unreliability of this particular camera, any observations of white crows it presented were tossed out, so as to prefer observations from ‘reliable’ cameras only? This too is pseudoscience, in two forms:

1. Observation Failure as Observation of a Negative (kíndynos apousías) – a statistical study which observes false absences of data and then assumes them to represent verified negative observations. A study containing a field or set of data in which there exists a risk that the absences being measured will be caused by external factors which artificially serve to make the evidence absent, through failure of detection/collection/retention of that data. The absences of data are therefore not negative observations; rather, they are presumed to be negative observations in error. This will often serve to produce an inversion effect in the final results.

2. Observation Failure as Basis for Selecting For Reliable over Probative Data (Cherry Sorting) – when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data, based upon the a priori and often subjective claim that the observation was ‘not reliable’. This ignores the probative value of the observation, and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative and can be reduced only through statistical analytics – likely then serving only to prove what one was looking for in the first place (aka pseudo-theory).

These two forms of conversion of observation failures into evidence in favor of a particular position are highlighted no better than by studies which favor healthcare plan diagnoses over cohort and patient input surveys. Studies such as the Dutch MMR-Autism Statistical Meta-Analysis or the Jain-Marshall Autism Statistical Analysis failed precisely because of the two above fallacious methods regarding the introduction of data – relying only upon statistical analytics of risk-sculpted and cherry-sorted data, rather than direct critical path observation.

Example: Jain-Marshall Autism Study

Why is the 2015 Jain-Marshall Study of weak probative value? Because it took third-party, unqualified (health care plan) sample interpretations of absences (these are not observations – they are ‘lack-of’ observations – which are not probative data to an intelligence specialist, nor to a scientist – see pseudo-theory) from vaccinated and non-vaccinated children’s final medical diagnoses at ages 2, 3, and 5. It treated failures in the data collection of these healthcare databases as observations of negative results (kíndynos apousías) – a data vulnerability similar to the National Vaccine Injury Compensation System’s ‘self-volunteering’ of information and its limitation of detection to within 3 years. This favors a bad, non-probative data repository simply because of its perception as being ‘reliable’ as a source of data. It fails to catch 99% of signal observations (Cherry Sorting), and there is good demonstrable record of that failure to detect actual injury circumstances.2
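
A minimal sketch of how this produces an inversion effect, using invented exposure, incidence and detection rates: a true twofold relative risk flips below 1.0 once the repository’s failures to collect are silently counted as verified negative observations.

```python
# Minimal sketch (hypothetical rates) of kindynos apousias: absences of
# record treated as negatives, with detection failing more often in the
# exposed cohort - producing an inversion of a real 2x risk signal.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

exposed   = rng.random(n) < 0.5
true_case = np.where(exposed, rng.random(n) < 0.02,   # 2% incidence exposed
                              rng.random(n) < 0.01)   # 1% incidence unexposed

# The 'reliable' repository catches only a fraction of true cases - and
# fewer among the exposed (e.g. diagnoses falling outside its window).
caught = true_case & np.where(exposed, rng.random(n) < 0.30,
                                       rng.random(n) < 0.80)

def db_rate(mask):   # absence of record is treated as a verified negative
    return caught[mask].mean()

print(f"true relative risk:     {true_case[exposed].mean() / true_case[~exposed].mean():.2f}")
print(f"database relative risk: {db_rate(exposed) / db_rate(~exposed):.2f}")
```

The repository is perfectly ‘reliable’; it is merely blind in a direction which happens to favor the promoter.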

One might chuckle at the face-value ludicrousness of either Panduction Type III A or B. But Panduction Type III is regularly practiced inside of peer-reviewed journals of science. Its wares constitute the most insidious form of malicious and oppressive fake science. One can never expect a journalist to understand why this form of panduction is invalid, but one should certainly expect it of their peer review scientists – those who are there to protect the public from bad science. And of course, one should expect it from an ethical skeptic.

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “Panduction: The Invalid Form of Inference” The Ethical Skeptic, WordPress, 31 Aug 2018; Web, https://wp.me/p17q0e-8c6

 


Malice and Oppression in the Name of Skepticism and Science

The Dewayne Johnson versus Monsanto case did not simply provide precedent for pursuit of Monsanto over claims regarding harm caused by its products. It also established a court litmus regarding actions in the name of science which are generated from malice, and which seek oppression upon a target populace or group of citizens.
Watch out fake skeptics – your targeting of citizens may well fit the court’s definition of malice, and your advocacy actions those of oppression – especially under a context of negligence and when posed falsely in the name of science.

If you are a frequent reader of The Ethical Skeptic, you may have witnessed me employ the terms ‘malice’ and ‘malevolence’ with regard to certain forms of scientific or political chicanery. Indeed, the first principles of ethical skepticism focus on the ability to discern a condition wherein one is broaching malice in the name of science – the two key questions of ethical skepticism:

  1. If I was wrong, would I even know it?
  2. If I was wrong, would I be contributing to harm?

These are the questions which a promoter of a technology must constantly ask, during and after the deployment of a risk-bearing mechanism. When a company starts to run from these two questions, and further then employs science as a shield to proffer immunity from accountability, a whole new set of motivation conditions comes into play.

The litmus elements of malice and oppression, when exhibited by a ‘science’-promoting party, exist now inside the precedent established by the Court in the case of Dewayne Johnson vs. Monsanto: Superior Court of the State of California, for the County of San Francisco: Case No. CGC-16-550128, Dewayne Johnson, Plaintiff, v. Monsanto Company, Defendant (see Honorable Suzanne R. Bolanos; Verdict Form; web, https://www.baumhedlundlaw.com/pdf/monsanto-documents/johnson-trial/Johnson-vs-Monsanto-Verdict-Form.pdf). Below I have digested from the court proceedings the critical questions which led to a verdict of both negligence, as well as malice and oppression, performed in the name of science, on the part of Monsanto Company.

It should be noted that Dewayne Johnson v. Monsanto Company is not a stand-alone case in the least. The case establishes precedent in terms of those actions which are punishable in a legal context, on the part of corporations or agencies who promote risk-bearing technologies in the name of science – and more importantly in that process, target at-risk stakeholders who object, dissenting scientists, and activists in the opposition. So let us be clear here: inside a context of negligence, the following constitutes malice and oppression:

1. The appointing of inchoate agents, whose purpose is to publicly demean opponents and intimidate scientific dissent, by means of a variety of public forum accusations, including that of being ‘anti-science’.

Inchoate Action

/philosophy : pseudoscience : malice and oppression/ : a set of activities or a permissive argument which is enacted or proffered by a celebrity or power-wielding sskeptic, which prepares, implies, excuses or incites their sycophancy to commit acts of harm against those who have been identified as the enemy, anti-science, credulous or ‘deniers’. Usually crafted in such a fashion as to provide deniability of linkage to the celebrity or inchoate activating entity.

This includes skeptics and groups appointed, commissioned or inchoate-encouraged by the promoter, even if not paid for such activity.

2. The publishing of scientific study merely to promote or defend a negligent product or idea, or solely for the purpose of countermanding science disfavored by the promoter of a negligent product or idea.

All that has to be established is a context of negligence on the part of the promoter. This includes any form of failure to conduct follow-up study of a deployed technology inside which a mechanism of risk could possibly exist. So, let’s take a look at the structure of precedent in terms of negligence, malice and oppression established by the Court in this matter. The questions inside the verdict, from which this structure was derived, are listed thereafter in generic form.

Malice and Oppression in the Name of Science

/philosophy : the law : high crimes : oppression/ : malice which results in the oppression of a targeted segment of a population is measured inside three litmus elements. First, is the population at risk able to understand and make decisions with regard to the science, technology or any entailed mechanism of its risk? Second, has an interest group or groups crafted the process of science, or science review and communication, in an unethical fashion so as to steer its results and/or interpretation in a desired direction? Third, has a group sought to attack, unduly influence, intimidate or demean various members of society, media, government or the targeted group, as a means to enforce their science conclusions by other than appropriate scientific method and peer review?

I.  Have a group or groups targeted or placed a population at other than natural risk inside a scientific or technical matter

a. who bears a legitimate stakehold inside that matter

b. who can reasonably understand and make self-determinations inside the matter

c. whom the group(s) have contended to be illegitimate stakeholders, or as not meriting basic human rights or constitutionality with regard to the matter?

II.  Have these group(s) contracted for or conducted science methods, not as an incremental critical path means of investigation, but rather only as a means to

a. promote a novel technology, product, service, condition or practice which it favors, and

b. negate an opposing study or body of research

c. exonerate the group from reasonable liability to warn or protect the stakeholders at risk

d. exonerate the group from the burden of precaution, skepticism or follow-up scientific study

e. cover for past scientific mistakes or disadvantageous results

f. damage the reputation of dissenting researchers

g. influence political and legislative decisions by timing or extrapolation of results

h. pose a charade of benefits or detriment in promotion/disparagement of a market play, product or service

i. establish a monopoly/monopsony or to put competition out of business?

III.  Have these group(s) enlisted officers, directors, or managing agents, outside astroturf, undue influence, layperson, enthusiast, professional organization or media entities to attack, intimidate and/or disparage

a. stakeholders who are placed at risk by the element in question

b. wayward legislative, executive or judicial members of government

c. dissenting scientists

d. stakeholders they have targeted or feel bear the greatest threat

e. neutral to challenging media outlets

f. the online and social media public?

The Ruling Precedent (Verdict)

The sequence of questions posed by the Court to the Jury in the trial of Dewayne Johnson vs. Monsanto (applied generically as litmus/precedent):

Negligence

I.  Is the product or service set of a nature about which an ordinary consumer can form reasonable minimum safety expectations?

II.  Did the products or services in question fail to ensure the safety an ordinary consumer would have expected when used or misused in an intended or reasonably foreseeable way?

III.  Was the product design, formulation or deployment a contributor or principal contributing factor in causing harm?

IV.  Did the products or services bear potential risks that were known, or were knowable, in light of the scientific knowledge that was generally accepted in the scientific community at the time of their manufacture, distribution or sale?

V.  Did the products or services present a substantial danger to persons using or misusing them in an intended or reasonably foreseeable way?

VI.  Would ordinary citizen stakeholder users have recognized these potential risks?

VII.  Did the promoting agency or company fail to adequately warn either government or citizen stakeholders of the potential risks, or did they under-represent the level of risk entailed?

VIII.  Was this lack of sufficient warnings a substantial factor in causing harm?

IX.  Did the promoter know or should it reasonably have known that its products or services were dangerous or were likely to be dangerous when used or misused in a reasonably foreseeable manner?

X.  Did the promoter know or should it reasonably have known that users would not realize the danger?

XI.  Did the promoter fail to adequately warn of the danger or instruct on the safe use of products or services?

XII.  Could and would a reasonable manufacturer, distributor, or seller under the same or similar circumstances have warned of the danger or instructed on the safe use of the products or services?

XIII.  Was the promoter’s failure to warn a substantial factor in causing harm?

Malice and Oppression

XIV.  Did the promoter of the products or services act with malice or oppression towards at-risk stakeholders or critical scientists or opponents regarding this negligence or the risks themselves?

XV.  Was the conduct constituting malice or oppression committed, ratified, or authorized by one or more officers, directors, or managing agents of the promoter, acting on behalf of promoter?

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “Malice and Oppression in the Name of Skepticism and Science” The Ethical Skeptic, WordPress, 28 Aug 2018; Web, https://wp.me/p17q0e-85F

 

