The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

The Fermi Paradox is Babysitting Rubbish

The curtains of paradox are woven of the fabric of ill assumption and intent. So goes this apothegm of ethical skepticism. Our assumptions surrounding the promotional rhetoric of the Fermi Paradox are immature and lacking in skeptical circumspection. The odds are that a civilization long determined not to exist will contact us well before we are equipped to resolve any of the questions raised by the Fermi Paradox’s very posing to begin with.

Primate sign language existed long before we taught American Sign Language to Koko the Gorilla, and well before we even knew that many primates possessed both a vocal and a gestural language all their own.1 2 It took us a mere 30,000 years to figure out that animals and plants on our very own planet communicate by means of which we did not bear the first inkling of awareness.3 How much more time will mankind need in order to understand a potential communication means which is completely alien to anything we have ever experienced? How do we go about establishing a probability that we will be able to discern a construct so incongruous with our own forms of communication, and easily distinguish it from all forms of background noise inside our cosmos?

The reality concerning the rhetorical ‘Fermi Paradox’, as it is called, centers around the tenet of ethical skepticism which cites that our most dangerous weakness resides in the fact that we do not know what we do not know. We have signal-searched an infinitesimally small segment of our galaxy, and an even smaller segment of time.4 Yet, in our lack of wisdom, we begin to demand of the cosmos pseudo-reductionist answers which we are not prepared to accept in the least. The Fermi Paradox, along with its rhetorical resolution, stands exemplary of just such an exercise in pretend epistemology. The Fermi Paradox proceeds as such (from Wikipedia):5

The Fermi paradox is a conflict between arguments of scale and probability that seem to favor intelligent life being common in the universe, and a total lack of evidence of intelligent life having ever arisen anywhere other than on the Earth.

Totally. In a blog post earlier this year for the SETI Institute, Seth Shostak, its Senior Astronomer, opines that the Fermi Paradox, and in particular its ‘total lack of evidence of intelligent life having ever arisen anywhere‘ component, constitute ‘strong arguments’.6

The fact that aliens don’t seem to be walking our planet apparently implies that there are no extraterrestrials anywhere among the vast tracts of the Galaxy. Many researchers consider this to be a radical conclusion to draw from such a simple observation. Surely there is a straightforward explanation for what has become known as the Fermi Paradox. There must be some way to account for our apparent loneliness in a galaxy that we assume is filled with other clever beings.

A lot of folks have given this thought. The first thing they note is that the Fermi Paradox is a remarkably strong argument.

Please note that the idea that ‘they don’t exist’, as a scientific construct, is simple – but it is not straightforward, as Shostak incorrectly claims. It is a highly feature-stacked alternative. His thoughts in this regard lack philosophical rigor. Moreover, the notion that the Fermi Paradox, and in particular its last component’s boast in the form of an appeal to ignorance, constitutes any form of ‘strong argument’ is laughable pseudoscience to say the least. An amazing level of arrogance. However, since Seth has made the claim, let’s examine for a moment the Fermi Paradox in light of ethical skepticism’s elements which define the features of a strong argument.7

   The Fermi Paradox Fails Assessment by Features of a Strong Argument

Formal Strength

1.  Coherency – argument is expressed with elements, relationships, context, syntax and language which conveys actual probative information or definition

The Paradox is simple. But never confuse simple with the state of being coherent. This is a common tradecraft inside social skepticism. The statement bears no underpinning definition. It seeks a free pass from the perspective that everyone knows what ‘intelligent life’, ‘evidence’, ‘scale’ and ‘probability’ are, right? For me, as an ethical skeptic, I fear what I do not know, that I do not know. I possess no definition of how evidence of this type would appear, nor the specific measures of probability and scale entailed in such a search. I cannot presume such arrogance of knowledge on my part – and certainly cannot pretense a resolution in its offing, before I even start looking.

This is no different than saying ‘God is Love’. Simplicity does not convey coherence (in the eyes of an ethical skeptic) – as it can constitute merely a charade. The principle is not coherent because it has been issued as law before any of its foundational elements of soundness have been framed, much less measured. This is what renders the principle both an Einfach Mechanism as well as an Imposterlösung Mechanism. Incoherent pretenses of science. A null hypothesis which has not earned its mantle of venerability.

2.  Soundness – premises support or fail to adequately support its proposed conclusion

The premise that there exists ‘a total lack of evidence of intelligent life having ever arisen anywhere other than on the Earth‘ is unsound. Notice the prejudicial modifier ‘total’, employed in framing a supposed ‘lack of evidence’. Total lack, not just a lack, but a total lack – well now I believe you then. This is prejudicial language feeding into casuistry; it is agency – and does not stand as a derivation of science by any means. The term ‘common in the universe‘ is also not constrained, relegating the Paradox artificially into a divergent model structure. This as well renders the syllogism unsound.

A similar conjecture could be made in terms of a personal accusation in this form: There is absolutely a paradox surrounding your claim to never have beaten your wife, yet we can find absolutely no evidence whatsoever to support such a claim on your part that you have never beaten anyone’s wife.

3.  Formal Theory – strength and continuity of predicate and logical calculus (basis of formal fallacy)

The Formal Theory of the model consists simply of a rhetorical syllogism citing that there exists a paradox. What is being sold are the premises and not the inert syllogism itself: the statement ‘a total lack of evidence of intelligent life having ever arisen anywhere other than on the Earth‘. This is a syllogistic approach to reverse-selling an unfounded premise assumption without due rigor of science, also known as rhetoric.

4.  Inductive Strength – sufficiency of completeness and exacting inference which can be drawn (as a note, deductive inference when it exists, relates to 3. Formal Theory)

Since observation has not been completed in reality (see below, the Parce-Ames equation), there exists no inductive strength for the Fermi Paradox rhetorical argument.

Informal Strength

5.  Circumstantial Strength – validity of information elements comprised by the argument or premises

Since observation has not been completed in reality (see below, the Parce-Ames equation), there exists no factual strength for the Fermi Paradox rhetorical argument.

6.  Integrity of Form/Cogency – informal critique of expression, intent or circumstantial features

The Fermi Paradox as it is currently expressed (and there is a future in which it can exist as an actual scientific principle) bears several forms of informal fallacy:

It constitutes a non rectum agitur fallacy of science method
It stands as an appeal to authority
It stands as an appeal to ignorance
It is both an Einfach Mechanism as well as an Imposterlösung Mechanism (as there exists a very paltry set of ‘factualness’ surrounding this subject)
It is an Omega Hypothesis

To an ethical skeptic, the presence of technique involving reverse-selling a premise by means of structured rhetoric, inside a context of tilted language and equivocal definition, bearing a complete lack of soundness, and finally featuring the five fallacies listed above, all collectively hint at one thing – A LIE. Lies are sold socially, by social skeptics.

With that idea in mind of a social construct being sold as science, examine for a moment many of the explanations for the Fermi Paradox (which also assume it to be a strong argument – which it is not), explanations which place galaxy-inhabiting civilizations inside the same dilemma context in which mankind currently resides. Alien civilizations all blow themselves up with nukes at some point. They all die from carbon harvesting global warming. They pollute themselves into extinction. They all could not exit their solar system as intact beings. They could not solve mass/energy to speed of light relativism, etc. This habit of judging novel observations in light of current and popular controversies is an exercise in socially constructed science, called the familiar controversy bias. It is a key indicator that social skepticism, and not science, is attempting to sway the perception of the public at large inside an issue. A rather humorous example of such socially induced bias can be found here. And in light of the Fermi Paradox constituting rhetoric itself – such extrapolations of current controversies off of its presumptive base form a sort of double layer cake of rhetoric. An amazing feat of organic untruth (lying with facts).

Familiar Controversy Bias

/philosophy : informal fallacy : habituation : bias/ : the tendency of individuals or researchers to frame explanations of observed phenomena in terms of mainstream current or popular controversies. The Fermi Paradox exists because aliens all eventually blow themselves up with nuclear weapons right after they discover radio. Venus is a case of run-away global warming from greenhouse gasses. Every hurricane since 1995 has been because of Republicans. Every disaster is God’s punishment for some recent thing a nation conducted. Mars is a case of ozone depletion at its worst, etc. Every paradox or novel observation is readily explainable in terms of a current popular or manufactured controversy. Similar to the anachronistic fallacy of judging past events in light of today’s mores or ethics.

Continuing with the Shostak blog article then, Seth casually whips out another extraordinary claim near its end.

Consequently, scientists in and out of the SETI community have conjured up other arguments to deal with the conflict between the idea that aliens should be everywhere and our failure (so far) to find them.

One does not have to ‘conjure up arguments’ counter to his ontologically preferred Fermi Paradox resolution (that advanced extraterrestrial civilizations do not exist at all), as Shostak claims8 – such alternative arguments are mandated by Ockham’s Razor. They should be studied, and do not need to be forced or conjured in any way. The Paradox itself in no way suggests that there ‘aren’t any advanced extraterrestrial civilizations out there’, as Shostak all-too-eagerly opines as well. This idea of complete absence bears no scientific utility, neither as a construct nor as a null hypothesis. So why push it so hard? Perhaps, again in fine form of rhetoric, far-away advanced extraterrestrial civilizations are not the target of this lazy abductive inference at all. Rather, the real focus is on promoting the concept of non-existence of nearby advanced civilizations, or even visiting ones. A very familiar target set, comprising a curiously large portion of Shostak’s vitriol, air time and professional focus.

These are all extraordinary claims, made by a person with zero evidence to support them – coupled with a high anchoring bias in squelching this issue before the public at large. Seth Shostak’s entire mind, purpose and reason for being, is based upon a psychological obsession with the dissemination of propaganda surrounding this issue. He was selected for the symbolic role and the suit he inhabits precisely because of these foibles. He is babysitting a symbolic issue, passing out pablum to the public and helping obfuscate the answer to a question which his sponsors do not want asked in the first place.

Remember that in order to get the right answer, one need only ask a wrong question (see Interrogative Biasing: Asking the Wrong Question in Order to Derive the Right Answer). The Fermi Paradox is an example of just such a tactic of obfuscation. It is a religious action – stemming from a faith, which we will outline below.

The Faith of the Fermi Paradox

The fact that we accept the Fermi Paradox, given the following conditions, renders it more a statement of faith than a statement of science by any means.

Critical Path Logic: Fatal

The preferred rhetorical conclusion it entails employs the implicit concepts ‘alien’, ‘extraterrestrial’, ‘scale’, ‘probability’, ‘evidence’ and ‘life’ in rhetorical, prejudicial, incoherent and unsound syllogism. While the topic is valid, the question in its current form is not.

However, let us presume this condition of fatality to be irrelevant, and continue down its logical critical path in reductionist series risk:

Reductionist Series Risk: Extremely High

α:  It presumes mankind to know the relevant range of what constitutes an inhabitant life form

β :  It presumes mankind to know the means by which inhabitants would ostensibly communicate

γ :  It presumes that all inhabitants are distant

δ :  It presumes that technology takes only a single path and direction similar to mankind’s own

ε :  It presumes that all communication media throughout the galaxy are similar to ours

ζ :  It presumes that we would recognize all forms of communication similar to ours

η :  It presumes that inhabitants would broadcast in omnidirectional and powerful EM signals or would be directing their EM energy straight toward us only

θ :  It presumes that inhabitants would broadcast ‘in the clear’ (i.e. unencrypted outside the cosmic background radiation)

ι :    It presumes that broadcasting inhabitants would have also presumed that no one was listening to them and/or would not care

κ :  It presumes that life can exist inside only our relative frame of reference/dimensionality

λ :  It presumes that we have examined a significant amount of space

μ :  It presumes that we rigorously know what space and time are, and its reductive inference upon radiation to be

ν :  It presumes that we have rigorously studied the timeframe in which an advanced civilization could broadcast during its development history

Finally, we address key elements of the same logical critical path in macroscopic or parallel risk:

Macroscopic Parallel Risk: Fatally High

  •  It presumes that mankind’s life originated only upon Earth through abiogenesis 
  •  It presumes that all intelligent life is noisy
  •  It presumes that all universal inhabitants are full time bound by our frame of reference/dimensionality
  •  It presumes that we have actually looked for inhabitant signals
  •  It presumes that humankind’s existence is lacking in agency
  •  It presumes that science/skepticism is lacking in agency
  •  It presumes that those who might have observed such communication in the past (distant or recent), would expose this circumstance
  •  It precludes the idea that a subset of mankind is already communicating

The Omega Hypothesis therefore – the idea being artificially enforced at all costs – is expressed no better than by Seth Shostak himself, its proponent and babysitter:9

“Some even insisted that there was no paradox at all: the reason we don’t see evidence of extraterrestrials is because there aren’t any.”

This is what is known inside ethical skepticism as babysitter rhetoric – false wisdom promulgated to stand in as a proxy for the wisdom one desires to block. It is wishful thinking; pre-emptive thinking. The better-fit (least convoluted in necessary assumptions) explanation is that ‘they’ are already aware of us, and have been for some time. This actually is a very elegant resolution for the Fermi Paradox at a local level, along with a battery of robust observations which lay fallow and unattended inside so-called ‘fringe’ science – a hypothesis which requires significantly less gymnastics in denying data and twisting philosophy than that required to enforce a single mandatory ‘nobody is home’ Omega Hypothesis. In this regard, I am not a proponent of enforcing one Ockham’s Razor-violating answer over the condition of plurality which would dictate examining two possible solutions. I remain open to both ideas, as this is the ethic of skepticism – anathema to the cadre of pretenders who oppress this subject.

Babysitter

/philosophy : rhetoric : pseudoscience : science communicator/ : a celebrity or journalist who performs the critical tasks of agency inside a topic which is embargoed. A science communicator assigned the responsibility of appeasing public curiosity surrounding an issue which the public is not authorized to research nor understand. A form of psychosis, exhibited by an individual who is a habituated organic liar. A prevarication specialist who spins a subset of fact, along with affectations of science, in such a way as to craft the appearance of truth – and further then invests the sum of their life’s work into perpetuating or enforcing a surreptitious lie.

So let’s develop a kind of Reverse Drake Equation, why don’t we, based upon the above-cited criteria of probability (the Greek-alphabet-labelled items above, as opposed to the bullet-pointed items). This is a kind of risk chain assessment. Remember that risks in a risk chain in series are multiplicative as you add them into the mix. However, some of the above risks are in parallel, so they cannot be added into the series-based formula below (the Parce-Ames equation). The series-based risks are highlighted by their corresponding Greek alphabet characters above, and are assigned a serial factor used inside the formula below. Parallel risk elements cannot be added into a risk reductionist critical path (as they are subjective and duplicative in nature, and therefore cannot be employed inside a reductionist approach), and so are excluded from the equation. Beware of those who intermix parallel and series risk arguments, as they are plural arguing. A sign of lack of intellectual rigor, and a key sign of agency.

Parce-Ames Probability Dynamic

The Parce-Ames equation demonstrates the ludicrous folly of the Fermi Paradox. It serves to expand the dynamic regarding the probability that we would have detected even one (x) of the total population (N) of advanced civilizations (from the Drake Equation) in our galaxy by this moment in our history. The Parce-Ames Probability Dynamic therefore hinges upon fourteen low-confidence and independent input variables, as factored into 250 billion stars, all compounding risk in series according to the following equation:

P(N(x))  =  (N / 2.5 × 10¹¹)  ·  Σ(Ψ)  ·  α  ·  β  ·  γ  ·  δ  ·  ε  ·  ζ  ·  η  ·  θ  ·  ι  ·  κ  ·  λ  ·  μ  ·  ν

where:

P(N(x)) = the probability that we would have detected even one (x) of N advanced civilizations in our galaxy by this moment in our history

Σ(Ψ) = the sum total of all stars (Σ) studied by all observation apertures (Ψ) on Earth

and

α :  the chance that we grasp adequately what constitutes an inhabitant life form
β :  the chance that we have correctly assumed how inhabitants would ostensibly communicate
γ :  the chance that inhabitants are inside our search band
δ :  the chance that a given inhabitant technology takes a path and direction similar to mankind’s own
ε :  the chance that any communication is similar to ours
ζ :  the chance that we would recognize all forms of communication similar to ours
η :  the chance that inhabitants would broadcast in omnidirectional and powerful EM signals or would be directing their EM energy straight toward us only
θ :  the chance that inhabitants would broadcast ‘in the clear’ (i.e. unencrypted outside the cosmic background radiation)
ι :    the chance that broadcasting inhabitants would have also presumed that no one was listening to them and/or would not care
κ :  the chance that life can exist inside only our relative frame of reference/dimensionality
λ :  the percentage of signal-detectable space we have examined
μ :  the chance that we rigorously know what space and time are, and its reductive inference upon radiation to be
ν :  the chance that we have rigorously studied the timeframe in which an advanced civilization could/would broadcast in a detectable form during its development history
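To make the series-risk dynamic concrete, here is a minimal sketch in Python. Every numeric value in it is a hypothetical placeholder – none of these factors has ever been measured, which is rather the point – and only the structure (the surveyed fraction of 250 billion stars, discounted serially by the thirteen confidence factors α through ν) is taken from the equation above.

```python
# Illustrative sketch of the Parce-Ames probability dynamic.
# All inputs below are hypothetical placeholders; the structure alone
# (coverage term x thirteen serial confidence factors) mirrors the equation above.

STARS_IN_GALAXY = 2.5e11          # the 2.5 x 10^11 denominator

def parce_ames(N, stars_surveyed, factors):
    """P(N(x)): chance of having detected at least one of N advanced
    civilizations, given serial (multiplicative) confidence factors."""
    p = (N / STARS_IN_GALAXY) * stars_surveyed   # N/2.5e11 · Σ(Ψ)
    for f in factors:                            # series risks multiply
        p *= f
    return p

# Thirteen serial factors (alpha through nu), each generously granted a
# coin-flip 0.5 confidence - purely for illustration.
factors = [0.5] * 13

# Suppose N = 10,000 civilizations (a generous Drake-style guess) and that
# roughly one million stars have been meaningfully surveyed to date.
print(parce_ames(N=1e4, stars_surveyed=1e6, factors=factors))
# -> ~4.9e-06: effectively zero, even under charitable assumptions
```

Even with every unknown granted even odds, the product lands at roughly five in a million – declaring a ‘total lack of evidence’ before the terms of the product are even known is rhetoric, not measurement.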

The journalists at Science News sum this equation dynamic up in one recitation:10

A new calculation shows that if space is an ocean, we’ve barely dipped in a toe. The volume of observable space combed so far for E.T. is comparable to searching the volume of a large hot tub for evidence of fish in Earth’s oceans, astronomer Jason Wright at Penn State and colleagues say in a paper posted online September 19 at arXiv.org.

Another way to put this, in terms of the discussion herein, is that the Parce-Ames equation always approaches zero unless a majority of its answers are ascertained and refined in accuracy by an observing civilization. We as a civilization are nowhere near the dynamic range of the Parce-Ames curve progression. We are in the first hot tub of ocean water, swimming around looking for fish and yelling ‘a total lack of any evidence!’ as bubbles come streaming up in sequence with our underwater declarations. And we have on our smart sciencey swim trunks too.

The stark reality is that, in the absence of a civilization coming alongside and teaching us many of the objective elements of the Parce-Ames equation, we face very little chance of ever striking out on our own and finding (even by means of radio telescope) a nearby, much less galactic, civilization. As you can see in the graphic above, the inflection point of knowledge which would equip us to answer the Fermi Paradox lies far past the more likely state of our being contacted first.
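That inflection point can be illustrated with the same hypothetical inputs used in the sketch above (invented numbers, structure only): hold the survey coverage fixed and sweep a common value across the thirteen factors, and the detection probability stays pinned at effectively zero until nearly every unknown has been resolved with high confidence.

```python
# Sweep a common confidence value c across all thirteen factors, holding the
# (hypothetical) coverage term fixed: (N / 2.5e11) * stars_surveyed,
# with N = 1e4 civilizations and ~1e6 stars meaningfully surveyed.
coverage = (1e4 / 2.5e11) * 1e6

for c in (0.1, 0.5, 0.9, 0.99):
    p = coverage * c ** 13
    print(f"all thirteen factors = {c:>4}:  P = {p:.2e}")

# all thirteen factors =  0.1:  P = 4.00e-15
# all thirteen factors =  0.5:  P = 4.88e-06
# all thirteen factors =  0.9:  P = 1.02e-02
# all thirteen factors = 0.99:  P = 3.51e-02
# The curve only leaves 'effectively zero' territory once nearly every
# unknown is resolved - the inflection point described in the text.
```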

The dramatically higher odds are that an intelligent inhabiting life form will find us long before we ever find even one ourselves. The idea, therefore, that another advanced culture is aware of or has visited Earth is well supported by Ockham’s Razor, and should stand as a construct of science, even now. To avoid this alternative is a form of pseudoscience. The more likely realities are that either:

1. they will find us first, by detecting the gamma ray bursts from our 2243 nuclear weapon detonations, long before we resolve even the first variable inside the Drake Equation – or

2. they already were engaged with ‘us’ a long time ago.

Both of these explanations are much less feature-stacked than is the ‘they do not exist’ alternative being promoted by social skepticism.

We have no idea how an alien might exist, communicate or travel. We possess no compelling argument which falsifies the very possible hypothesis that they were already here long ago, and are still hanging around. Not one shred of science – therefore, plurality under Ockham’s Razor is mandated. And if you do not understand what this means, neither are you ready to argue this topic.

The First Duty of Ethical Skepticism is not to promulgate answers. I do not hold an answer inside this subject. Rather, it is to spot and to oppose agency. Especially the rhetoric of babysitting agency. Foolishness, dressed up as science. Wonder in the purported offing – but oppressive in its reality of enforcement.

Fake skepticism.

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “The Fermi Paradox is Babysitting Rubbish” The Ethical Skeptic, WordPress, 2 Oct 2018; Web, https://wp.me/p17q0e-8jd


The Lyin’tific Method: The Ten Commandments of Fake Science

The earmarks of bad science are surreptitious in fabric, not easily discerned by media and the public at large. Sadly, they are not often easily discerned by scientists themselves either. This is why we have ethical skepticism. Its purpose is not simply to examine ‘extraordinary claims’, but also to examine those claims which masquerade, hidden in plain sight, as if constituting ordinary boring old ‘settled science’.

Perhaps you do not want the answer to be known, or you desire a specific answer because of social pressure surrounding an issue, or you are tired of irrational hordes babbling some nonsense about your product ‘harming their family members’ *boo-hoo 😢. Maybe you want to tout the life-extending benefits of drinking alcohol, or overinflate death rates so that you can blame them on people you hate – or maybe you are just plain ol’ weary of the requisite attributes of real science. Wherever your Procrustean aspiration may reside, this is the set of guidebook best practices for you and your science organization: trendy and proven techniques which will allow your organization to get science back on your side, at a fraction of the cost and in a fraction of the time. 👍

Crank up your science communicators and notify them to be at the ready, to plagiarize a whole new set of journalistic propaganda, ‘cuz here comes The Lyin’tific Method!

The Lyin’tific Method: The Ten Commandments of Fake Science

When you have become indignant and up to your rational limit over privileged anti-science believers questioning your virtuous authority and endangering your industry profits (pseudo-necessity), well then it is high time to undertake the following procedure.

1. Select for Intimidation. Appoint an employee who is under financial or career duress to create a company formed solely to conduct this study under an appearance of impartiality, and to then go back and live again comfortably in their career or retirement. Hand them the problem definition, approach, study methodology and scope. Use lots of Bradley Effect vulnerable interns (as data scientists) and persons trying to gain career exposure and impress. Visibly assail any dissent as being ‘anti-science’; the study lead will quickly grasp the implicit study goal and will execute all of this without question. Demonstrably censure or publicly berate a scientist who dissented on a previous study – allow the entire organization/world to see this. Make him become the hate-symbol for your a priori cause.

2. Ask a Question First. Start by asking a ‘one-and-done’, noncritical path & poorly framed, half-assed, sciencey-sounding question, representative of a very minor portion of the risk domain in question and bearing the most likely chance of obtaining a desired result – without any prior basis of observation, necessity, intelligence from stakeholders nor background research. Stress that the scientific method begins with ‘asking a question’. Avoid peer or public input before and after approval of the study design. Never allow stakeholders at risk to help select nor frame the core problem definition, nor the data pulled, nor the methodology/architecture of study.

3. Amass the Right Data. Never seek peer input at the beginning of the scientific process (especially on what data to assemble), only at the end. Gather a precipitously large amount of ‘reliable’ data, under a Streetlight Effect, which is highly removed from the data’s origin and stripped of any probative context – such as an administrative bureaucracy database. Screen out data from sources which introduce ‘unreliable’ inputs (such as may contain eyewitness, probative, falsifying, disadvantageous anecdotal or stakeholder influenced data) in terms of the core question being asked. Gather more data to dilute a threatening signal, less data to enhance a desired one. The number of records pulled is more important than any particular discriminating attribute entailed in the data. The data volume pulled should be perceptibly massive to laymen and the media. Ensure that the reliable source from which you draw data bears a risk that threatening observations will accidentally not be collected, through reporting, bureaucracy, process or catalog errors. Treat these absences of data as constituting negative observations.
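As a toy illustration of the ‘gather more data to dilute a threatening signal’ instruction above – all counts invented for the sketch – padding the exposed group’s denominator with administrative records in which cases were simply never captured drives a twofold rate ratio straight back to null:

```python
# Hypothetical counts: the honest picture is 400 cases among 20,000 exposed
# records versus 200 cases among 20,000 unexposed records.
exposed_cases, exposed_n     = 400, 20_000
unexposed_cases, unexposed_n = 200, 20_000

honest = (exposed_cases / exposed_n) / (unexposed_cases / unexposed_n)
print("honest rate ratio:", honest)          # 2.0 - a clear signal

# Now pull 20,000 additional 'reliable' administrative records for the exposed
# group in which cases were never captured, and treat those absences as
# negative observations, exactly as the step above prescribes.
diluted = (exposed_cases / (exposed_n + 20_000)) / (unexposed_cases / unexposed_n)
print("diluted rate ratio:", diluted)        # 1.0 - the signal is gone
```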

4. Compartmentalize. Address your data analysts and interns as ‘data scientists’ and your scientists who do not understand data analysis at all, as the ‘study leads’. Ensure that those who do not understand the critical nature of the question being asked (the data scientists) are the only ones who can feed study results to people who exclusively do not grasp how to derive those results in the first place (the study leads). Establish a lexicon of buzzwords which allow those who do not fully understand what is going on (pretty much everyone), to survive in the organization. This is laundering information by means of the dichotomy of compartmented intelligence, and it is critical to everyone being deceived. There should not exist at its end, a single party who understands everything which transpired inside the study. This way your study architecture cannot be betrayed by insiders (especially helpful for step 8).

5. Go Meta-Study Early. Never, ever, ever employ a study which is deductive in nature; rather employ study which is only mildly and inductively suggestive (so as to avoid future accusations of fraud or liability) – and of such a nature that it cannot be challenged by any form of direct testing mechanism. Meticulously avoid systematic review, randomized controlled trial, cohort study, case-control study, cross-sectional study, case reports and series, or reports from any stakeholders at risk. Go meta-study early, and use its reputation as the highest form of study to declare consensus; especially if the body of industry study from which you draw is immature, and as early in the maturation of that research as is possible. Imply idempotency in the process of assimilation, but let the data scientists interpret other study results as they (we) wish. Allow them freedom in the construction of oversampling adjustment factors. Hide the methodology under which your data scientists derived conclusions from tons of combined statistics drawn from disparate studies examining different issues, whose authors were not even contacted in order to determine whether their study would apply to your statistical database or not.

6. Shift the Playing Field. Conduct a single statistical study which ostensibly tests all related conjectures and risks in one fell swoop, in a different country or practice domain from that of the stakeholders asking the irritating question to begin with; moreover, with the wrong age group or a less risky subset thereof, cherry-sorted for reliability rather than probative value, or which is inclusion- and exclusion-biased to obfuscate or enhance an effect. Bias the questions asked so as to convert negatives into unknowns or vice versa, if a negative outcome is desired. If the data shows a disliked signal in aggregate, then split it up until that signal disappears – conversely, if it shows a signal in component sets, combine the data into one large Yule-Simpson effect. Ensure there exists more confidence in the accuracy of the percentage significance measure (p-value) than in the accuracy/salience of the contained measures themselves.
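A minimal sketch of the Yule-Simpson maneuver named above, again with invented counts: within each severity stratum the exposed group fares worse, yet pooling the strata makes exposure look protective.

```python
# Invented two-stratum example of the Yule-Simpson (Simpson's paradox) effect.
# stratum: (exposed_cases, exposed_n, unexposed_cases, unexposed_n)
strata = {
    "mild":   (60, 600,   8, 100),
    "severe": (50, 100, 240, 600),
}

totals = [0, 0, 0, 0]
for name, counts in strata.items():
    ec, en, uc, un = counts
    print(f"{name:>6}: exposed {ec/en:.1%} vs unexposed {uc/un:.1%}")
    totals = [t + c for t, c in zip(totals, counts)]

ec, en, uc, un = totals
print(f"pooled: exposed {ec/en:.1%} vs unexposed {uc/un:.1%}")

#   mild: exposed 10.0% vs unexposed 8.0%
# severe: exposed 50.0% vs unexposed 40.0%
# pooled: exposed 15.7% vs unexposed 35.4%
# Split the data and the harm shows in every stratum; combine it and the
# harmful product suddenly appears protective.
```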

7. Trashcan Failures to Confirm. Query the data 50 different ways and shades of grey, selecting for the method which tends to produce results favoring your a priori position. Instruct the ‘data scientists’ to throw out all the other data research avenues you took (they don’t care), especially any which could aid in follow-on study that might refute your results. Despite being able to examine the data 1,000 different ways, only examine it in this one way henceforth. Peer review the hell out of any studies which do not produce a desired result. Explain any opposing ideas or studies as being simply a matter of doctors not being trained to recognize things the way your expert data scientists did. If, as a result of too much inherent bias in these methods, the data yields an inversion effect – point out the virtuous component implied (our technology not only does not cause the malady in question, but we found in this study that it cures it~!).
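And a sketch of the ‘query it 50 different ways’ step, assuming nothing about any real dataset: the outcome below is pure noise, yet scanning fifty arbitrary slices of it and keeping only the friendliest p-value will usually turn up something ‘significant’ to report.

```python
import math
import random

random.seed(42)

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (normal approx.)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))

# An outcome that is pure noise - no effect exists anywhere in it.
population = [random.gauss(0, 1) for _ in range(2000)]

p_values = []
for _ in range(50):                 # fifty arbitrary ways of slicing the data
    random.shuffle(population)
    exposed, unexposed = population[:1000], population[1000:]
    p_values.append(two_sample_p(exposed, unexposed))

print(f"best of 50 queries: p = {min(p_values):.3f}")
print("queries with p < 0.05:", sum(p < 0.05 for p in p_values))
# Report the winning slice, trashcan the other 49, and the null result
# becomes a publishable 'finding'.
```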

8. Prohibit Replication and Follow Up. Craft a study which is very difficult or impossible to replicate, does not offer any next steps nor serve to open follow-on questions (all legitimate study generates follow-on questions; yours should not), and most importantly, implies that the science is now therefore ‘settled’. Release the ‘data scientists’ back to their native career domains so that they cannot be easily questioned in the future. Intimidate organizations from continuing your work in any form, or from using the data you have assembled. Never find anything novel (other than a slight surprise over how unexpectedly good you found your product to be), as this might imply that you did not know the answers all along. Never base consensus upon deduction of alternatives, rather upon how many science communicators you can have back your message publicly. Make your data proprietary. View science details as an activity of relative privation, not any business of the public.

9. Extrapolate and Parrot/Conceal the Analysis. Publish wildly exaggerated & comprehensive claims to falsification of an entire array of ideas and precautionary diligence, extrapolated from your single questionable and inductive statistical method (panduction). Publish the study bearing a title which screams “High risk technology does not cause (a whole spectrum of maladies) whatsoever” – do not capitalize the title, as that will appear more journaly and sciencey and edgy and rebellious and reserved and professorial. Then repeat exactly this extraordinarily broad-scope and highly scientific syllogism twice in the study abstract, first in baseless declarative form and finally in shocked revelatory and conclusive form, as if there were some doubt about the outcome of the effort (ahem…). Never mind that simply repeating the title of the study twice, as constituting the entire abstract, is piss-poor protocol – no one will care. Denialists of such strong statements of science will find it very difficult to gain any voice thereafter. Task science journalists to craft 39 ‘research articles’ derived from your one-and-done study; deem that now 40 studies. Place the 40 ‘studies’, both pdf and charts (but not any data), behind a registration approval and $40-per-study paywall. Do this over and over until you have achieved a number of studies and research articles which might fancifully be round-able up to ‘1,000’ (say 450 or so ~ see reason below). Declare Consensus.

10. Enlist Aid of SSkeptics and Science Communicators. Enlist the services of a public promotion for-hire gang to push-infiltrate your study into society and media, to virtue signal about your agenda, and to attack those (especially the careers of wayward scientists) who dissent. Have members make final declarative claims in one-liner form – “A thousand studies show that high risk technology does not cause anything!” – a claim which they could only make if someone had actually paid the $40,000 necessary to actually access the ‘thousand studies’. That way the general public cannot possibly be educated in any fashion sufficient to refute the blanket apothegm. Have them demand final proof as the only standard for dissent. This is important: make sure the gang is disconnected from your organization (no liability imparted from these exaggerated claims nor any inchoate suggested dark activities *wink wink), and moreover, that they are motivated by some social virtue cause such that they are stupid enough that you do not actually have to pay them.

The organizations who manage to pull this feat off have simultaneously claimed completed science in a single half-assed study, contended consensus, energized their sycophancy and exonerated themselves from future liability – all in one study. To the media, this might look like science. But to a life-long researcher, it is simply a big masquerade. It is pseudo-science in the least; and at its worst constitutes criminal felony and assault against humanity. It is malice and oppression, in legal terms (see Dewayne Johnson vs Monsanto Company).

The discerning ethical skeptic bears this in mind and uses this understanding to discern the sincere from the poser, and real groundbreaking study from commonplace surreptitiously bad science.

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “The Lyin’tific Method: The Ten Commandments of Fake Science” The Ethical Skeptic, WordPress, 3 Sep 2018; Web, https://wp.me/p17q0e-8f1


Malice and Oppression in the Name of Skepticism and Science

The Dewayne Johnson versus Monsanto case did not simply provide precedent for pursuit of Monsanto over claims regarding harm caused by its products. It also established a court litmus regarding actions in the name of science which are generated from malice and which seek oppression upon a target populace or group of citizens.
Watch out fake skeptics – your targeting of citizens may well fit the court’s definition of malice, and your advocacy actions those of oppression – especially under a context of negligence and when posed falsely in the name of science.

If you are a frequent reader of The Ethical Skeptic, you may have witnessed me employ the terms ‘malice’ and ‘malevolence’ in terms of certain forms of scientific or political chicanery. Indeed, the first principles of ethical skepticism focus on the ability to discern a condition wherein one is broaching malice in the name of science – the two key questions of ethical skepticism:

  1. If I was wrong, would I even know it?
  2. If I was wrong, would I be contributing to harm?

These are the questions which a promoter of a technology must constantly ask, during and after the deployment of a risk bearing mechanism. When a company starts to run from these two questions, and further then employs science as a shield to proffer immunity from accountability, a whole new set of motivation conditions comes into play.

The litmus elements of malice and oppression, when exhibited by a ‘science’ promoting party, now exist inside the following precedent established by the Court in the case of Dewayne Johnson vs. Monsanto : Superior Court of the State of California, for the County of San Francisco: Case No. CGC-16-550128, Dewayne Johnson, Plaintiff, v. Monsanto Company, Defendant (see Honorable Suzanne R. Bolanos; Verdict Form; web, https://www.baumhedlundlaw.com/pdf/monsanto-documents/johnson-trial/Johnson-vs-Monsanto-Verdict-Form.pdf). Below I have digested from the Court proceedings the critical questions which led to a verdict of both negligence, as well as malice and oppression, performed in the name of science, on the part of Monsanto Company.

It should be noted that Dewayne Johnson v. Monsanto Company is not a stand-alone case in the least. The case establishes precedent in terms of those actions which are punishable in a legal context, on the part of corporations or agencies who promote risk bearing technologies in the name of science – and who, more importantly in that process, target at-risk stakeholders who object, dissenting scientists and activists in the opposition. So let us be clear here: inside a context of negligence, the following constitutes malice and oppression:

1.  The appointing of inchoate agents, whose purpose is to publicly demean opponents and intimidate scientific dissent, by means of a variety of public forum accusations, including that of being ‘anti-science’.

Inchoate Action

/philosophy : pseudoscience : malice and oppression/ : a set of activity or a permissive argument which is enacted or proffered by a celebrity or power wielding sskeptic, which prepares, implies, excuses or incites their sycophancy to commit acts of harm against those who have been identified as the enemy, anti-science, credulous or ‘deniers’. Usually crafted in such a fashion as to provide a deniability of linkage to the celebrity or inchoate activating entity.

This includes skeptics, and groups appointed, commissioned or inchoate encouraged by the promoter, even if not paid for such activity.

2.  The publishing of scientific study, merely to promote or defend a negligent product or idea, or solely for the purpose of countermanding science disfavored by the promoter of a negligent product or idea.

All that has to be established is a context of negligence on the part of the promoter. This includes any form of failure to follow up with study of a deployed technology inside which a mechanism of risk could possibly exist. So, let’s take a look at the structure of precedent in terms of negligence, malice and oppression established by the Court in this matter. The questions inside the verdict, from which this structure was derived, are listed thereafter in generic form.

Malice and Oppression in the Name of Science

/philosophy : the law : high crimes : oppression/ : malice which results in the oppression of a targeted segment of a population is measured inside three litmus elements. First, is the population at risk able to understand and make decisions with regard to the science, technology or any entailed mechanism of its risk? Second, has an interest group or groups crafted the process of science or science review and communication in an unethical fashion so as to steer its results and/or interpretation in a desired direction? Third, has a group sought to attack, unduly influence, intimidate or demean various members of society, media, government or the targeted group, as a means to enforce their science conclusions by other than appropriate scientific method and peer review?

I.  Have a group or groups targeted or placed a population at other than natural risk inside a scientific or technical matter

a. who bears a legitimate stakehold inside that matter

b. who can reasonably understand and make self-determinations inside the matter

c. whom the group(s) have contended to be illegitimate stakeholders, or as not meriting basic human rights or constitutionality with regard to the matter?

II.  Have these group(s) contracted for or conducted science methods, not as an incremental critical path means of investigation, but rather only as a means to

a. promote a novel technology, product, service, condition or practice which it favors, and

b. negate an opposing study or body of research

c. exonerate the group from reasonable liability to warn or protect the stakeholders at risk

d. exonerate the group from the burden of precaution, skepticism or followup scientific study

e. cover for past scientific mistakes or disadvantageous results

f. damage the reputation of dissenting researchers

g. influence political and legislative decisions by timing or extrapolation of results

h. pose a charade of benefits or detriment in promotion/disparagement of a market play, product or service

i. establish a monopoly/monopsony or to put competition out of business?

III.  Have these group(s) enlisted officers, directors, or managing agents, outside astroturf, undue influence, layperson, enthusiast, professional organization or media entities to attack, intimidate and/or disparage

a. stakeholders who are placed at risk by the element in question

b. wayward legislative, executive or judicial members of government

c. dissenting scientists

d. stakeholders they have targeted or feel bear the greatest threat

e. neutral to challenging media outlets

f. the online and social media public?

The Ruling Precedent (Verdict)

The sequence of questions posed by the Court to the Jury in the trial of Dewayne Johnson vs. Monsanto (applied generically as litmus/precedent):

Negligence

I.  Is the product or service set of a nature about which an ordinary consumer can form reasonable minimum safety expectations?

II.  Did the products or services in question fail to ensure the safety an ordinary consumer would have expected when used or misused in an intended or reasonably foreseeable way?

III.  Was the product design, formulation or deployment a contributor or principal contributing factor in causing harm?

IV.  Did the products or services bear potential risks that were known, or were knowable, in light of the scientific knowledge that was generally accepted in the scientific community at the time of their manufacture, distribution or sale?

V.  Did the products or services present a substantial danger to persons using or misusing them in an intended or reasonably foreseeable way?

VI.  Would ordinary citizen stakeholder users have recognized these potential risks?

VII.  Did the promoting agency or company fail to adequately warn either government or citizen stakeholders of the potential risks, or did they under represent the level of risk entailed?

VIII.  Was this lack of sufficient warnings a substantial factor in causing harm?

IX.  Did the promoter know or should it reasonably have known that its products or services were dangerous or were likely to be dangerous when used or misused in a reasonably foreseeable manner?

X.  Did the promoter know or should it reasonably have known that users would not realize the danger?

XI.  Did the promoter fail to adequately warn of the danger or instruct on the safe use of products or services?

XII.  Could and would a reasonable manufacturer, distributor, or seller under the same or similar circumstances have warned of the danger or instructed on the safe use of the products or services?

XIII.  Was the promoter’s failure to warn a substantial factor in causing harm?

Malice and Oppression

XIV.  Did the promoter of the products or services act with malice or oppression towards at-risk stakeholders or critical scientists or opponents regarding this negligence or the risks themselves?

XV.  Was the conduct constituting malice or oppression committed, ratified, or authorized by one or more officers, directors, or managing agents of the promoter, acting on behalf of promoter?

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “Malice and Oppression in the Name of Skepticism and Science” The Ethical Skeptic, WordPress, 28 Aug 2018; Web, https://wp.me/p17q0e-85F

 

