The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

Critical Attributes Which Distinguish the Scientific Method

The scientific method bears several critical attributes which distinguish it from both mere experiment and from its masquerade, the Lyin’tific Method. It behooves the ethical skeptic to understand the critical features which distinguish science from its pretense, and to maintain the skill of spotting social manipulation in the name of science.

The experimental method is a subset of the scientific method. There exists a distinct difference between these two protocols of science. The experimental method is oriented toward an incremental continuation of existing knowledge development, and accordingly begins with the asking of a question, bolstered by some quick research before initiating experimental testing. But not all, nor even the majority of, knowledge development can be prosecuted in this fashion of study. Under the scientific method, one cannot boast about possessing the information necessary to ask a question at the very start. Asking an uninformed question may serve to bias the entire process, or kill the research artificially without the full awareness of the sponsors or stakeholders. Accordingly, in the scientific method, a question is not asked until step 8, in an effort to avoid the pitfalls of pseudo-theory. This is purposeful, because the astute researcher often does not know, at the very beginning, the critical path necessary to reach his discovery. Science involves an intelligence development period wherein we ask: 1. what are the critical issues that need to be resolved? 2. what are the irrelevant factors we can ignore for now? and 3. how do I chain these issue resolutions into a critical path of knowledge development? In the absence of this process there exists a bevy of questions, wherein just selecting one and starting experiments is akin to shooting in the dark.

The materials physicist Percy Bridgman commented upon the process by which we ‘translate’ abstract theories and concepts into specific experimental contexts and protocols. Calling this work of reduction and translation ‘operationalism’, Bridgman cautioned that experimental data production is often guided by substantial presuppositions about the subject matter which arise as a part of this translation, and he often raised concern about the ways in which initial questions are formulated inside a scientific context. True science is a process which revisits its methodological constructs (modes of research method) as often as it does its epistemological (knowledge) ones. Accordingly, this principle identified by Bridgman is the foundation of the philosophy which clarifies the difference between the scientific method and the experimental method. It is unwise to consider the two as being necessarily congruent.1

The process of developing a scientific question is often daunting, involving commitment from a sponsor, a long horizon of assimilating observational intelligence, and persistence in seeking to establish a case for necessity. This necessity serves to introduce plurality of argument (see Ockham’s Razor), which can be brought before peers: advising peers who support the research and who assist in developing the construct being addressed into a workable hypothesis. These peers are excited to witness the results of the research as well.

Science is a process of necessity in developing taxonomic observation. It seeks to establish a critical path of continuously evaluated and incremental-in-risk conjecture, probing by means of the most effective inference method available the resolution of a query and its inevitable follow-on inquiry, in such manner that this process can be replicated and shared with mankind.

The Lyin’tific Method in contrast, will target one or more of these critical attributes to be skipped, in an effort to get to a desired answer destination in as expedient a manner as is possible – yet still appear as science.

The Critical Attributes of Science

Although science is indeed an iterative process, nonetheless true science, as opposed to general or developmental study, involves these critical path steps at the beginning of the scientific method:

  1. Observation
  2. Intelligence
  3. Necessity

Science thereafter is an iterative method which bears the following necessary features:

  1. Flows along a critical path of dependent, salient and revelatory observation and query
  2. Develops hypothesis through testable mechanism
  3. Is incremental in risk of conjecture (does not stack conjectures)
  4. Examines probative study in preference to reliable data
  5. Seeks reliable falsification over reliable inductive inference
  6. Seeks reliable consilience over reliable abductive inference
  7. Does not prematurely make a claim to consensus in absence of available deduction
  8. Shares results, next questions, next steps and replication guidance.

Social skeptics seek to deny the first three steps of science, along with routinely ignoring its necessary features. Social skeptics then further push the experimental method in place of the above attributes of science, asking a biased and highly uninformed question (also known in philosophy as rhetoric), while promoting science as nothing but exclusive club lab activity. Finally, they incorporate their corrupted version of ‘peer review’, wherein they seek to kill ideas before those ideas can be formulated into a hypothesis and be studied. This is a process of corruption.

Most unanswered questions reside in a state of quandary precisely because of a failure in or refusal to pursue the above characteristics of science.

Accordingly, the scientific method begins with a process of circumspection and skepticism, which is distinctly different from the inception of the much more tactical experimental method. To scoff at this distinction reveals a state of scientific illiteracy, and of never having done actual scientific research or discovery.

While both the experimental method and the scientific method are valid process descriptions applicable to science, there does exist an abbreviated version of the scientific method which sometimes slips past political agenda proponents and the mainstream press as valid: that method which is practiced in the pesticide and vaccine industries. It follows:

The Lyin’tific Method: The Ten Commandments of Fake Science

When you have become indignant and up to your rational limit over privileged anti-science believers questioning your virtuous authority and endangering your industry profits (pseudo-necessity), well then it is high time to undertake the following procedure.

1. Select for Intimidation. Appoint an employee who is under financial or career duress to create a company formed solely to conduct this study under an appearance of impartiality, and to then go back and live again comfortably in their career or retirement. Hand them the problem definition, approach, study methodology and scope. Use lots of Bradley Effect-vulnerable interns (as data scientists) and persons trying to gain career exposure and impress. Visibly assail any dissent as being ‘anti-science’; the study lead will quickly grasp the implicit study goal and will execute all this without question. Demonstrably censure or publicly berate a scientist who dissented on a previous study, and allow the entire organization/world to see this. Make him become the hate-symbol for your a priori cause.

2. Ask a Question First. Start by asking a ‘one-and-done’, noncritical-path, poorly framed, half-assed, sciencey-sounding question, representative of a very minor portion of the risk domain in question and bearing the most likely chance of obtaining a desired result, without any prior basis of observation, necessity, intelligence from stakeholders, or background research. Stress that the scientific method begins with ‘asking a question’. Avoid peer or public input before and after approval of the study design. Never allow stakeholders at risk to help select or frame the core problem definition, nor the data pulled, nor the methodology/architecture of study.

3. Amass the Right Data. Never seek peer input at the beginning of the scientific process (especially on what data to assemble), only at the end. Gather a precipitously large amount of ‘reliable’ data, under a Streetlight Effect, which is highly removed from the data’s origin and stripped of any probative context, such as an administrative bureaucracy database. Screen out data from sources which introduce ‘unreliable’ inputs (such as may contain eyewitness, probative, falsifying, disadvantageous anecdotal or stakeholder-influenced data) in terms of the core question being asked. Gather more data to dilute a threatening signal, less data to enhance a desired one. The number of records pulled is more important than any particular discriminating attribute entailed in the data. The data volume pulled should be perceptibly massive to laymen and the media. Ensure that the reliable source from which you draw data bears a risk that threatening observations will accidentally not be collected, through reporting, bureaucracy, process or catalog errors. Treat these absences of data as constituting negative observations.
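The closing instruction above, treating records which were accidentally never collected as if they were negative observations, has a quantifiable dilution effect. The following is a minimal sketch with wholly invented figures: a passive reporting database in which only a fraction of true events survive bureaucracy and catalog errors, yet every absent record is counted as an observed ‘no event’.

```python
# Hypothetical sketch: a passive administrative database where only a
# fraction of true adverse events are ever recorded, yet every person
# in the database is scored as an observed negative ("no event").

exposed = 100_000          # people in the administrative database
true_events = 1_000        # events that actually occurred (a 1.0% rate)
reporting_rate = 0.10      # only 10% of events survive the reporting pipeline

recorded_events = int(true_events * reporting_rate)

true_rate = true_events / exposed
apparent_rate = recorded_events / exposed   # absences treated as negatives

print(f"true rate:     {true_rate:.2%}")     # 1.00%
print(f"apparent rate: {apparent_rate:.2%}") # 0.10% -- a 10x dilution
```

Under these assumed numbers, a genuine 1% signal surfaces in the study as 0.1%, without anyone having to falsify a single record.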

4. Compartmentalize. Address your data analysts and interns as ‘data scientists’ and your scientists who do not understand data analysis at all, as the ‘study leads’. Ensure that those who do not understand the critical nature of the question being asked (the data scientists) are the only ones who can feed study results to people who exclusively do not grasp how to derive those results in the first place (the study leads). Establish a lexicon of buzzwords which allow those who do not fully understand what is going on (pretty much everyone), to survive in the organization. This is laundering information by means of the dichotomy of compartmented intelligence, and it is critical to everyone being deceived. There should not exist at its end, a single party who understands everything which transpired inside the study. This way your study architecture cannot be betrayed by insiders (especially helpful for step 8).

5. Go Meta-Study Early. Never, ever, ever employ study which is deductive in nature; rather, employ study which is only mildly and inductively suggestive (so as to avoid future accusations of fraud or liability), and of such a nature that it cannot be challenged by any form of direct testing mechanism. Meticulously avoid systematic review, randomized controlled trial, cohort study, case-control study, cross-sectional study, case reports and series, or reports from any stakeholders at risk. Go meta-study early, and use its reputation as the highest form of study to declare consensus, especially if the body of industry study from which you draw is immature and as early in the maturation of that research as is possible. Imply idempotency in the process of assimilation, but let the data scientists interpret other study results as they (we) wish. Allow them freedom in the construction of oversampling adjustment factors. Hide the methodology under which your data scientists derived conclusions from tons of combined statistics drawn from disparate studies examining different issues, whose authors were not even contacted to determine whether their study would apply to your statistical database or not.

6. Shift the Playing Field. Conduct a single statistical study which is ostensibly testing all related conjectures and risks in one fell swoop, in a different country or practice domain from that of the stakeholders asking the irritating question to begin with; moreover, with the wrong age group or a less risky subset thereof, cherry-sorted for reliability not probative value, or which is inclusion- and exclusion-biased to obfuscate or enhance an effect. Bias the questions asked so as to convert negatives into unknowns, or vice versa if a negative outcome is desired. If the data shows a disliked signal in aggregate, then split it up until that signal disappears; conversely, if it shows a signal in component sets, combine the data into one large Yule-Simpson effect. Ensure there exists more confidence in the accuracy of the percentage significance in measure (p-value) than in the accuracy/salience of the contained measures themselves.
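The split-or-combine maneuver named here is the Yule-Simpson effect. A minimal sketch, using the classic kidney-stone counts (well known in the statistics literature, used here purely as illustration), shows how the very same data yields opposite verdicts depending on whether it is examined by subgroup or pooled:

```python
# Classic illustrative counts for the Yule-Simpson effect:
# (successes, cases) per treatment arm, per subgroup.
groups = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, cases):
    return successes / cases

# Within every subgroup, treatment A outperforms treatment B...
for g, arms in groups.items():
    a, b = rate(*arms["A"]), rate(*arms["B"])
    print(f"{g}: A={a:.1%} vs B={b:.1%}")  # A wins in both subgroups

# ...yet pooled into one table, the inequality reverses.
tot = {arm: tuple(map(sum, zip(*(groups[g][arm] for g in groups))))
       for arm in ("A", "B")}
print(f"overall: A={rate(*tot['A']):.1%} vs B={rate(*tot['B']):.1%}")  # B wins
```

A is better for small stones (93.1% vs 86.7%) and for large stones (73.0% vs 68.8%), yet B looks better overall (82.6% vs 78.0%), because the arms treated very different mixes of cases. Whichever level of aggregation favors the desired answer is the one that gets published.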

7. Trashcan Failures to Confirm. Query the data 50 different ways and shades of grey, selecting for the method which tends to produce results favoring your a priori position. Instruct the ‘data scientists’ to throw out all the other data research avenues you took (they don’t care), especially if those avenues could aid in follow-on study which could refute your results. Despite being able to examine the data 1,000 different ways, only examine it in this one way henceforth. Peer review the hell out of any studies which do not produce a desired result. Explain any opposing ideas or studies as simply a matter of doctors not being trained to recognize things the way your expert data scientists did. If, as a result of too much inherent bias in these methods, the data yields an inversion effect, point out the virtuous component implied (our technology not only does not cause the malady in question, but we found in this study that it cures it~!).
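Querying the data 50 ways and keeping only the favorable query is the multiple-comparisons problem in action. A small standard-library simulation on pure noise (all parameters hypothetical) shows why at least one of 50 null queries will almost always come out ‘significant’ at p < 0.05:

```python
import math
import random

random.seed(7)

N_QUERIES = 50   # fifty different ways of slicing the data
N = 1000         # records examined per query
false_positives = 0

for _ in range(N_QUERIES):
    # Each "query" is pure noise: 1000 fair coin flips with no real effect.
    heads = sum(random.random() < 0.5 for _ in range(N))
    p_hat = heads / N
    z = (p_hat - 0.5) / math.sqrt(0.25 / N)  # normal approximation to binomial
    if abs(z) > 1.96:                        # "significant" at p < 0.05
        false_positives += 1

print(f"{false_positives} of {N_QUERIES} null queries came out 'significant'")
# With 50 independent looks, the chance of at least one false positive
# is 1 - 0.95**50, roughly 92% -- publish the winner, trashcan the rest.
```

Roughly one in twenty looks at noise clears the 0.05 bar by definition, so the probability that a determined analyst finds nothing at all across 50 looks is only about 8%.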

8. Prohibit Replication and Follow Up. Craft a study which is very difficult or impossible to replicate, does not offer any next steps nor serve to open follow-on questions (all legitimate study generates follow-on questions; yours should not), and most importantly, implies that the science is now therefore ‘settled’. Release the ‘data scientists’ back to their native career domains so that they cannot be easily questioned in the future. Intimidate organizations from continuing your work in any form, or from using the data you have assembled. Never find anything novel (other than a slight surprise over how unexpectedly good you found your product to be), as this might imply that you did not know the answers all along. Never base consensus upon deduction of alternatives, but rather upon how many science communicators you can have back your message publicly. Make your data proprietary. View science details as an activity of relative privation, not any business of the public.

9. Extrapolate and Parrot/Conceal the Analysis. Publish wildly exaggerated and comprehensive claims to falsification of an entire array of ideas and precautionary diligence, extrapolated from your single questionable and inductive statistical method (panduction). Publish the study bearing a title which screams “High risk technology does not cause (a whole spectrum of maladies) whatsoever”; do not capitalize the title, as that will appear more journaly and sciencey and edgy and rebellious and reserved and professorial. Then repeat exactly this extraordinarily broad-scope and highly scientific syllogism twice in the study abstract, first in baseless declarative form and finally in shocked revelatory and conclusive form, as if there were some doubt about the outcome of the effort (ahem…). Never mind that simply repeating the title of the study twice to constitute the entire abstract is piss-poor protocol; no one will care. Denialists of such strong statements of science will find it very difficult to gain any voice thereafter. Task science journalists to craft 39 ‘research articles’ derived from your one-and-done study; deem that now 40 studies. Place the 40 ‘studies’, both pdf and charts (but not any data), behind a registration approval and $40-per-study paywall. Do this over and over until you have achieved a number of studies and research articles which might fancifully be round-able up to ‘1,000’ (say 450 or so ~ see reason below). Declare Consensus.

10. Enlist Aid of SSkeptics and Science Communicators. Enlist the services of a for-hire public promotion gang, to push-infiltrate your study into society and media, to virtue signal about your agenda, and to attack those who dissent (especially the careers of wayward scientists). Have members make final declarative claims in one-liner form: “A thousand studies show that high risk technology does not cause anything!” ~ a claim which they could only make if someone had actually paid the $40,000 necessary to actually access the ‘thousand studies’. That way the general public cannot possibly be educated in any fashion sufficient to refute the blanket apothegm. This is important: make sure the gang is disconnected from your organization (no liability imparted from these exaggerated claims, nor from any inchoate suggested dark activities *wink wink), and moreover, that they are motivated by some social virtue cause such that they are stupid enough that you do not actually have to pay them.

To the media, this might look like science. But to a life-long researcher, it is nowhere near valid. It is pseudo-science at the least; and even worse than in the case of the paranormal peddler, it is a criminal felony and an assault against humanity. It is malice and oppression, in legal terms.

The discerning ethical skeptic bears this in mind and uses it to discern the sincere researcher from the attention grabbing poseur.

epoché vanguards gnosis

How to MLA cite this blog post =>
The Ethical Skeptic, “The Scientific Method Contrasted with The Experimental Method” The Ethical Skeptic, WordPress, 31 March 2018, Web; https://wp.me/p17q0e-7qG

March 31, 2018 | Posted in Ethical Skepticism

The Scientific Method is Not Simply The Experimental Method

Neither the Developmental Scientific Method nor the Experimental Method is necessarily wrong. But what those two subsets of the scientific method fail to address are several vital steps of Discovery Science Methodology. Our regard of the scientific method as simply a big lab experiment constitutes a logical fallacy, one which blinds and binds our professionals and emasculates our ability as a culture to address the key questions which face humanity today.

Search for the scientific method in Google and you will find an enormous amount of misinformation and conflation of the scientific method with the experimental method. This confusion is an example of well-meaning but sophomoric guild individuals or cabals attempting to explain, incorrectly, what science indeed is. I suppose that this is how these same people might describe the method of making love:

Making Love Method:

  1. Obtain a Naked Person
  2. Examine Various Body Parts
  3. Rub Genitals Together
  4. Ask if It Was Acceptable
  5. Exchange Phone Numbers

This is not making love. There is so much missing from what is happening here as to render this process invalid, despite its apparent correctness. This is a method touted by someone who has never made love.

In the same way, science is not an experiment; rather it is the process and body of knowledge development. And as such, its applied acumen resides to the greater degree outside the lab, not in it. Anyone who has managed a scientific research organization knows this. A team can refine an experimental insight only so many times, but if they have not asked the right question or obtained the right resources and data, then this is simply lots of activity executed by technicians masquerading as scientists. Regarding the scientific method as simply an extended experiment can leave it open to ineffectiveness at best, or even worse, manipulation by ill-meaning forces who seek to direct the body of predictive knowledge in certain directions (see Promotification below). Science demands that its participants be circumspect and prepared before they pretend to be competent at testing its first questions.

Wikipedia, in similar form, defines the “Scientific Method” as below (http://en.wikipedia.org/wiki/Scientific_method; extracted Apr 1, 2014).†  I have called it by its more accurate name here in red:

Scientific Method:                         (Developmental Scientific Methodology)

  1. Define a question
  2. Gather information and resources (observe)
  3. Form an explanatory hypothesis
  4. Test the hypothesis by performing an experiment and collecting data in a reproducible manner
  5. Analyze the data
  6. Interpret the data and draw conclusions that serve as a starting point for new hypothesis
  7. Publish results
  8. Retest (frequently done by other scientists)

†Please note that Wikipedia has removed its older definition of ‘scientific method’ which began with ‘Define a Question’ as the first step (as shown above), and has replaced it with ethical skepticism’s ‘Conduct Observation’ instead. This is a major breakthrough, and while it is a remote stretch to imply that The Ethical Skeptic contributed to this change, it has nonetheless taken time and activism on the part of real researchers just like us to supersede the false version of the scientific method, formerly taught by social skeptics over the last six decades. They have yet to add in the steps of ‘Frame Intelligence’ and ‘Establish Necessity’ (Ockham’s Razor) before asking a question, but this is a step in the right direction. The Ethical Skeptic is very pleased with this. This evolution is part of the contribution to the disintegration of the social skepticism movement currently underway.

While this step series is generally correct and close, it actually represents only an expansive form of the Experimental Method, and focuses on Developmental Science only. In other words, what Wikipedia and its academic authors have defined here is what one does to improve our knowledge of existing and established paradigms, in highly controlled environments, and in cases where we already know what question to ask in Step 1 above, Define a Question. This is simply a method of refining existing knowledge, focused essentially on technology development. And that is indeed a valid approach, since what are we going to do if we cannot turn our science into beneficial application? Certainly a large part of science necessarily centers around this diligent technical incrementalism and existing-paradigm development process.

But this is NOT the scientific method. It is a PART of the scientific method, one more focused and centered on the specific procedure the authors learned in school: that of the Experimental Method (below, mostly courtesy of Colby College (http://www.colby.edu/biology/BI17x/expt_method.html)). To be fair, Wikipedia does address this issue in part of its excellent writeup on Experimental Methodology (http://en.wikipedia.org/wiki/Experiment) and its ethical employment and limitations.

Experimental Method:                    (Look Familiar?)

  1. Ask a Question
  2. Form a Hypothesis
  3. Define a Test/Variables
  4. Perform an Experiment
  5. Analyze the Results
  6. Draw a Conclusion
  7. Report Results

But what if we do not have the necessary set of observations which could educate us to even know what question to ask in the first place? What if, by asking the question as the first step, we bias the participants or the outcome, or blind ourselves to the true experimental domain entailed? What if we were able to conduct a series of initial falsification tests in the early data sets, which would preclude an entire series of predictive tests in the classic developmental methodology later? Moreover, what if we did not know because we collected the wrong data, all because we asked the wrong questions to begin with, or failed to learn from past mistakes made by competitors on the subject? These confusing challenges are common to every lab in a variety of industry verticals.

But Science is About Discovery, not Simply Incremental Development of Current Paradigms

In some of my labs in the past, when we made major breakthroughs, or turned an eight-month research program into a three-week discovery process, we did not employ the above process as expressed by academia and Wikipedia. We took a step back and asked three important circumspect questions which differentiate scientists from lab technicians, and which occur commensurate with Discovery Science Methodology, Step 2 – Intelligence/Aggregation of Data:

Three Critical Questions Scientists Ask When They Really Want an Answer:

  1. What is it that we do not know, that we do not know?
  2. What should we test and/or statistically aggregate and analyze before we boast that we can competently ask the question?
  3. What missteps have we or our competitors made to date?

The Wikipedia Developmental Science Methodology presumes that there is only a small set (s) of the unknown, and that our task is simply to fill in that (s) blank. In discovery science this presumption of the small unknown is incorrect, as it embodies a version of the Penultimate Set Fallacy. In Discovery Science Methodology, the key is that we do not necessarily have all the information we need; even more importantly, we might not even be aware that we are not equipped to boast that we can suitably ask the right question. Proceeding in such a disadvantageous state under the Developmental Science process would be akin to searching for Jimmy Hoffa by starting in one’s living room. It is clear that science has woefully undersold the role of the aggregation of data (Step 2 – Intelligence/Aggregation of Data; The Three Key Questions, above). Mike Huerta, Associate Director of the US National Library of Medicine at the National Institutes of Health, cites data as the weakness of science in the biomedical field. This is probably the richest mine of field scientific data known to man; yet still, even there the weak link in the scientific method continues to be the data, the first half of the Discovery Science Methodology process.

I think a lot of people haven’t thought broadly about the benefits that will come from data sharing. Once we have a comprehensive set of information about data, as we do about the scientific literature, it’ll let us start looking at the landscape of biomedical research from a different perspective. It will give us another metric for assessing science and progress and it will allow us to find data that might be useful for any of a number of scientific purposes. Now, for most biomedical research the public products are the conclusions and the interpretations about data, and those conceptual aspects are probably the most fragile part of that scientific process.

   ~ Mike Huerta, Associate Director of the US National Library of Medicine (http://blogs.nature.com/scientificdata/2014/08/12/data-matters-interview-with-mike-huerta/)

Without data availability, the researcher cannot even hope to ask the right question, and must simply start by acting all busy. Well yes, you can search your living room for Jimmy Hoffa, and certainly perform developmental investigation there, but you are really only performing those activities to which you are accustomed. You will see many a SSkeptic performing this type of ‘scientific inquiry.’ They have not looked at the broad set of data, nor have they asked the right question to begin with, sometimes purposefully, desiring only to tender the appearance of doing science: getting themselves photographed naked, and pretending that they were in the process of making love.

Knowing how to ask the right question, if approached properly, can turn years of potentially misleading predictive study (Promotification) into a much shorter timeframe and more productive falsification-based conclusions. This process is neither deliberated nor executed in the lab. Much of what is considered “pseudoscience” as a subject suffers from this sleight-of-hand shortfall, through targeting by fake SSkeptics. By not knowing how to ask the right question, one can fall susceptible to a pseudoscience called Promotification:

Promotification – One or a series of predictive experiments touted as scientific, yet employed in such a fashion as to mislead, impugn, obfuscate or delay. Deception or incompetence wherein only predictive testing methodology was undertaken in a hypothesis reduction hierarchy, when more effective falsification pathways or current evidence were readily available but were ignored.

Through asking the wrong question, power is sublimed from the hands of science and into the hands of those who do not desire an answer.

Karl Popper, one of the greatest philosophers of science of the 20th century, goes even further in condemning the employment of Promotification:

“Science does not make progress by confirmation of hypotheses, because confirmatory evidence is too easy to find.”

When you regard the ethical impact of promoting ideas or asking wrong questions, or when one or more of the below steps is skipped or placed in the wrong order in a discovery science context, this can be an indication that SSkeptical Tradecraft is underway. It behooves the discovery science researcher to be fully cognizant and circumspect regarding the influences of SSkepticism in the answers he is handed. One does not even have to manage a lab; one might simply be addressing a tough question. If one is actually being held accountable by an external body of oversight, such as a board of directors who want results, not status quo and protocol, then often necessity drives this as the true scientific method:

DISCOVERY SCIENCE METHODOLOGY:

1.  OBSERVATION

2.  INTELLIGENCE/AGGREGATION OF DATA (The Three Key Questions)

3.  NECESSITY

4.  CONSTRUCT FORMULATION

5.  SPONSORSHIP/PEER INPUT (Ockham’s Razor)

6.  HYPOTHESIS DEVELOPMENT

7.  PREDICTIVE TESTING

8.  COMPETITIVE HYPOTHESES FRAMING (ASKING THE RIGHT QUESTION)

9.  FALSIFICATION TESTING

10.  HYPOTHESIS MODIFICATION

11.  FALSIFICATION TESTING/REPEATABILITY

12.  THEORY FORMULATION/REFINEMENT

13.  PEER REVIEW (Community Vetting)

14.  PUBLISH

15.  ACCEPTANCE


Just as corruption produces human suffering, in similar fashion, Tradecraft SSkepticism produces cultivated ignorance

How do I know that SSkeptics fully acknowledge this process as constituting the full scientific method? Because they skillfully and adeptly know how to manipulate the steps of this process such that specific desired outcomes and conclusions are produced. It is a method of corruption, not unlike that which the ministers of a country might employ through the gaming of laws, policies and bureaucracy, such that they and their cronies are enriched in the process of legislation. Just as corruption produces human suffering, in similar fashion, Tradecraft SSkepticism produces cultivated ignorance.

The icon on the right will act as my post signature icon, tagging the entire series of posts on Tradecraft SSkepticism. This tag will apply when the post specifically depicts ways in which Social Skepticism manipulates, through Tradecraft, academia, media, science, scientists and their Cabal faithful; spinning a false representation of the reality which encompasses the nature of man and our realm. This icon will be affixed on the top right hand side of such posts. 🙂

SSkeptics are fully aware of how to obviate, block access to, or eliminate any or all of the above steps as a means to a specific end. They intimidate researchers in specific embargoed subjects who are involved in, or seriously considering, any activity under Steps 1–7. Further, SSkeptics pretend that they are the only ones sufficiently equipped to ask the question framed in Step 8, which constitutes an extraordinary boast. By skipping Steps 1–7, SSkeptics are able to socially circumvent sound science and posit the wrong question as the first step. Amateur researchers rarely catch this sleight-of-hand which has been foisted on them, while the public just nods in wide-eyed resignation. Scientists who understand this know that they are to keep quiet. This asking of the wrong question ensures a flawed execution of the scientific method, such that Steps 9–15 never have a realistic set of hypotheses to test. So, SSkeptics DO understand this, the full scientific method. All too well.

April 1, 2014 | Posted in Ethical Skepticism, Tradecraft SSkepticism
