How and Why We Know What We Know

Undertaking the (ethical) skepticism of understanding how and why we know what we know is not tantamount to usurping scientists at their job. It is not a philosophy of ‘doing the science all over again, in order to believe it’. Such prattle is an agency-lined straw man. Nonetheless, the question arises for concerned citizen and science communicator alike, ‘How do we know what we know, and why then should it be regarded as science?’

Rights come with responsibilities. The right to pursue profit in the name of science comes commensurate with the responsibility to execute such activity under the burden of unadulterated public scrutiny. In complement, the right of a common citizen stakeholder to have a say in the deployment of science and technology upon society comes commensurate with the duty to understand how and why we know what we know.

We are not afraid to entrust the American people with unpleasant facts, foreign ideas, alien philosophies, and competitive values. For a nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people. ~ John F. Kennedy

Undertaking the (ethical) skepticism of developing an understanding of how we know what we know is not the same thing as usurping scientists at their job. It is not tantamount to ‘doing the science all over again, before we believe it.’ Such are straw man framings lined with agency. Nonetheless, the question arises then, ‘How do we know what we know, and why then should it be regarded as science?’ How can we differentiate claims such as ‘The Earth is a sphere’, from ‘Dioxin is safe when employed under appropriate application protocols’ or ‘vaccines do not cause cerebral injury’? You might even see a person walking down the grocery store aisle wearing a T-shirt which says ‘We landed on the Moon, The Earth is Not Flat, Chemtrails are Not a Thing, Dioxin is Safe’. Well, the first thing to note is that truth does not come in bundles like this. Any person wearing such a t-shirt is scientifically illiterate to begin with. Lies and abuse come pork-barreled, always.

But what if the discernment of truth is not as easy for the average member of the public as is this visceral t-shirt example? What questions should we as public stakeholders and ethical skeptics – those who bear a love for their neighbor and their children – ask in order to discern strong science from that science which is developed under a social masquerade? Now that we live in an era where the media censors voices which issue caution around science it favors, how do we tell valid plenary science from its agency-infused imposter? When a single scientific study is being touted as the basis for truth, the ethical skeptic must bear in mind the disciplines of science. The following seven questions most often apply.

    1. How near is the researcher to the topic?
    2. How new is the topic and do we really understand much about it?
    3. Did the researcher stay focused and constrained on a single relevant aspect of the topic?
    4. Did the researcher conduct observations which would serve to fully inform, or to only partially inform?
    5. Does the work support the researchers’ conclusions, or did they bundle, exaggerate or extrapolate its results?
    6. Was the researcher’s primary objective to intimidate an issue into closure and embargo?
    7. Is the researcher influenced by agency of any kind?

The bottom line: If you are a citizen, you need to be asking these seven questions about the science being foisted upon you. It arrives every single media-propaganda operating day. If you are a science communicating journalist (or even a scientist), and you don’t understand these questions nor why they are important, then you are pretending at your craft.

Discerning Plenary Science from Imposter Science – Seven Questions

How near is the researcher to the topic?

1. Is the researcher close to the observation domain? Would you trust an intern with two years’ experience in molecular biology to scour 10,000 studies and determine what the opinions of the study authors were regarding a particular highly-charged political question? I would not. But this happens every day inside so-called ‘meta-studies’. Highly compensated science celebrities, with little database and information technology skill, rarely possess the resources necessary for such detailed work. Young professionals often do, however. They know that they will get ahead by simply tendering the results their mentor desires. The arrogance of extrapolation is then added by means of the pseudo-wisdom that ‘meta-analysis is the most rigorous form of scientific study’.1 What a load of hogwash. If a researcher conducts their first study out of the gate as a massive ‘statistical analysis’, and has never interviewed parents, doctors, kids, or victims, never conducted decades of any direct physical observation at all, the odds of their statistical study bearing any gravitas are very low. If it goes straight to the media as ‘finished science’ thereafter, that broaches a case of court-defined oppression in the name of science.

See: Distinguishing Scientific from Academic Study

See The Lyin’tific Method: The Ten Commandments of Fake Science

How new is the topic and do we really understand much about it?

2. How large is the relevant subject domain relative to what we have currently observed of it? A four-year-old can probably inform me as much about Heaven as can an 80-year-old (note: ‘Heaven’ being defined as ‘all that is, but remains unknown’). The reason for this is that the topic of Heaven is such a large domain that one lifetime does not serve to place even a dent into its discernment. More tactically, inside topics such as Dark Matter versus Quantum Interference we have only begun to scratch the surface of the necessary understanding – I would be remiss to attempt to enforce a conclusion therein. This demonstrates the folly of certainty expressed inside a domain of substantive unknown. Similarly, were I to encounter a researcher who was sure he knew everything which was needful concerning Fast Radio Bursts, and had concluded exactly what they are – given that we have detected them for less than a decade, and only for a few milliseconds at a time (and in only two instances wherein they repeat) – then I would not regard that researcher’s contentions as being plenary science.

See: The Fermi Paradox is Babysitting Rubbish

See: The Map of Inference

Did the researcher stay focused and constrained on a single relevant aspect of the topic?

3. Did the researcher employ an actual critical path scientific hypothesis? Was there an actual necessity which drove the study? Were its terms well defined and disciplined? Was it conservative in its reach and implication? Did it address the critical path of prior art on the topic and inform its audience of any contrast or corroboration regarding that prior art? Did it combine a robust set of intelligence data with a set of direct observations and probative tests? Did it establish a proposed mechanism at risk, and seek to hold that contention up to the light of accountability? It does not matter whether or not a study uses p-values to test its statistical hypothesis against the null hypothesis – if the preceding questions are not answered in the positive, one should be concerned that one is examining propaganda, and not science.

See: Reduction: A Bias for Understanding

See: The Elements of Hypothesis

See: Qualifying Theory and Pseudo-Theory
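The point above about p-values deserves a concrete illustration. The sketch below uses simulated (hypothetical) data and a crude normal-tail approximation in place of a proper t-test: it runs a thousand comparisons between two groups drawn from the same population, and roughly five percent come back ‘significant’ at p < 0.05 by chance alone. A low p-value, absent a disciplined hypothesis asked in advance, is arithmetic – not science.

```python
import math
import random
import statistics

# Illustration with simulated (hypothetical) data: run many comparisons
# between two groups drawn from the SAME population. By chance alone,
# roughly 5% will appear 'significant' at p < 0.05 - which is why a low
# p-value, by itself, establishes nothing.

random.seed(42)

def approx_two_sample_p(a, b):
    """Crude two-sided z approximation to a two-sample test
    (adequate for illustration, not for real inference)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail

trials, hits = 1000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]  # same population as a
    if approx_two_sample_p(a, b) < 0.05:
        hits += 1

print(f"'Significant' findings from pure noise: {hits} of {trials}")
```

A researcher who ran these thousand comparisons and published only the fifty ‘hits’ would have a statistically pristine study and no science whatsoever.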

Did the researcher conduct observations which would serve to fully inform, or to only partially inform?

4. Was an appropriate study design employed? Did the study author use a longitudinal data study when a directly observed cohort study would better differentiate the critical argument at hand? Did the study authors avoid deductive study in favor of statistical induction, because it cost less or bore less risk of deriving a controversial result? Did the study authors sample an inordinately large population as their first and only data analysis, never establishing protocols to mitigate the Yule-Simpson effect? Did the authors study two cohorts which were differentiated only by a trivial factor, and then declare that a major differentiating factor was actually studied by this method? Did the study authors ask the right question and do the appropriate background research? Did the abstract summarize the results, or simply preach about other study results as a pretext – or about how this study was not necessary but anti-science forces mandated that it be done? Did the study contain a ‘Limitations and Qualifications’ section? Was the study passed straight to the media upon completion?

See: Interrogative Biasing: Asking the Wrong Question in Order to Get the Right Answer

See: Torfuscation – Gaming Study Design to Effect an Outcome
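The Yule-Simpson effect mentioned above is worth seeing in miniature. The sketch below uses hypothetical cohort counts: treatment arm A outperforms arm B within every stratum, yet loses in the pooled aggregate – exactly the reversal that an ‘inordinately large population’ analysis, run without stratification protocols, will never detect.

```python
# Hypothetical cohort counts illustrating the Yule-Simpson effect:
# stratum -> {treatment arm: (successes, total)}
data = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within every stratum, arm A outperforms arm B ...
for stratum, arms in data.items():
    ra, rb = rate(*arms["A"]), rate(*arms["B"])
    print(f"{stratum:>6}: A={ra:.0%}  B={rb:.0%}  A wins: {ra > rb}")

# ... yet pooling the strata (the large-population view) reverses it,
# because stratum sizes are confounded with treatment assignment.
pooled = {"A": (0, 0), "B": (0, 0)}
for arms in data.values():
    for arm, (s, t) in arms.items():
        ps, pt = pooled[arm]
        pooled[arm] = (ps + s, pt + t)

ra, rb = rate(*pooled["A"]), rate(*pooled["B"])
print(f"pooled: A={ra:.0%}  B={rb:.0%}  A wins: {ra > rb}")
```

This is why the choice of study design is itself an ethical act: pooling can be selected precisely because it buries the stratum-level signal.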

Does the work support the researchers’ conclusions, or did they bundle, exaggerate or extrapolate its results?

5. What type and mode of inference was drawn? Did the researcher seem to be aware of both the type (vertical axis of the Map of Inference) as well as the mode (horizontal axis of the Map of Inference) of inference being drawn? Was methodical deescalation performed? Did the researcher seek to bundle-associate their scientific question with other, more established ones, or with manifestations of virtue and correctness (did they wear the t-shirt)? Did the researcher use a small question inside a very large domain and then extrapolate its results to apply to that entire domain? Did the researcher claim that something was ‘not’ or that something ‘did not exist’ – when good argument has been made as to that entity’s possible reality? Did the researcher understand the difference between plurality and proof, under Ockham’s Razor?

See: The Map of Inference

See: The Three Types of Reason

Was the researcher’s primary objective to intimidate an issue into closure and embargo?

6. Did the researcher appear to distinguish a claim to proof from one of simple plurality? A very common philosophical sleight-of-hand proceeds along this line: ‘I doubt, therefore my preferred and implicit conclusion is true.’ Its proponent will not admit this false syllogism at face value, however – watch for this tacit contention, as this is a key tactic of fake skeptics. In similar fashion, always ask yourself, does the researcher involved make it clear what they are actually claiming? Are they suggesting that an argument be opened up for further deliberation, or are they suggesting that the entire issue is now closed? Even worse, are they appealing to an embargo of all competing ideas? Fake skeptics will gravitate towards the latter two. The burden of evidence in the case of the latter two far outweighs the burden of evidence sufficient for the former. A curious skeptic should regard with keen interest any petition for plurality – the idea that ‘an additional explanation should now be considered, and here is why’ – versus the resistance they should tender in the face of a claim to conclusivity, especially premature conclusivity. Fake skeptics and fake scientists use such conclusivity as a method of embargo of ideas they dislike. Watch for this form of chicanery.

See: Ethical Skepticism – Part 5 – The Real Ockham’s Razor

See: Embargo of The Necessary Alternative is Not Science

Is the researcher influenced by agency of any kind?

7. Is the researcher well known for publishing studies of this expected result in the first place? Would the researcher now be negatively perceived if they showed a different result? Agency is different from either conflict of interest or bias. It is actually stronger than either, and more important in its detection. Especially when a denial is involved, the incentive to double-down on that denial, in order to preserve office, income or celebrity, is larger than either bias or nominal conflict of interest. Did the study start in the abstract by targeting or blaming disliked persons or ideas? If the researcher had found the antithesis to be indeed the case, would they have lost their job or community respect? Was the study funded by a group who stood to gain revenue or political power from the study’s touted implication? Is the study author well known for a position upon which they would need to double-down, if other threatening studies were to surface? Then perhaps you are witnessing agency (not simply bias) at play inside the process of supposed science.

See: A Handy Checklist for Distinguishing Propaganda from Actual Science

See: Epoché and The Handedness of Information

Although this responsibility is of course simple in its expression as seven specific questions, it can be rather complex in regard to its actual execution. Ethical skepticism is a life journey, and not a light undertaking by any means. One does not have to grasp all of this inside one day. Nonetheless, it is our duty. War is never fought casually, nor should it be. We are in just such a battle for mind, power, money and influence.

Arm yourself accordingly, ethical skeptic.

The Ethical Skeptic, “How and Why We Know What We Know”; The Ethical Skeptic, WordPress, 3 Apr 2019; Web, https://wp.me/p17q0e-9xs

  1. Duke University: Introduction to Evidence-Based Practice : Types of Studies; https://guides.mclibrary.duke.edu/ebmtutorial/study-types