The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

Discerning Sound from Questionable Science Publication

Non-replicable meta-analyses published in tier I journals do not constitute the preponderance of good source material available to the more-than-casual researcher. This faulty idea stems from a recently manufactured myth on the part of social skepticism. Accordingly, the life-long researcher must learn techniques beyond the standard pablum pushed by social skeptics; discerning techniques which will afford them a superior ability to tell good science from bad – through more than simply shallow cheat sheets and publication social-ranking classifications.
The astute ethical skeptic is very much this life-long and in-depth researcher. For him or her, ten specific questions can serve to elucidate this difference inside that highly political, complicated and unfair playing field called science.

Recently, a question was posed to me by a colleague concerning the ability of everyday people to discern good scientific work from dubious efforts. A guide had been passed around inside her group, a guide which touted itself as a brief on 5 key steps inside a method to pinpoint questionable or risky advising publications. The author cautioned appropriately that “This method is not infallible and you must remain cautious, as pseudoscience may still dodge the test.” He failed of course to mention the obvious additional risk that the method could serve to screen out science which either 1) is good but cannot possibly muster the credential, funding and backing to catch the attention of crowded major journals, or 2) is valid, yet is also screened by power-wielding institutions which could have the resources and connections, as well as possible motive, to block research on targeted ideas. The article my friend’s group was circulating constituted nothing but a Pollyanna, wide-eyed and apple-pie view of the scientific publication process – one bereft of the scarred knuckles and squint-eyed wisdom requisite in discriminating human motivations and foibles.

There is much more to this business of vetting ideas than simply identifying the bad people and the bad subjects. More than simply crowning the conclusions of ‘never made an observation in my life’ meta-analyses as the new infallible standard of truth.

Scientific organizations are prone to the same levels of corruption, bias, greed, desire to get something for as little input as possible, as is the rest of the population. Many, or hopefully even most, individual scientists buck this mold certainly, and are deserving of utmost respect. However, even their best altruism is checked by organizational practices which seek to ensure that those who crave power, are dealt their more-than-ample share of fortune, fame and friar-hood. They will gladly sacrifice the best of science in this endeavor. And in this context of human wisdom it is critical that we keep watch.

If you are a casual reader of science, say consuming three or four articles a month, then certainly the guidelines outlined by Ariel Poliandri below, in his blog entitled “A guide to detecting bogus scientific journals”, represent a suitable first course on the menu of publishing wisdom.¹ In fact, were I offered this as the basis of a graduate school paper, it would be appropriately and warmly received. But if this is all you had to offer the public after 20 years of hard fought science, I would aver that you had wasted your career therein.

1 – Is the journal a well-established journal such as Nature, Science, Proceedings of the National Academy of Sciences, etc.?
2 – Check authors’ affiliations. Do they work in a respectable University? Or do they claim to work in University of Lala Land or no university at all?
3 – Check the Journal’s speciality and the article’s research topic. Are the people in the journal knowledgeable in the area the article deals with?
4 – Check the claims in the title and summary of the article. Are they reasonable for the journal publishing them?
5 – Do the claims at least make sense?

The above process suffers from a vulnerability in that it hails only science developed under what is called a Türsteher Mechanism, or bouncer effect – a process which produces a sticky but unwarranted prejudice against specific subjects. The astute researcher must ever be aware of the presence of this effect; an awareness which rules out the above 5 advisements as being sufficient.

Türsteher Mechanism

/philosophy : science : pseudoscience : peer review bias/ : the effect or presence of ‘bouncer mentality’ inside journal peer review. An acceptance for peer review which bears the following self-confirming bias flaws in process:

  1. Selection of a peer review body is inherently biased towards professionals whom the steering committee finds impressive,
  2. Selection of papers for review fits the same model as was employed to select the reviewing body,
  3. Selection of papers from non core areas is very limited and is not informed by practitioners specializing in that area, and
  4. An inability to handle evidence that is not gathered in the format it understands (large-scale, hard-to-replicate, double-blind randomized clinical trials or meta-studies).

Within such a process, the selection of initial papers is biased. Under this flawed process, the need for consensus results in not simply attrition of anything that cannot be agreed upon – but rather a sticky bias against anything which has not successfully passed this unfair test in the past. An artificial and unfair creation of a pseudoscience results.

This above list by Mr. Poliandri represents simply an untenable way to go about vetting your study and resource material – one which ensures that only pluralistic ignorance influences your knowledge base. It is lazy – sure to be right and safe – useless advisement, to a true researcher. The problem with this list resides inside some very simple industry realities:

1.  ‘Well-established journal’ publication requires sponsorship from a major institution. Scientific American cites that 88% of scientists possess no such sponsorship, and this statistic has nothing to do with a scientific group’s relative depth in its subject field.² So this standard, while useful for the casual reader of science, is not suitable at all for one who spends a lifetime of depth inside a subject. This would include, for instance, a person studying factors impacting autism in their child, or persons researching the effect of various supplements on their health. Not to mention of course, the need to look beyond this small group of publications applies to scientists who spend a life committed to their subject as well.

One will never arrive at truth by tossing out 88% of scientific studies right off the bat.

2.  Most scientists do not work for major universities. Fewer than 15% of scientists ever get to participate in this sector even once in their career.² This again is a shade-of-gray replication of the overly stringent filtering bias recommended in point 1 above. I have employed over 100 scientists and engineers over the years, persons who have collectively produced groundbreaking studies. For the most part, none ever worked for a major university. Perhaps 1 or 2 spent a year inside university-affiliated research institutes. Point 2 is simply a naive standard which can only result in filtering out everything except what one is looking for. One must understand that, in order to survive in academia, one must be incrementally brilliant and not what might be even remotely considered disruptively brash. Academics bask in the idea that their life’s work and prejudices have all panned out to come true. The problem with this King Wears No Clothes process is that it tends to stagnate science, and not provide the genesis of great discovery.

One will never arrive at truth by ignoring 85% of scientists, right off the bat.

3.  There are roles for both specialty journals and generalized journals. There is a reason for this, and it is not to promote ‘bogus pseudoscience’ as the blog author implies (note his context-framing statement in quotes above). A generalized journal maintains resource peers to whom it issues subject matter for review. Such journals are not claiming peer evaluation to be their sole task. Larger journals can afford this, but not all journals can. Chalk this point up to naivete as well. Peer review requires field qualification; journal publication, in general, does not necessarily. Sometimes they are one and the same, sometimes not. Again, if this is applied without wisdom, such naive discrimination can result in a process of personal filtering bias, and not stand as a suitable standard identifying acceptable science.

One will never arrive at truth by viewing science peer review as a sustainable revenue club. Club quality does not work.

4.  Check for the parallel nature of the question addressed in the article premise, methodology, results, title and conclusion. Article writers know all about the trick of simply reading abstracts and summaries. They know 98% of readers will only look this far, or will face the requisite $25 to gain access beyond the abstract. If the question addressed is not the same throughout, then there could be an issue. As well, check the expository or disclosure section of the study or article. If it consists, even in part, of a polemic focusing on the bad people, or the bad ideas, or the bad industry player – then the question addressed in the methodology may have come from bias in the first place. Note: blog writing constitutes this type of writing. A scientific study should be disciplined to the question at hand, be clear on any claims made, as well as any preliminary disclosures which help premise, frame, constrain, or improve the predictive nature of the question. Blogs and articles do not have to do this; however, neither are they scientific studies. Know the difference.

Writers know the trick – that reviewers will only read the summary or abstract. The logical calculus of a study resides below this level. So authors err toward favoring established ideas in abstracts.

5.  Claims make sense with respect to the context in which they are issued and the evidence by which they are backed. Do NOT check to see whether you believe the claims, or whether they make some kind of ‘Occam’s Razor’ sense. This is a false standard of ‘I am the science’ pretense taught by false skepticism. Instead, understand what the article is saying and what it is not saying – and avoid judging the article based on whether it says something you happen to like or dislike. We often call this ‘sense’ – and incorrectly so. It is bias.

Applying personal brilliance to filter ideas, brilliance which you learned from only 12% of publication abstracts and 15% of scientists who played the game long enough – is called: gullibility.

It is not that the body of work vetted by such criteria is invalid; rather simply that to regard science as only this is short-sighted and bears fragility. Instead of these Pollyanna 5 guidelines, the ethical skeptic will choose to understand whether or not the study or article in question is based upon standards of what constitutes good Wittgenstein and Popper science. This type of study can be conducted by private-lab or independent researchers too. One can transcend the Pollyanna 5 questions above by asking the ten simple questions regarding any material – outlined in the graphic at the top of this article. Epoché is exercised by keeping their answers in mind, without prejudice, as you choose to read onward. Solutions to problems come from all levels and all types of contributors. This understanding constitutes the essence of wise versus naive science.

“Popper holds that there is no unique methodology specific to science. Science, like virtually every other human, and indeed organic, activity, Popper believes, consists largely of problem-solving.”³

There are two types of people, those who wish to solve the problem at hand, and those who already had it solved, so it never was a problem for them to begin with, rather simply an avenue of club agenda expression or profit/career creation.

Let’s be clear here: If you have earned tenure as an academic or journal reviewer or a secure career position which pays you a guaranteed $112,000 a year, from age 35 until the day you retire, this is the same as holding a bank account with $2,300,000 in it at age 35† – even net of the $200,000 you might have invested in school. You are a millionaire. So please do not advertise the idea that scientists are all doing this for the subject matter.

$2.3 million (or more in sponsorship) is sitting there waiting for you to claim it – and all you have to do is say the right things, in the right venues, for long enough.
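The † footnote's arithmetic is an ordinary-annuity present value. A minimal sketch reproducing it, using the footnote's own inputs (456 monthly payments of $9,333 at 0.25% per period): the gross value computes to roughly $2.5 million, or about $2.3 million net of the $200,000 in schooling.

```python
# Present value of an ordinary annuity: PV = P * (1 - (1 + r)**-n) / r
# Inputs from the footnote: 456 monthly payments of $9,333 at 0.25%
# interest per payment period, zero ending balance.
def present_value(payment, rate, periods):
    """Discount a stream of equal future payments back to today."""
    return payment * (1 - (1 + rate) ** -periods) / rate

gross = present_value(payment=9_333, rate=0.0025, periods=456)
net_of_school = gross - 200_000

print(f"gross: ${gross:,.0f}")                  # roughly $2.5 million
print(f"net of school: ${net_of_school:,.0f}")  # roughly $2.3 million
```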

This process of depending solely on tier I journals – is an exercise in industry congratulationism. There has to be a better way to vet scientific study, …and there is. The following is all about telling which ilk of person is presenting an argument to you.

The Ten Questions Differentiating Good Science from Bad

Aside from examining a study’s methodology and logical calculus itself, the following ten questions are what I employ to guide me as to how much agenda and pretense has been inserted into its message or methodology. There are many species of contention; eight at the least if we take the combinations of the three bisected axes in the graph to the right; twenty-four permutations if we take the sequence in which the logic is contended (using falsification to promote an idea versus promoting the idea that something ‘will be falsified under certain constraints’, etc.). In general, what I seek to examine is an assessment of how many ideas the author is seeking to refute or promote, with what type of study, and with what inductive or deductive approach. An author who attempts to dismiss too many competing ideas, via a predictive methodology supporting a surreptitiously promoted antithesis, which cannot possibly evaluate a critical theoretical mechanism – this type of study or article possesses a great likelihood of delivering bad science. Think about the celebrity skeptics you have read. How many competing ideas are they typically looking to discredit inside their material, and via one mechanism of denial (usually an apothegm and not a theoretical mechanism)? The pool comprises 768 items – many to draw from – and draw from this, they do.
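The eight base species come from crossing the three bisected axes named in questions III–V below; a minimal enumeration (the axis labels are my paraphrase of those questions – the further counts of 24 and 768 depend on sequencing conventions the text does not fully specify):

```python
from itertools import product

# The three bisected axes of contention (paraphrased from questions III-V):
axes = [
    ("falsification", "prediction"),    # z-axis: how the mechanism is tested
    ("denial", "promotion"),            # x-axis: what is done with the idea(s)
    ("single idea", "group of ideas"),  # y-axis: how many ideas are in scope
]

# Crossing three two-valued axes yields 2**3 = 8 base species of contention.
species = list(product(*axes))
print(len(species))  # 8
```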

Let’s be clear here – a study can pass major journal peer review and possess acceptable procedural/analytical methodology – yet say or implicate absolutely nothing for the most part, ultimately being abused (or abusing its own research by extrapolating its reach) to say things which the logical calculus involved would never support (see Dunning-Kruger Abuse). Such conditions do not mean that the study will be refused peer review. Peer reviewers rarely ever contend (if they disregard the ‘domain of application’ part of a study’s commentary):

“We reject this study because it could be abused in its interpretation by malicious stakeholders.”

Just because a study is accepted for and passes peer review does not mean that all its extrapolations, exaggerations, implications or abuses are therefore true. You, as the reader, are the one who must apply the sniff test as to what the study is implying, saying or being abused to say. What helps a reader avoid this? Those same ten questions from above.

The ten questions I have found most useful in discerning good science from bad are formulated based upon the following Popperian four-element premise.³ All things being equal, better science is conducted in the case wherein

  • one idea is
  • denied through
  • falsification of its
  • critical theoretical mechanism.

If the author pulls this set of four things off successfully, eschews promotion of ‘the answer’ (which is the congruent context to one having disproved a set of myriad ideas), then the study stands as a challenge to the community and must be sought for replication (see question IX below). For the scientific community at large to ignore such a challenge is the genesis of (our pandemic) pluralistic ignorance.

For instance, in one of the materials research labs I managed, we were tasked by an investment fund and their presiding board to determine the compatibility of titanium to various lattice state effects analogous to iron. The problem, however, is that titanium is not like iron at all. It will not accept the same interstitial relationships with other small atomic-radius class elements that iron will (boron, carbon, oxygen, nitrogen). We could not pursue the question the way the board posed it: “Can you screw with titanium in exotic ways to make it more useful to high performance aircraft?” We first had to reduce the question into a series of salient, then sequitur, Bayesian reductions. The first question to falsify was “Titanium maintains its vacancy characteristics at all boundary conditions along the gamma phase state.” Without an answer (falsification) to this single question, not one single other question related to titanium could be answered in any way, shape or form. Most skeptics do not grasp this type of critical path inside streams of logical calculus. This is an enormous source of confusion and social ignorance. Even top philosophers and celebrity skeptics fail this single greatest test of skepticism. And they are not held to account because few people are the wiser, and the few who are wise to it keep quiet to avoid the jackboot ignorance enforced by the Cabal.

This introduces and opens up the more general question of ‘What indeed, all things being considered, makes for good effective science?’ This can be lensed through the ten useful questions below, applied in the same fashion as the titanium example case:

I. Has the study or article asked and addressed the 1. relevant, 2. salient, 3. sound and 4. critical path next question under the scientific method?

If it has accomplished this, it is already contending for tier I science, as only a minority of scientists understand how to pose reductive study in this way. A question can be relevant, but not salient to the question at hand. This is the most common trick of pseudoscience. The question can also be relevant and salient, yet be asked in incorrect sequence, so as to frame its results in a prejudicial light. If this diligence has not been done, then do not even proceed to the next questions II through VII below. Throw the study in the waste can. Snopes is notorious for this type of chicanery. The material is rhetoric, targeting a victim group, idea or person.

If the answer to this is ‘No’ – Stop here and ignore the study. Use it as an example of how not to do science.

II. Did the study or article focus on utilization of a critical theoretical mechanism which it set out to evaluate for validity?

The litmus which differentiates a construct (an idea or framework of ideas) from a theory is that a theory contains a testable and critical theoretical mechanism. Was the critical theoretical mechanism identified and given a chance for peer input prior to its establishment? Or was it just assumed as valid by a small group, or one person? For instance, a ‘DNA study’ can examine three classes of DNA: mtDNA, autosomal DNA, or Y-DNA. If it is a study of morphology, yet examines the Y-DNA only, for example, then the study is a fraud. Y-DNA has nothing to do with morphology or genetic makeup. This would be an example of an invalid (probably slipped by as an unchallenged assumption) critical test mechanism.

If the answer to this is ‘No’ – Regard the study or article as an opinion piece, or worse propaganda piece, and not of true scientific incremental value.

III.  Did the study or article attempt to falsify this mechanism, or employ it to make predictions? (z-axis)

Karl Popper outlined that good science involves falsification of alternative ideas or the null hypothesis. However, given that 90% of science cannot be winnowed through falsification alone, it is generally recognized that a theory’s predictive ability can act as a suitable critical theoretical mechanism via which to examine and evaluate it. Evolution was accepted through just such a process. In general however, mechanisms which are falsified are regarded as stronger science than successfully predictive mechanisms. A second question to ask is: did the study really falsify the mechanism being tested for, or did it merely suggest possible falsity? Watch for this trick of pseudoscience.

If the study or article sought to falsify a theoretical mechanism – keep reading with maximum focus. If the study used predictive measures – catalog it and look for future publishing on the matter.

IV.  Did the study or article attempt to deny specific idea(s), or did it seek to promote specific idea(s)? (x-axis)

Denial and promotion of ideas is not, standing alone, a discriminating facet inside this issue. What is significant here is how it interrelates with the other questions. In general, attempting to deny multiple ideas or promote a single idea are techniques regarded as less scientific than the approach of denying a single idea – especially if one is able to bring falsification evidence to bear on the critical question and theoretical mechanism. Did the study authors seem to have a commitment to certain jargon or prejudicial positions prior to the results being obtained? Also watch for the condition where a cherry-picked test mechanism may appear to be a single-item test, yet is employed to deny an entire series of ideas as a result. This is not actually a condition of single-idea examination, though it may appear to be so.

Simply keep the idea of promotion and denial in mind while you consider all other factors.

V.  Did the study affix its contentions on a single idea, or a group of ideas? (y-axis)

In general, incremental science and most of discovery science work better when a study focuses on one idea for evaluation and not a multiplicity of ideas. This minimizes extrapolation and special-pleading loopholes or ignorance – both deleterious implications for a study. Prefer authors who study single ideas over authors who try to make evaluations upon multiple ideas at once. The latter task is not a wise undertaking even in the instance where special pleading can theoretically be minimized.

If your study author is attempting to tackle the job of denying multiple ideas all at once – then the methodical cynicism alarm should go off. Be very skeptical.

VI.  What percent of the material was allocated towards ideas versus the more agenda oriented topics of persons, events or groups?

If the article or study spends more than 10% of its Background material focused on persons, events or groups it disagrees with, throw the study in the trash. If any other section contains such material above 0%, then the study should be discarded as well. Eleanor Roosevelt is credited with the apothegm “Great minds discuss ideas; average minds discuss events; small minds discuss people.” Did the study make a big deal about its ‘accoutrements and processes of science’ in an attempt to portray the appearance of legitimacy? Did the study sponsors photograph themselves wearing face shields and lab coats and writing in notebooks? This is often pretense and promotion – beware.

Take your science only from great minds focusing on ideas and not events or persons.

As well, if the author broaches a significant amount of material which is related, but irrelevant or non-salient to the question at hand, you may be witnessing an obdurate, polemic or ingens vanitatum argument. Do not trust a study or article wherein the author appears to be demonstrating how much of an expert they are in the matter (through addressing related but irrelevant, non-salient or non-sequitur material). Be very skeptical of such publications.

VII. Did the author put an idea, prediction or construct at risk in their study?

Fake science promoters always stay inside well-established lines of social safety, so that they 1) are never found wrong, 2) don’t bring the wrong type of attention to themselves (remember the $2.6+ million which is at stake here), and 3) can imply their personal authority inside their club as an opponent-inferred appeal in arguing. They always repeat the correct apothegm, and always come to the correct conclusion. Did the study sponsor come in contending that they ‘can do the study quickly’, followed by a low-cost and ‘simple’ result which conformed with a pre-selected answer? Don’t buy it.

Advancing science always involves some sort of risk. Do not consider those who choose paths of safety, familiarity and implied authority to possess any understanding of science.

VIII.  Which of the following was the study? (In order of increasing gravitas)

1.  Psychology or Motivation (Pseudo-Theory – Explains Everything)

2.  Meta-Data – Studies of Studies (Indirect Data Only, vulnerable to Simpson’s Paradox or Filtering/Interpretive Bias)

3.  Data – Cohort and Set Measures (Direct but still Data Only)

4.  Direct Measurement Observation (Direct Confirmation)

5.  Inductive Consilience Establishment (Preponderance of Evidence from Multiple Channels/Sources)

6.  Deductive Case Falsification (Smoking Gun)

All it takes in order to have a strong study is one solid falsifying observation. This is the same principle as is embodied inside the apothegm ‘It only takes one white crow, to falsify the idea that all crows are black’.
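The white-crow principle is mechanically trivial, which is exactly what gives deductive falsification its gravitas; a minimal sketch (the observation list is of course hypothetical):

```python
# A universal claim ("all crows are black") is never proven by any number
# of confirming observations, yet one counterexample deductively falsifies it.
observations = ["black"] * 10_000 + ["white"]  # ten thousand black crows, one white

def all_crows_black(obs):
    """The universal claim under test."""
    return all(color == "black" for color in obs)

print(all_crows_black(observations))  # False: falsified by a single observation
```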

IX.  When the only viable next salient and sequitur reductive step, post-study, is to replicate the results – then you know you have a strong argument inside that work.

X.  Big data and meta-analysis studies like to intimidate participants in the scientific method with the implicit taunt “I’m too big to replicate, bring consensus now.”
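The aggregation risk flagged in questions VIII and X is concrete: pooling subgroups can reverse a trend that holds inside every subgroup (Simpson's Paradox). A minimal sketch, using the oft-cited kidney-stone treatment figures:

```python
# Simpson's Paradox: treatment A beats B within every subgroup,
# yet appears to lose once the subgroups are pooled.
# Figures are (successes, trials) per treatment, per subgroup.
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# A wins inside each subgroup...
for g in groups.values():
    assert rate(*g["A"]) > rate(*g["B"])

# ...but pooling across subgroups flips the comparison.
pooled = {t: (sum(g[t][0] for g in groups.values()),
              sum(g[t][1] for g in groups.values()))
          for t in ("A", "B")}
print(rate(*pooled["A"]) < rate(*pooled["B"]))  # True: B "wins" when pooled
```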

These questions, more than anything else, will allow the ethical skeptic to begin to grasp what is reliable science and what is questionable science – especially in the context where one can no longer afford to dwell inside only the lofty 5% of the highest-regarded publications, or can no longer stomach the shallow talking-point sheets of social skepticism – all of which serve only to ignore or give short shrift to the ideas to which one has dedicated their life in study.

epoché vanguards gnosis

¹  Poliandri, Ariel; “A guide to detecting bogus scientific journals”; Sci – Phy, May 12, 2013.

²  Benderly, Beryl Lieff; “Does the US Produce Too Many Scientists?”; Scientific American, February 22, 2010.

³  Thornton, Stephen, “Karl Popper”, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = <>

†  Present Value of future cash flows with zero ending balance: 456 payments of $9,333 per month at .25% interest per payment period.

February 25, 2017 Posted by | Agenda Propaganda, Social Disdain

Abuse of the Dunning-Kruger Effect

When does a Dunning-Kruger misapplication flag the circumstance of ad hominem attack by a claimant who sees themself as superior-minded? When you observe it being applied in situations and domains inside of which the study authors, Kruger and Dunning, never intended it. It behooves the ethical skeptic to actually read the studies which are purported, at face value, to back habitual social skeptic condemnation tactics. Knowing how not to commit a Dunning-Kruger Effect error in application is, ironically, a key indicator as to one’s competency under a Dunning-Kruger perspective in the first place.

A saying about the wisdom of self-knowledge is attributed to Thomas Jefferson, and goes as such: “He who knows best, best knows how little he knows.” This quote is actually highlighted inside a celebrated study by Cornell University psychologists Justin Kruger and David Dunning, commonly referred to as the ‘Dunning-Kruger Effect’ study. Indeed this principle elicited by Jefferson is embodied inside two of the Eight Tropes of Ethical Skepticism:

I.    There is critically more we do not know, than we do know.

II.   We do not know, what we do not know. Only a sub-critical component of mankind effectively grasps this.

One wonders if Thomas Jefferson, in recognizing this human foible, would have been wiser not to attempt his bold assertions inside of “A Declaration by the Representatives of The United States of America, In General Congress Assembled.”¹ This haughty document, certainly venturing into an arena in which Jefferson himself had no personal degree or particular expertise, represented a projection into a subject in which he could not possibly have possessed competency. Surely this is a case of Dunning-Kruger ‘fallacy’ if ever one was observed. An enormous boast of unseemly levels of claim to knowledge (ones which make socialist and social skeptics uncomfortable to this very day):

We hold these truths to be self-evident: that all men are created equal; that they are endowed by their creator with inherent and certain inalienable rights; that among these are life, liberty, & the pursuit of happiness: that to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed; that whenever any form of government becomes destructive of these ends, it is the right of the people to alter or abolish it, & to institute new government, laying it’s foundation on such principles, & organizing it’s powers in such form, as to them shall seem most likely to effect their safety & happiness.¹

Indeed, the Eight Tropes of Ethical Skepticism continue with a focus outlining that the necessity of knowledge, even in the absence of knowledge, is to observe and correct when a party seeks to control by means of ignorance, provisional knowledge, methodical cynicism and authority alone (the basic elements of Social Skepticism). It falls to our common service to each other, and love for our fellow man and his plight, to underpin with utmost importance our qualification to observe and direct the processes of knowledge. In this instance, regarding Jefferson and those who crafted our first ideas of what a government was to be, courage, risk and personal gumption outweighed calls for caution – specifically because of the particular need, benefit or danger entailed. Dunning-Kruger indeed did not apply, because the situation dictated actions of character on the part of an agent of change (see #12 below). This is most often the circumstance which we as ethical skeptics face today.

Dunning-Kruger awareness does not apply as a fallacy of disqualification in such circumstances. This awareness about both the limits of knowledge, as well as when a Dunning-Kruger Effect does and does not apply, relates directly to the Eight Tropes of Ethical Skepticism. Several species/errors in application arise under a logical calculus which seeks to survey the landscape of the Dunning-Kruger Effect:

Dunning-Kruger Abuse (ad hominem)

/philosophy : pseudo-science : fascism/ : a form of ad hominem attack. Inappropriate application of the Dunning-Kruger fallacy in circumstances where it should not apply; instances where every person has a right, responsibility or qualification as a victim/stakeholder to make their voice heard, despite not being deemed a degree, competency or title holding expert in that field.

This circumstance of employment stands in stark contrast with legitimate circumstances where the Dunning-Kruger Effect does indeed apply – including those circumstances where, ironically, a fake skeptic is not competent enough to identify a broader circumstance of Dunning-Kruger in themselves and their favored peers (several species below).

Dunning-Kruger Effect

/philosophy : misconception : bias/ : an effect in which incompetent people fail to realize they are incompetent because they lack the skill or maturity to distinguish between competence and incompetence among their peers

A principle which serves to introduce the ironic forms of Dunning-Kruger Effect employed skillfully by Social Skepticism today:

Dunning-Kruger Denial

/philosophy : pseudo-science : false skepticism : social manipulation/ : the manipulation of public sentiment and perceptions of science, and/or condemnation of persons through skillful exploitation of the Dunning-Kruger Effect. This occurs in five speciated forms:

Dunning-Kruger Exploitation

/philosophy : pseudo-science : fascism/ : the manipulation of unconsciously incompetent persons or laypersons into believing that a source of authority expresses certain opinions, when in fact the persons can neither understand the principles underpinning the opinions, nor critically address the recitation of authority imposed upon them. This includes the circumstance where those incompetent persons are then included in the ‘approved’ club solely because of their adherence to proper and rational approved ideas.

Dunning-Kruger Milieu

/philosophy : pseudo-science : fascism/ : a circumstance wherein either errant information or fake-hoaxing exists in such quantity under a Dunning-Kruger Exploitation circumstance, or a critical mass of Dunning-Kruger Effect population is present, such that core truths, observations, principles and effects surrounding a topic cannot be readily communicated or discerned as distinct from misinformation, propaganda and bunk.

Dunning-Kruger Projection (aka Plaiting)

/philosophy : misconception : bias/ : the condition in which an expert, scientist or PhD in one discipline over-confidently or ignorantly fails to realize that they are not competent to speak in another discipline, or attempts to pass authority ‘as a scientist’ inside an expertise set to which they are only mildly competent at best. Any attempt to use the identity of ‘scientist’ to underpin authority, bully, seek worship or conduct self-deception regarding an array of subjects inside of which they in actuality know very little.

Non-Equivalence of Competence

/philosophy : deception : bias or method/ : I don’t have to be competent on a subject in order to ascertain that you are incompetent on that subject.

Dunning-Kruger Skepticism

/philosophy : misconception : bias/ : an effect in which incompetent people making claim under ‘skepticism’ fail to realize they are incompetent both as skeptics and within the subject matter at hand. Consequently they will fall easily for an argument of social denial/promotion because they

1.  lack the skill or maturity to distinguish between competence and incompetence among their skeptic peers and/or are

2.  unduly influenced by a condition of Dunning-Kruger Exploitation or Milieu, and/or are

3.  misled by false promotions of what is indeed skepticism, or possess a deep-seated need to be accepted under a Negare Attentio Effect.

Dunning-Kruger Denial is a chief objective of social skepticism. So it was not surprising that social skepticism recognized this overall malady first; exploiting its ad hominem potential is one of the principal tactics of fake skepticism.

Nonetheless, back to the principal context of this blog: the fair contextual application of the actual underlying Dunning-Kruger principles, framed in a simpler and more condensed expression:

One does not possess the right, to dismiss the rights of others – by means of a Dunning-Kruger Effect accusation.

What the Kruger and Dunning Study Did Say

The famously heralded study, by Justin Kruger and David Dunning of the Department of Psychology at Cornell University in 1999, implied the importance of recognizing when one has outlasted their competency in a given field versus their peers in that field – and the importance of keeping mute/inactive in circumstances where speaking up could serve to embarrass or endanger. A study which would have certainly been embraced by the Royalist or Tory in the day of Thomas Jefferson. More specifically, the study outlined four pitfalls which were observed among 60–90 Cornell University undergraduate first-year students (below).

(Note: This is certainly a Dunning-Kruger commentary in itself as to Kruger and Dunning’s ability to develop unbiased inclusion criteria which would or would not serve to amplify a desired effect. Have you ever known an undergraduate freshman who did not overestimate their success in an upcoming exam or evaluation? This is the definition of a freshman.

Scientific parsimony would have been applicable here, especially from the perspective of selecting a sample pool of silver-spooned Ivy-Leaguers who have been told their entire lives that they are the smartest person in the room/building. This is like observing whether fights will break out when two people hit each other, by conducting surveys inside a drunken London mosh pit full of Manchester United and Arsenal Football Clubbers. It is stupidity dressed up in lab coats. An epistemologically shallow if not elegant convenience of social skeptic tradecraft. A common produit-de-célèbre on their part – especially among psychology PhDs.

What they observed, in fact, was the unique nutrient solution of psychology and social pressure which serves to cultivate our brood of social skeptics. These test subjects and their indoctrinated peers will be sure never to step out of line, or speak up when they might be afraid, ever again. See #11 below.)

Given this skewed inclusion criteria group, one with which Kruger and Dunning were very familiar and inside of which they had already borne an intuitive estimation of positive result, four predictions from the surveys were developed and confirmed:

Prediction 1. Incompetent individuals, compared with their more competent peers, will dramatically overestimate their ability and performance relative to objective criteria.

Prediction 2. Incompetent individuals will suffer from deficient metacognitive skills, in that they will be less able than their more competent peers to recognize competence when they see it–be it their own or anyone else’s.

Prediction 3. Incompetent individuals will be less able than their more competent peers to gain insight into their true level of performance by means of social comparison information. In particular, because of their difficulty recognizing competence in others, incompetent individuals will be unable to use information about the choices and performances of others to form more accurate impressions of their own ability.

Prediction 4. The incompetent can gain insight about their shortcomings, but this comes (paradoxically) by making them more competent, thus providing them the metacognitive skills necessary to be able to realize that they have performed poorly.²

None of the cautions above and below herein of course, serve to invalidate the effect Kruger and Dunning (and others since) have cited in the referenced study. These cautions simply function as a sentinel, flagging conditions wherein such a study might be abused for social ends. To that end, let us discuss some of those circumstances where a social skeptic might abuse such a study as a means of demanding conformance through social ridicule, on issues they are seeking to promote.

When Dunning-Kruger Effect Does Not Apply

A reasonable man would suppose that underestimating one’s ability to adeptly handle the intricate subtleties of a Dunning-Kruger accusation stands as a form of Dunning-Kruger fallacy in itself. But that does not inhibit our self-appointed elite, the social skeptic, from slinging around the accusation with all the adeptness of a demolitions expert in a porcelain factory. The sad reality is that the majority of instances in which I have seen the accusation foisted have been instances of invalid usage. In other words, as the social skeptic interprets this study and instructs their sycophants as to its employment, they and their disciples are now scientifically justified (remember, they represent science) in making the following accusations.

How the four findings of the Dunning-Kruger study are abused in the anosognosia-vulnerable mind:

  1. People whom I do not like, do stupid things.
  2. People whom I do not like, fail to recognize how smart I am.
  3. People whom I do not like, fail to recognize how stupid they are.
  4. It is simply a matter of me training the stupid, because as they become more informed like me, they will become less stupid and recognize stupidity in others.

Do you see the sales cycle evolving here? This is a religious pitch used by fundamentalist Christianity. They could print this up in a tract and hand it out inside airport bathrooms. In other words what the Dunning-Kruger misapplication has introduced is an act of social anosognosia (a deficit of self awareness) on the part of those who see themselves as superior minded. This relates to the more complex comparatives between Intelligence and Rationality, a perception on the part of social skeptics which we addressed in an earlier blog.

Intelligence is smart people who do or think unauthorized things. Rationality is smart people who do or think correct things. Social Skepticism is about knowing the difference.

Ethical Skepticism says ‘Bullshit’ to this line of reasoning.

Which introduces the final point set of this blog, circumstances where the Dunning-Kruger Effect does not bear applicability. Instances where the sociopathology of the anosognosiac have crossed the line into abuse of both the Dunning-Kruger Effect and more importantly, those around them:

Specific instances in which the Dunning-Kruger Effect does not apply include:

1.  In matters of Public Policy.

e.g. ∈ You have the right to speak up about contaminants in your food, you do not have to be a chemist or agricultural scientist.

2.  In matters of Voting, Political Voice and Will.

e.g. ∈ You have the right to speak up about foreign trade policy and jobs, you do not have to be a degree holding economist.

3.  In situations where professionals and non-professionals are involved. Dunning-Kruger is speaking about continuous scale comparatives between peers, not discrete breakouts between groups, as in the case of professionals and various tiers of non-professionals (from layman to dilettante) in a given discipline. From the ‘notes/discussion’ section of the Kruger and Dunning study itself:

“There is no categorical bright line that separates “competent” individuals from “incompetent” ones. Thus, when we speak of “incompetent” individuals we mean people who are less competent than their peers.”²

e.g. ∈ You have the right to speak up about where NASA’s space programs are headed, you do not have to be an astrophysicist or on NASA’s advisory board.

4.  When the speaker is a victim of corporate, governmental, mafia, criminal, supposed or real expert actions or fraud.

e.g. ∈ You have the right to speak up about your vaccine injured child, you do not have to be an epidemiologist or medical doctor.

5.  In matters where there is more unknown than is known, or where science has studied very little.

e.g. ∈ Einstein bore the right to speak up about Special Relativity while simply an entry-level patent clerk; he was not disqualified by a previous academic C-average, nor by his not holding a PhD.

6.  In matters where competency in reality only comprises simply a few memorized facts, procedure or trivia concerning the subject.

e.g. ∈ You have the right to speak up about water contamination in your community, you do not have to be involved in constructing assay sheets at your local processing plant.

7.  In matters where social conformance is conflated with competency (i.e. social skeptic ‘rationality’).

e.g. ∈ You have the right to speak up about science ignoring an important issue observed in your local community, you do not have to be a degree holding scientist in that arena.

8.  In matters of personal financial and household management.

e.g. ∈ You have the right to organize community to refuse a tax levied on your home for unfair reasons, you do not have to be a career politician or expert in the subject which is funded by the tax itself.

9.  In matters of personal health, disease prevention and health management.

e.g. ∈ You have the right to speak up about things harming your family’s health, you do not have to be a member of Science Based Medicine.

10.  In matters of personal religious practice or choice of faith.

e.g. ∈ You have the right to say that you observed something extraordinary or miraculous from a spiritual perspective, you do not have to be a priest or scientist.

11.  In any matter or circumstance where the Dunning-Kruger Effect is employed to intimidate or create compliance by means of fear/ridicule.

e.g. ∈ You have the right to speak up about unbridled immigration and population dumping, this does not make you a racist. You have the freedom and right to identify such things as acts of war.

12.  When courage, risk and personal gumption override calls for caution because the need, benefit or danger entailed dictate actions of character on the part of an agent of change.

e.g. ∈ You have the right to speak up about VINDA Autism, you do not have to be a Centers for Disease Control professional, in order to demand third party review of ‘settled science.’

The study authors, had they been following the protocols of science, should have included points such as these in their commentary and counter-point acknowledgement sections. This is what ethical skeptics, and scientists for that matter, do; they remain aware of and allow for counter-point arguments. They regard them as matters of importance. Unfortunately, save for number 3 above (and only in part even for that one), Kruger and Dunning did not bear such circumspection about their own findings in their work. Another shortfall in scientific method.

Knowing how to not use a weapon is the supreme qualification for a user of that weapon. Knowing how to not commit a Dunning-Kruger Effect error in its application, ironically is a key indicator as to one’s competency under a Dunning-Kruger perspective in the first place.

epoché vanguards gnosis


²  Kruger, Justin; Dunning, David; Department of Psychology, Cornell University. “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology: American Psychological Association, December 1999, Vol. 77, No. 6, 1121–1134. A study series conducted by survey of Cornell University undergraduates concerning competency and self-perception, meta-cognition and projection.

May 12, 2016 Posted by | Agenda Propaganda, Argument Fallacies |

The Correlation-Causality One-Liner Can Highlight One’s Scientific Illiteracy

“Correlation does not prove causality.” You have heard the one-liner uttered by clueless social skeptics probably one thousand times or more. But real science rarely if ever starts with ‘proof.’ More often than not, neither does a process of science end in proof. Correlation was never crafted as an analytical means to proof. However this one-liner statement is most often employed as a means of implying proof of an antithetical idea. To refuse to conduct the scientific research behind such fingerprint signal conditions, especially when involving a risk exposure linkage, can demonstrate just plain ole malicious ignorance. It is not even stupid.

When a social skeptic makes the statement “Correlation does not prove causality,” they are making a correct statement. It is much akin to pointing out that a pretty girl smiling at you does not mean she wants to spend the week in Paris with you. It is a truism, most often employed to squelch an idea which is threatening to the statement maker. As if the statement maker were the boyfriend of the girl who smiled at you. Of course a person smiling at you does not mean they want to spend a week in Paris with you. Of course correlation does not prove causality. Nearly every single person bearing any semblance of rational mind understands this.  But what the one who has uttered this statement does not grasp, while feeling all smart and skeptickey in its mention, is that they have in essence revealed a key insight into their own lack of scientific literacy. Specifically, when a person makes this statement, three particular forms of error most often arise. In particular, they do not comprehend, across an entire life of employing such a statement, that

1.  Proof Gaming/Non Rectum Agitur Fallacy: Correlation is used as one element in a petition for ‘plurality’ and research inside the scientific method, and is NOT tantamount to a claim to proof by anyone – contrary to the false version of method foisted by scientific pretenders.

To attempt to shoot down an observation, by citing that it by itself does not rise tantamount to proof, is a form of Proof Gaming. It is a trick of trying to force the possible last step of the scientific method, and through strawman fallacy regarding a disliked observer, pretend that it is the first step in the scientific method. It is a logical fallacy, and a method of pseudoscience. Science establishes plurality first, seeks to develop a testable hypothesis, and then hopes, …only hopes, to get close to proof at a later time.

Your citing examples of correlation which fail the Risk Exposure Test, does not mean that my contention is proved weak.

… and yes, science does use correlation comparatives in order to establish plurality of argument, and consilience which can lead to consensus (in absence of abject proof). The correlation-causality statement, while mathematically true, is philosophically and scientifically illiterate.¹²³

2. Ignoratio Elenchi Fallacy (ingens vanitatum): What is being strawman framed as simply a claim to ‘correlation’ by scientific pretenders, is often a whole consilience (or fingerprint) of mutually reinforcing statistical inference well beyond the defined context of simple correlation.

Often when data shows a correlation, it also demonstrates other factors which may be elicited to demonstrate a relationship between two previously unrelated contributing variables or data measures.  There are a number of other factors which science employs through the disciplines of modeling theory, probability and statistics which can be drawn from a data relationship. In addition these inferences can be used to mutually support one another, and exponentially increase the confidence of contentions around the data set in question.²³

3.  Methodical Cynicism: Correlation is used as a tool to examine an allowance for and magnitude of variable dependency. In many cases where a fingerprint signal is being examined, the dependency risk has ALREADY BEEN ESTABLISHED or is ALLOWED-FOR by diligent reductive science. To step in the way of method and game protocols and persuasion in order to block study, is malevolent pseudoscience.

If the two variables pass the risk-exposure test, then we are already past correlation and into measuring that level of dependency, not evaluating its existence. If scientific studies have already shown that a chemical has impacts on the human or animal kidney/livers/pancreas, to call an examination of maladies relating to those organs as they relate to trends in use of that chemical a ‘correlation’ is an indication of scientific illiteracy on the part of the accuser. Once a risk relationship is established, as in the case of colon disorders as a risk of glyphosate intake, accusations of ‘correlation does not prove causality’ constitute a non-sequitur Wittgenstein Error inside the scientific method. Plurality has been established and a solid case for research has been laid down. To block such research is obdurate scientific fraud.²³

Calling or downgrading the sum total of these inferences through the equivocal use of the term ‘correlation,’ not only is demonstrative of one’s mathematical and scientific illiteracy, but also demonstrates a penchant for the squelching of data through definition in a fraudulent manner. It is an effort on the part of a dishonest agent to prevent the plurality step of the scientific method.
None of this has anything whatsoever to do with ‘proof.’

A Fingerprint Signal is Not a ‘Correlation’

An example of this type of scientific illiteracy can be found here (Correlation Is Not Causation in Earth’s Dipole Contribution to Climate – Steven Novella). There is a well established covariance, coincidence, periodicity and tail sympathy – a long, tight history of dynamics with respect to how climate relates to the strength of Earth’s magnetic dipole moment. This is a fingerprint signal. Steven Novella incorrectly calls this ‘correlation.’ A whole host of Earth’s climate phenomena move in concert with the strength of our magnetic field. This does not disprove anthropogenic contribution to current global warming. But to whip out a one-liner and shoot at a well established facet of geoscience, all so as to protect standing ideas from facing the peer review of further research, is not skepticism; it is pseudoscience. The matter merits investigation. This hyperepistemology one-liner does not even rise to the level of being stupid.

Measuring of An Established Risk Relationship is Not a ‘Correlation’

An example of this type of scientific illiteracy can be found inside pharmaceutical company pitches about how the increase in opioid addiction and abuse was not connected with their promotional and lobbying efforts. Correlation did not prove causality. Much of today’s opiate epidemic stems from two decades of promotional activity undertaken by pharmaceutical companies. According to New Yorker Magazine, companies such as Endo Pharmaceuticals, Purdue Pharma and Johnson & Johnson centered their marketing campaigns on opioids as general-use pain treatment medications. Highly regarded medical journals featured promotions directed towards physicians involved in pain management. Educational courses on the benefits of opioid-based treatments were offered. Pharmaceutical companies made widespread use of lobbyist groups in their efforts to disassociate opiate industry practices from recent alarming statistics (sound familiar? See an example where Scientific American is used for such propaganda here). One such group received $2.5 million from pharmaceutical companies to promote opioid justification and discourage legislators from passing regulations against unconstrained opioid employment in medical practices. (See New Yorker Magazine: Who is Responsible for the Pain Pill Epidemic?) The key here is that once a risk relationship is established, such as between glyphosate and cancer, one cannot make the claim that correlation does not prove causality in the face of two validated sympathetic risk-dependency signals. It is too late; plurality has been established and the science needs to be done. To block such science is criminal fraud.

Perhaps We Need a New Name Besides Correlation for Such Robust Data Fit

Both of the examples above elicit instances where fake skeptic scientific illiteracy served to misinform, mislead or cause harm to the American public. Correlation, in contrast, is simply a measure of the ‘fit’ of a linear trend inside the relationship between a two-factor data set. It asks two questions (the third is simply a mathematical variation of the second):

  1. Can a linear inference be derived from cross indexing both data sets?, and
  2. How ‘close to linearity’ do these cross references of data come?
  3. How ‘close to curvilinearity’ do these cross references of data come?

The answer to question number 2 is called an r-factor or correlation coefficient. Commonly, question number 3 is answered by means of a coefficient of determination and is expressed as an r² factor (r squared).³ Both are a measure of a paired-data set’s fit to linearity. That is all. In many instances pundits will use correlation to exhibit a preestablished relationship, such as the well known relationship between hours spent studying and academic grades. They are not establishing proof with a graph, rather simply showing a relationship which has already been well documented through several other previous means. However, in no way, shape or form does that mean that persons who apply correlation as a basis of a theoretical construct are therefore contending a case for proof. This is a relational form of the post hoc ergo propter hoc fallacy. This is a logical flaw, served up by the dilettante mind which confuses the former case, an exhibit, and conflates it with the latter use, the instance of a petition for research.
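For readers who want to see these two measures concretely, here is a minimal sketch in Python; the paired data (hours studied versus grade) are hypothetical illustrative values, not figures drawn from any study cited here:

```python
import numpy as np

# Hypothetical illustrative data: hours studied vs. exam grade
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
grade = np.array([52.0, 58.0, 61.0, 67.0, 70.0, 78.0, 81.0, 88.0])

# r: Pearson correlation coefficient, the measure of fit to linearity
r = np.corrcoef(hours, grade)[0, 1]

# r squared: the coefficient of determination
r_squared = r ** 2

print(f"r = {r:.3f}, r^2 = {r_squared:.3f}")
```

A high r here answers only the two ‘fit’ questions above; by itself it carries no causal claim whatsoever.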

Correlation Dismissal Error (Fingerprint Ignorance)

/philosophy : logic : evidence : fallacy/ : when employing the ‘correlation does not prove causality’ quip to terminally dismiss an observed correlation, when the observation is being used to underpin a construct or argument possessing consilience, is seeking plurality, constitutes direct fingerprint evidence and/or is not being touted as final conclusive proof in and of itself.

THIS is Correlation (Pearson’s PPMCC)      It does not prove causality (duh…)¹²

Cor 1

This is a Fingerprint Signal and is Not Simply a Correlation³∋

diabetes and glyphosate

There are a number of other methods of determining the potential relationship between two sets of data, many of which appear to the trained eye in the above graph. Each of the below relational features individually, and increasingly as they confirm one another, establishes a case for plurality of explanation. The above graph is not “proving” that glyphosate aggravates diabetes rates. However, when this graph is taken against the exact same shape and relationship graphs for multiple myeloma, non-Hodgkin’s Lymphoma, bladder cancer, thyroid disease, pancreatic cancer, irritable bowel syndrome, inflammatory bowel syndrome, lupus, fibromyalgia, renal function diminishment, Alzheimer’s, Crohn’s Disease, wheat/corn/canola/soy sensitivity, SIBO, dysbiosis, esophageal cancer, stomach cancer, rosacea, gall bladder cancer, ulcerative colitis, rheumatoid arthritis, liver impairment and stress/fatty liver disease, … and for the first time in our history a RISE in the death rates of middle-aged Americans…

… and the fact that in the last 20 years our top ten disease prescription bases have changed 100%… ALL relating to the above conditions and ALL auto-immune and gut microbiome in origin. All this despite a decline in lethargy, smoking and alcohol consumption on average. All of this in populations younger than an aging trend can account for.

Then plurality has been argued. Fingerprint signal data has been well established. This is an example of consilience inside an established risk exposure relationship. To argue against plurality through the clueless statement “Correlation does not prove causality” is borderline criminal. It is scientifically illiterate, a shallow pretense which is substantiated by false rationality (social conformance) and a key shortfall in real intelligence.

Contextual Wittgenstein Error Example – Incorrect Rhetoric Depiction of Correlation

cor 2

The cartoon to the left is a hypoepistemology which misses the entire substance of what constitutes a fingerprint correlation. A fingerprint signal is derived when the bullet-pointed conditions exist – none of which exist in the cartoon’s invalid comparison. This is a tampering with definition, enacted by a person who has no idea what correlation, in this context, even means. A Wittgenstein Error. In other words: scientifically illiterate propaganda. Conditions which exist in a proper correlation-or-more condition:

  • A constrained pre-domain and relevant range which differ in stark significance
  • An ability to fit both data sets to a curvilinear or linear fit, with projection through golden section, regression or a series of other models
  • A preexisting contributor risk exposure between one set of unconstrained variables and a dependent variable
  • A consistent time displacement between independent and dependent variables
  • A covariance in the dynamic nature of data set fluctuations
  • A coincident period of commencement and timeframe of covariance
  • A jointly shared arrival distribution profile
  • Sympathetic long term convex or concave trends
  • A risk exposure (see below) – the cartoon to the left fails the risk exposure test.

Rhetoric: An answer, looking for a question, targeting a victim

Fingerprint Elements: When One or More of These Risk Factor Conditions is Observed, A Compelling Case Should be Researched¹²³

Corresponding Data – not only can one series be fitted with a high linear coefficient, another independent series can also be fitted with a similar and higher coefficient which increases in coherence throughout a time series both before and during its domain of measure, and bears similar slope, period and magnitude. In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where ignorance ranges into fraud.

Cor 1a

Covariant Data – not only can one series be fitted with a high coefficient, another independent series can also be observed with a similar fit which increases in coherence as a time series both before and during its domain of measure, and bears similar period and magnitude. Adding additional confidence to this measure is the dx/dy covariance, Brownian covariance, or distance covariance measure which can be established between the two data series; that is, the change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a very strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where socially pushed skepticism ranges into fraud.

 Cor 1b
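One of the supporting measures named above, distance covariance (also called Brownian covariance), can be sketched directly from its definition using double-centered pairwise distance matrices; the two series below are hypothetical illustrative values:

```python
import numpy as np

def distance_covariance(x, y):
    """Sample distance covariance between two 1-D series (Székely's formulation)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix for x
    b = np.abs(y[:, None] - y[None, :])  # pairwise distance matrix for y
    # Double-center each distance matrix (subtract row/column means, add grand mean)
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    # dCov^2 is the mean of the element-wise product; clamp tiny negatives from rounding
    return np.sqrt(np.maximum((A * B).mean(), 0.0))

# Hypothetical illustrative series that co-vary
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])
print(distance_covariance(x, y))
```

Unlike Pearson’s r, a nonzero distance covariance is sensitive to nonlinear dependency as well; it is still a measure of dependency, not a proof of causation.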

Co-incidence Data – two discrete measures coincide as a time series both before and during its domain of measure, and bear similar period and magnitude. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a moderately strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where arrogant skepticism ranges into fraud.

Cor 1c

Jointly Distributed Data – two independent data sets exhibit the same or common arrival distribution functions. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a moderately strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where arrogant skepticism ranges into fraud.

Cor 1d

Probability Function Match – two independent data sets exhibit a resulting probability density function of similar name/type/shape. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a moderately strong case for plurality, especially if a question of risk is raised. To ignore this condition is not wise.

Cor 1e

Marginal or Tail Condition Match – the tail or extreme regions of the data exhibit coincidence and covariance. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series when applied in the extreme or outlier condition; that is, the discrete change of these remote data in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however, it makes a moderately strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where even moderate skepticism ranges into fraud.
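Tail coincidence can be checked directly with an empirical tail-dependence estimate: how often one series is extreme given that the other is. Under independence the answer is simply the quantile level itself; a shared risk driver pushes it well above that. A synthetic sketch:

```python
import numpy as np

# Hypothetical illustration: two series sharing a heavy-tailed risk driver.
rng = np.random.default_rng(2)
n = 4000
shock = rng.exponential(1.0, n)          # common driver of extreme events
x = rng.normal(0, 1, n) + shock
y = rng.normal(0, 1, n) + shock

# Tail-dependence check: how often is y extreme when x is extreme?
qx, qy = np.quantile(x, 0.9), np.quantile(y, 0.9)
tail_dep = np.mean(y[x > qx] > qy)
print(round(tail_dep, 2))  # ~0.10 if independent; well above that here
```

An estimate far above the 0.10 independence baseline is exactly the "tail regions exhibit coincidence" condition.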

Cor 1f

Sympathetic Long Term Shared Concave or Convex Trend – the long term trends match each other; more importantly, each departs from its previous history, the departures occurred simultaneously or offset by a time displacement, and both trends are convex (or both concave) and co-vary across the risk period. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however, it makes a compellingly strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where even moderate skepticism ranges into fraud.
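Shared convexity or concavity can be tested by fitting a low-order polynomial to each trend and comparing the sign of the quadratic coefficient. A synthetic sketch (the series and numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1990, 2016, dtype=float)
# Hypothetical series: both flat historically, then curving upward together.
x = 0.05 * (t - 1990) ** 2 + rng.normal(0, 0.5, t.size)
y = 0.8 + 0.11 * (t - 1990) ** 2 + rng.normal(0, 1.0, t.size)

# The sign of the leading (quadratic) coefficient captures convexity of the trend.
cx = np.polyfit(t - 1990, x, 2)[0]
cy = np.polyfit(t - 1990, y, 2)[0]
print(cx > 0 and cy > 0)   # True → both trends are convex over the period
```

Matching signs, combined with the departures beginning at the same time, is the condition described above.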

Cor 1g

Discrete Measures Covariance – the mode, median or mean of discrete measures is shared in common and/or in coincidence, and these measures also vary sympathetically over time. Adding additional confidence to this measure is the consistency which can be established between the two data series; that is, the discrete change in mode, median and mean over time. In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however, it makes a moderate case for plurality, especially if a question of risk is raised. To ignore this condition is not wise.
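A sketch of the discrete-measures version: compute a yearly median for each series and ask whether those medians co-vary over time (the data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
drift = np.linspace(0.0, 5.0, 20)                        # shared upward drift, one value per year
# Yearly batches of discrete measurements for two hypothetical series.
med_x = [np.median(rng.normal(10 + d, 1.0, 200)) for d in drift]
med_y = [np.median(rng.normal(40 + 2 * d, 3.0, 200)) for d in drift]

# Do the yearly medians vary sympathetically over time?
r = np.corrcoef(med_x, med_y)[0, 1]
print(round(r, 2))   # near 1: the medians move together year over year
```

Working with medians (or modes) rather than raw points makes the comparison robust to outliers in either series.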

Cor 1h

Risk Exposure Chain/Test – two variables, were a technical case established that one indeed influenced the other, would indeed be able to influence one another. (In other words, if your kid WAS eating rat poison every Tuesday, he WOULD be sick every Wednesday; but your kid eating rat poison would not make the city mayor sick on Wednesday.) If this condition exists along with one or more of the above conditions, a case for plurality has been achieved. To ignore this condition is a circumstance where even moderate skepticism ranges into fraud.
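The rat-poison example amounts to checking association at the physically plausible lag, and only there. A synthetic sketch (exposure on day t raises illness risk on day t+1; all probabilities are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
days = 1000
exposed = rng.random(days) < 0.3                         # exposure events (day t)
sick = np.zeros(days, dtype=bool)
sick[1:] = exposed[:-1] & (rng.random(days - 1) < 0.8)   # illness follows one day later

def rate_given_exposure(lag):
    """P(sick at day t + lag | exposed at day t), for lag >= 1."""
    return sick[lag:][exposed[:-lag]].mean()

# High at the plausible lag in the exposure chain, near the base rate elsewhere.
print(round(rate_given_exposure(1), 2), round(rate_given_exposure(3), 2))
```

If the elevated rate appeared at an implausible lag, or for an unexposed party (the mayor), the exposure-chain test would fail despite any correlation.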

Cor 1i

These elements, when taken in concert by honest researchers, are called fingerprint data. When fake skeptics see an accelerating curve which matches another accelerating curve – completely (and purposely) missing the circumstance wherein any or ALL of these factors are more likely in play – to say that mere “correlation” is all that is being seen demonstrates their scientific illiteracy. It is up to the ethical skeptic to raise their hand and say “Hold on, I am not ready to dismiss that data relationship so easily. Perhaps we should conduct studies which investigate this risk linkage and its surrounding statistics.”

To refuse to conduct the scientific research behind such conditions, especially when they involve something we are exposed to three times a day for life, constitutes plain active ignorance and maliciousness. It is not even stupid.

epoché vanguards gnosis

¹  Madsen, Richard W., “Statistical Concepts with Applications to Business and Economics,” Prentice-Hall, 1980; pp. 604–610.

²  Gorini, Catherine A., “Master Math Probability,” Course Technology, 2012; pp. 175-196, 252-274.

³  Levine, David M.; Stephan, David F., “Statistics and Analytics,” Pearson Education, 2015; pp. 137-275.

∋  Graphic employed for example purposes only; courtesy of the work of Dr. Stephanie Seneff on sulfates, glyphosate and GMO food, MIT, September 19, 2013.

January 17, 2016 Posted by | Argument Fallacies | 2 Comments
