The Ethical Skeptic

Challenging Agency of Pseudo-Skepticism & Cultivated Ignorance

Skeptics Need You – But You Don’t Need Them

Stop striving to impress skeptics. Just because scientists employ skepticism does not mean that skeptics therefore represent science. In fact, they serve only to personify a straw man of science. They seek to foment conflict between the public and scientists – because that conflict serves to impart power to them and their club.
A hypocrisy meme: a man disdainfully holds his intellectual-looking spectacles in the air and declares that the job of skeptics is to promote a better understanding of science – then, ironically, spins a whole slew of reasons why science finds the reader unacceptable, calls them names and brands them irrational.

Skeptics have placed you under the spell of a little mind trick. They do not seek the truth of any particular matter; rather, they seek only to leverage your sincerity, wonder and inquisitiveness toward a goal of power, humiliation and polarization. They wish you to infer that scientists regard your lines of inquiry, rights and notions as woo. They wish to imply that science relies upon proof, that scientists have disproved you, and that they further regard you as anti-science (q.e.d. anti-them). Upon sensing this finger-pointing-generated animus, scientists begin to perceive much of the public as a frothing, anti-science horde who cannot fathom what they do, and who must now be ignored in order to save the world. This is the actual lesson skeptics are teaching all concerned on both sides – “You must worship me as the smartest, and cede unto me the power of punishment (of both the public and scientists) – as I now represent science.” It is a clever little social trick of identity bullying.

In this, they ironically pose as a force which promotes the public’s understanding of science.

Skeptics desperately need you – to add fuel to their superiority complex, their polarizing message, their power to humiliate, their club member ranking, their acclaim, and to tacitly reinforce their religious view of the world. However, you do not need them. You do not need to invite them to events to ‘provide a skeptical perspective’, as this is part of the game of misrepresentation which they play on everyone. Most researchers are already skeptical in their work; most scientists are skeptics by nature and training. This infusion of discipline is a natural part of living a sincere, hard-working life. But this does not mean that self-identity skeptics do any research, nor that they are sincere, nor that they are scientists – nor especially that they represent science.

Through personifying a straw man of science, skeptics seek to foment conflict between the public and science
– a state wherein their club gains authority along with the power to punish;
because both science and the public now perceive each other as the denialist enemy.
An enemy which you must fear, mistrust and marginalize.

Do not fall for this game. You will know that you have won, when skeptics ignore you back.

†  When we speak of ‘skeptics’ in this article, we are speaking of those who identify as ‘skeptic’ publicly, as a means of bullying, posturing and self-congratulation. Social skeptics. Fake skeptics. Those who regularly point the finger of ‘pseudoscience’, ‘woo’, ‘credulity’, and ‘anti-science’. Your identity as an ethical skeptic is simply a means to say ‘I do not participate in that game – other than to oppose agency and bullying, I am not here to promote any given conclusion nor myself. I love science. I love mankind – let’s solve this together and without identity bullying.’

     How to MLA cite this article:

The Ethical Skeptic, “Skeptics Need You – But You Don’t Need Them”; The Ethical Skeptic, WordPress, 4 Dec 2018; Web, https://theethicalskeptic.com/2018/12/04/skep-need-u/

December 4, 2018 | Ethical Skepticism

Discerning Sound from Questionable Science Publication

Non-replicable meta-analyses published in tier I journals do not constitute the preponderance of good source material available to the more-than-casual researcher. This faulty idea stems from a recently manufactured myth on the part of social skepticism. Accordingly, the life-long researcher must learn techniques beyond the standard pablum pushed by social skeptics – discerning techniques which will afford a superior ability to tell good science from bad, through more than simply shallow cheat sheets and publication social-ranking classifications.
The astute ethical skeptic is very much this life-long and in-depth researcher. For him or her, ten specific questions can serve to elucidate this difference inside that highly political, complicated and unfair playing field called science.

Recently, a question was posed to me by a colleague concerning the ability of everyday people to discern good scientific work from dubious efforts. A guide had been passed around inside her group, a guide which touted itself as a brief on 5 key steps inside a method to pinpoint questionable or risky publications. The author cautioned appropriately that “This method is not infallible and you must remain cautious, as pseudoscience may still dodge the test.” He failed of course to mention the obvious additional risk that the method could serve to screen out science which either 1) is good but cannot possibly muster the credential, funding and backing to catch the attention of crowded major journals, or 2) is valid, but is also blocked by power-wielding institutions which possess the resources, connections and possible motive to suppress research on targeted ideas. The article my friend’s group was circulating for consideration constituted nothing but a Pollyanna, wide-eyed and apple-pie view of the scientific publication process. One bereft of the scarred knuckles and squint-eyed wisdom requisite in discriminating human motivations and foibles.

There is much more to this business of vetting ideas than simply identifying the bad people and the bad subjects. More than simply crowning the conclusions of ‘never made an observation in my life’ meta-analyses as the new infallible standard of truth.

Scientific organizations are prone to the same levels of corruption, bias, greed and desire to get something for as little input as possible as the rest of the population. Many, or hopefully even most, individual scientists certainly buck this mold, and are deserving of utmost respect. However, even their best altruism is checked by organizational practices which seek to ensure that those who crave power are dealt their more-than-ample share of fortune, fame and friar-hood. They will gladly sacrifice the best of science in this endeavor. And in this context of human wisdom it is critical that we keep watch.

If you are a casual reader of science, say consuming three or four articles a month, then certainly the guidelines outlined by Ariel Poliandri below, in his blog post entitled “A guide to detecting bogus scientific journals”, represent a suitable first course on the menu of publishing wisdom.¹ In fact, were I offered this as the basis of a graduate school paper, it would be appropriately and warmly received. But if this is all you had to offer the public after 20 years of hard-fought science, I would aver that you had wasted your career therein.

1 – Is the journal a well-established journal such as Nature, Science, Proceedings of the National Academy of Sciences, etc.?
2 – Check authors’ affiliations. Do they work in a respectable University? Or do they claim to work in University of Lala Land or no university at all?
3 – Check the Journal’s speciality and the article’s research topic. Are the people in the journal knowledgeable in the area the article deals with?
4 – Check the claims in the title and summary of the article. Are they reasonable for the journal publishing them?
5 – Do the claims at least make sense?

The above process suffers from a vulnerability: it hails only science developed under what is called a Türsteher Mechanism, or bouncer effect – a process which produces a sticky but unwarranted prejudice against specific subjects. The astute researcher must ever be aware of the presence of this effect; an awareness which rules out the above 5 advisements as being sufficient.

Türsteher Mechanism

/philosophy : science : pseudoscience : peer review bias/ : the effect or presence of ‘bouncer mentality’ inside journal peer review. An acceptance for peer review which bears the following self-confirming bias flaws in process:

  1. Selection of a peer review body is inherently biased towards professionals whom the steering committee finds impressive,
  2. Selection of papers for review fits the same model as was employed to select the reviewing body,
  3. Selection of papers from non-core areas is very limited and is not informed by practitioners specializing in those areas, and
  4. An inability to handle evidence that is not gathered in the format which it understands (large-scale, hard-to-replicate, double-blind randomized clinical trials or meta-studies).

Within such a process, the selection of initial papers is biased. Under this flawed process, the need for consensus results not simply in attrition of anything that cannot be agreed upon – but rather in a sticky bias against anything which has not successfully passed this unfair test in the past. An artificial and unfair creation of a pseudoscience results.

This above list by Mr. Poliandri represents an untenable way to go about vetting your study and resource material – one which ensures that pluralistic ignorance alone shapes your knowledge base. It is lazy – sure to be right and safe – and useless advisement to a true researcher. The problem with this list resides inside some very simple industry realities:

1.  ‘Well-established journal’ publication requires sponsorship from a major institution. Scientific American cites that 88% of scientists possess no such sponsorship, and this statistic has nothing to do with a scientific group’s relative depth in its subject field.² So this standard, while useful for the casual reader of science, is not suitable at all for one who spends a lifetime of depth inside a subject. This would include, for instance, a person studying impacting factors on autism in their child, or persons researching the effect of various supplements on their health. Not to mention, of course, that the need to look beyond this small group of publications applies to scientists who spend a life committed to their subject as well.

One will never arrive at truth by tossing out 88% of scientific studies right off the bat.

2.  Most scientists do not work for major universities. Fewer than 15% of scientists ever get to participate in this sector even once in their career.² This again is a shade-of-gray replication of the overly stringent filtering bias recommended in point 1 above. I have employed over 100 scientists and engineers over the years, persons who have collectively produced groundbreaking studies. Almost none ever worked for a major university; perhaps 1 or 2 spent a year inside university-affiliated research institutes. Point 2 is simply a naive standard which can only result in filtering out everything except what one is looking for. One must understand that, in order to survive in academia, one must be incrementally brilliant and not what might be even remotely considered disruptively brash. Academics bask in the idea that their life’s work and prejudices have all panned out to come true. The problem with this King Wears No Clothes process is that it tends to stagnate science, and not provide the genesis of great discovery.

One will never arrive at truth by ignoring 85% of scientists, right off the bat.

3.  There are roles for both specialty journals and generalized journals. There is a reason for this, and it is not to promote ‘bogus pseudoscience’ as the blog author implies (note his context-framing statement quoted above). A generalized journal maintains resource peers to whom it issues subject matter for review; it is not claiming peer evaluation to be its sole task. Larger journals can afford this, but not all journals can. Chalk this point up to naivete as well. Peer review requires field qualification; journal publication, in general, does not necessarily. Sometimes they are one and the same, sometimes not. Again, if this is applied without wisdom, such naive discrimination can result in a process of personal filtering bias, and not stand as a suitable standard identifying acceptable science.

One will never arrive at truth by viewing science peer review as a sustainable revenue club. Club quality does not work.

4.  Check for the parallel nature of the question addressed in the article’s premise, methodology, results, title and conclusion. Article writers know all about the trick of readers simply reading abstracts and summaries. They know 98% of readers will only look this far, or will face the requisite $25 to gain access beyond the abstract. If the question addressed is not the same throughout, then there could be an issue. As well, check the expository or disclosure section of the study or article. If it consists, even in part, of a polemic focusing on the bad people, or the bad ideas, or the bad industry player – then the question addressed in the methodology may have come from bias in the first place. Note: blog writing constitutes this type of writing. A scientific study should be disciplined to the question at hand, be clear on any claims made, and as well on any preliminary disclosures which help premise, frame, constrain, or improve the predictive nature of the question. Blogs and articles do not have to do this; however, neither are they scientific studies. Know the difference.

Writers know the trick – that reviewers will only read the summary or abstract. The logical calculus of a study resides below this level. So authors err toward favoring established ideas in abstracts.

5.  Claims make sense with respect to the context in which they are issued and the evidence by which they are backed. Do NOT check to see if you believe the claims or whether they make some kind of ‘Occam’s Razor’ sense. This is a false standard of ‘I am the science’ pretense taught by false skepticism. Instead, understand what the article is saying and what it is not saying – and avoid judging the article based on whether it says something you happen to like or dislike. We often call this ‘sense’ – and incorrectly so. It is bias.

Applying personal brilliance to filter ideas – brilliance which you learned from only 12% of publication abstracts and 15% of scientists who played the game long enough – is called gullibility.

It is not that the body of work vetted by such criteria is invalid; rather simply that to regard science as only this is short-sighted and bears fragility. Instead of these Pollyanna 5 guidelines, the ethical skeptic will choose to understand whether or not the study or article in question is based upon the standards of what constitutes good Wittgensteinian and Popperian science. This type of study can be conducted by private-lab or independent researchers too. One can transcend the Pollyanna 5 questions above by asking the ten simple questions regarding any material, outlined in the graphic at the top of this article. Epoché is exercised by keeping their answers in mind, without prejudice, as you choose to read onward. Solutions to problems come from all levels and all types of contributors. This understanding constitutes the essence of wise versus naive science.

“Popper holds that there is no unique methodology specific to science. Science, like virtually every other human, and indeed organic, activity, Popper believes, consists largely of problem-solving.”³

There are two types of people: those who wish to solve the problem at hand, and those who already had it solved – for whom it never was a problem to begin with, but rather simply an avenue of club-agenda expression or profit/career creation.

Let’s be clear here: if you have earned tenure as an academic or journal reviewer, or a secure career position which pays you a guaranteed $112,000 a year from age 35 until the day you retire, this is the same as holding a bank account with $2,300,000 in it at age 35† – even net of the $200,000 you might have invested in school. You are a millionaire. So please do not advertise the idea that scientists are all doing this for the subject matter.

$2.3 million (or more in sponsorship) is sitting there waiting for you to claim it – and all you have to do is say the right things, in the right venues, for long enough.
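
For readers who want to check the arithmetic, here is a minimal sketch of the annuity present-value computation behind that figure, using only the parameters given in the † footnote (456 monthly payments of $9,333 at 0.25% interest per payment period):

    # Present value of an ordinary annuity – the parameters from the † footnote:
    # 456 monthly payments of $9,333 at 0.25% interest per payment period.
    def present_value(payment, rate, periods):
        """PV of a level payment stream with a zero ending balance."""
        return payment * (1 - (1 + rate) ** -periods) / rate

    pv = present_value(payment=9_333, rate=0.0025, periods=456)
    print(f"Gross present value at age 35: ${pv:,.0f}")           # roughly $2.54 million
    print(f"Net of $200,000 in schooling:  ${pv - 200_000:,.0f}")  # roughly $2.3 million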

This process of depending solely upon tier I journals is an exercise in industry congratulationism. There has to be a better way to vet scientific study …and there is. The following is all about telling which ilk of person is presenting an argument to you.

The Ten Questions Differentiating Good Science from Bad

Aside from examining a study’s methodology and logical calculus itself, the following ten questions are what I employ to guide me as to how much agenda and pretense has been inserted into its message or methodology. There are many species of contention; eight at the least if we take the combinations of the three bisected axes in the graph to the right, and twenty-four permutations if we take the sequence in which the logic is contended (using falsification to promote an idea, versus promoting the idea that something ‘will be falsified under certain constraints’, etc.). In general, what I seek is an assessment of how many ideas the author is seeking to refute or promote, with what type of study, and with what inductive or deductive approach. An author who attempts to dismiss too many competing ideas, via a predictive methodology supporting a surreptitiously promoted antithesis which cannot possibly evaluate a critical theoretical mechanism – this type of study or article possesses a great likelihood of delivering bad science. Think about the celebrity skeptics you have read. How many competing ideas are they typically looking to discredit inside their material, and via one mechanism of denial (usually an apothegm and not a theoretical mechanism)? The pool comprises 768 items – many to draw from – and draw from this, they do.
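
For concreteness, the ‘eight species’ follow from the 2 × 2 × 2 combinations of the three bisected axes identified in questions III–V below (falsification vs. prediction, denial vs. promotion, single idea vs. group of ideas). A brief sketch simply enumerating those combinations – the labels are shorthand for illustration, not the author’s formal taxonomy:

    from itertools import product

    # The three bisected axes named in questions III-V:
    z_axis = ("falsifies a mechanism", "predicts via a mechanism")   # question III
    x_axis = ("denies idea(s)", "promotes idea(s)")                  # question IV
    y_axis = ("a single idea", "a group of ideas")                   # question V

    species = list(product(z_axis, x_axis, y_axis))
    for number, combo in enumerate(species, start=1):
        print(number, "-", ", ".join(combo))

    print(len(species), "species of contention")   # prints: 8 species of contention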

Let’s be clear here – a study can pass major journal peer review and possess acceptable procedural/analytical methodology, and yet say or imply absolutely nothing for the most part; ultimately being abused (or abusing its own research in extrapolating its reach) to say things which the logical calculus involved would never support (see Dunning-Kruger Abuse). Such conditions do not mean that the study will be refused peer review. Peer reviewers rarely ever contend (if they disregard the ‘domain of application’ part of a study’s commentary):

“We reject this study because it could be abused in its interpretation by malicious stakeholders.” (See example here: http://www.medicaldaily.com/cancer-risks-eating-gmo-corn-glyphosate-vs-smoking-cigarettes-according-411617)

Just because a study is accepted for and passes peer review does not mean that all of its extrapolations, exaggerations, implications or abuses are therefore true. You, as the reader, are the one who must apply the sniff test as to what the study is implying, saying, or being abused to say. What helps a reader avoid this? Those same ten questions from above.

The ten questions I have found most useful in discerning good science from bad are formulated based upon the following Popperian four-element premise.³ All things being equal, better science is conducted in the case wherein

  • one idea is
  • denied through
  • falsification of its
  • critical theoretical mechanism.

If the author pulls this set of four things off successfully, and eschews promotion of ‘the answer’ (the congruent context to having disproved a myriad set of ideas), then the study stands as a challenge to the community and must be sought out for replication (see question IX below). For the scientific community at large to ignore such a challenge is the genesis of (our pandemic) pluralistic ignorance.

For instance, in one of the materials research labs I managed, we were tasked by an investment fund and their presiding board to determine the compatibility of titanium with various lattice state effects analogous to iron. The problem, however, is that titanium is not like iron at all. It will not accept the same interstitial relationships with other small-atomic-radius elements that iron will (boron, carbon, oxygen, nitrogen). We could not pursue the question the way the board posed it: “Can you screw with titanium in exotic ways to make it more useful to high-performance aircraft?” We first had to reduce the question into a series of salient, then sequitur, Bayesian reductions. The first question to falsify was “Titanium maintains its vacancy characteristics at all boundary conditions along the gamma phase state?” Without an answer (falsification) to this single question, not one single other question related to titanium could be answered in any way, shape or form. Most skeptics do not grasp this type of critical path inside streams of logical calculus. This is an enormous source of confusion and social ignorance. Even top philosophers and celebrity skeptics fail this single greatest test of skepticism. And they are not held to account because few people are the wiser, and the few who are wise to it keep quiet to avoid the jackboot ignorance enforced by the Cabal.

Which introduces and opens up the more general question of “What indeed, all things being considered, makes for good, effective science?” This can be lensed through the ten useful questions below, applied in the same fashion as the titanium example case:

I. Has the study or article asked and addressed the 1. relevant, 2. salient, 3. sound and 4. critical path next question under the scientific method?

If it has accomplished this, it is already contending for tier I science, as only a minority of scientists understand how to pose reductive study in this way. A question can be relevant, but not salient to the question at hand. This is the most common trick of pseudoscience. The question can also be relevant and salient, yet be asked in incorrect sequence, so as to frame its results in a prejudicial light. If this diligence has not been done, then do not even proceed to the next questions II through VII below. Throw the study in the waste can. Snopes is notorious for this type of chicanery. The material is rhetoric, targeting a victim group, idea or person.

If the answer to this is ‘No’ – Stop here and ignore the study. Use it as an example of how not to do science.

II. Did the study or article focus on utilization of a critical theoretical mechanism which it set out to evaluate for validity?

The litmus which differentiates a construct (an idea or framework of ideas) from a theory is that a theory contains a testable and critical theoretical mechanism. Was the critical theoretical mechanism identified and given a chance for peer input prior to its establishment? Or was it just assumed as valid by a small group, or one person? For instance, a ‘DNA study’ can examine three classes of DNA: mtDNA, autosomal DNA, or Y-DNA. If it is a study of morphology, yet examines only the Y-DNA for example, then the study is a fraud. Y-DNA informs paternal lineage, and bears little on morphology or overall genetic makeup. This would be an example of an invalid critical test mechanism (probably slipped by as an unchallenged assumption).

If the answer to this is ‘No’ – Regard the study or article as an opinion piece, or worse a propaganda piece, and not of true scientific incremental value.

III.  Did the study or article attempt to falsify this mechanism, or employ it to make predictions? (z-axis)

Karl Popper outlined that good science involves falsification of alternative ideas or the null hypothesis. However, given that 90% of science cannot be winnowed through falsification alone, it is generally recognized that a theory’s predictive ability can act as a suitable critical theoretical mechanism via which to examine and evaluate it. Evolution was accepted through just such a process. In general, however, falsified mechanisms are regarded as stronger science than successfully predictive ones. A second question to ask is: did the study really falsify the mechanism being tested for, or did it merely suggest possible falsity? Watch for this trick of pseudoscience.

If the study or article sought to falsify a theoretical mechanism – keep reading with maximum focus. If the study used predictive measures – catalog it and look for future publishing on the matter.

IV.  Did the study or article attempt to deny specific idea(s), or did it seek to promote specific idea(s)? (x-axis)

Denial and promotion of ideas is not, standing alone, a discriminating facet of this issue. What is significant here is how it interrelates with the other questions. In general, attempting to deny multiple ideas or to promote a single idea is regarded as less scientific than the approach of denying a single idea – especially if one is able to bring falsification evidence to bear on the critical question and theoretical mechanism. Did the study authors seem to have a commitment to certain jargon or prejudicial positions prior to the results being obtained? Also watch for the condition where a cherry-picked test mechanism may appear to be a single-item test, yet is employed to deny an entire series of ideas as a result. This is not actually a condition of single-idea examination, though it may appear to be so.

Simply keep the idea of promotion and denial in mind while you consider all other factors.

V.  Did the study affix its contentions on a single idea, or a group of ideas? (y-axis)

In general, incremental science and most of discovery science work better when a study focuses on one idea for evaluation and not a multiplicity of ideas. This minimizes extrapolation and special-pleading loopholes or ignorance – both deleterious to a study. Prefer authors who study single ideas over authors who try to make evaluations upon multiple ideas at once. The latter is not a wise undertaking even in the instance where special pleading can theoretically be minimized.

If your study author is attempting to tackle the job of denying multiple ideas all at once – then the methodical cynicism alarm should go off. Be very skeptical.

VI.  What percent of the material was allocated towards ideas versus the more agenda oriented topics of persons, events or groups?

If the article or study spends more than 10% of its Background material focused on persons, events or groups it disagrees with, throw the study in the trash. If any other section contains such material above 0%, then the study should be discarded as well. Eleanor Roosevelt is credited with the apothegm “Great minds discuss ideas; average minds discuss events; small minds discuss people.” Did the study make a big deal about its ‘accoutrements and processes of science’ in an attempt to portray the appearance of legitimacy? Did the study sponsors photograph themselves wearing face shields and lab coats and writing in notebooks? This is often pretense and promotion – beware.

Take your science only from great minds focusing on ideas and not events or persons.

As well, if the author broaches a significant amount of material which is related but irrelevant or non-salient to the question at hand, you may be witnessing an obdurate, polemic or ingens vanitatum argument. Do not trust a study or article where the author appears to be demonstrating how much of an expert they are in the matter (through addressing related but irrelevant, non-salient or non-sequitur material). Be very skeptical of such publications.

VII. Did the author put an idea, prediction or construct at risk in their study?

Fake science promoters always stay inside well-established lines of social safety, so that they are 1) never found wrong, 2) don’t bring the wrong type of attention to themselves (remember the $2.3+ million which is at stake here), and 3) can imply their personal authority inside their club as an opponent-inferred appeal in arguing. They always repeat the correct apothegm, and always come to the correct conclusion. Did the study sponsor come in contending that they ‘can do the study quickly’, followed by a low-cost and ‘simple’ result which conformed with a pre-selected answer? Don’t buy it.

Advancing science always involves some sort of risk. Do not consider those who choose paths of safety, familiarity and implied authority to possess any understanding of science.

VIII.  What type of study was it? (In order of increasing gravitas)

1.  Psychology or Motivation (Pseudo-Theory – Explains Everything)

2.  Meta-Data – Studies of Studies (Indirect Data Only, vulnerable to Simpson’s Paradox or Filtering/Interpretive Bias)

3.  Data – Cohort and Set Measures (Direct but still Data Only)

4.  Direct Measurement Observation (Direct Confirmation)

5.  Inductive Consilience Establishment (Preponderance of Evidence from Multiple Channels/Sources)

6.  Deductive Case Falsification (Smoking Gun)

All it takes in order to have a strong study is one solid falsifying observation. This is the same principle embodied inside the apothegm ‘It only takes one white crow to falsify the idea that all crows are black’.
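
If it helps to keep the ordering straight, here is a purely illustrative sketch that treats question VIII’s scale as an ordinal ranking – the dictionary labels and the comparison helper are my own shorthand, not the author’s formal classification:

    # Question VIII's gravitas scale, from weakest (1) to strongest (6).
    GRAVITAS = {
        "psychology or motivation (pseudo-theory)": 1,
        "meta-data (studies of studies)": 2,
        "data (cohort and set measures)": 3,
        "direct measurement observation": 4,
        "inductive consilience": 5,
        "deductive case falsification (smoking gun)": 6,
    }

    def stronger(study_a, study_b):
        """Return whichever study class carries more gravitas."""
        return max(study_a, study_b, key=GRAVITAS.__getitem__)

    print(stronger("meta-data (studies of studies)",
                   "deductive case falsification (smoking gun)"))
    # prints: deductive case falsification (smoking gun)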

IX.  When the only viable next salient and sequitur reductive step, post-study, is to replicate the results – then you know you have a strong argument inside that work.

X.  Big data and meta-analysis studies like to intimidate participants in the scientific method with the implicit taunt “I’m too big to replicate, bring consensus now.”

These questions, more than anything else, will allow the ethical skeptic to begin to grasp what is reliable science and what is questionable science – especially in the context where one can no longer afford to dwell inside only the lofty 5% of the highest-regarded publications, or can no longer stomach the shallow talking-point sheets of social skepticism; all of which serve only to ignore or give short shrift to the ideas to which one has dedicated a life of study.

epoché vanguards gnosis


¹  Poliandri, Ariel; “A guide to detecting bogus scientific journals”; Sci – Phy, May 12, 2013; http://sci-phy.com/detecting-bogus-scientific-journals/

²  Beryl Lieff Benderly, “Does the US Produce Too Many Scientists?”; Scientific American, February 22, 2010; https://www.scientificamerican.com/article/does-the-us-produce-too-m/

³  Thornton, Stephen, “Karl Popper”, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/popper/>

†  Present Value of future cash flows with zero ending balance: 456 payments of $9,333 per month at 0.25% interest per payment period.

February 25, 2017 | Agenda Propaganda, Social Disdain

Abuse of the Dunning-Kruger Effect

When does a Dunning-Kruger misapplication flag the circumstance of an ad hominem attack by a claimant who sees themselves as superior-minded? When you observe it being applied in situations and domains inside of which the study authors, Kruger and Dunning, never intended it to apply. It behooves the ethical skeptic to actually read the studies which are purported, at face value, to back habitual social skeptic condemnation tactics. Knowing how not to commit a Dunning-Kruger Effect error in application is, ironically, a key indicator of one’s competency under a Dunning-Kruger perspective in the first place.

A saying about the wisdom of self-knowledge is attributed to Thomas Jefferson: “He who knows best, best knows how little he knows.” This quote is actually highlighted inside a celebrated study by Cornell University psychologists Justin Kruger and David Dunning, commonly referred to as the ‘Dunning-Kruger Effect’ study. Indeed, this principle elicited by Jefferson is embodied inside two of the Eight Tropes of Ethical Skepticism:

I.    There is critically more we do not know, than we do know.

II.   We do not know, what we do not know. Only a sub-critical component of mankind effectively grasps this.

One wonders if Thomas Jefferson, in recognizing this human foible, would have been wiser not to attempt his bold assertions inside of “A Declaration by the Representatives of The United States of America, In General Congress Assembled.”¹ This haughty document, certainly venturing into an arena in which Jefferson himself held no personal degree or particular expertise, represented a projection into a subject about which he could not possibly have possessed competency. Surely this is a case of Dunning-Kruger ‘fallacy’ if ever one was observed – an enormous boast of unseemly levels of claim to knowledge (ones which make socialist and social skeptics uncomfortable to this very day):

We hold these truths to be self-evident: that all men are created equal; that they are endowed by their creator with inherent and certain inalienable rights; that among these are life, liberty, & the pursuit of happiness: that to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed; that whenever any form of government becomes destructive of these ends, it is the right of the people to alter or abolish it, & to institute new government, laying it’s foundation on such principles, & organizing it’s powers in such form, as to them shall seem most likely to effect their safety & happiness.¹

But wait – how can ‘the people’ possibly abolish and institute when they are overestimating their competence (Dunning-Kruger) inside the subject of government? Surely they must be ‘anti-government’. Indeed, the Eight Tropes of Ethical Skepticism continue by outlining that the necessity of knowledge – even in the absence of knowledge – is to observe and correct when a party (even government or science) seeks to control by means of ignorance, provisional knowledge, methodical cynicism and authority alone (the basic elements of Social Skepticism). It holds that our common service to each other, and our love for our fellow man and his plight, underpin with utmost importance our qualification to observe and direct the processes of knowledge. In this instance, regarding Jefferson and those who crafted our first ideas of what a government was to be, courage, risk and personal gumption outweighed calls for caution – specifically because of the particular need, benefit or danger entailed. Dunning-Kruger indeed did not apply, because the situation dictated actions of character on the part of an agent of change (see #12 below). This is most often the circumstance which we as ethical skeptics face today.

Dunning-Kruger awareness does not apply as a fallacy of disqualification in such circumstances. This awareness, about both the limits of knowledge and when a Dunning-Kruger Effect does and does not apply, relates directly to the Eight Tropes of Ethical Skepticism. Several species of error in application arise under a logical calculus which seeks to survey the landscape of the Dunning-Kruger Effect:

Dunning-Kruger Abuse (ad hominem)

/philosophy : pseudo-science : fascism/ : a form of ad hominem attack. Inappropriate application of the Dunning-Kruger fallacy in circumstances where it should not apply; instances where every person has a right, responsibility or qualification as a victim/stakeholder to make their voice heard, despite not being deemed a degree, competency or title holding expert in that field.

This circumstance of employment stands in stark contrast with legitimate circumstances where the Dunning-Kruger Effect does indeed apply – including those circumstances where, ironically, a fake skeptic is not competent enough to identify a broader circumstance of Dunning-Kruger in themselves and their favored peers (several species below).

Dunning-Kruger Effect

/philosophy : misconception : bias/ : an effect in which incompetent people fail to realize they are incompetent because they lack the skill or maturity to distinguish between competence and incompetence among their peers

A principle which serves to introduce the ironic forms of Dunning-Kruger Effect employed skillfully by Social Skepticism today:

Dunning-Kruger Denial

/philosophy : pseudo-science : false skepticism : social manipulation/ : the manipulation of public sentiment and perceptions of science, and/or condemnation of persons through skillful exploitation of the Dunning-Kruger Effect. This occurs in five speciated forms:

Dunning-Kruger Exploitation

/philosophy : pseudo-science : fascism/ : the manipulation of unconsciously incompetent persons or laypersons into believing that a source of authority expresses certain opinions, when in fact the persons can neither understand the principles underpinning the opinions, nor critically address the recitation of authority imposed upon them. This includes the circumstance where those incompetent persons are then included in the ‘approved’ club solely because of their adherence to proper and rational approved ideas.

Dunning-Kruger Milieu

/philosophy : pseudo-science : fascism/ : a circumstance wherein either errant information or fake-hoaxing exists in such quantity under a Dunning-Kruger Exploitation circumstance, or a critical mass of Dunning-Kruger Effect population is present, such that core truths, observations, principles and effects surrounding a topic cannot be readily communicated or discerned as distinct from misinformation, propaganda and bunk.

Dunning-Kruger Projection (aka Plaiting)

/philosophy : misconception : bias/ : the condition in which an expert, scientist or PhD in one discipline over-confidently or ignorantly fails to realize that they are not competent to speak in another discipline, or attempts to pass authority ‘as a scientist’ inside an expertise set to which they are only mildly competent at best. Any attempt to use the identity of ‘scientist’ to underpin authority, bully, seek worship or conduct self-deception regarding an array of subjects inside of which they in actuality know very little.

Non-Equivalence of Competence

/philosophy : deception : bias or method/ : I don’t have to be competent on a subject in order to ascertain that you are incompetent on that subject.

Dunning-Kruger Skepticism

/philosophy : misconception : bias/ : an effect in which incompetent people making claim under ‘skepticism’ fail to realize they are incompetent both as skeptics and inside the subject matter at hand. Consequently, they will fall easily for an argument of social denial/promotion because they

1.  lack the skill or maturity to distinguish between competence and incompetence among their skeptic peers and/or are

2.  unduly influenced by a condition of Dunning-Kruger Exploitation or Milieu, and/or are

3.  misled by false promotions of what is indeed skepticism, or possess a deep seated need to be accepted under a Negare Attentio Effect.

Dunning-Kruger Denial is a chief objective of social skepticism. So it was not surprising that social skepticism recognized this overall malady first, as exploiting its ad hominem potential is one of the principal tactics of fake skepticism.

Nonetheless, back to the principal context of this blog – fair contextual application of the actual underlying Dunning-Kruger principles, framed in a simpler and more condensed expression:

One does not possess the right, to dismiss the rights of others – by means of a Dunning-Kruger Effect accusation.

What the Kruger and Dunning Study Did Say

The famously heralded study, conducted by Justin Kruger and David Dunning inside the Department of Psychology of Cornell University in 1999, implied the importance of recognizing when one has exceeded their competency in a given field relative to their peers in that field – and the importance of keeping mute/inactive in circumstances where this could serve to embarrass or endanger. A study which would certainly have been embraced by the Royalist or Tory in the day of Thomas Jefferson. More specifically, the study outlined four predictions (below) which were observed among 60–90 Cornell University undergraduate first-year students.

(Note: This is certainly a Dunning-Kruger commentary in itself as to Kruger and Dunning’s ability to develop unbiased inclusion criteria which would or would not serve to amplify the desired effect. Have you ever known an undergraduate freshman who did not overestimate their success in an upcoming exam or evaluation? This is the definition of a freshman.

Scientific parsimony would have been applicable here, especially from the perspective of selecting a source-S sample pool of silver-spooned Ivy-Leaguers who have been told their entire lives that they are the smartest person in the room/building. This is like testing whether fights will break out when two people hit each other, by conducting surveys inside a drunken London mosh pit full of Manchester United and Arsenal football clubbers. It is stupidity dressed up in lab coats – an epistemologically shallow, if not elegant, convenience of social skeptic tradecraft. A common produit-de-célèbre on their part, especially among psychology PhDs.

What they observed, in fact, was the unique nutrient solution of psychology and social pressure which serves to cultivate our brood of social skeptics. These test subjects and their indoctrinated peers will be sure never to step out of line, or speak up when they might be afraid, ever again. See #11 below.)

Given this skewed inclusion criteria group – one with which Kruger and Dunning were very familiar, and inside of which they had already borne an intuitive estimation of a positive result – four predictions from the surveys were developed and confirmed:

Prediction 1. Incompetent individuals, compared with their more competent peers, will dramatically overestimate their ability and performance relative to objective criteria.

Prediction 2. Incompetent individuals will suffer from deficient metacognitive skills, in that they will be less able than their more competent peers to recognize competence when they see it–be it their own or anyone else’s.

Prediction 3. Incompetent individuals will be less able than their more competent peers to gain insight into their true level of performance by means of social comparison information. In particular, because of their difficulty recognizing competence in others, incompetent individuals will be unable to use information about the choices and performances of others to form more accurate impressions of their own ability.

Prediction 4. The incompetent can gain insight about their shortcomings, but this comes (paradoxically) by making them more competent, thus providing them the metacognitive skills necessary to be able to realize that they have performed poorly.²

None of the cautions herein, of course, serve to invalidate the effect Kruger and Dunning (and others since) have cited in the referenced study. These cautions simply function as a sentinel, flagging conditions wherein such a study might be abused for social ends. To that end, let us discuss some of those circumstances where a social skeptic might abuse such a study as a means of demanding conformance through social ridicule, on issues they are seeking to promote.

When Dunning-Kruger Effect Does Not Apply

A reasonable man would suppose that overestimating one’s ability to adeptly handle the intricate subtleties of a Dunning-Kruger accusation stands as a form of Dunning-Kruger fallacy in itself. But that does not inhibit our self-appointed elite, the social skeptic, from slinging around the accusation with all the adeptness of a demolitions expert in a porcelain factory. The sad reality is that the majority of instances in which I have seen the accusation foisted have been instances of invalid usage. In other words, as the social skeptic interprets this study and instructs their sycophants as to its employment, they and their disciples are now scientifically justified (remember, they represent science) in making the following accusations.

How the four findings of the Dunning-Kruger study are abused in the anosognosia-vulnerable mind:

  1. People whom I do not like, do stupid things.
  2. People whom I do not like, fail to recognize how smart I am.
  3. People whom I do not like, fail to recognize how stupid they are.
  4. It is simply a matter of me training the stupid, because as they become more informed like me, they will become less stupid and recognize stupidity in others.

Do you see the sales cycle evolving here? This is a religious pitch used by fundamentalist Christianity. They could print this up in a tract and hand it out inside airport bathrooms. In other words, what the Dunning-Kruger misapplication has introduced is an act of social anosognosia (a deficit of self-awareness) on the part of those who see themselves as superior-minded. This relates to the more complex comparatives between Intelligence and Rationality, a perception on the part of social skeptics which we addressed in an earlier blog.

Intelligence is smart people who do or think unauthorized things. Rationality is smart people who do or think correct things. Social Skepticism is about knowing the difference.

Ethical Skepticism says ‘Bullshit’ to this line of reasoning.

Which introduces the final point set of this blog: circumstances where the Dunning-Kruger Effect does not bear applicability – instances where the sociopathology of the anosognosiac has crossed the line into abuse of both the Dunning-Kruger Effect and, more importantly, those around them:

Specific instances in which the Dunning-Kruger Effect does not apply include:

1.  In matters of Public Policy.

e.g. ∈ You have the right to speak up about contaminants in your food, you do not have to be a chemist or agricultural scientist.

2.  In matters of Voting, Political Voice and Will.

e.g. ∈ You have the right to speak up about foreign trade policy and jobs, you do not have to be a degree holding economist.

3.  In situations where professionals and non-professionals are involved. Dunning-Kruger is speaking about continuous scale comparatives between peers, not discrete breakouts between groups, as in the case of professionals and various tiers of non-professionals (from layman to dilettante) in a given discipline. From the ‘notes/discussion’ section of the Kruger and Dunning study itself:

“There is no categorical bright line that separates “competent” individuals from “incompetent” ones. Thus, when we speak of “incompetent” individuals we mean people who are less competent than their peers.”²

e.g. ∈ You have the right to speak up about where NASA’s space programs are headed, you do not have to be an astrophysicist or on NASA’s advisory board.

4.  When the speaker is a victim of corporate, governmental, mafia, criminal, supposed or real expert actions or fraud.

e.g. ∈ You have the right to speak up about your vaccine injured child, you do not have to be an epidemiologist or medical doctor.

5.  In matters where there is more unknown than is known, or where science has studied very little.

e.g. ∈ Einstein bore the right to speak up about Special Relativity while simply an entry-level patent examiner; he was not disqualified by a previous academic C-average, nor by his not holding a PhD.

6.  In matters where competency in reality comprises only a few memorized facts, procedures or trivia concerning the subject.

e.g. ∈ You have the right to speak up about water contamination in your community, you do not have to be involved in constructing assay sheets at your local processing plant.

7.  In matters where social conformance is conflated with competency (i.e. social skeptic ‘rationality’).

e.g. ∈ You have the right to speak up about science ignoring an important issue observed in your local community, you do not have to be a degree holding scientist in that arena.

8.  In matters of personal financial and household management.

e.g. ∈ You have the right to organize community to refuse a tax levied on your home for unfair reasons, you do not have to be a career politician or expert in the subject which is funded by the tax itself.

9.  In matters of personal health, disease prevention and health management.

e.g. ∈ You have the right to speak up about things harming your family’s health, you do not have to be a member of Science Based Medicine.

10.  In matters of personal religious practice or choice of faith.

e.g. ∈ You have the right to say that you observed something extraordinary or miraculous from a spiritual perspective, you do not have to be a priest or scientist.

11.  In any matter or circumstance where the Dunning-Kruger Effect is employed to intimidate or create compliance by means of fear/ridicule.

e.g. ∈ You have the right to speak up about unbridled immigration and population dumping, this does not make you a racist. You have the freedom and right to identify such things as acts of war.

12.  When courage, risk and personal gumption override calls for caution because the need, benefit or danger entailed dictate actions of character on the part of an agent of change.

e.g. ∈ You have the right to speak up about VINDA Autism, you do not have to be a Centers for Disease Control professional, in order to demand third party review of ‘settled science.’

The study authors, had they been following the protocols of science, should have included points such as these in their commentary and counter-point acknowledgement sections. This is what ethical skeptics, and scientists for that matter, do: they remain aware of and allow for counter-point arguments. They regard them as matters of importance. Unfortunately, save for number 3 above (and only in part even for that one), Kruger and Dunning did not bear such circumspection about their own findings in their work. Another shortfall in scientific method.

Knowing how not to use a weapon is the supreme qualification for a user of that weapon. Knowing how not to commit a Dunning-Kruger Effect error in its application is, ironically, a key indicator of one’s competency under a Dunning-Kruger perspective in the first place.

epoché vanguards gnosis


¹  The Works of Thomas Jefferson: A DECLARATION BY THE REPRESENTATIVES OF THE UNITED STATES OF AMERICA, IN GENERAL CONGRESS ASSEMBLED; http://oll.libertyfund.org/titles/800#Jefferson_0054-01_104

²  Kruger, Justin; Dunning, David; “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”; Journal of Personality and Social Psychology, American Psychological Association, December 1999, Vol. 77, No. 6, 1121–1134. A study conducted by survey of a series of Cornell University undergraduates regarding competency and self-perception, meta-cognition and projection. (http://gagne.homedns.org/~tgagne/contrib/unskilled.html)

May 12, 2016 | Agenda Propaganda, Argument Fallacies
