The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

Discerning Sound from Questionable Science Publication

Non-replicable meta-analyses published in tier I journals do not constitute the preponderance of good source material available to the more-than-casual researcher. This faulty idea stems from a recently manufactured myth on the part of social skepticism. Accordingly, the life-long researcher must learn techniques beyond the standard pablum pushed by social skeptics – discerning techniques which afford a superior ability to tell good science from bad, through more than shallow cheat sheets and publication social-ranking classifications.
The astute ethical skeptic is very much this life-long and in-depth researcher. For him or her, ten specific questions can serve to elucidate this difference inside that highly political, complicated and unfair playing field called science.

Recently, a question was posed to me by a colleague concerning the ability of everyday people to discern good scientific work from dubious efforts. A guide had been passed around inside her group, a guide which touted itself as a brief on 5 key steps for pinpointing questionable or risky advising publications. The author cautioned appropriately that “This method is not infallible and you must remain cautious, as pseudoscience may still dodge the test.” He failed of course to mention the obvious additional risk: that the method could serve to screen out science which either 1) is good but cannot possibly muster the credential, funding and backing needed to catch the attention of crowded major journals, or 2) is valid, but is also screened by power-wielding institutions which have the resources, connections and possible motive to block research on targeted ideas. The article my friend’s group was circulating constituted nothing but a Pollyanna, wide-eyed and apple-pie view of the scientific publication process. One bereft of the scarred knuckles and squint-eyed wisdom requisite in discriminating human motivations and foibles.

There is much more to this business of vetting ideas than simply identifying the bad people and the bad subjects. More than simply crowning the conclusions of ‘never made an observation in my life’ meta-analyses as the new infallible standard of truth.

Scientific organizations are prone to the same levels of corruption, bias, greed and desire to get something for as little input as possible as the rest of the population. Many, or hopefully even most, individual scientists buck this mold, certainly, and are deserving of utmost respect. However, even their best altruism is checked by organizational practices which ensure that those who crave power are dealt their more-than-ample share of fortune, fame and friar-hood. They will gladly sacrifice the best of science in this endeavor. And in this context of human wisdom it is critical that we keep watch.

If you are a casual reader of science, say consuming three or four articles a month, then certainly the guidelines outlined by Ariel Poliandri below, in his blog entitled “A guide to detecting bogus scientific journals”, represent a suitable first course on the menu of publishing wisdom.¹ In fact, were I offered this as the basis of a graduate school paper, it would be appropriately and warmly received. But if this is all you had to offer the public after 20 years of hard fought science, I would aver that you had wasted your career therein.

1 – Is the journal a well-established journal such as Nature, Science, Proceedings of the National Academy of Sciences, etc.?
2 – Check authors’ affiliations. Do they work in a respectable University? Or do they claim to work in University of Lala Land or no university at all?
3 – Check the Journal’s speciality and the article’s research topic. Are the people in the journal knowledgeable in the area the article deals with?
4 – Check the claims in the title and summary of the article. Are they reasonable for the journal publishing them?
5 – Do the claims at least make sense?

This list represents simply a non-tenable way to go about vetting your study and resource material – one which ensures that only pluralistic ignorance influences your knowledge base. It is lazy – sure to be right and safe – and useless advisement to a true researcher. The problem with this list resides inside some very simple industry realities:

1.  ‘Well-established journal’ publication requires sponsorship from a major institution. Scientific American cites that 88% of scientists possess no such sponsorship, and this statistic has nothing to do with a scientific group’s relative depth in its subject field.² So this standard, while useful for the casual reader of science, is not suitable at all for one who spends a lifetime of depth inside a subject. This would include, for instance, a person studying factors impacting autism in their child, or persons researching the effect of various supplements on their health. Not to mention of course, the need to look beyond this small group of publications applies to scientists who spend a life committed to their subject as well.

One will never arrive at truth by tossing out 88% of scientific studies right off the bat.

2.  Most scientists do not work for major universities. Fewer than 15% of scientists ever get to participate in this sector even once in their career.² This again is a shade-of-gray replication of the overly stringent filtering bias recommended in point 1 above. I have employed over 100 scientists and engineers over the years, persons who have collectively produced groundbreaking studies. For the most part, none ever worked for a major university. Perhaps 1 or 2 spent a year inside university-affiliated research institutes. Point 2 is simply a naive standard which can only result in filtering out everything with the exception of what one is looking for. One must understand that, in order to survive in academia, one must be incrementally brilliant and not what might be even remotely considered disruptively brash. Academics bask in the idea that their life’s work and prejudices have all panned out to come true. The problem with this King Wears No Clothes process is that it tends to stagnate science, not provide the genesis of great discovery.

One will never arrive at truth by ignoring 85% of scientists, right off the bat.

3.  There are roles for both specialty journals and generalized journals. There is a reason for this, and it is not to promote pseudoscience as the blog author implies (see statement in first paragraph above). A generalized journal maintains resource peers to whom it issues subject matter for review. It is not claiming peer evaluation to be its sole task. Larger journals can afford this, but not all journals can. Chalk this point up to naivete as well. Peer review requires field qualification; journal publication, in general, does not necessarily. Sometimes they are one and the same, sometimes not. Again, if this is applied without wisdom, such naive discrimination can result in a process of personal filtering bias, and not stand as a suitable standard identifying acceptable science.

One will never arrive at truth by viewing science as a club. Club quality does not work.

4.  Check for the parallel nature of the question addressed in the article premise, methodology, results, title and conclusion. Article writers know all about the trick of simply reading abstracts and summaries. They know 98% of readers will only look this far, or will face the requisite $25 to gain access beyond the abstract. If the question addressed is not the same throughout, then there could be an issue. As well, check the expository or disclosure section of the study or article. If it consists, even in part, of a polemic focusing on the bad people, the bad ideas, or the bad industry player – then the question addressed in the methodology may have come from bias in the first place. Note: blog writing constitutes exactly this type of writing. A scientific study should be disciplined to the question at hand, be clear on any claims made, as well as on any preliminary disclosures which help premise, frame, constrain, or improve the predictive nature of the question. Blogs and articles do not have to do this; however, neither are they scientific studies. Know the difference.

Writers know the trick – that reviewers will only read the summary or abstract. The logical calculus of a study resides below this level. So authors err toward favoring established ideas in abstracts.

5.  Claims make sense with respect to the context in which they are issued and the evidence by which they are backed. Do NOT check to see whether you believe the claims or whether they make some kind of ‘Occam’s Razor’ sense. This is a false standard of ‘I am the science’ pretense taught by false skepticism. Instead, understand what the article is saying and what it is not saying – and avoid judging the article based on whether it says something you happen to like or dislike. We often call this ‘sense’ – and incorrectly so. It is bias.

Applying personal brilliance to filter ideas, brilliance which you learned from only 12% of publication abstracts and 15% of scientists who played the game long enough – is called: gullibility.

It is not that the body of work vetted by such criteria is invalid; rather simply that to regard science as only this is short-sighted and bears fragility. Instead of these Pollyanna 5 guidelines, the ethical skeptic will choose to understand whether or not the study or article in question is based upon the standards of what constitutes good Wittgenstein and Popper science. This type of study can be conducted by private-lab or independent researchers too. One can transcend the Pollyanna 5 questions above by asking the ten simple questions regarding any material – outlined in the graphic at the top of this article. Epoché is exercised by keeping their answers in mind, without prejudice, as you choose to read onward. Solutions to problems come from all levels and all types of contributors. This understanding constitutes the essence of wise versus naive science.

“Popper holds that there is no unique methodology specific to science. Science, like virtually every other human, and indeed organic, activity, Popper believes, consists largely of problem-solving.”³

There are two types of people, those who wish to solve the problem at hand, and those who already had it solved, so it never was a problem for them to begin with, rather simply an avenue of club agenda expression or profit/career creation.

Let’s be clear here: If you have earned tenure as an academic or journal reviewer, or a secure career position which pays you a guaranteed $112,000 a year from age 35 until the day you retire, this is the same as holding a bank account with $2,300,000 in it at age 35† – even net of the $200,000 you might have invested in school. You are a millionaire. So please do not advertise the idea that scientists are all doing this for the subject matter.

$2.3 million (or more in sponsorship) is sitting there waiting for you to claim it – and all you have to do is say the right things, in the right venues, for long enough.
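
For the curious, the arithmetic behind that figure is nothing exotic. Below is a minimal sketch of the annuity present-value calculation described in footnote †; the salary, rate and payment count are that footnote's assumptions, and the figures are illustrative only.

```python
# Minimal sketch (illustrative only) of the annuity arithmetic in footnote †:
# 456 monthly payments of $9,333 (~$112,000/yr), discounted at 0.25% per
# payment period, drawn down to a zero ending balance.

payment = 9_333      # dollars per month (footnote's assumption)
rate    = 0.0025     # interest per payment period (footnote's assumption)
n       = 456        # payments: age 35 to retirement (footnote's assumption)

# Standard present value of an ordinary annuity: PV = P * (1 - (1 + i)^-n) / i
pv = payment * (1 - (1 + rate) ** -n) / rate

print(f"Gross present value:           ${pv:,.0f}")             # roughly $2.5 million
print(f"Net of ~$200,000 in schooling: ${pv - 200_000:,.0f}")    # roughly $2.3 million
```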

This process of depending solely on tier I journals is an exercise in industry congratulationism. There has to be a better way to vet scientific study, …and there is. The following is all about telling which ilk of person is presenting an argument to you.

The Ten Questions Differentiating Good Science from Bad

Aside from examining a study’s methodology and logical calculus itself, the following ten questions are what I employ to gauge how much agenda and pretense has been inserted into its message or methodology. There are many species of contention; eight at the least if we take the combinations of the three bisected axes in the graph to the right. Twenty-four permutations if we take the sequence in which the logic is contended (using falsification to promote an idea versus promoting the idea that something ‘will be falsified under certain constraints’, etc.). In general, what I seek to examine is how many ideas the author is attempting to refute or promote, with what type of study, and with what inductive or deductive approach. An author who attempts to dismiss too many competing ideas, via a predictive methodology supporting a surreptitiously promoted antithesis, which cannot possibly evaluate a critical theoretical mechanism – this type of study or article possesses a great likelihood of delivering bad science. Think about the celebrity skeptics you have read. How many competing ideas are they typically looking to discredit inside their material, and via one mechanism of denial (usually an apothegm and not a theoretical mechanism)? The pool comprises 768 items – many to draw from – and draw from this, they do.
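
For what it is worth, the eight species fall straight out of the three bisected axes labelled later in this article (z: falsify/predict, x: deny/promote, y: single idea/group of ideas). A minimal sketch enumerating them – the axis labels are drawn from questions III, IV and V below, and the code is illustrative only:

```python
from itertools import product

# Illustrative sketch only: the eight 'species of contention' implied by the
# three bisected axes this article references (labels drawn from questions
# III, IV and V below).
axes = {
    "z (approach)":   ["falsify", "predict"],
    "x (contention)": ["deny", "promote"],
    "y (scope)":      ["single idea", "group of ideas"],
}

species = list(product(*axes.values()))   # 2 x 2 x 2 = 8 combinations

for i, combo in enumerate(species, start=1):
    print(i, dict(zip(axes.keys(), combo)))

print(f"{len(species)} species of contention")
```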

Let’s be clear here – a study can pass major journal peer review and possess acceptable procedural/analytical methodology – and yet say or implicate absolutely nothing for the most part, ultimately being abused (or abusing its own research by extrapolating its reach) to say things which the logical calculus involved would never support (see Dunning-Kruger Abuse). Such conditions do not mean that the study will be refused peer review. Peer reviewers rarely ever contend (if they disregard the ‘domain of application’ part of a study’s commentary):

“We reject this study because it could be abused in its interpretation by malicious stakeholders.” (See example here: http://www.medicaldaily.com/cancer-risks-eating-gmo-corn-glyphosate-vs-smoking-cigarettes-according-411617)

Just because a study is accepted for and passes peer review does not mean that all its extrapolations, exaggerations, implications or abuses are therefore true. You, as the reader, are the one who must apply the sniff test as to what the study is implying, saying or being abused to say. What helps a reader avoid this? Those same ten questions from above.

The ten questions I have found most useful in discerning good science from bad are formulated based upon the following Popperian four-element premise.³ All things being equal, better science is conducted in the case wherein

  • one idea is
  • denied through
  • falsification of its
  • critical theoretical mechanism.

If the author pulls this set of four things off successfully, and eschews promotion of ‘the answer’ (the congruent context to having disproved a myriad set of ideas), then the study stands as a challenge to the community and must be sought out for replication (see question IX below). For the scientific community at large to ignore such a challenge is the genesis of (our pandemic) pluralistic ignorance.

For instance, in one of the materials research labs I managed, we were tasked by an investment fund and their presiding board to determine the compatibility of titanium with various lattice state effects analogous to iron. The problem exists however in that titanium is not like iron at all. It will not accept the same interstitial relationships with other small atomic-radius class elements that iron will (boron, carbon, oxygen, nitrogen). We could not pursue the question the way the board posed it: “Can you screw with titanium in exotic ways to make it more useful to high performance aircraft?” We first had to reduce the question into a series of salient, then sequitur, Bayesian reductions. The first question to falsify was “Does titanium maintain its vacancy characteristics at all boundary conditions along the gamma phase state?” Without an answer (falsification) to this single question, not one other question related to titanium could be answered in any way, shape or form. Most skeptics do not grasp this type of critical path inside streams of logical calculus. This is an enormous source of confusion and social ignorance. Even top philosophers and celebrity skeptics fail this single greatest test of skepticism. And they are not held to account because few people are the wiser, and the few who are wise to it keep quiet to avoid the jackboot ignorance enforced by the Cabal.

Which introduces and opens up the more general question of ‘What indeed, all things being considered, makes for good effective science?’ This can be lensed through the ten useful questions below, applied in the same fashion as the titanium example case:

I. Has the study or article asked and addressed the 1. relevant and 2. salient and 3. critical path next question under the scientific method?

If it has accomplished this, it is already contending for tier I science, as only a minority of scientists understand how to pose reductive study in this way. If it has not done this, then do not even proceed to the next questions II through VII below. Throw the study in the waste can. Snopes is notorious for this type of chicanery. The material is rhetoric, targeting a victim group, idea or person.

If the answer to this is ‘No’ – Stop here and ignore the study. Use it as an example of how not to do science.

II. Did the study or article focus on utilization of a critical theoretical mechanism which it set out to evaluate for validity?

The litmus which differentiates a construct (idea or framework of ideas) from a theory, is that a theory contains a testable and critical theoretical mechanism. ‘God’ does not possess a critical theoretical mechanism, so God is a construct which cannot be measured or tested to any Popperian standard of science. God is not a theory. Even more so, many theories do not possess a testable mechanism, and are simply defaulted to the null hypothesis instead. Be very skeptical of such ‘theories’.

If the answer to this is ‘No’ – Regard the study or article as an opinion piece and not of true scientific incremental value.

III.  Did the study or article attempt to falsify this mechanism, or employ it to make predictions? (z-axis)

Karl Popper outlined that good science involves falsification of alternative ideas or the null hypothesis. However, given that 90% of science cannot be winnowed through falsification alone, it is generally recognized that a theory’s predictive ability can act as a suitable critical theoretical mechanism via which to examine and evaluate. Evolution was accepted through just such a process. In general however, mechanisms which are falsified are regarded as stronger science over successfully predictive mechanisms.

If the study or article sought to falsify a theoretical mechanism – keep reading with maximum focus. If the study used predictive measures – catalog it and look for future publishing on the matter.

IV.  Did the study or article attempt to deny specific idea(s), or did it seek to promote specific idea(s)? (x-axis)

Denial and promotion of ideas is not a discriminating facet of this issue standing alone. What is significant here is how it interrelates with the other questions. In general, attempting to deny multiple ideas or promote a single idea are techniques regarded as less scientific than the approach of denying a single idea – especially if one is able to bring falsification evidence to bear on the critical question and theoretical mechanism.

Simply keep the idea of promotion and denial in mind while you consider all other factors.

V.  Did the study affix its contentions on a single idea, or a group of ideas? (y-axis)

In general, incremental science and most of discovery science work better when a study focuses on one idea for evaluation and not a multiplicity of ideas. This minimizes extrapolation and special pleading loopholes or ignorance – both deleterious for a study. Prefer authors who study single ideas over authors who try to make evaluations upon multiple ideas at once. The latter task is not a wise undertaking even in the instance where special pleading can theoretically be minimized.

If your study author is attempting to tackle the job of denying multiple ideas all at once – then the methodical cynicism alarm should go off. Be very skeptical.

VI.  What percent of the material was allocated towards ideas versus the more agenda oriented topics of persons, events or groups?

If the article or study spends more than 10% of its Background material focused on persons, events or groups it disagrees with, throw the study in the trash. If any other section contains such material above 0%, then the study should be discarded as well. Eleanor Roosevelt is credited with the apothegm “Great minds discuss ideas; average minds discuss events; small minds discuss people.”

Take your science only from great minds focusing on ideas and not events or persons.

As well, if the author broaches a significant amount of material which is related but irrelevant or non-salient to the question at hand, you may be witnessing an obdurate, polemic or ingens vanitatum argument. Do not trust a study or article where the author appears to be demonstrating how much of an expert they are in the matter (through addressing related but irrelevant, non-salient or non-sequitur material). Be very skeptical of such publications.

VII. Did the author put an idea, prediction or construct at risk in their study?

Fake science promoters always stay inside well established lines of social safety, so that they 1) are never found wrong, 2) don’t bring the wrong type of attention to themselves (remember the $2.3+ million which is at stake here), and 3) can imply their personal authority inside their club as an opponent-inferred appeal in arguing. They always repeat the correct apothegm, and always come to the correct conclusion. They will make a habit of taunting those with redaction.

Advancing science always involves some sort of risk. Do not consider those who choose paths of safety, familiarity and implied authority to possess any understanding of science.

VIII.  What type of study was it? (In order of increasing gravitas)

1.  Psychology or Motivation (Pseudo-Theory – Explains Everything)

2.  Meta-Data – Studies of Studies (Indirect Data Only, vulnerable to Simpson’s Paradox or Filtering/Interpretive Bias – see the sketch following this list)

3.  Data – Cohort and Set Measures (Direct but still Data Only)

4.  Direct Measurement Observation (Direct Confirmation)

5.  Inductive Consilience Establishment (Preponderance of Evidence from Multiple Channels/Sources)

6.  Deductive Case Falsification (Smoking Gun)
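
As an aside on item 2 above, here is a minimal, hypothetical illustration of why aggregated studies-of-studies are vulnerable to Simpson’s Paradox. The groups and numbers are invented solely to exhibit the reversal – a treatment can look worse in the pooled data even though it looks better inside every subgroup:

```python
# Hypothetical illustration of Simpson's Paradox: the treatment looks better
# inside *every* subgroup, yet worse in the pooled (meta) view. All numbers
# are invented solely to exhibit the reversal.

groups = {
    #                 treated: (recovered, total)   untreated: (recovered, total)
    "mild cases":   ((81, 87),                      (234, 270)),
    "severe cases": ((192, 263),                    (55, 80)),
}

pooled = {"treated": [0, 0], "untreated": [0, 0]}

for name, ((t_rec, t_tot), (u_rec, u_tot)) in groups.items():
    print(f"{name}: treated {t_rec / t_tot:.0%} vs untreated {u_rec / u_tot:.0%}")
    pooled["treated"][0]   += t_rec
    pooled["treated"][1]   += t_tot
    pooled["untreated"][0] += u_rec
    pooled["untreated"][1] += u_tot

t_rec, t_tot = pooled["treated"]
u_rec, u_tot = pooled["untreated"]
print(f"pooled:      treated {t_rec / t_tot:.0%} vs untreated {u_rec / u_tot:.0%}")
# Subgroups favor the treatment; the pooled aggregate favors no treatment.
```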

All it takes in order to have a strong study is one solid falsifying observation. This is the same principle as is embodied inside the apothegm ‘It only takes one white crow, to falsify the idea that all crows are black’.

IX.  When the only viable next salient and sequitur reductive step, post study – is to replicate the results – then you know you have a strong argument inside that work.

X.  Big data and meta-analysis studies like to intimidate participants in the scientific method with the implicit taunt “I’m too big to replicate, bring consensus now.”

These questions, more than anything else, will allow the ethical skeptic to begin to grasp what is reliable science and what is questionable science – especially in the context where one can no longer afford to dwell inside only the lofty 5% of the highest regarded publications, or can no longer stomach the shallow talking-point sheets of social skepticism – all of which serve only to ignore or give short shrift to the ideas to which one has dedicated a life of study.



¹  Poliandri, Ariel; “A guide to detecting bogus scientific journals”; Sci – Phy, May 12, 2013; http://sci-phy.com/detecting-bogus-scientific-journals/

²  Beryl Lieff Benderly, “Does the US Produce Too Many Scientists?”; Scientific American, February 22, 2010; https://www.scientificamerican.com/article/does-the-us-produce-too-m/

³  Thornton, Stephen, “Karl Popper”, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/popper/>

†  Present Value of future cash flows with zero ending balance: 456 payments of $9,333 per month at .25% interest per payment period.

February 25, 2017 | Agenda Propaganda, Social Disdain

Abuse of the Dunning-Kruger Effect

When does a Dunning-Kruger misapplication flag the circumstance of ad hominem attack by a claimant who sees themselves as superior-minded? When you observe it being applied in situations and domains inside of which the study authors, Kruger and Dunning, never intended it. It behooves the ethical skeptic to actually read the studies which are purported, at face value, to back habitual social skeptic condemnation tactics. Knowing how not to commit a Dunning-Kruger Effect error in application is, ironically, a key indicator of one’s competency under a Dunning-Kruger perspective in the first place.

A saying about the wisdom of self-knowledge is attributed to Thomas Jefferson: “He who knows best, best knows how little he knows.” This quote is actually highlighted inside a celebrated study by Cornell University psychologists Justin Kruger and David Dunning, commonly referred to as the ‘Dunning-Kruger Effect’ study. Indeed this principle elicited by Jefferson is embodied inside two of the Seven Tropes of Ethical Skepticism:

I.    There is critically more we do not know, than we do know.

II.   We do not know, what we do not know. Only a sub-critical component of mankind effectively grasps this.

One wonders if Thomas Jefferson, in recognizing this human tendency, would have been the wiser not to attempt his bold assertions inside of “A Declaration by the Representatives of The United States of America, In General Congress Assembled.”¹ This haughty document, certainly venturing into an arena in which Jefferson himself had no personal degree or particular expertise, represented a projection into a subject about which he could not possibly have known competency. Surely this is a case of Dunning-Kruger ‘fallacy’ if ever one was observed. An enormous boast of unseemly levels of claim to knowledge (ones which make socialist and social skeptics uncomfortable to this very day):

We hold these truths to be self-evident: that all men are created equal; that they are endowed by their creator with inherent and certain inalienable rights; that among these are life, liberty, & the pursuit of happiness: that to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed; that whenever any form of government becomes destructive of these ends, it is the right of the people to alter or abolish it, & to institute new government, laying it’s foundation on such principles, & organizing it’s powers in such form, as to them shall seem most likely to effect their safety & happiness.¹

Indeed, the Seven Tropes of Ethical Skepticism continue with a focus outlining that the necessity of knowledge, even in absence of knowledge, is to observe and correct when a party seeks to control by means of ignorance, provisional knowledge, methodical cynicism and authority alone (the three basic elements of Social Skepticism). It holds our common service to each other and love for our fellow man and his plight, to underpin with utmost importance our qualification to observe and direct the processes of knowledge. In this instance, regarding Jefferson and those who crafted our first ideas of what a government was to be, courage, risk and personal gumption outweighed calls for caution – specifically because of the particular need, benefit or danger entailed. Dunning-Kruger indeed did not apply because the situation dictated actions of character on the part of an agent of change (see #12 below). This is most often the circumstance which we as ethical skeptics face today.

Dunning-Kruger awareness does not apply as a fallacy of disqualification in such circumstances. This awareness about both the limits of knowledge, as well as when a Dunning-Kruger Effect does and does not apply, relate directly to the Seven Tropes of Ethical Skepticism. Several species/errors in application arise under a logical calculus which seeks to survey the landscape of the Dunning-Kruger Effect:

Dunning-Kruger Abuse (ad hominem)

/philosophy : pseudo-science : fascism/ : a form of ad hominem attack. Inappropriate application of the Dunning-Kruger fallacy in circumstances where it should not apply; instances where every person has a right, responsibility or qualification as a victim/stakeholder to make their voice heard, despite not being deemed a degree, competency or title holding expert in that field.

This circumstance of employment stands in stark contrast with legitimate circumstances where the Dunning-Kruger Effect does indeed apply – including those circumstances where, ironically, a fake skeptic is not competent enough to identify a broader circumstance of Dunning-Kruger in themselves and their favored peers (several species below).

Dunning-Kruger Effect

/philosophy : misconception : bias/ : an effect in which incompetent people fail to realize they are incompetent because they lack the skill or maturity to distinguish between competence and incompetence among their peers

A principle which serves to introduce the ironic forms of Dunning-Kruger Effect employed skillfully by Social Skepticism today:

Dunning-Kruger Denial

/philosophy : pseudo-science : false skepticism : social manipulation/ : the manipulation of public sentiment and perceptions of science, and/or condemnation of persons through skillful exploitation of the Dunning-Kruger Effect. This occurs in four speciated forms:

Dunning-Kruger Exploitation

/philosophy : pseudo-science : fascism/ : the manipulation of unconsciously incompetent persons or laypersons into believing that a source of authority expresses certain opinions, when in fact the persons can neither understand the principles underpinning the opinions, nor critically address the recitation of authority imposed upon them. This includes the circumstance where those incompetent persons are then included in the ‘approved’ club solely because of their adherence to proper and rational approved ideas.

Dunning-Kruger Milieu

/philosophy : pseudo-science : fascism/ : a circumstance wherein either errant information or fake-hoaxing exists in such quantity under a Dunning-Kruger Exploitation circumstance, or a critical mass of Dunning-Kruger Effect population is present, such that core truths, observations, principles and effects surrounding a topic cannot be readily communicated or discerned as distinct from misinformation, propaganda and bunk.

Dunning-Kruger Projection (aka Plaiting)

/philosophy : misconception : bias/ : the condition in which an expert in one discipline over-confidently fails to realize that they are not competent to speak in another discipline, instead relying upon their status in their home discipline or as a scientist, to underpin their authority or self-deception regarding an array of subjects inside of which they know very little.

Dunning-Kruger Skepticism

/philosophy : misconception : bias/ : an effect in which incompetent people making claim under ‘skepticism,’ fail to realize they are incompetent both as a skeptic and as well inside the subject matter at hand. Consequently they will fall easily for an argument of social denial/promotion because they

1.  lack the skill or maturity to distinguish between competence and incompetence among their skeptic peers and/or are

2.  unduly influenced by a condition of Dunning-Kruger Exploitation or Milieu, and/or are

3.  misled by false promotions of what is indeed skepticism, or possess a deep seated need to be accepted under a Negare Attentio Effect.

Dunning-Kruger Denial is a chief objective of social skepticism. So it was not surprising that social skepticism recognized this overall malady first, as exploiting its ad hominem potential is one of the principal tactics of fake skepticism.

Nonetheless, back to the principal context of this blog, with regard to fair contextual application of actual underlying Dunning-Kruger principles, and framed in a more simple and condensed expression:

One does not possess the right, to dismiss the rights of others – by means of a Dunning-Kruger Effect accusation.

What the Kruger and Dunning Study Did Say

The famously heralded study, by Justin Kruger and David Dunning inside the Department of Psychology of Cornell University in 1999, implied the importance of recognizing when one has exceeded their competency in a given field versus their peers in that field – and the importance of keeping mute/inactive in circumstances where this could serve to embarrass or endanger. A study which would have certainly been embraced by the Royalist or Tory in the day of Thomas Jefferson. More specifically, the study outlined four pitfalls which were observed among 60–90 Cornell University undergraduate first-year students (below).

(Note: This is certainly a Dunning-Kruger commentary in itself as to Kruger and Dunning’s ability to develop unbiased inclusion criteria which would or would not serve to amplify a desired effect. Have you ever known an undergraduate freshman who did not overestimate their success in an upcoming exam or evaluation? This is the definition of freshman.

Scientific parsimony would have been applicable here, especially from the perspective of selecting a source-S sample pool of silver-spooned Ivy-Leaguers who have been told their entire lives that they are the smartest person in the room/building. This is like observing whether fights will break out when two people hit each other, by conducting surveys inside a drunken London mosh pit full of Manchester United and Arsenal Football Clubbers. It is stupidity dressed up in lab coats. An epistemologically shallow if not elegant convenience of social skeptic tradecraft. A common produit-de-célèbre on their part – especially among psychology PhD’s.

What they observed in fact, was the unique nutrient solution of psychology and social pressure which serves to cultivate our brood of social skeptics. These test subjects and their indoctrinated peers will be sure to never step out of line, or speak up when they might be afraid, ever again. See # 11 below.)

Given this skewed inclusion criteria group – one with which Kruger and Dunning were very familiar and inside of which they had already borne an intuitive estimation of positive result – four predictions from the surveys were developed and confirmed:

Prediction 1. Incompetent individuals, compared with their more competent peers, will dramatically overestimate their ability and performance relative to objective criteria.

Prediction 2. Incompetent individuals will suffer from deficient metacognitive skills, in that they will be less able than their more competent peers to recognize competence when they see it–be it their own or anyone else’s.

Prediction 3. Incompetent individuals will be less able than their more competent peers to gain insight into their true level of performance by means of social comparison information. In particular, because of their difficulty recognizing competence in others, incompetent individuals will be unable to use information about the choices and performances of others to form more accurate impressions of their own ability.

Prediction 4. The incompetent can gain insight about their shortcomings, but this comes (paradoxically) by making them more competent, thus providing them the metacognitive skills necessary to be able to realize that they have performed poorly.²

None of the cautions above and below herein of course, serve to invalidate the effect Kruger and Dunning (and others since) have cited in the referenced study. These cautions simply function as a sentinel, flagging conditions wherein such a study might be abused for social ends. To that end, let us discuss some of those circumstances where a social skeptic might abuse such a study as a means of demanding conformance through social ridicule, on issues they are seeking to promote.

When Dunning-Kruger Effect Does Not Apply

A reasonable man would suppose that underestimating one’s ability to adeptly handle the intricate subtleties of a Dunning-Kruger accusation stands as a form of Dunning-Kruger fallacy in itself. But that does not inhibit our self-appointed elite, the social skeptics, from slinging around the accusation with all the adeptness of a demolitions expert in a porcelain factory. The sad reality is that the majority of instances in which I have seen the accusation foisted have been instances of invalid usage. In other words, as the social skeptic interprets this study and instructs their sycophants as to its employment, they and their disciples are now scientifically justified (remember, they represent science) in making the following accusations.

How the four findings of the Dunning-Kruger study are abused in the anosognosia vulnerable mind:

  1. People whom I do not like, do stupid things.
  2. People whom I do not like, fail to recognize how smart I am.
  3. People whom I do not like, fail to recognize how stupid they are.
  4. It is simply a matter of me training the stupid, because as they become more informed like me, they will become less stupid and recognize stupidity in others.

Do you see the sales cycle evolving here? This is a religious pitch used by fundamentalist Christianity. They could print this up in a tract and hand it out inside airport bathrooms. In other words, what the Dunning-Kruger misapplication has introduced is an act of social anosognosia (a deficit of self-awareness) on the part of those who see themselves as superior-minded. This relates to the more complex comparatives between Intelligence and Rationality, a perception on the part of social skeptics which we addressed in an earlier blog.

Intelligence is smart people who do or think unauthorized things. Rationality is smart people who do or think correct things. Social Skepticism is about knowing the difference.

Ethical Skepticism says ‘Bullshit’ to this line of reasoning.

Which introduces the final point set of this blog: circumstances where the Dunning-Kruger Effect does not bear applicability. Instances where the sociopathology of the anosognosiac has crossed the line into abuse of both the Dunning-Kruger Effect and, more importantly, those around them:

Specific instances in which the Dunning-Kruger Effect does not apply include:

1.  In matters of Public Policy.

e.g. ∈ You have the right to speak up about contaminants in your food, you do not have to be a chemist or agricultural scientist.

2.  In matters of Voting, Political Voice and Will.

e.g. ∈ You have the right to speak up about foreign trade policy and jobs, you do not have to be a degree holding economist.

3.  In situations where professionals and non-professionals are involved. Dunning-Kruger is speaking about continuous scale comparatives between peers, not discrete breakouts between groups, as in the case of professionals and various tiers of non-professionals (from layman to dilettante) in a given discipline. From the ‘notes/discussion’ section of the Kruger and Dunning study itself:

“There is no categorical bright line that separates “competent” individuals from “incompetent” ones. Thus, when we speak of “incompetent” individuals we mean people who are less competent than their peers.”²

e.g. ∈ You have the right to speak up about where NASA’s space programs are headed, you do not have to be an astrophysicist or on NASA’s advisory board.

4.  When the speaker is a victim of corporate, governmental, mafia, criminal, supposed or real expert actions or fraud.

e.g. ∈ You have the right to speak up about your vaccine injured child, you do not have to be an epidemiologist or medical doctor.

5.  In matters where there is more unknown than is known, or where science has studied very little.

e.g. ∈ Einstein bore the right to speak up about Special Relativity while simply an entry level patent engineer, he was not disqualified by a previous academic C-average, nor by his not holding a PhD.

6.  In matters where competency in reality only comprises simply a few memorized facts, procedure or trivia concerning the subject.

e.g. ∈ You have the right to speak up about water contamination in your community, you do not have to be involved in constructing assay sheets at your local processing plant.

7.  In matters where social conformance is conflated with competency (i.e. social skeptic ‘rationality’).

e.g. ∈ You have the right to speak up about science ignoring an important issue observed in your local community, you do not have to be a degree holding scientist in that arena.

8.  In matters of personal financial and household management.

e.g. ∈ You have the right to organize community to refuse a tax levied on your home for unfair reasons, you do not have to be a career politician or expert in the subject which is funded by the tax itself.

9.  In matters of personal health, disease prevention and health management.

e.g. ∈ You have the right to speak up about things harming your family’s health, you do not have to be a member of Science Based Medicine.

10.  In matters of personal religious practice or choice of faith.

e.g. ∈ You have the right to say that you observed something extraordinary or miraculous from a spiritual perspective, you do not have to be a priest or scientist.

11.  In any matter or circumstance where the Dunning-Kruger Effect is employed to intimidate or create compliance by means of fear/ridicule.

e.g. ∈ You have the right to speak up about unbridled immigration and population dumping, this does not make you a racist. You have the freedom and right to identify such things as acts of war.

12.  When courage, risk and personal gumption override calls for caution because the need, benefit or danger entailed dictate actions of character on the part of an agent of change.

e.g. ∈ You have the right to speak up about VINDA Autism, you do not have to be a Centers for Disease Control professional, in order to demand third party review of ‘settled science.’

The study authors, had they been following the protocols of science, should have included points such as these in their commentary and counter-point acknowledgement sections. This is what ethical skeptics, and scientists for that matter, do; they remain aware of and allow for counter-point arguments. They regard them as matters of importance. Unfortunately, save for number 3 above (and only in part even for that one), Kruger and Dunning did not bear such circumspection about their own findings in their work. Another shortfall in scientific method.

Knowing how to not use a weapon is the supreme qualification for a user of that weapon. Knowing how not to commit a Dunning-Kruger Effect error in its application is, ironically, a key indicator of one’s competency under a Dunning-Kruger perspective in the first place.



¹  The Works of Thomas Jefferson: A DECLARATION BY THE REPRESENTATIVES OF THE UNITED STATES OF AMERICA, IN GENERAL CONGRESS ASSEMBLED; http://oll.libertyfund.org/titles/800#Jefferson_0054-01_104

²  Journal of Personality and Social Psychology: American Psychological Association, December 1999 Vol. 77, No. 6, 1121-1134; Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments; Justin Kruger and David Dunning, Department of Psychology, Cornell University. A series study conducted by survey of a series of Cornell University undergraduates about competency and self perception, meta-cognition and projection. (http://gagne.homedns.org/~tgagne/contrib/unskilled.html).

May 12, 2016 | Agenda Propaganda, Argument Fallacies

The Correlation-Causality One-Liner Can Highlight One’s Scientific Illiteracy

“Correlation does not prove causality.” You have heard the one-liner uttered by clueless social skeptics probably one thousand times or more. But real science rarely if ever starts with ‘proof.’ More often than not, neither does a process of science end in proof. Correlation was never crafted as an analytical means to proof. However, this one-liner statement is most often employed as a means of implying proof of an antithetical idea. To refuse to conduct the scientific research behind such fingerprint signal conditions, especially when a risk exposure linkage is involved, can demonstrate just plain ole malicious ignorance. It is not even stupid.

When a social skeptic makes the statement “Correlation does not prove causality,” they are making a correct statement. It is much akin to pointing out that a pretty girl smiling at you does not mean she wants to spend the week in Paris with you. It is a truism, most often employed to squelch an idea which is threatening to the statement maker. As if the statement maker were the boyfriend of the girl who smiled at you. Of course a person smiling at you does not mean they want to spend a week in Paris with you. Of course correlation does not prove causality. Nearly every single person bearing any semblance of rational mind understands this.  But what the one who has uttered this statement does not grasp, while feeling all smart and skeptickey in its mention, is that they have in essence revealed a key insight into their own lack of scientific literacy. Specifically, when a person makes this statement, three particular forms of error most often arise. In particular, they do not comprehend, across an entire life of employing such a statement, that

1.  Proof Gaming/Non Rectum Agitur Fallacy: Correlation is used as one element in a petition for ‘plurality’ and research inside the scientific method, and is NOT tantamount to a claim to proof by anyone – contrary to the false version of method foisted by scientific pretenders.

To attempt to shoot down an observation by citing that it, by itself, does not rise to the level of proof is a form of Proof Gaming. It is a trick of taking what is possibly the last step of the scientific method and, through a strawman fallacy regarding a disliked observer, pretending that it is the first step in the scientific method. It is a logical fallacy, and a method of pseudoscience. Science establishes plurality first, seeks to develop a testable hypothesis, and then hopes, …only hopes, to get close to proof at a later time.

Your citing examples of correlation which fail the Risk Exposure Test, does not mean that my contention is proved weak.

… and yes, science does use correlation comparatives in order to establish plurality of argument, and consilience which can lead to consensus (in absence of abject proof). The correlation-causality statement, while mathematically true, is philosophically and scientifically illiterate.¹²³

2. Ignoratio Elenchi Fallacy (ingens vanitatum): What is being strawman framed as simply a claim to ‘correlation’ by scientific pretenders, is often a whole consilience (or fingerprint) of mutually reinforcing statistical inference well beyond the defined context of simple correlation.

Often when data shows a correlation, it also demonstrates other factors which may be elicited to demonstrate a relationship between two previously unrelated contributing variables or data measures.  There are a number of other factors which science employs through the disciplines of modeling theory, probability and statistics which can be drawn from a data relationship. In addition these inferences can be used to mutually support one another, and exponentially increase the confidence of contentions around the data set in question.²³

3.  Methodical Cynicism: Correlation is used as a tool to examine an allowance for and magnitude of variable dependency. In many cases where a fingerprint signal is being examined, the dependency risk has ALREADY BEEN ESTABLISHED or is ALLOWED-FOR by diligent reductive science. To step in the way of method and game protocols and persuasion in order to block study, is malevolent pseudoscience.

If the two variables pass the risk-exposure test, then we are already past correlation and into measuring that level of dependency, not evaluating its existence. If scientific studies have already shown that a chemical has impacts on the human or animal kidney/livers/pancreas, to call an examination of maladies relating to those organs as they relate to trends in use of that chemical a ‘correlation’ is an indication of scientific illiteracy on the part of the accuser. Once a risk relationship is established, as in the case of colon disorders as a risk of glyphosate intake, accusations of ‘correlation does not prove causality’ constitute a non-sequitur Wittgenstein Error inside the scientific method. Plurality has been established and a solid case for research has been laid down. To block such research is obdurate scientific fraud.²³

Calling or downgrading the sum total of these inferences through the equivocal use of the term ‘correlation’ is not only demonstrative of one’s mathematical and scientific illiteracy, but also demonstrates a penchant for the squelching of data through definition in a fraudulent manner. It is an effort on the part of a dishonest agent to prevent the plurality step of the scientific method.
None of this has anything whatsoever to do with ‘proof.’

A Fingerprint Signal is Not a ‘Correlation’

An example of this type of scientific illiteracy can be found here (Correlation Is Not Causation in Earth’s Dipole Contribution to Climate – Steven Novella). There is a well established covariance, coincidence, periodicity and tail sympathy – a long, tight history of dynamic with respect to how climate relates to the strength of Earth’s magnetic dipole moment. This is a fingerprint signal. Steven Novella incorrectly calls this ‘correlation.’ A whole host of Earth’s climate phenomena move in concert with the strength of our magnetic field. This does not disprove anthropogenic contribution to current global warming. But to whip out a one-liner and shoot at a well established facet of geoscience, all so as to protect standing ideas from facing the peer review of further research, is not skepticism, it is pseudoscience. The matter merits investigation. This hyperepistemology one-liner does not even rise to the level of being stupid.

Measuring of An Established Risk Relationship is Not a ‘Correlation’

An example of this type of scientific illiteracy can be found inside pharmaceutical company pitches about how the increase in opioid addiction and abuse was not connected with their promotional and lobbying efforts. Correlation did not prove causality. Much of today’s opiate epidemic stems from two decades of promotional activity undertaken by pharmaceutical companies. According to New Yorker Magazine, companies such as Endo Pharmaceuticals, Purdue Pharma and Johnson & Johnson centered their marketing campaigns on opioids as general-use pain treatment medications. Highly regarded medical journals featured promotions directed towards physicians involved in pain management. Educational courses on the benefits of opioid-based treatments were offered. Pharmaceutical companies made widespread use of lobbyist groups in their efforts to disassociate opiate industry practices from recent alarming statistics (sound familiar? See an example where Scientific American is used for such propaganda here). One such group received $2.5 million from pharmaceutical companies to promote opioid justification and discourage legislators from passing regulations against unconstrained opioid employment in medical practices. (See New Yorker Magazine: Who is Responsible for the Pain Pill Epidemic?) The key here is that once a risk relationship is established, such as between glyphosate and cancer, one cannot make the claim that correlation does not prove causality in the face of two validated sympathetic risk-dependency signals. It is too late, plurality has been established and the science needs to be done. To block such science is criminal fraud.

Perhaps We Need a New Name Besides Correlation for Such Robust Data Fit

Both of these examples above elicit instances where fake skeptic scientific illiteracy served to misinform, mislead or cause harm to the American public. Correlation, in contrast, is simply a measure of the ‘fit’ of a linear trend inside the relationship between a two-factor data set. It asks two questions (the third is simply a mathematical variation of the second):

  1. Can a linear inference be derived from cross indexing both data sets?, and
  2. How ‘close to linearity’ do these cross references of data come?
  3. How ‘close to curvilinearity’ do these cross references of data come?

The answer to question number 2 is called an r-factor or correlation coefficient. Commonly, question number 3 is answered by means of a coefficient of determination and is expressed as an r² factor (r squared).³ Both are a measure of a paired-data set’s fit to linearity. That is all. In many instances pundits will use correlation to exhibit a preestablished relationship, such as the well known relationship between hours spent studying and academic grades. They are not establishing proof with a graph, rather simply showing a relationship which has already been well documented through several other previous means. However, in no way, shape or form does that mean that persons who apply correlation as a basis of a theoretical construct are therefore then contending a case for proof. This is a relational form of the post hoc ergo propter hoc fallacy. This is a logical flaw, served up by the dilettante mind which confuses the former case, an exhibit, and conflates it with the latter use, the instance of a petition for research.
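
For readers who want to see the mechanics, below is a minimal sketch of what those two measures actually compute. The paired data are invented solely for illustration:

```python
import numpy as np

# Illustrative only: invented paired data, e.g. hours studied vs. exam grade.
hours  = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
grades = np.array([52, 55, 61, 60, 70, 74, 79, 85], dtype=float)

# Question 2: how close to linear is the relationship? (Pearson's r)
r = np.corrcoef(hours, grades)[0, 1]

# Question 3: coefficient of determination (r squared)
r_squared = r ** 2

print(f"r  = {r:.3f}")
print(f"r² = {r_squared:.3f}")
# Neither number says anything about *why* the two series move together.
```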

Correlation Dismissal Error (Fingerprint Ignorance)

/philosophy : logic : evidence : fallacy/ : when employing the ‘correlation does not prove causality’ quip to terminally dismiss an observed correlation, when the observation is being used to underpin a construct or argument possessing consilience, is seeking plurality, constitutes direct fingerprint evidence and/or is not being touted as final conclusive proof in and of itself.

THIS is Correlation (Pearson’s PPMCC) – It does not prove causality (duh…)¹²


This is a Fingerprint Signal and is Not Simply a Correlation³

[Figure: diabetes and glyphosate]

There are a number of other methods of determining the potential relationship between two sets of data, many of which appear to the trained eye in the above graph. Each of the below relational features individually, and increasingly as they confirm one another, establish a case for plurality of explanation. The above graph is not “proving” that glyphosate aggravates diabetes rates. However, when this graph is taken against the exact same shape and relationship graphs for multiple myeloma, non-Hodgkin’s lymphoma, bladder cancer, thyroid disease, pancreatic cancer, irritable bowel syndrome, inflammatory bowel syndrome, lupus, fibromyalgia, renal function diminishment, Alzheimer’s, Crohn’s Disease, wheat/corn/canola/soy sensitivity, SIBO, dysbiosis, esophageal cancer, stomach cancer, rosacea, gall bladder cancer, ulcerative colitis, rheumatoid arthritis, liver impairment and stress/fatty liver disease, … and for the first time in our history a RISE in the death rates of middle-aged Americans…

… and the fact that in the last 20 years our top ten disease prescription bases have changed 100%… ALL relating to the above conditions and ALL auto-immune and gut microbiome in origin. All this despite a decline in lethargy, smoking and alcohol consumption on average. All of this in populations younger than an aging trend can account for.

Then plurality has been argued. Fingerprint signal data has been well established. This is an example of consilience inside an established risk exposure relationship. To argue against plurality through the clueless statement “Correlation does not prove causality” is borderline criminal. It is scientifically illiterate, a shallow pretense which is substantiated by false rationality (social conformance) and a key shortfall in real intelligence.

Contextual Wittgenstein Error Example – Incorrect Rhetoric Depiction of Correlation

[Cartoon: an invalid ‘correlation’ comparison]

The cartoon to the left is a hypoepistemology which misses the entire substance of what constitutes fingerprint correlation. A fingerprint signal is derived when the bullet-pointed conditions below exist – none of which exist in the cartoon’s invalid comparison. This is a tampering with definition, enacted by a person who has no idea what correlation, in this context, even means. A Wittgenstein Error. In other words: scientifically illiterate propaganda. Conditions which exist in a proper correlation (or stronger) condition:

  • A constrained pre-domain and relevant range which differ in stark significance
  • An ability to fit both data sets to a curvilinear or linear fit, with projection through golden section, regression or a series of other models
  • A preexisting contributor risk exposure between one set of unconstrained variables and a dependent variable
  • A consistent time displacement between independent and dependent variables (see the sketch following this list)
  • A covariance in the dynamic nature of data set fluctuations
  • A coincident period of commencement and timeframe of covariance
  • A jointly shared arrival distribution profile
  • Sympathetic long term convex or concave trends
  • A risk exposure (see below) – the cartoon to the left fails the risk exposure test.
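As a rough illustration of two of the bullet-pointed conditions above, the consistent time displacement and the covariance of fluctuations, here is a minimal sketch in Python. The series names and values are hypothetical placeholders, not the data behind the graphs discussed here.

```python
import numpy as np

# Hypothetical placeholder series (e.g. annual exposure vs. annual incidence)
exposure  = np.array([1.0, 1.2, 1.5, 2.1, 2.9, 3.8, 5.0, 6.4])
incidence = np.array([0.9, 1.0, 1.1, 1.4, 1.9, 2.6, 3.4, 4.5])

def lagged_r(x, y, lag):
    """Pearson r between x and y, with y displaced 'lag' periods later than x."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(x, y)[0, 1]

# Consistent time displacement: does a fixed latency (0-3 periods) sharpen the fit?
for lag in range(4):
    print(f"lag {lag}: r = {lagged_r(exposure, incidence, lag):.3f}")

# Covariance of the period-over-period changes (the fluctuation, dx-versus-dy test)
dx, dy = np.diff(exposure), np.diff(incidence)
print("covariance of changes:", np.cov(dx, dy)[0, 1])
```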

Rhetoric: An answer, looking for a question, targeting a victim

Fingerprint Elements: When One or More of These Risk Factor Conditions is Observed, A Compelling Case Should be Researched¹²³

Corresponding Data – not only can one series be fitted with a high linear coefficient, but another independent series can also be fitted with a similar or higher coefficient which increases in coherence throughout a time series, both before and during its domain of measure, and bears similar slope, period and magnitude. In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where ignorance ranges into fraud.

[Figure Cor 1a: Corresponding Data example]

Covariant Data – not only can one series be fitted with a high coefficient, but another independent series can also be observed with a similar fit which increases in coherence as a time series, both before and during its domain of measure, and bears similar period and magnitude. Adding additional confidence to this measure is the dx/dy covariance, Brownian covariance, or distance covariance, etc., which can be established between the two data series; that is, the change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a very strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where socially pushed skepticism ranges into fraud.

[Figure Cor 1b: Covariant Data example]
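The covariance measures named above can be sketched directly. Below is a minimal NumPy implementation of a sample distance covariance (Székely’s Brownian distance covariance), written out so that no specialized library is assumed; the two series are, again, hypothetical placeholders. A value near zero is consistent with independence, while larger values indicate some form of dependence, which again argues for plurality rather than proof.

```python
import numpy as np

def distance_covariance(x, y):
    """Sample (V-statistic) distance covariance between two 1-D series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each matrix: subtract row and column means, add the grand mean
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return np.sqrt((A * B).mean())

# Hypothetical placeholder series
exposure  = np.array([1.0, 1.2, 1.5, 2.1, 2.9, 3.8, 5.0, 6.4])
incidence = np.array([0.9, 1.0, 1.1, 1.4, 1.9, 2.6, 3.4, 4.5])

print("distance covariance:", distance_covariance(exposure, incidence))
```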

Co-incidence Data – two discrete measures coincide as a time series, both before and during their domain of measure, and bear similar period and magnitude. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a moderately strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where arrogant skepticism ranges into fraud.

[Figure Cor 1c: Co-incidence Data example]

Jointly Distributed Data – two independent data sets exhibit the same or common arrival distribution functions. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a moderately strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where arrogant skepticism ranges into fraud.

[Figure Cor 1d: Jointly Distributed Data example]

Probability Function Match – two independent data sets exhibit a resulting probability density function of similar name/type/shape. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a moderately strong case for plurality, especially if a question of risk is raised. To ignore this condition is not wise.

[Figure Cor 1e: Probability Function Match example]
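One crude way to probe the jointly distributed and probability function match conditions is a two-sample Kolmogorov–Smirnov test, which asks whether two samples are consistent with a single underlying distribution. A minimal sketch, using SciPy and synthetic placeholder samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
sample_a = rng.gamma(shape=2.0, scale=1.5, size=500)   # placeholder series A
sample_b = rng.gamma(shape=2.0, scale=1.5, size=500)   # placeholder series B

# Two-sample Kolmogorov-Smirnov test: a large p-value is consistent with
# (not proof of) the two samples sharing one distribution shape.
result = stats.ks_2samp(sample_a, sample_b)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
```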

Marginal or Tail Condition Match – the tail or extreme regions of the data exhibit coincidence and covariance. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series when applied in the extreme or outlier condition; that is, the discrete change of these remote data in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a moderately strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where even moderate skepticism ranges into fraud.

[Figure Cor 1f: Marginal or Tail Condition Match example]

Sympathetic Long Term Shared Concave or Convex – long term trends match each other; more importantly, each is a departure from its previous history, the departures commenced simultaneously or offset by a consistent time displacement, both trends are convex or concave, and they co-vary across the risk period. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in x(1)…x(n) versus y(1)…y(n). In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a compellingly strong case for plurality, especially if a question of risk is raised. To ignore this condition is a circumstance where even moderate skepticism ranges into fraud.

[Figure Cor 1g: Sympathetic Long Term Shared Concave or Convex example]
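A simple way to check the shared concave or convex condition just described is to fit a low-order polynomial to each long-term series and compare the signs of the quadratic terms. A minimal sketch with placeholder series:

```python
import numpy as np

years = np.arange(2000, 2015)

# Placeholder long-term series: both depart from prior history with convex growth
series_a = 1.0 + 0.04 * (years - 2000) ** 2
series_b = 0.8 + 0.03 * (years - 2003) ** 2

# Leading coefficient of a quadratic fit: positive implies convex, negative concave
curvature_a = np.polyfit(years, series_a, 2)[0]
curvature_b = np.polyfit(years, series_b, 2)[0]

print(f"curvature a = {curvature_a:+.4f}, curvature b = {curvature_b:+.4f}")
print("shared curvature:", np.sign(curvature_a) == np.sign(curvature_b))
```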

Discrete Measures Covariance – the mode, median or mean of discrete measures are shared in common and/or in coincidence, and also vary sympathetically over time. Adding additional confidence to this measure is the magnitude consistency which can be established between the two data series; that is, the discrete change in mode and mean over time. In this instance as well, a preexisting risk exposure has been established. This does not prove causality; however it is a moderate case for plurality, especially if a question of risk is raised. To ignore this condition is not wise.

[Figure Cor 1h: Discrete Measures Covariance example]

Risk Exposure Chain/Test – two variables which, were a technical case established that one indeed influenced the other, would indeed be able to influence one another. (In other words, if your kid WAS eating rat poison every Tuesday, he WOULD be sick on every Wednesday – but your kid eating rat poison would not make the city mayor sick on Wednesday.) If this condition exists, along with one or more of the above conditions, a case for plurality has been achieved. To ignore this condition is a circumstance where even moderate skepticism ranges into fraud.

[Figure Cor 1i: Risk Exposure Chain example]

These elements, when taken in concert by honest researchers, are called fingerprint data. When fake skeptics see an accelerating curve which matches another accelerating curve – completely (and purposely) missing the circumstance wherein any or ALL of these factors are more likely in play – and declare that mere “correlation” is all that is being seen, they demonstrate their scientific illiteracy. It is up to the ethical skeptic to raise their hand and say “Hold on, I am not ready to dismiss that data relationship so easily. Perhaps we should conduct studies which investigate this risk linkage and its surrounding statistics.”

To refuse to conduct the scientific research behind such conditions, especially if it involves something we are exposed to three times a day for life, constitutes just plain active ignorance and maliciousness. It is not even stupid.


¹  Madsen, Richard W., “Statistical Concepts with Applications to Business and Economics,” Prentice-Hall, 1980; pp. 604–610.

²  Gorini, Catherine A., “Master Math Probability,” Course Technology, 2012; pp. 175-196, 252-274.

³  Levine, David M.; Stephan, David F., “Statistics and Analytics,” Pearson Education, 2015; pp. 137-275.

∋  Graphic employed for example purposes only. Courtesy of the work of Dr. Stephanie Seneff on sulfates, glyphosate and GMO foods; MIT, September 19, 2013.

January 17, 2016 · Posted in Argument Fallacies

The Dark Side of SSkepticism: The Richeliean Appeal

A Richeliean Appeal is a contention which is declared correct by means of power or celebrity held on the part of the claimant. This includes instances where ‘consensus’ is declared by those influencing the consensus itself. As well, it can involve a Richeliean skeptic who encourages and enjoys a form of ‘social peer review,’ empowered via politics or a set of sycophants who are willing to enact harm to a level which the Richeliean power holder himself would not personally stoop.

Malevolence of the Richeliean Appeal

If you conduct research inside an issue of contention, or have a child who has been cognitively impaired through the incompetence of the medical or pharma oligarchy, or had your health damaged by processed food, or have developed a new medical device, supplement or treatment, or have even innocently shown interest towards a subject which is forbidden access by Social Skepticism, then odds are you are highly familiar with the Richeliean Appeal. A Richeliean Appeal is a form of the Appeal to Skepticism, a tactic of intrigue, malevolence, fear-intimidation, high-school style social chiding or the implicit threat which is tendered to intimidate a specific person or group. It is usually implied by those who are impressed with their own celebrity, title, or the social position they hold inside of a club. Many times it comes in the form of a threat to have a social clique bully a prematurely identified victim en masse. You will see this practiced by that tiny malevolent minority who hang out on social media and inflict harm on people who think differently. They have no idea that they are a joke to the great majority of Americans, and that they perform a great service in swinging the minds of Americans away from the very movement they espouse. Anger is a sign of losing, even if framed inside a chucklehead diversion.

Hint: Weak ideas require enforcement by childish intimidation and clique bullying. Strong ideas launch movements on their own gravitas.

Social Skeptics enjoy such a perch of bully-tactic power, and use it fully to enable authority on subjects which would stand under a condition of plurality were they deliberated on ethics and evidence alone. The term is derived from the coercive behavior of Armand-Jean du Plessis, better known as French Cardinal Richelieu (1585–1642 AD), heralded as the father of the modern totalitarian state, Duvalism (the dispensation of the State as equal in status to God), socialized power and the modern secret police.¹ ² It is the tandem god set (Ω • ⊕) in which the Richeliean Skeptic enjoys free unmerited power, combined with a lack of accountability.

The reason that Social Skeptics abet and aspire to celebrity is the heady power of the Richeliean Appeal it affords them.

Any entity, be it a person, organization or nation, which derives prurient satisfaction from the cruel or public punishment of those unlike themselves, or even of those who have committed an offense, is an entity of an unaccountable and malevolent nature. Such as well is the nature of SSkeptic power used as a battering ram on those who disagree with their religion.

Social Skepticism appreciates many of the neutral-to-dark techniques employed by Armand-Jean du Plessis de Richelieu during the secretive development of his reign of power in the French court, in its own efforts to seek consensus and consolidation of power. The issue is not that everything enforced by Social Skepticism is necessarily incorrect, nor that every enforcement action itself is necessarily wrong. Rather, it is the subterfuge by which the enforcement is dealt, coupled with the intermixing of questionable and correct conclusions alike – the failure of ethics which declines to distinguish between the two – which renders the approach a rogue action on the part of those seeking to consolidate power. A Richeliean Appeal can be enacted supporting a contention which is correct, or possibly incorrect. The essence of a Richeliean Appeal is that ‘correct’ is only a designation enabled by the power of the claimant. Since the claimant is in power, or has the power to harm, therefore the contention is correct by power. This includes the power of the mob, or a set of sycophants willing to enact harm to a level to which the Richeliean power holder would not himself personally stoop.

Richeliean Appeal to Skepticism

/Appeal to Skepticism : coercion/ : an inflation of personal gravitas, celebrity or influence by means of implicit or explicit threats of coercive tactics which can harm or embarrass a victim one wishes silenced. Coercive tactics include threats to harm family, contact employers, ridicule, tamper with businesses, employ celebrity status to conduct defamation activities or actions to defraud, or otherwise cause harm to persons, reputation or property. This includes the circumstance where a Richeliean skeptic encourages and enjoys a form of ‘social peer review,’ empowered via politics or a set of sycophants who are willing to enact harm to a level to which the Richeliean power holder himself would not personally stoop.

Richeliean Appeal to Authority

/Appeal to Authority : coercion/ : a contention which is considered correct by means of social power or celebrity held on the part of its proponent. An appeal to consensus made by a group which influenced or measured the claimed consensus. An appeal to an authority who is notable at least in part for authoritarian or coercive measures they have employed to maintain power. Also an employment of coercive tactics, which include censorship or propaganda-charging of the media, establishing a large network of internal spies or sycophants, forbidding the discussion of specific matters in public, publishing of one-sided science studies, patrolling of public assemblies or media forums, or seeking to harm or defame those who dare to disagree.

Richelieu’s Law

/Argument : locution : coercion/ : given a sufficient quantity of statements of merit on the part of an individual, a case can be made that one of those statements either serves to condemn that individual or runs anathema to the essence of all their other statements (apparent hypocrisy). An exploitative coercive argument which proceeds along the lines of the Richeliean quote: “Give me six lines written by the most honest man and I will find in them something to hang him.”

The tactics employed by Social Skepticism which create the environment enabling the Richeliean Appeal currently include:

  • informal organizations never held to public or peer accountability – imputing no liability to corporate sponsors
  • staffed by a variety of non-science persons who volunteer time extra-professionally
  • claiming to represent correctness or the well being of the people
  • organized and personal public and celebrity ridicule tactics, attacks, defamation and tortious interference
  • attempts to blackmail, approach employers, publicly humiliate or anonymously harass
  • ‘investigators’ pretending to do scientific inquiry
  • academic celebrity promotion, agent, and publicist employment
  • scientific method masquerades, pretense of representing science
  • propaganda one liners, catch phrases, weapon words and circular recitations
  • domination of education unions and systems
  • enforcement of informal professional penalties for dissent
  • funded legal intimidation of those who dissent
  • squelching of free speech through warnings to media and celebrity intimidation
  • enlisting the aid of government agencies to enforce data screening
  • proselytization of children and intimidation of teachers
  • screening and qualification of those allowed into science and technical academia
  • media forum and publication channel policing, fabrication, intimidation and monitoring, and
  • intimidation, monitoring and control of scientists and researchers

A Richeliean Appeal is Not Tantamount to Peer Review

By teaching that skepticism is the privileged sword of a closed group acting outside science, Social Skeptics labor under the fable that they are enacting a form of social peer review on behalf of science. Well, let’s dispense with three ideas right off the bat:

A.  Social Skeptics do not represent science, nor are they practicing scientific method,

B.  The critical assessments of Social Skeptics are not congruent with, nor do they stem from the same ethic as does peer review, and

C.  Peer review is issued inside of a discipline of expertise. A Richeliean Appeal to SSkepticism is issued regardless of the expertise of the ‘reviewer.’

Peer review results in the following categorical dispositions, enacted by an actual expert under qualified ethical circumstances:

  • to unconditionally accept a manuscript or a contention,
  • to accept it in the event that its authors improve it in certain ways,
  • to reject it, but encourage revision and invite resubmission,
  • to reject it outright.³

A Richeliean Appeal, in contrast, involves only

  • a prejudicial desire to dispense with a person or a subject
  • an aspiration to political power and celebrity influence of popular opinion
  • a focus on mechanisms of control and policing

  • a desire to enact harm on opposing persons and ideas, and
  • a willingness to look the other way when such activity is encouraged or effected by allies.

The idea in the mind of Social Skeptics that they are applying some kind of “peer review” by critiquing you or applying ‘critical thinking’ on various topics is fallacious in both its application and its justification. Scientists issue peer review inside of preparation for journal publishing, or even after, through their credibility and status inside a scientific discipline.

SSkeptics like to contend that they are not conducting peer review because you are not their peer. The simple irony is that, in the vast majority of instances, they are not your peer, in ethic, expertise, experience, acumen or discipline status. Do not let them play this trick.

Social Skeptics wish to emulate this status falsely and solely through the power enabled by the mob, and their celebrity status acquired therein. This is why you observe Social Skeptics continually clamoring for attention and celebrity status/noteworthiness.

Take such aspirations as a warning sign of those seeking the power of The Richeliean Appeal.



¹  Armand-Jean du Plessis, cardinal et duc de Richelieu. 2015. Encyclopædia Britannica Online. Retrieved 12 October, 2015, from http://www.britannica.com/biography/Armand-Jean-du-Plessis-cardinal-et-duc-de-Richelieu

²  New Advent: Armand-Jean du Plessis, Duke de Richelieu; Retrieved 12 October 2015; http://www.newadvent.org/cathen/13047a.htm

³  Wikipedia: Scholarly Peer Review; Retrieved 12 October, 2015; https://en.wikipedia.org/wiki/Scholarly_peer_review

October 13, 2015 · Posted in Institutional Mandates, Social Disdain, Tradecraft SSkepticism

Gaming the Lexicology of Ideas through Neologism

Were I a fake skeptic, wishing to obfuscate social understanding of a new set of observations or a new science, I would seek to deny this disfavored subject the lexicon necessary in developing descriptives and measures under the scientific method (Wittgenstein Error – Descriptive). I would disposition its terminology as constituting ‘made up words,’ citing it as too novel, unnecessary or too peculiar to the understanding of the first person I ever heard utter its terms. Conversely, any half-witted term my allies made up would be granted unqualified and immediate gravitas, based on who said it, and who its intended victims were.
All this constitutes the gaming of lexicology in order to control access to science. To Wittgenstein, this is all perfidious activity, every bit the same as what he defined to be pseudoscience.

When faced with a new term, the Ethical Skeptic must adhere to a disciplined framework of how to regard the new term, and ensure that their methods of thinking do not unnecessarily sway their judgement into a domain of prejudice and ignorance. A neologism is not simply a new word. Nor does its designation, in a professional context, imply that a term designated as such is invalid or made up. The Ethical Skeptic must be diligent in their effort not to replicate these mistakes and abuses of Social Skepticism; of those who employ the term ‘neologism’ (sic) in a pejorative, abusive and equivocal fashion. This constitutes lexicon gaming: an attempt to filter out ideas and concepts which they disfavor or by which they are threatened.

The actual term employed, in neutral context, to frame a description of a new word is neolexia, not neologism.

To deny a subject its own descriptive and measurement language is to artificially relegate it into the realm of incoherence, independent of its verity or lack thereof. Ethical Skepticism demands that a contention be found right or wrong through diligent observation and measure, and not through ignorance born of gaming the denial of its critical language.

Neologisms, as opposed to neolexia, are very often valid and frequently employed terms and concepts which simply have not been accepted completely into the public vernacular. Consider below the difference in philosophy’s framing of each definition, as compared to the equivocal and abusive employment of the term (#3 below) – the abusive habit of today’s Social Skeptic.

Neolexia (from the Greek néo-, “new”, and lexikó, “dictionary”) ¹ ²

  • a new word
  • the lexicon or archive of neologism attributable to a specific person, discipline, publication, period, or event.

Neologism (legitimate, from the Greek néo-, “new”, and lógos, “speech”)¹ ² ³

  • a newly coined term, word or phrase that may be in the process of entering common use, but has not yet been accepted into mainstream language¹
  • a new corpus³
  • a term compounded from accepted terms
  • a new employment context or meaning for an existing word (excluding malapropism)³
  • a new word or phrase describing a new concept
  • an isolate term describing a neglected or newly critical concept

‘Neologism’ (psychology/pseudo-professional/pejorative-equivocal) ² ³

  • A made up word, meaningful to only its inventor
  • A feared word in the eyes of a person wishing to suppress the idea it represents

The Three Tests to Qualify a Neolexia as a Neologism (and not a ‘Neologism’)

Designating a term one does not like as a ‘neologism’ (the quotes denoting employment in the pseudo-professional pejorative) is a common technique of enforcing a prejudicial Wittgenstein Descriptive Error. In general, a term is not a mere neolexia or a ‘neologism’ simply because someone has employed it to describe a concept or subject which threatens the recipient. A neologism is a word, phrase or employment which is being considered for legitimate use in describing a formerly tough-to-articulate or tough-to-identify concept. In the lexicon of Social Skeptics, the term is employed, ironically, as a ‘neologism’ itself (i.e. a wrong employment), per the following:

‘Neologism’ (in Social Skepticism)

/pseudo skepticism : obfuscation methods & tools/ : a term which serves to identify, describe, frame or measure inside a subject which is threatening to the recipient – so therefore is dispositioned by the recipient as new, unnecessary or made up. A word which is falsely cited as ‘made up’ because it has been crafted, employed or uttered by a person who is disliked, or regarding a subject which the pseudo skeptic wishes to squelch.

Neologism Fallacy –  falsely condemning a term by citing it to be a ‘neologism’ in the pejorative, when in fact the word is in common legitimate use, or is accepted as a neologism, or passes the three tests to qualify as a functional neologism.

Neologism Error – falsely deeming a word a neologism when it is in fact a neolexia. Granting a word which does not qualify as a neologism the status of a neologism simply because of who originated the word, and who indeed are its intended victims.

Neologasm – excessive use of the pejorative designation of words as constituting ‘neologism,’ in order to block ideas or deny science one disfavors.

This is the instance where a person wishes to disparage a subject or person by citing it as made up, and therefore invalid. It is no different than declaring a whole subject to be a pseudoscience, in absence of any investigation or research. The disposition may indeed be correct, but the means by which the user arrives at such a disposition is pseudoscience (Wittgenstein Error).

In fact, the professional designation of a term or concept as a neologism is not a pejorative or obfuscating exercise. In general there are three qualifications which allow a neolexia, a new word (neutrally employed), to qualify as a neologism (a word being considered for, or newly used in, articulating a concept). These are the three logical characteristic litmus tests of such a new word – involving its

  • Non-Novelty
  • Isolate Employment
  • Possession of a Logical Critical Path

Or as expressed in the inverse, the three qualifications which relegate a word into the bucket of pejorative ‘neologism’ (ironically we need a new word for this concept to avoid its equivocal use) are its being novel, superfluous and not necessary in articulating a specific logical critical path (see below).

For example, let’s examine the neolexia plangonophile

A plangonophile is a doll enthusiast

1. The term has been in use for longer than 25 years (French) – NON-NOVELTY

2. It serves as a stand-alone concept, in that it does not overlap with existing terms and has a specific descriptive counterpart in discourse – ISOLATE

3. It is a necessary component in a logical critical path (describing concepts differentiating doll enthusiasm from collecting or manufacturing) – CRITICAL PATH

Therefore, plangonophile is a neologism (in the non-pejorative)†

In contrast, let’s consider the neolexia ‘truthiness’

Truthiness is a proposed neologism, outlining a quality characterizing a “truth” that a person making an argument or assertion claims to know intuitively “from the gut” or because it “feels right.”‡ This term fails the qualification to become a neologism – and is relegated to a useless neolexia because

1. The term has been in use by only one person (Stephen Colbert) for less than a year – NOVELTY‡

2. It overlaps with concepts of gut feel or intuitive grasp, common sense or confidence, and lacks a specific descriptive counterpart in discourse, other than employment in humorously attacking disparate ideas one does not like – NON-ISOLATE

3. It is NOT a necessary component in a logical critical path (it does not improve philosophy, only serves to improve rhetoric and polemic, obdurate or bandwagon discourse) – NON-CRITICAL PATH

Therefore, truthiness is a useless neolexia – a ‘neologism’ (in the pejorative). Its acceptance is only driven forward by social pressure, and not by the discipline of lexicology.

The Ethical Skeptic will take note that the term truthiness, nonetheless, was granted immediate entrée into the ranks of neologism, based simply upon who uttered it, and who its intended victims were. This is not only pseudoscience, but social fraud: the Wittgenstein error of playing with language in order to promote or obscure political and scientific discourse to one’s liking.

Were I a fake skeptic, wishing to obfuscate social understanding that doll collecting was on the increase, I would seek to deny its terminology any role in the lexicon of that which is descriptive and measurable (Wittgenstein Error – Descriptive). I would disposition the term plangonophile as a ‘neologism’ and be incensed at the pseudoscience each time I heard it. I would cite it as too new, or too peculiar to the understanding of the first person I ever heard mention the term. This is simply today’s Social Skepticism method of blocking science by denying it the descriptives necessary in making observations and measurements. To Wittgenstein, this is every bit the same set of activity as what he defined to be pseudoscience.

A second technique I could employ would be to create several dozen categories of doll collection subsets from existing terminology (Barbie collecting, Troll Doll collecting, GI Joe collecting, American Girl Doll collecting), by means of which I could hide aggregate data and intelligence regarding the overall trends inside plangonophilia. This is the process called deconstructionism. It is a common means of obfuscating data and blocking necessity under Ockham’s Razor.

Each of these techniques stands exemplary of the Wittgenstein Error of blocking the ability of science to develop the descriptive language, relationships and measures necessary in the advancement of science and understanding. A keen-minded Ethical Skeptic is able to spot such dark intellectual work as it happens, and stand in the gap for new and developing science. You are not there to provide peer review; that will come at a later date. In the early phases of the scientific method, the Ethical Skeptic is an ally, fully desirous of seeing what is valid and invalid concerning the new subject under contention or sponsorship.

Falsely declaring a term or measure I do not like as a ‘neologism,’ while at the same time granting the made-up expressions of my allies immediate gravitas, is habitual pseudoscience.



¹  Neolexia, http://neolexia.net/index.php?/definition/definition/

²  Wikipedia: Neologism, https://en.wikipedia.org/wiki/Neologism

³  Working with Specialized Language: A Practical Guide to Using Corpora, Lynne Bowker, Jennifer Pearson; Taylor & Francis, Sep 26, 2002.

†  The International Dictionary of Neologisms, http://neologisms.us/

‡  Wikipedia: Truthiness, https://en.wikipedia.org/wiki/Truthiness

September 26, 2015 · Posted in Agenda Propaganda, Argument Fallacies
