The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

Formal vs Informal Fallacy and Their Abuse

One can only truly understand how a formal fallacy is qualified by understanding the relationship between first-order logic and formal theory construction. This allows the philosopher to examine flaws which might serve to negate propositions because of a failure of formal theory. These are called formal fallacies. Informal ‘fallacy’, on the other hand, is an ignominious title ascribed to every bit of circumstantial critique which falls outside this class of fatal proposition error – or which might be cited as an inappropriate basis for an attempt at refutation.
Formal fallacies are fatal to their associated proposition, but in no way serve to prove nor disprove any purported truth. Informal ‘fallacies’, most of the time, are abused by those pretending to cite something fatal to the argument at hand. Such is rarely the case – ironically, the citation itself often demonstrates a formal fallacy in the offing.

First Order Logic – Predicate Calculus

In First Order Logic, one entity possesses an effect resulting in another entity or entity state via a principle or a mechanism; or simply by means of an observed relationship if the principle or mechanism is not clearly defined or understood. This relationship between one individual entity and another is called a condition of Predicate Calculus. An apple, released from its tree branch, will fall to the earth. I do not have to identify nor understand the M-theory mechanism(s) which cause this, rather just simply observe it to be (consistently) true. This order of reasoning is known commonly in philosophical prior art as the modus ponens or ‘If P then Q’ proposition. (1 Rosen)

Modus Ponens

/philosophy : argument : formal structure/ : the necessity that an argument follow a form of claim such that its soundness and formal structure can be followed by others. A discipline featuring the formal ‘If P then Q’ premise in its expression, such that claims may not be slipped in surreptitiously under a condition of poor scientific method, fallacy, or little or no actual study or supporting fact whatsoever.

I have made an effort to demonstrate the simple and elegant nature of Predicate Calculus below, in terms of cracker crumbs (Q) and cracker eating (P). Please note that in the context of Predicate Calculus, for the sake of parsimony and reductive clarity and/or value, ‘crumbs’ is necessarily excluded – as it is an entity class – and entity classes serve to violate the singular nature of a Predicate Calculus. Whereas ‘cracker crumbs’ and cracker eating are individual entities. Always bear in mind that we, in order to avoid the ambiguity or organic untruth practiced inside social skepticism, are restricted to an individual entity in First Order Logic, and typically want to avoid propositions involving unqualified entity classes (see Discerning Sound from Questionable Science Publication). (1 Rosen)

    Eating Crackers Seems to Always Produce Cracker Crumbs (modus ponens)

Example of the modus ponens discipline’s usefulness in detecting deception, ambiguity or organic untruth:

Ambiguous Statement             “There is no evidence for this claim”

Proposition form                      Q

modus ponens version            “[specific studies completed showed] there is no evidence for this claim”

Proposition form                      [If P then] Q

Claim validity                            Not Sound – premises are assumed or are incorrect
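The discipline above can be sketched mechanically. The following is a minimal, hypothetical illustration (the function and variable names are mine, not the author’s) of why a bare assertion of Q is not sound, while the ‘If P then Q’ form with a cited P licenses the inference:

```python
def modus_ponens(p, p_implies_q):
    """Infer Q from premises P and (P -> Q); valid only when both hold."""
    if p and p_implies_q:
        return True  # Q follows necessarily from the premises
    raise ValueError("premises not established; Q cannot be inferred")

# Sound form: a cited P ("specific studies were completed") plus 'If P then Q'
studies_completed = True
studies_imply_no_evidence = True
claim = modus_ponens(studies_completed, studies_imply_no_evidence)  # Q is licensed

# Ambiguous form: asserting Q with no P established at all
try:
    claim_without_p = modus_ponens(False, studies_imply_no_evidence)
except ValueError:
    claim_without_p = None  # not sound - premises are assumed
```

The point of the sketch is simply that Q cannot be detached without an established P; the bare statement “there is no evidence for this claim” carries no P and therefore licenses nothing.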

Formal Theory = Predicate Calculus + Logical Calculus

Predicate Calculus, as we have seen, establishes the relationship between two individual entities. This type of parsimonious proposition usually stems from an empirical observation set. Newton is credited with formulation of the theory of gravity through his observing of an apple falling from an apple tree. Hence his definition, by observation, of the “If two massive bodies, then attractive acceleration by formula of characteristic mass and distance” (If P then Q) proposition. (2 Newton) Note that the principle or mechanism which creates the relationship, or even the characteristic mathematics of such a relationship, if either or both are known, is called the Logical Calculus. (1 Rosen) Below we have depicted both a Predicate Calculus and a Logical Calculus packaged into what is commonly known as The Formal Theory of Gravity:


Apples and gravity are salient to arguments about force and acceleration (salience)

Predicate Calculus

An apple, released from its tree branch, will fall (accelerate) to the earth. (modus ponens)

Objects accelerating are consistent in context and mathematical mechanism to physical action of gravity (sequitur)

Logical Calculus

If two massive bodies, then attractive acceleration by formula of characteristic mass and distance, given by the following (3 Wikipedia):
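The formula referenced above (rendered as an image in the original post) is Newton’s law of universal gravitation:

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

where F is the attractive force between the two bodies, G is the gravitational constant, m₁ and m₂ are the two characteristic masses, and r is the distance between their centers.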

Note: the above represents an observation proof through straightforward replication and mathematical confirmation. Most arguments are not so easily resolved. Other types of logical calculus might involve mathematical derivation, or the assembly of arrival distributions, premises, constraints, logical relationships and mechanisms which justify a proposed conclusion.

So when we as professors of philosophy have stepped beyond a condition of Predicate Calculus and developed a proposition which explains such Predicate Calculus, i.e. the Logical Calculus, we have the basis of what is called Formal Theory. When we screw up the calculus, salience or sequitur which is crafted to make such a proposition, this is called a Formal Fallacy.

Formal and Informal Fallacy

Skepticism therefore, is not a process by which one decides consensus or falsification outcomes (science), rather it is a process of identifying when the predicate calculus or logical calculus has been abrogated inside a claim to truth (proposition). For instance, were Newton to cite that

  1. Object A and B attract each other.
  2. Men and women are objects.
  3. Therefore men and women are attracted to each other.

This proposition would feature three formal fallacies: 1) affirming the consequent, 2) entity class characterization by single entity, and 3) two equivocal substitutions of logical entities (the Masked Man fallacy). Please note that employment of equivocation in order to accomplish a substitution of equivalents is a formal fallacy, despite the fact that equivocation itself is an informal fallacy of ambiguity; in this context, equivocation is not employed inside a context of solely ambiguity. The distinguishing formal factor here is that each flaw is FATAL to the critical path logical calculus of the argument itself. The conclusion just happens to accidentally also be true, but its logical critical path is invalid. Accordingly, the answer or ‘truth’ versus ‘untruth’ entailed as the conclusion of a formal fallacy still may or may not be correct, regardless of the status of the proposition under examination. This serves to elucidate what should be going on in the mind of the ethical skeptic:
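The fatal nature of the first of these flaws can even be verified mechanically. The sketch below (illustrative code of my own, not from the original) enumerates truth assignments to show that affirming the consequent admits a counterexample, while modus ponens does not:

```python
from itertools import product

def valid(premises, conclusion):
    """A propositional form is valid iff no truth assignment makes every
    premise true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False  # counterexample: premises true, conclusion false
    return True

implies = lambda a, b: (not a) or b  # material implication P -> Q

# Modus ponens -- P, (P -> Q), therefore Q -- is a valid form
mp = valid([lambda p, q: p, lambda p, q: implies(p, q)], lambda p, q: q)

# Affirming the consequent -- Q, (P -> Q), therefore P -- is invalid
ac = valid([lambda p, q: q, lambda p, q: implies(p, q)], lambda p, q: p)
```

Note that the invalidity of the form says nothing about whether its conclusion happens to be true, which is precisely the distinction drawn in this section.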

Our job as skeptics therefore is not to probe truth itself, nor to pretend to step in and act in lieu of science; rather, our job is to bear vigilance inside the processes by which we arrive at scientifically derived truths. A skeptic who enforces uncertain truth at face value, or by appeal to fallacy (fallacy fallacy), or does so by means of surreptitious advocacy (rhetoric), or by means of an inverse negation (informal fallacy), is not a skeptic at all – rather an agenda bearer. This is best discerned in how the supposed ‘skeptic’ deals with an ability to suspend judgement as to what is held as truth – regardless of a particular proposition’s state – or what is called epoché.

A formal fallacy therefore is the singular state wherein a skeptic can indeed declare a proposition to be in error by means of its predicate (modus ponens), sequitur or logical construction. This does not mean that the truth being sought is wrong – simply that the means employed in getting there is fatally flawed inside its own structure (the orange box in the graphic above). An argument from fallacy, or fallacy fallacy, would be an instance wherein a faking skeptic employs either a formal fallacy, or more often a general critique or informal fallacy, to declare a subject or truth to be therefore false. Also known as an ‘appeal to fallacy’, such an error in predicate calculus is also itself a formal fallacy.

Appeal to Fallacy (Fallacy Fallacy)

/philosophy : argument : formal fallacy : pseudo-invalidation/ : when an arguer employs either a formal, or more often an informal, fallacy to stand as the basis to declare a subject or claimed truth to be therefore false. A formal fallacy, or a redress on the basis of soundness or induction inference, serves only to invalidate an opponent’s argument structure. All three flaws tender nothing about the verity of the argument’s conclusion, which may or may not independently also be true. As well, any instance wherein a circumstantial, expression-related, personal or other informal critique or informal fallacy is inappropriately cited as a mechanism to invalidate an opponent’s argument, or to stand as basis for dismissal of a subject.

An unsophisticated arguer’s flawed attempts, for instance, to justify the nearby-Earth existence of aliens do not serve to justify the position that therefore aliens do not exist nearby Earth. Only science can validate/invalidate such an argument – not an armchair philosopher. That is why I do not delve into the subject of nearby-Earth aliens often. As an ethical skeptic I possess scant information on nearby aliens with which to work. I cannot make any comment on the matter – save to observe the chicanery of the religious certainty on both sides of the construct (belief on the part of UFO fanatics, and null hypothesis abuse on the part of those seeking UFO denial). I have been on every continent on this Earth except Antarctica, and in almost every one of its deserts and jungles, save for a few I still have on my bucket list. There are rather astounding mysteries to be found. Why people have such an emotional investment in such an issue, with scant investment in their own research, is beyond me. But I digress…

“A fallacy is, very generally, an error in reasoning.  This differs from a factual error, which is simply being wrong about the facts.  To be more specific, a fallacy is an “argument” in which the premises given for the conclusion do not provide the needed degree of support.”   (Michael Labossiere, philosophy professor, Florida A&M)

“However, not just any type of mistake in reasoning counts as a logical fallacy.  To be a fallacy, a type of reasoning must be potentially deceptive, it must be likely to fool at least some of the people some of the time.  Moreover, in order for a fallacy to be worth identifying and naming, it must be a common type of logical error.” (Gary Curtis, author, The Fallacy Files)

Formal Fallacy

/philosophy : predicate or logical calculus : paralogism/ : a violation of any rule of formal inference – also called a paralogism. Any common flaw in the sequitur nature of premise to conclusion, or in logical or predicate structure, which could be cited as the fatal basis of a refutation regarding a given proposition or argument.

The proposition that is formally fallacious is always considered invalid. However, the question in view is not whether its conclusion is true or false, but whether the form of the proposition supporting its conclusion is valid or invalid, and whether its premises provide for logical connection into the argument (i.e. sequitur context, and not the validity per se of the premises themselves, which pertains to salience and soundness). The argument may agree in its conclusion with an eventual truth only by accident. What gives unity to different fallacies inside this view is not their characteristic dialogue structure, but rather the nature of integrity inside the concepts of deduction and (non-inductive) proof upon which the proposition is critically founded. (4 Hansen, SEOP) (5 Wikipedia)

One thing to be made clear here is the issue of soundness and premises. The soundness of an argument relates to the validity of its premises. However, the linkages in sequitur logic which make the premises salient to the argument do pertain to formal fallacy. Many fallacy definitions miss this distinction – that the salience or sequitur nature of a premise does not relate solely to the issue of soundness; it is part of the Predicate Calculus as well. The graphic above helps me differentiate between informal fallacy soundness (yellow box), formal logic (orange box) and circumstantial informal critique (grey box).

This circumstantial informal critique category in the graphic above introduces an even weaker form of counter-argument, perhaps more appropriately cited as a ‘criticism’ or ‘disputation’, involving a focus on informal ‘fallacies’. An informal fallacy does not serve to fatally invalidate an argument; rather it only casts suspicion onto the nature of its expression.

Informal ‘Fallacy’

/philosophy : proposition expression : flaws/ : flaws in the expression, features, intent or dialogue structure of a proposition or series of propositions. Any criticism of an argument by means other than structural (formal) flaws; most often when the contents of an argument’s stated premises fail to adequately support its proposed conclusion (soundness), or when serious errors in foundational facts are presented.

An informal fallacy is really anything else which is circumstantially wrong with an argument, which does not relate to its predicate, salient, sequitur or logical construction. For instance, relevance is an informal fallacy (ad hominem or an appeal to skepticism as examples of irrelevant informal ‘fallacies’). When the contents of an argument’s stated premises fail to adequately support its proposed conclusion – this relates to the soundness of the argument. It has nothing to do with the logical calculus or predicate modus ponens (the yellow box in the graphic above). Nor in reality is citing a lack of soundness a form of informal critique; it rises to a position of equal significance with both factual error and error in structure. This is certainly a much more important feature set than, say, an ad hominem ‘fallacy’.

The Ethical Skepticism Alternative

However, in philosophical circles, this raises the question as to whether or not ‘informal fallacies’, aside from issues of argument soundness, are even fallacies at all – or simply an attempt to promote the perception of technicalities into the appearance of invalidating an argument (by conflating anything and everything to involve the soundness or logic of the argument), which they do not indeed invalidate. This is a common magician’s trick of social skepticism.

One exception exists however, in the form of the informal fallacy of ‘lacking soundness’. Soundness is the condition wherein supporting assumptions solidly underpin the validity of an argument’s logical calculus, and not the strength of the logical calculus itself. Therefore, a lack of soundness, despite not being regarded as a formal fallacy of logic, is fatal to an argument just as is a formal fallacy (though not necessarily fatal to its conclusion). So soundness is an all-important first step in the evaluation of an argument’s strength, despite its existence as an informal fallacy.

Moreover, if we hold this as one bookend of deception – the false employment of formal and informal fallacy – on the other end of deception is the use of purported ‘facts’ inside a science which is unsound, logically a failure, and provides no inductive strength. Facts in this situation are useless. They are mere tidbits of propaganda which happen to be correct, but their domain of induction extends very little. The fact spinner will never relate this weakness, and will imply the contention that fact ≡ science. This is nowhere near the case. Most of science revolves around a principle called plenary condition.

Plenary Science

/philosophy : scientific method : inductive and deductive strength : completeness/ : a conclusion of science or a method of science which is fully researched, complete in alternative address, entire in its domain of necessity-based research, absolute in its determinations and unqualified by agenda, special pleading or conditions. A conclusion which is complete in every reasonable avenue of examination; fully vetted or constituted by all entitled to conduct such review/research. This plenary entitled group to include the sponsors who raised Ockham’s Razor necessity in the first place, as well as those stakeholders who will be directly placed at risk by such a conclusion or research avenue’s ramifications.

Therefore, we see that the simple playground of ‘fallacy and fact’ is not sufficient basis from which to determine sound scientific conclusion. Instead, I carry in mind a framework of argument theory, involving a hierarchy of the six primary argument issues in descending order of importance, prioritized like this:

Argument Theory

/philosophy : argument strength : evaluation hierarchy/ : the formal and informal methods of evaluating the robust, weak or fatal nature of argument validity.

1.  Coherency – argument is expressed with elements, relationships, context, syntax and language which conveys actual probative information

2.  Soundness – premises support or fail to adequately support its proposed conclusion

3.  Formal Theory – strength and continuity of predicate and logical calculus (basis of formal fallacy)

4.  Inductive Strength – sufficiency of completeness and exacting inference which can be drawn

5.  Factualness – validity of information elements comprised by the argument or premises

6.  Informal Strength – informal critique of expression, intent or circumstantial features

Articles 1 through 3 above are often potentially fatal to an argument, while article 3 is the only item concerned with Formal Fallacy. Articles 4 and 5 may only serve to weaken an argument or its propositions. However, articles 4 and 5 may also be used as pretense and distraction.

    This is why fake skeptics scream so often about ‘facts’, ‘evidence’ and (informal) ‘fallacies’, because

  • Facts constitute a relatively weak form of inference as compared to soundness, predicate and logical deduction; offering a playground of slack and deception/diversion in the process of boasting about argument strength or lack thereof, and
  • Most faking skeptics do not grasp principles of soundness, predicate and logical calculus, nor the role of induction inference in the first place. ‘Facts’ are the first rung on the hierarchy which they possess the mental bandwidth to understand and debate.
  • A deductive falsification finishes its argument at the Soundness and Formal Theory levels of strength assessment. It is conclusive regardless of circumstantial informal issues. These are rendered moot precisely because falsification has been attained. Faking skeptics seek to distract from the core modus ponens of a falsification argument by pulling it down into the mud of circumstantial ‘facts’ instead; relying upon the reality that most people cannot discern falsification from inference.
  • Informal ‘fallacies’ sound like crushing intellectual blows in an argument, when in fact most of the time they are not. These are tools of those who seek to win at all costs, even if upon an apparent technicality. An arguer who possesses genuine concern about the subject is not distracted by irrelevant or partially salient technicality.
  • Provided that articles 1 through 4 are sound, observation is always stronger than philosophy. This includes instances of accusation of anecdote, once an Ockham’s Razor necessity is established. Fake skeptics hold this relationship in reverse, and in the resulting promotion of article 5 above its normal importance, conduct pseudoscience.

It is not that facts and evidence are not important; rather, it is the critical modus ponens in how they are employed which is salient (see The Tower of Wrong). So the philosopher must be careful about how such mechanisms as informal critique and facts are employed. It is usually ethical to maintain discipline around your formal and informal critiques of an opponent’s argument. Point out fatal flaws – but only ask questions concerning informal fallacies and facts, because they may be immaterial to the issue at hand. In the end, either technique is employed so as to help the opponent become more clear (and hopefully more valuable) in their argument, and not as a means of destroying and bashing a person, nor as an attempt to make oneself appear ‘smart’.

Such motives are not indicative of a concern over the subject at all, rather simply an ego which is out of control (an informal ‘fallacy’).

1.  Rosen, Stanley; The Philosopher’s Handbook: Essential Readings from Plato to Kant, Random House Reference, New York, April 2003; pp. 581 – 589.

2.  Newton, Sir Isaac; Mathematical Principles of Natural Philosophy (The Principia); Proposition 6, Theorem 6; London, 12 Jan 1725.

3.  Wikipedia: Newton’s law of universal gravitation;

4.  Hansen, Hans, “Fallacies”, The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.).

5.  Wikipedia: Formal Fallacy;

March 11, 2017 | Argument Fallacies

Poser Science: Proof Gaming

Science begins its work based upon a principle called necessity, not upon proof. Science then establishes proof, if such can be had. Popper’s critical rationality, as it turns out, involves more than irrefutable proof, contrary to what gaming social skeptics might contend. Proof Gaming is a method of tendering an affectation of sciencey methodology, yet still effectively obfuscating research and enforcing acceptable thought.

In order for science to begin to prove the existence of the strange animal tens of thousands of credible persons report roaming in the woods, I must first bring in its dead carcass. But if I bring in its dead body, then I have no need for science to examine whether such an animal exists in the first place; I have already done the science. The demand that I bring in a dead body, given a sufficient level of Ockham’s Razor necessity-driving information, is a false standard threshold for science to begin its diligence, and such a demand constitutes pseudoscience.

Now of course, Karl Popper in his brief entitled Die beiden Grundprobleme der Erkenntnistheorie contended that science should be demarcated by the proper assignment of truth values to its assertions, or ‘sentences’: ergo, science is the set of sentences with justifiably assigned truth values.¹ This was called a mindset of ‘critical rationality’.¹ It was a step above simple scientific skepticism. The task of the philosophy of science is to explain suitable methods by which these assignments are then properly made.¹ However, one can extend the philosophy of science to construct elaborate methods, which prevent the assimilation of ideas or research which one disfavors, by gaming these methods such that philosophy stands and acts in lieu of science. One such trick of conducting science research by means of solely philosophy, all from the comfort of one’s arm chair, is called Proof Gaming. Popper contended later in his work, as outlined by the Internet Encyclopedia of Philosophy here:

As a consequence of these three difficulties [the problem or necessity of induction] Popper developed an entirely different theory of science in chapter 5, then in Logik der Forschung. In order to overcome the problems his first view faced, he adopted two central strategies. First, he reformulated the task of the philosophy of science. Rather than presenting scientific method as a tool for properly assigning truth values to sentences, he presented rules of scientific method as conducive to the growth of knowledge. Apparently he still held that only proven or refuted sentences could take truth values. But this view is incompatible with his new philosophy of science as it appears in his Logik der Forschung: there he had to presume that some non-refuted theories took truth values, that is, that they are true or false as the case may be, even though they have been neither proved nor refuted [William of Ockham’s ‘plurality’]. It is the job of scientists to discover their falsity when they can. (IEoP)¹

Social skeptics will cite the base logic of Popper’s first work, yet omit his continued work on induction (Logik der Forschung) – as a process of sleight-of-hand in argument. So, critical rationality as it turns out, involves more than irrefutable proof, contrary to what gaming social skeptics might contend. Science begins its work based upon a principle called necessity, not upon proof. Science then establishes proof, if such can be had. Sadly, much of science cannot be adjudicated on anything like what we would call iron-clad proof, and instead relies upon a combination of falsified antithetical alternatives or induction based consilience.

The gaming of this reality constitutes a process of obfuscation and deceit called Proof Gaming. Proof Gaming is the process of employing dilettante concepts of ‘proof’ as a football in order to win arguments, disfavor disliked groups or thought, or exercise fake versions or standards of science. Proof gaming presents itself in six speciations. In the presence of sufficient information or Ockham’s Razor plurality, such tactics as outlined below constitute a game of pseudoscience – posing the appearance of science-sounding methods, yet still enabling obfuscation and a departure from the scientific method in order to protect the religious ideas one adopted at an early age.

Let’s examine the six types of this common social skeptic bad-science method, in both formal and informal fallacy.

Proof Gaming

/philosophy : argument : pseudoscience : false salience/ : employing dilettante concepts of ‘proof’ as a football in order to win arguments, disfavor disliked groups or thought, or exercise fake versions of science. Proof gaming presents itself in six speciations:

Catch 22 (non rectum agitur fallacy) – the pseudoscience of forcing the proponent of a construct or observation to immediately and definitively skip to the end of the scientific method and single-handedly prove their contention, circumventing all other steps of the scientific method and any aid of science therein; this monumental achievement being prerequisite before the contention would ostensibly be allowed to be considered by science in the first place. A backwards scientific method, and a skipping of the plurality and critical work content steps of science.

Fictitious Burden of Proof – declaring a ‘burden of proof’ to exist when such an assertion is not salient under science method at all. A burden of proof cannot possibly exist if neither the null hypothesis, the alternative theories, nor any proposed construct possesses a Popper-sufficient testable/observable/discernible/measurable mechanism; nor moreover, if the subject in the matter of ‘proof’ bears no Wittgenstein-sufficient definition in the first place (such as the terms ‘god’ or ‘nothingness’).

Herculean Burden of Proof – placing a ‘burden of proof’ upon an opponent which is either an argument from ignorance (asking to prove absence), not relevant to science, or not inside the relevant range of achievable scientific endeavor in the first place. Assigning a burden of proof which cannot possibly be provided/resolved by a human being inside our current state of technology or sophistication of thought/knowledge (such as ‘prove abiogenesis’ or ‘prove that only the material exists’). Asking someone to prove an absence proposition (such as ‘prove elves do not exist’).

Fictus Scientia – assigning to disfavored ideas a burden of proof which is far in excess of the standard regarded for acceptance or even due consideration inside science methods. Similarly, any form of denial of access to acceptance processes normally employed inside science (usually peer review, both at theory formulation and at completion). A request for proof as the implied standard of science – while failing to realize, or deceiving opponents into failing to realize, that 90% of science is not settled by means of ‘proof’ to begin with.

Observation vs Claim Blurring – the false practice of calling an observation or data set a ‘claim’ on the observers’ part. This is done in an effort to subjugate such observations into the category of scientific claims which therefore must now be ‘proved’ or dismissed (the real goal: see Transactional Occam’s Razor Fallacy). In fact an observation is simply that: a piece of evidence or a cataloged fact. Its false dismissal under the pretense of being deemed a ‘claim’ is a practice of deception and pseudoscience.

As Science as Law Fallacy – conducting science as if it were being reduced inside a court of law or by a judge (usually the one forcing the fake science to begin with), through either declaring a precautionary principle theory to be innocent until proved guilty, or forcing standards of evidence inside a court of law onto hypothesis reduction methodology, when the two processes are conducted differently.

All of these tactics are common practices which abrogate the role and discipline of science.  Additionally, a key set of symptoms to look for, in determining that Proof Gaming is underway, are when

  1. one of these tactics is conducted inside a media spotlight,  and when
  2. every media outlet is reciting the same story, and the same one-liner such as ‘extraordinary claims demand extraordinary evidence’, verbatim.

This is an indicator that a campaign is underway to quash a subject.

The sad reality is that on most tough issues, any one person or small group of outsiders is poorly equipped to prove a subject beyond question. Popper recognized this later in his life’s work. We simply do not have the resources and time to accomplish such a task. SSkeptics know this and use it to their advantage. The people who are calling, for example, for research on the connection between cognitive delays in children and the potential role immunizations may have played in this, are simply asking for science to do the research. The response they receive is “You can’t prove the link” – thus we are justified in waging a media campaign against you and scientifically ignoring this issue. This is Proof Gaming. Complicating this is the fact that the issue is broader than simply MMR and Thimerosal (the majority body of current study), involving the demand for science to research the causes of validly skyrocketing levels of developmental delays, autoimmune disorders, and learning disabilities in our children. The issue bears plurality and precaution, but is answered with ignorance. The Proof Gamers who sling epithets such as “Deniers,” “Anti-vaccinationistas” and “Autistic Moms” are committing scientific treason. One should note that the handiwork of such SSkeptics is rarely characterized by outcomes of value or clarity, is typically destructive and control oriented, and is reliably made media-visible (see our next Poser Science series on the tandem symbiosis between virtue signalling and malevolence).

Hype and name-calling have no place in pluralistic research, and the media pundits who commit this are practicing pseudoscience, plain and simple. Once plurality has been established, the games should be over. But not for Proof Gamers. Attacking proponents who have done case research to call for further science (not proving the subject), for not “proving beyond a shadow of a doubt” their contentions, is an act of pseudoscience.

This fake demand for proof before research is Proof Gaming; it is an abrogation of the Scientific Method, and is Pseudoscience.


¹  The Internet Encyclopedia of Philosophy, "Karl Popper: Critical Rationalism."

February 28, 2017 | Posted in Agenda Propaganda, Argument Fallacies, Social Disdain, Tradecraft SSkepticism

Discerning Sound from Questionable Science Publication

Non-replicable meta-analyses published in tier I journals do not constitute the preponderance of good source material available to the more-than-casual researcher. This faulty idea stems from a recently manufactured myth on the part of social skepticism. Accordingly, the life-long researcher must learn techniques beyond the standard pablum pushed by social skeptics; discerning techniques which will afford them a superior ability to tell good science from bad – through more than simply shallow cheat sheets and publication social-ranking classifications.
The astute ethical skeptic is very much this life-long and in-depth researcher. For him or her, ten specific questions can serve to elucidate this difference inside that highly political, complicated and unfair playing field called science.

Recently, a question was posed to me by a colleague concerning the ability of everyday people to discern good scientific work from dubious efforts. A guide had been passed around inside her group, a guide which touted itself as a brief on 5 key steps inside a method to pin-point questionable or risky advising publications. The author cautioned appropriately that "This method is not infallible and you must remain cautious, as pseudoscience may still dodge the test." He failed, of course, to mention the obvious additional risk: that the method could serve to screen out science which either 1) is good, but cannot possibly muster the credential, funding and backing needed to catch the attention of crowded major journals, or 2) is valid, however is also screened by power-wielding institutions which could have the resources and connections, as well as possible motive, to block research on targeted ideas. The article my friend's group was circulating constituted nothing but a Pollyanna, wide-eyed and apple-pie view of the scientific publication process – one bereft of the scarred knuckles and squint-eyed wisdom requisite in discriminating human motivations and foibles.

There is much more to this business of vetting ideas than simply identifying the bad people and the bad subjects. More than simply crowning the conclusions of ‘never made an observation in my life’ meta-analyses as the new infallible standard of truth.

Scientific organizations are prone to the same levels of corruption, bias, greed, desire to get something for as little input as possible, as is the rest of the population. Many, or hopefully even most, individual scientists buck this mold certainly, and are deserving of utmost respect. However, even their best altruism is checked by organizational practices which seek to ensure that those who crave power, are dealt their more-than-ample share of fortune, fame and friar-hood. They will gladly sacrifice the best of science in this endeavor. And in this context of human wisdom it is critical that we keep watch.

If you are a casual reader of science, say consuming three or four articles a month, then certainly the guidelines outlined by Ariel Poliandri below, in his blog entitled “A guide to detecting bogus scientific journals”, represent a suitable first course on the menu of publishing wisdom.¹ In fact, were I offered this as the basis of a graduate school paper, it would be appropriately and warmly received. But if this is all you had to offer the public after 20 years of hard fought science, I would aver that you had wasted your career therein.

1 – Is the journal a well-established journal such as Nature, Science, Proceedings of the National Academy of Sciences, etc.?
2 – Check authors’ affiliations. Do they work in a respectable University? Or do they claim to work in University of Lala Land or no university at all?
3 – Check the Journal’s speciality and the article’s research topic. Are the people in the journal knowledgeable in the area the article deals with?
4 – Check the claims in the title and summary of the article. Are they reasonable for the journal publishing them?
5 – Do the claims at least make sense?

This list represents simply a non-tenable way to go about vetting your study and resource material, one through which only pluralistic ignorance influences your knowledge base. It is lazy – sure to be right and safe – useless advisement to a true researcher. The problem with this list resides inside some very simple industry realities:

1.  ‘Well-established journal’ publication requires sponsorship from a major institution. Scientific American cites that 88% of scientists possess no such sponsorship, and this statistic has nothing to do with the scientific groups’ relative depth in subject field.² So this standard, while useful for the casual reader of science, is not suitable at all for one who spends a lifetime of depth inside a subject. This would include for instance, a person studying impacting factors on autism in their child, or persons researching the effect of various supplements on their health. Not to mention of course, the need to look beyond this small group of publications applies to scientists who spend a life committed to their subject as well.

One will never arrive at truth by tossing out 88% of scientific studies right off the bat.

2.  Most scientists do not work for major universities. Fewer than 15% of scientists ever get to participate in this sector even once in their career.² This again is a shade-of-gray replication of the overly stringent filtering bias recommended in point 1. above. I have employed over 100 scientists and engineers over the years, persons who have collectively produced groundbreaking studies. For the most part, none ever worked for a major university. Perhaps 1 or 2 spent a year inside university affiliated research institutes. Point 2 is simply a naive standard which can only result in filtering out everything with the exception of what one is looking for. One must understand that, in order to survive in academia, one must be incrementally brilliant and not what might be even remotely considered disruptively brash. Academics bask in the idea that their life’s work and prejudices have all panned out to come true. The problem with this King Wears No Clothes process is that it tends to stagnate science, and not provide the genesis of great discovery.

One will never arrive at truth by ignoring 85% of scientists, right off the bat.

3.  There are roles for both specialty journals and generalized journals. There is a reason for this, and it is not to promote pseudoscience as the blog author implies (see statement in first paragraph above). A generalized journal maintains resource peers to whom they issue subject matter for review. They are not claiming peer evaluation to be their sole task. Larger journals can afford this, but not all journals can. Chalk this point up to naivete as well. Peer review requires field qualification; however in general, journal publication does not necessarily. Sometimes they are one and the same, sometimes not. Again, if this is applied without wisdom, such naive discrimination can result in a process of personal filtering bias, and not stand as a suitable standard identifying acceptable science.

One will never arrive at truth by viewing science as a club. Club quality does not work.

4.  Check for the parallel nature of the question addressed in the article premise, methodology, results, title and conclusion.  Article writers know all about the trick of simply reading abstracts and summaries. They know 98% of readers will only look this far, or will face the requisite $25 to gain access further than the abstract. If the question addressed is not the same throughout, then there could be an issue. As well, check the expository or disclosure section of the study or article. If it consists, even in part, of a polemic focusing on the bad people, or the bad ideas, or the bad industry player – then the question addressed in the methodology may have come from bias in the first place. Note: blog writing constitutes this type of writing. A scientific study should be disciplined to the question at hand, be clear on any claims made, and as well any preliminary disclosures which help premise, frame, constrain, or improve the predictive nature of the question. Blogs and articles do not have to do this; however, neither are they scientific studies. Know the difference.

Writers know the trick – that reviewers will only read the summary or abstract. The logical calculus of a study resides below this level. So authors err toward favoring established ideas in abstracts.

5.  Claims make sense with respect to the context in which they are issued and the evidence by which they are backed. Do NOT check to see if you believe the claims or they make some kind of ‘Occam’s Razor’ sense. This is a false standard of ‘I am the science’ pretense taught by false skepticism. Instead, understand what the article is saying and what it is not saying – and avoid judging the article based on whether it says something you happen to like or dislike. We often call this ‘sense’ – and incorrectly so. It is bias.

Applying personal brilliance to filter ideas, brilliance which you learned from only 12% of publication abstracts and 15% of scientists who played the game long enough – is called: gullibility.

It is not that the body of work vetted by such criteria is invalid; rather simply that to regard science as only this is short-sighted and bears fragility. Instead of these Pollyanna 5 guidelines, the ethical skeptic will choose to understand whether or not the study or article in question is based upon standards of what constitutes good Wittgenstein and Popper science. This type of study can be conducted by private lab or independent researchers too. One can transcend the Pollyanna 5 questions above by asking the ten simple questions regarding any material, outlined in the graphic at the top of this article. Epoché is exercised by keeping their answers in mind, without prejudice, as you choose to read onward. Solutions to problems come from all levels and all types of contributors. This understanding constitutes the essence of wise versus naive science.

“Popper holds that there is no unique methodology specific to science. Science, like virtually every other human, and indeed organic, activity, Popper believes, consists largely of problem-solving.”³

There are two types of people, those who wish to solve the problem at hand, and those who already had it solved, so it never was a problem for them to begin with, rather simply an avenue of club agenda expression or profit/career creation.

Let’s be clear here: If you have earned tenure as an academic or journal reviewer or a secure career position which pays you a guaranteed $112,000 a year, from age 35 until the day you retire, this is the same as holding a bank account with $2,300,000 in it at age 35† – even net of the $200,000 you might have invested in school. You are a millionaire. So please do not advertise the idea that scientists are all doing this for the subject matter.

$2.3 million (or more in sponsorship) is sitting there waiting for you to claim it – and all you have to do is say the right things, in the right venues, for long enough.
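For the record, the arithmetic behind that figure (see the † footnote at the end of this article) is just a standard ordinary-annuity present value. A minimal sketch, using the footnote's own inputs; the function name is mine:

```python
def present_value(payment: float, rate: float, n_periods: int) -> float:
    """Present value of an ordinary annuity with a zero ending balance."""
    return payment * (1 - (1 + rate) ** -n_periods) / rate

# Footnote inputs: $112,000/yr, i.e. ~$9,333/month, for 456 months at 0.25% per period
pv = present_value(9333, 0.0025, 456)
print(round(pv))  # on the order of $2.5 million at these exact inputs
```

Run at these exact inputs, the formula lands somewhat above the $2.3 million quoted; the order of magnitude, which is the point being made here, is unaffected.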

This process of depending solely on tier I journals – is an exercise in industry congratulationism. There has to be a better way to vet scientific study, …and there is. The following is all about telling which ilk of person is presenting an argument to you.

The Ten Questions Differentiating Good Science from Bad

Aside from examining a study's methodology and logical calculus itself, the following ten questions are what I employ to guide me as to how much agenda and pretense has been inserted into its message or methodology. There are many species of contention; eight at the least, if we take the combinations of the three bisected axes in the graph to the right – twenty-four permutations if we take into account the sequence in which the logic is contended (using falsification to promote an idea, versus promoting the idea that something 'will be falsified under certain constraints', etc.). In general, what I seek to examine is an assessment of how many ideas the author is seeking to refute or promote, with what type of study, and with what inductive or deductive approach. An author who attempts to dismiss too many competing ideas, via a predictive methodology supporting a surreptitiously promoted antithesis, which cannot possibly evaluate a critical theoretical mechanism – this type of study or article possesses a great likelihood of delivering bad science. Think about the celebrity skeptics you have read. How many competing ideas are they typically looking to discredit inside their material, and via one mechanism of denial (usually an apothegm and not a theoretical mechanism)? The pool comprises 768 items – many to draw from – and draw from this, they do.
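As a sanity check on the combinatorics, the eight base species fall straight out of the three bisected axes described in questions III through V below. A quick sketch; the axis labels are my own shorthand:

```python
from itertools import product

# Three bisected axes of contention: 2 x 2 x 2 = 8 base species
axes = {
    "mechanism": ("falsification", "prediction"),      # z-axis
    "direction": ("deny idea(s)", "promote idea(s)"),  # x-axis
    "scope":     ("single idea", "group of ideas"),    # y-axis
}
species = list(product(*axes.values()))
print(len(species))  # 8
```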

Let's be clear here – a study can pass major journal peer review and possess acceptable procedural/analytical methodology, yet say or implicate absolutely nothing for the most part – ultimately being abused (or abusing its own research by extrapolating its reach) to say things which the logical calculus involved would never support (see Dunning-Kruger Abuse). Such conditions do not mean that the study will be refused peer review. Peer reviewers rarely ever contend (if they disregard the 'domain of application' part of a study's commentary):

"We reject this study because it could be abused in its interpretation by malicious stakeholders."

Just because a study is accepted for and passes peer review does not mean that all of its extrapolations, exaggerations, implications or abuses are therefore true. You, as the reader, are the one who must apply the sniff test as to what the study is implying, saying or being abused to say. What helps a reader avoid this? Those same ten questions from above.

The ten questions I have found most useful in discerning good science from bad are formulated based upon the following Popperian four-element premise.³ All things being equal, better science is conducted in the case wherein

  • one idea is
  • denied through
  • falsification of its
  • critical theoretical mechanism.

If the author pulls this set of four things off successfully, and eschews promotion of 'the answer' (which is the congruent context of having disproved a set of myriad ideas), then the study stands as a challenge to the community and must be sought for replication (see question IX below). For the scientific community at large to ignore such a challenge is the genesis of (our pandemic) pluralistic ignorance.

For instance, in one of the materials research labs I managed, we were tasked by an investment fund and their presiding board with determining the compatibility of titanium to various lattice state effects analogous to iron. The problem, however, is that titanium is not like iron at all. It will not accept the same interstitial relationships with other small-atomic-radius class elements that iron will (boron, carbon, oxygen, nitrogen). We could not pursue the question the way the board posed it: "Can you screw with titanium in exotic ways to make it more useful to high performance aircraft?"  We first had to reduce the question into a series of salient, then sequitur, Bayesian reductions. The first question to falsify was "Titanium maintains its vacancy characteristics at all boundary conditions along the gamma phase state." Without an answer to (a falsification of) this single question, not one single other question related to titanium could be answered in any way, shape or form. Most skeptics do not grasp this type of critical path inside streams of logical calculus. This is an enormous source of confusion and social ignorance. Even top philosophers and celebrity skeptics fail this single greatest test of skepticism. And they are not held to account, because few people are the wiser – and the few who are wise to it keep quiet to avoid the jackboot ignorance enforced by the Cabal.
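The gating behavior described above – nothing downstream is answerable until the critical-path question resolves – can be caricatured in a few lines. A hypothetical sketch; the question names and structure are illustrative only, not the lab's actual protocol:

```python
def answer_chain(questions):
    """Work a reductive question chain in critical-path order.

    Each entry is (name, resolver); a resolver returns True/False when the
    question can be answered, or None when it cannot yet be resolved.
    The loop refuses to touch any downstream question until every upstream
    gate has a definite answer.
    """
    answers = {}
    for name, resolver in questions:
        result = resolver()
        if result is None:
            break  # unresolved gate: everything downstream stays unanswered
        answers[name] = result
    return answers

# Illustrative chain: the vacancy-characteristics question gates all others
chain = [
    ("vacancy characteristics hold at gamma-phase boundaries?", lambda: None),
    ("useful interstitial behavior achievable?", lambda: True),  # never reached
]
print(answer_chain(chain))  # {} – no gate resolved, so nothing was concluded
```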

Which introduces and opens up the more general question: 'What indeed, all things being considered, makes for good, effective science?' This can be lensed through the ten useful questions below, applied in the same fashion as in the titanium example case:

I. Has the study or article asked and addressed the 1. relevant and 2. salient and 3. critical path next question under the scientific method?

If it has accomplished this, it is already contending for tier I science, as only a minority of scientists understand how to pose reductive study in this way. If it has not done this, then do not even proceed to the next questions II through VII below. Throw the study in the waste can. Snopes is notorious for this type of chicanery. The material is rhetoric, targeting a victim group, idea or person.

If the answer to this is ‘No’ – Stop here and ignore the study. Use it as an example of how not to do science.

II. Did the study or article focus on utilization of a critical theoretical mechanism which it set out to evaluate for validity?

The litmus which differentiates a construct (idea or framework of ideas) from a theory, is that a theory contains a testable and critical theoretical mechanism. ‘God’ does not possess a critical theoretical mechanism, so God is a construct which cannot be measured or tested to any Popperian standard of science. God is not a theory. Even more so, many theories do not possess a testable mechanism, and are simply defaulted to the null hypothesis instead. Be very skeptical of such ‘theories’.

If the answer to this is ‘No’ – Regard the study or article as an opinion piece and not of true scientific incremental value.

III.  Did the study or article attempt to falsify this mechanism, or employ it to make predictions? (z-axis)

Karl Popper outlined that good science involves falsification of alternative ideas or the null hypothesis. However, given that 90% of science cannot be winnowed through falsification alone, it is generally recognized that a theory's predictive ability can act as a suitable critical theoretical mechanism via which to examine and evaluate. Evolution was accepted through just such a process. In general however, mechanisms which are falsified are regarded as stronger science than mechanisms which successfully predict.

If the study or article sought to falsify a theoretical mechanism – keep reading with maximum focus. If the study used predictive measures – catalog it and look for future publishing on the matter.

IV.  Did the study or article attempt to deny specific idea(s), or did it seek to promote specific idea(s)? (x-axis)

Denial and promotion of ideas is not, standing alone, a discriminating facet inside this issue. What is significant here is how it interrelates with the other questions. In general, attempting to deny multiple ideas or to promote a single idea are techniques regarded as less scientific than the approach of denying a single idea – especially if one is able to bring falsification evidence to bear on the critical question and theoretical mechanism.

Simply keep the idea of promotion and denial in mind while you consider all other factors.

V.  Did the study affix its contentions on a single idea, or a group of ideas? (y-axis)

In general, incremental science and most of discovery science work better when a study focuses on one idea for evaluation, and not a multiplicity of ideas. This minimizes extrapolation and special pleading loopholes or ignorance – both deleterious implications for a study. Prefer authors who study single ideas over authors who try to make evaluations upon multiple ideas at once. The latter task is not a wise undertaking, even in the instance where special pleading can theoretically be minimized.

If your study author is attempting to tackle the job of denying multiple ideas all at once – then the methodical cynicism alarm should go off. Be very skeptical.

VI.  What percent of the material was allocated towards ideas versus the more agenda oriented topics of persons, events or groups?

If the article or study spends more than 10% of its Background material focused on persons, events or groups it disagrees with, throw the study in the trash. If any other section contains such material above 0%, then the study should be discarded as well. Eleanor Roosevelt is credited with the apothegm "Great minds discuss ideas; average minds discuss events; small minds discuss people."

Take your science only from great minds focusing on ideas and not events or persons.

As well, if the author broaches a significant amount of material which is related, but irrelevant or non-salient to the question at hand, you may be witnessing an obdurate, polemic or ingens vanitatum argument. Do not trust a study or article where the author appears to be demonstrating how much of an expert they are in the matter (through addressing related but irrelevant, non-salient or non-sequitur material). Be very skeptical of such publications.

VII. Did the author put an idea, prediction or construct at risk in their study?

Fake science promoters always stay inside well-established lines of social safety, so that they 1) are never found wrong, 2) don't bring the wrong type of attention to themselves (remember the $2.3+ million which is at stake here), and 3) can imply their personal authority inside their club as an opponent-inferred appeal in arguing. They always repeat the correct apothegm, and always come to the correct conclusion. They will make a habit of taunting those with redaction.

Advancing science always involves some sort of risk. Do not consider those who choose paths of safety, familiarity and implied authority to possess any understanding of science.

VIII.  Of which type was the study? (In order of increasing gravitas)

1.  Psychology or Motivation (Pseudo-Theory – Explains Everything)

2.  Meta-Data – Studies of Studies (Indirect Data Only vulnerable to Simpson’s Paradox or Filtering/Interpretive Bias)

3.  Data – Cohort and Set Measures (Direct but still Data Only)

4.  Direct Measurement Observation (Direct Confirmation)

5.  Inductive Consilience Establishment (Preponderance of Evidence from Multiple Channels/Sources)

6.  Deductive Case Falsification (Smoking Gun)

All it takes in order to have a strong study is one solid falsifying observation. This is the same principle as is embodied inside the apothegm ‘It only takes one white crow, to falsify the idea that all crows are black’.
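That asymmetry is easy to make concrete. A minimal sketch – the crow framing is the apothegm's, the code is mine:

```python
def falsified(universal_claim, observations):
    """A universal claim is falsified by any single counterexample,
    no matter how many confirming observations precede it."""
    return any(not universal_claim(obs) for obs in observations)

all_crows_black = lambda crow: crow == "black"

print(falsified(all_crows_black, ["black"] * 10_000))              # False
print(falsified(all_crows_black, ["black"] * 10_000 + ["white"]))  # True
```

Ten thousand black crows never prove the claim; one white crow ends it. This is why the ordering above ranks deductive falsification over every inductive category.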

IX.  When the only viable next salient and sequitur reductive step, post study – is to replicate the results – then you know you have a strong argument inside that work.

X.  Big data and meta-analysis studies like to intimidate participants in the scientific method with the implicit taunt “I’m too big to replicate, bring consensus now.”
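The Simpson's Paradox vulnerability flagged for meta-data in category 2 above deserves a concrete look. The classic kidney-stone treatment numbers (Charig et al., 1986) show how pooled data can invert every stratified result:

```python
# treatment -> {stratum: (successes, total)}  (Charig et al., 1986)
data = {
    "A": {"small": (81, 87),   "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Treatment A wins inside EVERY stratum...
for stratum in ("small", "large"):
    assert rate(*data["A"][stratum]) > rate(*data["B"][stratum])

# ...yet once the strata are pooled, Treatment B appears superior.
pooled = {t: tuple(map(sum, zip(*strata.values()))) for t, strata in data.items()}
assert rate(*pooled["B"]) > rate(*pooled["A"])  # ~83% vs ~78%
```

A meta-analysis which only ever sees the pooled row inherits exactly this inversion – one reason direct measurement outranks studies-of-studies in the gravitas ordering above.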

These questions, more than anything else, will allow the ethical skeptic to begin to grasp what is reliable science and what is questionable science – especially in the context where one can no longer afford to dwell inside only the lofty 5% of the highest-regarded publications, or can no longer stomach the shallow talking-point sheets of social skepticism – all of which serve only to ignore or give short shrift to the ideas to which one has dedicated their life in study.


¹  Poliandri, Ariel; "A guide to detecting bogus scientific journals"; Sci – Phy, May 12, 2013.

²  Benderly, Beryl Lieff; "Does the US Produce Too Many Scientists?"; Scientific American, February 22, 2010.

³  Thornton, Stephen; "Karl Popper"; The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.).

†  Present Value of future cash flows with zero ending balance: 456 payments of $9,333 per month at .25% interest per payment period.

February 25, 2017 | Posted in Agenda Propaganda, Social Disdain

The Tower of Wrong: The Art of Professional Lying

James Joyce is credited with this wisdom, “A man of genius makes no mistakes; his errors are volitional and are the portals of discovery.” Indeed, I would choose rather to be informatively incorrect, over disinformatively or uselessly correct, any day. This contrast in type of ‘wrong’ illuminates the domain of Machiavellian ideas, The Tower of Wrong; ideas which are woven of fact, yet serve to constitute in the end only adornments of error.
Beyond the three proposition framings of Wittgenstein, there exist six mechanisms of social imposition – along with the football-like manner in which quasi-truth is handled – which serve as the linchpins inside professional lying. The Tower of Wrong depicts how partly correct, correct-but-useless or dis-informing evidence (Wittgenstein sinnlos) is to be clarified as distinct from deontological information – information reliable in being critically predictive or bearing falsification outcomes.

Under a Popperian standard of scientific demarcation, if something is rendered moot through consilience of its opposing thesis, then it is not necessarily falsified; however, we may select it to stand as either a null hypothesis or a provisionally accepted norm nonetheless – most philosophers grasp this.  Of key concern inside such a process of knowledge development, however, is the possibility that our resulting relegation of an opposing idea to the state of moot-ness (pseudoscience) might stand as simply a provisional assumption bearing a dangerous, undetermined risk. In general, a provisional conclusion is regarded to possess informing ability if that provision then becomes critically predictive when posed inside its structure of consilience.  By 'critically' – I mean that the provisional assumption itself serves to produce the prediction, not that it simply resides as a feature inside a host of other predictive peer elements. Evolution is an example of one such reliable predictor. However, purely random allele mutation is not a critically reliable predictor inside evolutionary theory, despite evolution itself so being.

Thus I cannot simply declare falsification to be the sole threshing tool by which one establishes knowledge/truth/accuracy/foundation philosophy. Given this playground of slack just below the threshold of Popper falsification, it behooves the ethical skeptic to be wary of the ploys which can serve to deceive inside claims of 'facts' and 'evidence'. It is not simply that our minds can deceive us into selecting for desired outcomes – this is a given. Moreover, our most risk-bearing vulnerability resides instead in the fact that stacks of unvetted, non-reliably-predictive 'evidence' can provisionally stack (see The Warning Indicators of Stacked Provisional Knowledge) and serve to misinform and mislead us toward wrong or useless conclusions under a 'scientific' context as well. The following questions should be asked when any proposition or claim to settled science has been issued as authority:

The Test of the Professional-Social Lie (Six Mechanisms)

1.  The (Wonka) Golden Ticket – Have we ever really tested the predictive strength of this idea standalone, or evaluated its antithetical ideas for falsification?

Einfach Mechanism – an explanation, theory or idea which resolves a contention under the scientific method solely by means of the strength of the idea itself. An idea which is not vetted by the rigor of falsification, predictive consilience nor mathematical derivation, rather is simply considered such a strong, or Occam’s Razor (sic) simple an idea that the issue is closed as finished science from its proposition and acceptance onward. An einfach mechanism may or may not be existentially true.

2.  Cheater’s Hypothesis – Does an argument proponent constantly insist on a ‘burden of proof’ upon any contrasting idea, a burden that they never attained for their argument in the first place? An answer they fallaciously imply is the scientific null hypothesis; ‘true’ until proved otherwise?

Imposterlösung Mechanism – the cheater’s answer. Employing the trick of pretending that an argument domain which does not bear coherency nor soundness – somehow (in violation of science and logic) falsely merits assignment of a ‘null hypothesis’. Moreover, then that null hypothesis must be assumed sans any real form or context of evidence, or Bayesian science cannot be accomplished. Finally then, that a null hypothesis is therefore regarded by the scientific community as ‘true’ until proved otherwise. A 1, 2, 3 trick of developing supposed scientifically accepted theory which in reality bears no real epistemological, logical, predicate structure nor scientific method basis whatsoever.

3.  Omega Hypothesis (HΩ) – Is the idea so important that it now stands more important than the methods of science, or than science itself? Does the idea leave a trail of dead competent professional bodies behind it?

Höchste Mechanism – when a position or practice, purported to be of scientific basis, is elevated to such importance that removing the rights of professionals and citizens to dissent, speak, organize or disagree (among other rights) is justified in order to protect the position or the practice inside society.

4.  Embargo Hypothesis (Hξ) – Was the science terminated years ago, in the midst of large-impact questions of a critical nature which still remain unanswered? Is such research now considered 'anti-science' or 'pseudoscience'?

Entscheiden Mechanism – the pseudoscientific or tyrannical approach of, when faced with epistemology which is heading in an undesired direction, artificially declaring under a condition of praedicate evidentia, the science as ‘settled’.

5.  Evidence Sculpting – has more evidence been culled from the field of consideration for this idea, than has been retained? Has the evidence been sculpted to fit the idea, rather than the converse?

Skulptur Mechanism – the pseudoscientific method of treating evidence as a work of sculpture. Methodical inverse negation techniques employed to dismiss data, block research, obfuscate science and constrain ideas such that what remains is the conclusion one sought in the first place. A common tactic of those who boast of all their thoughts being ‘evidence based’. The tendency to view a logical razor as a device which is employed to ‘slice off’ unwanted data (evidence sculpting tool), rather than as a cutting tool (pharmacist’s cutting and partitioning razor) which divides philosophically valid and relevant constructs from their converse.

6.  Lindy-Ignorance Vortex – do those who enforce or imply a conforming idea or view, seem to possess a deep emotional investment in ensuring that no broach of subject is allowed regarding any thoughts or research around an opposing idea or specific ideas or avenues of research they disfavor? Do they easily and habitually imply that their favored conclusions are the prevailing opinion of scientists? Is there an urgency to reach or sustain this conclusion by means of short-cut words like ‘evidence’ and ‘fact’? If such disfavored ideas are considered for research or are broached, then extreme disdain, social and media derision are called for?

Verdrängung Mechanism – the level of control and idea displacement achieved through skillful employment of the duality between pluralistic ignorance and the Lindy Effect. The longer a control-minded group can sustain an Omega Hypothesis perception by means of the tactics and power protocols of proactive pluralistic ignorance, the greater the future acceptability and lifespan that idea will possess. As well, the harder it will be to dethrone as an accepted norm, or as a perceived 'proved' null hypothesis.

If the answer to any or all of these six questions is a very likely yes, it does not mean that the defended idea is necessarily invalid; rather that the methods of socially arriving at, accepting and enforcing it are invalid. These are the litmus tests of professional lying at play. Take notice that a ‘fact’ therefore does not necessarily serve to transfer or increase knowledge. Evidence is an amorphous, hard-to-grasp principle which can be sculpted to fit an idea through the actions of a perfidious-minded party. A principle which wise philosophers understand, but pseudo-skeptics do not.

The dangers of such unethical practice sets inside of science are two-fold. First, at face value, incorrect ideas and tyrannical social science or public policy can be enforced as scientifically correct paradigms by means of these mechanisms. But even more important,

even valid science can lose its credibility and public trust when enforced by unethical means such as these mechanisms.

One cannot simply run around conducting unethical social activity in the name of science, and justify it through one’s credentials being, or pretending to be, scientific. The danger of discrediting valid science is simply too high – one is ‘farming tumbleweeds’, as the adage goes.  Man-made global climate change is an example of just such a situation, wherein unethical strong-arm and pre-emptive measures were used to enforce an academic idea before it was fully vetted by science (see Carl Sagan, The Cosmic Connection, 1972).  AGW turned out after the fact to have merit, but only by means of further studies which occurred after the social chicanery, arrogance and derision had been well underway. We made enemies, rather than science. In this regard, AGW proponents practicing these mechanisms turned out to be their own worst enemy – and every bit as damaging to the climate change advocacy message as are the AGW deniers today.

Which introduces now this broader context of just what constitutes different states of being ‘wrong’. Wittgenstein identified a tri-fold disposition framework for propositions, which helps the ethical skeptic work their way through this menagerie of ‘wrong’ and sort their way to the deontological goals of value and clarity. The ability to discern much of this – the critical set of nuance inside Popperian science demarcation and Wittgenstein’s delineation of meaninglessness, nonsense and uselessness – resides at the heart of what I call The Tower of Wrong.

Useless (sinnlos) Correctness Resides at the Heart of the Professional Lie

So we have established that the value of a proposition relates to its nature in being critically informative or predictive. It cannot simply hide on the team of players composing a proposition or theory; it has to be THE star player when its time has come to stand at bat. What then do we do with Snoping – a condition wherein a proposition is factually correct, but because of the non-salient or useless nature of the chosen question or quickly ascertained ¡fact! surrounding it, only serves to dis-inform? The Tower of Wrong shows us how partly correct, correct-but-useless, or dis-informing evidence (Wittgenstein sinnlos) is to be clarified as distinct from deontological information – information reliable in being critically predictive or bearing falsification outcomes.

Recently we finished a vitriolic presidential election, inside of which a particular accusation had been made from one of the candidates towards the other. Specifically, Hillary Clinton was accused of mishandling classified material at origin, by sending it through non-secure means of communication, and handling it in a non-secured environment and by means of non-secured premises and procedure.  The accusation pertained to a batch of several thousand emails which bore classified material and classified context, but were sent over personal computers and media services in violation of the National Security Act.

Clinton’s technically correct response to the allegations was issued as follows:

“I have a lot of experience dealing with classified material, starting when I was on the Senate Armed Services Committee going into the four years as secretary of state,” she said. “Classified material has a header which says ‘top secret, secret, confidential.’ Nothing, and I will repeat this, and this is verified in the report by the Department of Justice, none of the emails sent or received by me had such a header.”   ~Hillary Clinton ¹

Now let’s break this set of propositions down by their logical calculus under The Tower of Wrong deontological framing:

  • First sentence – true (red herring, appeal to authority)
  • Second sentence – true (serves to dis-inform – ingens vanitatum – see below)
  • Third sentence – true (ignoratio elenchi – a misdirection in argument around threatening ‘classified material at origin’ laws under national security.)

In other words, what Ms. Clinton did here was authoritatively lie, through facts and argument misdirection. How do I know? I was a director-level Black Top Secret intelligence officer for years. I know how classified material is to be handled at origin. Ms. Clinton conveniently misdirected the argument to a context of administrative handling conditions, wherein she either originated classified material, or re-posted or discussed such material stripped of its Controlling Authority context and marking. Nice trick. Origin classified material NEVER bears ‘top secret, secret, confidential’ markings. Those dispositions are only tendered later by the Controlling Authority.² However, classified material of such nature prior to disposition is handled in the same way as is all classified material – and any recitation or discussion of such materials retains the classification of the referenced material itself (citation: Executive Order 13526 and National Security Act procedures for handling classified material at origin).² If what Ms. Clinton claimed about having ‘a lot of experience dealing with classified material [at] the Armed Services Committee’ was true, and I think it was, then Ms. Clinton knew this to be a misdirection. She lied by means of an ignoratio elenchi fallacy called ingens vanitatum – a key element inside The Tower of Wrong.

Ingens Vanitatum

/philosophy : argument : fallacy : ignoratio elenchi/ : knowing a great deal of irrelevance. Knowledge of every facet of a subject and all the latest information therein, which bears irony however in that this supervacuous set of knowledge stands as all that composes the science, or all that is possessed by the person making the claim to knowledge. A useless set of information which serves only to displace any relevance of the actual argument, principle or question entailed.

Ingens Vanitatum Argument – citing a great deal of irrelevance. A posing of ‘fact’ or ‘evidence’ framed inside an appeal to expertise, which is correct and relevant information at face value; however which serves to dis-inform as to the nature of the argument being vetted or the critical evidence or question being asked.

Hillary Clinton’s statement was a correct lie in other words. She lied with facts. The statement does not serve to inform, rather it serves to dis-inform us all. This is what is called an Organic Untruth. It is one of the tools in the utility belt of the skilled professional liar and stands as one of the stack of key elements inside The Tower of Wrong (more specifically the ’50 Shades of Correct’ below).  So without any further ado, let us expand on this towering set of conditions of incorrectness.  In the chart below, you will observe the three Wittgenstein Proposition Framings, in burgundy (bedeutungslos, unsinnig and sinnlos) – comprising the stack elements which constitute the journey from confusion, to delusion, to lying…

…highlighting the final breakthrough in the mind of the ethical skeptic: Value and Clarity. The critical deontological nature of relevant, salient and scientific critical path information, as they are enabled by knowledge and the state of being found incorrect (has value: see blue pyramid stacks below).

The Components of the Professional Lie

At the heart of the professional lie resides the agenda its teller is seeking to protect: the Omega Hypothesis. This is the agenda, conclusion or theory which has become more important to protect than the integrity of science itself.

Omega Hypothesis (HΩ) – the argument which is foisted to end all argument, period. A conclusion which has become more important to protect, than the integrity of science itself. An invalid null hypothesis or a preferred idea inside a social epistemology. A hypothesis which is defined to end deliberation without due scientific rigor, alternative study consensus or is afforded unmerited protection or assignment as the null. The surreptitiously held and promoted idea or the hypothesis protected by an Inverse Negation Fallacy. Often one which is promoted as true by default, with the knowledge in mind that falsification will be very hard or next to impossible to achieve.

The Omega Hypothesis is enacted and supported through the following Tower of Wrong elements (Wittgenstein sinnlos) and the Wittgenstein sinnlos Mechanisms. As a note: the definition I have crafted here for sinnlos develops the concept into a clearer and more complete fit with today’s methods of misinformation – rather than solely its classic Wittgenstein framing as ‘senseless’, which in English usage (as opposed to German) overlaps too heavily with his unsinnig ‘nonsense’ class of proposition. In addition, I have taken the concept of sinnlos and applied it to the following four stack elements (Ambiguity, Organic Untruth, Inadequacy and Mechanism) which function to underpin a professional lie. The final elements are the mechanisms which are exercised by the most prolific, celebrity, power-holding and habitual appeal-to-authority enactors of the professional lie.

Wittgenstein Epistemological Error (Proposition Framings) – the categorization of a proposition into meaninglessness, nonsense or uselessness based upon its underlying state or lacking of definition, structure, logical calculus or usefulness in addressing a logical critical path.

bedeutungslos – meaningless. A proposition or question which resides upon a lack of definition, or which contains no meaning in and of itself.

unsinnig – nonsense. A proposition of compromised coherency. Pauli’s ‘not even wrong.’

sinnlos – useless. A contention which does not follow from the evidence, is correct at face value but disinformative or is otherwise useless.


Equivocation – the misleading use of a term with more than one meaning, sense, or use in professional context by glossing over which meaning is intended in the instance of usage, in order to mis-define, inappropriately include or exclude data in an argument.

Proxy Equivocation – the forcing of a new or disliked concept or term into the definition of an older context, concept or term, in order to avoid allowing discrete attention to be provided to the new concept or term. Often practiced through falsely calling the new concept/term a neologism, or brushing it off with the statement ‘that idea has already been addressed.’

Ambiguity – the construction or delivery of a message in such words or fashion as to allow for several reasonable interpretations of the context, object, subject, relationship, material or backing of the intended message.

Slack Exploitation – a form of equivocation or rhetoric wherein an arguer employs a term which at face value appears to constrain the discussion or position contended to a specific definition or domain. However, a purposely chosen word or domain has been employed which allows for several different forms/domains of interpretation of the contention on the part of the arguer. Often this allows the arguer to petition the listener to infer a more acceptable version of his contention, when in fact he is asserting what he knows to be a less acceptable form of it.

Uti Dolo (trick question) – a question which is formed for the primary purpose of misleading a person into selecting (through their inference and/or questioner’s implication) the incorrect answer or answer not preferred inside a slack exploited play of ambiguity, interpretation, sequence, context or meaning. The strong version being where the wrong context is inferred by means of deceptive question delivery; the weak version being where the question is posed inside a slack domain where it can be interpreted legitimately in each of two different ways – each producing a differing answer.

Amphibology – a situation where a sentence may be interpreted in more than one way due to ambiguous sentence structure. An amphibology is permissible, but not preferable, only if all of its various interpretations are simultaneously and organically true.

Context Dancing – the twisting of the context inside which a quotation or idea has been expressed such that it appears to support a separate argument and inappropriately promote a desired specific outcome.

Wittgenstein Error – manipulation of definitions, or the lack thereof.

Descriptive – the inability to discuss, observe or measure a proposition or contention because of a language limitation – a limit of discourse, and not in reality a limit of science’s domain of observability.

Contextual – employment of words in such a fashion as to craft rhetoric, in the form of persuasive or semantic abuse, by means of a shift in word or concept definition by emphasis, modifier, employment or context.

Accent Drift – a specific type of ambiguity that arises when the meaning or level of hyperbole of a sentence is changed by placing an unusual prosodic stress (emphasis on a word), or when, in a written passage, it’s left unclear which word the emphasis was supposed to fall on.²

Subject Ambiguity – the construction or delivery of a message in such words or fashion as to allow for several reasonable interpretations of person, place or thing to which the message applies.

Organic Untruth

A constructive form of argument which uses concealed ambiguity at the core of its foundational structure. A statement which is true at face value, but was not true, or was of unknown or compromised verity, during the timeframe, original basis or domain of context under discussion. Ignoratio elenchi is a misdirection in argument, whereas an ingens vanitatum argument is a method of lying through this same misdirection or misleading set of ‘true facts’.

Ingens Vanitatum Argument – citing a great deal of expert irrelevance. A posing of ‘fact’ or ‘evidence’ framed inside an appeal to expertise, which is correct and relevant information at face value; however which serves to dis-inform as to the nature of the argument being vetted or the critical evidence or question being asked.

Lob & Slam Ploy – a version of good cop/bad cop wherein a virtual partnership exists between well known fake-news ‘satire’ outlets and so-called ‘fact checker’ media patrols. The fake news is generated and posed to the web as satire, subsequently stripped of its context by a third party, and then inserted into social media as true – whereupon it is virally circulated. Subsequently, ‘fact checking’ agencies are alerted to this set-up (the Lob), and then slam home the idea of the fake nature of the ‘news’, as well as the lack of credibility and gullible nature of those who passed it around through social media. This in itself is a fake ploy, a form of Fake-Hoaxing and Hoax Baiting practiced by social agenda forces seeking to artificially enhance the credibility of a news ‘fact checker’.

Praedicate Evidentia – any of several forms of exaggeration or avoidance in qualifying a lack of evidence, logical calculus or soundness inside an argument.

Praedicate Evidentia – hyperbole in extrapolating or overestimating the gravitas of evidence supporting a specific claim, when only one examination of merit has been conducted, insufficient hypothesis reduction has been performed on the topic, a plurality of data exists but few questions have been asked, few dissenting or negative studies have been published, or few or no such studies have indeed been conducted at all.

Praedicate Evidentia Modus Ponens – any form of argument which claims a proposition consequent ‘Q’, but which lacks the qualifying modus ponens ‘If P then’ premise in its expression – rather, implying ‘If P then’ as its qualifying antecedent. This as a means of surreptitiously avoiding a lack of soundness or lack of logical calculus inside that argument; and moreover, enforcing only its conclusion ‘Q’ instead. A ‘There is no evidence for…’ claim made inside a condition of little study or full absence of any study whatsoever.
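The defect described above can be made concrete with a minimal truth-table sketch (my own illustration, not part of the essay's material): modus ponens is truth-preserving only when both the premise P and the implication ‘If P then Q’ are actually established; asserting Q while merely implying those premises guarantees nothing.

```python
from itertools import product

def implies(p, q):
    """Material implication: 'If P then Q' is false only when P is true and Q is false."""
    return (not p) or q

# Valid modus ponens: from (P -> Q) and P, infer Q.
# The inference is valid iff Q holds in every row where BOTH premises hold.
modus_ponens_valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p
)
print(modus_ponens_valid)  # True – the form is truth-preserving

# The praedicate evidentia form asserts Q with its premises merely implied,
# never demonstrated. Absent established premises, nothing constrains Q.
q_follows_unconditionally = all(
    q
    for p, q in product([True, False], repeat=2)
)
print(q_follows_unconditionally)  # False – Q alone is not guaranteed
```

The sketch simply enumerates all truth assignments: once the ‘If P then’ premise is dropped from the antecedent side, the rows falsifying Q are no longer excluded, which is precisely why the conclusion fails to follow.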


Inadequacy

The entire core of fallacy, crooked thinking and misrepresentation of Data, Method, Science, Argument and Assumption which is reflected inside the Tree of Knowledge Obfuscation as it pertains to a subject. This is paired, as it pertains to persons, with misrepresentation of Opponents, Semantics, Groups, Self and Authorities.

Any condition where a conclusion is chosen to be drawn from, or the science is regarded as settled under, a less than satisfactory representation, possession or understanding of the available evidence or under a condition where the available evidence does not satisfactorily provide for a basis of understanding, null hypothesis, Ockham’s Razor plurality, or alternative formulation (as in arguing M Theory).

Mechanism (of Social Lying)

Einfach Mechanism – an explanation, theory or idea which resolves a contention under the scientific method solely by means of the strength of the idea itself. An idea which is not vetted by the rigor of falsification, predictive consilience nor mathematical derivation, but rather is simply considered so strong, or so Occam’s Razor (sic) simple, an idea that the issue is closed as finished science from its proposition and acceptance onward. An einfach mechanism may or may not be existentially true.

Imposterlösung Mechanism – the cheater’s answer. Employing the trick of pretending that an argument domain which bears neither coherency nor soundness somehow (in violation of science and logic) falsely merits assignment of a ‘null hypothesis’. Moreover, that this null hypothesis must then be assumed sans any real form or context of evidence, or Bayesian science cannot be accomplished. Finally, that a null hypothesis is therefore regarded by the scientific community as ‘true’ until proved otherwise. A 1, 2, 3 trick of developing supposed scientifically accepted theory which in reality bears no real epistemological, logical, predicate-structure nor scientific method basis whatsoever.

Höchste Mechanism – when a position or practice, purported to be of scientific basis, is elevated to such importance that removing the rights of professionals and citizens to dissent, speak, organize or disagree (among other rights) is justified in order to protect the position or the practice inside society.

Entscheiden Mechanism – the pseudoscientific or tyrannical approach of, when faced with epistemology which is heading in an undesired direction, artificially declaring under a condition of praedicate evidentia, the science as ‘settled’.

Skulptur Mechanism – the pseudoscientific method of treating evidence as a work of sculpture. Methodical inverse negation techniques employed to dismiss data, block research, obfuscate science and constrain ideas such that what remains is the conclusion one sought in the first place. A common tactic of those who boast of all their thoughts being ‘evidence based’. The tendency to view a logical razor as a device which is employed to ‘slice off’ unwanted data (evidence sculpting tool), rather than as a cutting tool (pharmacist’s cutting and partitioning razor) which divides philosophically valid and relevant constructs from their converse.

Verdrängung Mechanism – the level of control and idea displacement achieved through skillful employment of the duality between pluralistic ignorance and the Lindy Effect. The longer a control-minded group can sustain an Omega Hypothesis perception by means of the tactics and power protocols of proactive pluralistic ignorance, the greater future acceptability and lifespan that idea will possess. As well, the harder it will be to dethrone as an accepted norm, or as a perceived ‘proved’ null hypothesis.

This wiggle room between what is considered to be ‘correct’ and what is indeed true-informing resides at the heart of the 50 Shades of Correct. As you make your journey past the confused, deluded and lying members of our society, this mental framework is useful in separating those who are interested in pushing agendas from those who are keenly and openly interested in the truth.

Epoché Vanguards Gnosis.


¹  Politifact, “Hillary Clinton says none of her emails had classification headers,” Lauren Carroll, ;

²  Executive Order 13526- Classified National Security Information/National Security Act

January 11, 2017 Posted by | Agenda Propaganda, Argument Fallacies, Institutional Mandates |

Dear Journalism Schools: We Deserve Better Quality Graduates as Aspiring Science Communicators

Trust is at an all-time low for Science Journalism in 2017, even off an already abysmally low performance from the last time trust in influencing professions was measured and ranked in 2012/2013. I daresay the data now show that science communicators rank right alongside Congressmen and used car salespeople in their established level of public trust. They have worked hard to earn this notorious accolade. These are not the sharpest tools in the drawer. We deserve better than this.
This blog article seeks to outline some of the characteristics we deserve and should demand from our science communicating journalists. And here is a thought: it would also be nice if they were actually real scientists, technicians, engineers or medical professionals.

Science Communicators are Ranked Alongside Used Car Salespeople in Terms of Trustworthiness

Now I do not pretend in the least that the solution to this is anything close to being easy to devise; as I advise my alma mater from time to time regarding what industry needs most from its science and engineering graduates. In my labs, advisory and operating companies, I grew frustrated at having to retrain every STEM graduate over their first three years of work, in order to help them unlearn the quasi-baloney they were taught in undergraduate school. This was becoming very costly in terms of time and useless salary burden. Instead, I shifted to a program of hiring interns as soon as they had passed their freshman year at three particular universities, and systematically training them alongside their college education – ending up hiring 100% of the interns whom I had used in this fashion inside a variety of STEM analytical, design and research job functions.

I found it interesting to note how much a person can accomplish, if you do not tell them beforehand that they don’t know how to do it.

Interns served to provide creative new approaches to industry practices which were long tried, true and worn out. This was refreshing and surprising, and in small ways reflected a mutual positional symbiosis between the intern and the company.

My interns did not spend the summers partying in Europe and learning how wonderful a snowflake they were. They learned the hard lessons of client demands, complicated design challenges and demanding bosses.

Shifting the course of these schools’ rather large ship – the advance and transfer of knowledge – in order to keep up with the pace of changing technology and economic understanding is monumental in the least; only accomplished through the work of literally hundreds of advisors, instructors and textbook authors for each university school alone.  Developing professionals prepared to deal with modern science, engineering and business challenges is a daunting task, no doubt. So when it comes to extrapolating this process into changing the course of the ship of journalism, I can understand that this is no simple matter. Yet it still needs to be done. In his Westview Press opinion piece, “Good News Bad News: Journalism Ethics and the Public Interest”, philosopher Jeremy Iggers laments that the role of journalism is ethically more suited to the measuring and exposition of public sentiment than to serving as a mouthpiece for corporate indoctrination interests.

“Although journalism’s ethics rest on the idea of journalism as a profession, the rise of market-driven journalism has undermined journalists’ professional status. Ultimately journalism is impossible without a public that cares about the common life. A more meaningful approach to journalism ethics must begin with a consideration of the role of the news media in a democratic society and proceed to look for practical ways in which journalism can contribute to the vitality of public life.” (Iggers, J.)†

We believe that science journalism’s betrayal of this ethic is the primary contributor to its decline in perception. Indeed, trust is at an all-time low for Science Journalism in 2017, even off an already abysmally low performance from the last time trust in influencing professions was measured and ranked in 2012/2013 (depicted to the right, from two polls).¹ ² I daresay now, after a horridly bad year of political advocacy masquerading as science, that science communicators rank right alongside the Congressmen and Car Salespeople chart data in their level of public trust. They have worked hard to earn this notorious accolade.

And Here are Some Examples Why

As an example, the summaries below come from the bios of Tier I Science Communicating journalists. I am not really wishing to focus on the persons, but rather the ideas entailed here, so these are posed anonymously. The persons involved are high quality individuals in matters other than their claim to represent science. I am critiquing the practices of an industry, not the people themselves. However, that being said, none of these people are even remotely qualified to comment on or communicate the topics about which they boast as authorities – and worse than that, boast as spokespersons for an entire super-discipline called science itself.

…career working in government relations and public policy, ended up as an entrepreneur before landing at NASA, where they fell in love with its openness and limitless ability to inspire. Dedicated the last few years of their life to extending that openness to space fans and journalists everywhere, using social media and a warm “class clown” persona to connect with the people who most want to hear the message. Holds no STEM education or employment background.

…received master’s in journalism at the University of ______________ (also my undergrad alma mater), and teach journalism at _______University in _______. I previously taught high school and often think of my journalism as a form of teaching, by helping others understand science and medical research and by debunking (despite holding absolutely no skills or quals whatsoever) misinformation about vaccines, chemicals and other misunderstood topics. Biggest life qualifications were hiking and diving in Europe, getting married and having a child within the last two years.

…an inquisitive (non-degree holding) agnostic born in _____________ and living in _____________, with her nerdy husband, curious toddler daughter, infant son, and needy dog. Her interests and pastimes fluctuate wildly, but always consist of family, reading and writing, cheese, and the world of genetics/bioinformatics. Most significant publication is a polemic attacking a person they did not like.

…has been an adjunct professor (largest accomplishment) in the graduate Science, Health and Environmental Reporting program at _________ University for the past few years. A frequent lecturer and has appeared at the 92nd Street Y in New York, Yale University and New York University among many others. Was named a Fellow of the American Association for the Advancement of Science (AAAS) (a non-expert volunteer organization of non-scientist political activists) for the Section on General Interest in Science and Engineering. Holds no science nor engineering employment or degree history.

…previously, spent nearly 14 years at ________ in positions culminating as executive editor. Work in writing and overseeing articles about space topics helped garner that magazine the Space Foundation’s Public Outreach Award (Appeal to Authority Reach Around). Was Science Writer in Residence at the ___________. Chapter on science editing appears in A Field Guide for Science Writers. Former chair of Science Writers in New York and a member of the American Society of Magazine Editors and the Society of Environmental Journalists. Mostly ceremonial, low activity and high visibility accolade-infusing positions. Holds no science, engineering, environmental science, astrophysics positions, experience nor degrees.

In the end, there exists a distinct difference between a mom advocating on behalf of finding out why her children are chronically sick, and seeking to establish as sponsor, plurality under Ockham’s Razor – and an unqualified mom who pretends to represent broad sets of scientific knowledge and final conclusions, and attempts to squelch and bully the voices of those who have been, by a sufficient threshold of Ockham’s Razor evidence, arguably harmed. Science communication habitually evades ‘facts’ and ‘evidence’ in favor of social psychological manipulation, specifically because of an inability on the part of the participants therein to recognize what indeed is fact and evidence to begin with.

If, in similar shortfall to Kevin Folta, you cannot understand the difference between a sponsor seeking necessity research based upon direct observation science, and a pretender enforcing ‘correct’ ‘settled’ science through journalism, who is not even qualified to make such a determination – then you don’t understand the first thing about ethics, morality, logic, argument, skepticism and most especially the scientific method or science. You are ignorantly celebrating and enabling a cabal of writing, speaking malevolent idiots.

These people are not journalists, they are hired guns of propaganda. They are stupid, insensitive bullies, except where their progressive agenda tells them to feign compassion. Their inability to spot their role inside the game being played constitutes a key feature of what Nassim Taleb calls the Intellectual Yet Idiot class of science communicator. They have not been educated; they have been trained to do a job and perform a crony role. Which introduces the issue of what needs to change inside this training pipeline, in order to correct this enormous pathway of social damage.

What We Need and Deserve

These are abysmally poor, unqualified and telltale propaganda-laden and indoctrinated biographies. Common themes promoted by these authors include: identifying the bad guys first, identifying ‘pseudoscience’ immediately, identifying the ‘anti-GMO-science-technology’ among us, associative condemnation and strawman as ‘tin-foil hat’ types, plagiarizing pre-written propaganda, targeting working Americans, misandry and class hatred, liberal socialist politics and hatred of working moms & the middle class. Often crafting articles which leverage all this condemnation through employment of explanitude-based disciplines such as psychology, in pretense of being and doing science (there is a notorious #1 ranked social skeptic who is both a psychology Ph.D. and a science communicating journalist; examine the chart to the right and take a hint here).³

These are not the sharpest tools in the drawer. What is being exploited is the relative lack of aptitude (see the SAT by Selected Major chart to the right) and experience on the part of these celebrities; a gap of competence which affords crony entities the ability to craft, pass without scientist or peer input, and promulgate straight to truth, specific unchallenged agendas. The individuals sacrifice their integrity by taking celebrity and book deals as payment for their unethical service role. They become giddy over how many people they can harm, while at the same time deceiving as many people as possible into thinking that they represent science. It is the joy of magicianship and sleight-of-hand for the intelligent ones, and the heady rush of sudden fame for the not-so-bright ones. All of it is payment for surrendering the will and the critical mind, and regurgitating the correct things which they are handed.

We deserve better than this. Our journalism schools are key in this formula of weakness.

In particular, our journalism schools (to be fair, some of these science communicators above did not even attend journalism school) should prepare to deliver:

  1.  Better logic mastery, science & analytical aptitude
  2.  A keener understanding of the Scientific Method
  3.  A modern understanding of the Public Trust and accountability inside the context of a constitutional-rights driven free nation
  4.  A keener ability to discern between actual skepticism versus corporate or social doubt-ism/cynicism/profiteering/bullying
  5.  Ethical integrity to avoid groups who tout fostering their careers through compliant reporting & plagiarized regurgitation
  6.  Exposure to a major portion of the US Demographic, not simply their liberal arts college, fraternity/sorority and 4 months of partying in Europe and having babies.
  7.  Understanding that republishing prepackaged material/phrase-lifts/propaganda without citation (from any source) is plagiarism – even if they perceive it to be acceptable because the source wants the material spread widely. Sometimes we cite sources because the information is wrong, too, and we need to know who is crafting and spreading it.
  8.  Understanding The Art of Scientific Research
  9.  Spending significant time (2-4 years) serving in impoverished nations or in a mission-oriented field such as the military or an objective charity in a tough environment.
  10.  A background in diverse sets of interests other than homemaker puffery and liberal think-tank cocoons.
  11.  Two to four years of experience actually doing something other than being a celebrated journalist or academic journalist.

And here is a thought: it would also be nice if they actually were scientists, or even STEM graduates. This is what we deserve and should demand from our science-communicating journalists.

TES Signature

†  (Iggers, J.)  Iggers, Jeremy; "Good News, Bad News: Journalism Ethics and the Public Interest"; Westview Press (1998); abstract summary, PN4756.I34 1998.

¹  Gallup Survey: The Least-Trusted Jobs in America; Nov 26 – 29, 2012;

²  YouGov Survey: Trust in Journalists to Accurately Report Science; Dec 6 – 7, 2013;

³  Chariot Learning, Average SAT Score By Intended College Major; Mike Bergin; Nov 03, 2014;

January 9, 2017 | Agenda Propaganda, Institutional Mandates