A single observation does not necessarily constitute an instance of the pejorative descriptor ‘anecdote’. Not only do anecdotes constitute data, but one anecdote can serve to falsify the null hypothesis and settle a scientific question in short order. Such is the power of a single observation. Such is the power of skillfully wielding scientific inference. Fake skeptics seek to emasculate the power of the falsifying observation, at all costs.
It is incumbent upon the ethical skeptic – those of us who are researchers if you will, those who venerate science both as an objective set of methods and as an underlying philosophy – to understand the nature of anecdote and how the tool is correctly applied inside scientific inference. Anecdotes are not ‘Woo’, as most fake skeptics will imply through a couple of notorious memorized one-liners. Never mind what they say, nor what they might claim as a straw man of their intent; watch instead how they apply their supposed wisdom. You will observe such abuse of the concept to be most often the case. We must insist, to the theist and nihilist religious community of deniers, that inside the context of falsification/deduction in particular, a single observation does not constitute an instance of ‘anecdote’ (in the pejorative). Not only do anecdotes constitute data, but one anecdote can serve to falsify the Null (or even the null hypothesis) and settle the question in short order. Such is the power of a single observation.
To an ethical skeptic, inductive anecdotes may prove to be informative in nature if one gives structure to and catalogs them over time. Anecdotes which are falsifying/deductive in nature are not only immediately informative, but more importantly, probative. Probative with respect to the null. I call the inferential mode modus absens the ‘null’ because usually, in non-Bayesian styled deliberation, the null hypothesis – the notion that something is absent – is not actually a hypothesis at all. Rather, this species of idea constitutes simply a placeholder: the idea that something is not, until proved to be. And while this is a good common-sense structure for the resolution of a casual argument, it does not mean that one should therefore believe or accept the null, merely as an outcome of this artifice in common sense. In a way, deflecting observations by calling them ‘anecdote’ is a method of believing the null, and not in actuality conducting science nor critical thinking. However, this is the reality we face with unethical skeptics today. The tyranny of the religious default Null.
The least scientific thing a person can do, is to believe the null hypothesis.
Wolfinger’s Misquote
/philosophy : skepticism : pseudoscience : apothegm/ : you may have heard the phrase ‘the plural of anecdote is not data’. It turns out that this is a misquote. The original aphorism, by the political scientist Ray Wolfinger, was just the opposite: ‘The plural of anecdote is data’. The only thing worse than the surrendered value (as opposed to collected value, in science) of an anecdote is the incurred bias of ignoring anecdotes altogether. This is a method of pseudoscience.
Our opponents elevate the scientific status of a typical placeholder Null (such-and-such does not exist) and pretend that the idea, 1. actually possesses a scientific definition and 2. bears consensus acceptance among scientists. These constitute the first of their many magician’s tricks, which those who do not understand the context of inference fall for, over and over. Even scientists will fall for this ole’ one-two, so it is understandable why journalists and science communicators will as well. But anecdotes are science, when gathered under the disciplined structure of Observation (the first step of the scientific method). Below we differentiate four contexts of the single observation – two inductive and two deductive inference contexts – only one of which fits the semantics regarding ‘anecdote’ which is exploited by fake skeptics.
Inductive Anecdote
Inductive inference is the context wherein a supporting case or story can be purely anecdotal (‘The plural of anecdote is not data’). This apothegm is not a logical truth: it applies to certain cases of induction, but it does not apply universally.
Null: Dimmer switches do not cause house fires to any greater degree than do normal On/Off flip switches.
Inference Context 1 – Inductive Data Anecdote: My neighbor had dimmer switched lights and they caused a fire in his house.
Inference Context 2 – Mere Anecdote (Appeal to Ignorance): My neighbor had dimmer switched lights and they never had a fire in their house.
Hence we have Wolfinger’s Inductive Paradox.
Wolfinger’s Inductive Paradox
/philosophy : science : data collection : agency/ : an ‘anecdote’ to the modus praesens (observation or case which supports an objective presence of a state or object) constitutes data, while an anecdote to the modus absens (observation supporting an appeal to ignorance claim that a state or object does not exist) is merely an anecdote. One’s refusal to collect or document the former, does not constitute skepticism. Relates to Hempel’s Paradox.
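The asymmetry the paradox describes can be made concrete with a toy likelihood-ratio sketch. The fire rates below are assumptions chosen purely for illustration: under them, a single presence observation (a fire) carries real evidential weight, while an absence observation (no fire) carries almost none.

```python
# Toy numbers (assumptions, for illustration only): annual house-fire rates
# under the null (dimmers no riskier) and the alternative (dimmers 5x riskier).
p_null = 0.001
p_alt = 0.005

# Likelihood ratio carried by each kind of single observation:
lr_fire = p_alt / p_null                 # "my neighbor's dimmer caused a fire"
lr_no_fire = (1 - p_alt) / (1 - p_null)  # "my neighbor never had a fire"

print(f"fire anecdote LR:    {lr_fire:.2f}")     # ~5.00  -- genuine data
print(f"no-fire anecdote LR: {lr_no_fire:.3f}")  # ~0.996 -- nearly uninformative
```

The modus praesens observation shifts the odds fivefold; the modus absens observation shifts them by less than half a percent – one is data, the other merely a story.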
Finally, we have the instance wherein we step out of inductive inference, and into the stronger probative nature of deduction and falsification. In this context an anecdote is almost always probative. As in the case of Wolfinger’s Inductive Paradox above, one’s refusal to collect or document such data, does not constitute skepticism.
Deductive or Falsifying Anecdote
Deductive inference, leading also to falsification (‘The plural of anecdote is data’). Even the singular of anecdote is data, under the right condition of inference.
Null: There is no such thing as a dimmer switch.
Inference Context 3 – Deductive Anecdote: I saw a dimmer switch in the hardware store and took a picture of it.
Inference Context 4 – Falsifying Anecdote: An electrician came and installed a dimmer switch into my house.
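The logic of this deductive/falsifying context can be sketched in a few lines – a toy model, assuming each observation is simply a flag marking a verified sighting:

```python
def absence_null_falsified(observations):
    """A universal-absence null ('there is no such thing as X') is falsified
    by a single verified sighting, regardless of how many non-sightings
    precede it. True in the list = one verified observation of X."""
    return any(observations)

# Ten thousand non-sightings establish nothing about the null...
print(absence_null_falsified([False] * 10_000))           # False -- null survives
# ...while one probative observation settles the question outright.
print(absence_null_falsified([False] * 10_000 + [True]))  # True  -- null falsified
```

This is the power of the single observation inside deduction: the null is a universal claim, so one counterexample ends the deliberation, no matter how large the pile of absence-anecdotes beneath it.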
For example, what occurs when one accepts materialism as an a priori truth pertains to those who insert that religious agency between steps 2 and 3 above. They contend that dimmer switches do not exist, so therefore any photo of one necessarily has to be false. And of course, at any given time there is only one photo at all (all previous photos were dismissed earlier in similar exercises). Furthermore, they then forbid any professional electrician from installing any dimmer switches (lest they be subject to losing their license). In this way dimmer switches can never ‘exist’, and deniers can endlessly proclaim to non-electricians ‘you bear the burden of proof’ (see Proof Gaming) – from then on deeming all occurrences of step 2 to constitute lone cases of ‘anecdote’, while failing to distinguish between the inductive and deductive contexts therein.
Our allies and co-observers as ethical skeptics need bear the knowledge of philosophy of science (skepticism) sufficient to stand up and say, “No – this is wrong. What you are doing is pseudoscience”.
Unconscious bias occurs with everyone and inside most deliberation. Such innocent manifestation of bias, while important in focus, is not the First Duty of the ethical skeptic. The list of fallacies and crooked thinking below outlines something beyond just simple bias – something we call agency: the tricks of obfuscation of evidence for which ethical skeptics keep vigilant watch.
Michael Shermer has outlined, in his November 2018 editorial for Scientific American, a new fallacy of data which he calls the ‘Fallacy of Excluded Exceptions’. In this informal fallacy, evidence which does not serve to confirm one’s a priori conclusion is systematically eliminated or ignored, despite its potentially robust import. This is not a form of unconscious bias, but a more prevalent and dangerous mode of corrupted thinking which we at The Ethical Skeptic call ‘agency’. The First Duty of Ethical Skepticism is to oppose agency (not simply bias).
Fallacy of Excluded Exceptions
/philosophy : pseudoscience : data manipulation/ : a form of data skulpting in which a proponent bases a claim on an apparently compelling set of observations confirming an idea, yet chooses to ignore an equally robust set of examples of a disconfirming nature. One chisels away at, disqualifies or ignores large sets of observation which are not advantageous to the cause, resulting in only seeing what one sought to see to begin with.
“Excluded exceptions test the rule. Without them, science reverts to subjective speculation.” ~ Michael Shermer 1
Despite his long career inside skepticism, Michael is sadly far behind most ethical skeptics in his progression in understanding the tricks of fake epistemology and agency. That is because of his anchoring bias – he regards today’s ‘skepticism’ as representative of science and the scientific method, as well as of all that is good in data reduction and syllogism. He is blinded by the old influences of bandwagon hype. Influences which, as master, he must still serve today, and which serve to agency-imbue his opinions. The celebrity conflict of interest.
Agency and bias are two different things. Ironically, agency can even tender the appearance of mitigating bias, as a method of its very insistence.
Well, we at The Ethical Skeptic have been examining tricks of data manipulation and agency for decades, and already possessed a name for this fallacy Michael has been compelled to create from necessity on his own – precisely because it is a very common trick we have observed on the part of fake skeptics to begin with. Michael’s entrenchment inside social skepticism is the very reason why he could not see this fallacy until now – he is undergoing skeptive dissonance and is beginning to spot fallacies of agency his cronies have been committing for decades. Fallacies which he perceives to be ‘new’. Congratulations Michael, you are repenting. The next step is to go out and assist those your cronies and sycophants have harmed in the past through fake skepticism. Help them develop their immature constructs into hypotheses with mechanism, help them with the scientific method, help them with the standards of how to collect and reduce data and argument. Drop the shtick of a priori declarations of ‘you are full of baloney’ and help them go and find that out for themselves. Maybe. 2
Agency versus Bias
Bias is the Titanic’s habit of failing to examine its iceberg alerts.
Agency is the Titanic telling the ship issuing iceberg alerts to ‘shut up’.
If all we suffered from was mere bias, things might even work out fine.
But reality is that we are victims of agency, not bias.
Just maybe as well, in embarking upon such a journey, you will find – as I did – that you really did not understand the world all that well, nor had things as figured out as you had assumed. Your club might have served as a bit of a cocoon, if you will. Maybe in this journey you have so flippantly stumbled upon, you will observe as ‘new’ a fallacy that ethical skeptics have identified for a long time now; one which your cabal has routinely ignored.
Evidence Sculpting (Cherry Sorting)
/philosophy : pseudoscience : data manipulation/ : has more evidence been culled from the field of consideration for this idea, than has been retained? Has the evidence been sculpted to fit the idea, rather than the converse?
Skulptur Mechanism – the pseudoscientific method of treating evidence as a work of sculpture. Methodical inverse negation techniques employed to dismiss data, block research, obfuscate science and constrain ideas such that what remains is the conclusion one sought in the first place. A common tactic of those who boast of all their thoughts being ‘evidence based’. The tendency to view a logical razor as a device which is employed to ‘slice off’ unwanted data (evidence sculpting tool), rather than as a cutting tool (pharmacist’s cutting and partitioning razor) which divides philosophically valid and relevant constructs from their converse.
Your next assignment Michael, should you choose to accept it, is to learn about how agency promotes specific hypotheses through the targeting of all others (from The Tower of Wrong: The Art of Professional Lying):
Embargo Hypothesis (Hξ)
/philosophy : pseudoskepticism/ : was the science terminated years ago, in the midst of large-impact questions of a critical nature which still remain unanswered? Is such research now considered ‘anti-science’ or ‘pseudoscience’? Is there enormous social pressure to not even ask questions inside the subject? Is mocking and derision high – curiously in excess of what the subject should merit?
Entscheiden Mechanism – the pseudoscientific or tyrannical approach of, when faced with epistemology which is heading in an undesired direction, artificially declaring under a condition of praedicate evidentia, the science as ‘settled’ and all opposing ideas, anti-science, credulity and pseudoscience.
But Michael, as you begin to spot agency inside purported processes of epistemology, we have to warn you, there is more – oh, so much more which you do not know. Let’s take a brief look shall we?
Agency as it Pertains to Evidence and Data Integrity
So, in an effort to accelerate Michael’s walk through the magical wonderland of social skepticism, and how it skillfully enforces conformance upon us all, let us examine the following. The fallacies, modes of agency and methods of crooked thinking below relate to manipulations of data which are prejudices, and not mere unconscious biases – such as anchoring bias, wherein one adopts a position overly influenced by one’s starting point or by the first information to arrive. One may hold such a bias, but at least it is somewhat innocent in its genesis, i.e. not introduced by agency. Prejudicial actions in the handling and reduction of evidence and data are the preeminent hint of the presence of agency, and the first things which the ethical skeptic should look out for inside a claim, denial, mocking or argument.
Unconscious bias happens with everyone, but the list of fallacies and crooked thinking below outlines something more than simple bias. They involve processes of pseudo-induction, panduction, abduction and pseudo-deduction, along with the desire to dissemble the contribution of agency. You can find these, along with agency-independent and unconscious biases, all defined at The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data.
And of course, all of these fallacies, biases, modes of agency and crooked thinking – along with further modes of agency – can be found and defined at The Tree of Knowledge Obfuscation itself.
Intelligence professionals are trained to watch for the most reasonable terrifying answer – while science professionals are trained to develop the most comforting answer – that which is the simplest and most career-sustaining. Intelligence professionals regard no story as too odd, no datum as too small – and for good reason. These are the details which can make or break a case, or maybe save lives.
Fake skeptics rarely grasp that an anecdote is used to establish plurality, not a conclusion. Particularly in the circumstance where an anecdote is of possible probative value, screening it out prematurely is a method of pseudo-scientific data filtering (cherry sorting) – a facade of appearing smart and scientific, when nothing of the kind is even remotely true.
‘Anecdote’ is not a permissive argument affording one the luxury of dismissing any stand-alone observation they disdain. Such activity only serves to support a narrative. The opposite of anecdote after all, is narrative.
“Nothing is too small to know, and nothing too big to attempt.” Or so goes the utterance attributed to William Van Horne, namesake of the Van Horne Institute, a quasi-academic foundation focusing on infrastructure and information networks. In military intelligence – or more specifically inside an AOC (area of concern) or threat analysis – one learns early in their career an axiom regarding data: ‘Nothing is too small’. This simply means that counter-intelligence facades are often broken or betrayed by the smallest of details, collected through the most unlikely of HUMINT (Human Intelligence) or ELINT (Electronic Intelligence) channels. In this context, observations are not simply vetted by the presumed reliability of their source, but also by the probative value which they offer inside the question at hand. If one rejects highly probative observations simply because one has made an assumption as to their ‘reliability’, this is a practice of weak intelligence/science. It is no coincidence that the people who sincerely want to figure things out (intelligence agency professionals) maintain the most powerful processors and most comprehensive data marts of intelligence data in the world. Be wary of a skeptic who habitually rejects highly probative observations, ostensibly because of a question surrounding their ‘reliability’, and subsequently pretends that their remaining and preferred data set is not cherry picked. Such activity is data skulpting – in other words, pseudoscience.
Intelligence is the process of taking probative observations and making them reliable – not, taking reliable information and attempting to make it probative.
A sixth sigma set of observations, all multiplied or screened by a SWAG estimate of reliability, equals and always equals a SWAG set of observations.
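A minimal sketch of that arithmetic (all numbers below are assumptions chosen for illustration): measurements with six-sigma-grade precision, once screened by a reliability factor which is itself only a wild guess, inherit the spread of the guess, not of the data.

```python
measurements = [100.0, 100.1, 99.9, 100.05]   # tight data: ~0.2% total spread
swag_low, swag_high = 0.4, 0.8                # SWAG reliability: a 2x guess

weighted_low = [m * swag_low for m in measurements]
weighted_high = [m * swag_high for m in measurements]

# The spread of the weighted set is dominated by the guess, not the data.
spread = (max(weighted_high) - min(weighted_low)) / min(weighted_low)
print(f"relative spread after SWAG screening: {spread:.0%}")  # ~100%
```

The underlying observations agreed to a fifth of a percent; after the SWAG screen, the result spans a factor of two. The output is only ever as good as the guess.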
Should not science function based upon the same principles? I mean, if we really wanted to know things, why would science not adopt the methods, structures, and lingo of our most advanced intelligence agencies? This is the definition of intelligence after all – the ‘means of discovery’ – and a particular reason why I include Observation and Intelligence Aggregation, right along with establishing Necessity, as the first three steps of the scientific method. Or could it be that certain syndicates do not want specific things to be known in the first place? That the ‘science’ of these syndicates is less akin to an intelligence agency, and more akin to a club – activist-endorsing and virtue signalling over corporate and Marxist agendas. In this article, we delve more specifically into the principle of anecdote, and how this rhetorical football plays into just such a method of deception.
The plural of anecdote, is data.
~ Raymond Wolfinger, Berkeley Political Scientist, in response to a student's dismissal of a simple factual statement by means of the pejorative categorization 'anecdote'1
The opposite of anecdote, is narrative.
Anecdote to the affirmation is data, while anecdote to the absence is merely a story. Deception habitually conflates the two.
A dismissal of ‘anecdote’, is every bit the cherry picking which its accuser decries.
~ The Ethical Skeptic
One should take note that RationalWiki incorrectly frames the meaning of anecdote as the “use of one or more stories, in order to draw a conclusion.”2 This is a purposefully pedestrian and narrative-friendly framing of the definition of anecdote. In reality, an anecdote is used to establish plurality – not a conclusion. One specific function of social skepticism is to prevent plurality from broaching critical topics, at any cost. No surprise here that they would also therefore seek to cite anecdote as being tantamount to an attempt at proof. In such a manner, all disdained data can be dismissed at its very inception – through a rhetorical artifice alone. Plurality can never become necessary under Ockham’s Razor.
Had the author of RationalWiki ever prosecuted an intelligence scenario, or directed a research lab (both of which I have done), they might have understood the difference between ‘conclusion’ and ‘plurality’ inside the scientific method. But this definition was easy, simpleton, and convenient to a narrative-building methodology. Sadly, all one has to do in this day and age of narrative and identity is declare one’s self a skeptic, and start spouting off stuff one has heard in the form of one-liners. Below one will observe an example where celebrity skeptic Michael Shermer also fails to grasp this important discipline of skepticism and science (see The Real Ockham’s Razor).
But first, let us examine the definition of the term anecdote itself. Google Dictionary defines anecdote as the following (time graph and definition, both from Google Dictionary):3
Anecdote
/noun : communication : late 17th century: from French, or via modern Latin from Greek anekdota ‘things unpublished,’ from an- ‘not’ + ekdotos, from ekdidōnai ‘publish’/ : a short and amusing or interesting story about a real incident or person. An account regarded as unreliable or hearsay.
Anecdote Error
/philosophy : pseudoscience : invalid inference/ : the abuse of anecdote in order to squelch ideas and panduct an entire realm of ideas. This comes in two forms:
Type I – a refusal to follow up on an observation or replicate an experiment, does not relegate the data involved to an instance of anecdote.
Type II – an anecdote cannot be employed to force a conclusion, such as using it as an example to condemn a group of persons or topics; an anecdote can however be employed to introduce Ockham’s Razor plurality. This is a critical distinction which social skeptics conveniently neither realize nor employ.
Under this context of definition, below is a generally accepted footprint of usage of the term anecdote. One should take note of the luxuriously accommodating equivocal reach of this word, ranging very conveniently from being defined as a ‘real incident’ all the way to a ‘lie’.
Fake skeptics wallow and thrive in broadly equivocal terminology like this. Indeed, this shift in definition has been purposeful over time. Notice how the pejorative use of the term came into popularity just as the media began to proliferate forbidden ideas and was no longer controllable by the agency of social skepticism. Social Skeptics were enlisted to be the smart but dumb (Nassim Taleb’s ‘Intellectual-Yet-Idiot’ class) players who helped craft the new methods of thought enforcement, the net effect of which you may observe on the graph below:
Cherry Sorting: A Sophist’s Method of Screening Out Data they Find ‘Not Reliable’
Set aside, of course, the context of rumors and unconfirmed stories. Such things indeed constitute anecdote under the broad equivocal footprint of the term; however, this is not what is typically targeted when one screens out observations by means of the fallacious use of the term. Being terrified of an answer (in both its existential and career-impact contexts) can serve to effect a bias – an imbalance which is critical in the a priori or premature assessment of an observation as being ‘unreliable’. Even the most honest of scientists can succumb to such temptation. Are you as a scientist going to include observations which will serve to show that vaccines are causing more human harm than assumed? Hell no, because you will lose your hard-earned career in science. Thus, crafting studies which employ high-reliability/low-probative-value observations is paramount under such a reality. If you ensure that your data is reliable and that your databases are intimidatingly large (a meta-study for instance), then you are sure to appear scientific – even if you have never once spoken to a stakeholder inside the subject in question, nor conducted a single experimental observation.
Developing information of a probative nature naturally involves more expense, since more effort is required in its collection – which also frequently happens to render it more informative. Such information demands a greater level of subject matter expertise, effort, and often a set of logical skills as well. To retreat to a solely analytical position towards information, one which avoids such risk/expertise/cost, rather than groom probative information and seek to mitigate its risk through consilience efforts and direct observation – such is the means by which society actively crafts false knowledge on a grand scale.
Intelligence professionals are trained to watch for the most reasonable terrifying answer – while science professionals are trained to develop the most comforting answer – that which is the simplest and most career-sustaining.
The bottom line is this: intelligence professionals – those who truly seek the answers from the bottom of their very being – are promoted in their careers for regarding data differently than does weak scientific study, or than do pseudo-skeptics. The issue, as these professionals understand, is NOT that anecdotes are unreliable. All questionable, orphan or originating observations are ‘unreliable’ to varying degrees, until brought to bear against other consilience/concomitance – other data which helps corroborate or support their indication. This is the very nature of intelligence: working to increase the reliability and informative (probative) nature of your data.
One should guard against the circumstance wherein an appeal to authority de facto decides any assumption of reliability and informative ability on the part of an observation. One falsifying anecdote may be unreliable, but it is potentially also highly informative. Ethical skepticism involves precisely the art of effecting an increase in the reliability of one’s best or most highly probative data. Of course I do not believe with credulity in aliens visiting our Earth; however, if we habitually screen out all the millions of qualified reports of solid, clear and intelligent craft of extraordinary capability, flying around in our oceans and skies – we are NOT doing science, we are NOT doing skepticism. In such a play we become merely anecdote-data-skulpting pseudo-scientists. Furthermore, if we regard the people who are conducting such probative database assembly to be ‘pseudo-scientists’ – we are exercising nothing even remotely associated with skepticism.
A 70% bootstrap deductive observation is worth ten 90% bootstrap inductive ones… If one examines only 90%-plus reliable data, one will merely confirm what we already know. Reliability can be deceiving when assumed a priori. It is logical inference through repeated observation, and not the bias of authority we bring to the equation, from which any assessment of reliability must accrue.
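One way to make that trade-off concrete is a toy Bayes-factor model – the likelihood ratios and reliability figures below are assumptions chosen only to illustrate the point. An observation that is genuine with probability r carries its full likelihood ratio; otherwise it is noise and carries no evidence at all.

```python
def effective_lr(lr, reliability):
    # With probability `reliability` the observation is genuine and carries
    # its full likelihood ratio; otherwise it is noise (LR = 1, no evidence).
    return reliability * lr + (1 - reliability) * 1.0

# One deductive observation: highly probative (LR 50) but only 70% reliable.
one_deductive = effective_lr(50.0, 0.70)
# Ten inductive observations: 90% reliable but weakly probative (LR 1.3 each).
ten_inductive = effective_lr(1.3, 0.90) ** 10

print(f"one 70% deductive observation:  LR = {one_deductive:.1f}")  # ~35.3
print(f"ten 90% inductive observations: LR = {ten_inductive:.1f}")  # ~10.9
```

Under these toy numbers the single less-reliable but probative observation outweighs the ten reliable but weakly informative ones by better than three to one – probative value, not assumed reliability, drives the inference.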
The habit of first assaying the reliability of an observation constitutes what is known inside the philosophy of skepticism as a ‘streetlight effect’ error, regardless of any presence or absence of bias. In intelligence, and in reality, the collection of probative evidence is always the first priority. No inference in science hinges on the accuracy/bias status of one point of evidence alone. Bias always exists. The key is to filter such influence out through accrued observation, not through another biased subjective assessment. Never prematurely toss out data you don’t like, or which just does not seem to fit the understanding. The inference itself is that which bears the critical path of establishing reliability. Reliability and probative value in data are established after the fact, through comparison and re-observation – not presumed during data collection. Assuming such things early in the data collection process constitutes the ultimate form of subjective cherry picking: cherry sorting. With cherry sorting, one will be sure to find the thing one sought to find in the first place, or to confirm the common understanding. Research awards will be granted. Careers will be boosted.
[As a note: one of my intelligence professional associates who read this article emailed me and reminded me of the “I” community’s mission to focus on ‘capability and intent’, in addition to the probative and reliable factors inside observation gathering. In contrast, capability and intent are assumptions which social skeptics bring to the dance already formed – a surreptitious method of already forming the answer as well. Capability and intent assumptions help the social skeptic to justify their cherry sorting methodologies. It is a form of conspiracy theory spinning being applied inside science. This is why ethical skepticism, despite all temptation to the contrary, must always be cautious of ‘doubt’ – doubt is a martial art which, if not applied by the most skilled of practitioners, serves only to harm the very goals of insight and knowledge development. Or, conversely, it is a very effective weapon for those who desire to enforce a specific outcome/conclusion.]
Be cautious therefore when ‘reliability’ constitutes a red herring, employed during the sponsorship stage of the scientific method. With today’s information storage/handling capacity and the relatively inexpensive nature of our computational systems, there exists no excuse for tossing out data for any reason – especially during the observation, intelligence and necessity steps of the scientific method. We might possibly apply a plug confidence factor to data, so long as all data can be graded on such a scale; otherwise an effort at evaluating reliability is a useless and symbolic gesture of virtue. Three errors which data professionals make inside Anecdote Data Skulpting – errors which involve this misunderstanding of the role of anecdote, and which are promoted and taught by social skeptics today – follow:
Anecdote Data Skulpting (Cherry Sorting)
/philosophy : pseudo-science : bias : data collection : filtering error/ : when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data, based upon the a priori and often subjective claim that the observation was ‘not reliable’. This ignores the probative value of the observation, and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative and can be reduced only through statistical analytics – likely then only serving to prove what one was looking for in the first place (aka pseudo-theory).
‘Anecdote’ is not a permissive argument affording one the luxury of dismissing any stand alone observation they desire. This activity only serves to support a narrative. After all, the opposite of anecdote, is narrative.
1. Filtering – implying that information is too difficult to reduce/handle or to make reliable, is unnecessary, and/or is invalid(ated); and that therefore the filtering of erstwhile observations is warranted by means of the ‘skeptical’ principle of declaring ‘anecdote’.
Example: This is not the same as the need to filter information during disasters.4 During the acute rush-of-information stage inside a disaster, databases are often of little help – responses must be more timely than intelligence practices can deliver. But most of science – in fact pretty much all of it – does not face such an urgent response constraint, save for an Ebola epidemic or something of that nature. What we speak of here is where professionals purposely ignore data at the behest of ‘third party’ skeptics – and as a result, science is blinded and emasculated.
The discovery of penicillin was a mere anecdote. It took 14 years to convince the scientific community to even try the first test of the idea. Dr. Alexander Fleming’s accident, and the ensuing case study regarding the contamination of his Petri dishes with Penicillium mold, indeed constituted an observation of science, and not a mere ‘fantabulous story’ as the skeptics of the day contended.5 In similar fashion, the anecdotes around the relationship between H. pylori and peptic ulcers were rejected for far too long by a falsely-skeptical scientific community. The one-liner retort by social skeptics that ‘the science arrived right on time, after appropriate initial skepticism’ collapses to nothing but utter bullshit, once one researches the actual narratives being protected, the methods, and the familiar/notorious special interest (pharmaceutical) clubs involved.
Another example of anecdote data sculpting can be found here, as portrayed in celebrity skeptic Michael Shermer’s treatise on the conclusions he claims science has made about the afterlife and the concept of an enduring soul-identity. Never mind that the title implies that the book is about the ‘search’ (of which there has been paltry little) – rest assured that the book is only about the conclusion: his religion of nihilism.
In this demonstration of the martial art of denial, Michael exhibits the foible of data sculpting, by eliminating 99% of the observational database – the things witnessed by you, nurses, doctors, your family and friends – through regarding every bit of it as constituting dismissible ‘anecdote’. He retains for examination only the very thin margin of ‘reliable’ data, which he considers to be scientific. The problem is that the data studies he touts are not probative – rather they are merely mildly suggestive forms of abductive reasoning, and not bases for actual scientific inference. Convenient rationalizing which just happens to support his null hypothesis – which just happens to be his religious belief as well. As an ignostic atheist and evolutionist, I also reject this form of pseudoscience.
For example, in the book Michael makes the claim that “science tells us that all three of these characteristics [of self and identity] are illusions.” Science says no such thing. Only nihilists and social skeptics make this claim – the majority of scientists do not support this notion at all.6
If one were to actually read Shermer’s famously touted Libet and Haynes studies on free will and identity, one would find that the authors claim their studies to be only mildly suggestive at best – in need of further development. Michael employs this instead as an appeal to authority. This is dishonest. The studies are good, but at very best they are merely inductive. The difference here being, ethical skeptics do not reject material monist studies; nihilists however, through their habit of rejecting near-death studies and through ample hand waving, enforce a single-answer echo chamber and pseudoscience. Through cherry sorting during the data collection process, they force a single answer to conclusion.
As well, this boastful conclusion requires one to ignore, and artificially dismiss or filter out as ‘anecdote’, specific NDE and other studies which falsify or countermand Shermer’s fatal claim to science.7
Yes, these case studies are difficult to replicate – but they were well conducted, are well documented, and were indeed falsifying (the strongest and most probative basis for inference). And they are science. So Shermer’s claim that ‘science tells us’ is simply a boast. Dismissing these types of studies because they cannot be replicated, or because of a Truzzi Fallacy of plausible deniability, or because they ‘lacked tight protocols’ (a red herring foisted by Steven Novella) – is the very essence of the cherry sorting we decry in this article: dismissing probative observations in favor of ‘reliable’ sources. In the end, all this pretend science amounts to nothing but a SWAG in terms of accuracy of conclusion. Something real skeptics understand, but fake skeptics never seem to grasp.
Under any circumstance, certainly not a sound basis for a form of modus ponens assertion containing the words ‘science tells us’. Beware when a person denies, based merely upon such a boast of authority.
Moreover, one must pretend that the majority of scientists really do not count in this enormous boast as to what ‘science tells us’. In Anthropogenic Global Warming science, the majority opinion stands tantamount to a final conclusion. In the case of material monism however, suddenly scientific majorities no longer count, because a couple of celebrity skeptics do not favor the ‘wrong’ answer. Do we see the intimidation and influencing game evolving here? The graph below, from a Pew Research 2009 survey of scientists, illustrates that in actuality the majority of scientists believe in a ‘god’ concept. That stands as a pretty good indicator of the identity-soul construct itself, as many people believe in a soul/spirit-identity, but not necessarily in the gods of our major religions. I myself fall inside the 41% below (although I do not carry a ‘belief’, rather simply a ‘lack of allow-for’), and I disagree with Shermer’s above claim.8
Pew Research: Scientists and Belief, November 5, 2009
In similar fashion, using habits one can begin to observe and document, fake skeptics block millions of public observations as ‘anecdotes’; for instance, regarding grain toxicity, skin inflammation, chronic fatigue syndrome, autism, etc., through either calling those observers liars, or by declaring that ‘it doesn’t prove anything’. Of course they do not prove anything. That is not the step of science these sponsors are asking be completed in the first place.
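The filtering error described above can be sketched numerically. The following is a minimal, illustrative simulation (the log entries, counts, and `min_reports` threshold are all invented for the sketch, not drawn from any study): a deductive, falsifying observation survives in the raw log, but an a priori ‘anecdote’ filter that discards scarcely-reported observation types silently confirms the null.

```python
from collections import Counter

# Hypothetical field log: 197 corroborated observations, plus 3
# stand-alone sightings of the instance that falsifies the null
# ('all swans are white' / 'black swans are absent').
observations = ["white swan"] * 197 + ["black swan"] * 3

def filter_anecdotes(log, min_reports=5):
    """Pseudo-skeptical filter: any observation type reported fewer
    than min_reports times is dismissed a priori as 'anecdote',
    regardless of its probative (falsifying) value."""
    counts = Counter(log)
    return [obs for obs in log if counts[obs] >= min_reports]

kept = filter_anecdotes(observations)

# Deduction: a single 'black swan' observation falsifies the null.
print("null falsified (raw log):     ", "black swan" in observations)  # True
print("null falsified (filtered log):", "black swan" in kept)          # False
```

Note that the filter never evaluates the content of the falsifying observations at all – only their count – which is precisely the ‘reliability’ screen the article describes: the surviving dataset can only ever re-confirm what was already abundant.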
2. Complexifuscating – expanding the information around a single datum of observation such that it evolves into an enormously complicated claim, or a story expanded to the point of straw man, or the adding of a piece of associatively condemning information. These three elements in bold are all key signature traits which ethical skeptics examine in order to spot the tradecraft of fake skeptics: observation versus claim blurring, straw man fallacy, and associative condemnation.
Example: Michael Shermer regularly dismisses as ‘anecdote’ instances wherein citizen observations run afoul of the corporate interests of his clients. This is a common habit of Steven Novella as well. The sole goal involved (the opposite of anecdote is narrative) is to obfuscate liability on the part of their pharmaceutical, agri-giant or corporate clients. This is why they run to the media with such ill-formed and premature conclusions as the ‘truth of science’. In the example below, Michael condemned an individual’s medical case report – rather than treating such reports as cases for an argument of plurality and study sponsorship (there are literally hundreds of thousands of similar cases). Here he converts it into a stand-alone complex claim, contending ‘proof of guidelines for nutrition’ (dismissed by cherry sorting), and associatively condemns it by leveraging social skeptic bandwagon aversion to the ‘Paleo Diet’. I assure you, that if the man in the case study below had lost weight from being a vegan, Michael would never have said a word. All we have to do is replicate this method, straw-man other similar reports and continually reject them as anecdote – and we will end with the de facto scientific conclusion we sought at the outset.
Of course I am not going to adopt the Paleo Diet as authority from this claim alone – but neither have I yet seen any good comparative cohort studies on the matter. Why? Because it has become a career-killer issue – by the very example of celebrity menacing provided below – pretending to champion science, but in reality championing political and corporate goals:
This is not skepticism in the least. It is every bit the advocacy which could also be wound up in the man’s motivation to portray his case. The simple fact is, if one can set aside one’s aversion to cataloging information from the unwashed and non-academic, that it is exactly the stories of family maladies, personal struggles, and odd and chronic diseases which we should be gathering as intelligence – if indeed the story is as ‘complicated’ as Michael Shermer projects here. These case stories help us rank our reduction hierarchy and sets of alternatives, afford us intelligence data (yes, the plurality of anecdote is indeed data) on the right questions to ask – and inevitably will lead us to impactful discoveries which we could never capture under linear, incremental and closed, appeal-to-authority-only science – science methods inappropriately applied to asymmetric, complex systems.
An experiment is never a failure solely because it fails to achieve predicted results. An experiment is a failure only when it also fails adequately to test the hypothesis in question, when the data it produces don’t prove anything one way or another.
~ Robert Pirsig, American writer, philosopher and author of Zen and the Art of Motorcycle Maintenance: An Inquiry into Values (1974)
In similar regard, and as part of a complexifuscation strategy in itself, Michael Shermer in the tweet above misses the true definition of anecdote, and instead applies it as a pejorative inside a scientific context. That pejorative meaning: a case which fails to shed light on the idea in question – and not the distinguishing criterion that the case ‘stands alone’, lacking sponsorship by science. All data can easily be made to appear ‘stand-alone’, if one’s club desires this to be so.
3. Over Playing/Streetlight Effect – to assume that, once one has bothered to keep/analyze data, one now holds some authoritative piece of ‘finished science’. The process of taking a sponsorship level of reliable data, and in absence of any real probative value, declaring acceptance and/or peer review success. A form of taking the easy (called ‘reliable’) route to investigation.
Streetlight Effect
/philosophy : science : observation intelligence/ : a type of observational bias which occurs when people only search for something where it is easiest to look.
Example: A study in the March 2017 Journal of Pediatrics, as spun by narrative promoter CNN,9 incorrectly draws grand conclusions from data on 3- and 5-year-olds, derived from questionnaires distributed to cohorts of breast-feeding and bottle-feeding Irish mothers. While useful, the study is by no means the conclusive argument which CNN touts it as being (no surprise here); it is rendered vulnerable to selection, anchoring, and inclusion biases.
This is an example of an instance wherein the reputed reliability of the traditional observation method was regarded as high, but the actual probative value of the observations themselves was low. We opted for the easier route in collection and inference. This case was then further touted by several advocacy groups as constituting finished science (pseudo-theory) – an oft-practiced bias inside the data-only medical study cabal.
Not to mention the fact that other, more scientific alternative study methods, based upon more sound observational criteria, could have been employed instead. Data is easy now – so let’s exploit ‘easy’ to become poseurs at science, and use our interns to save money – all the while filtering out the instances where ‘easy’ data might serve to promote an idea we do not like.
The inference to be drawn from a single linear-inductive or shallow data study hinges upon the constraint, assumption, and inclusion biases/decisions made by the data gatherers and analysts involved in that study. Such activity is no better than an anecdote in reality, when one considers the simple notion that both cohorts are going to spin their preferred method of feeding, nay their child, as being/performing superior. All this is no better than anecdote itself – and as such is acceptable for inclusion to establish plurality, yes – but is by no means the basis for a conclusion.
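The streetlight effect itself reduces to simple arithmetic. Here is a minimal numeric sketch – the two strata and their effect rates are invented solely for illustration – showing how sampling only the easy-to-reach portion of a population yields a confidently wrong estimate of the whole:

```python
import statistics

# Hypothetical population in which the effect of interest (1 = present)
# concentrates in the hard-to-reach stratum. Rates are invented.
easy_to_observe = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]  # effect rate 20%
hard_to_observe = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # effect rate 80%

full_population = easy_to_observe + hard_to_observe

# Streetlight sampling: search only where it is easiest to look.
streetlight_sample = easy_to_observe

print(statistics.mean(full_population))     # 0.5 - the true rate
print(statistics.mean(streetlight_sample))  # 0.2 - the 'reliable' estimate
```

The ‘reliable’ sample is internally consistent and easily replicated – and still misses more than half of the true effect, because the decision of where to look was made before any observation occurred.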
Anecdote is not tantamount to conclusion. However, neither is its exclusion.
~ The Ethical Skeptic
And conclusions are what fake skeptics are all about. Easy denial-infused conclusions. Easily adopted and promulgated conclusions. Because, that is ‘skeptical’.
The Ethical Skeptic, “’Anecdote’ – The Cry of the Pseudo-Skeptic” The Ethical Skeptic, WordPress, 7 Jan 2018, Web; https://wp.me/p17q0e-6Yx