The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

The Spectrum of Evidence Manipulation

Unconscious bias occurs within everyone and inside most deliberation. Such innocent manifestation of bias, while important in focus, is not the First Duty of the ethical skeptic. The list of fallacies and crooked thinking below outlines something beyond mere bias – something we call agency. These are the tricks of evidence obfuscation for which ethical skeptics keep vigilant watch.

In his November 2018 editorial for Scientific American, Michael Shermer outlined a new fallacy of data which he calls the ‘Fallacy of Excluded Exceptions’. In this informal fallacy, evidence which does not serve to confirm one’s a priori conclusion is systematically eliminated or ignored, despite its potentially robust import. This is not a form of unconscious bias, but rather a more prevalent and dangerous mode of corrupted thinking which we at The Ethical Skeptic call ‘agency’. The First Duty of Ethical Skepticism is to oppose agency (not simply bias).

Fallacy of Excluded Exceptions

/philosophy : pseudoscience : data manipulation/ : a form of data skulpting in which a proponent bases a claim on an apparently compelling set of observations confirming an idea, yet chooses to ignore an equally robust set of disconfirming examples. One chisels away at, disqualifies or ignores large sets of observation which are not advantageous to the cause, resulting in one only seeing what one sought to see to begin with.

“Excluded exceptions test the rule. Without them, science reverts to subjective speculation.” ~ Michael Shermer 1

Despite his long career inside skepticism, Michael is sadly far behind most ethical skeptics in his progression in understanding the tricks of fake epistemology and agency. That is because of his anchoring bias: he regards today’s ‘skepticism’ as representative of science and the scientific method, as well as of all that is good in data reduction and syllogism. He is blinded by the old influences of bandwagon hype – influences which, as master, he must still serve today, and which serve to agency-imbue his opinions. The celebrity conflict of interest.

Agency and bias are two different things.
Ironically, agency can even tender the appearance of mitigating bias, as a method of its very insistence.

Well, we at The Ethical Skeptic have been examining tricks of data manipulation and agency for decades, and already possessed a name for this fallacy Michael has been compelled to create from necessity on his own – precisely because it is a very common trick we have observed on the part of fake skeptics to begin with. Michael’s entrenchment inside social skepticism is the very reason why he could not see this fallacy until now – he is undergoing skeptive dissonance and is beginning to spot fallacies of agency his cronies have been committing for decades. Fallacies which he perceives to be ‘new’. Congratulations Michael, you are repenting. The next step is to go out and assist those whom your cronies and sycophants have harmed in the past through fake skepticism. Help them develop their immature constructs into hypotheses with mechanism, help them with the scientific method, help them with the standards of how to collect and reduce data and argument. Drop the shtick of a priori declarations of ‘you are full of baloney’ and help them go and find that out for themselves. Maybe. 2

Bias is the Titanic’s habit of failing to examine its iceberg alerts.
Agency is the Titanic telling the ship issuing iceberg alerts to ‘shut up’.
If all we suffered from was mere bias, things might even work out fine.
But reality is that we are victims of agency, not bias.

Just maybe as well, embarking upon such a journey you will find as I did – that you really did not understand the world all that well, nor have things as figured out as you had assumed. Your club might have served as a bit of a cocoon, if you will. Maybe in this journey you have so flippantly stumbled upon, you will observe as ‘new’ a fallacy that ethical skeptics have identified for a long time now; one which your cabal has routinely ignored.

Evidence Sculpting (Cherry Sorting)

/philosophy : pseudoscience : data manipulation/ : has more evidence been culled from the field of consideration for this idea, than has been retained? Has the evidence been sculpted to fit the idea, rather than the converse?

Skulptur Mechanism – the pseudoscientific method of treating evidence as a work of sculpture. Methodical inverse negation techniques employed to dismiss data, block research, obfuscate science and constrain ideas such that what remains is the conclusion one sought in the first place. A common tactic of those who boast of all their thoughts being ‘evidence based’. The tendency to view a logical razor as a device which is employed to ‘slice off’ unwanted data (evidence sculpting tool), rather than as a cutting tool (pharmacist’s cutting and partitioning razor) which divides philosophically valid and relevant constructs from their converse.

Your next assignment Michael, should you choose to accept it, is to learn about how agency promotes specific hypotheses through the targeting of all others (from The Tower of Wrong: The Art of Professional Lying):

Embargo Hypothesis (Hξ)

/philosophy : pseudoskepticism/ : was the science terminated years ago, in the midst of large-impact questions of a critical nature which still remain unanswered? Is such research now considered ‘anti-science’ or ‘pseudoscience’? Is there enormous social pressure to not even ask questions inside the subject? Is mocking and derision high – curiously in excess of what the subject should merit?

Entscheiden Mechanism – the pseudoscientific or tyrannical approach of, when faced with epistemology which is heading in an undesired direction, artificially declaring under a condition of praedicate evidentia, the science as ‘settled’ and all opposing ideas, anti-science, credulity and pseudoscience.

But Michael, as you begin to spot agency inside purported processes of epistemology, we have to warn you, there is more – oh, so much more which you do not know. Let’s take a brief look shall we?

Agency as it Pertains to Evidence and Data Integrity

So, in an effort to accelerate Michael’s walk through the magical wonderland of social skepticism, and how it skillfully enforces conformance upon us all, let us examine the following. The fallacies, modes of agency and methods of crooked thinking below relate to manipulations of data which are prejudices, and not mere unconscious biases – such as in the case of anchoring bias, wherein one adopts a position overly influenced by one’s starting point or the first information which arrived. One may hold such a bias, but at least it is somewhat innocent in its genesis, i.e. not introduced by agency. Prejudicial actions in the handling and reduction of evidence and data are the preeminent hint of the presence of agency, and the first things which the ethical skeptic should look out for inside a claim, denial, mocking or argument.

Unconscious bias happens with everyone, but the list of fallacies and crooked thinking below outlines something more than simple bias. They involve processes of pseudo-induction, panduction, abduction and pseudo-deduction, along with the desire to dissemble the contribution of agency. You can find these, along with agency-independent and unconscious biases, all defined at The Tree of Knowledge Obfuscation: Misrepresentation of Evidence or Data.

And of course, all of these fallacies, biases, modes of agency and crooked thinking can be found and defined here:

And as well, more modes of agency can be found at The Tree of Knowledge Obfuscation itself.

The Tree of Knowledge Obfuscation

A compendium of fallacy and corrupted thought commonly employed inside Social Skepticism

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “The Spectrum of Evidence Manipulation” The Ethical Skeptic, WordPress, 2 Nov 2018; Web, https://wp.me/p17q0e-8wy

November 2, 2018 | Argument Fallacies, Tradecraft SSkepticism

The Fermi Paradox is Babysitting Rubbish

The curtains of paradox are woven of the fabric of ill assumption and intent – so goes this apothegm of ethical skepticism. Our assumptions surrounding the promotional rhetoric of the Fermi Paradox are immature and lacking in skeptical circumspection. The odds are that a civilization long determined not to exist will contact us well before we are equipped to resolve any of the questions raised inside the Fermi Paradox’s very posing to begin with.

Primate sign language existed long before we taught American Sign Language to Koko the Gorilla, and well before we even knew that many primates possessed both a vocal and gestural language all their own.1 2 It took us a mere 30,000 years to figure out that animals and plants on our very own planet communicate by means around which we did not bear the first inkling of awareness.3 How much more time will mankind need in order to understand a potential communication means, which is completely alien to anything we have ever experienced? How do we go about establishing a probability that we will be able to discern such an incongruous construct to our own forms of communication, and easily distinguish it from all forms of background noise inside our cosmos?

The reality concerning the rhetorical ‘Fermi Paradox’, as it is called, centers around the tenet of ethical skepticism which cites that our most dangerous weakness resides in the fact that we do not know what we do not know. We have signal-searched an infinitesimally small segment of our galaxy, and an even smaller segment of time.4 Yet, in our lack of wisdom we begin to demand of the cosmos, pseudo-reductionist answers which we are not prepared to accept in the least. The Fermi Paradox, along with its rhetorical resolution, stand exemplary of just such an exercise in pretend epistemology. The Fermi Paradox proceeds as such (from Wikipedia):5

The Fermi paradox is a conflict between arguments of scale and probability that seem to favor intelligent life being common in the universe, and a total lack of evidence of intelligent life having ever arisen anywhere other than on the Earth.

Totally. In a blog earlier this year for the SETI Institute, Seth Shostak, Senior Astronomer for the SETI Institute, opines that the Fermi Paradox, and in particular the ‘total lack of evidence of intelligent life having ever arisen anywhere‘ component itself, constitute ‘strong arguments’.6

The fact that aliens don’t seem to be walking our planet apparently implies that there are no extraterrestrials anywhere among the vast tracts of the Galaxy. Many researchers consider this to be a radical conclusion to draw from such a simple observation. Surely there is a straightforward explanation for what has become known as the Fermi Paradox. There must be some way to account for our apparent loneliness in a galaxy that we assume is filled with other clever beings.

A lot of folks have given this thought. The first thing they note is that the Fermi Paradox is a remarkably strong argument.

Please note that the idea that ‘they don’t exist’ as a scientific construct is simple, but it is not straightforward, as Shostak incorrectly claims. It is a highly feature-stacked alternative. His thoughts in this regard lack philosophical rigor. Moreover, the notion that the Fermi Paradox – and in particular its final component’s boast in the form of an appeal to ignorance – constitutes any form of ‘strong argument’ is laughable pseudoscience to say the least. An amazing level of arrogance. However, since Seth has made the claim, let’s examine for a moment the Fermi Paradox, in light of ethical skepticism’s elements which define the features of a strong argument.7

   The Fermi Paradox Fails Assessment by Features of a Strong Argument

Formal Strength

1.  Coherency – argument is expressed with elements, relationships, context, syntax and language which conveys actual probative information or definition

The Paradox is simple. But never confuse simple with the state of being coherent. This is a common tradecraft inside social skepticism. The statement bears no underpinning definition. It seeks a free pass from the perspective that everyone knows what ‘intelligent life’, ‘evidence’, ‘scale’ and ‘probability’ are, right? For me, as an ethical skeptic, I fear what I do not know, that I do not know. I possess no definition of how evidence of this type would appear, nor the specific measures of probability and scale entailed in such a search. I cannot presume such arrogance of knowledge on my part – and certainly cannot pretense a resolution in its offing, before I even start looking.

This is no different than saying ‘God is Love’. Simplicity does not convey coherence (in the eyes of an ethical skeptic) – as it can constitute merely a charade. The principle is not coherent because it has been issued as law before any of its foundational elements of soundness have been framed, much less measured. This is what renders the principle both an Einfach Mechanism as well as an Imposterlösung Mechanism. Incoherent pretenses of science. A null hypothesis which has not earned its mantle of venerability.

2.  Soundness – premises support or fail to adequately support its proposed conclusion

The premise that there exists ‘a total lack of evidence of intelligent life having ever arisen anywhere other than on the Earth‘ is unsound. Notice the prejudicial modifier ‘total’, employed in framing a supposed ‘lack of evidence’. Total lack, not just a lack, but a total lack – well now I believe you then. This is prejudicial language feeding into casuistry; it is agency – and does not stand as a derivation of science by any means. The term ‘common in the universe‘ is also not constrained, relegating the Paradox artificially into a divergent model structure. This as well renders the syllogism unsound.

A similar conjecture could be made in terms of a personal accusation in this form: There is absolutely a paradox surrounding your claim to never have beaten your wife, yet we can find absolutely no evidence whatsoever to support such a claim on your part that you have never beaten anyone’s wife.

3.  Formal Theory – strength and continuity of predicate and logical calculus (basis of formal fallacy)

The Formal Theory of the model consists simply of a rhetorical syllogism citing that there exists a paradox. What is being sold are the premises and not the inert syllogism itself: the statement ‘a total lack of evidence of intelligent life having ever arisen anywhere other than on the Earth‘. This is a syllogistic approach to reverse-selling an unfounded premise assumption without due rigor of science, also known as rhetoric.

4.  Inductive Strength – sufficiency of completeness and exacting inference which can be drawn (as a note, deductive inference when it exists, relates to 3. Formal Theory)

Since observation has not been completed in reality (see below, the Parce-Ames equation), there exists no inductive strength for the Fermi Paradox rhetorical argument.

Informal Strength

5.  Circumstantial Strength – validity of information elements comprised by the argument or premises

Since observation has not been completed in reality (see below, the Parce-Ames equation), there exists no factual strength for the Fermi Paradox rhetorical argument.

6.  Integrity of Form/Cogency – informal critique of expression, intent or circumstantial features

The Fermi Paradox as it is currently expressed (and there is a future in which it can exist as an actual scientific principle) bears several forms of informal fallacy:

It constitutes a non rectum agitur fallacy of science method
It stands as an appeal to authority
It stands as an appeal to ignorance
It is both an Einfach Mechanism as well as an Imposterlösung Mechanism (as there exists a very paltry set of ‘factualness’ surrounding this subject)
It is an Omega Hypothesis

To an ethical skeptic, the presence of technique involving reverse-selling a premise by means of structured rhetoric, inside a context of tilted language and equivocal definition, bearing a complete lack of soundness, and finally featuring the five fallacies at the end of this list, all collectively hint at one thing – A LIE.  Lies are sold socially, by social skeptics. With that idea in mind of a social construct being sold as science, examine for a moment many of the explanations for the Fermi Paradox (which also assume it to be a strong argument – which it is not), which include placing galaxy-inhabiting civilizations inside the same dilemma context in which mankind currently resides. Alien civilizations all blow themselves up with nukes at some point. They all die from carbon harvesting global warming. They pollute themselves into extinction. They all could not exit their solar system as intact beings. They could not solve mass/energy to speed of light relativism, etc. This habit of judging novel observations in light of current and popular controversies is an exercise in socially constructed science, called the familiar controversy bias. It is a key indicator that social skepticism, and not science, is attempting to sway the perception of the public at large inside an issue. A rather humorous example of such socially induced bias can be found here. And in light of the Fermi Paradox constituting rhetoric itself – such extrapolations of current controversies off of its presumptive base, form sort of a double layer cake of rhetoric. An amazing feat of organic untruth (lying with facts).

Familiar Controversy Bias

/philosophy : informal fallacy : habituation : bias/ : the tendency of individuals or researchers to frame explanations of observed phenomena in terms of mainstream current or popular controversies. The Fermi Paradox exists because aliens all eventually blow themselves up with nuclear weapons right after they discover radio. Venus is a case of run-away global warming from greenhouse gasses. Every hurricane since 1995 has been because of Republicans. Every disaster is God’s punishment for some recent thing a nation conducted. Mars is a case of ozone depletion at its worst, etc. Every paradox or novel observation is readily explainable in terms of a current popular or manufactured controversy. Similar to the anachronistic fallacy of judging past events in light of today’s mores or ethics.

Continuing with the Shostak blog article then, Seth whips out another extraordinary claim casually near its end.

Consequently, scientists in and out of the SETI community have conjured up other arguments to deal with the conflict between the idea that aliens should be everywhere and our failure (so far) to find them.

One does not have to ‘conjure up arguments’ to his ontological-preference Fermi Paradox resolution (advanced extraterrestrial civilizations do not exist at all), as Shostak claims8 – as such alternative arguments are mandated by Ockham’s Razor. They should be studied, and do not need to be forced or conjured in any way. The Paradox itself in no way suggests that there ‘aren’t any advanced extraterrestrial civilizations out there’, as Shostak all-too-eagerly opines as well. This idea of complete absence bears no scientific utility, neither as a construct nor as a null hypothesis. So why push it so hard? Perhaps, again in fine form of rhetoric – far away advanced extraterrestrial civilizations are not the target of this lazy abductive inference at all. Rather the real focus is on promoting the concept of non-existence of nearby advanced civilizations, or even visiting ones. A very familiar target set comprising a curiously large portion of Shostak’s vitriol, air time and professional focus.

These are all extraordinary claims, made by a person with zero evidence to support them – coupled with a high anchoring bias in squelching this issue before the public at large. Seth Shostak’s entire mind, purpose and reason for being, is based upon a psychological obsession with the dissemination of propaganda surrounding this issue. He was selected for the symbolic role and the suit he inhabits precisely because of these foibles. He is babysitting a symbolic issue, passing out pablum to the public and helping obfuscate the answer to a question which his sponsors do not want asked in the first place.

Remember, that in order to get the right answer, one need only ask a wrong question (see Interrogative Biasing: Asking the Wrong Question in Order to Derive the Right Answer). The Fermi Paradox is an example of just such a tactic of obfuscation. It is a religious action – stemming from a faith, which we will outline below.

The Faith of the Fermi Paradox

The fact that we accept the Fermi Paradox, given the following conditions, renders it more a statement of faith than a statement of science by any means.

Critical Path Logic: Fatal

The preferred rhetorical conclusion it entails employs the implicit concepts ‘alien’, ‘extraterrestrial’, ‘scale’, ‘probability’, ‘evidence’ and ‘life’ in rhetorical, prejudicial, incoherent and unsound syllogism. While the topic is valid, the question in its current form, is not.

However, let us presume this condition of fatality to be irrelevant, and continue down its logical critical path in reductionist series risk:

Reductionist Series Risk: Extremely High

α:  It presumes mankind to know the relevant range of what constitutes an inhabitant life form

β :  It presumes mankind to know the means by which inhabitants would ostensibly communicate

γ :  It presumes that all inhabitants are distant

δ :  It presumes that technology takes only a single path and direction similar to mankind’s own

ε :  It presumes that all communication media throughout the galaxy are similar to ours

ζ :  It presumes that we would recognize all forms of communication similar to ours

η :  It presumes that inhabitants would broadcast in omnidirectional and powerful EM signals or would be directing their EM energy straight toward us only

θ :  It presumes that inhabitants would broadcast ‘in the clear’ (i.e. unencrypted outside the cosmic background radiation)

ι :    It presumes that broadcasting inhabitants would have also presumed that no one was listening to them and/or would not care

κ :  It presumes that life can exist inside only our relative frame of reference/dimensionality

λ :  It presumes that we have examined a significant amount of space

μ :  It presumes that we rigorously know what space and time are, and its reductive inference upon radiation to be

ν :  It presumes that we have rigorously studied the timeframe in which an advanced civilization could broadcast during its development history

Finally we address key elements of the same logical critical path in macroscopic or parallel risk

Macroscopic Parallel Risk: Fatally High

  •  It presumes that mankind’s life originated only upon Earth through abiogenesis 
  •  It presumes that all intelligent life is noisy
  •  It presumes that all universal inhabitants are full time bound by our frame of reference/dimensionality
  •  It presumes that we have actually looked for inhabitant signals
  •  It presumes that humankind’s existence is lacking in agency
  •  It presumes that science/skepticism is lacking in agency
  •  It presumes that those who might have observed such communication in the past (distant or recent), would expose this circumstance
  •  It precludes the idea that a subset of mankind is already communicating

The Omega Hypothesis therefore – the idea being artificially enforced at all costs – is expressed no better than by Seth Shostak himself, its proponent and babysitter:9

“Some even insisted that there was no paradox at all: the reason we don’t see evidence of extraterrestrials is because there aren’t any.”

This is what is known inside ethical skepticism as babysitter rhetoric – false wisdom promulgated to stand in as a proxy for wisdom one desires to block. It is wishful thinking; pre-emptive thinking. The better-fit (least convoluted in necessary assumptions) explanation is, that ‘they’ are already aware of us, and have been for some time. This actually is a very elegant resolution for the Fermi Paradox at a local level, along with a battery of robust observations which lay fallow and unattended inside of so-called ‘fringe’ science – a hypothesis which requires significantly less gymnastics in denying data and twisting philosophy, than comparatively that required to enforce a single mandatory ‘nobody is home’ Omega Hypothesis. In this regard, I am not a proponent of enforcing one, Ockham’s Razor violating answer, over the condition of plurality which would dictate examining two possible solutions. I remain open to both ideas, as this is the ethic of skepticism – anathema to the cadre of pretenders who oppress this subject.

Babysitter

/philosophy : rhetoric : pseudoscience : science communicator/ : a celebrity or journalist who performs the critical tasks of agency inside a topic which is embargoed. The science communicator is assigned the responsibility of appeasing public curiosity surrounding an issue which the public is not authorized to research nor understand. A form of psychosis, exhibited by an individual who is a habituated organic liar. A prevarication specialist who spins a subset of fact, along with affectations of science, in such a way as to craft the appearance of truth – and further then, invests the sum of their life’s work into perpetuating or enforcing a surreptitious lie.

So let’s develop a kind of Reverse Drake Equation, why don’t we, based upon the above cited criteria of probability (the Greek-alphabet labelled items above, as opposed to the bullet-pointed items). This is a kind of risk chain assessment. Remember that risks in a risk chain in series are multiplicative as you add them into the mix. However, some of the above risks are in parallel, so they cannot be added into the series-based formula below (the Parce-Ames equation). The series-based risks are highlighted by their corresponding Greek alphabet characters above, and are assigned a serial factor used inside the formula below. Parallel risk elements cannot be added into a risk reductionist critical path (as they are subjective and duplicative in nature, and therefore not able to be employed inside a reductionist approach) and are accordingly excluded from the equation. Beware of those who intermix parallel and series risk arguments, as they are plural arguing – a sign of a lack in intellectual rigor, and a key sign of agency.

Parce-Ames Probability Dynamic

The Parce-Ames equation demonstrates the ludicrous folly of the Fermi Paradox. It serves to expand the dynamic regarding the probability that we would have detected even one (x) of the total population (N) of advanced civilizations (from the Drake Equation) in our galaxy by this moment in our history. The Parce-Ames Probability Dynamic therefore hinges upon fourteen low-confidence and independent input variables, as factored across 250 billion stars, all compounding risk in series according to the following equation:

P(N(x))  =  (N / 2.5 × 10¹¹)  ·  Σ(Ψ)  ·  α  ·  β  ·  γ  ·  δ  ·  ε  ·  ζ  ·  η  ·  θ  ·  ι  ·  κ  ·  λ  ·  μ  ·  ν

where:

P(N(x)) = the probability that we would have detected even one (x) of N advanced civilizations in our galaxy by this moment in our history

Σ(Ψ) = the sum total of all stars (Σ) studied by all observation apertures (Ψ) on Earth

and

α :  the chance that we grasp adequately what constitutes an inhabitant life form
β :  the chance that we have correctly assumed how inhabitants would ostensibly communicate
γ :  the chance that inhabitants are inside our search band
δ :  the chance that a given inhabitant technology takes a path and direction similar to mankind’s own
ε :  the chance that any communication is similar to ours
ζ :  the chance that we would recognize all forms of communication similar to ours
η :  the chance that inhabitants would broadcast in omnidirectional and powerful EM signals or would be directing their EM energy straight toward us only
θ :  the chance that inhabitants would broadcast ‘in the clear’ (i.e. unencrypted outside the cosmic background radiation)
ι :    the chance that broadcasting inhabitants would have also presumed that no one was listening to them and/or would not care
κ :  the chance that life can exist inside only our relative frame of reference/dimensionality
λ :  the chance percentage of signal-detectable space we have examined
μ :  the chance that we rigorously know what space and time are, and its reductive inference upon radiation to be
ν :  the chance that we have rigorously studied the timeframe in which an advanced civilization could/would broadcast in a detectable form during its development history
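To make the series-risk structure above concrete, here is a minimal numerical sketch. Every input below – the civilization count N, the number of stars surveyed, and each α through ν confidence – is an invented placeholder for illustration only, not a measured or claimed value:

```python
# Illustrative sketch of the Parce-Ames series-risk dynamic.
# Every input value here is a hypothetical placeholder; the point is
# the structure: thirteen independent confidences multiplied in series
# collapse the detection probability toward zero.

GALAXY_STARS = 2.5e11  # approximate star count of the Milky Way, as cited above


def parce_ames(n_civilizations, stars_surveyed, series_factors):
    """P(N(x)) = (N / 2.5e11) * stars surveyed * product of series risks."""
    p = (n_civilizations / GALAXY_STARS) * stars_surveyed
    for factor in series_factors:
        p *= factor  # series risks compound multiplicatively
    return p


# Thirteen generous coin-flip confidences standing in for alpha through nu:
factors = [0.5] * 13
N = 10_000     # hypothetical number of advanced civilizations (the Drake N)
stars = 1e4    # hypothetical count of stars meaningfully surveyed to date

p = parce_ames(N, stars, factors)
print(f"P(N(x)) ~= {p:.2e}")   # vanishingly small, even with generous inputs
```

Even granting a coin-flip confidence to every series factor – wildly generous by the standards argued above – the probability of having detected even one civilization remains on the order of 10⁻⁸, which is the point of the hot-tub recitation that follows.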

The journalists at Science News sum this equation dynamic up in one recitation:10

A new calculation shows that if space is an ocean, we’ve barely dipped in a toe. The volume of observable space combed so far for E.T. is comparable to searching the volume of a large hot tub for evidence of fish in Earth’s oceans, astronomer Jason Wright at Penn State and colleagues say in a paper posted online September 19 at arXiv.org.

Another way to put this, in terms of the discussion herein, is that the Parce-Ames equation always approaches zero unless a majority of its answers are ascertained and refined in accuracy by an observing civilization. We as a civilization are nowhere near the dynamic range of the Parce-Ames curve progression. We are in the first hot tub of ocean water, swimming around looking for fish and yelling ‘a total lack of any evidence!’ as bubbles come streaming up in sequence with our underwater declarations. And we have on our smart sciencey swim trunks too.

The stark reality is – that in absence of a civilization coming alongside and teaching us many of the objective elements of the Parce-Ames equation, we face very little chance of ever striking out on our own and finding (even by means of radio-telescope) a nearby, much less galactic, civilization. As you can see in the graphic above, the inflection point of knowledge which would equip us to answer the Fermi Paradox is far past the more likely state of our being contacted first.

The dramatically higher odds are that an intelligent inhabiting life form will find us long before we ever find even one, ourselves. The idea therefore, that another advanced culture is aware of or has visited Earth, is well supported by Ockham’s Razor, and should stand as a construct of science, even now. To avoid this alternative is a form of pseudoscience. The more likely realities are that either:

1. they will find us first, by detecting the gamma ray bursts from our 2243 nuclear weapon detonations, long before we resolve even the first variable inside the Drake Equation – or

2. they already were engaged with ‘us’ a long time ago.

Both of these explanations are much less feature-stacked than is the ‘they do not exist’ alternative being promoted by social skepticism.

We have no idea how an alien might exist, communicate or travel. We possess no compelling argument which falsifies the very possible hypothesis that they were already here long ago, and are still hanging around. Not one shred of science – therefore, plurality under Ockham’s Razor is mandated. And if you do not understand what this means, neither are you ready to argue this topic.

The First Duty of Ethical Skepticism, is not to promulgate answers. I do not hold an answer inside this subject. Rather it is to spot and to oppose agency. Especially the rhetoric of babysitting agency. Foolishness, dressed up as science. Wonder in the purported offing – but oppressive in its reality of enforcement.

Fake skepticism.

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “The Fermi Paradox is Babysitting Rubbish” The Ethical Skeptic, WordPress, 2 Oct 2018; Web, https://wp.me/p17q0e-8jd

October 2, 2018 | Argument Fallacies, Institutional Mandates

Panduction: The Invalid Form of Inference

One key, if not the primary, form of invalid inference on the part of fake skeptics resides in the methodology of panductive inference. A pretense of Popper demarcation, panduction is employed as a masquerade of science in the form of false deduction. Moreover, it constitutes an artifice which establishes the purported truth of a favored hypothesis by means of the extraordinary claim of having falsified every competing idea in one fell swoop of rationality. Panduction is the most common form of pseudoscience.

Having just finished my review of the Court’s definition of malice and oppression in the name of science, as outlined in the Dewayne Johnson vs. Monsanto Company case, my thinking broached a category of pseudoscience which is practiced by parties who share similar motivations to the defendant in that landmark trial. Have you ever been witness to a fake skeptic who sought to bundle all ‘believers’ into one big deluded group, who all hold or venerate the same credulous beliefs? Have you ever read a skeptic blog claiming a litany of subjects to be ‘woo’, yet one fully unable to cite any evidence whatsoever which served to epistemologically classify that embargoed realm of ideas under such an easy categorization of dismissal? What you are witness to is the single most common, insidious and pretend-science habit of fake skeptics: panduction.

It’s not that all the material contained in the embargoed hypotheses realm has merit. Most of it does not. But what is comprised therein, even and especially in being found wrong, resides along the frontier of new discovery. You will soon learn on this journey of ethical skepticism, that discovery is not the goal of the social skeptic; rather that is exactly what they have been commissioned to obfuscate.

Science to them is nothing more than an identity which demands ‘I am right’.

There exist three forms of valid inference, in order of increasing scientific gravitas: abduction, induction and deduction. Cleverly downgrading science along these forms of inference, in order to avoid more effective inference methods which might reveal a disliked outcome, constitutes another form of fallacy altogether, called methodical deescalation. We shall not address methodical deescalation here, but rather a fourth common form of inference, which is entirely invalid in itself. Panduction is a form of ficta rationalitas: an invalid attempt to employ critical failures in logic and evidence in order to condemn a broad array of ideas, opinions, hypotheses, constructs and avenues of research as being Popper-falsified, when in fact nothing of the sort has been attained. It is a method of proving yourself correct by impugning everyone and everything besides the idea you seek to protect, all in one incredible feat of armchair or bar-stool reasoning. It is often peddled as critical thinking by fake skeptics.

Panduction is a form of syllogism derived from extreme instances of Appeal to Ignorance, Inverse Negation and/or Bucket Characterization from a Negative Premise. It constitutes a shortcut attempt to promote one idea at the expense of all other ideas, or to kill an array of ideas one finds objectionable. Nihilists employ panduction, for example, as a means to ‘prove’ that nothing exists aside from the monist and material entities which they approve as real. They maintain the fantasy that science has proved that everything aside from what they believe is false by a Popperian standard of science – i.e. deduced. This is panduction.

Panduction

/philosophy : invalid inference/ : an invalid form of inference which is spun in the form of pseudo-deductive study. Inference which seeks to falsify in one fell swoop ‘everything but what my club believes’ as constituting one group of bad people, who all believe the same wrong and correlated things – this is the warning flag of panductive pseudo-theory. No follow-up series of studies nor replication methodology can be derived from this type of ‘study’, which in essence serves to make it pseudoscience. This is a common ‘study’ format conducted by social skeptics masquerading as scientists, in order to pan people and subjects they dislike.

There are three general types of Panduction. In its essence, panduction is any form of inference used to pan an entire array of theories, constructs, ideas and beliefs (save for one favored and often hidden one), by means of the following technique groupings:

  1. Extrapolate and Bundle from Unsound Premise
  2. Impugn through Invalid Syllogism
  3. Mischaracterize through False Observation

The first is executed through attempting to falsify entire subject horizons through bad extrapolation. The second involves poorly developed philosophies of denial. Finally, the third involves the process of converting disliked observations, or failures to observe, into favorable observations:

Panduction Type I

Extrapolate and Bundle from Unsound Premise – Bucket Characterization through Invalid Observation – using a small, targeted or irrelevant sample of linear observations to extrapolate and further characterize an entire asymmetric array of ideas other than a preferred concealed one. Falsification by:

Absence of Observation (praedicate evidentia modus ponens) – any of several forms of exaggeration or avoidance in qualifying a lack of evidence, logical calculus or soundness inside an argument. Any form of argument which claims a proposition consequent ‘Q’, yet which lacks a qualifying modus ponens, ‘If P then’ premise in its expression – rather, implying ‘If P then’ as its qualifying antecedent. This as a means of surreptitiously avoiding a lack of soundness or lack of logical calculus inside that argument, and moreover of enforcing only its conclusion ‘Q’ instead. A ‘There is no evidence for…’ claim made inside a condition of little study, or full absence of any study whatsoever.

Insignificant Observation (praedicate evidentia) – hyperbole in extrapolating or overestimating the gravitas of evidence supporting a specific claim, when only one examination of merit has been conducted, insufficient hypothesis reduction has been performed on the topic, a plurality of data exists but few questions have been asked, few dissenting or negative studies have been published, or few or no such studies have indeed been conducted at all.

Anecdote Error – the abuse of anecdote in order to squelch an idea, or to panduct an entire realm of ideas. This comes in two forms:

Type I – a refusal to follow up on an observation or replicate an experiment, does not relegate the data involved to an instance of anecdote.

Type II – an anecdote cannot be employed to force a conclusion, such as using it as an example to condemn a group of persons or topics – but an anecdote can be employed however to introduce Ockham’s Razor plurality. This is a critical distinction which social skeptics conveniently do not realize nor employ.

Cherry Picking – pointing to a talking sheet of handpicked or commonly circulated individual cases or data that seem to confirm a particular position, while ignoring or denying a significant portion of related context cases or data that may contradict that position.

Straw Man – misrepresentation of either an ally or opponent’s position, argument or fabrication of such in absence of any stated opinion.

Dichotomy of Specific Descriptives – a form of panduction, wherein anecdotes are employed to force a conclusion about a broad array of opponents, yet are never used to apply any conclusion about self, or one’s favored club. Specific bad things are only done by the bad people, but very general descriptives of good, apply when describing one’s self or club. Specifics on others who play inside disapproved subjects, general nebulous descriptives on self identity and how it is acceptable ‘science’ or ‘skepticism’.

Associative Condemnation (Bucket Characterization and Bundling) – the attempt to link controversial subject A with personally disliked persons who support subject B, in an effort to impute falsehood to subject B and frame its supporters as whackos. Guilt through bundling association, and lumping all subjects into one subjective group of believers. This will often involve a context shift or definition expansion in a key word as part of the justification. Spinning, for example, the idea that those who research pesticide contribution to cancer are therefore also flat Earthers.

Panduction Type II

Impugn through Invalid Syllogism – Negative Assertion from a Pluralistic, Circular or Equivocal Premise – defining a set of exclusive premises to which the contrapositive applies, and which serves to condemn all other conditions.

Example (Note that ‘paranormal’ here is defined as that which a nihilist rejects as being even remotely possible):

All true scientists are necessarily skeptics. True skeptics do not believe in the paranormal. Therefore no true scientist can research the paranormal.

All subjects which are true are necessarily not paranormal. True researchers investigate necessarily true subjects. Therefore to investigate a paranormal subject makes one not a true researcher.

All false researchers are believers. All believers tend to believe the same things. Therefore all false researchers believe all the same things.

Evidence only comes from true research. A paranormal investigator is not a true researcher. Therefore no evidence can come from a paranormal subject.

One may observe that the above four examples – thinking which rules social skepticism today – are circular in syllogism, and can only serve to produce the single answer which was sought in the first place. By ruling out entire domains of theory, thought, construct, idea and effort, one has essentially panned everything except that which one desires to be indeed true (without saying as much). It would be like Christianity declaring that every single thought on the part of mankind is invalid, except what is in the Bible. The Bible being the codification equivalent of the above four circular syllogisms, into a single document.
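The circularity of these syllogisms can be made explicit by brute force. Below is a minimal, hypothetical encoding of the second syllogism (the boolean names are my own, not the author’s): treat each proposition as a boolean, enumerate every possible world, and keep only those worlds which satisfy the premises. The ‘conclusion’ then holds in every surviving world – not because any evidence was consulted, but because the premises had already excluded every world in which it could fail.

```python
from itertools import product

# Hypothetical encoding of syllogism 2. Booleans per world:
#   t = the subject is true
#   p = the subject is paranormal
#   r = its investigator is a 'true researcher'
# Premises: "true subjects are not paranormal" (t -> not p), and
# "true researchers investigate true subjects" (r -> t).
worlds = [(t, p, r)
          for t, p, r in product([True, False], repeat=3)
          if (not t or not p) and (not r or t)]

# In every admitted world, a paranormal subject (p) forces 'not a true
# researcher' (not r): the conclusion was baked into the premises.
print(all(not r for (t, p, r) in worlds if p))  # True
```

No observation of any researcher or any subject could ever change the printed result – the hallmark of a circular, panductive syllogism.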

Panduction Type III

Mischaracterize through False Observation – Affirmation from Manufacturing False Positives or Negatives – manipulating the absence of data or the erroneous nature of various data collection channels to produce false negatives or positives.

Panduction Type III is an extreme form of an appeal to ignorance. In an appeal to ignorance, one is faced with observations of negative conditions which could tempt one to infer inductively that there exists nothing but the negative condition itself. An appeal to ignorance simply reveals one of the weaknesses of inductive inference. Let’s say that I find a field which a variety of regional crow murders frequent. So I position a visual motion-detection camera on a pole across from the field, in order to observe the crow murders which frequent that field. In my first measurement and observation instance, I observe all of the crows to be black. Let us further assume that I then repeat that observation exercise 200 times on that same field over the years. From this data I may well develop a hypothesis, one that includes a testable mechanism, in which I assert that all crows are black. I have observed a large population size, and all of my observations were successful; to wit: I found 120,000 crows to all be black. This is inductive inference. Even though this technically would constitute an appeal to ignorance, it is not outside of reason to assert a new null hypothesis – that all crows are black – because my inference was derived from the research and was not a priori favored. I am not seeking to protect the idea that all crows are black simply because I or my club’s status are threatened by the specter of a white crow. The appeal to ignorance fallacy is merely a triviality in this case, and does not ‘disprove’ the null (see the Appeal to Fallacy). Rather it stands as a caution: plurality should be monitored regarding the issue of all crows being black.
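The crow example can be sketched numerically. This is a hypothetical simulation (the session count and flock size are invented for illustration): in a world which genuinely contains no white crows, every session confirms the null, and the induction is warranted – yet nothing has been deduced, since a single future white crow would overturn all 120,000 confirming observations.

```python
import random

random.seed(1)

def observe_field(n_crows, p_white=0.0):
    """One observation session; returns the number of white crows seen."""
    return sum(1 for _ in range(n_crows) if random.random() < p_white)

# 200 sessions of 600 crows each, in a world truly devoid of white crows
sessions = [observe_field(600, p_white=0.0) for _ in range(200)]
white_seen = sum(sessions)

print(f"crows observed: {200 * 600}, white crows seen: {white_seen}")
# Induction from 120,000 confirming observations warrants the null
# hypothesis 'all crows are black' - it does not falsify the white crow.
```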

But, what if I become so convinced that the null hypothesis in this case is the ‘true’ hypothesis, or even preferred that idea in advance because I was a member of a club which uses a black crow as its symbol? In such a case I approach the argument with an a priori belief which I must protect. I begin to craft my experimental interpretation of measurement such that it conforms to this a priori mandate in understanding. This will serve to produce four species of study observation procedural error, which are in fact, pseudoscience; the clever masquerade of science and knowledge:

A.  Affirmation from Result Conversion  – employing a priori assumptions as filters or data converters, in order to produce desired observational outcomes.

1.  Conversion by a priori Assumption (post hoc ergo propter hoc). But what if the field I selected bore a nasty weather phenomenon of fog, on an every-other-day basis? Further, this fog obscured a good view of the field, to the point where I could only observe the glint of sunlight off the crows’ wings, which causes several of them to appear white even though they are indeed black. But because I ‘know’ there are no white crows, I use a conversion algorithm I developed to count the glints inside the fog and register them as observations of black crows – even though a white crow could also cause the same glint. I have created false positives by corrupted method.

2.  Conversion by Converse a priori Assumption (propter hoc ergo hoc – aka plausible deniability). Further, what if I assumed that any time I observed a white crow, this was therefore an indication that fog was present, and that a condition of Conversion by a priori Assumption was therefore in play? I would henceforth never be able to observe a white crow at all, finding only results which conform to the null hypothesis – which would now be an Omega Hypothesis (see The Art of Professional Lying: The Tower of Wrong).
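The two conversion errors above can be sketched together. The filter below is hypothetical (the detection labels are invented for illustration): glints are assumed to be black crows (conversion 1), and any white-crow sighting is assumed to indicate fog and is discarded (conversion 2). The method is thereby structurally incapable of ever registering a white crow, regardless of what the field actually contains.

```python
def convert(raw_detections):
    """Apply the a priori conversion filters to raw camera detections."""
    counted = []
    for d in raw_detections:
        if d == "glint":
            counted.append("black")  # conversion 1: a glint is *assumed* to be a black crow
        elif d == "white":
            continue                 # conversion 2: a white crow is *assumed* to mean fog;
                                     # the observation is discarded outright
        else:
            counted.append(d)
    return counted

# Even when the raw feed plainly contains white crows...
raw = ["black", "glint", "white", "black", "white", "glint"]
counted = convert(raw)
print(counted)             # ['black', 'black', 'black', 'black']
print("white" in counted)  # False - the favored hypothesis survives by design
```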

Example: Viking Mars Lander Data Manipulation

Two Mars Viking landers were sent to Mars, in part to study for signs of life. NASA researchers took soil samples the Viking landers scooped from the surface and mixed them with nutrient-rich water. If the soil had life, the theory went, the soil’s microbes would metabolize the nutrients in the water and release a certain signature of radioactive molecules. To their pleasant surprise, the nutrients were metabolized and radioactive molecules were released – suggesting that Mars’ soil contained life. However, the Viking probes’ other two experiments found no trace of organic material, which prompted the question: if there were no organic materials, what could be doing the metabolizing? So by assumption, the positive results from the metabolism test were dismissed as derivative of some other chemical reaction, which has not been identified to date. The study was used as a rational basis from which to decline further search for life on Mars, when it should instead have been deemed ‘inconclusive’ (especially in light of our finding organic chemicals on Mars in the last several months).1

B. Affirmation from Observation Failure Conversion – errors in observation are counted as observations of negative conditions, further then used as data or as a data screening criterion.

Continuing with our earlier example, what if on 80% of the days on which I observed the field full of crows, the camera malfunctioned and errantly pointed into the woods to the side, and I was fully unable to make observations at all on those days? Further, what if I counted those non-observing days as ‘black crow’ observation days, simply because I had defined a black crow as being the ‘absence of a white crow’ (pseudo-Bayesian science), instead of being constrained to only the actual observation of an actual physical white crow? Moreover, what if, because of the unreliability of this particular camera, any observations of white crows it presented were tossed out, so as to prefer observations from ‘reliable’ cameras only? This too is pseudoscience, in two forms:

1.  Observation Failure as Observation of a Negative (utile absentia) – a study which observes false absences of data, or creates artificial absence noise through improper study design, and further then assumes such error to represent verified negative observations. A study containing field or set data in which there exists a risk that absences in measurement data will be caused by external factors which artificially serve to make the evidence absent, through risk of failure of detection/collection/retention of that data. The absences of data, rather than being filtered out of analysis, are fallaciously presumed to constitute bona fide observations of negatives. This is improper study design, which will often serve to produce an inversion effect (curative effect) in such a study’s final results. Similar to torfuscation.

2.  Observation Failure as Basis for Selecting Reliable over Probative Data (Cherry Sorting) – when one applies the categorization of ‘anecdote’ to screen out unwanted observations and data, based upon the a priori and often subjective claim that the observation was ‘not reliable’. This ignores the probative value of the observation, and the ability to later compare other data in order to increase its reliability in a more objective fashion, in favor of assimilating an intelligence base which is not highly probative and can be reduced only through statistical analytics – likely then serving only to prove what one was looking for in the first place (aka pseudo-theory).
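The utile absentia error can be sketched in a few lines. This is a hypothetical simulation (the failure rate and white-crow frequency are invented for illustration): when 80% of sessions are camera failures, recording each failure as a ‘black crow’ observation dilutes the measured white-crow rate – here roughly fivefold, since only about a fifth of the sessions yield real data – whereas the sound treatment simply excludes the failed sessions.

```python
import random

random.seed(7)

# Simulate 1,000 sessions: 80% are camera failures; otherwise a session
# records 'white' 10% of the time and 'black' otherwise.
days = []
for _ in range(1000):
    if random.random() < 0.8:
        days.append("camera_failed")   # no observation occurred at all
    else:
        days.append("white" if random.random() < 0.1 else "black")

# Sound design: failed sessions are filtered out of the analysis
observed = [d for d in days if d != "camera_failed"]

# utile absentia: each failure is recorded as a negative ('black') observation
fallacious = ["black" if d == "camera_failed" else d for d in days]

def white_rate(xs):
    return sum(x == "white" for x in xs) / len(xs)

print(f"failures excluded:            {white_rate(observed):.3f}")
print(f"failures counted as negative: {white_rate(fallacious):.3f}")
# The second rate is far lower than the first: absence of data has been
# converted into evidence of absence.
```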

These two forms of conversion of observation failures into evidence in favor of a particular position are highlighted no better than in studies which favor healthcare-plan diagnoses over cohort and patient input surveys. Studies such as the Dutch MMR-Autism Statistical Meta-Analysis or the Jain-Marshall Autism Statistical Analysis failed precisely because of the two above fallacious methods regarding the introduction of data, relying only upon statistical analytics of risk-sculpted and cherry-sorted data, rather than direct critical path observation.

Example: Jain-Marshall Autism Study

Why is the 2015 Jain-Marshall Study of weak probative value? Because it took third-party, unqualified (health care plan) sample interpretations of absences (these are not observations – they are ‘lack-of’ observations, which are not probative data to an intelligence specialist, nor to a scientist – see pseudo-theory) from vaccinated and non-vaccinated children’s final medical diagnoses at ages 2, 3, and 5. It treated failures in the data collection of these healthcare databases as observations of negative results (utile absentia) – a data vulnerability similar to the National Vaccine Injury Compensation System’s ‘self-volunteering’ of information and its limitation of detection to within 3 years. This favors a bad, non-probative data repository simply because of its perception as being ‘reliable’ as a source of data. It fails to catch 99% of signal observations (Cherry Sorting), and there is a good demonstrable record of that failure to detect actual injury circumstances.2

One might chuckle at the face-value ludicrousness of either Panduction Type III A or B. But Panduction Type III is regularly practiced inside peer-reviewed journals of science. Its wares constitute the most insidious form of malicious and oppressive fake science. One can certainly never expect a journalist to understand why this form of panduction is invalid, but one should certainly expect it of their peer review scientists – those who are there to protect the public from bad science. And of course, one should expect it from an ethical skeptic.

epoché vanguards gnosis

——————————————————————————————

How to MLA cite this blog post =>

The Ethical Skeptic, “Panduction: The Invalid Form of Inference” The Ethical Skeptic, WordPress, 31 Aug 2018; Web, https://wp.me/p17q0e-8c6

 

August 31, 2018 | Argument Fallacies, Tradecraft SSkepticism
