The Ethical Skeptic

Challenging Pseudo-Skepticism, Institutional Propaganda and Cultivated Ignorance

Critical Attributes Which Distinguish the Scientific Method

The scientific method bears several critical attributes which distinguish it from both mere experiment and its masquerade, the Lyin’tific Method. It behooves the ethical skeptic to understand the critical features which distinguish science from its pretense, and to maintain the skill of spotting social manipulation conducted in the name of science.

The experimental method is a subset of the scientific method. There exists a distinct difference between these two protocols of science. The experimental method is oriented towards an incremental continuation of existing knowledge development, and accordingly begins with the asking of a question, bolstered by some quick research before initiating experimental testing. But not all, nor even the majority of, knowledge development can be prosecuted in this fashion of study. Under the scientific method, one cannot boast of possessing the information necessary to ask a question at the very start. Asking an uninformed question may serve to bias the entire process – or kill the research artificially, without the full awareness of the sponsors or stakeholders. Accordingly, in the scientific method, a question is not asked until step 8 – this in an effort to avoid the pitfalls of pseudo-theory. This is purposeful, because the astute researcher often does not know, at the very beginning, the critical path necessary to reach his discovery. Science involves an intelligence development period wherein we ask: 1. what are the critical issues that need to be resolved? 2. what are the irrelevant factors we can ignore for now? and 3. how do I chain these issue resolutions into a critical path of knowledge development? In the absence of this process, there exists a bevy of questions – wherein just selecting one and starting experiments is akin to shooting in the dark.

The materials physicist Percy Bridgman commented upon the process by which we ‘translate’ abstract theories and concepts into specific experimental contexts and protocols. Calling this work of reduction and translation ‘operationalism’, Bridgman cautioned that experimental data production is often guided by substantial presuppositions about the subject matter which arise as a part of this translation – often raising concern about the ways in which initial questions are formulated inside a scientific context. True science is a process which revisits its methodological constructs (modes of research method) as often as it does its epistemological (knowledge) ones. Accordingly, this principle identified by Bridgman is the foundation of the philosophy which clarifies the difference between the scientific method and the experimental method. It is unwise to consider the two as being necessarily congruent.1

The process of developing a scientific question is many times daunting, involving commitment from a sponsor, a long horizon of assimilating observational intelligence, and persistence in seeking to establish a case for necessity – a necessity which serves to introduce plurality of argument (see Ockham’s Razor), and which can be brought before peers: advising peers who support the research and who assist in developing the construct being addressed into a workable hypothesis. These peers are excited to witness the results of the research as well.

Science is a process of necessity in developing taxonomic observation, which seeks to establish a critical path of continuously evaluated and incremental-in-risk conjecture, probing by means of the most effective inference method available, the resolution of a query and its inevitable follow-on inquiry, in such a manner that this process can be replicated and shared with mankind.

The Lyin’tific Method, in contrast, will target one or more of these critical attributes to be skipped, in an effort to get to a desired answer destination in as expedient a manner as possible – yet still appear as science.

The Critical Attributes of Science

My identification of the critical attributes of science, and especially of the early and neglected part of the scientific method, is reflected by a statement on the part of the fictional character Sherlock Holmes, from the 1887 novel A Study in Scarlet by Sir Arthur Conan Doyle.

“It is a capital mistake to theorize before you have all the evidence. It biases the judgment.”

Therefore,

Although science is indeed an iterative process, nonetheless true science, as opposed to general or developmental study, involves these critical path steps at the beginning of the scientific method:

  1. Observation
  2. Intelligence
  3. Necessity

Science thereafter is an iterative method which bears the following necessary features:

  1. Flows along a critical path of dependent, salient and revelatory observation and query
  2. Develops hypothesis through testable mechanism
  3. Is incremental in risk of conjecture (does not stack conjectures)
  4. Examines probative study in preference over reliable data
  5. Seeks reliable falsification over reliable inductive inference
  6. Seeks reliable consilience over reliable abductive inference
  7. Does not prematurely make a claim to consensus in absence of available deduction
  8. Shares results, next questions, next steps and replication guidance.

Social skeptics seek to deny the first three steps of science, along with routinely ignoring its necessary features. Social skeptics then further push the experimental method in place of the above attributes of science – asking a biased and highly uninformed question (also known in philosophy as rhetoric), while promoting science as nothing but exclusive club lab activity. Finally, they incorporate their corrupted version of ‘peer review’, wherein they seek to kill ideas before they can be formulated into a hypothesis and be studied. This is a process of corruption.

Most unanswered questions reside in a state of quandary precisely because of a failure in or refusal to pursue the above characteristics of science.

Accordingly, the scientific method begins with a process of circumspection and skepticism, which is distinctly different from the inception of the much more tactical experimental method. To scoff at this distinction reveals a state of scientific illiteracy, and of never having done actual scientific research or discovery.

While both the experimental method and the scientific method are valid process descriptions applicable to science, there does exist an abbreviated version of the scientific method which sometimes slips by as valid among political agenda proponents and the mainstream press – that method which is practiced in the pesticide and vaccine industries. It follows:

The Lyin’tific Method: The Ten Commandments of Fake Science

When you have become indignant and up to your rational limit over privileged anti-science believers questioning your virtuous authority and endangering your industry profits (pseudo-necessity), well then it is high time to undertake the following procedure.

1. Select for Intimidation. Appoint an employee who is under financial or career duress to create a company formed solely to conduct this study under an appearance of impartiality, and to then go back and live again comfortably in their career or retirement. Hand them the problem definition, approach, study methodology and scope. Use lots of Bradley Effect vulnerable interns (as data scientists) and persons trying to gain career exposure and impress. Visibly assail any dissent as being ‘anti-science’; the study lead will quickly grasp the implicit study goal – they will execute all this without question. Demonstrably censure or publicly berate a scientist who dissented on a previous study – allow the entire organization/world to see this. Make him become the hate-symbol for your a priori cause.

2. Ask a Question First. Start by asking a ‘one-and-done’, noncritical path & poorly framed, half-assed, sciencey-sounding question, representative of a very minor portion of the risk domain in question and bearing the most likely chance of obtaining a desired result – without any prior basis of observation, necessity, intelligence from stakeholders nor background research. Stress that the scientific method begins with ‘asking a question’. Avoid peer or public input before and after approval of the study design. Never allow stakeholders at risk to help select nor frame the core problem definition, nor the data pulled, nor the methodology/architecture of study.

3. Amass the Right Data. Never seek peer input at the beginning of the scientific process (especially on what data to assemble), only the end. Gather a precipitously large amount of ‘reliable’ data, under a Streetlight Effect, which is highly removed from the data’s origin and stripped of any probative context – such as an administrative bureaucracy database. Screen data from sources which introduce ‘unreliable’ inputs (such as may contain eyewitness, probative, falsifying, disadvantageous anecdotal or stakeholder influenced data) in terms of the core question being asked. Gather more data to dilute a threatening signal, less data to enhance a desired one. Number of records pulled is more important than any particular discriminating attribute entailed in the data. The data volume pulled should be perceptibly massive to laymen and the media. Ensure that the reliable source from which you draw data, bears a risk that threatening observations will accidentally not be collected, through reporting, bureaucracy, process or catalog errors. Treat these absences of data as constituting negative observations.

4. Compartmentalize. Address your data analysts and interns as ‘data scientists’ and your scientists who do not understand data analysis at all, as the ‘study leads’. Ensure that those who do not understand the critical nature of the question being asked (the data scientists) are the only ones who can feed study results to people who exclusively do not grasp how to derive those results in the first place (the study leads). Establish a lexicon of buzzwords which allow those who do not fully understand what is going on (pretty much everyone), to survive in the organization. This is laundering information by means of the dichotomy of compartmented intelligence, and it is critical to everyone being deceived. There should not exist at its end, a single party who understands everything which transpired inside the study. This way your study architecture cannot be betrayed by insiders (especially helpful for step 8).

5. Go Meta-Study Early. Never, ever, ever employ study which is deductive in nature; rather employ study which is only mildly and inductively suggestive (so as to avoid future accusations of fraud or liability) – and of such a nature that it cannot be challenged by any form of direct testing mechanism. Meticulously avoid systematic review, randomized controlled trial, cohort study, case-control study, cross-sectional study, case reports and series, or reports from any stakeholders at risk. Go meta-study early, and use its reputation as the highest form of study to declare consensus; especially if the body of industry study from which you draw is immature and as early in its maturation as possible. Imply idempotency in the process of assimilation, but let the data scientists interpret other study results as they (we) wish. Allow them freedom in the construction of oversampling adjustment factors. Hide the methodology under which your data scientists drew conclusions from tons of combined statistics derived from disparate studies examining different issues, whose authors were not even contacted in order to determine whether their study would apply to your statistical database or not.

6. Shift the Playing Field. Conduct a single statistical study which is ostensibly testing all related conjectures and risks in one fell swoop, in a different country or practice domain from that of the stakeholders asking the irritating question to begin with; moreover, with the wrong age group or a less risky subset thereof, cherry sorted for reliability and not probative value, or which is inclusion- and exclusion-biased to obfuscate or enhance an effect. Bias the questions asked so as to convert negatives into unknowns, or vice versa if a negative outcome is desired. If the data shows a disliked signal in aggregate, then split it up until that disappears – conversely, if it shows a signal in component sets, combine the data into one large Yule-Simpson effect (a short illustration of this effect follows this list). Ensure there exists more confidence in the accuracy of the percentage significance in measure (p-value) than in the accuracy/salience of the contained measures themselves.

7. Trashcan Failures to Confirm. Query the data 50 different ways and shades of grey, selecting for the method which tends to produce results which favor your a priori position. Instruct the ‘data scientists’ to throw out all the other data research avenues you took (they don’t care), especially if it could aid in follow-on study which could refute your results. Despite being able to examine the data 1,000 different ways, only examine it in this one way henceforth. Peer review the hell out of any studies which do not produce a desired result. Explain any opposing ideas or studies as being simply a matter of doctors not being trained to recognize things the way your expert data scientists did. If as a result of too much inherent bias in these methods, the data yields an inversion effect – point out the virtuous component implied (our technology not only does not cause the malady in question, but we found in this study that it cures it~!).

8. Prohibit Replication and Follow Up. Craft a study which is very difficult or impossible to replicate, does not offer any next steps nor serve to open follow-on questions (all legitimate study generates follow-on questions; yours should not), and most importantly, implies that the science is now therefore ‘settled’. Release the ‘data scientists’ back to their native career domains so that they cannot be easily questioned in the future. Intimidate organizations from continuing your work in any form, or from using the data you have assembled. Never find anything novel (other than a slight surprise over how unexpectedly good you found your product to be), as this might imply that you did not know the answers all along. Never base consensus upon deduction of alternatives, rather upon how many science communicators you can have back your message publicly. Make your data proprietary. View science details as an activity of relative privation, not any business of the public.

9. Extrapolate and Parrot/Conceal the Analysis. Publish wildly exaggerated & comprehensive claims to falsification of an entire array of ideas and precautionary diligence, extrapolated from your single questionable and inductive statistical method (panduction). Publish the study bearing a title which screams “High risk technology does not cause (a whole spectrum of maladies) whatsoever” – do not capitalize the title, as that will appear more journaly and sciencey and edgy and rebellious and reserved and professorial. Then repeat exactly this extraordinarily broad-scope and highly scientific syllogism twice in the study abstract, first in baseless declarative form and finally in shocked revelatory and conclusive form, as if there were some doubt about the outcome of the effort (ahem…). Never mind that simply repeating the title of the study twice as the entire abstract is piss-poor protocol – no one will care. Denialists of such strong statements of science will find it very difficult to gain any voice thereafter. Task science journalists to craft 39 ‘research articles’ derived from your one-and-done study; deem that now 40 studies. Place the 40 ‘studies’, both pdf and charts (but not any data), behind a registration approval and $40-per-study paywall. Do this over and over until you have achieved a number of studies and research articles which might fancifully be round-able up to ‘1,000’ (say 450 or so ~ see reason below). Declare Consensus.

10. Enlist Aid of SSkeptics and Science Communicators. Enlist the services of a public promotion for-hire gang, to push-infiltrate your study into society and media, to virtue signal about your agenda and to attack those who dissent (especially the careers of wayward scientists). Have members make final declarative claims in one-liner form – “A thousand studies show that high risk technology does not cause anything!” – a claim which they could only make if someone had actually paid the $40,000 necessary to access the ‘thousand studies’. That way the general public cannot possibly be educated in any fashion sufficient to refute the blanket apothegm. This is important: make sure the gang is disconnected from your organization (no liability imparted from these exaggerated claims nor any inchoate suggested dark activities *wink wink), and moreover, that they are motivated by some social virtue cause such that they are stupid enough that you do not actually have to pay them.
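
As a side note to step 6 above: for readers who have not met the Yule-Simpson effect, here is a minimal illustration in Python. The counts are invented solely to show the mechanism – an exposure that looks better inside each subgroup, yet worse once the subgroups are pooled – which is precisely the split-or-combine game described in that step.

```python
# Hypothetical counts (successes, total): the exposure wins inside EACH
# subgroup, yet loses once the subgroups are pooled (Yule-Simpson effect).
data = {
    "subgroup_low_risk":  {"exposed": (81, 87),   "control": (234, 270)},
    "subgroup_high_risk": {"exposed": (192, 263), "control": (55, 80)},
}

def rate(successes, total):
    return successes / total

pooled = {"exposed": [0, 0], "control": [0, 0]}
for grp, arms in data.items():
    for arm, (s, n) in arms.items():
        pooled[arm][0] += s
        pooled[arm][1] += n
        print(f"{grp:22s} {arm:8s} {rate(s, n):.0%}")

for arm, (s, n) in pooled.items():
    print(f"{'pooled':22s} {arm:8s} {rate(s, n):.0%}")
# exposed wins in both subgroups (93% vs 87%, 73% vs 69%),
# yet loses in the pooled table (78% vs 83%).
```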

To the media, this might look like science. But to a life-long researcher, it is nowhere near valid.  It is pseudo-science at the least; and even worse than in the case of the paranormal peddler – it is a criminal felony and assault against humanity. It is malice and oppression, in legal terms.

The discerning ethical skeptic bears this in mind and uses it to discern the sincere researcher from the attention grabbing poseur.

epoché vanguards gnosis

How to MLA cite this blog post =>
The Ethical Skeptic, “The Scientific Method Contrasted with The Experimental Method” The Ethical Skeptic, WordPress, 31 March 2018, Web; https://wp.me/p17q0e-7qG

March 31, 2018 | Ethical Skepticism

The Role of Critical Path in Logic, Systems and Science

It is the essence of critical thinking. A critical path is the ordered chain of incremental events or logic which produces the most elegant pathway to a reasonably constrained objective. While an aspect of critical thinking, however, its application is more related to actual discovery and accomplishment than to the more common, cynical martial art of denial. Pop critical thinking is more about explaining the reasons why what is right, is right. Critical path inside of science is about applying intelligence to discover the pathway to what is right – fearing being not informative even more than merely being wrong.

It is the most effective (not efficient) path of progress which cuts through the bullshit in order to get to an objective. If you are a regular reader of The Ethical Skeptic, you will hear me speak often of critical path. The ability to discern it is one of the skills which allows a sincere researcher to spot the agenda faker. It is the same reason why a whitetail buck deer will reliably traverse a path following the ridgeline through a chain of mountains, avoiding the gorges and ravines to either side. Critical logic is the inverse of the condition involving ignoratio elenchi (a misdirection in argument) or ingens vanitatum (a great deal of irrelevance). Modern versions of ‘critical thinking’ taught by today’s social skeptics are usually entropic in their application; producing misinformation, irrelevance and conformity. Energy-sucking gorges and ravines of distraction and misdirection. This amateur teaching by activist numpties results in an array of enforced, conforming and simplest answers (see The Art of the Professional Lie). A critical path is, in many regards, the opposite of this pseudo-skeptical process. Critical thinking may produce the same conclusion as critical path logic, but in the case of ‘critical thinking’ this is simply an accident in outcome. It is the critical path of logic which is the process of science, and not ‘critical thinking’. Just because someone might virtue signal around promoting the right answer inside a scientific question does not serve to legitimize the process, path or method they employed to get there.

You will not find critical path logic defined in most scientific or philosophical handbooks and guides. Believe me, I have over 40 such guides in my personal library. In guides to science, you will most likely find the scientific method1† – which is in its essence a process involving a critical path discipline, incomplete though it may be in constituting actual science. In books of philosophy, sadly, rather than finding tenets on logical calculus and the development of knowledge, one finds a list of previous philosophers and their work, also bereft of critical path elemental exposé.

†Please note that Wikipedia has removed its older definition of ‘scientific method’, which began with ‘Define a Question’ as the first step (see the old definition extracted April 1, 2014 in The Scientific Method is Not Simply The Experimental Method), and has replaced it with ethical skepticism’s first step – ‘Conduct Observation’ – instead! This is a major breakthrough, and while it is a remote stretch to imply that The Ethical Skeptic contributed to this change, nonetheless it has taken time and activism on the part of real researchers just like us to supersede the false version of the scientific method formerly taught by social skeptics over the last six decades. They have yet to add in the steps of ‘Frame Intelligence’ and ‘Establish Necessity’ (Ockham’s Razor) before asking a question – but this is a step in the right direction. The Ethical Skeptic is very pleased with this. This evolution is part of the contribution to the dismantling of the social skepticism movement currently underway.

An exception to such paucity on the topic may be found (below quote) in physicist and computational scientist Stephen Wolfram’s ‘Completion Theory’, inside his work A New Kind of Science (p. 1037).2 In this critical development of the philosophy of science, Wolfram outlines a set of approaches and principles used in converting conditions of non-confluence in a model structure into a condition of confluence or even unifinality (note: not monofinality). These modeling paradigms outline the benefit of reducing probative insight questions into elements of critical pair testing (highly constrained Bayesian model segments) which force two divergent analogues into one single conclusion, artificially. This approach forces rationalization to Bayesian reduction in its ethic of course, but more importantly it outlines that pairing of critical elements must be done iteratively and in the right succession, in order to produce a single, non-entropic answer. This is the essence of a critical path of logic: reducing an argument into single steps of validity which can be used to underpin a larger, more comprehensive conjecture.

If one has a multiway system that terminates but is not confluent then it turns out often to be possible to make it confluent by adding a finite set of new rules. Given a string p which gets transformed to either q or r by the original rules, one can always imagine adding a new rule q → r or r → q that makes the paths from p immediately converge. To do this explicitly for all possible p that can occur would however entail having infinitely many new rules. But as noted by Donald Knuth and Peter Bendix in 1970, it turns out often to be sufficient just iteratively to add new rules only for each so-called critical pair q, r that is obtained from strings p that represent minimal overlaps in the left-hand side of the rules one has.
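
To make the mechanism Wolfram describes a bit more tangible, here is a deliberately simplified Python sketch of completion over string rewriting rules. It is a toy only – real Knuth–Bendix completion operates over terms, with unification and a proper reduction ordering – but it shows the move the quote describes: find the critical pairs arising from overlapping left-hand sides, and add an oriented rule wherever a pair still diverges.

```python
from itertools import product

def normalize(s, rules, max_steps=1000):
    """Rewrite s with the first applicable rule until none applies (assumes termination)."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            i = s.find(lhs)
            if i != -1:
                s = s[:i] + rhs + s[i + len(lhs):]
                break
        else:
            return s
    return s  # safety cap reached

def critical_pairs(rules):
    """Yield (p, q, r): p is a minimal overlap of two left-hand sides,
    q and r its two one-step rewrites (inclusion overlaps ignored for brevity)."""
    for (l1, r1), (l2, r2) in product(rules, repeat=2):
        for k in range(1, min(len(l1), len(l2))):
            if l1[-k:] == l2[:k]:          # suffix of l1 overlaps prefix of l2
                p = l1 + l2[k:]            # the overlapped string
                yield p, r1 + l2[k:], l1[:-k] + r2

def complete(rules, rounds=10):
    """Naive completion: add an oriented rule for each critical pair whose
    two normal forms still diverge (orientation: longer/lex-larger -> smaller)."""
    rules = list(rules)
    for _ in range(rounds):
        new = []
        for p, q, r in critical_pairs(rules):
            nq, nr = normalize(q, rules), normalize(r, rules)
            if nq != nr:
                key = lambda s: (len(s), s)
                rule = (max(nq, nr, key=key), min(nq, nr, key=key))
                if rule not in rules and rule not in new:
                    new.append(rule)
        if not new:
            return rules        # locally confluent: every critical pair converges
        rules.extend(new)
    return rules

# A tiny terminating but non-confluent system, made confluent by completion.
print(complete([("ab", "a"), ("ba", "b")]))
# -> [('ab', 'a'), ('ba', 'b'), ('aa', 'a'), ('bb', 'b')]
```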

Similarly, the German polymath Gottfried Wilhelm (von) Leibniz’s devices for the evaluation of the validity of given theses (ars iudicandi) and for finding new truths on the basis of given truths (ars inveniendi) outline principles which show that scientific logical reduction is nigh unto the art of mathematical reduction – itself the purest form of critical path in logic.3 The Stanford Encyclopedia of Philosophy summarizes this critical path logic thusly, in its series on the foundational philosophy of logic:4

Leibniz stresses in the “Nouveaux essais” that syllogistic is part of a sort of universal mathematics, an art of infallibility (art d’infaillibilité). This art is not restricted to syllogisms, but concerns all kinds of formal proofs, i.e. all reasoning in which inferences are executed by virtue of their form.

What is Critical Path? Driving a Dark Highway at Night to an Unknown Destination

Critical path logic is simply a deployment of true critical thinking itself. Critical thinking has nothing whatsoever to do with ‘Stanovich goal enabling behaviors and cognitive dispositions’5 – as that is nothing but compliance and the ability to spot how to comply. The definition of critical thinking follows below, and it has nothing to do with what you currently know – nor with pressure you receive from your peers to conform to ‘rationality’:

Critical Thinking

/philosophy : skepticism : science/ : the ability to assemble an incremental, parsimonious and probative series of questions, in the right sequence, which can address or answer a persistent mystery – along with an aversion to wallowing in or sustaining the mystery for personal or club gain. Critical thinking is the ability to understand, along with the skill to deploy for benefit (value, clarity, risk and suffering alleviation), critical path logic and methodology. It is a process of methodically and objectively evaluating a claim to verity, through seeking new observations/questions which can be creatively and intelligently framed to challenge elements of fiat knowledge which underpin the claim, regardless of how compulsive, reasonable, justified and accepted that knowledge might be promoted or perceived.

There are several species of critical path logic application most commonly employed inside business, law, warfare and science today (I have been extensively and professionally involved in all of these):

  • Legal Prosecution – the critical sequence of evidences and questions which serve to convict or acquit a defendant.
  • Operations Research – the critical sequence of task and work content analytics which serve to attain a specific goal in the most objective/constraint effective method.
  • Science – the critical/sound basis from which to ask each next incremental question under the scientific method, which will serve to most completely and confidently answer a single query into the unknown.
  • Patent Prosecution – the critical set of disclosures, prior art, framings, constraints and claims, which inexorably lead to a non-obvious, teachable and novel invention, use or application.
  • Target Prosecution – in the Navy, when hunting a sub, the sub’s position is unknown at first. However, it can eventually be derived through a deductive process of critical path called Target Motion Analysis. This is a logical and incremental series of questions which, once answered, reveal a set of, or even one single possible instance of, a sub’s location, depth, speed and heading (a toy numerical sketch follows this list).
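
Since Target Motion Analysis is the most concretely numerical example in the list above, here is a toy pseudo-linear least-squares sketch of bearings-only TMA in Python. It assumes a constant-velocity target and a manoeuvring own-ship (required for observability); the numbers are invented, and this is an illustration of the principle, not an operational implementation.

```python
import numpy as np

def tma_pseudolinear(times, observer_xy, bearings_rad):
    """Estimate [x0, y0, vx, vy] of a constant-velocity target from
    bearings-only measurements: each bearing constrains the target to a
    line through the observer, giving equations linear in the unknowns."""
    A, b = [], []
    for t, (ox, oy), brg in zip(times, observer_xy, bearings_rad):
        s, c = np.sin(brg), np.cos(brg)
        # Bearing (measured from the x-axis) satisfies s*(tx - ox) - c*(ty - oy) = 0,
        # with tx = x0 + vx*t and ty = y0 + vy*t.
        A.append([s, -c, s * t, -c * t])
        b.append(s * ox - c * oy)
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return solution

# Synthetic check: target starts at (10, 5) km moving at (0.2, -0.1) km/min;
# own-ship runs two legs so that the geometry is observable.
t = np.arange(0.0, 30.0, 1.0)
own = [(0.3 * ti, 0.0) if ti < 15 else (4.5, 0.4 * (ti - 15)) for ti in t]
tgt = [(10 + 0.2 * ti, 5 - 0.1 * ti) for ti in t]
brg = [np.arctan2(ty - oy, tx - ox) for (ox, oy), (tx, ty) in zip(own, tgt)]
print(tma_pseudolinear(t, own, brg))   # ~ [10.0, 5.0, 0.2, -0.1]
```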

Therefore it has become clear over the last 200 years of the development of the philosophy of scientific logic (albeit more of late than of old) that inference itself is drawn from three things which act in concert to clarify (reverse the entropy of) knowledge inside a horizon of the unknown:

  • Develop a mathematical pathway of logical reduction – which evolves flexibly by novel outcomes rather than deterministically (as does maths),
  • Constrain in order to iteratively and convergently test critical pairs of modus ponens conjecture (the novelty), and
  • Sequence testing in such a fashion as to maximize probative potential and either an intelligence structure or unifinality (reduce entropy of knowledge).

For instance, let us compare two natural questions, an orphan question and a scientific question (bearing features of critical path):

Did a consciousness craft the universe?   <— This is a probative question, but it is not critical path.  It bears no underlying sound premise, is not parsimonious, incremental nor sequitur inside any particular argument bearing a logical calculus.  This is an orphan question. Even if we obtained the answer from some certified divine revelation, we would not know what to do with it next. It is next to useless, as both a question and an answer. Being right or wrong is inconsequential, as it is not informative.

Can a signal indicating an observer effect on one particle of a particle/anti-particle pair carry information about that observation to the complementary particle, faster than c (the speed of light)?   <— Bears premise, is parsimonious and testable, incremental and sequitur – and finally, is highly probative; that is, it bears informative potential which can be crafted into intelligence, which further allows us to craft and constrain a further series of related probative questions. This is the essence of critical path. It is a turning on of the headlights of science, while it drives down a dark highway at night. But not only that, it also eventually selects the most effective route to the destination – or even the destination itself.

Social Skeptics will make scientific arguments of denial, which sound like they are principles of science, but in reality bear no more critical path value, than does the orphan question above. Anecdote and eyewitness testimony are to be ignored, complex ideas are wrong, conspiracy theories are wrong, pseudosciences are false topics, question the facts, examine every alternative you can think of.  These are orphan conjectures in science; crafted and disguised so as to not look like religious statements. But they are religious tenets and tools, nonetheless.

If something is false, it should eventually falsify itself through accrued intelligence. And in being found wrong, become highly informative in the process. If we choose instead to pre-certify it as wrong and then choose to block further research through use of apothegms, no informative critical path development (intelligence) can ever be undertaken from that point.  Wrong and seeing, is a world better state than is correct and blind.

This is the essence of a critical path in logic, science and systems. Most people, including many prominent skeptics and scientists, do not get this. It is a discipline of methodically focusing on what is important – and making the inquiry outcome even more rewarding in the process. Walt Whitman laments in “Thoughts”, inside his work Leaves of Grass, about the inability to cut through bullshit and focus on the path of the salient, incremental and sequitur, as such:6

OF persons arrived at high positions, ceremonies,
wealth, scholarships, and the like;
To me, all that those persons have arrived at, sinks
away from them, except as it results to their
Bodies and Souls,
So that often to me they appear gaunt and naked;
And often, to me, each one mocks the others, and
mocks himself or herself,
And of each one, the core of life, namely happiness,
is full of the rotten excrement of maggots,
And often, to me, those men and women pass unwit-
tingly the true realities of life, and go toward
false realities,
And often, to me, they are alive after what custom has
served them, but nothing more,
And often, to me, they are sad, hasty, unwaked son-
nambules, walking the dusk.

In fact, Whitman’s lament stands as metaphor to the elegance of an effective scientific process – it is satisfying and skill-honing, every bit as much as it is illuminating. As head of a materials lab, as CEO of a markets research and intelligence company, as a Director in Intelligence, as a philosopher and as a systems engineer planning over $600 billion in trade throughout my career (all $5 million to $100 million income companies), I have applied extensively all three forms of professional critical path variant: science, logic and systems. My firms have been in premium demand for this role for over three decades, brought in to solve scientific, business and national infrastructure challenges which daunt classic organizations. In my experience, less than 1% of the population grasps the role of critical path in argument, planning and scientific reduction. Whitman’s unwaked sonnambules. Lawyers and mathematicians often do, and scientists sometimes do, grasp critical logic. But diagnosticians, technicians, the dilettante and abduction/induction specialists rarely exhibit the skills honed under experience in handling critical path disciplines (see Diagnostician’s Error and The Three Types of Reason). This is why it is important to sense what type of mind you are dealing with early on inside a discussion. One seldom can accompany a numpty across a path of critical logic or progression, as they do not bear the background nor skill set to assimilate such things. They only know the talking points by means of which they were trained.

Anecdote – BAD! Complex Alternative – BAD! Pseudosciences – BAD! Eyewitness Testimony – BAD! Conspiracy Theory – BAD!

A critical path involves several components of defining feature, which are similar in nature to – however much more than simply – the engineering critical path method:7 (note: In engineering critical path planning, ostensibly one knows all the tasks, work content, slack, drag and interconnections in advance, and is simply arranging them into a duration-minimized framework. In science and in logic, one must apply the intelligence of early steps in order to improve the clarity, structure and efficacy of the later steps. There is fog on that horizon – one must adopt a pair of fog lights. See below.)

Critical Path (of systems, science or logic)

/philosophy : skepticism : science : critical thought/ : a preselected and interdependently ordered chain of incremental tasks, experiments or arguments which produce the most elegant pathway of progress to a reasonably constrained goal or answer. Elegance being defined as resource efficiency, plenary completeness and expediency, employed in ethical balance. Each step in a critical path relies upon a foundation of its previous steps/logic, yet adds in one incremental goal, test or claim which is being examined for validity or is sought for accomplishment (incrementalism). In science, a critical path constitutes a series of tests or analyses crafted in such a succession hierarchy as to produce a constrained and deductive incremental answer. In the philosophy of logic, a critical path is the assembly of prior art foundational modus ponens or modus tollens arguments of logical calculus, which lead to sound basis for a greater incremental truth conjecture. Finally, in systems engineering, a critical path is the chain of interdependency of necessary and sufficient tasks, arranged in their most elegant progression, which leads to accomplishment of an incremental process step or overall planned goal.

The features of a critical path of logic involve the following:

  • a preselected and interdependently ordered chain of incremental tasks, experiments or arguments
  • an elegant pathway of progress, being defined as resource efficiency, plenary completeness and expediency, employed in ethical balance
  • a reliance upon prior art, a foundation of previous steps/logics which imbues soundness
  • is incremental in the nature of each critical step
  • employs a mathematical pathway of logical reduction
  • constrains in order to iteratively and convergently test critical pairs of modus ponens conjecture
  • tests in such a fashion as to maximize probative potential and either an unfolding intelligence structure or unifinality
  • employs feedback from intelligence selected early testing, to modify downline testing steps
  • is deductive in as much as is possible, versus other forms of inference
  • and does not wander aimlessly in testing, rather employs necessity and intelligence to strike a more elegant path to unifinality.
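
For contrast with the engineering variant mentioned in the note above, here is a minimal sketch of classic critical-path-method scheduling in Python – a forward and backward pass over a task network. The task names and durations are invented for illustration; the point is simply that the zero-slack chain is what the engineer calls the critical path.

```python
from graphlib import TopologicalSorter

def critical_path(tasks):
    """tasks: {name: (duration, [dependencies])} -> (total duration, critical tasks)."""
    order = list(TopologicalSorter({t: deps for t, (_, deps) in tasks.items()}).static_order())
    es, ef = {}, {}                        # earliest start / finish (forward pass)
    for t in order:
        dur, deps = tasks[t]
        es[t] = max((ef[d] for d in deps), default=0.0)
        ef[t] = es[t] + dur
    makespan = max(ef.values())
    lf, ls = {}, {}                        # latest finish / start (backward pass)
    for t in reversed(order):
        dur, _ = tasks[t]
        succs = [s for s, (_, deps) in tasks.items() if t in deps]
        lf[t] = min((ls[s] for s in succs), default=makespan)
        ls[t] = lf[t] - dur
    critical = [t for t in order if abs(ls[t] - es[t]) < 1e-9]   # zero slack
    return makespan, critical

# Hypothetical task network: name -> (duration in weeks, prerequisites)
plan = {
    "observe":   (2, []),
    "intel":     (3, ["observe"]),
    "necessity": (1, ["intel"]),
    "test_A":    (4, ["necessity"]),
    "test_B":    (2, ["necessity"]),
    "publish":   (1, ["test_A", "test_B"]),
}
print(critical_path(plan))
# -> (11, ['observe', 'intel', 'necessity', 'test_A', 'publish'])
```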

My Example

Of course this does not equate exactly to a logical critical path example, but it does combine systems planning and science into one discipline of critical logic. Consider an example drawn from a classified lab I managed in the past – the specifics of which are unimportant, save to say that we were comparing the compatibility of various transition metals as to their lattice substitution tolerance. In an effort to circumvent testing 9 different metals, under three different parameters, with 3 settings to each parameter (just to start off), which would involve 81 peer parallel experiments and take 8 months of costly and valuable reactor time, we decided to shortcut this by focusing instead on comparatives along one index. In theory, if we did not find probative advantage early on, we could end up having to pursue all 81 orthogonal study permutations. But by focusing on one index, running testing based on that one input and one setting in order to home in on our most probable candidate, we were able to develop intelligence around the performance of various materials – which would accelerate our prosecution of the broader question (Q3301). The 3301 testing series related to the compatibility of niobium diboride in this role, as compared to a variety of other materials. Previous testing had shown niobium to bear significant advantage in both substitution and even some issues of interstitial phase displacement. We used this intelligence to our elegance advantage, if you will – by focusing on the most critical components of the broader question, 3301.

Before we simply shotgun-tested every parameter and combination, we did some quick up-front comparatives with our most promising element, which allowed us to focus on the core issues and cut out 85% of the 81 permutations of test necessary in answering this one question alone. This, because there was a series of at least 35 more questions necessary to answer before we could approach this material as a technology, and not just a science.

The essence of this approach was to establish a path of testing which was focused on a guess (constraint), and then employ early results to modify the number of downline tests necessary (intelligence). Wolfram’s path convergence.
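
To make the arithmetic of the example concrete, here is a deliberately toy sketch of that screening logic in Python. Every name, the figure-of-merit function and the run counts are stand-ins invented for illustration – not the actual Q3301 data – but the shape of the saving is the same: one probative screening index first, then the full grid only for the front-runner.

```python
import itertools, random

random.seed(3301)   # arbitrary seed, a nod to the Q3301 series

# Hypothetical stand-ins: 9 candidate metals and a 3-parameter, 3-setting grid
# (one run per metal per parameter/setting pair = 81 runs brute force).
metals = ["Nb", "Ti", "Zr", "V", "Ta", "Mo", "W", "Hf", "Cr"]
grid = list(itertools.product(range(3), range(3)))          # (parameter, setting)

def run_experiment(metal, param, setting):
    """Placeholder for a costly reactor run; returns a made-up figure of merit."""
    return random.random() + (1.5 if metal == "Nb" else 0.0)  # fake built-in advantage

brute_force_runs = len(metals) * len(grid)                   # 81

# Critical-path style: screen every metal on ONE probative index first ...
screen_index = (0, 0)
screen = {m: run_experiment(m, *screen_index) for m in metals}          # 9 runs
front_runner = max(screen, key=screen.get)

# ... then spend the remaining budget only on the front runner.
follow_up = {(front_runner, p, s): run_experiment(front_runner, p, s)
             for p, s in grid if (p, s) != screen_index}                # 8 runs

total_runs = len(screen) + len(follow_up)
print(front_runner, total_runs,
      f"{1 - total_runs / brute_force_runs:.0%} of brute-force runs avoided")
```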

In other words, we pursued a relational dynamic, converging the schedule of events inside our testing to become congruent with the unfolding of the critical logic inside the discovery. This is the process of developing and using intelligence – wherein our method made our guesses get better and better, very fast. Some scientists even bristled over this. We pursued what was probative as a priority, not simply what was methodically reliable.

We were not afraid to be wrong – we feared being not informative, even more.

Now this is but one version of a critical path – those of pure scheduling and of logic may differ in structure, but not really in ethic. They all focus on what is important and critical in attaining the goal, based upon advanced observations and only that information necessary in getting to the objective. Nothing more. This is the essence of a critical path.

The faking skeptic will toss out every manner of ignoratio elenchi and ingens vanitatum. They want to look the part, and enjoy being praised as scientific. They have no tolerance for wrongness, because being informative is not their primary goal. Be wise to this.

epoché vanguards gnosis

How to MLA cite this blog post =>
The Ethical Skeptic, “The Role of Critical Path in Logic, Systems and Science” The Ethical Skeptic, WordPress, 25 March 2018, Web; https://wp.me/p17q0e-7mp

March 25, 2018 | Ethical Skepticism

Diagnostic Habituation Error and Spotting Those Who Fall Its Prey

Diagnostic methods do not lend themselves to discovery goals; only to conclusions. The wise skeptic understands the difference in mindset of either approach, its value in application, and can spot those who fall prey to diagnostic habituation; hell bent on telling the world what is and what is not true.

Linear diagnostic thinkers tend to regard that one must ‘believe’ in or have scientific proof of their idea prior to conducting any research on it in the first place. They will not state this, however – watch carefully their illustration of applied scientific methodology. A bias towards prescriptive conclusions, an obsession over beliefs and enemies, and a demand for proof as the first step of the scientific method will eventually broach in their worn-out examples of poorly researched 1972 Skeptic’s Handbook bunk exposé.
When Lab Coats Serve to Deceive Self

Nickell plating is the method of twisted thinking wherein one dons lab coats and the highly visible implements of science in order to personally foist a display of often questionable empirical rigor. In a similar fashion, lab coats can also be used to deceive self, if one does not “live the examined life” as cited in the Socratic Apology (38a) context. Diagnostic Habituation Error is a very common judgement error in scientific methodology, often committed by professionals who work in very closed-set domains – realms which involve a high degree of linear thinking, or matching of observation (or symptom) to prescriptive conclusion. The medical field is one such discipline set, inside of which many professionals become blinded by protocols to such an extent that they fail to discern the more complex and asymmetrical demands of science in other disciplines.

For instance, medical diagnosticians use a term called a SmartPhrase in order to quickly communicate and classify a patient medical record entry for action or network flagging. A SmartPhrase is an a priori diagnostic option, a selection from cladistic clinical language, used in patient Electronic Health Record (EHR) documentation. While its intent was originally to compress and denote frequently used language, it has emerged as a de facto diagnostic option set as well. Wittgenstein would be nodding his aged head at this natural evolution. The nomenclature and diagnostic option set afforded make life immersed inside Electronic Health Records easier for physicians. It makes science easier – but it comes at a cost as well. A cost which the diagnostician must constantly bear in mind.

Not all sciences are like diagnostic medicine and astronomy. Most are vastly more complex in their deontologically reductive landscape. Diagnostician’s Error is the failure to grasp this.

It would not constitute a far stretch of the imagination to understand why a clinical neurologist might not understand the research complexity or sequencing entailed in scientifically identifying a new species, assessing the impact of commodities on economics and poverty, or discovering a new material phase state. Despite their scientific training, they will habitually conclude that no such new species/state exists because the traps we set for them are empty, that our observations must have come from flawed observational memory, or that the textbook doctrine on ‘supply and demand’/’elastic and inelastic’ demand curves applies to our situation. Diagnostics, in the end, do not lend themselves to discovery. This is why it is all too common to observe clinical diagnosticians in Social Skeptic roles, denying the existence of this or that, or pooh-poohing the latest efforts on the part of the public to use integrative medicine. These ‘skeptics’ comprehend only an abbreviated and one-dimensional, linear version of the scientific method – if they apply any at all. In diagnostics, and in particular inside of diagnostic and clinical medicine (not medical research), the following compromises to the scientific method exist:

  • symptom eventually equals previously known resolution
  • only the ‘most likely’ or ‘most risk bearing’ alternatives need be tested
  • very little need for discovery research
  • absence of evidence always equals evidence of absence
  • only lab experimental testing is valid
  • single parameter measure judgements are employed with abandon
  • the first question asked is an experiment, little advance thought is required
  • the first question presumes a whole domain of familiar ‘known’
  • the intelligence research has already been completed by others – and is assumed comprehensive
  • necessity observation is done by the patient (but is discounted in favor of experiment)
  • Ockham’s Razor involves fixed pathways
  • the set of possible outcomes is fixed and predetermined
  • an answer must be produced at the end of the deliberative process

The key, for The Ethical Skeptic, is to be able to spot those individuals who not only suffer from forms of Diagnostic Habituation, but also have a propensity to enforce the conclusions from such errant methodology and thinking on the rest of society. Not all subjects can be resolved by diagnostics and linear thinking. This form of thinking usually involves avoiding a rigor called a logical calculus, in favor of something called abductive (or simplest explanation) reasoning. If one is not absolutely sure that the domain inside of which they are working is truly abductive (disease is 98% abductive – the rest of our reality is not), then it would be an error to use this type of reasoning universally as one’s approach to broader science. One does not research anomalous phenomena by using abductive reason, for instance, because abductive logical inference was what kept the world locked in religious and Dark Age understandings of our realm for so long.

Reasoning Types†

Abductive Reason (Diagnostic Inference) – a form of precedent based inference which starts with an observation then seeks to find the simplest or most likely explanation. In abductive reasoning, unlike in deductive reasoning, the premises do not guarantee the conclusion. One can understand abductive reasoning as inference to the best known explanation.

Strength – quick to the answer. Usually a clear pathway of delineation. Leverages strength of diagnostic data.

Weakness – Uses the simplest answer (ergo most likely). Does not back up its selection with many key mechanisms of the scientific method. If an abductive model is not periodically tested for its predictive power, such can result in a state of dogmatic axiom.

Inductive Reason (Logical Inference) – is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument may be probable, based upon the evidence given combined with its ability to predict outcomes.

Strength – flexible and tolerant in using consilience of evidence pathways and logical calculus to establish a provisional answer (different from a simplest answer, however still imbuing risk into the decision set). Able to be applied in research realms where deduction or alternative falsification pathways are difficult to impossible to develop and achieve.

Weakness – can lead research teams into avenues of provisional conclusion bias, where stacked answers begin to become almost religiously enforced, until a Kuhn paradigm shift or the death of the key researchers involved is required to shake science out of its utility blindness on one single answer approach. May not have examined all the alternatives, because of pluralistic ignorance or neglect.

Deductive Reason (Reductive Inference) – is the process of reasoning from one or more statements (premises) to reach a logically certain conclusion. This includes the instance where the elimination of alternatives (negative premises) forces one to conclude the only remaining answer.

Strength – most sound and complete form of reason, especially when reduction of the problem is developed, probative value is high and/or alternative falsification has helped select for the remaining valid understanding.

Weakness – can be applied less often than inductive reason.
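
To make the contrast above concrete, here is a toy sketch in Python – the maladies, base rates and symptom profiles are entirely made up – showing how the same evidence is handled abductively (jump to the precedent ‘most likely’ answer) versus deductively by elimination (discard whatever the evidence falsifies, and keep what survives).

```python
# Hypothetical hypothesis set: invented maladies, base rates and symptom profiles.
hypotheses = {
    "common_cold": {"prior": 0.70, "predicts": {"cough", "congestion"}},
    "influenza":   {"prior": 0.25, "predicts": {"cough", "fever", "aches"}},
    "novel_agent": {"prior": 0.05, "predicts": {"cough", "fever", "rash"}},
}
observed = {"cough", "fever", "rash"}

# Abductive (diagnostic) inference: pick the most likely precedent explanation
# that matches at least part of the picture.
abductive = max(hypotheses,
                key=lambda h: hypotheses[h]["prior"]
                if observed & hypotheses[h]["predicts"] else 0.0)

# Deductive inference by elimination: discard every hypothesis which fails to
# account for something actually observed; whatever survives is forced.
surviving = [h for h, spec in hypotheses.items() if observed <= spec["predicts"]]

print(abductive)   # -> 'common_cold'   (the precedent answer; wrong here)
print(surviving)   # -> ['novel_agent'] (the only hypothesis not eliminated)
```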

Diagnostic Habituation Error

/philosophy : science : method : linear diagnostics : unconscious habituation/ : the tendency of medical professionals and some linear thinkers to habitually reduce subjects of discourse inside protocols of diagnosis and treatment, when not all, or even most, fields of discourse can be approached in this manner. Diagnosis must produce an answer; it is performed inside a closed set of observational data and constrained fields of observation (e.g. the 1,500 most common human maladies), is convergent in model nature, tends to increasing simplicity as coherency is resolved, and develops answers which typically select from a closed field of prescriptive conclusions. All of these domain traits are seldom encountered in the broader realms of scientific research.


Detecting a Linear Diagnostic Thinker – Habituated into Selecting From a Prescriptive Answer Inventory

They tend to think that one must ‘believe’ in or have scientific proof of their idea prior to conducting any research on it in the first place. They will not state this, however – watch carefully their illustration of applied scientific method. They will rarely grasp an Ockham’s Razor threshold of plurality, nor understand its role; obsessively clinging to the null hypothesis until ‘proof’ of something else arrives – gaming method, knowing full well that ‘proof’ seldom arrives in science.

The determination of a diagnosis of inherited static encephalopathy may be a challenging endeavor at first, and indeed stands as a process of hypothesis reduction and science. However, this reduction methodology differs from the broader set of science, and in particular discovery science, in that it features the following epignosis characteristics. The problem arises when fake skeptics emulate the former process and advertise that its method applies to their ability to prescriptively dismiss what they do not like.

A key example of applied Diagnostic Habituation Error can be found here. It is an elegant demonstration of how well-applied diagnostic methodology inside a clinical technical role can serve to mislead its participant when applied in the broader realms of science. This treatise exhibits a collegiate-level dance through repetitious talk about method, parlaying straight into sets of very familiar, poorly researched canned conclusions, excused by high-school-level pop-skeptic dogma. Worn-out old propaganda about how memory is fallible if we don’t like its evidence, and how, if you research anything forbidden, your mind is therefore ‘believing’ and playing tricks on its host.

Diagnosis Based Science (How it differs from the broader set of science reduction and discovery)

Observational Domain is Set, Experimental Only and Controlled – the human body is the domain, and the set of observable parameters is well established, known, and relatively easily (and exclusively) measured.

Example: Observable parametrics in the human body consist of blood measures, skin measures, neurological signals, chemical signatures and hormone levels, physical measures, and those measures which can be ascertained through bacteriology and virology. In medicine, the scientific method starts there. In discovery science, method does not start with an experiment; it starts with observation, necessity and intelligence. Despite the complexity which is inherent inside these observational domains, still the set is highly restricted, and the things observed for are well known for the most part. In contrast, examining the galaxy for evidence of advanced life will be a long, poorly understood and failure-laden pathway. We cannot begin this process with simply the Drake Equation and an experiment, and hope to have success.

Field of Observation is Constrained and Well Established with Gnosis Background – there are only a few subset disciplines inside which observations can be made. Each is well documented, and for each there is published a guiding set of protocols, advisement, and the most recent knowledge base regarding that discipline.

Example: There exists only a closed set of systems inside the human body, which are for the most part well understood in terms of dysfunction and symptom: the integumentary, skeletal, nervous, cardiovascular, endocrine and muscular systems. Compare this to the energy systems which regulate our planetary environment. Most are not well understood in terms of impact, and we are not even sure how many constitute the major contributors to climate impact, or even how to measure them. I am all behind science on Climate Change, but in no way do I regard the discipline as a diagnostic field. I am wary of those who treat it as such. And they are many.

Fixed Ockham’s Razor Mandate, Single Hypothesis and Extreme Null Evidence Standards – The protocols of diagnosis always dictate that the most likely or danger-entailing explanation be pursued in earnest, first and only. Once this hypothesis has been eliminated, only then can the next potential explanation be pursued. Absence of evidence is always taken as evidence of absence. This is not how discovery or asymmetric science works. Very rarely is diagnostic science challenged with a new medical discovery.

Example:  When a 55 year old patient is experiencing colon pain, the first response protocol in medicine is to order a colonoscopy. But in research regarding speciation for instance, or life on our planet, one does not have to solely pursue classic morphology studies to establish a phylogeny reduction. One can as well simultaneously pursue DNA studies and chemical assay studies which take a completely different tack on the idea at hand, and can be used to challenge the notion that the first phylogeny classification was a suitable null hypothesis to begin with. Real research can begin with several pathways which are in diametric opposition.

Diagnoses are Convergent in Nature – the methods of reduction in a diagnosis consistently converge on one, or maybe two, explanatory frameworks inside a well-known domain of understanding. In contrast, the broader world of modeling results in convergent models very rarely; more often it results in non-discriminating or divergent models which require subjective reasoning to augment them in terms of a decision process (if a decision is chosen at all).

Example:  If I have a patient complaining of tinnitus, my most complex challenge exists on the first day (in most cases). I am initially faced with the possible causes of antibiotics effects, hearing loss, intestinal infection, drug use, excessive caffeine intake, ear infections, emotional stress, sleep disorder or neurological disorder. From there evidence allows our models to converge on one optimal answer in short order in most cases.  Compare in contrast an attempt to discern why the level of poverty in a mineral rich country continues to increase, running counter to the growing GDP derived through exploitation of those minerals. The science and models behind the economics which seek to ascertain the mechanisms driving this effect can become increasingly divergent and subjective as research continues.

Tendency is Towards Increasing Simplicity as Coherency is Resolved – medical diagnoses tend to reduce information sets as coherency is attained and focus on one answer.  Please note that this is not the same as reducing complexity.  The reduction of complexity is not necessarily a scientific goal – as many correct solutions are indeed also inherently complex.

Example: As I begin to diagnose and treat a case of Guillain–Barré syndrome, despite the initial chaos which might be entailed in symptom and impact mitigation, or the identification of associated maladies – eventually the patient and doctor are left with a few reduced and very focused symptomatic challenges which must be addressed. CNS impacts, nerve damage, allergies and any residual paralysis – eventually the set of factors reduces to a final few. In contrast, understanding why the ecosystem of the upper Amazon is collapsing, despite the low incidence of human encroachment, is a daunting and increasingly complex challenge. Its resolution may require much out-of-the-box thinking on the part of researchers who constantly exhaust multiple explanatory pathways and cannot wait for each one-by-one prescriptive solution or explanation, or a null hypothesis, to ‘work itself out’ over 25 years.

Selects from Solutions Inside a Closed Field of Prescriptive Options – Almost all medical diagnoses are simply concluded from a better or lesser known, but known, set of precedent solutions from which decisions can be made and determinations drawn. This is not the case with broader-scope or discovery science.

Example: In the end, there is only a set of about 1,500 primary diseases from which we (most of the time) can regularly choose in order to diagnose a set of symptoms, most with well-established treatment protocols. Contrast this with the World Health Organization’s estimate that over 10,000 monogenic disorders potentially exist.¹ The research task entailed inside monogenic nucleotide disorders is skyrocketing and daunting. This is discovery science. The diagnosis of the primary 1,500 human diseases is not. Different mindsets will be needed to approach these very different research methodologies.

An Answer Must be Produced or We Fail – 100% of diagnostic processes involve the outcome of a conclusion. In fake skepticism, of course, the participants are rife with ‘the answer which has the greatest likelihood of being true’ type baloney. To the Diagnostic Habituated fake skeptic, an answer has to be produced – NOW. But in a discovery process, we do not necessarily have to have an answer or disposition on a subject. Be very wary of those who seem to force answers and get angry when you do not adopt their conclusion immediately. Be wary of those who have an answer for all 768 entries in The Skeptic’s Dictionary (including those who wrote the material). They are not conducting a scientifically reliable or honest exercise; rather they are simply propping up a charade which seeks to alleviate the mental dissonance stress from something larger which disturbs them greatly. See Corber’s Burden: as one claims to be an authority on all that is bunk, their credibility declines in hyperbolic inverse proportion to the number of subjects in which authority is claimed.

Example: If one is experiencing pain, for the most part both the patient and the researcher will not stop until they have an answer. A conclusive finish to the pain itself is the goal, after all, and not necessarily some greater degree of human understanding. Contrast this with the grand mysteries of the cosmos. We do not yet have an answer to the Sloan Digital Sky Survey “Giant Blob” Quasar Cluster² observation, and to how it could easily exist under current understandings of classical cosmology or M-theory. We have to await more information. No one has even suggested forcing an ‘answer which has the greatest likelihood of being true.’ To do so would constitute pseudoscience.

It is from this constrained mindset that the Ethical Skeptic must extract himself or herself, in order to begin to grasp why so many subjects are not well understood, and why we must think anew in order to tackle the grander mysteries of our existence. The more we continue to pepper these subjects with prescripted, habituated diagnoses, the more those who have conducted real field observation will object. We have well observed the falling back on the same old ‘conspiracy theorist’ pejorative categorization of everyone who disagrees with the diagnoses proffered by these linear thinkers. It is not that a diagnostic approach to science always produces an incorrect answer. But if we follow simply the error of diagnostic habituation, then let’s just declare that mind ≡ brain right now, and we can close up shop and all go home. And while I might not bet against that theory were it on the craps table in Vegas and I were forced to make a selection now, neither am I, in an ethical context, ready to reject its antithesis simply because some diagnostic linear thinkers told me to.

I am an Ethical Skeptic: I don’t reject your idea as false, but I await more information. As a discovery researcher I refuse to simply accept your diagnostically habituated ‘critical thinking’, nor its identification of which answer the constrained set has shown is ‘most likely true.’

That is not how real skepticism and real science work.


¹  World Health Organization, “Genes and Human Disease,” Genomic Resource Center, Spring 2015; http://www.who.int/genomics/public/geneticdiseases/en/index2.html.

² The Biggest Thing in the Universe, National Geographic; January 11, 2013, National Geographic Society;  http://news.nationalgeographic.com/news/2013/01/130111-quasar-biggest-thing-universe-science-space-evolution/.

† Abductive, Inductive, and Deductive Reason definitions – are modified from their approximate definitions provided by Wikipedia, in its series on reasoning and logical inference.

https://en.wikipedia.org/wiki/Abductive_reasoning

https://en.wikipedia.org/wiki/Inductive_reasoning

https://en.wikipedia.org/wiki/Deductive_reasoning

May 14, 2015 | Argument Fallacies
