“You ever hear the expression ‘Simplest answer’s often the correct one?’ ”
“Actually, I’ve never found that to be true.”
– Gone Girl, 2014¹
Indeed, the actual role of Ockham’s Razor, the real scientific principle, is to begin the scientific method, not complete it in one fell swoop. Rational Thinking under Ockham’s Razor (i.e. science) is the demonstrated ability to handle plurality with integrity.
FALSE ONE-LINER which is NOT OCKHAM’S RAZOR:
“All things being equal, the simplest explanation tends to be the correct one.”
The above statement is NOT Ockham’s Razor. It is a sleight-of-hand expression (called by SSkeptics, “Occam’s Razor”) used to squelch disdained topics which would otherwise be entertained for research by Ethical Skepticism. The weakness of the statement resides in the philosophical principle that the simplest answer is typically the one which falls in line with the pre-cooked assumptions. Moreover, implicit within this statement is the claim that all data and observations must immediately be ‘explained,’ so that a disposition can be assembled a priori and anecdotally, as a means of short-circuiting the data aggregation and intelligence development steps of science. In these two ways, the statement is employed to obfuscate and abrogate the application of the scientific method. This trick is a common sales technique, having little to do with rationality. Don’t let your integrity slip to the point where you catch yourself using it to deceive others. The two formal fallacies introduced via this errant philosophy are:
Transactional Occam’s Razor Fallacy
The false contention that a challenging claim or observation must immediately be ‘explained.’ Sidestepping of the data aggregation and intelligence steps of the scientific method. The boast of claiming to know which question should be asked under the scientific method.
Existential Occam’s Razor Fallacy
The false contention that the simplest explanation tends to be the scientifically correct one. Suffers from the weakness that myriad and complex underpinning assumptions, all of which tender the appearance of ‘simplicity,’ have not been vetted by science.
Indeed, the actual role of Ockham’s Razor, the real scientific principle, is to begin the scientific method, not complete it in one fell swoop of denial.
“Pluralitas non est ponenda sine necessitate” or “Plurality should not be posited without necessity.”
The words are those of the medieval English philosopher and Franciscan monk William of Ockham (ca. 1287-1347).²
This apothegm simply means that, until we have enough evidence to compel us, science should not invest its resources into outside theories. Not because they are false or terminally irrelevant, but rather because, existentially, they are unnecessary in the current incremental discourse of science. But Ockham’s Razor most importantly also means that once there exists a sufficient threshold of evidence to warrant attention, then science should seek to address the veracity of an outside claim. Plurality is a condition of science which is established by observations and sponsorship, not by questions, peer review or claims. To block the aggregation and intelligence of this observational data, or to attempt to filter it so that all data are essentially relegated as fiat anecdote, is pseudoscience. It is fraud, and is the chief practice of those in the Social Skepticism movement today. The claim of “Prove it” – or the Proof Gaming Fallacy – embodies this fundamental misunderstanding of Ockham’s Razor on the part of those who have not pursued a rigorous philosophical core inside their education.
This threshold of plurality and, in contrast, the ‘proof’ of an idea are not the same standard of data, testing and evidence. Muddying the two contexts is a common practice of deception on the part of SSkeptics. Proof is established by science; plurality is established by sponsors. SSkeptics regard Ockham’s Razor as a threat to their religion, and instead quote the false substitute above, which, while sounding similar and ‘sciencey,’ does not mean the same thing at all. It is an imposter principle which seeks to blur the lines around, and prevent competing ideas from attaining, this threshold of plurality and attention under the scientific method. Their agenda is to prohibit ideas from attaining this threshold at ANY cost. This effort to prohibit an idea its day in the court of science constitutes, in itself, pseudoscience.
This method of pseudoscience is called the DRiP Method.
Misuse of “Occam’s” Razor to effect Knowledge Filtering
One of the principal techniques, if not the primary technique, of the practitioners of thought control and Deskeption is the unethical use of Knowledge Filtering. The core technique involves the misuse of Ockham’s Razor as an application to DATA and not to competing thought Constructs. This is a practice of pseudoscience and is, in its essence, dishonesty.
Ockham’s Razor, or the discernment of Plurality versus Singularity in terms of competing Hypotheses, is a useful tool in determining whether Science should be distracted by bunk theories which would potentially waste everyone’s time and resources. Data on the other hand is NOT subject to this threshold.
By insisting that observations be explained immediately, and by rejecting a datum on the idea that it introduces plurality, one effectively ensures that no data will ever be found which produces a competing Construct. You will in effect perpetually PROVE only what you are looking for, or what you have assumed to be correct. No competing idea can ever be formulated, because outlier data is continuously discarded immediately, one datum at a time. This process of singularly dismissing each datum in a series of observations, which would otherwise constitute data collection in an ethical context, is called “Knowledge Filtering,” and it stands as a key step in the Cultivation of Ignorance, a practice on the part of Social Skepticism. It is a process of screening data before it can reach the body of non-expert scientists. It is a method of squelching science in its unacknowledged steps of process, before it can gain a footing inside the body of scientific discourse. It is employed in the example graphic to the right, in the center, just before the step of employing the ‘dismissible margin’ in Social Skepticism’s mismanagement of scientific consensus.
Plurality is a principle which is applied to Constructs and Hypotheses, not Data.
I found a curious native petroglyph once while on an archaeological rafting excursion, one which was completely out of place, but whose ochre had been dated to antiquity. I took a photo of it to the state museum and was unable to find the petroglyph in the well documented library of Native American glyphs. I found all the glyphs to the right and all the glyphs to the left of the curious one. However, the glyph in question had been excluded from the state documentation work performed by a local university professor. The museum director, when I inquired, replied appropriately, “Perhaps the glyph just didn’t fit.” He had hit the nail on the head. By “Occam’s” Razor, the professor had been given tacit permission to filter the information out from the public database, effectively erasing its presence from history. He did not have to erase the glyph itself, rather simply erase the glyph from the public record, our minds and science – and excuse it all as an act of ‘rational thinking.’
The Purpose of Ockham’s Razor is to BEGIN the scientific method, not screen data out and finish it.
Data stands on its own. When found in abundance, or even sometimes when found in scarcity, and not eliminated one datum at a time by the false anecdotal application of “Occam’s” Razor, data can eventually be formulated into a Construct, which then will vie for plurality under the real Ockham’s Razor: a useful principle of construct refinement, prior to testing, under the scientific method.
The Role of Ockham’s Razor: To Screen for Plurality at the Beginning of the Scientific Method
3. INTELLIGENCE/AGGREGATION OF DATA (The Three Key Questions)
4. CONSTRUCT FORMULATION
5. SPONSORSHIP/PEER INPUT (Ockham’s Razor)
6. HYPOTHESIS DEVELOPMENT
7. PREDICTIVE TESTING
8. COMPETITIVE HYPOTHESES FRAMING (ASKING THE RIGHT QUESTION)
9. FALSIFICATION TESTING
10. HYPOTHESIS MODIFICATION
11. FALSIFICATION TESTING/REPEATABILITY
12. THEORY FORMULATION/REFINEMENT
13. PEER REVIEW (Community Vetting)
Map of Deskeption 7.8.1: Misuse of “Simplicity”
Think through this comparative and your ethics will begin to change. Simplicity can be a deceptive tactic, when used to obfuscate:
1) Simple explanations have complex underpinnings. Our “simple” explanations are only simple because we choose to contort reality in extremely exhaustive complexities in order to force simplicity. You can lead a horse to water, but you can’t make him drink. But if he does drink, that is not a simple action by any means; it may appear to be simple, but that is an illusion on the part of the casual observer. Simplicity, many times, is only an illusion.
2) Beware of the tyranny of the simple. Simplicity as a principle of discretion is best suited for the clear application of judgments and governances, and as such is usually based on sets of laws and procedure which change only slowly and under great necessity. Laws only change as men change, and men are slow to change. Because of this, laws of governance are always behind current understandings. Unassailable principles of governance have little place in discovery and science.
3) Simplicity conveys neither straightforwardness nor elegance, which are central tenets of understanding. “The simplest vehicle I know of is a unicycle. I’ll be damned if, after all these years of trying, I still have not managed to learn how to ride one.”
4) Simplicity implies that enough data exists to warrant a conclusion regarding an observation, then further implies that a disposition must be tendered immediately. Simplicity in this fashion is sold through construction of a false dilemma, a fatal logical fallacy.
When Rational Thinking becomes nothing more than an exercise in simply dismissing observations to suit one’s inherited ontology, then the entire integral will and mind of the individual participating in such activity has been broken.
¹ Gone Girl, screenplay by Gillian Flynn, motion picture by 20th Century Fox; September 26, 2014; David Fincher.
² The Stanford Encyclopedia of Philosophy, William of Ockham; http://plato.stanford.edu/entries/ockham/
If a skeptic contending an argument cannot cite exception observations to their case, does not practice an awareness of their own weaknesses, and possesses a habitual inability to hold a state of epoché and declare “I don’t know,” then be very wary. That person may be trying to sell you something, promote an agenda, or may even indeed be lying.
There are three indicators I look for in a researcher who is objective and transparent enough to be trusted in the handling of the Knowledge Development Process (Skepticism and Science). Their absence constitutes a set of warning flags to watch for, which I have employed in my labs and businesses for decades. They were hard-earned lessons about the nature of people, but have proven very helpful over the years.
1. The demonstrated ability to neutrally say “I do not have an answer for that” or “I do not know”
One of the ways to spot a fake skeptic – or, at a professional level, those who pretend to perform and understand science – is that the pretender tends to have a convenient answer for just about everything. Watch the Tweets of the top 200 narcissistic Social Skeptics and you will observe (aside from their all tweeting the exact same claims 5 minutes after the previous tweet of verbatim contention) the clear indication that there exists not one pluralistic topic upon which they are not certain of the answer. Everything has a retort, one-liner, or club recitation. Your dissent on the issue is akin to attacking them personally. They get angry and attack you personally as the first response to your objection. Too many people fall for this ruse. In this dance of irrationality passed off as science, factors of personal psychological motivation come into play and serve to countermand objective thinking. This can be exposed through a single warning flag indicating that the person you are speaking with might not be trustworthy. An acquaintance of mine, who is a skeptic, was recently faced with a very challenging, paradigm-shaking observation set. I will not go into what it was, as siding with issues of plurality is not the purpose of this blog. The bottom line was that as I contended, “Ken, this is an excellent opportunity to make observations around this phenomenon, to the yea or nay; damn, you are lucky,” he retorted, “No, I really would rather not. I mean, not only is it a waste of time, but imagine me telling the people at work ‘Oh, I just went to ___________ last night and took a look at what was happening.’ That would not go over well, and besides, it really is most likely [pareidolia] and will be a distraction.” Ken, in this short conversation, revealed an abject distaste for the condition of saying ‘I do not know.’ His claim to absolute knowledge on this topic over the years revealed itself to be simply a defense mechanism motivated by fear. Fear of the phenomena itself.
Fear of his associates. Fear of the state of unknowing.
It struck me then, that Ken was not a skeptic. It was more comfortable to conform with pat answers and to pretend to know. He employed SSkepticism as a shield of denial, in order to protect his emotions and vulnerabilities. The Ten Pillars hidden underneath the inability to say “I do not know” can serve as the underpinning to fatal flaws in judgement. Flaws in judgement which might come out on the battlefield of science, at a time most unexpected. Flaws in judgement which might allow one to fall prey to a verisimilitude of fact which misleads one into falsely following a SSkeptic with ulterior motives or influences, were you not the wiser.
The Ten Pillars which hinder a person’s ability to say “I don’t know” are:
I. Social Category and Non-Club Hatred
II. Narcissism and Personal Power
III. Promotion of Personal Religious Agenda
IV. Emotional Psychological Damage/Anger
V. Overcompensation for a Secret Doubt
VI. Fear of the Unknown
VII. Effortless Argument Addiction
VIII. Magician’s Deception Rush
IX. Need to Belittle Others
X. Need to Belong/Fear of Club Perception
These are the things to watch out for in considering the advice of someone who is promoting themselves as a skeptic, or even as a science professional. One key little hint is to be observed in the ability of an individual to dispassionately declare “I don’t know” or “I do not have an answer for that” on a challenging or emotional subject.
2. A balanced perspective CV of pertinent past personal mistakes and what one has learned from them
I cringe when I review my past mistakes, both personal and professional. My cringing is not so much over mistakes from lack of knowledge – as in the instance of presuming, for example, that a well recognized major US Bank would be ethical and not steal for themselves and their hedge/mutual fund cronies 85% of my parents’ retirement estate once it was put into a Crummey Irrevocable Trust. Some mistakes come from being deceived by others, or from a lack of knowledge, or an inability to predict markets. But what is more important in this regard is both the ethical recall of, and recollection of how we handle, mistakes we commit from missteps in judgement. Those are the anecdotes I maintain on my mistake resume. For instance, back in the ’90s I led a research team on a long trek of predictive studies supporting a particular idea regarding the interstitial occupancy characteristics of an alpha phase substance. I was so sure of the projected outcome that my team missed almost a year of excellent discovery and falsifying work, which could have shed light on this interstitial occupancy phenomenon. It was a Karl Popper moment: my gold fever over a string of predictive results misled my thinking into a pattern of false understanding as to what was occurring in the experiments. It was only after a waste of time and a small sum that my colleague nudged me with a recommendation that we shift focus a bit. I had to step off that perch, and realize that a simple falsification test could have elicited an entirely new, more accurate and enlightening pathway. Again I did this in an economic analysis of trade between Asia and the United States. I missed for months an important understanding of trade, because I was chasing a predictive evidence set and commonly taught MBA principles relating costs and wealth accrual in trade transactions.
I earned two very painful stripes on my arm from those mistakes, and today – I am keenly aware of the misleading nature of predictive studies – observing a SSkeptic community in which deception by such Promotification pseudo-science runs pandemic.
Mistakes made good are the proof in the pudding of our philosophy. I can satisfy my Adviser with recitations, or study the Stanford Encyclopedia of Philosophy for years; but unless I test at least some of the discovered tenets in my research life, I am really only an intellectual participant. Our mistakes are the hash marks, the stripes we bear on the sleeves of wisdom and experience. Contending that “Oh, I was once a believer and now I am a skeptic,” as does Michael Shermer in his heart-tugging testimony about his errant religious past and his girlfriend, is not tantamount to citing mistakes in judgement. That is an apologetic explaining how one does not make mistakes; only others do. This is not an objective assessment of self, nor the carrying of a CV of lessons learned. Michael learned nothing in the experience; rather, the story simply acts as a confirmation moral, a parable to confirm his religious club. Other people made the mistake; he just caught on to it as he became a member of the club. This type of ‘mistake’ recount is akin to saying “My only mistake was being like you.” That is not an objective assessment of personal judgement at all. It is arrogance, belied by a fake cover of objectivity and research curiosity.
Watch for this sign in lack of objectivity, especially the fake-mistake kind. It can serve to be extraordinarily illuminating.
3. A reasonable set of statistical outliers in their theory observation set
Outliers call out liars. This is a long-held, subtle tenet of understanding which science lab managers employ to watch for indications of possible groupthink, bias or bandwagon effect inside their science teams. Not that people who commit such errors are necessarily liars; but we are all prone to the habit of observational bias, the tendency to filter out information which does not fit our paradigm or objective of research. One way to spot this is to be alert for the presence and nature of, and method by which, exceptions and outliers are handled in the field of data being presented in an argument. Gregory Francis, professor of psychological sciences at Purdue University, has made a professional practice of critiquing studies in which the resulting signals are “too good to be true.” Francis has been known to critique psychology papers in this regard. His reviews are based on the principle that experiments, particularly those with relatively small sample sizes, are likely to produce “unsuccessful” findings or outlier data, even when the experiments or observations are supporting a credible and measurable phenomenon.¹ Now, as a professional who conducts hypothesis reduction hierarchy in his work, I know that it is a no-no to counter inception-stage falsifying work with countermanding predictive theory – a BIG no-no. This is a common tactic of those seeking to squelch a science. There is a time and a place in the scientific method for the application of predictive data sets, and that is not at the Sponsorship/Ockham’s Razor Peer Input step. In other words, I professionally get upset when a new science is shot at inception. I want to see the falsifying data, not the opposition’s opinion. So set aside the error Greg Francis is making here in terms of the scientific method inside epigenetics, and rather focus on the astute central point that he is making, which is indeed correct.
“If the statistical power of a set of experiments is relatively low, then the absence of unsuccessful results implies that something is amiss with data collection, data analysis, or reporting,” – Dr. Greg Francis, Professor of Psychology, Purdue University¹
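Francis’s principle reduces to simple arithmetic. If each of n independent experiments has statistical power p (the probability of detecting a true effect when one exists), the chance that every single experiment comes back “successful” is p raised to the n, which collapses rapidly for low-power work. A minimal sketch of that calculation, using illustrative numbers not drawn from any particular paper:

```python
# If each experiment detects a true effect with probability `power`
# (its statistical power), the chance that ALL n independent experiments
# succeed is power ** n. A published set of uniformly "successful"
# low-power experiments is therefore itself statistically improbable --
# Francis's warning flag.

def prob_all_significant(power: float, n_experiments: int) -> float:
    """Probability that every one of n independent experiments,
    each with the given statistical power, yields a significant result."""
    return power ** n_experiments

# Ten experiments at a typical power of 0.5: an all-significant run
# should occur in only about one in a thousand such sets.
p = prob_all_significant(0.5, 10)
print(f"P(10/10 significant at power 0.5) = {p:.6f}")  # prints 0.000977
```

So when a body of small-sample studies reports no misses at all, the absence of “unsuccessful” results is itself the outlier demanding explanation.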
An absence of outlier data can be a key warning flag that the person offering the data is up to something. They are potentially falling prey to an observer’s bias, some form of flaw in their positioning of the observation set, or even in purposeful deception – as is the case with many of our Social Skeptics. Indications that the person is adhering to a doctrine, a social mandate, a personal religious bias, or a furtive agenda passed to them by their mentors.
Be circumspect and watch for how a self purported skeptic handles outlier data.
- Do they objectively view themselves, the body of what is known, and the population of observations and data?
- Do they simply spin a tale of complete correctness without exception?
- Do they habitually begin explanatory statements with comprehensive fatalistic assessments or claims to inerrant authority without exception?
- Do they bristle at the introduction of new data?
- Do they cite perfection of results inside an argument of prospective or well established plurality?
- Do they even mock others who cite outlier data?
- Do they attack you if you point this out?
If so be wary. Michael Shermer terms this outlying group of countermanding ideas, data and persons the “Dismissible Margin.” It is what SSkeptics practice in order to prepare a verisimilitude of neat little simple arguments. These are key warning indicators that something is awry. If a skeptic cannot acknowledge their own biases, the outlier data itself, nor do they possess the ability to say “I don’t know;” they may very well be lacking in the ability to command objective thinking. They may very well be lying. Lying in the name of science.
¹ Epigenetics Paper Raises Questions, The Scientist, Kate Yandell, October 16, 2014; http://www.the-scientist.com/?articles.view/articleNo/41239/title/Epigenetics-Paper-Raises-Questions/
Once plurality has been argued, I would rather the Scientific Method show my premise to be false, than be told in advance what ‘critical thinking’ says.
The public at large, and our leaders cannot always envision this process of corruption. SSkeptics thrive inside and promote this general ignorance. It is up to people who have actually performed real science – to possess the responsibility of courage which is incumbent with the privilege of acumen; to stand up and say “No, I am not ready to dismiss this subject, just because you tell me to.”
Any man can be made to appear irrational and vile, if his enemies only are allowed to speak on his behalf.
Homeopathy, as defined by a particular series of strawman fallacies, is a completely ridiculous pseudoscience, yes. The problem is that the United States Food and Drug Administration, the manufacturers of homeopathic product and the industry in general do not define homeopathy by means of these strawman fallacies. Only Social Skepticism does. Why?
As it turns out, the claims from Social Skepticism that homeopathy is defined as employment of extreme dilutions, placebo, metaphysical entities, vital energies or adherence to antiquated science, all reveal themselves upon diligent investigation, to be false.
The Ethical Skeptic is forced to ask: Why? (Note: I am not a homeopathy proponent. I believe there should be one standard for OTC medicinals, and that efficacy (not safety) approval should come through a qualified consumer review panel – anonymous, so that it cannot be corrupted by SSkeptics and Big Pharma – and not have the process left to an overwhelmed FDA, or to prejudiced SSkeptic and Big Pharma lobby groups. The most informed researchers are moms and dads – we just need to harness the power of enough of their ethics and knowledge. And in today’s data and intelligence technology environment, this can be done. Unfortunately lobbyists, money and Social Skepticism are getting in the way of science.)
In order to give that inquiry justice, it will have to be the subject of another blog article altogether. But I suppose we will find a hint below with respect to the FDA’s 1962 delineation of HPUS qualified substances. An issue which begs the question, and forces us to examine the all-too-common milieu around SSkeptic definitions, and in particular theirs of ‘homeopathy’:
- Who Loses? and
- Who Benefits?
An exercise I conduct regularly in my advisement services with nations suffering from corruption and oppression. We have already published a blog study (see the graphic below) showing that homeopathic cold remedies sold in drug stores over the counter contained exactly the same ingredients and dosages as employed in the big pharma equivalents; so The Ethical Skeptic has suspected that some chicanery was afoot in the Social Skepticism movement regarding homeopathy. It appears to be a whipping-boy subject used to further a furtive oligarch agenda. The definitions below elicit this dishonesty and the specific fallacies involved:
From The Skeptic’s Dictionary, Social Skepticism’s definitions of homeopathy:¹
“…the less you use it, the stronger it gets.” – Phil Plait
“…helping the vital force restore the body to harmony and balance.”
“…involves the appeal to metaphysical entities and processes.”
“…trying to balance “humors” by treating a disorder with its opposite (allos).”
“…generally defined as a system of medical treatment based on the use of minute quantities of remedies…”
“…potency could be affected by vigorous and methodical shaking (succussion).”
“…succussion could release “immaterial and spiritual powers,” thereby making substances more active.”
“Homeopaths refer to “the Law of Infinitesimals”… The law of infinitesimals seems to have been partly derived from his notion that any remedy would cause the patient to get worse before getting better and that one could minimize this negative effect by significantly reducing the size of the dose. Most critics of homeopathy balk at this “law” because it leads to remedies that have been so diluted as to have nary a single molecule of the substance one starts with.”
“…homeopathic remedies work by altering the structure of water, thereby allowing the water to retain a “memory” of the structure of the homeopathic substance that has been diluted out of existence.”
“…homeopathic remedies, if effective, are no more effective than placebos.”
“…homeopaths believe, …that the [scientific practice of a] placebo-controlled randomised (sic) controlled trial is not a fitting research tool with which to test homeopathy.”
“…homeopathy claims a special exemption from the rules of logic and science…”
This array of mis-definitions and prejudicial portrayals suffers not only from the fact that they are incorrect – homeopathy, as defined by the professional organizations who regulate and manufacture in that industry (in other words the “authorities,” as SSkeptics like to cite), employs standard industry effective ingredients in standard OTC dilutions – but also from the employment of
6 fatal, non-trivial fallacies in the definition of homeopathy foisted by Social Skepticism
- a fallacy of composition in that one historical person or fringe idea is used as rationale to dismiss an entire equivocal subject
- a lie of equivocation, in employing a mis-definition to impugn a similarly titled topic ‘homeopathy’
- a characterization from a negative premise, in that they presume all users of the term ‘homeopathy’ practice pseudo science or adhere to anti-scientific agendas
- a fictus scientia fallacy, wherein one claims that science has tested industry defined homeopathy, when indeed it has only tested SSkeptic defined homeopathy
- a scarecrow error, in fabricating completely fictitious, ridiculous or expired beliefs as constituting the claims of a disdained group, in order to discredit the group
- an antiquing fallacy, citing a subject’s false, hoax-based or dubious past inside a set of well known anecdotal cases. Also the instance where a thesis is deemed incorrect because it was commonly held when something else, clearly false, was also commonly held.
But in the real world, where real professional business is done, it’s the same damn stuff, same damn substrate, same damn dilutions, same damn product – Just costs 15-55% less than the products pushed by the sponsors of Social Skepticism (see below).
And here is why. The United States Food and Drug Administration defines homeopathy² in its jurisdictional provisions under Section CPG 400.400 of its Compliance Policy Guidelines regarding homeopathy
The following terms are used in this document and are defined as follows:
1. Homeopathy: The practice of treating the syndromes and conditions which constitute disease with remedies that have produced similar syndromes and conditions in healthy subjects.
2. Homeopathic Drug: Any drug labeled as being homeopathic which is listed in the Homeopathic Pharmacopeia of the United States (HPUS), an addendum to it, or its supplements. The potencies of homeopathic drugs are specified in terms of dilution, i.e., 1x (1/10 dilution), 2x (1/100 dilution), etc. Homeopathic drug products must contain diluents commonly used in homeopathic pharmaceutics. Drug products containing homeopathic ingredients in combination with non-homeopathic active ingredients are not homeopathic drug products.
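The decimal potency notation in the FDA definition is plain arithmetic: each ‘x’ step denotes a further tenfold dilution, so an nX potency leaves 10⁻ⁿ of the original substance. A minimal sketch of that mapping (the function name is my own, for illustration; it is not FDA terminology):

```python
def x_potency_dilution(n: int) -> float:
    """Fraction of original substance remaining at an nX homeopathic
    potency: each X step is a 1/10 dilution, so nX -> 10**-n."""
    if n < 1:
        raise ValueError("potency must be a positive integer")
    return 10.0 ** -n

print(x_potency_dilution(1))  # 0.1  (1x = 1/10 dilution)
print(x_potency_dilution(2))  # 0.01 (2x = 1/100 dilution)
```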
According to the Homeopathic Pharmacopeia of the United States, a homeopathic drug does not have to involve the principle of dilution at all.³ In fact the seven criteria, recognized by the FDA and the HPUS, which selectively or collectively qualify a drug as being homeopathic are
1) US FDA compliant as safe and effective
2) prepared according to the specifications of the General Pharmacy
3) submitted documentation to USFDA and HPUS
4) drug proving and clinical verification in accordance with modern evolved scientific practices
5) substance was in use prior to 1962 – Note Public Domain Medicinal – ‘Who loses and who benefits?’
6) two adequately controlled double blind clinical studies
7) clinical experience or data documented in the medical literature
Not a single one of these requirements for listing on the HPUS involves or requires extreme dilutions, placebo, metaphysical entities, vital energy or adherence to antiquated science. Imagine that. The FDA and SSkepticism definitions do not match in the least. An all too common occurrence.
Again, I am not a proponent of homeopathy. I buy the product if it is a low cost alternative to the big pharma inflated equivalent. But given the rancor with which Social Skepticism deals with the subject, and the patterns of habitual corruption employed in their opposition, the natural question I then must ask is: Why?
¹ homeopathy, The Skeptic’s Dictionary, October 11 2014; http://skepdic.com/homeo.html. Note that the employment of the definition is shrouded in its recitation offing, yet the contention that this constitutes the definition of homeopathy remains logically clear once the text is removed of antiquing fallacy, misdirection and equivocation prejudice employed in the history of the subject outlined in the reference. In this context the ‘history of the subject’ is employed in a context in which no other definition is offered, and is employed in such a way to discredit the subject as a whole – an invalid form of argument – and thus, substantiates the argument’s employment as a method of tendering a definition.
² The United States Food and Drug Administration, Inspections Compliance Enforcement and Criminal Investigations Section “CPG Sec. 400.400 Conditions Under Which Homeopathic Drugs May be Marketed,” http://www.fda.gov/iceci/compliancemanuals/compliancepolicyguidancemanual/ucm074360.htm.
³ The HPUS Criteria for Eligibility, http://www.hpus.com/eligibility.php.