When Observation Gives Way to Data-Centric Only Science, We All Lose
Is the study which you are imperiously and gleefully foisting upon me a meta-analysis? Or is it muta-analysis? It behooves the ethical skeptic to be able to discern the difference. Data alone, does not a science make. Meta-analysis studies bear risks inherent in the study methodology which do not exist in Level I and II direct observation science studies. To ignore these risks is pseudoscience.
Are we relying too much upon meta-analysis and imparting too much gravitas to such study approaches on a large scale? The answer is yes, absolutely. Data is the management task of the technician. Even more, an excellent magician can blur the lines between data and method, introducing muta-science into the mix of contended data rigor. It is amazing to me that a study method, which does not allow for suitable replication or peer review, and which contains 21 elements of risk in series – is somehow regarded as the pinnacle of scientific rigor. How did we arrive at such a delusion? Obviously someone of celebrity merit said it and it was repeated from then on.
In contrast, observation remains the heartfelt journey of the scientist. It affords him or her the less common ability to detect bullshit-all-dressed-up-as-rigor.
This morning an excellent blog by The Neuroskeptic wafted across my Twitter screen. One which rekindled my interest in a topic upon which I have expounded with my science and engineering teams at various times over the years. The Neuroskeptic pointed out a migration of nomenclature and terminology utilized inside medical science paper titles over primarily the last 50 years. The graph to the right is extracted from the March 19th Discover Magazine Online blog article by The Neuroskeptic, addressing this issue.¹ In this blog, he points out the – information technology driven, yet nonetheless peculiar – migration from the term observation, to the term data inside published medical paper titles from 1915 to 2015. As you might infer from the graphic to the right, beginning with the advent of the IBM 1400 series common use mainframe in 1961,² until now, there has been an asymptotic decline in the use of observation as the basis of medical paper contention, in favor of data manipulation approaches. A certainly understandable trend. What I contend herein is not that the rise in data-analyses is an inappropriate trend inside science; rather that its employment in lieu of observation – and not as its complement – or its employment as the gold standard of scientific analysis, both bear pitfalls. Warning flags that science may be trending in directions which leave it vulnerable to manipulation and agenda. And while this graphic only pertains to paper titles, I think The Neuroskeptic has tapped into an appropriate way in which to introduce and elucidate this issue.
Observation is the first step in the scientific method. Otherwise, how does one even know what question to ask? One of the central tenets of Ethical Skepticism involves the skillful understanding of the difference between the role of observation, and its influence in the assimilation of data and crafting of intelligence. Data alone, does not a science make. It is only when we assemble data after a journey of observation, for the purpose of crafting intelligence, and in answering a question born of necessity – that the process of science is undertaken. One does not simply begin science with data and a question. A meta-analysis can be rigorous; however, to presume that because one has conducted a meta-analysis, one has therefore performed rigorous science is a logical fallacy. It is a game for the dilettante – in which they impress each other and the scientifically illiterate.
One thing I learned as the CEO of an information technology and business intelligence firm is… Anything can be crafted from data, in the eyes of the inexpert technician observer. You simply stack the data relationships in the right fashion, ask questions in the right fashion and sequence and juxtapose the results at the right time.
The key of data science is to be able to detect when you run the risk of deceiving yourself and need to bring in direct material in order to cull and craft such data. In order to accomplish this, one must undertake observations, or one must employ the beneficial wisdom of an observer's life journey. The strength of the data sets from which one draws has little to do with the strength of argument one claims – unless one possesses the journeyman expertise to interpret such data and its basis of origin. One must be able to skillfully produce intelligence in order to answer a question, not simply analyze data. In order to do that, one cannot be simply a data engineer with a general familiarity of the subject at hand. Nor can one be a scientist who bears little expertise in the methods and tools entailed in data analysis.
Why do I call it intelligence? Because in actual Security and Intelligence work, one finds that data by itself is useless, or most of the time, misleading. Almost any message can be spun from data. Only field work and a knowledgeable observer can distinguish data, and allow it to be placed into sets of useful query-based information – intelligence. Modern Skepticism, if it ever knew this, has certainly forgotten it. The advent of mass storage, relational database structures enabling query-by-example capability, and the readiness by which we can now assemble data at our fingertips, while mimicking the practices of intelligence, do not stand in adequate substitution of them.
Moreover, a key outcome of these data technology advantages is the increase in employment inside of science of what is called (in particular in medical science) the meta-analysis. A meta-analysis is a ‘study of studies,’ an approach which seeks to bring to coherence a measure of consensus over a specific question of science inside a complicated field of alternative study methods, and potentially conflicting conclusions. The question always remains, is such a drive to consensus representation – too premature, unfounded, unfriendly to replication or heavily risk laden to serve as a basis of claim? I contend that meta-analysis is fatally vulnerable to all such risks.
/science : data science : third level study/ : a post hoc study which does not directly observe, nor directly test, but rather employs statistical procedures. Procedures crafted to substantiate a claim based upon integration of the results of a pooled group of first and second level studies which directly address the claim or related question under contention.
“A meta-analysis is considered the most rigorous of approaches to experimental study. Meta-analysis is a quantitative, formal, epidemiological study design used to systematically assess previous research studies to derive conclusions about that body of research. The examination of variability or heterogeneity in study results is also a critical outcome [of the meta-analysis].”³
~ A. B. Haidich, Hippokratia, 2010.
Potential Benefits of Meta-Analysis
- It can potentially access the entire body of research around a question.
- A more precise estimate of the effect of treatment or risk factor for disease, or other outcomes, than any individual study contributing to the pooled analysis.³
- A consolidated and quantitative review of a large, and often complex, sometimes apparently conflicting, body of literature.³
- The ability to contrast varied outcomes addressing a single question addressed by multiple parties and varying approaches to study.
- The ability to gauge the sensitivity of various peripheral issues surrounding a scientific question, based upon its frequency of relevancy inside the pooled analysis group.
- May be conducted from a clinical data distance, by a person with less skin in the game, and potentially therefore, presenting less bias and cost.
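The "more precise estimate" benefit in the list above rests on a simple mechanism: inverse-variance pooling. A minimal sketch of the fixed-effect pooling step follows – all effect sizes and standard errors are illustrative assumptions, not drawn from any real study:

```python
# Minimal fixed-effect (inverse-variance) pooling sketch.
# Effect sizes and standard errors below are hypothetical, for illustration only.
effects = [0.30, 0.12, 0.25, 0.18]   # per-study effect estimates
ses     = [0.15, 0.08, 0.20, 0.10]   # per-study standard errors

weights = [1 / se**2 for se in ses]                              # precision weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                            # SE of pooled estimate

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
```

Note that the pooled standard error comes out smaller than that of any contributing study – which is precisely the precision benefit claimed, and precisely why the pooled result can look more authoritative than its inputs warrant.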
In general, because of such potential benefits, the meta-analysis is considered the pinnacle of medical scientific study rigor. Accordingly, employment of the study approach is skyrocketing. The number of publications employing meta-analysis over time, through 2012 (results from a PubMed search using the text "meta-analysis"), is depicted in the graphic to the above right.‡ It exhibits the increasing popularity of such a bypass of Level I and II direct observational and expert study. It represents an alarming and increasing reliance upon a data-centric only approach to science, one which leverages only our increased technological prowess at handling data, and not an overall increase in method coherence or knowledge of subject.
But are there weaknesses entailed in a data-centric only approach to science? Achilles heels which only appear once one is able to delve into and understand the observation set which composes the foundational basis of the pooled first and second level studies inside the analysis? And is there sufficient time, resource and money to effect replication and peer review of such a monumental study? It is not impressive to me that the meta-analysis technician understands the question to which the meta-analysis pertains. Instead, I am impressed when the meta-analysis is founded upon a study which proves both that the question being asked is the next logical calculus or reductive critical path question in the scientific method, and that the question has been vetted to be such, as agreed by the study authors, inside the conclusions of the pooled studies themselves. This, sadly, is almost never the case in a meta-analysis. In general, a study which is further removed from the observational basis from which it is derived bears a greater risk of unseen error and flaw in the logical derivations which can be used for conclusion. This is according to Filbert's Law:
/philosophy : science : analysis : data : risk/ : or the law of diminishing information return. Increasing the amount of data brought into an analysis does not necessarily serve to improve salience, precision or accuracy of the analysis, yet comes at the cost of stacking risk in veracity. The further away one gets from direct observation, and the more one gets into ‘the data’ only, the higher is the stack of chairs upon which one stands. To find a result use a small sample population, to hide a result use a large one.
If one is to ask a broad final consensus question of science, from a series of 1,000 studies many of which did not ask, nor result in, such a question-related set of data, one runs the risk of becoming fouled in the deceptive nature of questioneering and muta-science.
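Filbert's Law's closing maxim – to find a result use a small sample population, to hide a result use a large one – can be illustrated with a toy simulation. All numbers here are hypothetical assumptions: a genuine signal seen by a few on-question studies disappears when pooled with many peripheral studies that never asked the question.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical pools: 3 studies that directly observed an effect of ~0.5,
# and 50 peripheral studies centered on zero (no effect observed).
on_question  = [random.gauss(0.5, 0.1) for _ in range(3)]
off_question = [random.gauss(0.0, 0.1) for _ in range(50)]

# The focused pool retains the signal; the large pool dilutes it toward zero.
print("small, focused pool :", round(mean(on_question), 3))
print("large, diluted pool :", round(mean(on_question + off_question), 3))
```

The diluted pool's average sits near zero – the signal has not been refuted, merely buried under studies that were never positioned to see it.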
Observing the Candle
- It is a candle.
- It is key-lime yellow.
- I can light it.
- I can lick my finger and move it back and forth through the flame for fun.
“No, I want you to observe harder…” He smiled and left me with my candle, a pen and 10 sheets of notebook paper. “Crap” I thought. “Sigh…” Okay, so I will observe every little minute detail of this candle and every nuance of that detail, in order to get to my 250 observation count. I breathed, I relaxed and cleared my mind, and stared at the candle well until after my classmates had left the physics lab.
- The flame is in a sinusoidal oscillating rhythm.
- The average rhythm is .6 seconds.
- The rhythm possesses a variation in time from .9 seconds to .2 seconds.
- The rhythm moves from .9 seconds to .2 seconds in progression.
- The flame appears as if aspirating or breathing.
- There is yellow in the flame.
- There is blue in the flame.
- There is orange in the flame.
- The blue increases and decreases with the aspirating sinusoidal pattern…
And so on it went. My best observations came as a direct result of my curiosity about what was happening inside this flame. How the candle was made, and the complex interaction between the wax, the ether which combusted versus the paraffin which remained. Suddenly I realized what a wick was for in the first place. I began to formulate ideas as to how to make this device even better, with role plays drifting through my naturally distracted mind. By the time I arrived at 250 observations, I had found that I was not yet done. I knew more about a candle, than I thought possible. I needed to fill in some blanks with some reference technical information, but I understood candle-dom.
A mother who cares for and loves her encephalitic disabled or autistic child from its first moments kicking in her womb, to the most recent day she is spat upon by ill-meaning 'skeptics' pretending to represent the medical science community, …she is the observer. Everyone else is a data and formula poser. She knows the science, they only know the script. They wave a single academic meta-analysis in the air as if a Bible, yet possess an empty set of intimate knowledge of the questions entailed, the subtle nature of the subject, its risk or even the right question to ask under the scientific method.
Data is the management task of the technician. Stand alone data can serve to deceive even more easily than it can underpin analyses which enlighten.
Observation is the heartfelt journey of the true scientist.
This is why, even inside subjects towards which I instinctively react in declaring bunk, I bristle at 'skeptics' who do not go into the field and make observations. I do not believe in astrology. But if you have not immersed yourself in it, then do not make claims to me regarding 'evidence' and science. I am not impressed at your ability to recite a talking sheet. This is a charade worse than the astrology itself, no matter how likely it is that what you think is indeed correct. If you have not sat quietly in 'haunted' houses for nights on end, as part of your life's journey, don't pretend to tell me all about the reality of ghosts. If you have not chased a Mach VIII object in your F-14 and filmed it on the gun camera and pulled the tape on the fire control radar, don't pretend to tell me all about UFOs. And if you have spent sizeable amounts of your 'skeptic' time debunking such subjects, and then wander over and purport to tell me all about the evidence behind medical conformity which you would like to enforce upon me, I am going to call you a 'fucking idiot.'
Don’t be deluded by your databases, formulas and ‘evidence’ crafted by those just like you. It is nothing but counter-bunk, to an ethical skeptic.
Simply having run a statistical analysis on candles can deceive one into thinking that one knows even the first thing about candles at all. This is the principle behind what I call Muta-Science. Or in any ethical world, pseudoscience. Bad method, even more questionable results – advertised as science of the 'highest rigor.'
Fatal Pitfalls of a Data-Centric Meta-Analysis Only Approach to Science
This principal fallacy, the lack of observation in science, or its sacrifice at the hands of the ‘data scientist’ (a scientist who bears little or no qualifying reductive expertise in the subject at hand) introduces the faulty form of science which is often conflated with a rigorous and focused meta-analysis. A muta-analysis changes, by means of sleight-of-hand which cannot easily be isolated, the question being asked, the underlying data itself, the contentions of past science, and the methods of science – by skipping around all but one of the steps of the scientific method.
“In general, post hoc analysis should be deemed exploratory and not conclusive. In the best-case scenario, by revealing the magnitude of effect sizes associated with prior research, meta-analysis can suggest how future studies might be best designed to maximize their individual power.”
~T. Greco, Pitfalls of Meta-Analyses‡
In other words, meta-analyses are not meant to provide boast to consensus, nor completion of the scientific method. It is ironic indeed that where we deny the need to assemble data and intelligence at the beginning of the scientific method, we bless such action as 'the gold standard' when used inappropriately in lieu of, or to artificially force, consensus at the end of the scientific method.
/science : data science : pseudoscience : spin : questioneering/ : the most unreliable of scientific studies. Often a badly developed meta-analysis, which cannot be easily replicated or peer reviewed, contains a high degree of unacknowledged risk, or was executed based upon a poor study plan. An appeal to authority based upon faulty statistical knowledge development processes. Processes which alter or do not employ full scientific methodology, in favor of a premature claim to consensus or rigor implied by the popularity of a statistical study type. A method which does not directly observe, nor directly test, but rather employs statistical procedures to answer a faulty, inclusion-criteria-selected, agenda-bearing or peripherally addressed scientific question.
The Probability of Failure is High and Unacknowledged: Muta-Analysis Employed Simply as a Data-Centric Appeal to Authority
Uncertainty Imparted from Source Study Material Practices and Lack of Observational Basis
- Lack of qualified individual data or poor study design results in such a large domain of material so as to dilute, or render-below-p, important signal data.
- Lack of qualified individual data or poor study design causes access to such a complex domain of material so as to conceal important focused signal data inside the noise generated.
- Data obtained from study summaries, rather than original data – serves to surreptitiously mislead.
- Studies which corroborate or correspond to current avenues of science tend to have robustly informative titles to catch attention, while studies which have spotted a counter signal title their study precisely to the observed signal effect and not the broader topic, and will tend to show up less often as dissenting or countermanding material in a study search.
- Large pool study populations challenge the literature search’s ability to understand each study adequately for inclusion criteria.
- Does not draw knowledge from the literature, commentary, peer review commentary or expert editorial/caution around a subject, otherwise available from scientists and experts.
- When used to impart inferences from studies, pertaining to peripheral topics, which the Level I and II study authors never intended, nor to which they would agree.
- Can be used to impart a premature consensus conclusion from the authors of its pooled studies, which they never addressed nor intended to issue.
- The mistaken perception can be held that increases in data or precision result in increases in study quality, which in turn result in increases in accuracy.
- Meta-analysis makes it possible to look at events that were too rare in the original studies to show a statistically significant difference. However, analyzing rare events represents a problem because small changes in data can determine important changes in the results and this instability can be exaggerated by the use of relative measures of effect instead of absolute ones.‡
- There is resistance from authors to allow ready access to their own dataset containing individual patient data, because of a variety of proprietary reasons and concerns over misinterpretation or liability.‡
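The rare-events instability noted in the list above is easy to demonstrate with hypothetical counts: shifting a single event swings a relative measure (risk ratio) dramatically, while an absolute measure (risk difference) barely moves.

```python
# Illustrative only: how one additional rare event swings a relative measure
# far more than an absolute one. Event counts below are hypothetical.
def risk_ratio(a, n1, b, n2):
    """Relative measure: ratio of event rates between two arms."""
    return (a / n1) / (b / n2)

def risk_difference(a, n1, b, n2):
    """Absolute measure: difference of event rates between two arms."""
    return a / n1 - b / n2

# (events in treatment arm, arm size, events in control arm, arm size)
before = (1, 10_000, 2, 10_000)   # 1 vs 2 rare events
after  = (2, 10_000, 2, 10_000)   # one additional event in the treatment arm

print("risk ratio      :", risk_ratio(*before), "->", risk_ratio(*after))
print("risk difference :", risk_difference(*before), "->", risk_difference(*after))
```

One event doubles the risk ratio (0.5 to 1.0), while the risk difference changes by a mere 0.0001 – which is why pooled analyses of rare events built on relative measures are so unstable.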
Study Scale and Cost Detractor Risks
- A meta-analysis is work content and resource costly and difficult to replicate or review. Therefore it can be tempting to take its results as consensus finished science – appeal to authority by cost or effort – when such a claim is not warranted or is premature.
- Peer review is difficult to impossible to execute because the qualifications to review the data at hand require specific technical, not science, expertise, effort, cost and domain visibility.
- An expensive and work content heavy study – which risks a statistical outcome of underwhelming proportion or one lacking heterogeneity – may tempt its authors to revise the study or question, once principal results are in hand, in order to salvage a weaker, but more monumental outcome for peer review and publication.
- The very potential of wasted money, introduces an intrinsic conflict of interest, which does not exist in incremental Level I and II studies.
- Database queries are run by low cost analysts and research assistants, as a necessity of cost. These non-experts may inadvertently tamper with the available set of input data by means of the criteria employed in a search, its exclusion and inclusion conditions or the inability to know when a suitably representative sample has been extracted from a domain with which the assistant has no familiarity.
Misalignment Between Expertise Demand and Task Assignments Imparting Uncertainty and Risk
- Can be executed by academic students and research assistants, at an arms-length from the field, or by data technicians who do not fully grasp the field or question at hand because the lead scientists are not familiar with the database/query technology employed in the meta-analysis.
- Executed by data technicians who are not skilled in science or observation, nor in the process of developing a logical critical reduction path.
- Literature searches performed by semi-qualified, data technician or research assistant parties serve to bias the formulation of question and bias the results.
- Technical tasks inside the study do not allow senior research or observation professionals ease of visibility and access into the data practices and intermediate results. No litmus testing is easily allowed midstream. All of which serve to weaken the overall strength and circumspection of the study. The results are the results, and who has the expertise to say they are wrong?
Study Design & Execution Risks
- Subtle failures to adequately define the question studied serve to bias the results, through faulty inclusion criteria.
- Electronic-only database searches artificially skew the study pool and induce bias by excluding studies without any stated exclusion criterion.
- Inclusion criteria crafted to unfairly represent the objections of the dissenting body of science inside a given question.
- Inclusion criteria posed to answer a different question than the ones posed in the first and second level pool studies from which it draws.
- Development of a conclusion from Level I and II studies which provide for little or no heterogeneity in conclusion base.
- Quality scores in study design may exclude more salient studies with lower quality scores in favor of less salient studies which have high quality scores, thereby both imbuing bias into the selection criteria while at the same time advertising a high quality selection rating.
- Sensitivity analyses must presume a correlative conjecture – yet do not prove that relationship at the same time. The danger is that these relationships will be granted the same gravitas as the primary claim. “Correlation does not prove causality, unless it is a side observation inside a meta-analysis.”
- Both data sourcing and arm’s length Cochrane-style Reviews come from contributors and groups based all around the world with the majority of the work carried out online. Enormous bias in data interpretation can be imputed through such a process.
Abuse of Study Intent Gravitas and Implications
- In the face of sufficient risk, unknown or detractors, a meta-analysis can be employed as a defensive boast to provide a smoke screen of rigor which cannot be otherwise claimed in level I or II study.
- In the face of conflicting results, a meta-analysis can be utilized more as an involuntary ‘vote’ by scientists or popularity contest fake-measure for deriving artificially crafted ‘consensus.’
- Can be used, because of its ‘most rigorous form of study’ POTENTIAL – as an appeal to authority.
- The question arises as to why, in a field such as medicine where direct observations are so readily available, we constantly value science which involves NO observation at all. This renders the study methodology vulnerable to market forces which wish to bypass the liability imparted by means of direct observation.
- Cochrane Collaboration approaches may cause undue credibility to be granted biased studies from vested/conflicted interests through otherwise respected systematic review channels.
Note: None of the above risks exist in a Level I and II study, which fails more from errors in measurement, alternative diligence and conclusive shortcuts – not from risks imparted by the nature of the study methodology itself. This is the unacknowledged risk, taken on by the person making a claim via meta-analysis.
High Risk + Used as a Claim to Gold Standard Rigor or Consensus = High Probability of Abuse
Remember that risk is not defined as the 'probability of failure,' rather 'an accumulation of the number of risk bearing elements in series, which could potentially serve to undermine a process or result.'† In this regard, a meta-analysis bears more risk than does a Level I or II study – because of the complexity and inability to replicate the logical calculus or scale of effort involved. Twelve high-probability tasks targeting accomplishment of a goal result in one low-probability outcome of success. Twenty-one risk points (ten of the above 31 points are parallel risks or potential abuses of employment, not intrinsic series method risks) afford an even less probable avenue to success, approaching a probability of failure of 1.00.† Figure 1 shows the cumulative series effect of such methodology risk, where P(f) is the probability of failure, and its approach to a 1.00 asymptote.
Figure 1: Highly populated probability of failure, series†
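The series arithmetic behind Figure 1 can be sketched directly. Assuming – purely for illustration, the 90% figure is my own assumption – that each series risk element is independently 90% likely to be sound, the probability of failure is one minus the product of the element reliabilities:

```python
# Series-risk sketch: P(f) = 1 - (p_element_ok)^n for n independent risk
# elements in series. The 90% per-element soundness figure is assumed.
def p_failure(n_elements, p_element_ok=0.90):
    return 1 - p_element_ok ** n_elements

for n in (1, 12, 21):
    print(f"{n:2d} series risk elements -> P(f) = {p_failure(n):.3f}")
```

At 21 series elements, even generously sound individual steps yield a probability of failure near 0.89 – illustrating how the stack of risk approaches the 1.00 asymptote.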
So, with the ‘gold standard of rigor’ claim with regard to meta-analyses comes the knowledge that we are purchasing such a claim to authority at the cost of an enormous amount of unacknowledged but adopted risk. To wholesale allow for the establishment of consensus evidence, on the basis of solely meta-study approaches, is scientific foolishness and self deception.
It is amazing to me that a study method, which does not allow for replication, nor suitable peer review, and which contains 21 elements of risk in series – is somehow regarded as the pinnacle of scientific rigor.
A meta-analysis – CAN be – the gold standard of study rigor and verity. But more importantly, we introduce error into science when we issue this appeal to authority basis to studies which fail to address their own propensity to misidentify, misconstrue, mislead, and misinform science and the public.
¹ The Neuroskeptic; Discover Magazine Online: From “Observations” to “Data”: The Changing Language of Science | March 19, 2016 10:32 am; http://blogs.discovermagazine.com/neuroskeptic/2016/03/19/from-observations-to-data/#more-7513.
² Computer History Museum: Timeline of Computer History; http://www.computerhistory.org/timeline/computers/.
³ Haidich, A. B. (2010). Meta-analysis in medical research. Hippokratia, 14(Suppl 1), 29–37; http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049418/.
† Bin Suo, Yongsheng Cheng, Jun Li; Calculation of Failure Probability of Series and Parallel Systems for Imprecise Probability; http://www.mecs-press.org/ijem/ijem-v2-n2/IJEM-V2-N2-12.pdf.
‡ Greco, T., Zangrillo, A., Biondi-Zoccai, G., & Landoni, G. (2013). Meta-analysis: pitfalls and hints. Heart, Lung and Vessels, 5(4), 219–225.
No comments yet.
This blogsite rigorously complies with the Fair Use Act (17 U.S.C. § 107)
“Refreshing to the heart of new and weary seekers of truth alike. Some of the most compelling new philosophy of our time. If you claim to be a skeptic and have not read The Ethical Skeptic, you risk sophomoric bandwagon irrelevancy.” -TRB
“TES, I hope you realize the high quality of material you have produced here. Hopefully you will choose a world stage someday and take personal credit for it. The material is that good.” -AOD
There is a pro-science, educated, rational and resolute movement afoot. A movement of conscience on the part of people just like me. Science and Engineering professionals who apply skepticism daily in their STEMM disciplines, but who nonetheless are raising a warning flag of concern. Welcome to my blog. Within its pages, I hope to portray and teach genuine skepticism, or what is called Ethical Skepticism. Indeed, its mission is to promote the wonder of science through a contrast of authentic skeptical discipline, versus its distorted, pseudo-intellectual and socio-politically motivated counterfeit. I am a graduate level science and engineering professional who faithfully participates in man’s quest for knowledge. I lament however its imprisonment by control driven special interests and vigilante bullying from dogmatic social epistemologists such as science communicating journalists, stage magicians, agenda celebrities, psychologists and oligarch, religious and cartel activists. As you survey my blog, hopefully you will encounter things you’ve personally never considered before. Indeed, its mission is to act as a resource guide for their victims and to foster foremost a discerning perspective for us all on the Cabal of pretenders who abuse and control falsely in the name of science.
A series in parts, which defines the philosophy and outlines the tenets and structure of Ethical Skepticism
A compendium of fallacy and corrupted thought commonly employed inside Social Skepticism
The formal and informal fallacy of deceptively promoting one’s self and ideals through pretense of skepticism
It is plurality, and not the simplest explanation, which bears merit in professional research and the actual scientific method
The compulsory set of core religious beliefs misrepresented as skepticism, atheism, free thinking and science
ABOUT SOCIAL SKEPTICISM AND SSKEPTICS
Social Skepticism is a sponsored activist movement which functions as an integral part of the socially engineered mechanisms attempting to dominate human thought, health, welfare and education. This control serving as means to an end, towards subjection of all mankind’s value to mandated totalitarian institutions. Institutions which avert legal exposure by abusing skepticism to serve their goals. Ends formulated by a social elite; however, which stand threatened by innate elements of mankind’s being and background.
An ideologue driven enforcement therefore of a social epistemology crafted to obfuscate mankind’s understanding of such innate elements. Its members practice a form of vigilante bullying, employed in lieu of science to dismiss disliked subjects, persons and evidence before they can ever see the light of scientific day. Seeking to establish as irrefutable truth a core philosophy of material monism, dictating that only specific authorized life physical and energy domains exist. A comprehensive program of enforcement sought accordingly, through rather than the risk of ethical scientific methodology, instead a practice of preemptive methodical cynicism and provisional knowledge which underpins an embargo policy regarding, cultivates ignorance and institutionalizes intimidation surrounding any subject which could ostensibly serve as a pathway to falsify their power enabling illusory religion of Nihilism.
These pretenders typically have never conducted any science themselves, nor do they represent science or scientific thinking.
Social Skeptics falsely identify themselves as ‘skeptics.’ Indeed rather, SSkeptics are self or institutionally appointed Bernaysian engineering activists, posing as rational and logical subject matter authorities enforcing one specific answer in a broad array of pluralistic topics of contention, while at the same time “doubting“ all other potentialities. Far from actually practicing skepticism and abandoning the scientific method when it does not suit their embargo, SSkeptics seek to intimidate scientists and the media, enforce doctrines lacking scientific basis and imperiously pass them to the public as unassailable truth.
We Are Anonymous http://anonhq.com/
Skeptopathy Magazine http://skeptopathy.com/wp/
Hoofnagle the Science Cat https://www.facebook.com/HoofnagleScienceCat/
Debunking Skeptics http://www.debunkingskeptics.com/
Skeptical about Skeptics http://www.skepticalaboutskeptics.org/
Michael Prescott http://michaelprescott.typepad.com/michael_prescotts_blog/
Brian Martin http://www.bmartin.cc/index.html
My Socrates Note http://my-socrates-note.blogspot.com/?m=1
Facebook Groups https://www.facebook.com/groups/925334447494947/
The Difference Between Ethical and Social Skepticism
Ethical Skepticism is a blend of Empirical and Philosophical Skepticism, the tenets of both being vetted as to their efficacy in delivering value and clarity inside man’s knowledge development process. It rejects the abuse of Cartesian Doubt as a racket of a priori, simplistic, predictive-based knowledge, self-delusion and Methodical Cynicism. Instead, Ethical Skepticism dictates a mute disposition on any topic which neither science nor the Ethical Skeptic himself has studied. Ethical Skepticism petitions for Ockham’s Razor plurality in research when sponsorship has shown adequate necessity, and opposes all efforts to squelch such research.
Ethical Skeptics apply skepticism as one of a set of tools employed inside a life characterized by open curiosity, discipline and observation. They continually investigate in order to ask the right question in accordance with the complete scientific method, not to defend the right answer. They bear paramount the personal and professional ethic of defending the integrity of the knowledge development process. Skepticism is a way of preparing the mind and data sets in order to accomplish science.
Social Skepticism is false a priori deduction combined with stacked provisional induction, used as a masquerade of scientific method in order to enforce a belief set as constituting science. It is an abuse of Cartesian Doubt as a racket of a priori, simplistic, provisional, risk-ignorant knowledge, self-delusion and methodical cynicism. It seeks an embargo of certain aspects of man’s knowledge development process. It rejects Philosophical Skepticism and employs Empirical Skepticism only when its tenets support specific knowledge embargo agendas. Rather than tendering a mute disposition on any topic which science has not studied, Social Skepticism corrupts science into methodical cynicism employed to squelch such research, and enforces false interpretations of scientific conclusions to support its embargo goals.
Social Skeptics wear SSkepticism as an identity, apply intimidation and doubt only to subjects they disdain, and enforce an embargo regarding any and all observations or science which might serve to undermine their Cabal-authorized ontology. They eschew data collection, instead undertaking social activism and unethical activity, any means necessary, to enforce the ‘right answer’ and secure the power of their sponsor institutions. Social Skeptics abuse skepticism to act in lieu of science, not as a subset thereof.