Contrasting Deontological Intelligence with Cultivated Ignorance

A deontologist prefers a state of ‘unknown’ over a highly probable stack of provisional knowledge or an abductive inference, because declaring a precise answer to be unknown is more informative, deontologically, than declaring it ‘probably known’ inside a context of low intelligence and unevaluated risk. According to Wittgenstein, the formulation of elemental intelligence is the critical first step of science. It steers our methods away from the pitfall of having to employ ‘skeptics’ to defend answers derived from stacks of highly probable knowledge – answers which bear a high risk of ultimately turning out to be wrong, a state to which we are blinded by the processes we chose to undertake and the clowns we hire to defend those answers.

Data and Deontology: A Revolution in the Making

Another revolution is underway in the development of the data structures employed by economic entities (corporations, funds, banks, trade partners, economies, etc.). Database normalization is the process of organizing the columns (attributes), rows (records) and tables (relations) of a relational database, along with disciplining the parsed data (answers), in order to reduce data redundancy and make data integrity more robust inside a high-transaction environment (IT departments for banks, finance departments, consumer goods traders, brokerages, etc.). First normal form structures were employed from the 1970s in hierarchical and then relational databases into the 80s and 90s. Third normal form (3NF) databases have served as the standard for relational structures thereafter. Query languages such as SQL have traditionally been able to access answers from such third-normalized structures through intuitive, if not rules-based, lookup protocols such as single access protocols or query by example (QBE) user interfaces. Odds are, if you have used Microsoft’s Access or the user-friendly DBMS Airtable, then you have already had exposure to a query by example user interface. The companies I have owned/managed have thrived off the flexible and crucial role of the relational database in managing our customers, products, transactions, money and cash flows, along with other business information and intelligence (information and intelligence are two distinctly different things).
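
To make this concrete, here is a minimal sketch (in Python, with wholly hypothetical table and field names) of the redundancy which normalization removes – a flat order log versus the same facts split into 3NF-style relations, queried in a fashion loosely analogous to a query-by-example lookup:

```python
# A hedged, minimal sketch of what normalization accomplishes.
# Table and field names are hypothetical, not drawn from any real system.

# Denormalized: every order row repeats the customer's name and city,
# so a change of address must be applied to many rows (redundancy, update risk).
orders_flat = [
    {"order_id": 1, "customer": "Acme Co",   "city": "Dallas", "item": "Widget", "qty": 10},
    {"order_id": 2, "customer": "Acme Co",   "city": "Dallas", "item": "Gear",   "qty": 4},
    {"order_id": 3, "customer": "Birch LLC", "city": "Tucson", "item": "Widget", "qty": 7},
]

# Normalized (3NF-style): each fact is stored once and referenced by key.
customers = {
    "C1": {"name": "Acme Co",   "city": "Dallas"},
    "C2": {"name": "Birch LLC", "city": "Tucson"},
}
orders = [
    {"order_id": 1, "customer_id": "C1", "item": "Widget", "qty": 10},
    {"order_id": 2, "customer_id": "C1", "item": "Gear",   "qty": 4},
    {"order_id": 3, "customer_id": "C2", "item": "Widget", "qty": 7},
]

# A QBE-like lookup: all orders placed by customers located in Dallas.
dallas_orders = [o for o in orders if customers[o["customer_id"]]["city"] == "Dallas"]
print(dallas_orders)
```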

If all this sounds like gobbledygook, my apologies. It sounds like gobbledygook to me as well, and I have owned and managed corporations executing this type of information technology solution set for a portion of my three decades of work. My job as CEO or similar was to translate the technical language of my senior IT techs and systems engineers, and express it in such a way as to allow client CEOs to understand the transformative advantages of such technologies for their businesses. Normalization is akin to a very efficient set of file drawers – a method of disciplining how files are indexed, sorted, labeled, held and accessed in such a fashion as to allow the file drawer manager the ability to answer any and every question thrown at them, in as expedient a fashion as is possible. In addition, such discipline affords the file drawer manager the ability to quickly assess the level of integrity inside his or her stored data. This is a very satisfying situation to the mind of those crazy individuals who manage to keep their desks in tightly aligned and neat order.

But all this is changing. A new gunslinger is in town. Query Oriented Normalization (QON) is replacing the old relational normalization structure (or more specifically, third normal form databases), as well as the even older hierarchical database structures still in use in some of the older, larger institutions of science and technology. Before we address what a QON intelligence structure is, let’s take a step back and examine exactly what deontology means:

Deontology

/philosophy : science : knowledge development and integrity : ethics/ : an approach to the ethics of knowledge development that focuses on the rightness or wrongness of actions themselves, as opposed to the attractive or unattractive nature of the consequences of those actions (Consequentialism) or to the character, credentials and habits of the actor (Virtue Ethics). In science, the process of valuing the scientific method over any of its particular conclusions or the people/institutions claiming them.

A deontologist seeks to reduce the unnecessary complexity of a process of questioning and of its associated answers or lack thereof, and then further, by conformance to a set of accepted practices, to induce or deduce answers to specific questions which collectively serve to reduce the overall level of ignorance (a priori doubt, belief and stacked provisional knowledge) featured inside a given topic. Principally this process results in what is called an epistemology. A deontologist prefers a state of ‘unknown’ over even a highly probable stack of provisional knowledge, because of the preferential deontological ethic of declaring a precise answer to be unknown, rather than ‘probably known’, inside a context of low intelligence and unevaluated risk. This is because when the deontologist surveys the horizon of what is truly unknown, he is then able to reduce process and focus on the correct next question under the scientific method.

Now – let’s examine the three forms of database management approach in the context of deontological ethics and the development of knowledge.  Ideally, a structure of knowledge (intelligence) comprises five interlinking elements (nodes and spans):¹

  1.  A precise question (‘elementary proposition’ as Wittgenstein calls it) – node Q(x)
  2.  Its answer (‘atomic fact’ as Wittgenstein calls it) – node A(x)
  3.  A logical association to predicate answers (‘certain relation’ as Wittgenstein calls it) – spanning tree
  4.  A linkage to a fortiori questions (‘features’ as Wittgenstein calls them) – spanning tree
  5.  A logical phylogeny introducing a posteriori questions (‘successor’ as Wittgenstein calls it) – spanning tree¹
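
As a rough sketch only – the class name, fields and example questions below are my own illustrative assumptions, not a specification of any actual QON implementation – these five elements might be represented as a node bearing its question, its (possibly null) answer, and its three spanning-tree linkages:

```python
# A minimal, illustrative sketch of the five interlinking elements (nodes and spans).
# Names are hypothetical; this is not a specification of an actual QON product.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QONNode:
    question: str                                                       # 1. precise question, node Q(x)
    answer: Optional[str] = None                                        # 2. its answer (None = null), node A(x)
    predicate_answers: List["QONNode"] = field(default_factory=list)    # 3. certain relation (spanning tree)
    a_fortiori: List["QONNode"] = field(default_factory=list)           # 4. features (spanning tree)
    a_posteriori: List["QONNode"] = field(default_factory=list)         # 5. successors (spanning tree)

    def is_null(self) -> bool:
        """A deontological structure records 'unknown' explicitly rather than guessing."""
        return self.answer is None

# Example: one answered node spawning successor (a posteriori) questions.
q1 = QONNode("Is contaminant X present in the sample set?", answer="Yes, at trace levels")
q1.a_posteriori = [
    QONNode("What is the contamination pathway?"),      # null - not yet answered
    QONNode("Does the trace level vary by region?"),    # null - not yet answered
]
print([node.is_null() for node in q1.a_posteriori])     # [True, True]
```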

In the past, both hierarchical and relational data structures have presumed that only answers (elements 2. and 4. above) exist, and further that they exist only in the form of object data (data repositories bearing no question which frames their context of employment) – often independent of question, and even less associated with predicate answers and a posteriori questions. You have often heard scientists remark “we answered one question, only to have six others pop up.” Oddly enough, this is the correct state of affairs inside a knowledge development structure as it relates to the process of science. This is exactly how it should be.

Data is a set of answers without context of question. Intelligence is a framework of questions which have either certain or null answers. The latter is more informative than the former.

4.2211  Even if the world is infinitely complex, so that every fact consists of an infinite number of atomic facts and every atomic fact is composed of an infinite number of objects, even then there must be objects and atomic facts.¹   ~Wittgenstein, Ludwig; Tractatus Logico-Philosophicus

The Framing of Intelligence as Opposed to Diagnostic Data

An ethically answered intelligence question should result in six new ethical questions. An ethically answered reductive question should then reduce this set. What one typically fails to account for inside this despair-inducing evaluation is the reduction in risk produced, along with the overall displacement of ignorance attained through the improvement of knowledge – two of the consequentialist objectives (expressed as value and clarity respectively in ethical skepticism) of such a process of reduction. Such is the nature of gnosis in our realm, and in absence of possessing all the answers already (a priori doubt, belief and stacked provisional knowledge), is the very nature of deontological ethics.

Let’s examine this principle below in relation to the three database structure types we outlined above. In the QON structure we observe all five interlinking elements (nodes and spans) present in true Wittgenstein-based knowledge development. The QON structure not only catalogs answers in the form of data – it also arranges a minimum spanning tree sequence of questions as they relate to answers.

QON structures serve to establish intelligence, while the two classic datalogging structures (the left side of the figure below) only serve to catalog data.

[Figure: QON concept contrast – hierarchical and relational catalogs versus a QON intelligence structure]

You will notice that several features serve to distinguish the QON structure from both the hierarchical and relational database structures (catalogs).

  1. The QON structure frames a record (answer) in terms of both a particular question and its particular (atomic) answer, in a constrained 1:1 relationship – the older structures only frame a repository of answers in a one-to-many relationship, with no linking to question.
  2. The QON structure first reduces the set of questions which are to be asked (reduction), and then conducts a minimum spanning tree configuration of those questions, so that the path to answering them is pursued in the most expedient and logical framework achievable (if ascertainable) – a sketch of this ordering follows the list below.
  3. The QON structure is reflexive, i.e. allows multiple questions to evolve from one successfully answered question.
  4. The QON structure is recursive, i.e. allows multiple questions to solve or modify each other.
  5. The QON structure prefers a null answer, as opposed to a probable answer inference because:
    • a condition of unacknowledged risk escalation with each successive probable answer
    • the pseudo-enhancing of answer probability solely on the basis that the answer serves to agree with the probability of other reflexive answers (see Unity of Knowledge Error)
    • the whipsaw amplification effect imbued by any error in successor relationships, or their communication or preference
    • null critical paths can be targeted for prioritization by researchers – rather than being ignored because they contain plug answers.
  6. The QON structure possesses no accommodation for a priori knowledge – whereas a data catalog cannot indicate the difference between an a priori assumption and a developed answer.
  7. Successive queries inside a QON intelligence structure become more informative as each link/answer is resolved. A massing of facts in contrast is not necessarily more informative in relation to its size.
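
Regarding feature 2 above, here is a minimal sketch of what ordering a reduced question set along a minimum spanning tree could look like. The questions, the edge ‘costs’ and the choice of Prim’s algorithm are my own illustrative assumptions:

```python
# A rough sketch of ordering a reduced question set along a minimum spanning tree.
# Questions and edge 'costs' (difficulty of moving from one resolved question to the next)
# are invented for illustration; Prim's algorithm is one standard way to build an MST.
import heapq

questions = ["Q1 scope", "Q2 presence", "Q3 pathway", "Q4 dose", "Q5 risk"]
# (i, j): cost of tackling question j once question i is resolved (symmetric, hypothetical)
edges = {
    (0, 1): 1, (0, 2): 4, (1, 2): 2, (1, 3): 5, (2, 3): 1, (2, 4): 3, (3, 4): 2,
}

def neighbors(i):
    for (a, b), w in edges.items():
        if a == i:
            yield b, w
        elif b == i:
            yield a, w

def prim_mst(start=0):
    """Return MST edges as (cost, from, to): an expedient question-answering path."""
    visited, tree = {start}, []
    heap = [(w, start, j) for j, w in neighbors(start)]
    heapq.heapify(heap)
    while heap and len(visited) < len(questions):
        w, i, j = heapq.heappop(heap)
        if j in visited:
            continue
        visited.add(j)
        tree.append((w, questions[i], questions[j]))
        for k, wk in neighbors(j):
            if k not in visited:
                heapq.heappush(heap, (wk, j, k))
    return tree

for cost, frm, to in prim_mst():
    print(f"{frm} -> {to}  (cost {cost})")
```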

This (1 through 7) is called a Q(x) to A(x) sequence of structured intelligence – and it is highly informative in its own essence (deconvolutional, in neural network terms). So highly informative that it reduces the need to rely upon ‘Occam’s Razor’ mandates, likely guesses and unwise inferences on data. Even if you call all this nonsense ‘evidence’.

5.133  All inference takes place a priori.

5.1363  If from the fact that a proposition is obvious to us it does not follow that it is true, then obviousness is no justification for our belief in its truth.

5.14  If a proposition follows from another, then the latter says more than the former, the former less than the latter.¹

~Wittgenstein, Ludwig; Tractatus Logico-Philosophicus

In short, the QON model of data development more closely reflects the reality which we face in the prosecution of science. A diagnostician, in contrast, will typically only demand a measuring machine and a hierarchical or relational database to address his or her abductive scientific challenges. He or she will ‘doubt’ any catalog proposition which runs counter to their prescribed notions. This is pseudoscience, as the majority of scientific endeavor does not function in this fashion.

Doubt, belief and provisional knowledge are all in reality the same exact thing. Each is a form of succumbing to the shortcut temptation to establish ‘likely’ guesses as specific answers; guesses which conform with our other ‘likely’ (or preferred) guesses. This, in lieu of doing the field work necessary to reduce the Q(x) to A(x) sequence of structured intelligence requisite under the scientific method.

The diagnostician (see Diagnostician’s Error) therefore thrives off provisional knowledge and doubt (which, along with belief, comprise the fabric of the lie):

There are two forms of ‘doubt’:

Methodical Doubt – doubt employed as a skulptur mechanism, to slice away disliked observations until one is left with only the data set one favored before coming to the argument. This is the questionable method of denying that something exists or is true simply because it defies a certain a priori stack of provisional knowledge. It is nothing but a belief expressed in the negative, packaged in such a fashion as to exploit the knowledge that claims to denial are afforded immediate acceptance over claims to the affirmative. This is a religious game of manipulating the process of knowledge development into a whipsaw effect supporting a given conclusion set.

Deontological Doubt (epoché) – if however one defines ‘doubt’ as the refusal to assign an answer (no matter how probable) to a specific question, in absence of assessing question sequence, risk and dependency (reduction), preferring instead the value of leaving the question unanswered (null) over a state of its being ‘sorta answered inside a mutually reinforcing set of sorta answereds’ (provisional knowledge) – then this is the superior nature of deontological ethics.

Most fake skeptics define ‘doubt’ as the former and not the latter – and often fail to understand the difference.

The Whipsaw Effect of Probable Stacked Knowledge and Perception

5.5262 The truth or falsehood of every proposition alters something in the general structure of the world. ~Wittgenstein, Ludwig; Tractatus Logico-Philosophicus

Now, all of this is not to contend that the realm of information technology is ready to tackle the challenge of datalogging the entire catalog of current knowledge and the next appropriate scientific question. The purpose of this contrast in data structures is to elucidate the superior nature of deontological data structures over those which serve probable elements of knowledge – why they are superior, and why the raw materials of a priori doubt, belief and stacks of provisional knowledge serve only the tradecraft of the lie. Science is developed along the lines of the QON ethic of intelligence development. Science is the process of asking and answering the right questions at the right time, and converting those binary relationships into usable minimum spanning tree pathways to knowledge.

In short, under a deontological context, two knowns and four nulls inside a Q(x) to A(x) intelligence structure are considered more informative than, and superior to, six probables in a normal structure of data.
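
A back-of-the-envelope illustration of why the six ‘probables’ are riskier than they feel (the 85% figure is an arbitrary assumption): if each of six stacked answers is individually 85% probable and each depends upon its predecessors, the final conclusion rests upon roughly a 38% joint probability of being entirely correct.

```python
# Illustrative arithmetic only: the 0.85 per-answer probability is an assumed figure.
p_each = 0.85
stack_depth = 6
joint = p_each ** stack_depth
print(f"Joint probability that all {stack_depth} stacked answers hold: {joint:.2f}")  # ~0.38
```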

[Figure: canonization rate versus negative publication rate, extracted from Gross-Bergstrom et al.²]

Carl Bergstrom of the University of Washington and Kevin Gross of North Carolina State University, along with their team of researchers, recently published a paper entitled ‘Publication bias and the canonization of false facts’.² Nothing elicits the whipsaw effect of tampering with the processes of intelligence crafting by means of provisional knowledge more than the exposé inside this paper. The graphic above is extracted from the publication for reporting purposes only. It depicts the volatile effect which suppressing publication of negative outcome studies has upon consensus and the canonization of scientific ideas. An important principle to observe here (and indeed a contention made by Gross-Bergstrom et al.) is that the tightening of p-value measures is not a panacea in mitigating the canonization of false facts by means of a false sequential method. Despite our precision of measure and tolerance, there exists a point at which our suppression of negative outcome studies only serves to whip our consensus as a body scientific into unrealistic ranges of conclusivity. What is shown here of importance is that the structure of intelligence afforded by inclusion of negative studies far outweighs the impact of precision increases of any particular answer contained in a positive-study-only publication approach.

In science, Q(x) to A(x) QON intelligence structures and deontological discipline are vastly more important than just making more likely p-value guesses which support our other likely guesses.
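
A toy simulation – emphatically not the Gross-Bergstrom model itself, and with every rate below invented for illustration – shows the mechanism: when negative outcomes are rarely published, a community updating its belief only on what is published can canonize a claim which is in fact false.

```python
# A toy sketch (not the Gross-Bergstrom model) of how suppressing negative
# results can canonize a false claim. All rates below are invented for illustration.
import random

def run_trial(claim_is_true, publish_negative_prob, n_studies=200, seed=None,
              true_pos=0.8, false_pos=0.2, canon=0.99, reject=0.01):
    """Sequentially publish studies and update the community's belief via Bayes' rule."""
    rng = random.Random(seed)
    belief = 0.5  # community's prior that the claim is true
    for _ in range(n_studies):
        # a study's outcome depends on whether the claim is actually true
        positive = rng.random() < (true_pos if claim_is_true else false_pos)
        # positive outcomes always publish; negatives only sometimes (publication bias)
        if not positive and rng.random() > publish_negative_prob:
            continue
        likelihood_true = true_pos if positive else (1 - true_pos)
        likelihood_false = false_pos if positive else (1 - false_pos)
        belief = (belief * likelihood_true) / (
            belief * likelihood_true + (1 - belief) * likelihood_false)
        if belief > canon:
            return "canonized as fact"
        if belief < reject:
            return "rejected"
    return "undecided"

# A false claim, with negative results rarely published, is still routinely canonized.
outcomes = [run_trial(claim_is_true=False, publish_negative_prob=0.05, seed=s)
            for s in range(1000)]
print("fraction of false claims canonized:", outcomes.count("canonized as fact") / len(outcomes))
```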

But this error in the form of publication bias is not the only pitfall which can be encountered in mishandled science. What the deontological intelligence philosopher would note is that a series of mistakes and missteps can serve to produce or amplify this effect before negative and positive outcome publication biases are even introduced. All of them involve premature questions, and a complete ignorance of the intelligence surrounding a subject in favor of the ‘data’ involved in the subject. Specifically they are (white-numbered on the chart below as well):

  1. Questions may be asked in the wrong order – and serve to mislead – when assumed answered by a probable answer.
  2. Questions may be framed in the wrong context and seek to answer too many things at once – a condition which can be masked by a probable answer.
  3. Answers may be developed for the wrong question – and serve to confuse.
  4. A fortiori and a posteriori relationships may be assumed as a result of a probable answer being issued.
  5. Risk has not been evaluated in relation to the stacking of successive probable answers – risk which is multiplied across the impacted a fortiori and a posteriori linkages.

Why a QON-Driven Scientific Method, Based Upon Intelligence and Not Simply Curiosity/Prejudice, Will Change Everything

This is why a null answer is preferable over a probable answer inside a structure of intelligence. Below we see a depiction wherein tampering with probabilities and dependencies in an intelligence structure (technically a deconvolutional Restricted Boltzmann Machine progression) can serve to produce dramatically differing outcomes of conclusion from those of a more classic Wittgenstein deontological reduction/deduction. Notice that all the atomic answers in the stacked provisional knowledge were the most probable answers (abductive reason) available inside the QON sequence chosen blindly (sans intelligence). But the conclusions were wrong. Today we non-skeptically rest upon foundations of knowledge in many arenas, developed from the method of stacked provisional knowledge depicted below. We fail to acknowledge the questions we have mishandled or skipped, and the incumbent risk we have introduced to the process, by not appropriately using intelligence as the formulative part of the scientific method – and rather by ‘asking a question’ as the first step in the scientific method.

[Figure: Q(x) to A(x) query sequence – stacked provisional knowledge versus deontological reduction]
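
As a far simpler toy than the Boltzmann-machine progression depicted above (the questions and probabilities below are invented for illustration), the following sketch shows how selecting the locally most probable answer at each question still leaves the stacked conclusion improbable once the chain is evaluated jointly:

```python
# Toy illustration with invented probabilities: selecting the locally most probable
# answer at each question ('stacked provisional knowledge') still leaves the final
# conclusion improbable when the whole chain is evaluated jointly.
questions = {
    "Q1": {"answer_a": 0.60, "answer_b": 0.40},
    "Q2": {"answer_c": 0.55, "answer_d": 0.45},
    "Q3": {"answer_e": 0.70, "answer_f": 0.30},
}

# Greedy path: take the single most probable answer at each step.
greedy_path = {q: max(options, key=options.get) for q, options in questions.items()}

joint = 1.0
for q, chosen in greedy_path.items():
    joint *= questions[q][chosen]

print("stacked 'most probable' path:", greedy_path)
print("joint probability the whole stack is correct:", round(joint, 3))          # 0.231
print("probability at least one stacked answer is wrong:", round(1 - joint, 3))  # 0.769
```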

Exacerbating this is the specter of publication. Assume that the above deontology reflects the reality on a critical social issue such as food contamination or the discovery of a new species. The abysmal depiction above becomes even more dire when one introduces the impact of publication bias into this process. Not only is the conclusion set wrong, but moreover we endure the danger of publication bias further skewing the perceptions of science, and finally – per the conclusions of the Bergstrom-Gross study – canonizing a completely incorrect scientific consensus, fully unable to be overturned from that point on, because no successive questions remain. This is the sixth vulnerability in terms of mistakes and missteps inside the scientific method, followed by the seventh vulnerability, the role of social skepticism.

6.  Publication and acceptance of specific answers can serve to whipsaw the consensus conclusions of science, or the perceptions thereof.

7.  The introduction of false skepticism – people so energized through the heady role of ‘representing science’ that the critical questions which should normally be introduced to challenge the nature of stacked provisional knowledge can never be asked. Opponents are framed under Frank’s Law as ‘anti-_________’, and the provisional knowledge achieves the status of a fervently protected religion all its own.

Praedicate Evidentia

/philosophy : argument : organic untruth/ : any of several forms of exaggeration or avoidance in qualifying a lack of evidence, logical calculus or soundness inside an argument. A trick of preemptive false-inference, which is usually issued in the form of a circular reasoning along the lines of ‘it should not be studied, because study will prove that it is false, therefore it should not be studied’ or ‘if it were true, it would have been studied’.

Praedicate Evidentia – hyperbole in extrapolating or overestimating the gravitas of evidence supporting a specific claim, when only one examination of merit has been conducted, insufficient hypothesis reduction has been performed on the topic, a plurality of data exists but few questions have been asked, few dissenting or negative studies have been published, or few or no such studies have indeed been conducted at all.

Praedicate Evidentia Modus Ponens – any form of argument which claims a proposition consequent ‘Q’, yet features a lack of a qualifying modus ponens, ‘If P then’, premise in its expression – rather, implying ‘If P then’ as its qualifying antecedent. This serves as a means of surreptitiously avoiding a lack of soundness or lack of logical calculus inside that argument, and moreover of enforcing only its conclusion ‘Q’ instead. A ‘There is no evidence for…’ claim made inside a condition of little study or full absence of any study whatsoever.

This is what fake skepticism calls ‘the evidence’. No question has really been answered in a QON reduction sequence – but we have studies, we do have studies. We, therefore, are the science. A perch of power which is now necessary in defending against those who know that foundational questions have been ignored. Who needs intelligence when you have the ‘evidence’?

This is the origin of social skepticism. It is the process of cultivating ignorance.

epoché vanguards gnosis


¹  Wittgenstein, Ludwig; Tractatus Logico-Philosophicus; London: Kegan Paul, Trench, Trubner & Co., Ltd.; New York: Harcourt, Brace & Company, Inc., 1922.

²  Gross, K.; Bergstrom, Carl T.; et al.; Publication bias and the canonization of false facts; Cornell University Library; arXiv:1609.00494 [physics.soc-ph]; Sept 5, 2016.
