The Peculiar Schema of DNA Codon’s Second Letter

The second letter of the three-digit DNA codon bears a schema of such extraordinary improbability that the question arises, “How did this 1 in 3.4 octillion occurrence happen at all?” Is its essence a signature, a livestock-style crypto-branding, wherein the iron just happens to be struck at the single best intersection of complexity and immutability inside the planet’s common genome?

The protein makeup and physiology of every living organism which has ever lived on Earth are defined by its genome. Genes consist of long sequences of nucleic acid bases (digits) which provide the information essential to constructing these proteins and their expressed features. DNA is the information library which is employed by ribosomes and messenger RNA to craft the structure and function of the organism. Our current DNA codex has stood as the basis of life since the common ancestor of Archaea first appeared on Earth, shortly after the planet formed 4.5 billion years ago.

Recent, so-called ‘molecular clock’ studies have pushed the origin of our DNA-based life back to a mere 60 million years after Earth’s very formation.1 They make the stark argument that the code upon which DNA functionally depends has been around pretty much as long as the Earth itself, or at the very least as long as its Moon and oceans. It took another 2.2 billion years for Earth’s life to evolve from Archaea into our current domain, Eukaryota. Eukaryotes consist of a cell or cells in which the genetic material is DNA in the form of chromosomes contained within a distinct nucleus. Mankind is of course, then, part of the domain Eukaryota (albeit post-Cambrian Explosion).

This sequence of events and their timing are critical path to the argument presented inside this article. Therefore, as is my standard practice, a summary graphic is in order. Our current understanding of the ages of the Moon, Earth, life on Earth, and DNA is as follows (drawn from the four sources footnoted):2 3 4 5 Bear in mind as you continue to read the timing of the four key events depicted on the left-hand side of the timeline chart below (using Three Kingdom classification): the formation of the Earth, the introduction of the Moon, and the appearance of oceans and life on Earth.

A Timeline of Earth Life from LUCA to Genetic Engineers

Before we begin, please forgive me if I wax speculatively poetic for just a moment. It is almost as if, in death and dying, the incentive strategy of DNA is to motivate higher order creatures not only to propagate species through sexual reproduction and evolution, but at a certain point to branch their now engineered polymorphisms farther into space as a part of the sentient pursuit of immortality as well. Very much bearing the stealth, coincidence, robustness, and brilliance in conquest characteristic of incentivized distributed ledger warfare. Intent is a fortiori to warfare. For it is intent, and not the social concept of ‘design’, which we are to discern. We do not bear the necessary qualifications to adjudicate the latter.

Perhaps as much as anything, this mercenary and suffering-strewn pathway to almost certain extinction, encourages more nihilism than anything men may long ponder. For this very present pain buoys upon an undercurrent of our conscious lives, rendering theist and atheist alike, understandable compatriots in its existential struggle. In a figurative sense, both children of the same ruthless God drunk on their very suffering and confusion – capitalizing mortality far more than any other methodological element. Death is life’s raison d’être after all, and not the result of a mistake on the part of one of its mere hapless victims. A most-likely Bronze Age mythology recounted in the Gnostic text The Hypostasis of the Archons aptly framed this as ‘The Empire of the Dead and Dying’, caught up in an eons-long struggle with putative ‘Forces of Light’.6

Now let’s table until the end of this discussion the obvious coincidence that Earth’s Moon and its oceans arrived an incredibly short time after the Earth’s formation – essentially at the same time. Thereafter then, nothing else of such an astronomically monumental scope occurred for one whole third of the existence of the Universe itself. I have always regarded this as being a rather odd happenstance. As if the Moon was most likely a very large Oort Cloud icy planetesimal, heavy with frozen salt water, oxygen, nitrates, carbon monoxide, carbon dioxide, methane, sulfur, and ammonia (and LUCA?). An interloper (much like Uranus’ moon Miranda, Saturn’s moon Enceladus, or Jupiter’s moon Ganymede) which then surrendered these signature elements to accrete and compose the new surface crust of its larger companion, and sputtered the final ‘seas’ (maria) of water into space and mostly onto Earth. All this only after the Moon’s extinct ocean tides (and not lava basalt floods – remember this is just one theory) had gradually introduced entropy enough to slow its rotation into a tidal lock with its new host, Earth. Under such a construct, this would be why the Earth-facing side of the Moon appears like flat ocean bottom (pink in the graphic above – the Oceanus Procellarum), while the far side is craggy and non-eroded. One can clearly observe static-water-eroded older craters contrasting with pristine newer ones, complemented by horizon-disciplined ocean-silt plains in these 4K clarity Moonscapes (start at 5:20 into the video – these are ocean bottom erosion craters, not ‘lava filled’). Our barren, now desiccated and ‘same isotope ratios’ gamete gave its very life in order to birth its offspring, a living Earth. But I digress. We will come back to this issue at the end of this article. This alternative construct on the Moon’s origin will be the subject of another article sometime in the future for The Ethical Skeptic.

All speculation aside, a more astounding aspect of this timeline is the relative quickness by which life appeared on the newly formed Earth-Moon binary. Moreover, it is not the mere appearance of life itself which stands as the most intriguing aspect of this event for me. Not to take the emergence of life for granted, but certainly one can be forgiven for pondering an even more challenging issue: the very quick appearance of the complex code upon which all Earth life is based, the DNA Codex – or what is also called the ‘Standard Code’.7 Be forewarned however, this sudden and early introduction of a fully functional and logical Standard Code is not the only mystery encompassed therein.

Peculiar Schema Surrounding the DNA Codon Second Base

Our genetic code consists of four types of DNA nucleotide (A-adenine, C-cytosine, T-thymine, G-guanine) structured into triplet sequences (XXX, or for example ‘ATC’) called codons.8 To put it another way, this simply means that the ‘alphabet’ of DNA only contains 4 letters, and each ‘word’ in the DNA lexicon only possesses three letters (or ‘bases’). This leaves a set of 64 possible combinations, or words (called ‘codons’ or ‘slots’ in this article) in the language of DNA. More specifically, the set of all possible three-nucleotide combinations is 4 × 4 × 4 = 64, which comprises codes for 19 amino acid molecules, a sulfur-bearing methionine start code (ATG), and three silence-stop codes (TGA, TAG, TAA). One can observe the breakout of this codex by 32 left and right-handed protein doublets (64 ÷ 2) in the graphic I created here: Third Codon Letter Left and Right-Handed 32-Slot Apportionment.
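As a quick sanity check of the combinatorics just described, the following minimal Python sketch simply enumerates the 64 possible codon ‘words’ and confirms that the start and stop codes named above sit among them. It is an illustration of the counting only, and implies nothing about slot assignment.

```python
from itertools import product

BASES = "ACGT"                                   # the four-letter DNA alphabet
codons = ["".join(c) for c in product(BASES, repeat=3)]
print(len(codons))                               # 4 x 4 x 4 = 64 three-letter 'words'

STOPS = {"TGA", "TAG", "TAA"}                    # the three silence-stop codes
START = "ATG"                                    # the methionine start code
print(all(s in codons for s in STOPS), START in codons)   # True True
```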

However, perhaps a better way to view the assignment of codon slot to specific amino acid molecule is through examining the full 64-slot breakout by amino acid or control codon (with duplications). That breakout can be viewed in the graphic I created here: DNA Codon Slot by Nucleon Count. As a note, I like to create my own graphics from scratch. One will find that one does not truly understand a subject until one has accurately described it in detail for oneself. The errors encountered along such a journey typically demonstrate that one did not possess nearly the grasp of the issue one might have thought upon first study. One will also find that one’s ability to retain the material in detail is enhanced through such an exercise. While similar in nature to the Feynman technique, my chosen approach differs from Feynman’s in that it does not put on the pretentious charade of packaging ideas for ‘sixth graders’, which is in reality an attempt at celebrity-building. In similar ethic one will not find me playing bongo drums for a doting media. Such buffoonery exemplifies why ignorance around the DNA codon schema is ubiquitous today. Remember these tenets of ethical skepticism:

Deception is an attempt to make the complicated appear simple.
Accurate, is simple.

There is a thing called a Bridgman Point, below which to simplify something further is also to introduce critical error. Sixth-grade speak often resides below this critical threshold, and the entailed error escapes the simple minds of both the presenter and recipient. Such runs anathema to the philosophy of skepticism.

Nonetheless, the most astounding aspect of this latter breakout method (all 64 slots ranked by nucleon count) is the utter organization around which the codon-to-amino acid assignment is made. The DNA codon second digit (base) schema is akin to an organized and well-kept room, wherein even the items which are out of place are forced out of place for a specific purpose. When I assembled Graphic A below, it reminded me very much of the Resistor Color Band-Code codex we employed in Electrical Engineering classes in undergraduate school and in the assembly/soldering of circuit boards in Navy cryptography electronics. Bear in mind that the resistor 5-Band-Code engineer’s benchmark standard to the right (courtesy of Digi-Key Electronics Supply) bears less organization and symmetry than does the DNA codex in Graphic A below.

For this reason and many others, the Standard Code DNA Codex is sometimes referred to by a moniker assigned by Francis Crick, the ‘Frozen Accident’.9 However, what we will observe later in this article is that this event was not characterized by simply one frozen accident, but rather several highly improbable ones which concatenate into a single scope of events. Nonetheless, this organization/symmetry regarding the slot-to-nucleon-count schema, which leverages the second letter of the DNA codon, can be more easily viewed in Graphic A below.

It took me around ten years of grappling with this and falsifying probably 8 or 10 other linear inductive approaches to viewing it, to finally break this code through a deductive approach and winnow it out into its 33 components of logic and symmetry (A through Y below). The broken code can be viewed here: DNA Codex Broken by CTGA Nucleon N-O Stem and Control Slot, and of course its matching Graphic:

Graphic A – DNA Codon Slot by Second and Third Base Matched to Assigned Amino Acid Nucleon Count – this frozen accident lacks the necessary 1. active presence of evolution, 2. chemical affinity feedback mechanism, and 3. method to resolve the nucleon count regression vs. NO2 vs. Complex NO vs. bilateral/stop/start symmetry affinity conflicts to be naturally selected – and therefore can only have been derived by deliberation alone. Whoever assembled this codex did not care that the presence of intent was discernible – an intent which in fact may serve as a demarcation of intellectual property, cryptographic genetic exclusion, and/or origin.

While most genetic scientists recognize the peculiarities entailed in the schema surrounding the second base of the DNA codon,10 few perhaps fully perceive its extraordinary logical structure along with the infinitesimally small possibility of the Standard Code having occurred (or even evolved) by accident. The reader should know that I presented this construct to a Chair in Genetics at a very prominent university years ago. That discussion was the first time he had ever heard of the idea or fully realized many of the oddities identified therein. He forwarded me a complimentary copy of the genetics textbook he had authored, which I treasure to this day. This was not a shortfall on his part, as I am sure that the domain bore ample professional challenge over the decades above and beyond this set of potential observations. Nonetheless, I remain doubtful (not skeptical) that this construct has been given adequate examination and fair due inside scientific discourse.

Numerous improbable-to-impossible idiosyncrasies stand out inside Graphic A above, an illuminating version of how to depict the schema surrounding the second letter (base) of the DNA codon. For example, a critical observation to note under this method of examining the schema is that there is no ‘third base degeneracy’ inside the Standard Code, as many commonly hold. The symmetry simply dovetails into more specialized and partitioned schema, bearing even more symmetry (a lower entropy state, not higher – as can also be seen in the G and T blocks in the chart to the right). Upon close examination, the 64-slot code’s being fully fleshed out is not necessarily the result of degeneration; it bears just as significant a likelihood that this ‘every-slot-occupied’ tactic is purposed to prohibit amino acid assignment degeneration in the first place. But one can only observe this by arranging the codex table into a C-T-G-A sequence for the final two bases (second and third). Once this is done, one can see that the symmetry organizes around the second base of the codon, and the third base simply expresses as a dovetailing of this order. This notion that the Codex features third base degeneracy is an idea containing an a priori Wittgenstein lexicon bias. Inference from such an assumption is unsound. This matter must be left open from the standpoint of skepticism.

There is no ‘third base degeneracy’ inside the Standard Code, as many commonly hold. The Codex symmetry simply dovetails into more specialized and partitioned schema as we incorporate the third base, bearing even more (occult) symmetry. If there is indeed any ‘degeneracy’ it involves the first base alone, as the second and third bases are highly organized inside this schema. This is the exact opposite of what evolution could have possibly produced.
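For readers who would like to reproduce this C-T-G-A arrangement for themselves, the short Python sketch below sorts the textbook standard codon table by second base, then third, then first (each in C-T-G-A order) and prints the resulting 64 slots alongside the nominal nucleon count (integer molecular mass) of each assigned amino acid. The sort key is my reading of the arrangement described above, and the nucleon values are standard molecular masses; neither is drawn from the author’s own tables.

```python
# A minimal sketch of the C-T-G-A re-sorting described above. The codon table
# and the nominal amino acid masses are textbook values; the sort key (second
# base, then third, then first, each in C-T-G-A order) is an assumed reading
# of the article's arrangement, not the author's own layout.
CODE = {  # standard DNA codon table
    "TTT": "Phe", "TTC": "Phe", "TTA": "Leu", "TTG": "Leu",
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "ATT": "Ile", "ATC": "Ile", "ATA": "Ile", "ATG": "Met",
    "GTT": "Val", "GTC": "Val", "GTA": "Val", "GTG": "Val",
    "TCT": "Ser", "TCC": "Ser", "TCA": "Ser", "TCG": "Ser",
    "CCT": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",
    "ACT": "Thr", "ACC": "Thr", "ACA": "Thr", "ACG": "Thr",
    "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "TAT": "Tyr", "TAC": "Tyr", "TAA": "Stop", "TAG": "Stop",
    "CAT": "His", "CAC": "His", "CAA": "Gln", "CAG": "Gln",
    "AAT": "Asn", "AAC": "Asn", "AAA": "Lys", "AAG": "Lys",
    "GAT": "Asp", "GAC": "Asp", "GAA": "Glu", "GAG": "Glu",
    "TGT": "Cys", "TGC": "Cys", "TGA": "Stop", "TGG": "Trp",
    "CGT": "Arg", "CGC": "Arg", "CGA": "Arg", "CGG": "Arg",
    "AGT": "Ser", "AGC": "Ser", "AGA": "Arg", "AGG": "Arg",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
}

NUCLEONS = {  # nominal (integer) molecular masses of the free amino acids
    "Gly": 75, "Ala": 89, "Ser": 105, "Pro": 115, "Val": 117, "Thr": 119,
    "Cys": 121, "Leu": 131, "Ile": 131, "Asn": 132, "Asp": 133, "Gln": 146,
    "Lys": 146, "Glu": 147, "Met": 149, "His": 155, "Phe": 165, "Arg": 174,
    "Tyr": 181, "Trp": 204,
}

ORDER = {base: rank for rank, base in enumerate("CTGA")}  # C-T-G-A, not A-C-G-T

def ctga_key(codon):
    """Sort primarily on the second base, then the third, then the first."""
    return (ORDER[codon[1]], ORDER[codon[2]], ORDER[codon[0]])

for slot, codon in enumerate(sorted(CODE, key=ctga_key), start=1):
    aa = CODE[codon]
    mass = NUCLEONS.get(aa, 0)          # stop codons carry no nucleon count
    print(f"{slot:2d}  {codon}  {aa:<4}  {mass}")
```

Each run of 16 consecutive slots in the printout shares a common second base, which is the block structure that items A and B below refer to.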

Moreover, this code could not have evolved, because the code has to be both struck and immutable, before reproduction can function to produce evolution in the first place. This Codex is the proverbial egg in the ‘which came first – chicken or egg’ paradox.

Abductive thinking and lexicon biases of this nature impede our ability to conduct science around the issue. Why are so few geneticists truly familiar with this material and why do only a paltry number of studies exist to date on this very important and astounding construct? The issue incorporates a feature of the philosophy of logic which I call ‘critical path’. One of mankind’s greatest skills is the ability to deliberate a subject at great length, yet still manage to avoid anything of a critical path benefit throughout its discourse (aka ingens vanitatum). DNA is no different.

Critical Path Question: Could our Second Base CTGA-N Codex have developed outside any context of intent?

Now that I have buried the lede far enough, let’s get right to the key point then. The likelihood of this remarkable schema occurring inside a solely accidental context, I have calculated to be 1 chance in 3.42 x 10^27 (3.42 octillion). One can view the reference table and series probability formulas employed for the combinations in this first table, Table 1 – Probability Reference Table. These reference probabilities are then combined into a series of calculations, culminating in the remote likelihood of Item B below, and then finally a preliminary estimate of the codon second letter schema’s occurring naturally, based upon just three primary factors (the remainder are included in items A through Y later in this article):

Item B. Possibility of a combined set of two (y) blocks of 16 (x) contiguous second base assignments in increasing nucleon count order, along with 7 other blocks of 4 accomplishing the same (see Table 1 – P(x,y) = ((1 – P(x))^O)^y, or P = 7.12 x 10^-17). Please note that we only count two contiguous 16-slot blocks as coherent, not four, because of uncertainty in the other two. This too is for conservancy.

Item C. The likelihood of having such a structure result in symmetry between start and first stop blocks, and in addition displacing the second stop-block codes to the end of the series (P(x) = 0.00024), and

Item E. The likelihood of having an entire block of second base amino acids be composed solely of NO2 isomers, given that there is no chemical feedback from the amino acid to the codon development/assignment, along with the fact that the Standard Code is itself a prerequisite in order to have evolution in the first place (P(x) = 0.00002).

This results in a compounded probability of 3.42 x 10^-27. Remember, for conservancy, we have chosen to only quantify items B, C, and E from the list of idiosyncrasies below. When I ran the numbers using all items A through Y, the calculations just compounded to outlandishness. Items A and B as well were simply two sides of the same coin, so all my trials of calculation have only used A or B, but not both. These three evaluation factors seemed to be the most compatible with a reasonable quantification effort, and to my mind offered a smaller range of potential error. My belief is that this notion of degeneracy, and poor portrayals of the Standard Code, have blinded most of science’s ability to observe this anomaly to its full extent. As a former intelligence officer, this is what I have been trained to do: spot the things no one else has. I have made a very successful career of this skill.

However, given that our Standard Code is not the ultimate code which possibly could have developed, and indeed there are most likely up to 1 x 10^4 codes of equal or superior optimization,11 a net subtraction of 10,000 adjusts the final probability tally (and reduces it by one significant digit). But the reality is that the adjustment is minute. The net remoteness of the standard code would still range at just about 3.4 x 10^-27, with even these 10,000 possibilities removed (they would be subtracted, not factored in this case). The combined series of calculations can be viewed in this second table, Table 2 – Probability Calculation Table for 3 Factors. Those calculations are conceptually depicted in Graphic B below.

Graphic B – Standard Codex Ranges Well Into Impossibility and Well Beyond NO2 and Nucleon Count Affinity Conflicts – which force manual assignment.

The abstract in Graphic B still places the Standard Code 99.9999…% of the way along the journey from a more likely version of the code towards an ultimate, yet at the same time highly unlikely, ‘perfection’. As you examine the chart above, note that the Standard Code is not structured to flag attention with that 3.42 x 10^27 beacon of perfection, but rather a much more tantalizing 3.42 x 10^27 – 10,000 efficacy woven into a fabric of stealth (intent?). While this is only a suspicion, I cannot shake the perception that this pattern is not meant to be an ultimate optimization in the first place (although it is abundantly close), but rather a watermark. A branding if you will, identifying the species’ trademark/point of origin (ownership?), regardless of what the creature has evolved into at any point in the future. This leaves perhaps tens of thousands of other standard codes which might be usable in other ‘DNA-based life circumstances’.

This anomaly resides coincidentally at a very opportune Indigo Point inside inflection theory, bearing a raison d’être in that once the code is struck, it never changes, nor does it evolve. What I have found in my career is that benefit stakes from coincidences/uncertainty seldom go uncaptured. Look back at Graphic A again now and see if such an idea makes sense.

In other words, is what is contained in Graphic A a crypto-trademark? A cattle brand? Its branding iron being struck at the only point which functionally resides at the intersection of complexity and immutability inside a genome-in-common.

A lighthouse signature affixed to the lone uncompromising rock amidst the raging torrent of evolution.

Not merely serving as a brand, but moreover a crypto-codex. A Standard Code which would function simultaneously to prevent outside-crossbreeding (even with other DNA-based life), lay fierce claim to planet ownership, and yet enable a catch & release monitoring program to quickly identify interlopers into a planet’s (or series’ thereof) biosphere.

Granted this stack of ideas is highly speculative and skeptical neutrality upon its first hearing is certainly understandable. I would suggest the reader hold such a line of thinking (per hoc aditum) in suspension (epoché) and continue reading through Items A through Y below.

The chart to the right is extracted from the footnoted Koonin/Novozhilov study and expresses those authors’ visualization of this penultimate concept. I don’t agree with those authors’ study conclusions but I applaud their boldness, career risk, and critical path work on this matter. Click on the thumbnail in order to obtain a larger image of the chart. The Standard Code is represented conceptually by the blue dot, while the ultimate optimal code would reside at the tip of the tallest peak in the chart. Indeed the Standard Code represents almost codex perfection. Something mere chemical and metabolic affinities (even if they were plausible, which is highly doubtful) cannot come close to explaining, much less attaining.

One should note that various constructs (not true hypotheses) exist as to chemical/metabolic connections between DNA codex slot number and nucleon count.12 However, we discount this because the purported chemistry involved would have had to select which chemistry to serve, between nucleon count and the N-O stem of each amino acid molecule – serving a mix of one or the other in terms of chemical affinity, but not both perfectly at the same time. The selection here transcended chemical affinity roles and selected correctly for both (the blue bars in the above chart). Both the one-way aspect of gene expression and the difficulty in selecting correctly for two conflicting chemistries at the same time deductively strengthen a logical-only scenario. Such force-to-convention speculation ends up constituting only an ad hoc apologetic.

Moreover, regarding this rather extraordinary schema, several additional detailed observations may be made. Note that only the items in bold/red were used in the actual probability calculations.

A. There exists a slot-order to nucleon count linearity bearing a coefficient of determination (R²) of .971 within codon groups 1 – 48 and 49 – 64 (.5757 overall), and against the second base blocks that are formed by this progression. It is not the linearity itself which is a tail event, as anything which is sorted by magnitude can take on linearity – but rather it is the cohesive groupings by second base of the DNA codon which result from this linear series arrangement, which must be quantified inside a salient probability function (see the bottom right-hand side of Graphic C below). Below, one can observe where I ran a Monte Carlo simulation of combined possibilities for the sequence of 60 amino acid and 4 control-code slots. The distribution function and degrees of freedom in random assignment of codon to slot fail miserably to establish an orderly/logical relationship with amino acid nucleon count or blocks of same second base codons.

On iteration 335 I got a coefficient of determination of R² = .1087, which was relatively high as compared to the previous trials. It was there that I framed a rudimentary top-end and ‘degrees of freedom’ for this apparent Chi-squared arrival distribution curve. Despite the remarkable nature of the iteration 335 coefficient, it still resided a woeful 1 x 10^-27 in distance from the existing Standard Code, as measured from a more likely code. Had I continued to run these iterations in the Monte Carlo simulation, it would have taken me 10.8 sextillion years to finally hit upon a code as remote in possibility as is the DNA Standard Code upon which Earth life functions. That is under a condition wherein I purposely try to encounter such a code with an iteration every three or four seconds on average. Think how long this endeavor would take if I were just randomly hitting keys throughout the exercise, and obtained one cycle of the Monte Carlo simulation once every 1,000 years. Remember that abiogenesis only had one shot at producing the Standard Code through such randomness – as it cannot ‘evolve’.

Note that this Monte Carlo simulation is not used in the probability calculations. It is run as part of the set of exercises which might serve to elicit something missing, falsify the main thesis, provide relative perspective, or stimulate different thinking around the matter. It bears a fascinating result nonetheless.
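For the curious, the following is a minimal sketch of this style of Monte Carlo exercise, not a reproduction of the author’s own model: it randomly assigns one of the 20 standard amino acids to each of 64 slots, reserves three slots as stop codes, and measures the linear fit (R²) between slot index and the assigned amino acid’s nominal nucleon count. The nucleon values are standard integer molecular masses; the assignment procedure and iteration count are assumptions made purely for illustration.

```python
import random

# Nominal nucleon counts (integer molecular masses) of the 20 standard amino
# acids. These are textbook values, used here as a stand-in for the article's
# "nucleon count" metric.
NUCLEONS = {
    "Gly": 75,  "Ala": 89,  "Ser": 105, "Pro": 115, "Val": 117,
    "Thr": 119, "Cys": 121, "Leu": 131, "Ile": 131, "Asn": 132,
    "Asp": 133, "Gln": 146, "Lys": 146, "Glu": 147, "Met": 149,
    "His": 155, "Phe": 165, "Arg": 174, "Tyr": 181, "Trp": 204,
}

def random_codex():
    """Assign a random amino acid to each of 64 slots; reserve 3 slots as stops."""
    slots = [random.choice(list(NUCLEONS)) for _ in range(64)]
    for i in random.sample(range(64), 3):   # stop codons carry no nucleon count
        slots[i] = None
    return slots

def r_squared(xs, ys):
    """Coefficient of determination for a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy) if sxx and syy else 0.0

best = 0.0
for _ in range(335):                        # 335 iterations, as in Graphic C
    slots = random_codex()
    pairs = [(i, NUCLEONS[aa]) for i, aa in enumerate(slots, 1) if aa]
    best = max(best, r_squared([p[0] for p in pairs], [p[1] for p in pairs]))

print(f"best R^2 over 335 random codices: {best:.4f}")
```

Random assignment of this kind rarely pushes R² above the low tenths, which is consistent with the contrast the author draws against the .971 within-group figure quoted in item A.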

Graphic C – Monte Carlo Simulation of 20 Amino Acids and 3 Stops into 64 Logical Slots over 335 Iterations

Continuing from this critical point, we observe even more idiosyncrasy in terms of items B through Y below.

B.  All codons are grouped into contiguous blocks of 16 logical assignments, and when sequenced C-T-G-A, for both the second and then third bases of the Codex, produce a split linear progression against nucleon count of 5 discrete groupings (2 overlap in G). Only two blocks are evaluated for probability under this analysis.

C.  Stop code assigned to slot 64 with two stop-codes being grouped into a contiguous pair, when stop-codes bear no chemical feature from which affinity may ostensibly originate. Third stop code bears symmetry with the methionine start code.

D.  Use of an amino acid (methionine) as sequence start code and in contrast, silence as the sequence stop code, two distinct places both of which are logically assigned and not remnants of failed chemistry.

E.  Assignment of solely hydrophobic NO2 isomers to the T-coded block.

F.  Methionine start code and tryptophan-block stop code bear mirrored symmetry in the T and G blocks, with each spanning a distance of 8 slots from the start of the block, 16 slots apart and each 24 slots inward from the first and last amino acid assignment. Neither nucleon nor N-O affinity can generate this type of symmetry.

G.  A-coded amino acid group block employs all doublet code assignments.

H.  C-coded amino acid group block employs all quadlet code assignments.

I.  The C-T-G-A sequence which produces symmetry at the macro level, also produces partitioned symmetry at the individual codon sequence level, which is also optimized with a C-T-G-A sequence through all 64 slots.

J.  Absolute necessity of all three codon digits for any type of basis for functional life/evolution/Archaea. There existed neither the time nor logical basis for the Standard Code to have functioned upon a two digit (XX) codon, before adding the third digit as a suffix. There are critical amino acids and controls which depend upon a specific 3-digit codex in Archaea, our oldest form of life on Earth.

K.  Inability of the Standard Codex to derive from a process of evolution.

  1. The code is a prerequisite for evolution itself, so it could not have evolved.
  2. The chemistry (if such chemistry is ever found) could serve nucleon count or N-O stem affinity, but not both. Only logical assignment could balance both requirements without fail and achieve symmetry at the same time.
  3. Evolution would have more likely selected for a simpler array of assignment (32 slots, etc.). A suggested early-on two digit Codex could explain part of this, but cannot explain a start and stop-code symmetry which depends upon 3 digits, nor the short amount of time in which the three digit code arose (in something which does not change).
  4. If the Code evolved – this evolution should have continued across 4.5 billion years (after originating in its entirety in less than 30 million) – yet it did not.13
  5. See D. above. Evolutionary changes in DNA occur through the accretion/change of information linked to function and not through the leveraging of silence (absence of information).

L.  Control start and first stop doublets bear common symmetry and regression fit – and as well, both associated codon molecules are adjusted by isomer in order to adhere to this symmetry and fit. Sulfur suffix applied to methionine start-control codon – boosts it into a position of correct regression linearity by doublet slot. Tryptophan bears an isomer appendage which is essentially the complexity equivalent of cysteine, thereby reducing it into the correct regression linearity control doublet slot as well.

M.  All complex NxOx distal amino acid codes grouped together after the first stop code, with N4O2 assigned only under guanine block and remainder grouped into the adenine block.

N.  Featuring no true grouping of odd amino acid counts (3, 5, etc) which should have occurred in an affinity or other unguided scenario.

O.  The odd molecules tryptophan (C11H12N2O2) and methionine (C5H11NO2S) are the only ones assigned to singlet slots – and they just happen to both bear symmetry and to both be paired with control codons. These doublets are then placed symmetrically from the beginning (8) of each of their respective blocks, 16 slots apart, simultaneously with symmetric distance (24) from the outer edges of the C-T-G-A block as a whole. This is an extraordinary feat, given that chemical affinity not only would not have resulted in this, but moreover would have prevented it from occurring in the first place (were affinity involved at all, either nucleon or N-O stem).

P.  T and G blocks possessing symmetrical doublet assignment patterns.

Q.  The assignment of all 64 logical slots when life bore a much greater probability of beginning with a far less extensive codex size. Akin to evolution starting with a snail, instead of Archaea. However, once the context of very-quickly-assigned logical symmetry is broached, the possibility arises that the 64 slot code being fully fleshed-out (every slot assigned) is not the result of degeneration, but rather it is precisely a tactic to prohibit degeneration in the first place. This is a starkly different basis of understanding this code.

R.  Control start and stop codes solely employ left-handed codon suffixes. (Note: this is logical only, not the same as molecule chirality)

S.  The positive correlation between the number of synonymous codons for an amino acid and the frequency of the amino acid in living organism proteins.14

T.  Maximization of synonymous point mutations by means of the third letter of the codon – the letter which also bears the greatest frequency of mutation.15 (A small counting sketch of this property appears just after this list.)

U.  This codex was not merely a ‘Frozen Accident’ – which is highly improbable as an event in and of itself. This accident moreover selected for a 1 in 3.4 x 10^27 optimized configuration, on its first and only try.

V.  There were no evolutionary trials or precedents from which to strengthen the code’s logical structure as exhibited in items A through U above. Our very first version of life, Archaea, depended upon this full logical structure in order to exist.

W.  Polyadenylation (the addition of a poly(A) tail to an RNA transcript) uses the best option of the four bases from which to form the inert tail of RNA. The Adenine block both consists of all doublet assignments, increasing the likelihood of accidental transcription producing amino acid gibberish, and is as well the 2nd base block which contains two stop-silence codes, increasing the likelihood that an end-amino-silence will serve to truncate and end the polyadenylation tail.

X.  Next to last, the item which should also be quantified in the probability calculations, but for which I do not know of an accurate way to estimate its probability arrival function: each of the regression lines which describe the split symmetry of slot versus nucleon counts features four things:

  1. both follow a 3:1 m slope (y = mx + b), and
  2. both trend lines intercept the scatterplot right where the control codon is positioned, wherein these positions themselves also bear symmetry relative to the codon block boundaries and T-G symmetry itself, and
  3. adjustments were made to the molecules associated with the control code, cysteine (C3H7NO2 appended with S), tryptophan (C3H7NO2 appended with C8H5N), and methionine (C5H11NO2 appended with S) in order to achieve this positioning (see DNA Codex Broken by CTGA Nucleon N-O Stem and Control Slot), and finally the most incredible feat,
  4. Tryptophan (C3H7NO2 appended with C8H5N), when taken in total, and broken into its molecular constituents – fits BOTH the C-T-G and G-A symmetry and linearity, connecting both groups with a common anchor point in the form of their common stop code assignment (amino acid silence).
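Regarding item T above, the degree to which the third letter absorbs synonymous point mutations can be checked directly against the textbook codon table. The short sketch below counts, for each codon position, how many single-base substitutions leave the assigned amino acid (or stop) unchanged; the counting method is my own illustration rather than anything drawn from the sources the author cites.

```python
# Count synonymous single-base substitutions at each codon position, using the
# textbook standard DNA codon table. Illustrates item T: the third position
# tolerates by far the most synonymous changes.
CODE = {
    "TTT": "Phe", "TTC": "Phe", "TTA": "Leu", "TTG": "Leu",
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "ATT": "Ile", "ATC": "Ile", "ATA": "Ile", "ATG": "Met",
    "GTT": "Val", "GTC": "Val", "GTA": "Val", "GTG": "Val",
    "TCT": "Ser", "TCC": "Ser", "TCA": "Ser", "TCG": "Ser",
    "CCT": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",
    "ACT": "Thr", "ACC": "Thr", "ACA": "Thr", "ACG": "Thr",
    "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "TAT": "Tyr", "TAC": "Tyr", "TAA": "Stop", "TAG": "Stop",
    "CAT": "His", "CAC": "His", "CAA": "Gln", "CAG": "Gln",
    "AAT": "Asn", "AAC": "Asn", "AAA": "Lys", "AAG": "Lys",
    "GAT": "Asp", "GAC": "Asp", "GAA": "Glu", "GAG": "Glu",
    "TGT": "Cys", "TGC": "Cys", "TGA": "Stop", "TGG": "Trp",
    "CGT": "Arg", "CGC": "Arg", "CGA": "Arg", "CGG": "Arg",
    "AGT": "Ser", "AGC": "Ser", "AGA": "Arg", "AGG": "Arg",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
}

BASES = "ACGT"
synonymous = {1: 0, 2: 0, 3: 0}
total = {1: 0, 2: 0, 3: 0}

for codon, aa in CODE.items():
    for pos in range(3):                              # 0-indexed codon position
        for base in BASES:
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            total[pos + 1] += 1
            if CODE[mutant] == aa:                    # substitution is synonymous
                synonymous[pos + 1] += 1

for pos in (1, 2, 3):
    print(f"position {pos}: {synonymous[pos]} of {total[pos]} substitutions synonymous")
```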

The abject unlikelihood of all this occurring by happenstance, and on essentially the first try, pushes the bounds of Ockham’s Razor, rationality, and critical thinking to the point of accepting the absurd frozen accident construct (it is not a mature hypothesis) merely because it is ‘simple’ to understand. Such an approach is no less ad hoc than ‘God did it’ (equally absurd and simple), and only becomes a religion when the idea is enforced as truth/science. This of course serves to introduce the construct which just happens to feature all of these foibles.

Fast and Simple: Grant Unto Me Six Miracles and I Can Explain All the Rest

Finally, regarding ‘Item Y’ if you will, perhaps what is most daunting is the short amount of time which was allotted for such a code to ‘freeze’ into existence in the first place, along with the immediacy in which it happened upon a hostile and 60 million year-old planet – a mere 20 million years after the Moon was ‘ejected’ (more than likely ‘arrived’ with life’s DNA Codex already intact) from Earth by a trillion-to-one collision with the hypothetical planetary body Theia.16 The astute reader should have noticed by now that science possesses multiple ‘trillion to one’ happenstance claims, all compounding inside this argument. One can throw tantrums all they want about ‘irreducible complexity’ (which this is not) being ‘debunked’ (whatever either of those terms may mean), but those who issue such memorized dross must recognize that the theory they are defending is even worse. Our reliance upon the absurd in order to cling to a religious null hypothesis is becoming almost desperate in appearance. This upside down condition of ignorance is called an Ockham’s Inversion.

Ockham’s Inversion

When the ‘simplest explanation’ is not so simple after all. The condition when the rational or simple explanation or null hypothesis requires so many risky, stacked or absurd assumptions in order to make it quickly viable, that it has become even more outlandish than the complex explanation it was supposed to surpass in likelihood.

Now let us hearken back to those four key timeline events which we tabled at the outset of this article. Let’s consider the highly stacked set of unlikely elements and their probability, which together compose this Ockham’s Inversion in our current understanding. To wit:

The Six Miracle Theory

Moon created through Earth collision with Theia                                     1 in 1 x 10^12
Moon and oceans arrive so soon after Earth formed                               1 in 1 x 10^3
Occurrence of Francis Crick’s DNA Codex ‘Frozen Accident’                 1 in 1 x 10^9
Immediate arrival of ‘Frozen Accident’ after Moon and oceans               1 in 1 x 10^3
Infinitesimal possibility of Standard Code DNA codon schema               1 in 3.4 x 10^27
60 M year duration window for these events in sequence                       1 in 1 x 10^6

Compounds up to 1 chance in ~3.4 x 10^60
Planets in Milky Way ~4.0 x 10^11 stars x 5 planets = 2.0 x 10^12

A reduction in magnitude to 1 chance in 1.73 x 10^48 in possibility, in one galaxy alone…
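As a simple check of the arithmetic behind this compounding, the sketch below multiplies the six quoted odds together and then divides by the estimated count of candidate planets in the Milky Way. The figures are taken directly from the list above; minor rounding differences against the quoted 1.73 x 10^48 result are to be expected.

```python
# Back-of-the-envelope reproduction of the 'Six Miracle' compounding above.
# All six odds are the article's quoted values, not independent estimates.
odds = [
    1e12,     # Moon created through Earth collision with Theia
    1e3,      # Moon and oceans arrive so soon after Earth formed
    1e9,      # occurrence of the DNA Codex 'Frozen Accident'
    1e3,      # immediate arrival of the 'Frozen Accident' after Moon and oceans
    3.42e27,  # Standard Code DNA codon second-base schema
    1e6,      # 60 M year duration window for the sequence
]

compounded = 1.0
for p in odds:
    compounded *= p
print(f"compounded odds: 1 in {compounded:.2e}")           # ~3.4 x 10^60

candidate_planets = 4.0e11 * 5                              # ~2.0 x 10^12 planets
print(f"per-galaxy odds: 1 in {compounded / candidate_planets:.2e}")  # ~1.7 x 10^48
```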

In other words, there are not enough candidate planets (2 x 10^12) in the entire Milky Way to pull off this incredibly-remote-in-possibility six-miracle event by accident. This does not necessarily suggest the intervention of aliens or Gods (each fallaciously ad hoc); however it does imply that something exceptional and beyond our current frame of understanding has occurred behind the appearance and ascendancy of life on Earth. What we seldom recognize is that explaining all this as devoid-of-intent through the luxury of ‘4.5 billion years’ (only 60 million years in reality), as it turns out, is just as fallaciously ad hoc as are the God and alien notions.

There exists no argument inside evolution which serves to falsify nor dismiss intent. In fact, both the structure and unlikely nature of the CTGA-N Standard Codex deductively insist that intent must be present at the inception of a DNA base-code.

Just as is the case that I don’t have to declare a murder or accident when I encounter a dead body, citing the presence of intent does not mean I must also identify the agency wherein that intent originated. All I have to know, is that I have a dead body and that something extraordinary has transpired. We have a dead body here – we have intent. This is science. Plugging one’s ears, refusing to examine the corpse, and a priori declaring an answer which fits with one’s religion, is not science. I have no doubt that whatever the agency is that precipitated this codex, it is a natural feature of our realm. I simply regard this feature to likely be one we don’t yet know about nor understand – an agency which does not require our permission in order to exist.

It is of the most supreme irony that one third of the known history of the Universe has passed since this gift, and all we have produced with it are some violent fashionista monkeys.

One or more of our grand assumptions about all origins is more than likely a colossal error.

By the same token, it is one thing to claim a single grand accident under the idea, ‘Grant me one miracle and I can explain all the rest’. We stomach such miracles of science all the time, at the very least to serve as placeholders until we attain better information. However it is another thing entirely to demand in the name of science, six miracles inside a tight sequence, in order to protect one’s religious beliefs. That is where I respond ‘Not so fast’.

It is not that which you parrot, but rather that which you forbid examination, which defines one’s religious belief.

What is it the skeptics always say about ‘Occam’s Razor’, ‘The least feature-stacked possibility tends to be the correct one’? Well this is one hell of a stacked set of features which support our current null hypothesis around the appearance of life on Earth (abiogenesis). In any genuine scientific context the hypothesis that the Moon brought us our crust elements and life itself, making Earth what it is today, would be foremost in our theory. Instead, we have chosen the Six Miracle Theory. However, why such a fortuitous event occurred in the first place, is a matter of another critical path of inquiry altogether.

In order to advance as mankind, we will need to open our minds to possibilities which address, not mere ‘gaps’ in our knowledge, but rather vast barren domains of Nelsonian ignorance. Unfortunately our Wittgenstein definitions and religious assumptions regarding the appearance of life on Earth are serving to bias us into an extreme corner of statistically remote plausibility, from which we staunchly refuse to budge.

The Ethical Skeptic, “The Peculiar Schema of DNA Codon’s Second Letter”; The Ethical Skeptic, WordPress, 24 Feb 2021; Web, https://theethicalskeptic.com/?p=48816

And I Have Touched the Sky: The Appeal to Plenitude Error

It is not that the contentions founded upon an appeal to infinity are necessarily and existentially incorrect, rather simply that the appeal itself is premature under the definition of what constitutes good science. So I replaced the word god with the word infinity – OK, good; but have I really accomplished science through such an action? Wittgenstein challenges this notion. The contexts in which ‘infinity’ is abused as an obvious scientific alternative or, worse, as an apologetic employed in order to leverage social conformity (pseudoscience), are outlined inside what is called the Appeal to Infinity (or Plenitude) error.

Don’t get me wrong, I have both pondered the mathematical, scientific and philosophical ramifications of the concept of infinity in our observable universe, and frequently used such precepts of expansive/endless domain as a justification behind why incredibly unlikely things are observed to have occurred. (Please note that the context of ‘infinity’ here includes use of ‘suitably large numbers’ which might allow for a special pleading instance not constituting infinity, yet still remain suitably large enough to accomplish the same goals.) Such an approach is not an invalid domain of scientific reasoning as the basis for the beginning of hypothesis formulation – but neither does such a rationale set constitute finished science according to Popper (nor in reality Wittgenstein either). Obviousness does not stand tantamount to verity for Wittgenstein, nor does simply the formulation of an idea stand tantamount to hypothesis for Popper. Infinity (also known as ‘plenitude’-based theory), as an explanatory construct rendering infinitesimally unlikely things as now likely (setting aside the existential nature of its mathematical and philosophical uses), only serves as a placeholder inside science – one which can hopefully someday be matured into a truly scientific hypothesis, but a placeholder nonetheless.

‘You’ are a trivial happenstance wrought through an infinity of possibility; yet upon this infinite basis ‘you’ could not possibly have happened before, nor sustain, and can never happen again.

    ~ The Existential Nihilist employs the concept of infinity as a hypocritical appeal to both special plead and deny in the same breath

I remember fondly, one night staring out the back sliding glass doors of my childhood home, my father engrossed inside his nightly television routine nearby in the den of our house. I was seven years old; to my recollection the leaves were colorful and newly removed from the trees surrounding the house, which we had also newly occupied. Our pregnant cat scurried urgently about, examining every nook and cranny in the entire house for some amazing plot on her mind. This late hour and bare tree condition allowed me to observe and ponder again the nighttime sky for the first time in several months. Summer was ending and that exciting cyclical period of life where I was able to stay up past dark had commenced once again. I commented to my father as I stared through the glass doors at the nascent night sky, “Dad, if there is endless time, and endless stars, why would the sky not be daylight all day and all night?” Wow, consider the possibilities, playtime 24 hours a day – except for during that horrid waste of time they forced us into, called school. I hated school.

School was the place where uber-rules-followers used social demeaning as a tactic of class stratification (not that I used those words then). A tactic which some immature instructors even bought into as well. I was barely a C-average student every year of my young life until the day that I scored at a college junior year level in science, in my fifth grade achievement tests. Again in seventh grade I scored a perfect score on the science achievement tests. After asking if I had cheated (both times), my school finally broached the idea that perhaps I needed to be taught in a different way. So ended the track of memorizing spelling words and formulas and facts, and thus inaugurated the track of pursuing projects, ideas, goals and research. The spelling, facts and Laplace Transformations simply fell into place along the way. Don’t stand over me, hands on hips (figuratively) forcing me to make journal entries. Rather allow me to explore my passions through journalism, and I will teach you how to write a journal. Don’t teach me facts about the Punic Wars, let me model the Naval Battles and how they were fought, and maybe could have been prosecuted even better. Tolstoy is a far better teacher of grammar than is sentence diagramming. Anna Karenina’s winding family hierarchy bore with it kind gifts of Russian-challenged complexities in English language, both in logical calculus and structure. Such folding of reality inside out transitioned me from January of my soul into vernal brightness – an evolution of elegant whisper kissing my forehead and changing my life, forever.

I was invested from a young age into ideas, and not simply social protocols and procedures. This is part of my nature as a philosopher. Thankfully there existed standardized tests in those days, or I would have been relegated to a dunce track. No, this perception on my part is not in any way seeking to impugn specific jobs nor career tracks, as fake skeptics are wont to suggest; rather merely to point out that such a track would have been unfulfilling to me. All a result of my failure to comply with the standard mold they so sought in education.

My dad apparently regarded my sliding glass door observation to constitute a pretty astute question. He was a trial lawyer and enjoyed skills in the art of argument. But he always plied his wisdom with me by means of a Folgerbergian ‘thundering velvet hand’. I learned the nature of argument from him, and him alone. He calmly replied “well perhaps there is not endless time and/or there are not endless stars?” I watched the sky for some time before being rushed off to bed by my mom; petitioning the same question to her, to which she replied “God made just the right number of stars and the right amount of time, so that you can enjoy the night sky.” What a great answer. That made sleeping so much more a pleasant experience. Wow, God set all this up just for us.

For the World is Hollow and I have Touched the Sky

So there we had it. The three alternatives inside of which I was imprisoned for the next 15-some-odd years of my life. There are finite sets of stars and time, god set up the stars and time just right, or – maybe the assumptions which I brought to the argument were incorrect in the first place. It was probably around age 12 or so where I began to protest against the concept of finite-ness as compared to infinity. I often quipped to my eye-rolling buddies in high school (I had been moved a year ahead of my normal age group) – ‘The only thing less palatable than infinity, is finite-ness.’ I considered the idea that, once existence was observed, then infinity was a fortiori. For how could one then truly define a boundary, much less find it? Such a boundary was rendered absurd in an existential context, surely only a boundary-state (a brane or transition if not) and not indeed the end of infinity. “For the World is Hollow and I have Touched the Sky” was one of my favorite Star Trek episodes (although by this time well into syndication), not only from the perspective that Dr. McCoy got it on with some hott alien chick, but also because this issue was touched. In the plot, a man crawls to the end of the ‘sky’ and touches it, only to find it a tactile boundary, a dome of deception – the sense of which drove him to an insanity of just deserts for violating the strictures of the ‘god’ which ruled their planet and forbade such arrogance as asking questions.

One should bear in mind that in certain contexts, an appeal to infinity is no more scientific than is an appeal to God. It just appears more scientific to the non-philosopher.

Perhaps the best take-away from that Star Trek episode is one which I carry to this very day.

The universe is at least in part incomprehensible. It is not that it is simply unmeasurable, as this claim actually constitutes an organic untruth. The fact is that we cannot seek to measure that which we do not comprehend in the first place. Our skills of measure are not as limiting for man, as are our skills of comprehension – that is our boundary, our dome of deceit after all, and not this fictitious field-of-measurability which nihilists claim they have identified.

Don’t get scared, just deal with and expect it. Embrace the unknown, embrace the absurdity. Do not substitute a pretense of knowledge as a methodology towards feeling better about the unknown – this is no different than appealing to God. Your mind does not yet possess the tools to survey reality from the right perspectives. Such were the whispers which reverberated in my mind each night. Accordingly began my track of leveraging the bookends of infinity (the absence of finite-ness) in contrast with the finite-ness of the hand of God. I roiled against such a bifurcation, again questioning infinity as an adequate argument against the ‘god’ argument which I had already come to reject in the ensuing years.

And here is why I reject infinity as a bifurcating excuse of science – situations wherein it is used simply as a lever and apologetic in opposition to those who make ‘God of the Gaps’ claims (which I equally eschew). An appeal to infinity (or a suitably large number/domain thereof) is NOT a scientific idea, for several reasons of demarcation:

  • Infinity does not bear a measurable nor definable set of features in an epistemological sense (the same as ‘god’ in reality under an Appeal to Elves argument)
  • Infinity is easier to propose and codify than it is to resolve, reduce, induce or deduce (this is the reverse trajectory from Wittgenstein defined science)
  • The antithetical idea can neither be defined nor tested, in order to offer Popper falsification of infinity as a null hypothesis
  • The concept of ‘infinity’ as the proposed hypothetical answer, answers the wrong question at hand under the scientific method. I am not burdened with answering the question ‘How did consciousness or life originate?’; rather, ‘How did the 3 letter codon basis of DNA-protein synthesis originate in Archaea on Earth so quickly?’ The former question is asked out of sequence and stands as a non rectum agitur fallacy. And this would be OK, if it were not used to beat people over the head in promotion of nihilism. What created life? God! Infinity! Yawn – these are the same exact unsinnig (Wittgenstein: nonsensical) answer.
  • Infinity moves quicker as a handle, a term, than it does as a true philosophical/scientific concept. The concept is not easily intelligible nor observable, however it can be sustained under a Wittgenstein set of knowledge features. This renders the concept of infinity vulnerable to being used as a baseball bat to enforce proper thinking.
  • It usually is intertwined into a practice of casuistry – the daisy chaining of premises which feature an appeal to plenitude tucked inside as the ‘one miracle’ which cleverly allows its proponent to then from there, prove anything they desire.
  • It can explain everything. Much as with Marxist class struggle theory and the Freudian psychology of sex, plenitude can explain the existence of anything and everything. This is not science.

All these things are anathema to sound science. It is not that the contentions founded upon an appeal to infinity are necessarily and existentially incorrect, rather simply that the appeal itself is premature under the definitions of what constitutes good science. But you will observe social skeptics appealing to infinity as if they are applying good science. This is not correct in the least. The contexts in which infinity is abused as an obvious scientific alternative or, worse, as an apologetic employed in order to leverage social conformity (pseudoscience), are outlined inside what is called the Appeal to Infinity error:

Appeal to Infinity (Plenitude)

/philosophy : pseudoscience : argument : error in logical calculus/ : a variation of an appeal to magic wherein the infinite size (or other suitably large scale) of the containing domain is posited as the all powerful but scientific rationale behind the existence of a stack of incredibly unlikely happenstance. A closure of scientific argument and refusal to consider other alternatives, especially when an appeal to infinity hypothesis is unduly regarded as the null hypothesis – and further then is defended as consensus science, without appropriate underlying reductive science ever actually being done.

Afford me the miracle of a plea into infinity, and I can prove anything or render anything absurd.
Bounce your critical logic off of a boundary condition and it will come back in a completely different form.
Love, art, music, consciousness therefore are absurd because I cannot reconcile them with my lexicon discipline.

Appeal to Lotto – Informing a person who has been harmed that their instance of harm is extremely uncommon (‘they won the Lotto, simply because someone had to win’ scam). A double appeal to infinity involving convincing a target regarding the personal experience involved in a remote happenstance. A million dollars just fell out of the sky in neat little stacks and then subsequently, you just happened to be the first person to walk by and observe it – two appeals to infinity stacked upon one another. Often used as a sales pitch or con job. Any instance where a ‘Law of Large Numbers’ is used as an apologetic to justify why a person was harmed or an extremely unlikely occurrence emerged.

Omnifinity – any argument which ascribes to a theoretical god such powers, knowledge and capability that the god in question is simultaneously able to do anything, and at the same time evade any level of comprehension on our part. This type of god is simply a placeholder argument (the ultimate special pleading) which is a parallel argument to the Infinity of the Gaps argument below. These are twin arguments, which contrary to superficial appearances, are the same exact argument. Neither one constitutes science.

Infinity of the Gaps – any argument where an appeal to infinity is simply employed to avoid the appearance of using a ‘god of the gaps’ explanation, when in reality the employment of infinity as the explanation for an infinitesimally remote chance occurrence is virtually as ridiculous or lacking in epistemological merit as is the god explanation – see Appeal to Elves. Infinity is as convenient a deep well of logic transmutation, as if god itself.

Infinity as Science – any argument where an appeal to infinity is spun as constituting a superior scientific explanation, in comparison to, and in an effort to avoid examining the underlying assumptions which precipitated the invalid perception/belief that an event or series of events are extremely rare or statistically next to impossible in the first place.

Boundary Semantics – pushing the meaning of a term (such as ‘proof’ or ‘knowledge’) into highly or specially plead realms of extreme definition variants, in order to provide a special pleading exception out of any or every argument. This is never a form of being semantically precise, despite a temptation to regard these types of extreme definitions as such. Rather it is simply a form of equivocation-based explanitude.

Explanitude – the condition where a theory or approach has been pushed so hard as authority, or is developed upon the basis of unacknowledged domain uncertainty (such as Marxist class struggle theory or Freudian psychology of sex), that it begins to provide a basis of explanation for, or possesses an accommodation/justification for every condition which is observed or that the theory domain promotes. A theory or approach which seems to be able to explain everything, likely explains nothing (Popper/Pigliucci).

I am sure that I will never truly understand either infinity or finite-ness. It makes it very difficult however, to stomach abiogenesis now, knowing that life began right on the heels of the Heavy Bombardment period for Earth – and no, an Appeal to Infinity falls hollow in the face of such a tightening window of finite-ness. Nor, however, will I fully gain an explanatory alternative to the prevailing beliefs of abiogenesis and consciousness. Such a sad state of affairs. But I can discipline my mind to be robust against falling prey to a misuse of infinity in the meantime. I can say “I do not know” or ‘I do not possess an adequate explanation/definition for that’ – and yes, be conducting real science.

I do not have to, nor will I as an ethical skeptic, pose inside such a costume of social conformity.

epoché vanguards gnosis

Ethical Skepticism – Part 6 – Say What You Mean and Mean What You Say

Social Skeptics bear the habit of hiding what it is they are seeking to promote. They accomplish this misrepresentation through terminological equivocation and the employment of club signature intimidation words. It behooves the Ethical Skeptic to understand what a person means when they utter certain words, and to ensure that the words are not being wielded as club weapons to enforce specious religious doctrines. It behooves the Ethical Skeptic to understand their own employment of such words, and to exercise the use of them in a context of ethical clarity; to disarm the social inference that such words mean more than they really do. To err in either regard is the source of all fanaticism.

Say What You Mean

Social Skeptics erroneously influence their acolytes by misleading them as to the meaning behind the terms they employ, and the nature of the underlying philosophy entailed. They believe that their use of the terms evolution, atheism and science affords them immediate scientific gravitas and a perch of correctness. When a person slings around the terms evolution, atheism and science, for me this is not tantamount to an immediate free pass into the graces of trustworthiness. I regularly encourage those among us vulnerable to Social Skepticism to understand what it is indeed that they mean by the terms they employ. Clarity is one of the consequentialist goals of Ethical Skepticism. If you represent critical thinking, science and rationality, then it would be hypocritical not to employ complex terms in a frame of meaningful reference. Otherwise the terms are simply used as a weapon of pretense and intimidation: ‘I use the words evolution, atheism and science – therefore anything I say is scientifically correct, and I have an entire cadre of bullies available to back me up if I so choose.’ This is not science; it is a hypoepistemological process of fraud.

As an Ethical Skeptic, if I am to continue inside a discourse of life and meaning with such a person, I need to know if they really understand what they are saying when they spout off the words so frequently uttered by their ‘mentors.’ I really need to know what they mean by:

Evolution – do they mean speciated diversity of life through the generational culling of environmentally stimulated allele changes?

Or …do they mean that life sprang up on Earth through abiogenesis and random primordial ooze, and that therefore we are simply a one-way genetic expression machine which has deterministically resulted in the fluke illusion of consciousness? The former fact is science; the latter argument is a highly separate religion – often protected by and conflated inside the club weapon word ‘evolution.’

Atheism – do they mean a personal ethic of not commenting or concluding around this undefinable construct called ‘god?’

Or …do they mean that they hate (and habitually apologize around this) anything to do with a certain religion, its adherents, and any idea that a magical bearded entity poofed the universe into existence in 6 days, 6,000 years ago? Do they really mean that they choose to venerate Material Monism, and an existential lack of any innate purpose to this biosphere Earth, or to any other similar events which occur in our Universe? Really, because I am not sure how one derives such a conclusion. I did not possess their enthalpy-laden spaceship, that much psychic clairvoyance, nor that much time, in order to determine such an extraordinary claim myself. The former choice is an ethical action; the latter argument is a highly separate religion called Nihilism – often protected by and conflated inside the club weapon word ‘atheism.’

Science – do they mean both the body of accepted knowledge and the method by which we objectively qualify and build that knowledge?

Or …do they mean screaming about a selective set of physical measures which target confirmation and methodically avoid falsification of a specific religious understanding of the world around us? Do they mean an ontology protected through an unacknowledged Omega Hypothesis (the hypothesis which is developed to end all argument) masquerading as the ‘null hypothesis’ through a fallacious inverse negation approach – and therefore socially enforced as truth? The former definition is science; the latter argument is a highly separate religious hypoepistemology – often protected by and conflated inside the club weapon word ‘science.’

Science is also about clarity, value, disciplined thinking and trustworthiness. When you hear me use the words above, I mean the former and not the latter in each case. If I attempted to imply the orange ontologies in the chart below as scientific truth, I could not look at myself in the mirror in the morning after such a display of dishonesty. Passing off one’s ontology as a science constitutes not only pseudoscience, but a Wittgenstein Error (Epistemological) as well. Be wary of those who can do such without conscience. Be very wary of those who can not only look at themselves in the mirror after promoting such fraud, but who aspire to celebrity in the process as well. The incorrect use of these words abrogates your claim to represent scientific thinking. Say what you mean – and you will gain the respect of those who truly understand philosophy and science.

[Chart: Say What You Mean and Mean What You Say – the scientific definitions of evolution, atheism and science contrasted with the orange ontologies discussed above]

Mean What You Say

The Lie of Allegiance – If you join a movement, organization or philosophy, do so because you really understand, and really mean and believe, those tenets which are promoted by that movement. Don’t do so because you desire to appear smart and scientific, or because you need some kind of self-affirmation and acceptance, pep rallies, or the rush of shaming others whom you regard as beneath you intellectually or socially. Such dispositions render one vulnerable to being manipulated by celebrity and malevolent influences. Otherwise, you are living what is called a Lie of Allegiance. If you, quietly over a couple of beers, will soften your stance and reflect on a whole series of doubts you carry – but must hold in abeyance – then you are living a Lie of Allegiance. People in churches do this to make their families happy. People in Social Skepticism do this, and worse, in order to gain acceptance to that club. This personal foible is anathema to the Ethical Skeptic.

Fanaticist’s Error

/philosophy : self understanding : cognitive dissonance : error/ : mistaking one’s fanaticism or being ‘hardcore’ as positively indicative of the level of understanding and commitment one possesses inside a philosophy or adopted belief set. The reality is that being fanatical or hardcore indicates more one’s dissonance over not fully believing, nor fully understanding, the nature of the belief tenets to which one has lent fealty.

A fanaticist is different from a fanatic. A fanatic simply loves a particular subject or brand. A fanaticist on the other hand employs their outward extremism as a cover to hide an unacknowledged and suppressed inner cognitive dissonance.

A useful tool in Social Skepticism, the Lie of Allegiance keeps the faithful unified and aligned in playing select activist roles. A Lie of Allegiance is often promoted through one-liners, weapon words and circularly quoted propaganda, initially deployed by celebrity SSkeptics and enforced by the faithful, who are looking for purpose, power and reward. It relies upon the ignorance of its participants, leveraged through the application of pep rallies and the pummeling of effigies of evil opponents. This is why the acolytes and trolls of Social Skepticism so often focus on politics and persons, and not on science itself. They either do not fully understand, or do not fully believe, the philosophy to which they have lent their fealty.

This inner dissonance prompts what we observe as fanaticism.

The Lie of Allegiance

1. The origin of fanaticism. The core argument which binds together a group on one side of a false dilemma.

2.  A core philosophy (such as Nihilism or Material Monism) which is masked by a differing but similar and more attractive cover philosophy (such as atheism) because of the cover philosophy’s generally more acceptable nature.

3.  A principle which is not fully regarded as truth by many or most of the members of a club of adherents, but rather is adopted as a preemptive compromise in order to gain acceptance in that club. A principle employed only as the default, the Omega Hypothesis, or the battle cry agenda around which to combat those on the other side of the false dilemma argument. The measure of adherence to the Lie of Allegiance principle is more a reflection of disdain towards those of antithetical positions than it is an expression of rational conclusion on the part of the adherent.

Corollaries

i.  Many of the proponents in a Lie of Allegiance based organization do not fully understand their Lie of Allegiance, nor perceive its contrast with the cover philosophy to which they in reality adhere.

Example:  Most self-proclaimed atheists cannot coherently frame the difference between atheism, skepticism, agnosticism, naturalism, nihilism, ignosticism, monism, materialism, tolerance and apatheism.

ii.  Many members involved in a Lie of Allegiance do not in reality care about the specifics of the teaching under which they profess fealty.  Specific psychologies involving the Ten Pillars are at play inside the binding power of the Lie of Allegiance.

Example:  Many self-proclaimed atheists wear the badge as a result of an emotional state, rather than a discriminating choice of conscience. This renders them susceptible to Nihilists, who use rally cries and the pummeling of Christian issues in effigy as a way to enlist the emotional allegiance of those who have poorly rationalized their ontology.


I look at myself in the mirror each morning, and I like and respect the guy I see there.