The Peculiar Schema of DNA Codon’s Second Letter

The second letter of the three-digit DNA codon bears a remarkable schema of such extraordinary improbability that the question arises, “How did this 1 in 3.4 octillion occurrence happen at all?” Is its essence in effect a signature livestock crypto-branding, wherein the iron just happens to be struck at the single best intersection of complexity and immutability inside the planet’s common genome?

The protein makeup and physiology of every living organism which has lived on Earth are defined by its genome. Genes consist of long sequences of nucleic acid bases (digits) which provide the information essential to constructing these proteins and their expressed features. DNA is the information library which is employed by ribosomes and messenger RNA to craft the structure and function of the organism. Our current DNA codex has stood as the basis of life since the common ancestor of Archaea first appeared on Earth shortly after it formed, 4.5 billion years ago.

Recent so-called ‘molecular clock’ studies have pushed the origin of our DNA-based life back to a mere 60 million years after Earth’s very formation.1 They make the stark argument that the code upon which DNA functionally depends has been around nearly as long as the Earth itself, or at the very least as long as its Moon and oceans. It took another 2.2 billion years for Earth’s life to evolve from Archaea into our current domain, Eukaryota. Eukaryotes consist of a cell or cells in which the genetic material is DNA in the form of chromosomes contained within a distinct nucleus. Mankind is, of course, part of the domain Eukaryota (albeit post-Cambrian Explosion).

This sequence of events and its timing are critical path to the argument presented in this article. Therefore, as is my standard practice, a summary graphic is in order. Our current understanding of the age of the Moon, Earth, life on Earth, and DNA is as follows (drawn from the four sources footnoted):2 3 4 5 Bear in mind as you continue to read the timing of the four key events depicted on the left-hand side of the timeline chart below (using Three Kingdom classification): the formation of the Earth, the introduction of the Moon, and the appearance of oceans and life on Earth.

A Timeline of Earth Life from LUCA to Genetic Engineers

Before we begin, please forgive me if I wax speculatively poetic for just a moment. It is almost as if, in death and dying, the incentive strategy of DNA is to motivate higher-order creatures not only to propagate species through sexual reproduction and evolution, but at a certain point to branch their now engineered polymorphisms farther into space as part of the sentient pursuit of immortality as well. It very much bears the stealth, coincidence, robustness, and brilliance in conquest characteristic of incentivized distributed-ledger warfare. Intent is a fortiori to warfare. For it is intent, and not the social concept of ‘design’, which we are to discern. We do not bear the necessary qualifications to adjudicate the latter.

Perhaps as much as anything, this mercenary and suffering-strewn pathway to almost certain extinction encourages more nihilism than anything else men may long ponder. For this very present pain buoys upon an undercurrent of our conscious lives, rendering theist and atheist alike understandable compatriots in its existential struggle.6 In a figurative sense, both are children of the same ruthless God drunk on their very suffering and confusion – capitalizing mortality far more than any other methodological element. Death is life’s raison d’être after all, and not the result of a mistake on the part of one of its mere hapless victims. A most-likely Bronze Age mythology recounted in the Gnostic text The Hypostasis of the Archons aptly framed this as ‘The Empire of the Dead and Dying’, caught up in an eons-long struggle with putative ‘Forces of Light’.7

Now let’s table until the end of this discussion the obvious coincidence that Earth’s Moon and its oceans arrived an incredibly short time after the Earth’s formation – essentially at the same time. Thereafter, nothing else of such an astronomically monumental scope occurred for one whole third of the existence of the Universe itself. I have always regarded this as a rather odd happenstance. As if the Moon were most likely a very large Oort Cloud icy planetesimal, heavy with frozen salt water, oxygen, nitrates, carbon monoxide, carbon dioxide, methane, sulfur, and ammonia (and LUCA?). An interloper (much like Uranus’ moon Miranda, Saturn’s moon Enceladus, or Jupiter’s moon Ganymede) which then surrendered these signature elements to accrete and compose the new surface crust of its larger companion, and sputtered the final ‘seas’ (maria) of water into space and mostly onto Earth. All this only after the Moon’s extinct ocean tides (and not lava basalt floods – remember, this is just one theory) had gradually introduced entropy enough to slow its rotation into a tidal lock with its new host, Earth. Under such a construct, this would be why the Earth-facing side of the Moon appears like flat ocean bottom (pink in the graphic above – the Oceanus Procellarum), while the far side is craggy and non-eroded. One can clearly observe static-water-eroded older craters contrasting with pristine newer ones, complemented by horizon-disciplined ocean-silt plains in these 4K-clarity Moonscapes (start at 5:20 into the video – these are ocean bottom erosion craters, not ‘lava filled’). Our barren, now desiccated and ‘same isotope ratios’ gamete gave its very life in order to birth its offspring, a living Earth. But I digress. We will come back to this issue at the end of this article. This alternative construct on the Moon’s origin will be the subject of another article sometime in the future for The Ethical Skeptic.

All speculation aside, a more astounding aspect of this timeline is the relative quickness with which life appeared on the newly formed Earth-Moon binary. Moreover, it is not the mere appearance of life itself which stands as the most intriguing aspect of this event for me. Not to take the emergence of life for granted, but certainly one can be forgiven for pondering an even more challenging issue: the very quick appearance of the complex code upon which all Earth life is based, the DNA Codex – or what is also called the ‘Standard Code’.8 Be forewarned, however: this sudden and early introduction of a fully functional and logical Standard Code is not the only mystery encompassed therein.

Peculiar Schema Surrounding the DNA Codon Second Base

Our genetic code consists of four types of DNA nucleotide (A-adenine, C-cytosine, T-thymine, G-guanine) structured into triplet sequences (XXX, or for example ‘ATC’) called codons.9 To put it another way, this simply means that the ‘alphabet’ of DNA only contains 4 letters, and each ‘word’ in the DNA lexicon only possesses three letters (or ‘bases’). This leaves a set of 64 possible permutations, or words (called ‘codons’ or ‘slots’ in this article) in the language of DNA. More specifically, the set of all possible three-nucleotide permutations is 4 × 4 × 4 = 64, which comprises coding for 19 amino acid molecules, a sulfur/methionine-start code (ATG), and three silence-stop codes (TGA, TAG, TAA). One can observe the breakout of this codex by 32 left and right-handed protein doublets (64 ÷ 2) in the graphic I created here: Third Codon Letter Left and Right-Handed 32-Slot Apportionment.
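As a minimal illustration of the combinatorics above, the short Python sketch below simply enumerates the 4 x 4 x 4 = 64 codon ‘words’ and groups them by second letter. It is purely illustrative: the C-T-G-A ordering anticipates the arrangement used later in this article, and nothing here reproduces the slot assignments depicted in the graphics.

```python
# Minimal sketch: enumerate the 64 possible DNA codons and group them by
# their second letter.  The C-T-G-A ordering mirrors the sequence used later
# in this article; the grouping is illustrative only.
from itertools import product

BASES = "CTGA"

codons = ["".join(p) for p in product(BASES, repeat=3)]   # 4 x 4 x 4 = 64 'words'
assert len(codons) == 64

by_second_letter = {b: [c for c in codons if c[1] == b] for b in BASES}
for base, block in by_second_letter.items():
    print(f"second letter {base}: {len(block)} codons")    # 16 codons per block
```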

However, perhaps a better way to view the assignment of codon slot to specific amino acid molecule is through examining the full 64-slot breakout by amino acid or control codon (with duplications). That breakout can be viewed in the graphic I created here: DNA Codon Slot by Nucleon Count. As a note, I like to create my own graphics from scratch. One will find that one does not truly understand a subject until one has accurately described it in detail for oneself. The errors encountered along such a journey typically demonstrate that one did not possess nearly as firm a grasp of an issue as one might have thought upon first study. One will also find that one’s ability to retain the material in detail is enhanced through such an exercise. While similar in nature to the Feynman technique, my chosen approach differs from Feynman’s in that it does not put on the pretentious charade of packaging ideas for ‘sixth graders’, which is in reality an attempt at celebrity-building. In similar ethic, one will not find me playing bongo drums for a doting media. Such buffoonery exemplifies why ignorance around the DNA codon schema is ubiquitous today. Remember these tenets of ethical skepticism:

Deception is an attempt to make the complicated appear simple.
Accurate, is simple.

There is a thing called a Bridgman Point, below which to simplify something further is also to introduce critical error. Sixth-grade speak often resides below this critical threshold, and the entailed error escapes the simple minds of both the presenter and the recipient. Such an approach runs anathema to the philosophy of skepticism.

Nonetheless, the most astounding aspect of this latter breakout method (all 64 slots ranked by nucleon count) is the utter organization around which the codon-to-amino-acid assignment is made. The DNA codon second digit (base) schema is akin to an organized and well-kept room, wherein even the items which are out of place are forced out of place for a specific purpose. When I assembled Graphic A below, it reminded me very much of the Resistor Color Band-Code codex we employed in undergraduate Electrical Engineering classes and in the assembly and soldering of circuit boards in Navy cryptography electronics. Bear in mind that the resistor 5-Band-Code engineer’s benchmark standard to the right (courtesy of Digi-Key Electronics Supply) bears less organization and symmetry than does the DNA codex in Graphic A below.

For this reason and many others, the Standard Code DNA Codex is sometimes referred to by the moniker Francis Crick assigned it, the ‘Frozen Accident’.10 However, what we will observe later in this article is that this event was not characterized by simply one frozen accident, but rather several highly improbable ones which concatenate into a single scope of events. Nonetheless, this organization/symmetry regarding the slot-to-nucleon-count schema, which leverages the second letter of the DNA codon, can be more easily viewed in Graphic A below.

It took me around ten years of grappling with this and falsifying probably 8 or 10 other linear inductive approaches to viewing it, to finally break this code through a deductive approach and winnow it out into its 33 components of logic and symmetry (A through Y below). The broken code can be viewed here: DNA Codex Broken by CTGA Nucleon N-O Stem and Control Slot, and of course its matching Graphic:

Graphic A – DNA Codon Slot by Second and Third Base Matched to Assigned Amino Acid Nucleon Count – this frozen accident lacks the necessary 1. active presence of evolution, 2. chemical-affinity feedback mechanism, and 3. method to resolve the conflicts among nucleon-count regression, NO2, complex NO, and bilateral/stop/start symmetry affinities required for it to be naturally selected – and therefore can only have been derived by deliberation alone. Whoever assembled this codex did not care that the presence of intent was discernible – an intent which in fact may serve as a demarcation of intellectual property, cryptographic genetic exclusion, and/or origin.

While most genetic scientists recognize the peculiarities entailed in the schema surrounding the second base of the DNA codon,11 few perhaps fully perceive its extraordinary logical structure, along with the infinitesimally small possibility of the Standard Code having occurred (or even evolved) by accident. The reader should know that I presented this construct to a Chair in Genetics at a very prominent university years ago. That discussion constituted the first time he had ever heard of the idea or fully realized many of the oddities identified therein. He forwarded me a complimentary copy of the genetics textbook he had authored, which I treasure to this day. This was not a shortfall on his part, as I am sure that the domain bore ample professional challenge over the decades above and beyond this set of potential observations. Nonetheless, I remain doubtful (not skeptical) that this construct has been given adequate examination and fair due inside scientific discourse.

Numerous improbable-to-impossible idiosyncrasies stand out inside Graphic A above, an illuminating version of how to depict the schema surrounding the second letter (base) of the DNA codon. For example, a critical observation to note under this method of examining the schema is that there is no ‘third base degeneracy’ inside the Standard Code, as many commonly hold. The symmetry simply dovetails into more specialized and partitioned schema, bearing even more symmetry (a lower entropy state, not higher – as can also be seen in the G and T blocks in the chart to the right). Upon close examination, the 64-slot code’s being fully fleshed out is not necessarily the result of degeneration; it bears just as significant a likelihood that this ‘every-slot-occupied’ tactic is purposed to prohibit amino acid assignment degeneration in the first place. But one can only observe this by arranging the codex table into a C-T-G-A sequence for the final two bases (second and third). Once this is done, one can see that the symmetry organizes around the second base of the codon, and the third base simply expresses as a dovetailing of this order. This notion that the Codex features third base degeneracy is an idea containing an a priori Wittgenstein lexicon bias. Inference from such an assumption is unsound. This matter must be left open from the standpoint of skepticism.

There is no ‘third base degeneracy’ inside the Standard Code, as many commonly hold. The Codex symmetry simply dovetails into more specialized and partitioned schema as we incorporate the third base, bearing synonymy (not degeneracy) and even more (occult) symmetry.

If there is indeed any ‘degeneracy’ it involves the first base alone, as the second and third bases are highly organized inside this schema. Synonymy and degeneracy existed independently and first – only then was departure from this utilized for function. This is the exact opposite of what evolution could have possibly produced.
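For readers who wish to see this re-ordering for themselves, the sketch below lists the 64 codons with the second base as the primary sort key and the third base as the secondary key, both in C-T-G-A order, alongside each codon’s Standard Code assignment. It assumes Biopython is installed (its built-in ‘Standard’ codon table supplies the assignments), and it does not reproduce the nucleon-count columns or the symmetry analysis of Graphic A – it only demonstrates the slot ordering.

```python
# Sketch of the C-T-G-A re-ordering described above: sort the 64 codons with
# the second base as the primary key and the third base as secondary, then
# print each codon's Standard Code assignment.  Assumes Biopython is
# installed; the resulting slot numbering is illustrative and may differ
# from the codex table in Graphic A.
from Bio.Data import CodonTable

table = CodonTable.unambiguous_dna_by_name["Standard"]
rank = {base: i for i, base in enumerate("CTGA")}

def slot_key(codon):
    # Second base first, then third, then first -- all in C-T-G-A order.
    return (rank[codon[1]], rank[codon[2]], rank[codon[0]])

all_codons = sorted(list(table.forward_table) + table.stop_codons, key=slot_key)

for slot, codon in enumerate(all_codons, start=1):
    assignment = table.forward_table.get(codon, "Stop")
    print(f"slot {slot:2d}  {codon}  ->  {assignment}")
```

Printed this way, the codons fall into the four 16-slot second-base blocks discussed above, with the stop codons landing inside the G and A blocks.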

Moreover, this code could not have evolved, because the code has to be both struck and immutable, before reproduction can function to produce evolution in the first place. This Codex is the proverbial egg in the ‘which came first – chicken or egg’ paradox.

Abductive thinking and lexicon biases of this nature impede our ability to conduct science around the issue. Why are so few geneticists truly familiar with this material and why do only a paltry number of studies exist to date on this very important and astounding construct? The issue incorporates a feature of the philosophy of logic which I call ‘critical path’. One of mankind’s greatest skills is the ability to deliberate a subject at great length, yet still manage to avoid anything of a critical path benefit throughout its discourse (aka ingens vanitatum). DNA is no different.

Critical Path Question: Could our Second Base CTGA-N Codex have developed outside any context of intent?

Now that I have buried the lede far enough, let’s get right to the key point. The likelihood of this remarkable schema occurring inside a solely accidental context, I have calculated to be 1 chance in 3.42 x 10^27 (3.42 octillion). One can view the reference table and series probability formulas employed for the combinations in this first table, Table 1 – Probability Reference Table. These reference probabilities are then combined into a series of calculations, culminating in the remote likelihood of Item B below, and then finally a preliminary estimate of the codon second letter schema’s occurring naturally, based upon just three primary factors (the remainder are included in items A through Y later in this article):

Item B. Possibility of a combined set of two (y) blocks of 16 (x) contiguous second base assignments in increasing nucleon count order, along with 7 other blocks of 4 accomplishing the same (see Table 1 – P(x,y) = ((1 – P(x))^O)^y, or P = 7.12 x 10^-17). Please note that we only count two contiguous 16-slot blocks as coherent, not four, because of uncertainty in the other two. This too is for conservancy.

Item C. The likelihood of having such a structure result in symmetry between the start and first stop blocks, and in addition displace the second stop-block codes to the end of the series (P(x) = 0.00024), and

Item E. The likelihood of having an entire block of second base amino acids be composed solely of NO2 moieties, given that there is no chemical feedback from the amino acid to the codon development/assignment, along with the fact that the Standard Code is itself a prerequisite in order to have evolution in the first place (P(x) = 0.00002).

This results in a compounded probability of 3.42 x 10^-27. Remember, for conservancy, we have chosen to only quantify items B, C, and E from the list of idiosyncrasies below. When I ran the numbers using all items A through Y, the calculations just compounded to outlandishness. Items A and B as well were simply two sides of the same coin, so all my trials of calculation have only used A or B, but not both. These three evaluation factors seemed to be the most compatible with a reasonable quantification effort, and to my mind offered a smaller range of potential error. My belief is that this notion of degeneracy, and poor portrayals of the Standard Code, have blinded most of science’s ability to observe this anomaly to its full extent. As a former intelligence officer, this is what I have been trained to do: spot the things no one else has. I have made a very successful career of this skill.

However, given that our Standard Code is not the ultimate code which possibly could have developed, and indeed there are most likely up to 1 x 10^4 codes of equal or superior optimization,12 a net subtraction of 10,000 adjusts the final probability tally (and reduces it by one significant digit). But the reality is that the adjustment is minute. The net remoteness of the Standard Code would still range at just about 3.4 x 10^-27, with even these 10,000 possibilities removed (they would be subtracted, not factored in this case). The combined series of calculations can be viewed in this second table, Table 2 – Probability Calculation Table for 3 Factors. Those calculations are conceptually depicted in Graphic B below.

Graphic B – Standard Codex Ranges Well Into Impossibility and Well Beyond NO2 and Nucleon Count Affinity Conflicts – which force manual assignment.

The abstract in Graphic B still places the Standard Code 99.9999…% of the way along the journey from a more likely version towards an ultimate, yet itself highly unlikely, ‘perfect’ code. As you examine the chart above, note that the Standard Code is not structured to flag attention with that 3.42 x 10^27 beacon of perfection, but rather a much more tantalizing 3.42 x 10^27 – 10,000 efficacy woven into a fabric of stealth (intent?). While this is only a suspicion, I cannot shake the perception that this pattern is not meant to be an ultimate optimization in the first place (although it is abundantly close), but rather a watermark. A branding if you will, identifying the species’ trademark/point of origin (ownership?), regardless of what the creature has evolved into at any point in the future. This leaves perhaps tens of thousands of other standard codes which might be usable in other ‘DNA-based life circumstances’.

This anomaly resides coincidentally at a very opportune Indigo Point inside inflection theory, bearing a raison d’être in that once the code is struck, it never changes, nor does it evolve. What I have found in my career is that benefit stakes from coincidences/uncertainty seldom go uncaptured. Look back at Graphic A again now and see if such an idea makes sense.

In other words, is what is contained in Graphic A a crypto-trademark? A cattle brand? Its branding iron being struck at the only point which functionally resides at the intersection of complexity and immutability inside a genome-in-common.

A lighthouse signature affixed to the lone uncompromising rock amidst the raging torrent of evolution.

Not merely serving as a brand, but moreover a crypto-codex. A Standard Code which would function simultaneously to prevent outside-crossbreeding (even with other DNA-based life), lay fierce claim to planet ownership, and yet enable a catch & release monitoring program to quickly identify interlopers into a planet’s (or series’ thereof) biosphere.

Granted, this stack of ideas is highly speculative, and skeptical neutrality upon first hearing it is certainly understandable. I would suggest the reader hold such a line of thinking (per hoc aditum) in suspension (epoché) and continue reading through Items A through Y below.

The chart to the right is extracted from the footnoted Koonin/Novozhilov study and expresses those authors’ visualization of this penultimate concept. I don’t agree with those authors’ study conclusions but I applaud their boldness, career risk, and critical path work on this matter. Click on the thumbnail in order to obtain a larger image of the chart. The Standard Code is represented conceptually by the blue dot, while the ultimate optimal code would reside at the tip of the tallest peak in the chart. Indeed the Standard Code represents almost codex perfection. Something mere chemical and metabolic affinities (even if they were plausible, which is highly doubtful) cannot come close to explaining, much less attaining.

One should note that various constructs (not true hypotheses) exist as to chemical/metabolic connections between DNA code slot number and nucleon count.13 However, we discount this because the purported chemistry involved would have had to select which chemistry to serve, between nucleon count and the N-O stem of each amino acid molecule – serving a mix of one or the other in terms of chemical affinity, but not both perfectly at the same time. The selection here transcended chemical affinity roles and selected correctly for both (the blue bars in the above chart). Both the one-way aspect of gene expression, and the difficulty in selecting correctly for two conflicting chemistries at the same time, deductively strengthen a logical-only scenario. Such force-to-convention speculation ends up constituting only an ad hoc apologetic.

Moreover, regarding this rather extraordinary schema, several additional detailed observations may be made. Note that only the items in bold/red were used in the actual probability calculations.

A. There exists a slot-order to nucleon count linearity bearing a coefficient of determination (R²) of 0.971 within codon groups 1 – 48 and 49 – 64 (0.5757 overall), and against the second base blocks that are formed by this progression. It is not the linearity itself which is a tail event, as anything which is sorted by magnitude can take on linearity – but rather it is the cohesive groupings by second base of the DNA codon which result from this linear series arrangement, which must be quantified inside a salient probability function (see at the bottom right-hand side of Graphic C below). Below, one can observe where I ran a Monte Carlo simulation of combined possibilities for the sequence of 60 amino acid and 4 control-code slots. The distribution function and degrees of freedom in random assignment of codon to slot fail miserably to establish an orderly/logical relationship with amino acid nucleon count or blocks of same second base codons.

On iteration 335 I got a coefficient of determination of R² = 0.1087, which was relatively high as compared to the previous trials. It was there that I framed a rudimentary top-end and ‘degrees of freedom’ for this apparent Chi-squared arrival distribution curve. Despite the remarkable nature of the Iteration 335 coefficient, it still resided a woeful 1 x 10^-27 in distance from a more likely code to the existing Standard Code. Had I continued to run these iterations in the Monte Carlo simulation, it would have taken me 10.8 sextillion years to finally hit upon a code as remote in possibility as is the DNA Standard Code upon which Earth life functions. That is under a condition wherein I purposely try to encounter such a code with an iteration every three or four seconds on average. Think how long this endeavor would take if I were just randomly hitting keys throughout the exercise, and obtained one cycle of the Monte Carlo simulation only once every 1,000 years. Remember that abiogenesis only had one shot at producing the Standard Code through such randomness – as it cannot ‘evolve’.

Note that this Monte Carlo simulation is not used in the probability calculations. It is run as part of the set of exercises which might serve to elicit something missing, falsify the main thesis, provide relative perspective, or stimulate different thinking around the matter. It bears a fascinating result nonetheless.

Graphic C – Monte Carlo Simulation of 20 Amino Acids and 3 Stops into 64 Logical Slots over 335 Iterations
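For readers who wish to replicate the flavor of this shuffle-and-score exercise, here is a minimal Python sketch. It is not the exact simulation run for Graphic C: the 64 per-slot nucleon counts must be supplied from the codex table, and the score used is an ordinary least-squares coefficient of determination of the assigned counts against slot order.

```python
# Minimal shuffle-and-score sketch of the kind of Monte Carlo exercise
# depicted in Graphic C.  The caller supplies the 64 per-slot nucleon counts
# transcribed from the codex table; this is not the exact simulation run
# for Graphic C.
import random
import statistics

def r_squared(xs, ys):
    """Coefficient of determination for a simple linear fit of ys on xs."""
    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    syy = sum((y - mean_y) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy) if sxx and syy else 0.0

def best_r2_over_trials(nucleon_counts, iterations=335, seed=0):
    """Shuffle the slot assignments repeatedly and return the best R^2
    between slot order (1..64) and the randomly assigned nucleon counts."""
    rng = random.Random(seed)
    slots = list(range(1, len(nucleon_counts) + 1))
    counts = list(nucleon_counts)
    best = 0.0
    for _ in range(iterations):
        rng.shuffle(counts)
        best = max(best, r_squared(slots, counts))
    return best
```

The best random draw reported above over 335 iterations (R² = 0.1087) can then be compared against the 0.971 within-group fit exhibited by the Standard Code itself.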

Continuing from this critical point, we observe even more idiosyncrasy in terms of items B through Y below.

B.  All codons are grouped into contiguous blocks of 16 logical assignments, and when sequenced C-T-G-A, for both the second and then third bases of the Codex, produce a split linear progression against nucleon count of 5 discrete groupings (2 overlap in G). Only two blocks are evaluated for probability under this analysis.

C.  Stop code assigned to slot 64 with two stop-codes being grouped into a contiguous pair, when stop-codes bear no chemical feature from which affinity may ostensibly originate. Third stop code bears symmetry with the methionine start code.

D.  Use of an amino acid (methionine) as the sequence start code and, in contrast, silence as the sequence stop code – two distinct choices, both of which are logically assigned and not remnants of failed chemistry.

E.  Assignment of solely hydrophobic NO2 moieties to the T-coded block.

F.  Methionine start code and tryptophan-block stop code bear mirrored symmetry in the T and G blocks, with each spanning a distance of 8 slots from the start of the block, 16 slots apart, and each 24 slots inward from the first and last amino acid assignment. Neither nucleon nor N-O affinity can generate this type of symmetry.

G.  A-coded amino acid group block employs all doublet code assignments.

H.  C-coded amino acid group block employs all quadlet code assignments.

I.  The C-T-G-A sequence which produces symmetry at the macro level, also produces partitioned symmetry at the individual codon sequence level, which is also optimized with a C-T-G-A sequence through all 64 slots.

J.  Absolute necessity of all three codon digits for any type of basis for functional life/evolution/Archaea. There existed neither the time nor logical basis for the Standard Code to have functioned upon a two digit (XX) codon, before adding the third digit as a suffix. There are critical amino acids and controls which depend upon a specific 3-digit codex in Archaea, our oldest form of life on Earth.

K.  Inability of the Standard Codex to derive from a process of evolution.

  1. The code is a prerequisite for evolution itself, so it could not have evolved.
  2. The chemistry (if such chemistry is ever found) could serve nucleon count or N-O stem affinity, but not both. Only logical assignment could balance both requirements without fail and achieve symmetry at the same time.
  3. Evolution would have more likely selected for a simpler array of assignments (32 slots, etc.). A suggested early-on two-digit Codex could explain part of this, but cannot explain a start and stop-code symmetry which depends upon 3 digits, nor the short amount of time in which the three-digit code arose (in something which does not change).
  4. If the Code evolved – this evolution should have continued across 4.5 billion years (after originating in its entirety in less than 30 million) – yet it did not.14
  5. See D. above. Evolutionary changes in DNA occur through the accretion/change of information linked to function and not through the leveraging of silence (absence of information).

L.  Control start and first stop doublets bear common symmetry and regression fit – and as well, both associated codon molecules are adjusted by isomer in order to adhere to this symmetry and fit. The sulfur suffix applied to the methionine start-control codon boosts it into a position of correct regression linearity by doublet slot. Tryptophan bears an isomer appendage which is essentially the complexity equivalent of cysteine, thereby reducing it into the correct regression linearity control doublet slot as well.

M.  All complex NxOx distal amino acid codes are grouped together after the first stop code, with N4O2 assigned only under the guanine block and the remainder grouped into the adenine block.

N.  Featuring no true grouping of odd amino acid counts (3, 5, etc.) which should have occurred in an affinity or other unguided scenario.

O.  The only odd molecules, tryptophan (C11H12N2O2) and methionine (C5H11NO2S), also happen to be the only ones assigned to singlet slots – and both bear symmetry and both are paired with control codons. These doublets are then placed symmetrically from the beginning (8) of each of their respective blocks, 16 slots apart, simultaneously with symmetric distance (24) from the outer edges of the C-T-G-A block as a whole. This is an extraordinary feat, given that chemical affinity not only would not have produced this, but would have prevented it from occurring in the first place (were affinity involved at all, either nucleon or N-O stem).

P.  T and G blocks possessing symmetrical doublet assignment patterns.

Q.  The assignment of all 64 logical slots when life bore a much greater probability of beginning with a far less extensive codex size. Akin to evolution starting with a snail, instead of Archaea. However, once the context of very-quickly-assigned logical symmetry is broached, the possibility arises that the 64 slot code being fully fleshed-out (every slot assigned) is not the result of degeneration, but rather it is precisely a tactic to prohibit degeneration in the first place. This is a starkly different basis of understanding this code.

R.  Control start and stop codes solely employ left-handed codon suffixes. (Note: this is logical only, not the same as molecule chirality)

S.  The positive correlation between the number of synonymous codons for an amino acid and the frequency of the amino acid in living organism proteins.15

T.  Maximization of synonymous point mutations by means of the third letter of the codon – the letter which also bears the greatest frequency of mutation.16

U.  This codex was not only a ‘Frozen Accident’, which is highly improbable as an event in and of itself; this accident also selected for a 1 in 3.4 x 10^27 optimized configuration, on its first and only try.

V.  There were no evolutionary trials or precedents from which to strengthen the code’s logical structure as exhibited in items A through U above. Our very first version of life, Archaea, depended upon this full logical structure in order to exist.

W.  Polyadenylation (the addition of a poly(A) tail to an RNA transcript) uses the best option of the four bases from which to form the inert tail of RNA. The Adenine block both consists of all doublet assignments, increasing the likelihood of accidental transcription producing amino acid gibberish, and as well is the second-base block which contains two stop-silence codes, increasing the likelihood that an end-amino-silence will serve to truncate and end the polyadenylation tail.

X.  Next to last is an item which should also be quantified in the probability calculations, though I do not know of an accurate way to estimate its probability arrival function: each of the regression lines which describe the split symmetry of slot versus nucleon count features four things (a fit of this kind is sketched in code just after this list):

  1. both follow a 3:1 slope m (as in y = mx + b), and
  2. both trend lines intercept the scatterplot right where the control codon is positioned, wherein these positions themselves also bear symmetry relative to the codon block boundaries and T-G symmetry itself, and
  3. adjustments were made to the molecules associated with the control code, cysteine (C3H7NO2 appended with S), tryptophan (C3H7NO2 appended with C8H5N), and methionine (C5H11NO2 appended with S) in order to achieve this positioning (see DNA Codex Broken by CTGA Nucleon N-O Stem and Control Slot), and finally the most incredible feat,
  4. Tryptophan (C3H7NO2 appended with C8H5N), when taken in total, and broken into its molecular constituents – fits BOTH the C-T-G and G-A symmetry and linearity, connecting both groups with a common anchor point in the form of their common stop code assignment (amino acid silence).
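As noted in Item X above, the slope and intercept claims can be checked with an ordinary least-squares fit once the (slot, nucleon count) pairs are transcribed from the codex table. A minimal sketch follows, assuming Python 3.10+ (for statistics.linear_regression) and placeholder data the reader must replace:

```python
# Sketch of the regression check behind Item X: fit y = m*x + b for a
# slot-versus-nucleon-count grouping and inspect the slope and intercept.
# The (slot, nucleon_count) pairs must be transcribed from the codex table;
# the two pairs below are placeholders only.
# Requires Python 3.10+ for statistics.linear_regression.
from statistics import linear_regression

def fit_group(pairs):
    """Return (slope, intercept) of an ordinary least-squares fit."""
    slots = [slot for slot, _ in pairs]
    counts = [count for _, count in pairs]
    return linear_regression(slots, counts)

# Replace with an actual grouping from the table (e.g. slots 1-48 or 49-64).
group = [(1, 75), (2, 89)]            # placeholder (slot, nucleon count) pairs
m, b = fit_group(group)
print(f"slope m = {m:.2f}, intercept b = {b:.2f}")
```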

The abject unlikelihood of all this occurring by happenstance, and on essentially the first try, pushes the bounds of Ockham’s Razor, rationality, and critical thinking to the point of accepting the absurd frozen accident construct (it is not a mature hypothesis) merely because it is ‘simple’ to understand. Such an approach is no less ad hoc than ‘God did it’ (equally absurd and simple), and only becomes a religion when the idea is enforced as truth/science. This of course serves to introduce the construct that just happens to feature all of these foibles.

Fast and Simple: Grant Unto Me Six Miracles and I Can Explain All the Rest

Finally, regarding ‘Item Y’ if you will, perhaps what is most daunting is the short amount of time allotted for such a code to ‘freeze’ into existence in the first place, along with the immediacy with which it happened upon a hostile, 60 million-year-old planet – a mere 20 million years after the Moon was ‘ejected’ (more than likely ‘arrived’, with life’s DNA Codex already intact) from Earth by a trillion-to-one collision with the hypothetical planetary body Theia.17 The astute reader should have noticed by now that science possesses multiple ‘trillion to one’ happenstance claims, all compounding inside this argument. One can throw tantrums all one wants about ‘irreducible complexity’ (which this is not) being ‘debunked’ (whatever either of those terms may mean), but those who issue such memorized dross must recognize that the theory they are defending is even worse. Our reliance upon the absurd in order to cling to a religious null hypothesis is becoming almost desperate in appearance. This upside-down condition of ignorance is called an Ockham’s Inversion.

Ockham’s Inversion

When the ‘simplest explanation’ is not so simple after all. The condition when the rational or simple explanation or null hypothesis requires so many risky, stacked, or absurd assumptions in order to make it quickly viable, that it has become even more outlandish than the complex explanation it was supposed to surpass in likelihood.

Now let us hearken back to those four key timeline events which we tabled at the outset of this article. Let’s consider the highly stacked set of unlikely elements and their probability, which together compose this Ockham’s Inversion in our current understanding. To wit:

The Six Miracle Theory

Moon created through Earth collision with Theia – 1 in 1 x 10^12
Moon and oceans arrive so soon after Earth formed – 1 in 1 x 10^3
Occurrence of Francis Crick’s DNA Codex ‘Frozen Accident’ – 1 in 1 x 10^9
Immediate arrival of ‘Frozen Accident’ after Moon and oceans – 1 in 1 x 10^3
Infinitesimal possibility of Standard Code DNA codon schema – 1 in 3.4 x 10^27
60 M year duration window for these events in sequence – 1 in 1 x 10^6

Compounds up to 1 chance in ~3.4 x 10^60
Planets in Milky Way: ~4.0 x 10^11 stars x 5 planets = 2.0 x 10^12

A reduction in magnitude to 1 chance in 1.73 x 10^48 in possibility, in one galaxy alone…

In other words, there are not enough candidate planets (2 x 10^12) in the entire Milky Way to pull off this incredibly remote six-miracle event by accident. This does not necessarily suggest the intervention of aliens or Gods (each fallaciously ad hoc); however, it does imply that something exceptional and beyond our current frame of understanding has occurred behind the appearance and ascendancy of life on Earth. What we seldom recognize is that explaining all this as devoid of intent through the luxury of ‘4.5 billion years’ (only 60 million years in reality) turns out to be just as fallaciously ad hoc as the God and alien notions (see Researchers use Moore’s Law to calculate that life began before Earth existed).
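As a quick arithmetic check on the compounding above, a naive product of the odds quoted in the Six Miracle table, divided across the Milky Way’s candidate planets, reproduces the figures cited (this treats the six items as independent, exactly as the table does):

```python
# Naive arithmetic check of the Six Miracle compounding, using the odds
# quoted in the table above and treating the six items as independent.
odds = [
    1e12,      # Moon created through Earth collision with Theia
    1e3,       # Moon and oceans arrive so soon after Earth formed
    1e9,       # Occurrence of the DNA Codex 'Frozen Accident'
    1e3,       # Immediate arrival of the 'Frozen Accident' after Moon and oceans
    3.4e27,    # Infinitesimal possibility of the Standard Code codon schema
    1e6,       # 60 million-year duration window for these events in sequence
]

combined = 1.0
for factor in odds:
    combined *= factor
print(f"combined odds: 1 in {combined:.2e}")                # ~3.4 x 10^60

candidate_planets = 4.0e11 * 5                              # ~2.0 x 10^12 planets
print(f"per-galaxy odds: 1 in {combined / candidate_planets:.2e}")  # ~1.7 x 10^48
```

The product lands at roughly 1.7 x 10^48 per galaxy, in line with the ~1.73 x 10^48 figure quoted above.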

There exists no argument inside evolution which serves to falsify or dismiss intent. In fact, both the structure and the unlikely nature of the CTGA-N Standard Codex deductively insist that intent must be present at the inception of a DNA base-code.

Just as I do not have to declare murder or accident when I encounter a dead body, citing the presence of intent does not mean I must also identify the agency from which that intent originated. All I have to know is that I have a dead body and that something extraordinary has transpired. We have a dead body here – we have intent. This is science. Plugging one’s ears, refusing to examine the corpse, and a priori declaring an answer which fits with one’s religion, is not science. I have no doubt that whatever the agency is that precipitated this codex, it is a natural feature of our realm. I simply regard this feature to likely be one we don’t yet know about nor understand – an agency which does not require our permission in order to exist.

It is of the most supreme irony that one third of the known history of the Universe has passed since this gift, and all we have produced with it are some violent fashionista monkeys.

One or more of our grand assumptions about all origins is more than likely a colossal error.

By the same token, it is one thing to claim a single grand accident under the idea, ‘Grant me one miracle and I can explain all the rest’. We stomach such miracles of science all the time, at the very least to serve as placeholders until we attain better information. However, it is another thing entirely to demand, in the name of science, six miracles inside a tight sequence in order to protect one’s religious beliefs. That is where I respond ‘Not so fast’.

It is not that which you parrot, but rather that which you forbid to be examined, which defines one’s religious belief.

What is it the skeptics always say about ‘Occam’s Razor’ – ‘The least feature-stacked possibility tends to be the correct one’? Well, this is one hell of a stacked set of features supporting our current null hypothesis around the appearance of life on Earth (abiogenesis). In any genuine scientific context, the hypothesis that the Moon brought us our crust elements and life itself, making Earth what it is today, would be foremost in our theory. Instead, we have chosen the Six Miracle Theory. However, why such a fortuitous event occurred in the first place is a matter of another critical path of inquiry altogether.

In order to advance as mankind, we will need to open our minds to possibilities which address not mere ‘gaps’ in our knowledge, but rather vast barren domains of Nelsonian ignorance. Unfortunately, our Wittgenstein definitions and religious assumptions regarding the appearance of life on Earth are serving to bias us into an extreme corner of statistically remote plausibility, from which we staunchly refuse to budge.

The Ethical Skeptic, “The Peculiar Schema of DNA Codon’s Second Letter”; The Ethical Skeptic, WordPress, 24 Feb 2021; Web, https://theethicalskeptic.com/?p=48816