The question is not whether the analyst is right or wrong – only fools adjudicate such matters – but rather, ‘does their logical calculus or method of leverage potentially allow for a greater insight, and is it appropriately conservative in this quest?’
The astute investigator is much more effective in seeking to increase the reliability of probative information than in attempting to increase the probative nature of reliable information. This is a key tenet of Ethical Skepticism. The reliability of an observation is established through its ability to be corroborated, and not necessarily by its precision or the reputation of its sourcing. This is the central tenet of The Riddle of Certainty. The key indicator of certainty is not the assumed reliability of the data you are handling (remember, this is a stacked or, even worse, circular assumption), but rather the consilience in observation which can be drawn from additional sources, types of sources, disciplines, and analytical perspectives which can be brought to bear. This involves answering more than simply one question, regardless of that one question’s relevance or the confidence interval which may serve to bound its single answer. Both of these artificial forms of certainty can serve to falsely reassure the person conducting the inquiry – rendering them essentially nothing more than a lab tech, one who dazzles outsiders through proprietary practice notation and jargon, rather than a true investigator questioning why a specific line of prosecution is being followed.
Possibly the most common error of a smart engineer is to optimize a thing which should not exist. Everyone’s been trained in high school and college that you’ve got to answer the question – convergent logic. You can’t tell the professor, ‘Your question is dumb’. You’ll get a bad grade. You have to answer the question. So everyone’s basically, without knowing it, got a mental straitjacket on. That is, they will continue to work on optimizing the thing that simply should not exist.~ Elon Musk, TikTok, 23 Dec 2021
Once the investigator has achieved successful consilience through a series of differing-perspective questions, he or she should eventually arrive at the point of being able to predict the next avenue of anticipated consilient observation. Once this is repeated several times, one has established reliability out of probative information – but not by means of the self-gratifying comedy of confidence intervals and p-values. When applied solely for technical dazzle or as a truth-panacea, these tools are rendered mere club-costume accoutrements – bearing little in common with probative inquiry. Only after this achievement do p-values and confidence intervals begin to add value to certainty – not before.
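The compounding effect of consilience can be sketched numerically. In this minimal Bayesian sketch (the prior and every likelihood ratio below are invented purely for illustration), several independent but individually modest lines of evidence move certainty further than a single ‘precise’ one:

```python
# Hedged numeric sketch: certainty via consilience of independent sources.
# The prior and all likelihood-ratio values are illustrative assumptions.

def posterior(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Bayesian update: multiply prior odds by each independent
    likelihood ratio, then convert odds back to a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

prior_odds = 0.10 / 0.90  # start skeptical: 10% prior probability

# One highly 'precise' study vs. four independent, weaker lines of evidence
single_precise = posterior(prior_odds, [9.0])
consilient = posterior(prior_odds, [3.0, 3.0, 3.0, 3.0])

print(f"single precise source:   {single_precise:.2f}")  # 0.50
print(f"four consilient sources: {consilient:.2f}")      # 0.90
```

Under these assumed numbers, four weak sources of evidence at odds ratio 3 each outweigh one strong source at odds ratio 9 – provided the sources are genuinely independent, which is precisely what drawing on differing types of sources and disciplines is meant to secure.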
One trick I have employed with specific clients is to assign a difficult analytical task to the team at the very start, in order to see who attacks the problem without asking why we are addressing it in the first place.
For instance, I once asked a team to conduct a constrained least squares optimization on a set of process nodes, complete with the set of differential vectors separating each functional node from its value chain. The answer was actually irrelevant, because the one node which imparted the value bore a standard deviation of over 100% of its average value to begin with.
This distracted the ‘I am the smartest person in the room’ types, so that the rest of the team had time to undertake ‘a critical path of salient questions seeking consilience’, as opposed to ‘an analytical path of relevant-but-trivial questions seeking irrelevant levels of precision’. Before the optimizers had finished, the rest of the team had already determined the answer to the project’s challenge.
If you have five low error-tolerance (reliable) principal inputs to a challenge, along with one wild-ass guess principal input – your answer will constitute nothing more than a wild-ass guess. Therefore, most of the time it is something other than precision which is critical to determining the answer. Unwise is the investigator who does not understand this.
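Standard propagation of uncertainty makes this concrete. In the sketch below (the numbers and the additive model are illustrative assumptions, not drawn from the anecdote above), the variance of the combined answer is almost entirely the variance of the single wild-guess input:

```python
import math

# Hedged sketch: uncertainty propagation across six principal inputs.
# Values and the additive model are illustrative assumptions only.
# (value, standard deviation) pairs: five reliable inputs, one wild guess
inputs = [
    (10.0, 0.1),
    (12.0, 0.1),
    (9.0, 0.1),
    (11.0, 0.1),
    (10.5, 0.1),
    (50.0, 55.0),  # the wild-ass guess: its sigma exceeds its own value
]

answer = sum(value for value, _ in inputs)
# For independent additive inputs, standard deviations combine in quadrature.
sigma = math.sqrt(sum(s ** 2 for _, s in inputs))
guess_share = 55.0 ** 2 / sigma ** 2  # fraction of variance from the guess

print(f"answer: {answer:.1f} +/- {sigma:.1f}")  # 102.5 +/- 55.0
print(f"variance share from the guess: {guess_share:.3%}")
```

The five reliable inputs together contribute a variance of 0.05; the guess contributes 3025. Tightening the reliable inputs further – more precision – changes the answer’s uncertainty by essentially nothing.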
The Riddle of Certainty
Zone IV or ‘Precise Accuracy’ is a fantasy as regards most nascent or little-understood arenas of study.
Most of science and skepticism dwells in Zone I, moving into Zone II, yet falsely believes that it resides in Zone IV (the Texas Sharpshooter Fallacy). Its practitioners are not allowed a viewpoint outside the cage of precision, as that would flag a lack of club-enforced traits.
This constitutes trees blinding one to the forest.
The astute intelligence professional seeks to work inside Zone III instead, drawing consilience from a variety of sources and analytical perspectives – realizing that answers are more difficult to come by than one might presume.
For instance, on one market strategy engagement in which I was a team member, our goal was to design an intelligence network which would track an illness within a farm animal population. My recommendation to the team was to forgo statistical confidence sampling of individual farms as the first step, and instead simply establish a thinner, but wider-spread, network of detection points throughout the value chain. Alert detections in this case were much more valuable than was technical precision.
An investigator is much more effective in seeking to increase the reliability of probative information than in attempting to increase the probative nature of reliable information.
Thinner but more widely spread detections are more valuable than deep but concentrated ones. Information should be rated on its reach, and not simply its confidence.
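A toy model makes the trade-off visible. All site counts, sensitivities, and the independence assumption below are invented for illustration; the point is only that an outbreak need touch just one monitored point to raise an alert:

```python
# Hedged toy model: probability that a surveillance network raises at least
# one alert. Site counts and sensitivities are illustrative assumptions.

AFFECTED_FRACTION = 0.02  # assumed fraction of value-chain points the illness reaches

def p_alert(sites_monitored: int, sensitivity: float) -> float:
    """P(at least one alert), treating each monitored point as an
    independent chance to catch the illness if it is present there."""
    p_site = AFFECTED_FRACTION * sensitivity
    return 1.0 - (1.0 - p_site) ** sites_monitored

deep = p_alert(sites_monitored=20, sensitivity=0.99)   # few sites, high precision
wide = p_alert(sites_monitored=400, sensitivity=0.60)  # many sites, modest precision

print(f"deep but concentrated network: {deep:.2f}")  # ~0.33
print(f"thin but wide-spread network:  {wide:.2f}")  # ~0.99
```

Under these assumed numbers, the wide and deliberately imprecise network is roughly three times as likely to catch the illness at all – reach beats confidence when the first job is detection.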
Wrong answers under the right approach serve to inform. Right answers under the wrong approach result in an endless parade of paradoxes and naked emperors.
This is critical for the astute professional to understand. For an example of these principles in action, please see the following article.
The Ethical Skeptic, “The Riddle of Certainty”; The Ethical Skeptic, WordPress, 24 Dec 2021; Web, https://theethicalskeptic.com/?p=59117