The Inverse Problem and False Claims to ‘Settled Science’
Science achieves its strongest theoretical basis when the forward problem and the inverse problem agree as to the outcomes attributed to a set of input variables inside a proposed solution. To simply craft models, parameters, constraints, arrival distributions, and relationships – all of which impart risk to the model – and then presume that our current understanding of these will guarantee a valid field result or outcome, is unfinished science at best, and pseudoscience or oligarch arrogance at worst.
Claims to consensus are invalid, and claims to finished science are inaccurate, in any circumstance where the forward problem and the inverse problem of science do not first meet in agreement. This is the circumstance we observe inside many of today’s most popular and vociferously contested scientific controversies. The public or outcome stakeholder observes one thing, and those observations stand in direct conflict with the forward model theoretical problem being pushed by conflict of interest, ‘skeptical’ agenda or profit-targeting studies. One cannot, under a claim to science, simply brush off the public/victim/stakeholder’s observations as a MiHoDeAL claims set. This constitutes a Truzzi Fallacy. It is an abrogation of scientific method, committed by ignoring the testing exposure and informative advantage entailed through the inverse problem.
This principle is codified in much of Karl Popper’s work concerning verisimilitude and the assimilation of knowledge. Popper proposed that most of science cannot rest upon a stack of empirical (historicist) ‘prophecies’ alone. He contended that most of our knowledge is attained through ‘highly informative theories which have a lesser chance of being true.’¹ Most of science is not like the science involved in predicting planetary motions, for instance. Such forward problem prophetic constructs as eclipses and planetary motions are a rare condition in science. Instead, he conjectured that the more information a theory places under testable exposure, the more informative it becomes. For our purposes here, a theory only survives a Popper verisimilitude condition when it can be independently derived or confirmed from the field observations to which it relates. Field observation generates alternative ideas and increases the number of features of information which an explanatory theory must bear under falsification testing. In other words, field observations, feature testing, sensitivity analyses and confirmations make a theory less probable – and therefore more highly informative.
Science, or to be precise, the working scientist, is interested, in Popper’s view, in theories with a high informative content, because such theories possess a high predictive power and are consequently highly testable. But if this is true, Popper argues, then, paradoxical as it may sound, the more improbable a theory is the better it is scientifically, because the probability and informative content of a theory vary inversely—the higher the informative content of a theory the lower will be its probability, for the more information a statement contains, the greater will be the number of ways in which it may turn out to be false.¹
In other words, we reduce the risk of our forward model being errantly assumed as correct – by increasing the number of its features which are subject to falsification. We avoid the increasing orange curve in the graphic above. All models are going to bear these assumptions and features, whether we acknowledge them or not. So it is best to acknowledge and test them. The inverse problem allows for such features to be acknowledged by necessity, and then brought into the crucible of science (falsification).
From the Stanford Encyclopedia of Philosophy, the outline therein by Stephen Thornton continues.
This, then, Popper argues, is the reason why it is a fundamental mistake for the historicist to take the unconditional scientific prophecies of eclipses as being typical and characteristic of the predictions of natural science—in fact such predictions are possible only because our solar system is a stationary and repetitive system which is isolated from other such systems by immense expanses of empty space. The solar system aside, there are very few such systems around for scientific investigation—most of the others are confined to the field of biology, where unconditional prophecies about the life-cycles of organisms are made possible by the existence of precisely the same factors. Thus one of the fallacies committed by the historicist is to take the (relatively rare) instances of unconditional prophecies in the natural science as constituting the essence of what scientific prediction is, to fail to see that such prophecies apply only to systems which are isolated, stationary, and repetitive, and to seek to apply the method of scientific prophecy to human society and human history.¹
In applying this, the ethical skeptic therefore views the role of the inverse problem as introducing stark informative advantage to the scientific process, along the following lines of Popperian logic.
- All predictive theories/models contain the following features (parameters):
  - Control Variables
  - Arrival Distributions
  - Interleaving Effects
  - Neural or Feedback Mechanisms
- Predictive explanatory models (forward problem) which do not require exhaustive testing of these features are rare.
- When a forward problem model alone is assembled, it contains these feature elements, along with their imparted risk, whether or not we acknowledge either.
- To improve a match to predicted outcome in the forward model, these model features must be assumed by any study addressing the topic, whether acknowledged or not.
- An inverse problem process involves the assembly of field observations which serve to do the following:
  - Acknowledge the presence of and role imparted by each model feature
  - Bring each feature into coherent, measurable sensitivity relationships with the real world, which increase its Popper exposure and informative context
  - Reduce the risk imparted by each element by testing its impact by means of two reductive methods (forward and inverse)
  - Reduce the overall field of uncertainty inside the subject (intelligence)
  - Highlight conditions/domains where a forward problem model may be, with or without our awareness, inaccurate, divergent in solution, inconclusive or incoherent
  - Dispel false notions of simplicity which promote ignorance around a subject
  - Introduce the avenue through which
    - falsification of the model or its features can be attained,
    - competing theories can be developed, and
    - an increase in the epistemological basis of our overall understanding can be attained.
This is the process of science. The last three bullet points in particular constitute the basis for what Popper called ‘truthlikeness’ or ‘verisimilitude.’¹
Therefore, we see that in most of science, if field observations can be readily made, and the organization making a claim to evidence has not undertaken such observations to confirm or follow up on its conjectured theory – then it has been guilty of Forward Problem Blindness, or unfinished science. Under such a condition, one cannot make a claim to settled science or consensus.
/philosophy : science : epistemology : observation : prediction theory/ : to predict the result of a measurement requires (1) a model of the system under investigation, and (2) a physical theory linking the parameters of the model to the parameters being measured. This prediction of observations, given the values of the parameters defining the model constitutes the “normal problem,” or, in the jargon of inverse problem theory, the forward problem. The “inverse problem” consists in using the results of actual observations to infer the values of the parameters characterizing the system under investigation.² ~Wolfram Media
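The forward/inverse distinction in the definition above can be sketched in a few lines of code. The linear model, the parameter values, and the noise level below are all hypothetical, chosen only for illustration: the forward problem predicts observations from assumed parameters, while the inverse problem recovers those parameters from noisy field observations, and a completed solution is one where the two agree.

```python
import random

def forward(a, b, x):
    """Forward problem: predict the observable y from model parameters (a, b)."""
    return a * x + b

def inverse(xs, ys):
    """Inverse problem: infer (a, b) from actual field observations,
    here via closed-form ordinary least squares for a line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# A hypothetical 'true' field system which the model is meant to describe
random.seed(1)
true_a, true_b = 2.0, 0.5
xs = [x / 10 for x in range(50)]
ys = [forward(true_a, true_b, x) + random.gauss(0, 0.1) for x in xs]  # noisy field data

est_a, est_b = inverse(xs, ys)
print(est_a, est_b)  # the two problems 'agree' when these approximate the forward parameters
```

When the inverted parameters diverge from the forward model's assumed parameters, the model – not the field observations – is what stands in need of revision.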
It is not enough to theorize and predict; a scientist must also (if feasible) neutrally observe, confirm, follow up, and craft imputed theory from outcome intelligence as well.
/philosophy : science : epistemology : explanatory model or construct/ : a theoretical relationship or algorithm which is conjectured to comprise input variables, arrival distributions, controls and measures, parameters, constraints, assumptions, dependencies and interleaved feedback networks – all resulting in a given set of observable outcome measures. A completed solution is the condition where both the forward problem and the inverse problem agree in support of the proposed theoretical relationship.
A theory derives verity in both successfully predicting outcomes as well as being independently predictable from its observed impacts.
Therefore, as we step from the realm of model development and into the domain of scientific study (which is simply an empirical form of model development) we carry with us the following observed risk:
Forward Problem Blindness (Unfinished Science)
/philosophy : science : epistemology : observation : prediction theory : pseudoscience/ : the “inverse problem” consists in using the results of actual observations to infer the values of the input parameters characterizing a system under investigation. Science which presupposes a forward problem solution, or employs big data/large-S population measures only inside a model and the physical theory linking input parameters forward to that model’s predicted outcome – without conducting direct outcome observation confirmation or field measure follow-up to such proposed values and linkages – stands as unfinished science, and cannot ethically justify a claim to consensus or finished science.
The four types of Forward Problem Blindness Errors:
Type I – Cohort/Subset Ignorance – wherein special populations or peripheral groups bearing different inherent profiles are not studied, because the survey undertaken was inclusive but too large, or because the peripheral groups themselves, while readily observable, were ignored or screened out altogether.
Type II – Parameter Ignorance – wherein a model or study disregards an important parameter, tendering it instead an assumption basis which is acknowledged by neither the study developer nor peer review – its potential contribution to increased understanding, or even to model or study error, being thereby lost.
Type III – Lack of Field Confirmation or Follow-Up – wherein a theoretical forward problem model is established and presumed accurate, yet despite the ready availability of a field confirming basis of observation – no effort was ever placed into such observation, confirmation of measures and relationships, or observations were not undertaken to determine long term/unanticipated outcomes.
Type IV – Field of Significant Unknown – wherein established ideas of science are applied to develop a theoretical forward problem model – and because of the familiarity on the part of science with some of the elements of the solution proposed – the solution is imputed tacit verity despite being applied inside a new field for the first time, or inside a field which bears a significant unknown.
Each of these Forward Problem Blindness error types will result in some disposition other than accuracy – unless one is simply lucky. And no, the process of peer review will not necessarily catch this. A model presumed accurate can still be inaccurate, divergent in solution, inconclusive or incoherent, and remain undetected as such – unless one undertakes the necessary follow-up and field sensitivity measures incumbent in the inverse problem.
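One simple form of the field sensitivity measure mentioned above is a one-at-a-time perturbation test. Everything in this sketch – the model, the parameter names, the coefficients, and the perturbation size – is hypothetical; the point is only that each model feature can be ranked by how strongly the predicted outcome depends on it, which identifies the parameters most urgently in need of field confirmation.

```python
def model(params):
    """Hypothetical forward model: an outcome as a function of three named parameters."""
    return params["dose"] * 1.8 + params["exposure_days"] * 0.05 - params["baseline"] * 0.3

def sensitivity(model, params, delta=0.01):
    """One-at-a-time sensitivity: perturb each parameter by a small relative
    amount and report the normalized change in the model outcome (an
    elasticity). Parameters whose perturbation moves the outcome most are
    the ones whose assumption basis imparts the most risk."""
    base = model(params)
    elasticities = {}
    for name, value in params.items():
        bumped = dict(params)
        bumped[name] = value * (1 + delta)
        elasticities[name] = (model(bumped) - base) / (base * delta) if base else 0.0
    return elasticities

params = {"dose": 10.0, "exposure_days": 30.0, "baseline": 5.0}
print(sensitivity(model, params))  # the largest-magnitude entry dominates the outcome
```

A parameter which dominates such a ranking, yet has never been confirmed in the field, is precisely a Type II error waiting to surface.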
Common Examples of Application
- Earthquake Predictive Model Confirmation
- Vaccine Impact Follow-up by Genetic Subgroup and Malady
- Field Validation of Consistently Contested Public Observations
- Impacts of Pesticides Employed in Food on Human Health
- Economic Control Measures and Their Impacts
In each of these examples, were a scientist to make a claim based upon a forward problem prediction alone, which is then simply assumed to be correct without field follow-up, this would constitute an instance of unfinished science. Sadly, much of our conflict-of-interest and profit-driven science today exists in this state of incompleteness.
Forward Problem Blindness in such cases constitutes a willful error of pseudoscience.
¹ Thornton, Stephen, “Karl Popper”, The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2015/entries/popper/>.
² Tarantola, Albert. “Inverse Problem.” From MathWorld–A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/InverseProblem.html.