He Knew He Was Right

The crucial point in my conception of non-empirical theory confirmation is that all three arguments of non-empirical theory confirmation that I’ve described before rely on assessments of limitations to underdetermination. In effect, scientists infer the strength of limitations to underdetermination from observing a lack of known alternatives, the surprising explanatory extra value of their theory or a tendency of predictive success in the research field. Eventually, these observations amount to theory confirmation because strong limitations to underdetermination increase the probability that the known theory is viable. The connection between the number of possible alternatives and the chances of predictive success is intuitively most plausible when looking at the extreme cases: if there are infinitely many alternatives to choose from and just one of them is empirically viable, the chances to pick the correct one are zero. If there is just one possible consistent theory – and if I assume that there is a viable scientific theory at all –, the chance that the consistent theory I found will be predictively successful is 100 percent.

Richard Dawid, on String Theory and Post-Empiricism
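
To make the extreme cases in that passage concrete, assume (a simplification of mine, not Dawid's) that there are N candidate theories consistent with everything known so far, exactly one of which is empirically viable, and that the theorist's pick is effectively a uniform draw among them:

\[
P(\text{viable pick}) = \frac{1}{N},
\qquad \lim_{N \to \infty} \frac{1}{N} = 0,
\qquad \frac{1}{N}\Big|_{N=1} = 1 .
\]

Everything interesting happens between the extremes: evidence that N is small raises the probability of having picked a viable theory without ever guaranteeing it, which is just what "strong limitations to underdetermination increase the probability that the known theory is viable" amounts to.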

This type of reasoning – about how the probability of success can be bounded even within an infinite search space – throws up all kinds of surprises. But there’s another theme here, slightly submerged: it turns out that theory choice, or axiom selection, isn’t really arbitrary (even if it is necessarily “ungrounded”). There are background reasons why a particular theory proposes itself, or a particular collection of axioms seems initially plausible.

The argument I’m familiar with is that such a choice “proves” itself through its own performativity: it’s retroactively validated (in the weak sense of “shown to be useful”, rather than a strong sense of “proven to be true”) by the results it makes available. But this may be a kind of rationalisation – see, we were right to start here after all! – of a choice that was already guided by criteria that aren’t formally specifiable (i.e. you couldn’t generate the “good” starting-points by following a computational procedure). We start out with a sense of the affordances and constructive capacities of particular forms and combinatorial operations, and pick out likely candidates based on practical intuitions.

This is certainly how it goes in programming – to the extent that I’m a “good” programmer, it’s because experience enables me to be consistently “lucky” in picking out a pragmatically viable approach to a problem. There’s usually an “experimental” stage where one sets up a toy version of a problem just to see how different approaches play out – but what one is experimenting with there is theoretical viability, not empirical confirmation.
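
A sketch of the kind of throwaway spike I mean, with a made-up toy problem (deduplicating a list) and two invented candidate approaches standing in for whatever one is actually weighing up; none of this comes from a real project, it just shows the shape of the exercise:

```python
# A toy "spike": a deliberately small version of a (hypothetical) deduplication
# problem, set up only to see how two candidate approaches play out, i.e. whether
# each is viable in principle, not whether it is empirically "confirmed".
import random
import time


def dedupe_by_sorting(items):
    """Candidate A: sort first, then drop adjacent duplicates. Loses input order."""
    deduped = []
    for item in sorted(items):
        if not deduped or item != deduped[-1]:
            deduped.append(item)
    return deduped


def dedupe_by_hashing(items):
    """Candidate B: remember what we've seen in a set. Preserves first-seen order."""
    seen = set()
    deduped = []
    for item in items:
        if item not in seen:
            seen.add(item)
            deduped.append(item)
    return deduped


if __name__ == "__main__":
    # Toy data: big enough to expose anything that falls down quickly,
    # small enough that the whole experiment stays disposable.
    data = [random.randrange(1_000) for _ in range(50_000)]

    results = {}
    for name, candidate in [("sorting", dedupe_by_sorting), ("hashing", dedupe_by_hashing)]:
        start = time.perf_counter()
        results[name] = candidate(data)
        print(f"{name:>8}: {len(results[name])} unique items "
              f"in {(time.perf_counter() - start) * 1000:.1f} ms")

    # The only check that matters at this stage is crude: do the candidates even agree?
    print("agree on contents:", set(results["sorting"]) == set(results["hashing"]))
```

The timings and the agreement check aren't evidence for anything; they're just the quickest way of seeing whether either candidate falls over immediately.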

Often the initial intuition is something like “this is likely either to turn out to be right, or to fall down quickly so we can discard it early and move on to something else”: what we dread, and become practised in avoiding, is floundering about with something which is wrong in subtle ways that only reveal themselves much later on.