He Knew He Was Right

The crucial point in my conception of non-empirical theory confirmation is that all three arguments of non-empirical theory confirmation that I’ve described before rely on assessments of limitations to underdetermination. In effect, scientists infer the strength of limitations to underdetermination from observing a lack of known alternatives, the surprising explanatory extra value of their theory or a tendency of predictive success in the research field. Eventually, these observations amount to theory confirmation because strong limitations to underdetermination increase the probability that the known theory is viable. The connection between the number of possible alternatives and the chances of predictive success is intuitively most plausible when looking at the extreme cases: if there are infinitely many alternatives to choose from and just one of them is empirically viable, the chances to pick the correct one are zero. If there is just one possible consistent theory – and if I assume that there is a viable scientific theory at all –, the chance that the consistent theory I found will be predictively successful is 100 percent.

Richard Dawid, on String Theory and Post-Empiricism
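
The two limiting cases Dawid invokes can be put in a minimal form. Assuming (my gloss, not something spelled out in the quoted passage) a uniform chance of picking any one of $N$ admissible theories, of which exactly one is empirically viable:

$$P(\text{picked the viable theory}) \;=\; \frac{1}{N}, \qquad \lim_{N \to \infty} \frac{1}{N} = 0, \qquad \frac{1}{N}\bigg|_{N=1} = 1.$$

The arguments for limitations to underdetermination are, in effect, arguments that $N$ is small.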

This type of reasoning – from bounded probability, given an infinite search space – throws up all kinds of surprises. But there’s another theme here, slightly submerged: it turns out that theory choice, or axiom selection, isn’t really arbitrary (even if it is necessarily “ungrounded”). There are background reasons why a particular theory proposes itself, or a particular collection of axioms seems initially plausible.

The argument I’m familiar with is that such a choice “proves” itself through its own performativity: it’s retroactively validated (in the weak sense of “shown to be useful”, rather than a strong sense of “proven to be true”) by the results it makes available. But this may be a kind of rationalisation – see, we were right to start here after all! – of a choice that was already guided by criteria that aren’t formally specifiable (i.e. you couldn’t generate the “good” starting-points by following a computational procedure). We start out with a sense of the affordances and constructive capacities of particular forms and combinatorial operations, and pick out likely candidates based on practical intuitions.

This is certainly how it goes in programming – to the extent that I’m a “good” programmer, it’s because experience enables me to be consistently “lucky” in picking out a pragmatically viable approach to a problem. There’s usually an “experimental” stage where one sets up a toy version of a problem just to see how different approaches play out – but what one is experimenting with there is theoretical viability, not empirical confirmation.
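Purely as an illustration of that “experimental” stage (the problem and both approaches below are invented for the example, not anything specific to this post), a toy spike might look like this: the same small task attempted two ways, written out just to see which shape of code one would be committing to.

```python
import re

# Toy "spike": a scaled-down version of a problem, set up only to see which of
# two candidate approaches stays tractable once the real requirements arrive.
SAMPLE = "2015-03-21 ERROR timeout; 2015-03-22 INFO ok; 2015-03-23 ERROR refused"

def approach_regex(text):
    """Approach A: one regex. Quick to write; brittle if the format drifts."""
    return re.findall(r"(\d{4}-\d{2}-\d{2}) (ERROR) (\w+)", text)

def approach_parse(text):
    """Approach B: split and inspect tokens. More code now, easier to extend later."""
    records = []
    for chunk in text.split(";"):
        fields = chunk.split()
        if len(fields) >= 3 and fields[1] == "ERROR":
            records.append((fields[0], fields[1], fields[2]))
    return records

# The point isn't the output (both give the same answer on the toy input);
# it's seeing which shape of code you'd be committing to.
print(approach_regex(SAMPLE))
print(approach_parse(SAMPLE))
```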

Often the initial intuition is something like “this is likely either to turn out to be right, or to fall down quickly so we can discard it early and move on to something else”: what we dread, and become practised in avoiding, is floundering about with something which is wrong in subtle ways that only reveal themselves much later on.

heart disease and bootleg clothing

“Speech” is both directly a form of social action, and an activity in which forms of social action can be modelled without being immediately enacted. What we want from “free speech” is freedom to model action in different ways – to anticipate, consider, scope out, render imaginatively tractable the widest possible variety of situations and events. The usual argument is that any restraint on this capacity is a restraint on our ability to navigate the world, to deal with the unknown, to adjust and optimise and reconsider in the light of new information. But it might also be: restraint on our ability to generate dehumanising representations of others which prepare the ground for dehumanising treatment of them, or serve to justify that treatment after the fact.

Even as we understand that the social value of “speech” comes precisely from the absence of a direct causal link between representation and action, we also have a variety of ways of noticing when speech isn’t causally innocuous. Free speech fundamentalism often requires us to not notice these occasions, or to position them as always remediable through further discussion (as Judith Butler, with impeccable liberal logic, ends up doing in Excitable Speech). But the exercise of what Lyotard called “terror” – the forcible silencing and removal of disputants – passes through representations, even if it isn’t confined to them.

The defence of “free speech” often ends up turning into a defence of “free social action” (provided it can be categorised as “speech”) rather than of “free consideration of social action”. This is morally incoherent, but naturally appeals to dickheads, of whom there are remarkably many on the internet. Posting revenge porn on reddit isn’t “modelling” anything: it’s doing something, to someone.

I want at some point to go back and look carefully at MacKinnon’s use of Lyotard in Only Words, because while I think it’s a bit of a missed connection I also think that something interesting nearly happened there. Or that the reasons why it didn’t quite happen may be interesting in themselves.

Lather, Rinse, Repeat

The (possibly alarmist) claim recently surfaced on social media that it was only a matter of time before some enterprising hacker managed to connect the records held by porn sites of their users’ browsing histories to the individual identities of those users, creating considerable opportunities for individual blackmail or general mischief. My personal reaction to this scenario – oh god please no – was balanced by a tranquil sense that a great many people would be in the same boat, and that the likely social impact of mass disclosure was difficult to anticipate. It might be horrific and hilarious in about equal measure. However, sites such as Pornhub already occasionally release their own statistical analyses, showing which US states evince the greatest interest in teenagers, spanking, interracial couples and so on. Public access to their – suitably anonymised – access logs might yield much of sociological interest.

My review of Tim Jordan’s Information Politics: Liberation and Exploitation in the Digital Society is now up at Review 31.