Immanence and Objectivity

In her 1979 paper “Cognitive Repression in Contemporary Physics”, Evelyn Fox Keller describes the scientific viewpoint associated with classical (i.e. Newtonian) mechanics as based on a pair of conjoined assumptions: firstly, that the subject of scientific knowledge is strictly separable from the possible objects of such knowledge; and secondly, that it is possible to establish a direct correspondence between what is known of each such object and its actuality. According to this viewpoint, “nature” is both objectifiable and ideally knowable by a scientific subject which stands apart from that nature in order to observe it.

We have here an operation of dividing-and-regluing very like that described by Laruelle as characteristic of philosophy (and, indeed, Laruelle would likely describe Fox Keller’s account as true of a certain philosophy-of-science, or philosophical epistemology, rather than of science itself as a practical stance). The immanent Real is split into an objectifiable domain of distinct entities, and a transcendental order of knowledge which proposes to organise those entities into a world (“the scientific worldview”, let’s say). Rather than thinking “according to the Real”, or from the premise that both “knower” and “known” are immanent to the same reality (and thus share a fundamental identity), the stance Fox Keller describes is “decisional” in Laruelle’s sense: it begins by making a cut, and by giving itself the authority to repair that cut.

Fox Keller observes that the principles of objectifiability and knowability break down in the face of quantum mechanical phenomena: they cannot be maintained simultaneously, and every attempt to do so produces metaphysical monsters in the guise of “interpretations” of quantum mechanics (as Derrida once put it: coherence in contradiction indicates the force of a desire). Her Piagetian reading of the resistance to non-classical epistemology in terms of affective positions is suggestive (in that smug “clever men in white coats are really just big babies” sort of way that has never quite seemed to go out of vogue, for reasons I could probably venture some pseudo-psychological explanations for myself), but doesn’t particularly help us resolve the problem of how to develop such an epistemology.

Susskind describes the situation as follows: in a classical system, we are confident of always being able to make a “gentle enough” measurement that the system being measured is not perturbed, so that the apparatus is able to determine how the system would behave if the apparatus itself were not present. This is, in fact, perfectly possible a lot of the time. Within a quantum system, however, measurement is carried out by means of operations* which participate in the total behaviour of the system itself, such that we are always observing the outcome of what I will call an effectuation. Any such effectuation is the effectuation both of a measurement and of a new state of the system. Both (classical) objectifiability and (classical) knowability are untenable under these conditions; the former because the apparatus of measurement is not strictly separable from the system being measured, and the latter because the process of obtaining information about one aspect of the system may render information about another aspect inaccessible.

This is helpful, but doesn’t go quite far enough. If we had not measured the particle’s position, we should have been able to measure its momentum (and vice versa); but this does not mean that, at some moment prior to measurement, the particle necessarily had both position and momentum (i.e. possessed some complete, if hidden, state in which both position and momentum were simultaneously inscribed). Rather, measurement-of-position and measurement-of-momentum are distinct operations that effectuate one value at the expense of being able to effectuate the other.

A thoroughgoingly immanent account of how science proceeds will be one which sees scientific theory-building, measurement and knowledge-formation as effectuations of the same Real, rather than the work of one kind of agent – the detached scientific knower – upon one kind of patient – “Nature”, etherised upon a table. That is one way of looking at the problematic with which Laruelle is engaged, and of understanding why the “quantum” has taken on such a totemic significance for him in his later work.


* As Susskind also reminds the reader, an operator is a mathematical entity which acts on state vectors to produce new state vectors. It does not, however, change physical reality – rather, it describes how a real-valued measurement is derived from a state vector in a quantum system. How the state of that system changes in the process of carrying out the measurement is a quite different matter. Accordingly, I have corrected “operator”, where it appears above, to “operation”. Caveat lector, as always when I’m trying out new stuff.

Solace of Quantum

Early on in Leonard Susskind and Art Friedman’s Quantum Mechanics: The Theoretical Minimum, the authors illustrate a difference between the logic of predicates and sets, and the logic of observations at quantum scale. One way to look at it is in terms of modelling.

If the state space of a classical system is the set of all the states the system can be in, then a proposition about the system corresponds to a predicate P that picks out certain states as possible states of the system if that proposition is true. Susskind’s example is a single roll of a six-sided die. Let the “state” of the die be the number that is facing upwards after the roll – the state space S is then the set {1, 2, 3, 4, 5, 6}. To the proposition “the value of the die is even” corresponds the predicate P that picks out the subset of the state space S' = {2, 4, 6}. We have S, the state space, P, the predicate, and S' = {x ∈ S : P(x)}, the subset of the state space which satisfies the predicate.

The state space is thus a model for propositions about the system, which means that we can translate statements about propositions into statements about predicates and subsets. If two propositions about the system are true at once, then the states the system can be in are those which are picked out by both of their corresponding predicates. For example, take the propositions “the value of the die is even”, and “the value of the die is greater than three”. Considered separately, these correspond to two predicates, P1 and P2, which pick out two different subsets of S, S1 = {2, 4, 6} and S2 = {4, 5, 6}. The state subset corresponding to the proposition “the value of the die is even, AND is greater than three” is the intersection of the subset S1 picked out by P1 and the subset S2 picked out by P2: {4, 6}. The subset corresponding to the proposition “the value of the die is even, AND/OR is greater than three” is the union of S1 and S2: {2, 4, 5, 6}. There is thus a “primitive” or “direct” set-theoretic interpretation of propositions concerning the state of the system, based on Boolean algebra.
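To make this concrete, here is a minimal Python sketch – my own illustration, not Susskind and Friedman’s – in which propositions are just predicates on the state space, and AND and AND/OR fall out as intersection and union of the subsets those predicates pick out:

    # State space of a single roll of a six-sided die.
    S = {1, 2, 3, 4, 5, 6}

    # Predicates corresponding to propositions about the state.
    def is_even(x):
        return x % 2 == 0

    def greater_than_three(x):
        return x > 3

    # The subset of the state space picked out by a predicate.
    def states_satisfying(P, states):
        return {x for x in states if P(x)}

    S1 = states_satisfying(is_even, S)             # {2, 4, 6}
    S2 = states_satisfying(greater_than_three, S)  # {4, 5, 6}

    print(S1 & S2)  # "even AND greater than three": intersection, {4, 6}
    print(S1 | S2)  # "even AND/OR greater than three": union, {2, 4, 5, 6}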

An important characteristic of classical systems is that they “hold still” while being observed: the state space is a static image of possible measurement results within a given reference frame, and we can combine measurements freely without that image shifting beneath our feet. You can measure both the position and the velocity of a billiard ball, in whichever order you like, since there exists a state of the billiard-ball-system that captures both of these properties simultaneously.

The way we model the state of a quantum system is necessarily different, because the state of such a system is not independent of observation: to every measurement of the truth of a proposition about a quantum system corresponds a new distribution of possible states of the system. We can lose information by measuring (for example, if we observe the velocity of a particle, we lose information about its position). The order in which observations are made is therefore significant: O1 followed by O2 may well get you a different result to O2 followed by O1, which means that the algebra of quantum observation – unlike the Boolean algebra of classical systems – is noncommutative.
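As a concrete illustration of that noncommutativity (a standard one, not taken from the book’s worked examples): represent two spin observables as matrices and apply them in different orders. Strictly speaking this exhibits the noncommutativity of the operators rather than of measurement “operations” in the sense of the footnote above, but it is the mathematical shadow of the same fact:

    import numpy as np

    # Pauli matrices: standard observables for spin along the z- and x-axes.
    Z = np.array([[1, 0], [0, -1]])
    X = np.array([[0, 1], [1, 0]])

    print(Z @ X)                          # [[ 0  1] [-1  0]]
    print(X @ Z)                          # [[ 0 -1] [ 1  0]]
    print(np.array_equal(Z @ X, X @ Z))   # False: the order matters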

Susskind makes the following eyebrow-raising statement – “the space of states of a quantum system is not a mathematical set; it is a vector space” – and then qualifies it in a footnote: “To be a little more precise, we will not focus on the set-theoretic properties of state spaces, even though they may of course be regarded as sets” (italics mine). This is a subtle distinction. A vector space is a set, or at least is not not a set – working in vector spaces doesn’t in any sense take you out of the set-theoretic mathematical universe. But the way states “fit together” in a quantum system – their way of being compossible – is not readily definable in terms of primitive set-theoretic operators like union and intersection: you need the language of linear algebra, of orthonormal bases and inner products, to make sense of it.
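A minimal sketch of what that “fitting together” looks like in the vector-space register (my illustration, using the standard single-qubit formalism rather than anything quoted from Susskind): a state is a unit vector, an orthonormal basis lists the possible outcomes of one measurement, and inner products, rather than membership and intersection, carry the information:

    import numpy as np

    # An orthonormal basis for a single qubit: the two outcomes of one measurement.
    up = np.array([1, 0], dtype=complex)
    down = np.array([0, 1], dtype=complex)

    # A state is a unit-length linear combination (superposition) of basis vectors.
    state = (up + down) / np.sqrt(2)

    # Outcome probabilities come from inner products, not from set membership.
    p_up = abs(np.vdot(up, state)) ** 2
    p_down = abs(np.vdot(down, state)) ** 2
    print(p_up, p_down)  # 0.5 0.5 (up to floating-point error)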

In the light of this, it becomes a little clearer why Laruelle mobilises a metaphorics of “quantumness” – of vectors and matrices, superpositions and complex conjugates – as a way of undermining or circumventing the static philosophical architecture of “Being”: the claim he’s trying to establish is that non-standard philosophy is in some sense to standard philosophy as quantum mechanics is to classical mechanics…

Laruelle: from identity to inclusion

I think of Laruelle’s notion of identity as working a bit like the way inclusion functions work (with some caveats, which I’ll raise at the end). Here’s how.

The identity function on a domain is the function which sends every value in that domain to itself: f(x) = x.

Every function has a domain (which identifies the kinds of values it accepts as inputs) and a codomain (which identifies the kinds of values it produces as outputs). We can write this as follows: f : A -> B means “f has the domain A, and the codomain B”, or “f accepts values of type A, and produces values of type B”. The identity function IdA : A -> A is the function f(x) = x for the domain A; that is, it accepts values from the domain A as inputs, and sends every value in that domain to itself. Similarly, there is an identity function IdB : B -> B which is the function f(x) = x for the domain B; and so on.

An inclusion function from a domain A to a codomain B, where A is a subset of B, is almost exactly like the identity function on A – it sends every value in A to itself – except that the codomain is B rather than A. So it is the function f : A -> B where f(x) = x.

For example, there is an inclusion function from Z, the set of integers, to R, the set of real numbers, which just is the identity function on Z except that the codomain is R rather than Z. It sends every integer to itself, but to itself “considered as” a real number (since the integers are a subset of the reals). We might say that every integer has a “regional” identity as an integer-among-integers, mapped by the identity function IdZ : Z -> Z, and an “integer-among-the-reals” identity as a real number, mapped by the inclusion function f : Z -> R. Both functions are identical in terms of their inputs and outputs, but they have different meanings.
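A small sketch of the distinction (mine, not Laruelle’s, and only illustrative): if we represent a function explicitly as a (domain, codomain, rule) triple, the identity on Z and the inclusion of Z into R agree on every input, and differ only in the codomain their outputs are “considered as” belonging to:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Function:
        domain: str       # a label for the domain (we don't enumerate infinite sets here)
        codomain: str     # a label for the codomain
        rule: Callable

    # Identity on the integers: Z -> Z, x |-> x
    id_Z = Function(domain="Z", codomain="Z", rule=lambda x: x)

    # Inclusion of the integers into the reals: Z -> R, x |-> x
    incl_Z_R = Function(domain="Z", codomain="R", rule=lambda x: x)

    # Same inputs, same outputs...
    print(id_Z.rule(3), incl_Z_R.rule(3))        # 3 3
    # ...but not the same function: the only difference is the codomain.
    print(id_Z.domain == incl_Z_R.domain)        # True
    print(id_Z.codomain == incl_Z_R.codomain)    # False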

Now, it seems to me that for Laruelle, the Real is “like” (but see caveats below) a sort of global codomain that absolutely everything has an inclusion function into, since all “regional” domains are just subsets of this codomain. So for absolutely anything you like, it has both a “regional” identity (mapped by the identity function on its regional domain) and a One-in-one identity (mapped by the inclusion function from that domain into the Real).

When we consider everything in the aspect of its One-in-one identity, we can consider juxtapositions that aren’t otherwise possible. For example, let M be the domain of men, and F the domain of women. If we know that there is a common codomain (H, the domain of human beings) of which both M and F are subsets, then we can consider the identities man-among-humans and woman-among-humans as potentially overlapping, able to be combined or mutually transformed in various ways that the identities man-among-men and woman-among-women seemingly are not.

The Laruellian principle of unilateral identity-in-the-last-instance with the Real can be understood as something “like” a global generalisation of this move from identity to inclusion.

Now, there are two caveats to be raised here. The first is that, in set-theoretic terms, a global codomain is impossible, because of Russell’s paradox – “the One is not”, as Badiou says. Accordingly, we have to pass from something like a set into something like a “proper class”; and functions are defined between sets, not between sets and proper classes. So at the threshold of the Real, the mathematical analogy breaks down, as we should probably expect it to.

The second is that Laruelle himself is quite emphatic that the kinds of ordering and partitioning operations that sets and functions between sets enable you to perform belong to the domain of “transcendental” material – language, symbolisation, the material through which the force-of-thought has its material effects. The Real, being One, is non-partitionable; and, being foreclosed to thought, is not indexable or schematisable in the ways that the set-theoretic mathematical universe is. (The move to the “quantum” is, I think, intended as a move outside the set-theoretic mathematical universe; I’ll talk a bit more about that some other time.) “Regional” domains may be structured and more-or-less set-like, but that is both their prerogative and their weakness with respect to the Real (or its weakness with respect to them, since it underdetermines them). We must not picture whatever structures we can imagine being stabilised, held fixedly within an underlying global order of structure that is just like them only somehow bigger.

The problem then is that a global generalisation of the move from identity to inclusion takes us beyond what is structurally thinkable, or at least beyond what is thinkable using the tools we use to think about and within regional domains. This prevents us from setting up any particular regional domain (e.g. “physics”) as a master-domain against which all the others can be relativised. We have in Laruelle something like an anti-totalitarian thinking of (or “according to”) totality.

How Low Can You Go?

Simon Reynolds has put out a call for notable bass moments, and I feel called to represent…the discreditable.

We’d better start with the Neph.

Where can one go from there? To mid-80s thrash metal, of course!

Ricocheting back to Goff, it’s the Sisters of Mercy with one of the most gloriously satisfying steady-quaver basslines ever:

But I think the crown really has to go to the late, lamented Cliff Burton, as is only right:

According to the writ of the law

“Antonin Scalia” sounds so like the villain in a particularly bloodthirsty Italian opera. One hopes he was fetched to his immortal destiny in a suitably terrifying coup de théâtre…

DEVIL (offstage): Justice Scalia! Justice Scalia!
SCALIA: Who’s there? Who’s there? Merciful heaven, can it be…?
DEVIL: Why there’s no mercy in heaven for you, Justice Scalia! No more than you showed to those poor souls above whom you sat in judgement in this life!
SCALIA: I discharged my responsibilities according to the writ of the law, no more, no less!
DEVIL: A no less dreadful law proclaims your doom. [Rises through floor, to blaring horns] Rise now, Justice Scalia, and come with me!
SCALIA: Oh saints! Oh Holy Mother, have pity!
DEVIL: The saints weep for you, Justice Scalia; for so they do for all forsaken souls. Come now!
[Thunder. The DEVIL drags SCALIA down into HELL]

Future Sailors: Notes on Inventing The Future (ii)

What does the “full” in “full automation” mean? Does it mean the automation of literally everything, or the automation of everything in some class of presumably-automatable things? We can rule out the former immediately, because of the halting problem. This is a more abstract, and much less interesting, reason than “what about care work?”; but it also shows that “full automation” can’t interestingly be assumed to mean the automation of literally everything: it must mean the automation of some class of presumably-automatable things, and that immediately opens up the question of how that class is to be specified. (Again, for abstract and not very interesting reasons, it can’t be specified automatically: it’s impossible to have an automatic procedure for determining whether or not something is automatable, at least if we assume that “automatable” implies “computable” in some sense).
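For what it’s worth, here is the shape of that abstract and not very interesting argument, sketched in Python rather than in generality (the usual diagonalisation, nothing specific to ItF): suppose we had a total, automatic decider for whether a given program halts – or, by extension, for whether a task is “automatable” in any sense that implies computability. We could then build a program that does the opposite of whatever the decider says about it, so no such decider can exist:

    # A sketch of the diagonal argument. alleged_decider stands in for any purported
    # automatic procedure that always answers the question "will this halt?".
    def alleged_decider(program, arg) -> bool:
        # Any total procedure will do for the sketch; this one just says "yes" to everything.
        return True

    def contrarian(program):
        """Do the opposite of whatever the decider predicts about program run on itself."""
        if alleged_decider(program, program):
            while True:      # the decider said "halts", so loop forever
                pass
        else:
            return "halted"  # the decider said "doesn't halt", so halt at once

    # The decider claims that contrarian(contrarian) halts...
    print(alleged_decider(contrarian, contrarian))  # True
    # ...but by construction contrarian(contrarian) would loop forever, so the decider
    # is wrong; and the same trap closes on any total decider substituted above.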

Without totally dismissing the idea that some kinds of automation might make some kinds of care work easier to do (in the manner of “labour-saving devices”), I think we can rule out robot mental health nurses, childminders, or other kinds of looker-after-of-vulnerable-human-beings. That human beings are recurrently vulnerable in ways that require looking after, and that this looking-after isn’t amenable to automation, ought to frame our understanding of the ways in which automation can expand our capacity for action, or relieve us of the necessity of having to do certain kinds of work for ourselves. Lyotard makes the point that all human beings pass through a kind of neoteny, which we call childhood, and that this is as much “the season of the mind’s possibilities” as it is a kind of impairment. Childhood is a condition under which the things we want to do, and the things we need doing for us, are complex and dialogical, and we remain substantially under this condition as adults even if we have managed to find transitional objects to tide us over. Automation can do very little to relieve us of the work of childhood (although psychoanalysis is often concerned with a kind of automatism that takes hold in these processes, replacing dialogic tension and release with monologic fixation).

So, the Universal Abstract Subject that finds its opportunities for self-and-other-actualisation enhanced and amplified by technology is in a sense a subject separated from its childhood, a grown-up subject, with relatively stable needs and purposes. I am playing a game, and I want to be more successful at it; I script a bot to execute efficiently some of the combinations of moves I commonly have to make, or to evaluate the state of play and suggest winning strategies. Or: I have been given a well-specified and repetitive task to do, and it occurs to me that something much less intelligent than me could do it instead. Or: I offload part of the cognitive burden of detecting patterns in sensory information to a machine-learning system that has become superlatively good at noticing when something looks a bit like a cat, so that I can concentrate on something the machine-learning system isn’t quite so good at doing. What do these scenarios have in common? That the goals to be achieved are specified in advance, and that technical means exist through which they can be accomplished by a proxy agent with less and less involvement from the agent (me) whose goals they originally were.

“Full” automation, then, means that things we already know we want to do, and already know how to do, should be done less and less by us, and more and more by proxies, so that we can spend more time on things we don’t already know we want to do, or how to do. There isn’t actually any specifiable endpoint to this process: we’ll never know when we’ve finished, when we’ve automated all the things. The argument of ItF, as I understand it, is that we’re lagging a long way behind: that there are still a great many things that human beings are doing unnecessarily, because capitalism (like Sports Direct) will happily use cheap labour rather than even quite “low” technology for as long as it can get away with it. The demand for full automation is then a (perfectly reasonable) demand to “catch up” with what technology has already made possible. But the dynamic I’ve been describing here suggests that this will mean not so much the elimination of work as its ongoing transformation.

Future Sailors: Notes on Inventing The Future (i)

Neoliberalism, as ItF narrates it, was neither a social movement (“politics from below”) nor, initially, a state project (“politics from above”), but a kind of conspiracy which aimed to capture both state power and social energies and turn them to its own ends. Its effective cunning, its ability to transform “contradictions” into “productive tensions”, was in part due to the fact that it had no identity to uphold: no fixed allegiance, no-one in whose name it was to speak. If neoliberalism was finally a class politics, it was a politics of the owners of capital qua owners of capital, whoever they might happen to be. We must understand Hayek’s “individual freedom” not only negatively, as freedom from state intervention or bonds of collective allegiance, but also positively, as freedom to act as a certain kind of abstract agent: a receiver and emitter of pricing signals. I alone must be able to say what the value of something is to me (deregulation), and I must have as many occasions as possible to deliver evaluations of this kind (marketisation).

The universal abstract subject (UAS) of neoliberalism is a transducer of price signals, a node in a massively distributed information system the global goal of which is to become more efficient. The role of producer is secondary: there must be inputs from labour in order for the system to have priceable things to regulate, but from the point of view of the system labour itself exists only as another priceable thing. As a counter to this, ItF proposes an alternative UAS, a self-and-other-actualising producer of values (to give it a slightly Nietzschean accent) whose “synthetic freedom” is the freedom to invent new things to do, and new means by which it can become more free to do them.

Politics in the name of a UAS, i.e. a vision of generic human capacity, is politics in a different register to politics in the name of justice, rights or equality. It is only interested in the ways people can be wounded, stigmatised, humiliated or excluded insofar as such injuries impinge on their ability to exercise the capacity by virtue of which they qualify as instances of the UAS. To many people this will seem weirdly amoral. Neoliberalism can be moralised, if we posit property rights as in some way fundamental to human dignity rather than simply as necessary preconditions for the exercise of liberal agency, but it has no real need of morality to motivate its defence of strong property rights: they are functionally indispensable, that’s all. ItF’s modernist UAS, liberated from work and empowered by technology to mould reality to its individual and collective wishes, has (increasingly) complex needs, but once again the reason why these needs should be met is not that suffering cries to the heavens for remedy, but that synthetic freedom requires us to be constantly levering ourselves further and further away from the precarity and waste of subsistence-level survival. (An open, and troubling, question for ItF’s whole project is how this “ourselves” can be properly inclusive of all humanity, and not merely codify the entitlement of its wealthiest part to all the resources they can lay their hands on).

In some quarters, access to the internet is now being discussed as a universal human right. From the point of view of ItF’s UAS this is exactly as it should be, because internet access is functionally necessary for certain kinds of projects which a subset of humanity is now able to undertake (creating open source software, for example, or arguing about books on blogs). But this is then a conception of “rights” which is contingent on the ongoing elaboration of human powers and freedoms, rather than grounded in human nature or motivated by a ceaseless ethical vigilance over human vulnerability.

What ItF wants to have in common with neoliberalism, then – besides its demonstrated effectiveness as a hegemonic strategy – is a certain detachment from morality, a sense that moral concerns are politically secondary. (This sense is also present in some, but by no means all, formulations of revolutionary politics). There is a deliberate break here with what we might call pastoral politics, politics which proposes an ideal moral order and looks for ways that this order can be established so that we can all live peacefully within it. Modernist Prometheanism is decidedly anti-pastoral, because its demand for openness about possible human goals and purposes is incompatible with any scheme in which everyone has and knows their place in a global moral order. Its strongest critique of neoliberalism, then, is not that the latter is amoral and destructive, but that it, too, constitutes a premature and drastically limiting decision in favour of a singular vision of human purpose.
