(response to a request to describe what I think Laruelle’s “one good idea” actually is)
Laruelle’s thinking sets off from a description of the decision schema, which he claims characterises “philosophy” in general: you split the world into representation (“transcendental”) and represented (“immanence”), posit a necessary relationship – of correlation or exchangeability – between the two, and reflect that relationship within the representation. That closes the loop of “auto-position”: your posit is necessary, because it posits its own necessity. (It’s a bit like Bible-bashers quoting 2 Timothy 3:16 at you as proof that the Bible is true). The representation then appears as “sufficient”, because it controls on its own terms the relationship between itself and that which it represents.
Laruelle’s Good Idea is that you can “suspend” that schema, by substituting a different posit: the relationship between any representation and the Real is unilateral and non-representable, so you can’t reflect that relationship in a self-authorising, sufficient way. That then changes the status of philosophical “decisions”: they no longer have the status of contending claims to sufficient truth, but are instead instances of a particular structure of thought that can be analysed in a non-decisional way on the basis of this alternative posit.
This is where it ought to get interesting, but doesn’t. Laruelle stipulates that a non-philosophical “theory” or “science” of philosophy should exist, but what he actually comes up with largely consists of repeating that stipulation.
In a lot of ways Laruelle’s development is similar to Richard Rorty’s. You start with “Philosophy and the Mirror of Nature”, which basically shows how lots of different philosophies try to close the loop of auto-position and how that never really works. That then leads into a generalised anti-foundationalism, the aporias of which Rorty tries to escape by turning to an account of philosophies as “final vocabularies” amenable to liberal-ironist unfinalising and mutation. The end-point is a congenial liberalism, concerned for the suffering of hurt and humiliated victims, and bearing a vague accusation against philosophy that it has turned aside, in its love of abstraction which is really a kind of delusional self-love, from the interests of suffering humanity and needs to be resubordinated to those interests.
I think a basic distinction needs to be introduced here, between technical domains that function according to various logics, and the ideology (neoliberalism, for example) which says that society is a system that should function according to some such logic, the “should” typically being a mixture of description and prescription (akin to what Laruelle calls an “amphibology”). It isn’t the logic itself that’s neoliberal; it’s the deployment of that logic in a particular hegemonic social project.
Foucauldians and critical theorists tend to want to see technical domains as embedded in epistemic formations, the way e.g. “psychiatric medicine” is theorised by Foucault as embedded in a wider system of power/knowledge, characterised by a continuous two-way transit between power relations and conceptual structures. The argument I would make is that computer technology is fundamentally unlike psychiatric medicine, because it has its own hard, objectively specifiable, intrinsic constraints which determine both its own character and development, and how it can be ideologically embedded or reflected. The halting problem, for example, just is not a social artifact: there is no possible alternative ideological dispensation under which computation-without-the-halting-problem could proceed. It constrains absolutely and unilaterally what computation can and cannot be. Technological determinism!
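The unilaterality of that constraint can be made concrete. What follows is a minimal Python sketch of Turing’s diagonal argument; the names `halts` and `diagonal` are illustrative, and the decider is deliberately left unimplemented, since the whole point is that no implementation is possible under any dispensation whatsoever:

```python
def halts(f, x):
    """Hypothetical decider: would f(x) halt? Assumed, for
    contradiction, to exist -- no implementation is possible."""
    raise NotImplementedError("no halting decider can exist")

def diagonal(f):
    """Do the opposite of whatever halts() predicts for f(f)."""
    if halts(f, f):
        while True:      # halts() said f(f) halts, so loop forever
            pass
    return "halted"      # halts() said f(f) loops, so halt at once

# diagonal(diagonal) is the contradiction: if it halts, halts()
# said it loops, and if it loops, halts() said it halts. The
# assumption -- that halts() exists -- is what has to go.
```

No rearrangement of the social relations around computing touches this; it sits on the far side of any possible ideological embedding.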
Some familiar features of the way we use computers could be, and in all likelihood would be, different under counterfactual social conditions. You get a glimpse of a parallel universe when you use a complete Smalltalk environment, for example, where the entire system down to the lowest level is available for inspection and modification. Anyone who remembers tinkering with home computers in the 1980s will remember the moment when the grey IBM PCs started to take over, and a computer went from being something you hacked on (in machine code, if you were really good) to something you ran spreadsheets on. That’s an interesting story to tell; but it’s a story about how a particular software stack has been put together, not a story about the fundamental nature of software itself. Conway’s law states that “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”, and that’s at least half-true at least half of the time. A corollary is that different forms of social organisation might well give rise to different designs for working software systems. In order to think software as such, one needs to think about what would remain invariant across such transformations; otherwise you’re just doing social anthropology, again.
We knew it wouldn’t last for ever. We “literary critics” or even “theorists” had to move quickly while the going was good, make the most of our chances, rush on excitedly, trying not to take too much notice of the slow, heavy, inexorable tread of the law somewhere close behind. The philosophers were back there somewhere, tortoise to our hare. In 1986 their books came out.
Geoffrey Bennington, Deconstruction and the Philosophers (The Very Idea), in Legislations: The Politics of Deconstruction (London: Verso, 1994), p. 11.
It’s worth remembering that “deconstruction” was always also an extra-philosophical phenomenon, a phenomenon which mounted its own retort upon philosophy, spreading out into the art world, the world of “literary criticism” as the latter attempted to alchemise itself into “theory”, and many other regions besides. “Deconstruction is America”, Derrida once said – a little hyperbolically, but you kind of knew what he meant. He wasn’t averse, either, to giving a symptomatic reading of this “spread”, of all the network-effects that constituted deconstruction as a multiple practice across multiple sites, never finally localisable within any of them. The effects were deconstruction, just as much as they were effects of deconstruction (that famous double genitive…).
With some philosophers, perhaps most of them really, you can usefully distinguish between what you might call the “toolbox” of characteristic rhetorical moves, modes of address, angles of attack and so on, and the deeper metaphysical system which provides the rationale for proceeding in that way. While there are now some Derrideans who are basically tool-users, repeatedly and somewhat mechanically “doing Derrida” to this or that theoretical object, there are others who are engaged in attempts at critical reconstruction of the philosophical core of Derrida’s thinking, who don’t necessarily write as or even like Derrideans but who find some intrigue in Derrida that they feel is worth pursuing in their own way. Same with Deleuze, same with Badiou. I’ve always felt (contra DeLanda, say) that with Deleuze the toolbox is actually more valuable than the system, such as it is; and I feel much the same way about OOP.
That’s not a disaster; a good toolbox (or two) is a thing worth having. But my bet (prior to reading the book) is that an attempt such as Pete’s to isolate and criticise the metaphysical core of OOP will end up punching smoke for the most part. I don’t at all mean that (in particular) Graham Harman’s work is substanceless, or valueless – just that its substance and value aren’t where a philosopher of Pete’s disposition would tend to look for them. The tortoise and the hare may sometimes seem to be running along the same track, but they aren’t actually in the same race.
Because I studied the Romantics – Wordsworth, Coleridge, Shelley, Keats – under the waning light of literary deconstruction, I can never hear the word “aesthetic” without unconsciously appending the word “ideology”. And because I read Althusser shortly afterwards, I can never hear the word “ideology” without immediately wondering about the possibility of some “science” that might suspend ideology’s imaginary self-sufficiency. This sequence, aesthetic-ideology-science, sets up an enclosure and proposes a path out of that enclosure. But the first term in the sequence, the aesthetic, forecloses the last: in the romantic conception, at least, science is at best an instrument for tracing paths within a realm of experience that vastly exceeds the scope of what scientific investigation and description are able to grasp. It may produce new objects of aesthetic experience, some of them sublime, but it cannot go beyond that experience. The sublime, in aesthetics, is an operator of capture: through it, whatever points beyond experience is converted into experiential intensity, and becomes the occasion for a reaffirmation of the powers of the subject. (By “operator of capture”, I mean that aspect of a system of ideas that comes into play whenever an exit from that system presents itself, ensuring that you never leave. You may if you wish picture the giant bubble that pursues and enfolds the fleeing Patrick McGoohan, ensuring that he never escapes from the Village). In Lyotard’s formulation, the sublime in art is that which presents that there is something unpresentable, something which the aesthetic sense is unable to assimilate. For romanticism, the focus always returns to presentation: that which the sublime says there is, the unpresentable, is incapable of further conceptualisation.
Scientific concepts are those which begin where aesthetic categories and modes of apprehension leave off. They are able to treat of objects which are not the objects of any possible experience. What we experience, looking through a microscope, is not the world of microscopic-scale objects, but a magnified projection of that world; the microscopic domain itself is inaccessible to human experience, being beyond the limits of our senses, but not to conceptualisation. Our theories about micro-scale entities and interactions are not theories about magnified projections of those things: they aim to describe how the entities themselves behave. (This is not the same as claiming to know what the entities are “in-themselves”. The point is simply that when you are describing, you are describing something, and that the object of this kind of description sits on the thing-projected end of the projection relationship, rather than the projected-thing end. Neither “end” need be accorded the status of noumenon, pre-theoretic given or whatever). Projection is an aesthetic activity which brings something into view – and “science” is also an arsenal of aesthetic means, of projections of various kinds, which convert the non-visible into an image. Heat maps of the sun; spiralling tracks in a particle chamber. For aesthetics, and for the aesthetic ideology of romanticism, these projections are all we know and all we need to know of the things projected; for science, they are the way-stations of an investigation.