# Universality and Heresy

Now the violent exclusion inherent in the institution or realization of the universal can take many different forms, which are not equivalent and do not call for the same politics. A sociological and anthropological point of view will insist on the fact that setting up civic universality against discrimination and modes of subjection in legal, educational, moral forms involves the definition of models of the human, or norms of the social. Foucault and others have drawn our attention to the fact that the Human excludes the “non-Human”, the Social excludes the “a-social”. [cf the Afropessimist version of this critique, which identifies this exclusion with its specific, racialising form in anti-blackness]

These are forms of internal exclusion, which affect what I would call “intensive universalism” even more than “extensive universalism”. They are not linked with the territory, the imperium; they are linked with the fact that the universality of the citizen, or the human citizen, is referred to a community. But a political and ethical point of view, which we can associate with the idea or formula of a “community without a community”, or without an already existing community, has to face yet another form of violence intrinsically linked with universality. This is the violence waged by its bearers and activists against its adversaries, and above all against its internal adversaries, i.e. potentially any “heretic” within the revolutionary movement.

Many philosophers – whether themselves adversaries or fervent advocates of universalistic programs and discourses, such as Hegel in his chapter on “Terror” in the Phenomenology or Sartre in the Critique of Dialectical Reason – have insisted on this relationship, which is clearly linked to the fact that certain forms of universalism embody the logical characteristic of “truth”, i.e. they suffer no exception. If we had time, or perhaps in the discussion, our task now should be to examine the political consequences that we draw from this fact. I spoke of a quasi-Weberian notion of “responsibility”. Responsibility here would not be opposed simply to “conviction” (Gesinnung), but more generally to the ideals themselves, or the ideologies that involve a universalistic principle and goal.

A politics of Human Rights in this respect is typically a politics that concerns the institutionalization of a universalistic ideology, and before that a becoming ideological of the very principle that disturbs and challenges existing ideologies. Universalistic ideologies are not the only ideologies that can become absolutes, but they certainly are those whose realization involves a possibility of radical intolerance or internal violence. This is not the risk that one should avoid running, because in fact it is inevitable, but it is the risk that has to be known, and that imposes unlimited responsibility upon the bearers, speakers and agents of universalism.

Etienne Balibar, On Universalism

If I had to give a name to the present moment in philosophy, I would call it the time of the heretics – this is the moment in which heresy is elevated into a value, almost a (negative-)universal value, the value of the exception, of that which is not tolerated by any politics which “[embodies] the logical characteristic of ‘truth’”. Can one distinguish the figure of the heretic from that of the renegade? Certainly the renegades like to think of themselves as heretics; but the true heretic is always something more and other than simply a renegade.

# Notes on “the digital”

It is mathematically demonstrable that the ontology of set theory and the ontology of the digital are not equivalent.

The realm of the digital is that of the denumerable: to every finite stream of digits corresponds a single natural number, a finite ordinal. If we set an upper bound on the length of a stream of digits – let’s say, it has to fit into the available physical universe, using the most physically compact encoding available – then we can imagine a “library of Boole”, finite but Vast, that would encompass the entirety of what can be digitally inscribed and processed. Even larger than the library of Boole is the “digital universe” of sequences of digits, D, which is what we get if we don’t impose this upper bound. Although infinite, D is a single set, and is isomorphic to the set of natural numbers, N. It contains all possible digital encodings of data and all possible digital encodings of programs which can operate on this data (although a digital sequence is not intrinsically either program or data).
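The correspondence between finite digit streams and natural numbers can be made concrete with a toy bijection (a sketch of my own, not part of the argument above): prefix the bit string with a leading 1, read the result as a binary numeral, and subtract one. Every finite stream then names exactly one natural number, and vice versa.

```python
def stream_to_nat(bits: str) -> int:
    """Bijectively encode a finite stream of binary digits as a natural number."""
    # "" -> 0, "0" -> 1, "1" -> 2, "00" -> 3, "01" -> 4, ...
    return int("1" + bits, 2) - 1

def nat_to_stream(n: int) -> str:
    """Inverse: recover the unique digit stream encoded by n."""
    return bin(n + 1)[3:]  # strip the "0b1" prefix

# The round trip is the identity on any finite stream of digits:
assert all(nat_to_stream(stream_to_nat(s)) == s for s in ["", "0", "1", "01", "1101"])
```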

The von Neumann universe of sets, V, is generated out of a fundamental operation – taking the powerset – applied recursively to the empty set. It has its genesis in the operation which takes 0, or the empty set {}, to 1, or the singleton set containing the empty set, {{}}, but what flourishes out of this genesis cannot in general be reduced to the universe of sequences of 0s and 1s. The von Neumann universe of sets is not coextensive with D but immeasurably exceeds it, containing sets that cannot be named or generated by any digital procedure whatsoever. V is in fact too large to be a set, being rather a proper class of sets.
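The genesis described here is the standard cumulative hierarchy, which can be written out explicitly (standard set-theoretic notation, not quoted from the text above):

$\begin{array} {lcl} V_0 & = & \emptyset \\ V_{\alpha+1} & = & \mathcal{P}(V_\alpha) \\ V_\lambda & = & \bigcup_{\alpha<\lambda} V_\alpha \quad (\lambda \text{ a limit ordinal}) \\ V & = & \bigcup_{\alpha\in\mathrm{Ord}} V_\alpha \end{array}$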

Suppose we restrict ourselves to the “constructible universe” of sets, L, in which each level of the hierarchy is restricted so that it contains only those sets which are specifiable using the resources of the hierarchy below it. The axiom of constructibility proposes that V=L – that no set exists which is not nameable. This makes for a less extravagantly huge universe; but L is still a proper class. D appears within L as a single set among an immeasurable (if comparatively well-behaved) proliferation of sets.

A set-theoretic ontology such as Badiou’s, which effectively takes the von Neumann universe as its playground, is thus not a digital ontology. Badiou is a “maximalist” when it comes to mathematical ontology: he’s comfortable with the existence of non-constructible sets (hence, he does not accept the “axiom of constructibility”, which proposes that V=L), and the limitations of physical or theoretical computability are without interest for him. Indeed, it has been Badiou’s argument (in Number and Numbers) that the digital or numerical enframing of society and culture can only be thought from the perspective of a mathematical ontology capacious enough to think “Number” over and above the domain of “numbers”. This is precisely the opposite approach to that which seeks refuge from the swarming immensity of mathematical figures in the impenetrable, indivisible density of the analog.

# We Shall Come Rejoicing

Trying to get my head around the interplay between the locality and gluing axioms in a sheaf. In brief, and given a metaphorical association of the “more global” with the “above” and of the “more local” with the “below”:

The locality axiom means that “the below determines the (identity of the) above”: whenever two sections over an open set U are indistinguishable based on their restrictions to the sections over any open cover of U, they are the same. There is no way for data that are more-locally the same to correspond to data that are more-globally different. Our view can be enriched as we move from the global to the local, but not the other way around.

The gluing axiom means that “the above determines the (coherence of the) below”: each compatible-in-pairs way of gluing together the sections over an open cover of U has a representative among the sections over U, of which the sections in the glued assemblage are the restrictions. There is no coherent more-local assemblage that does not have such a more-global representation. The global provides the local with its law, indexing its coherence.
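In standard notation (my gloss on the two axioms, for a presheaf $F$ on a space $X$, with $\{U_i\}$ an open cover of an open set $U$):

Locality: if $s, t \in F(U)$ and $s|_{U_i} = t|_{U_i}$ for every $i$, then $s = t$.

Gluing: if sections $s_i \in F(U_i)$ satisfy $s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j}$ for all $i, j$, then there exists $s \in F(U)$ with $s|_{U_i} = s_i$ for every $i$.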

A theme of postmodernism, and particularly of Lyotard’s treatment of the postmodern, was “incommensurability”. Between distinct local practices – language games – there is no common measure, no universal metalanguage into which, and by means of which, every local language can be translated. The image of thought given by sheaves does not contradict this, but it complicates it. The passage from the local to the global draws out transcendental structure; the passage from the global to the local is one of torsion, enrichment, discrimination. The logics of ascent and descent are linked: we cannot “go down” into the local without spinning a web of coherence along the way, and we cannot “come up” into the global without obeying a strict rule of material entailment.

# Reasons of Space

“The logical space of reasons” is a figure of speech, and a telling one. C20th philosophy is full of phase spaces and logical spaces, rhetorical spaces and epistemic spaces, from Wittgenstein’s logischer Raum and Heidegger’s Spielraum through to the smooth and striated spaces of Deleuze & Guattari. Spatialisation is one way to “go transcendental”: the movement from point to space is always a virtualising movement, a movement in the direction of overarching (rather than “underlying”) logic, higher-order organisation.

Henri Lefebvre’s argument about spatial metaphors is, more or less, a reworking of Merleau-Ponty’s phenomenological centring of spatial intuition on the body: for “the body” as that of an individual phenomenal subject, Lefebvre substitutes the social body. For Lefebvre, the contouring of “mental space”, with all its various metaphorical deployments in philosophy and mathematics, is consequent upon the structuring and restructuring of social and geographical space, in particular the space of the city. One could always see Lefebvre’s own project of tracing shifts in the structure of social space as outlining a kind of space of spaces within which such transformations could be plotted; but that would seem somewhat against the spirit of it.

The critical question here concerns the status of the transcendental, relative to the social (and historical, geographical, physical etc) matrix of which it is the transcendental. A “logical space of reasons” might be one sort of thing in a city where argumentative cliques congregated in coffee shops and bars to debate the latest pamphlets and manifestos, and another in a remote village with a primarily oral culture, where collective decision-making and tribal identity were mediated by the same stock of stories and story-tellers. Or – and the use of “space” as metaphor predisposes us towards this alternative – we might be trying to talk about differently-situated instantiations of the same thing, transcendentally speaking.

The key notion that I draw from Zalamea’s metaphorical deployment of sheaf theory is that there is not one “space of reasons”; that the transcendental is only available on condition that one navigates from space to space, constructing “spaces of spaces” through transcendentalising operations. That is why it makes sense to me to argue that the space of reasons is not a “full body”, and is in fact incompletable. There is not an independent space of types, governing a subordinate domain of values: type and value are inextricably entwined.

# Two rationalist subjectivities

Reason is not the already-accomplished apparatus of rationality, and the space of reasons can never be laid out in its totality under a single gaze. In short, it is not a “full body”, sufficient and all-comprehending.

Neither can a “rationalist project” be oriented by the thought of one day completing rationality, or take the measure of its accomplishments based on their perceived proximity to such a goal. Rationality is locally perfectible – there exist problems to which there are solutions, and even classes of problems to which there are general solutions – but not globally: there is no universal procedure which will render every circumstance as a problem to which there is a solution.

There is no end of phrases, and no end to the task of linking phrases together.

The obscure subject of a rationalist politics will be that which, in the name of a “full body” of accomplished (or to-be-accomplished) rationality, calls for everything to be restored to order. Society organised according to geometric principles! Its speech purged of fallacies, its politics free of antagonism…

We know that such an obscure subject must devote itself increasingly to destruction, culminating in a frenzy against the real. Only the destruction of error can restore the integrity of the full body; and there is no end of error.

The faithful subject of a rationalist politics will be that which proceeds by proofs, which is to say by logical invention. To trust in reason is to trust in a generic capacity, without any guarantee drawn from the particularity of this or that person or community. It is to trust in the next step of the proof, without the certain knowledge of the world’s approval (reason can still scandalise the world).

The obscure subject has no need of fidelity, since there is literally nothing left for it to prove. All that remains is the identification – and consignment to perdition – of the infidel.

It is by no means the rationalist orientation in politics that has generated the most ferociously indiscriminate instances of this subjective figure.

# “With A Single Bound, He Was Free”

A trio of questions from Z:

It is not immediately clear that the rationalism you describe is conscious of its historicity, of the way particular ‘rational’ subjects and knowledge practices are historically constituted — and therefore it appears to be unconscious of its limitations. The “rhetoric of transcendence” further suggests this inattention. Again, the question here is not one of intellectual imperialism, but about how, whether, and with what limitations you can know. Is a rationality that transcends the standpoints of particular rational subjects possible? Can it be practiced by these subjects? Are the tools being relied upon, whether maths or something else, capable of, or sufficient for, producing knowledge that transcends given subjects’ limitations?

Yes, yes and yes, otherwise we might as well pack up and go home. But the trap’s in the word “transcendence”, which implies a magic trick that one could always remain unpersuaded had actually been performed. Actually, the notion that “the standpoints of particular…subjects” constrain rationality in such a way that it must somehow escape them in order to function as the rationality it thinks it is, is fatally question-begging. Such “transcendence” is in fact an everyday occurrence, banal and unmagical: it takes place every time you or I take up an argumentative form and commit ourselves to reasoning consequentially according to its rubric.

There is no fact of the matter about my standpoint that can have any bearing whatsoever on the validity of a mathematical proof: if I am able to know that the proof has been performed successfully, it is because I am able to follow it through. If I am able to detect an error, it is because in the process of following the proof through I have stumbled upon an invalidating condition (usually a contradiction of some kind). Formalisms enable us to get to places that are not prescribed or localised by our immediate situations as users-of-form. That is, in fact, precisely what we use them for.

Most of the things we know, we don’t know in quite that kind of way of course. Much of the time, what we know or think we know is stabilised as knowledge against a backdrop of assumptions and heuristics supplied by our “standpoints”, which is one of the reasons why we misunderstand each other so often and so deeply. So I’m not suggesting that we take proof-following reasoning as a model for reasoning in general, or that knowers in general are not situated and not simultaneously enabled and constrained by situational constraints and affordances. What I do want to argue is that our image of a knower’s “situation” as wholly-localised and wholly-limiting is false: we are in fact situated within view, and within reach, of a rich variety of navigational affordances which enable us to reason from context to context. Reason is not an instantaneous ascent into the empyrean heights, from whence the whole terrain is visible at once: it involves traversals, translations, the construction of linkages from context to context, and whereabouts you start on the map is often significant. The crux I think is that it’s significant but not wholly determining: we don’t have to have perfect, godlike freedom in order to have some degrees of freedom.

# Rationalism in the present

The label “rationalism” has already a somewhat anachronistic aura about it, as if it named something that had no proper place in the present. We have been (or, plausibly, “have never been”) rationalists; but who could be such a thing now? Both the rationalism of the past and the rationalism of the future have a phantasmal quality; it doesn’t seem unreasonable to many people to treat them purely as objects of fantasy, and to focus their critique, such as it is, at the level of libidinal investment. What do these strange people want from rationality? How do these wants relate to the usual generators of desire – anxiety about social position, for example? Why the embattled posture, the rhetoric of transcendence?

Answers to these questions are not difficult to produce – in a sense they’re encoded into the questions themselves – and so the desire-named-rationalism can without much effort be rendered transparent and intelligible. What the would-be rationalist really wants – we are immediately sure of it – is to recover a (fantasised-as-) lost position of mastery, no doubt imbricated with the self-image of the colonial slaveowner; they feel threatened by women and queers and people of colour, whose political demands they wish to subordinate to their own privileged sense of what would be “reasonable”; and so on. Inasmuch as all of this registers only at the level of unconscious fantasy, they are (for now) at least one step away from the out-and-out racists and sexists and reactionaries. If only they could be brought to acknowledge the unsavory unconscious content of all their high-minded talk, they might yet be saved.

Now, this hermeneutic has its own self-sufficient logic: it supplies to itself guarantees of its own correctness. It does not have to reckon with rationalism as a concrete position, taken in the here-and-now, because its founding gesture is one of incredulity that such a position could be held in earnest, that it might have any ramifications beyond the fugitive gratification it offers to a handful of hapless nerds. You cannot be serious. It will not, for example, distinguish between the doing of mathematics, an activity which has real ramifications inasmuch as one thing really does lead to another, and the performance of mathiness, the brandishing of the matheme as a totem of sophistication (or abstract fedora). In short, the source of its power (as a derailer of argument) lies in its capacity for inattention: since I already “know” that the object of your attention is a fantasy with no real purchase on the present, I am authorised to focus my attention on your attention, rather than upon the thing attended-to.

It’s in the specific polemical context in which proponents of rationalism encounter this hermeneutic – and while that is often a very narrow and specialised context indeed, it is nevertheless legitimately of concern to us – that we find ourselves both at bay, and empowered by concrete demonstrations of the viability of rationalism in the present. The terrain under dispute is not, or not immediately, that of the concrete conditions of everyday life. What we’re trying to do, ultimately, is strengthen the hand of a certain kind of argument, in the hope of bringing closer some of the goods that this kind of argument is – we believe – uniquely able to envisage. It’s all pretty meta. But we do think it’s important – or we wouldn’t bother – and I for one do find it galling when people whose reaction to the accelerationist manifesto was to describe its program as inextricably colonialist, then describe the accelerationists’ sense of being put somewhat on the back foot as histrionic.

A few words are in order about the use made of mathematics. I don’t believe, and don’t believe that anyone else believes, that a sound knowledge of category theory is necessary for salvation. We’re not trying to become Pythagorean sages here. What I think has become apparent during the course of the HKW summer school is that the current rationalist use of “higher” mathematics is partly revisionary and partly metaphorical: it’s about taking apart some old and creaky logico-mathematico-ideological constructions, which had trapped us in a false image of thought, and provoking new images of thought by giving a motivated and metaphorically suggestive account of the technical machinery used to do so. Some of the work involved in doing this is very technical, and requires those performing it to learn and practice some real and quite difficult mathematics. But the ultimate purpose is not to become surpassingly good at maths, but to get away from an inadequate sense of what “rationality” can mean, so that we are not presented with a bogus choice between (for example) first-order predicate logic on the one hand, and everything that isn’t first-order predicate logic on the other. Rationalism in the present moment means using whatever tools are available to reflect on rationality and extend our sense of what it is capable of. It turns out that fancy mathematics is quite indispensable to this endeavour, but we do not hold it to be synonymous with thinking itself. In fact, those of us who are good Badiousians will be well-accustomed to the vertiginous transit between mathematics and poetry:

Someone saw that very clearly, my colleague, the French analytic philosopher Jacques Bouveresse, from the Collège de France. In a recent book in which he paid me the honor of speaking of me, he compared me to a five-footed rabbit and says in substance: “This five-footed rabbit that Alain Badiou is runs at top speed in the direction of mathematical formalism, and then, all of a sudden, taking an incomprehensible turn, he goes back on his steps and runs at the same speed to throw himself into literature.” Well, yes, that’s how with a father and a mother so well distributed, one turns into a rabbit.

The good rationalist, I submit, will be a five-footed rabbit, composing a living present out of the energetic, irreconcilable distribution of antecedents.

# An emerging orientation

Why am I so excited about the HKW Summer School? Because it represents an attempt to take some cultural initiative: this is “us” showing what we’ve got and what we can do with it, and showing-by-doing that what can be done in this way is actually worth doing.

I don’t expect everyone to be convinced by such a demonstration – in fact, I expect quite a few people to be dismayed about it, to feel that this is an upstart, renegade movement with distinctly not-for-People-Like-Us values and practices (maths! logics! don’t we know Lawvere* was a worse fascist than Heidegger?). It’s likely that not a few leftish PLUs will be rocking up any moment now to tell us all to curb our enthusiasm. But a glance over the history of Marxist thought will show that there have been plenty of times and places in which the initiative has indeed been held by rationalists – albeit often by warring rationalists, who disagreed ferociously with each other about how a rational politics was to be construed and practised. It’s not at all clear that the present moment, which places such overriding importance on affective tone, is not in fact the anomaly. That’s not to say that we should ditch everything that has declared itself over the past decade – on the contrary, it represents a vast, complex, necessary and unfinished project to which we should aim to contribute meaningfully. But we can only do so by approaching that project from a perspective which it does not encompass, and is hugely unwilling – and perhaps unable – to recognise as valid. To do so requires confidence, of a kind that those who are already confident in their moral standing will find unwarranted and overweening. We are going to be talked down to a lot; we are going to be called names; we are going to have to develop strong memetic defenses against the leftish words-of-power that grant the wielder an instant power of veto over unwelcome ideas. We have a lot to prove. Calculemus!

* a fairly hardcore Maoist, as it happens.

# Sheaves for n00bs

What itinerary would a gentle introduction to sheaves have to take? I would suggest the following:

• A basic tour of the category Set, introducing objects, arrows, arrow composition, unique arrows and limits. (OK, that’s actually quite a lot to start with).
• Introduction to bundles and sections, with a nicely-motivated example.
• Enough topology to know what open sets are and what a continuous map is, topologically speaking.
• Now we can talk about product spaces and fiber bundles.
• Now we can talk about the sheaf of sections on a fiber bundle.
• Now we back up and talk about order structures – posets, lattices, Heyting algebras, and their relationship to the lattice of open sets in a topological space. We note in passing that a poset can be seen as a kind of category.
• Functors, covariant and contravariant.
• That’s probably enough to get us to a categorial description of first presheaves (contravariant functor from a poset category to e.g. Set) and sheaves (presheaves plus gluing axiom, etc). Show how this captures the fundamental characteristics of the sheaf of sections we met earlier.
• Then to applications outside of the sheaf of sections; sheaf homomorphisms and sheaf categories; applications in logic and so on. This is actually where my understanding of the topic falls off the edge of the cliff, but I think that rehearsing all of the material up to this point might help to make some of it more accessible.

Anything really essential that I’ve missed? Anything I’ve included that’s actually not that important?

# What is the ontology of code?

If, as is sometimes said, software is eating the world, absorbing all of the contents of our lives in a new digital enframing, then it is important to know what the logic of the software-digested world might be – particularly if we wish to contest that enframing, to try to wriggle our way out of the belly of the whale. Is it perhaps object-oriented? The short answer is “no”, and the longer answer is that the ontology of software, while it certainly contains and produces units and “unit operations” (to borrow a phrase of Ian Bogost’s), has a far more complex topology than the “object” metaphor suggests. One important thing that practised software developers mostly understand in a way that non-developers mostly don’t is the importance of scope; and a scope is not an object so much as a focalisation.

The logic of scope is succinctly captured by the untyped lambda calculus, which is one of the ways in which people who really think about computation think about computation. Here’s a simple example. Suppose, to begin with, we have a function that takes a value x, and returns x. We write this as a term in the lambda calculus as follows:

$\lambda x.x$

The $\lambda$ symbol means: “bind your input to the variable named on the left-hand side of the dot, and return the value of the term on the right-hand side of the dot”. So the above expression binds its input to the variable named x, and returns the value of the term “$x$”. As it happens, the value of the term “$x$” is simply the value bound to the variable named x in the context in which the term is being evaluated. So, the above “lambda expression” creates a context in which the variable named x is bound to its input, and evaluates “x” in that context.

We can “apply” this function – that is, give it an input it can consume – just by placing that input to the right of it, like so:

$(\lambda x.x)\;5$

This, unsurprisingly, evaluates to 5.
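Python’s lambda syntax tracks the calculus closely, so the example can be checked directly (the Python here is my own illustration, not part of the calculus itself):

```python
identity = lambda x: x   # λx.x — bind the input to x, return the value of "x"
print(identity(5))       # prints 5
```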

Now let’s try a more complex function, one which adds two numbers together:

$\lambda x.\lambda y.x+y$

There are two lambda expressions here, which we’ll call the “outer” and “inner” expressions. The outer expression means: bind your input to the variable named x, and return the value of the term “$\lambda y.x+y$”, which is the inner expression. The inner expression then means: bind your input to the variable named y, and return the value of the term “$x+y$”.

The important thing to understand here is that the inner expression is evaluated in the context created by the outer expression, a context in which x is bound, and that the right-hand side of the inner expression is evaluated in a context created within this first context – a new context-in-a-context, in which x was already bound, and now y is also bound. Variable bindings that occur in “outer” contexts are said to be visible in “inner” contexts. See what happens if we apply the whole expression to an input:

$(\lambda x.\lambda y.x+y)\;5 = \lambda y.5+y$

We get back a new lambda expression, with 5 substituted for x. This expression will add 5 to any number supplied to it. So what if we want to supply both inputs, and get $x+y$?

$\begin{array} {lcl}((\lambda x.\lambda y.x+y)\;5)\;4 & = & (\lambda y.5+y)\;4 \\ & = & 5 + 4 \\ & = & 9\end{array}$
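The same curried shape can be sketched in Python (again my own sketch, not a quotation): the outer lambda returns the inner one, which closes over the binding of x.

```python
add = lambda x: lambda y: x + y   # λx.λy.x+y
add5 = add(5)                     # (λx.λy.x+y) 5  reduces to  λy.5+y
print(add5(4))                    # prints 9
print(add(5)(4))                  # supplying both inputs at once: prints 9
```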

Conventional abbreviations in lambda calculus notation allow us to collapse the nested lambda expressions and drop most of the parentheses, so that the above can be more simply written as:

$(\lambda xy.x+y)\;5\;4 = 9$

There is not much more to the (untyped) lambda calculus than this. It is Turing-complete, which means that any computable function can be written as a term in it. It contains no objects, no structured data-types, no operations that change the state of anything, and hence no implicit model of the world as made up of discrete pieces that respond as encapsulated blobs of state and behaviour. But it captures something significant about the character of computation, which is that binding is a fundamental operation. A context is a focus of computation in which names and values are bound together; and contexts beget contexts, closer and richer focalisations.

So far we have considered only the hierarchical nesting of contexts, which doesn’t really make for a very exciting or interesting topology. Another fundamental operation, however, is the treatment of an expression bound in one context as a value to be used in another. Contexts migrate. Consider this lambda expression:

$\lambda f.f\;4$

The term on the right-hand side is an application, which means that the value bound to f must itself be a lambda expression. Let’s apply it to a suitable expression:

$\begin{array} {lcl}(\lambda f.f\;4) (\lambda x.x*x) & = & (\lambda x.x*x)\;4 \\ & = & 4*4 \\ & = & 16\end{array}$

We “pass” a function that multiplies a number by itself, to a function that applies the function given to it to the number 4, and get 16. Now let’s make the input to our first function be a function constructed by another function that binds one of its variables and leaves the other “free” – a “closure” that “closes over” its context, whilst remaining partially open to new input:

$\begin{array} {lcl}(\lambda f.f\;4) ((\lambda x.\lambda y.x*y)\;5) & = & (\lambda f.f\;4) (\lambda y.5*y) \\ & = & (\lambda y.5*y) 4 \\ & = & 5* 4 \\ & = & 20\end{array}$

If you can follow that, you already understand lexical scoping and closures better than some Java programmers.
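Both higher-order moves can be rehearsed in Python (once more, my own sketch): a function passed as a value, and a closure that has bound one of its variables while leaving the other open.

```python
apply_to_4 = lambda f: f(4)       # λf.f 4
square = lambda x: x * x          # λx.x*x
print(apply_to_4(square))         # (λf.f 4)(λx.x*x) — prints 16

mul = lambda x: lambda y: x * y   # λx.λy.x*y
times5 = mul(5)                   # a closure: x is bound to 5, y remains open
print(apply_to_4(times5))         # (λf.f 4)(λy.5*y) — prints 20
```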

My point here is not that the untyped lambda calculus expresses the One True Ontology of computation – it is equivalent to Turing’s machine-model, but not in any sense more fundamental than it. “Functional” programming, a style which favours closures and pure functions over objects and mutable state, is currently enjoying a resurgence, and even Java programmers have “lambdas” in their language nowadays; but that’s not entirely the point either. The point I want to make is that even the most object-y Object-Oriented Programming involves a lot of binding (of constructor arguments to private fields, for example), and a lot of shunting of values in and out of different scopes. Often the major (and most tedious) effort involved in making a change to a complex system is in “plumbing” values that are known to one scope through to another scope, passing them up and down the call stack until they reach the place where they’re needed. Complex pieces of software infrastructure exist whose entire purpose is to enable things operating in different contexts to share information with each other without having to become tangled up together into the same context. One of the most important questions a programmer has to know how to find the answer to when looking at any part of a program is, “what can I see from here?” (and: “what can see me?”).

Any purported ontology of computation that doesn’t treat as fundamental the fact that objects (or data of any kind) don’t just float around in a big flat undifferentiated space, but are always placed in a complex landscape of interleaving, interpenetrating scopes, is missing an entire dimension of structure that is, I would argue, at least as important as the structure expressed through classes or APIs. There is a perspective from which an object is just a big heavy bundle of closures, a monad (in the Leibnizian rather than category-theoretical sense) formed out of folds; and from within that perspective you can see that there exist other things which are not objects at all, or not at all in the same sense. (I know there are languages which model closures as “function objects”, and shame on you).

It doesn’t suit the narrative of a certain attempted politicisation of software, which crudely maps “objects” onto the abstract units specified by the commodity form, to consider how the pattern-thinking of software developers actually works, because that thinking departs very quickly from the “type-of-thing X in the business domain maps to class X in my class hierarchy” model as soon as a system becomes anything other than a glorified inventory. Perhaps my real point is that capitalism isn’t that simple either. If you want a sense of where both capitalism and software are going, you would perhaps do better to start by studying the LMAX Disruptor, or the OCaml of Jane Street Capital.