Category Archives: Maths

An emerging orientation

Why am I so excited about the HKW Summer School? Because it represents an attempt to take some cultural initiative: this is “us” showing what we’ve got and what we can do with it, and showing-by-doing that what can be done in this way is actually worth doing.

I don’t expect everyone to be convinced by such a demonstration – in fact, I expect quite a few people to be dismayed about it, to feel that this is an upstart, renegade movement with distinctly not-for-People-Like-Us values and practices (maths! logics! don’t we know Lawvere* was a worse fascist than Heidegger?). It’s likely that not a few leftish PLUs will be rocking up any moment now to tell us all to curb our enthusiasm. But a glance over the history of Marxist thought will show that there have been plenty of times and places in which the initiative has indeed been held by rationalists – albeit often by warring rationalists, who disagreed ferociously with each other about how a rational politics was to be construed and practised. It’s not at all clear that the present moment, which places such overriding importance on affective tone, is not in fact the anomaly. That’s not to say that we should ditch everything that has declared itself over the past decade – on the contrary, it represents a vast, complex, necessary and unfinished project to which we should aim to contribute meaningfully. But we can only do so by approaching that project from a perspective which it does not encompass, and is hugely unwilling – and perhaps unable – to recognise as valid. To do so requires confidence, of a kind that those who are already confident in their moral standing will find unwarranted and overweening. We are going to be talked down to a lot; we are going to be called names; we are going to have to develop strong memetic defences against the leftish words-of-power that grant the wielder an instant power of veto over unwelcome ideas. We have a lot to prove. Calculemus!

  • a fairly hardcore Maoist, as it happens.

Sheaves for n00bs

What itinerary would a gentle introduction to sheaves have to take? I would suggest the following:

  • A basic tour of the category Set, introducing objects, arrows, arrow composition, unique arrows and limits. (OK, that’s actually quite a lot to start with).
  • Introduction to bundles and sections, with a nicely-motivated example.
  • Enough topology to know what open sets are and what a continuous map is, topologically speaking.
  • Now we can talk about product spaces and fiber bundles.
  • Now we can talk about the sheaf of sections on a fiber bundle.
  • Now we back up and talk about order structures – posets, lattices, Heyting algebras, and their relationship to the lattice of open sets in a topological space. We note in passing that a poset can be seen as a kind of category.
  • Functors, covariant and contravariant.
  • That’s probably enough to get us to a categorial description, first of presheaves (a contravariant functor from a poset category to e.g. Set) and then of sheaves (a presheaf plus the gluing axiom, etc.). Show how this captures the fundamental characteristics of the sheaf of sections we met earlier.
  • Then to applications outside of the sheaf of sections; sheaf homomorphisms and sheaf categories; applications in logic and so on. This is actually where my understanding of the topic falls off the edge of the cliff, but I think that rehearsing all of the material up to this point might help to make some of it more accessible.
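As a compressed sketch of the destination (these are the standard definitions, stated here only to show how little machinery the endpoint actually needs): a presheaf F on the lattice of open sets of a space X assigns a set F(U) to each open set U, and a restriction map to each inclusion U \subseteq V,

\mathrm{res}_{V,U} : F(V) \to F(U) \quad \text{whenever } U \subseteq V

subject to functoriality:

\mathrm{res}_{U,U} = \mathrm{id}_{F(U)}, \qquad \mathrm{res}_{V,U} \circ \mathrm{res}_{W,V} = \mathrm{res}_{W,U} \quad \text{for } U \subseteq V \subseteq W

The gluing axiom then upgrades a presheaf to a sheaf: compatible sections over the members of an open cover of U patch together to a unique section in F(U).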

Anything really essential that I’ve missed? Anything I’ve included that’s actually not that important?

What is the ontology of code?

If, as is sometimes said, software is eating the world, absorbing all of the contents of our lives in a new digital enframing, then it is important to know what the logic of the software-digested world might be – particularly if we wish to contest that enframing, to try to wriggle our way out of the belly of the whale. Is it perhaps object-oriented? The short answer is “no”, and the longer answer is that the ontology of software, while it certainly contains and produces units and “unit operations” (to borrow a phrase of Ian Bogost’s), has a far more complex topology than the “object” metaphor suggests. One important thing that practised software developers mostly understand in a way that non-developers mostly don’t is the importance of scope; and a scope is not an object so much as a focalisation.

[Image: cover of “Object-Oriented Modeling and Design with UML”]
No.

The logic of scope is succinctly captured by the untyped lambda calculus, which is one of the ways in which people who really think about computation think about computation. Here’s a simple example. Suppose, to begin with, we have a function that takes a value x, and returns x. We write this as a term in the lambda calculus as follows:

\lambda x.x

The \lambda symbol means: “bind your input to the variable named on the left-hand side of the dot, and return the value of the term on the right-hand side of the dot”. So the above expression binds its input to the variable named x, and returns the value of the term “x”. As it happens, the value of the term “x” is simply the value bound to the variable named x in the context in which the term is being evaluated. So, the above “lambda expression” creates a context in which the variable named x is bound to its input, and evaluates “x” in that context.

We can “apply” this function – that is, give it an input it can consume – just by placing that input to the right of it, like so:

(\lambda x.x)\;5

This, unsurprisingly, evaluates to 5.
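For readers who would rather poke at this than read notation, the identity function and its application translate directly into any language with first-class functions. A minimal sketch in Python (the name identity is mine, purely illustrative):

```python
# The identity function, written in lambda calculus as \lambda x.x:
identity = lambda x: x

# Application, (\lambda x.x) 5, supplies an input for it to consume:
print(identity(5))  # prints 5
```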

Now let’s try a more complex function, one which adds two numbers together:

\lambda x.\lambda y.x+y

There are two lambda expressions here, which we’ll call the “outer” and “inner” expressions. The outer expression means: bind your input to the variable named x, and return the value of the term “\lambda y.x+y”, which is the inner expression. The inner expression then means: bind your input to the variable named y, and return the value of the term “x+y”.

The important thing to understand here is that the inner expression is evaluated in the context created by the outer expression, a context in which x is bound, and that the right-hand side of the inner expression is evaluated in a context created within this first context – a new context-in-a-context, in which x was already bound, and now y is also bound. Variable bindings that occur in “outer” contexts are said to be visible in “inner” contexts. See what happens if we apply the whole expression to an input:

(\lambda x.\lambda y.x+y)\;5 = \lambda y.5+y

We get back a new lambda expression, with 5 substituted for x. This expression will add 5 to any number supplied to it. So what if we want to supply both inputs, and get x+y?

\begin{array} {lcl}((\lambda x.\lambda y.x+y)\;5)\;4 & = & (\lambda y.5+y)\;4 \\ & = & 5 + 4 \\ & = & 9\end{array}

Some simplification rules in the lambda calculus notation allow us to do away with both the nested parentheses and the nested lambda expressions, so that the above can be more simply written as:

(\lambda xy.x+y)\;5\;4 = 9
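The curried addition above also translates directly into Python; a sketch, with the names add and add5 chosen purely for illustration:

```python
# Curried addition, \lambda x.\lambda y.x+y: the outer lambda binds x
# and returns the inner lambda, which binds y and evaluates x + y.
add = lambda x: lambda y: x + y

# Partial application, (\lambda x.\lambda y.x+y) 5 = \lambda y.5+y:
add5 = add(5)
print(add5(4))    # prints 9

# Supplying both inputs, ((\lambda x.\lambda y.x+y) 5) 4:
print(add(5)(4))  # prints 9
```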

There is not much more to the (untyped) lambda calculus than this. It is Turing-complete, which means that any computable function can be written as a term in it. It contains no objects, no structured data-types, no operations that change the state of anything, and hence no implicit model of the world as made up of discrete pieces that respond as encapsulated blobs of state and behaviour. But it captures something significant about the character of computation, which is that binding is a fundamental operation. A context is a focus of computation in which names and values are bound together; and contexts beget contexts, closer and richer focalisations.

So far we have considered only the hierarchical nesting of contexts, which doesn’t really make for a very exciting or interesting topology. Another fundamental operation, however, is the treatment of an expression bound in one context as a value to be used in another. Contexts migrate. Consider this lambda expression:

\lambda f.f\;4

The term on the right-hand side is an application, which means that the value bound to f must itself be a lambda expression. Let’s apply it to a suitable expression:

\begin{array} {lcl}(\lambda f.f\;4) (\lambda x.x*x) & = & (\lambda x.x*x)\;4 \\ & = & 4*4 \\ & = & 16\end{array}

We “pass” a function that multiplies a number by itself, to a function that applies the function given to it to the number 4, and get 16. Now let’s make the input to our first function be a function constructed by another function, that binds one of its variables and leaves the other “free” – a “closure” that “closes over” its context, whilst remaining partially open to new input:

\begin{array} {lcl}(\lambda f.f\;4) ((\lambda x.\lambda y.x*y)\;5) & = & (\lambda f.f\;4) (\lambda y.5*y) \\ & = & (\lambda y.5*y)\;4 \\ & = & 5*4 \\ & = & 20\end{array}

If you can follow that, you already understand lexical scoping and closures better than some Java programmers.
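The two evaluations above can be replayed in Python as well; the names apply_to_4 and times are mine, again purely for illustration:

```python
# \lambda f.f 4 -- a function that applies its argument to the number 4:
apply_to_4 = lambda f: f(4)

# Pass it \lambda x.x*x, a function that multiplies a number by itself:
print(apply_to_4(lambda x: x * x))  # prints 16

# A closure: (\lambda x.\lambda y.x*y) 5 binds x to 5 and remains
# partially open to new input in y; apply_to_4 then supplies the 4.
times = lambda x: lambda y: x * y
print(apply_to_4(times(5)))  # prints 20
```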

My point here is not that the untyped lambda calculus expresses the One True Ontology of computation – it is equivalent to Turing’s machine-model, but not in any sense more fundamental than it. “Functional” programming, a style which favours closures and pure functions over objects and mutable state, is currently enjoying a resurgence, and even Java programmers have “lambdas” in their language nowadays; but that’s not entirely the point either. The point I want to make is that even the most object-y Object-Oriented Programming involves a lot of binding (of constructor arguments to private fields, for example), and a lot of shunting of values in and out of different scopes. Often the major (and most tedious) effort involved in making a change to a complex system is in “plumbing” values that are known to one scope through to another scope, passing them up and down the call stack until they reach the place where they’re needed. Complex pieces of software infrastructure exist whose entire purpose is to enable things operating in different contexts to share information with each other without having to become tangled up together into the same context. One of the most important questions a programmer has to know how to find the answer to when looking at any part of a program is, “what can I see from here?” (and: “what can see me?”).

Any purported ontology of computation that doesn’t treat as fundamental the fact that objects (or data of any kind) don’t just float around in a big flat undifferentiated space, but are always placed in a complex landscape of interleaving, interpenetrating scopes, is missing an entire dimension of structure that is, I would argue, at least as important as the structure expressed through classes or APIs. There is a perspective from which an object is just a big heavy bundle of closures, a monad (in the Leibnizian rather than category-theoretical sense) formed out of folds; and from within that perspective you can see that there exist other things which are not objects at all, or not at all in the same sense. (I know there are languages which model closures as “function objects”, and shame on you).

It doesn’t suit the narrative of a certain attempted politicisation of software, which crudely maps “objects” onto the abstract units specified by the commodity form, to consider how the pattern-thinking of software developers actually works, because that thinking departs very quickly from the “type-of-thing X in the business domain maps to class X in my class hierarchy” model as soon as a system becomes anything other than a glorified inventory. Perhaps my real point is that capitalism isn’t that simple either. If you want a sense of where both capitalism and software are going, you would perhaps do better to start by studying the LMAX Disruptor, or the OCaml of Jane Street Capital.