When it comes to being a whizz at programming, Moldbug fails the basic comprehension test:

“Even forgetting the macros, it’s pretty easy to see why you don’t need dynamic higher-order programming. For example, a very common operation on functions is to compose them – you have f(x) and g(x), you make h(x), defined as f(g(x)). In the right formal model, a compose operator is really no different from addition. And once you have first-class functions – which are not absolutely necessary, but certainly useful – it seems like a trivial extension.

But is it what you, the programmer, want to do? Actually, no. It is simply bad programming style. Its only effect is to make your program less comprehensible.

Because in a language with first-class functions but without dynamic higher-order programming, all you need to do is build a data structure which is the ordered pair (f(x) g(x)), and define h(x) as an operation on this structure. This may be slightly more cumbersome for the programmer. At least, if one line of code counts as ‘slightly more cumbersome.'”

No. That is not how function composition works.

I will now briefly try to describe why this is wrong, why it is *elementarily* wrong, and why Moldbug is both a n00b and a scrub.

Let’s say we have two functions, f : List a -> Int and g : Int -> Int. f takes a list and returns its length; g doubles the integer passed to it. We wish to compose both functions to make a new function, g . f = h : List a -> Int, which takes a list and returns double its length.
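Spelled out as a compilable sketch, with f and g given the concrete definitions the text describes:

```haskell
-- f takes a list and returns its length
f :: [a] -> Int
f = length

-- g doubles the integer passed to it
g :: Int -> Int
g n = n * 2

-- h = g . f: takes a list and returns double its length
h :: [a] -> Int
h = g . f
```

So `h [1, 2, 3]` is `g (f [1, 2, 3])` = `g 3` = 6.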

Suppose we try to do as Mr Yarvin proposes, and create an ordered pair (f(x), g(x)). f(x) will take a list, x, and return its length. g(x) will take an integer, x, and return x*2. We cannot pass the same x to both f and g, since no x can be both a list and an integer (not even in Python!). There is therefore no way to construct this ordered pair.

In any case, the function h does not take a pair of inputs. It takes a single input, a list, and returns a single output, an integer.

Function composition works by *chaining* functions together, so that the output of one becomes the input of another. If we wanted to apply a function h to the outputs of both f(x) and g(x), the “composition” operator would have a completely different signature, viz:

compose’ : (a -> b) -> (a -> c) -> (b -> c -> d) -> a -> d

compose’ f g h x = h (f x) (g x)

Which is also, nota bene, a damn sight easier to write and use if you’re using a sensible language that handles partial application properly. And nobody, literally nobody, calls it “dynamic higher-order programming”. Not least because it’s all worked out at compile-time anyway.
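For the record, that apply-both-then-combine operator is perfectly definable and usable; here's a minimal sketch (the sum/length example is mine, not anything in the original):

```haskell
-- apply f and g to the same input, then combine the two results with h
compose' :: (a -> b) -> (a -> c) -> (b -> c -> d) -> a -> d
compose' f g h x = h (f x) (g x)

-- e.g. an integer average: sum and length of the same list, combined with div
average :: [Int] -> Int
average = compose' sum length div
```

Partial application is doing the work in `average`: three of compose's four arguments are supplied, and the list slots in later.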

You do realize that he’s defining ‘x’ in each of the three cases f(x), g(x), h(x) differently? And you do understand that the name of a variable INSIDE a function need not be the same identifier as outside the function?

Curtis is using ‘x’ differently inside each function definition scope.

E.g.

f(x) { x + 1}

g(x) {x * 2}

h(x) {f(g(x)) } // specific h() curry

f(12);

or

f(x) { x + 1}

g(x) { x * 2}

h(x) { h[0](h[1](h[2])) } // general purpose h() currying tool

h([f(), g(), 12]);

My coding preferences are closer to yours than they are to Curtis’s, but you’ve gotten pretty ranty, with name-calling, over something that boils down to YOUR misunderstanding of what he’s saying.

No, I think that actually makes *less* sense. What exactly is being “curried” in your example?

In point-free style, thanks to partial application, you can write

h = f . g

as a shorthand for

h x = f(g(x))

The composition operator, (.), can be defined as follows:

(.) :: (b -> c) -> (a -> b) -> (a -> c)

(.) f g = \x -> f (g (x))

No currying there (except insofar as everything in Haskell just works that way). Now, what *you* have is something like this:

compose :: ((b -> c), (a -> b), a) -> c

compose (f, g, x) = f(g(x))
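That uncurried version does at least compile; here's the same thing as a runnable sketch (the string example is my own):

```haskell
-- uncurried "composition": a single tuple of function, function, and argument,
-- consumed all at once -- no partial application possible
compose :: (b -> c, a -> b, a) -> c
compose (f, g, x) = f (g x)
```

So `compose ((* 2), length, "abc")` gives 6, but you can never hand someone the composed function by itself; the argument has to ride along in the tuple.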

But why on earth you think that’s a) at all worth doing, or b) what Moldybug can possibly have meant, is completely beyond me.

Here’s a less confused example. Suppose we have a function

plus :: Int -> Int -> Int

plus a b = a + b

and we want to partially apply it to get a new function

plus3 = plus 3

Instead of using partial application as above, we could build a structure containing the function and its first parameter:

curried = (plus, 3)

and a thing that uses that structure:

doCurried :: ((a -> b -> c), a) -> b -> c

doCurried (f, a) b = f a b

such that

`doCurried (plus, 3) 4 = 7`

That’s all well and good, but it’s hardly an improvement.
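To check the arithmetic, here is the whole plus/doCurried example as one compilable sketch:

```haskell
plus :: Int -> Int -> Int
plus a b = a + b

-- ordinary partial application
plus3 :: Int -> Int
plus3 = plus 3

-- the tuple-based alternative: carry (function, first argument) explicitly
doCurried :: (a -> b -> c, a) -> b -> c
doCurried (f, a) b = f a b
```

Both `plus3 4` and `doCurried (plus, 3) 4` give 7; the tuple version just makes you spell out what partial application already does.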

Perhaps what Moldybug was getting at was this:

compose :: ((b -> c), (a -> b)) -> a -> c

compose (f, g) x = f(g(x))

such that

compose (times2, length) [1, 2, 3] = 6

just as

(times2 . length) [1, 2, 3] = 6

in which case, he should have written “the ordered pair (f, g)” rather than “the ordered pair (f(x), g(x))”. They have different meanings, you see?
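That charitable reading does type-check. A sketch, with times2 defined as doubling (my assumption, since the text never defines it):

```haskell
times2 :: Int -> Int
times2 = (* 2)

-- pair-based composition: the pair is (f, g), with x supplied separately
compose :: (b -> c, a -> b) -> a -> c
compose (f, g) x = f (g x)
```

Both `compose (times2, length) [1, 2, 3]` and `(times2 . length) [1, 2, 3]` give 6; the only difference is whether the two functions arrive as a pair or one at a time.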

What Moldybug seems to be objecting to, fundamentally, is closures; he’d rather see you explicitly pass around tuples containing all the pieces needed to do some computation than close over values currently in scope in order to construct a new scope in which some variable names are already bound.

One complaint here is that closures are hard to serialise, and hard to move around from computer to computer. But so are explicit tuples, if you allow them to contain references to mutable values or functions that aren’t defined on the receiving end. Once you’ve solved the move-it-over-there-and-execute-it problem for tuples containing functions plus all of their arguments, you’ve essentially solved it for functions plus everything bound within their scope.
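To make the contrast concrete, here is a sketch of the two styles (all names mine): a closure that captures a value from its enclosing scope, versus an explicit tuple carrying the same pieces:

```haskell
-- closure style: addN captures n in the returned function's environment
addN :: Int -> (Int -> Int)
addN n = \x -> x + n

-- explicit-tuple style: the "environment" is a visible pair you pass around
type AddEnv = (Int -> Int -> Int, Int)

mkAdd :: Int -> AddEnv
mkAdd n = ((+), n)

applyAdd :: AddEnv -> Int -> Int
applyAdd (f, n) x = f n x
```

`addN 3 4` and `applyAdd (mkAdd 3) 4` both give 7, and notice that serialising an AddEnv still means serialising the function in its first slot, which is exactly the unsolved part.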

That is, incidentally, the problem that Nock is meant to solve, by providing a transmissible representation for any algorithmic process plus all of the data it will operate on. It’s a *daft* solution, but there you go.

Having built a higher-order typed functional language since I wrote that, I guess I’m a *little* more pro-FP than I was in 2007. But not all that much. One of our summer interns, a very capable Haskeller, described Hoon as “procedural programming in a functional language,” which is about what I was shooting for.

There are definitely plenty of cases where the full toolbox of higher-order programming is called for, and lower-order solutions become absurdly cumbersome by comparison. These cases are by far the most common when you’re doing interesting things, like writing a compiler. Sadly, most programmers don’t spend most of their time doing interesting things.

Moreover, some people (due to their incredibly 31337 genes) are naturally good at both math and programming. Others (due to their not so incredibly 31337 genes) are only good at programming. Others (the majority of humanity, sadly) are not really capable of either. There are ten times as many people in group C as in group B, perhaps, and ten times as many in group B as in group A.

Think of it as an ADA issue. If you are not in a wheelchair, you don’t understand what three little steps really means. Consider group C as paralyzed, and group B as paraplegic. Using higher-order tools instead of lower-order ones, when lower-order tools would do, is like having two steps instead of a little ramp at the entrance to your restaurant. For people in group A, it makes no real difference at all. The same for group C, because they’re at home eating through a straw. For group B, however, it’s a huge, huge deal and your choice of steps is incredibly insensitive. And that’s why we have an ADA.

Neither group B nor group C is ever going to climb El Capitan. That’s fine. If you want to open a restaurant that people have to rappel into, that’s a very cool idea, but you should be prepared to have only customers who are rock climbers by both birth and training.

Then there is the network problem. “Once you’ve solved the move-it-over-there-and-execute-it problem for tuples containing functions plus all of their arguments” — solve that and you’ve definitely earned a pony! I want my pony! If you could link me to such a pony I’d be most grateful. I suspect it’s rather a hefty pony, however.

You absolutely do not want to send compiled code, whether it’s Nock, assembly, or whatever, over the network. You need to separate out your code and your data, send only the latter, and use OS-level tools to find source code on the far side that can validate and retype your data.

There is a certain amount of hackishness to the whole moldbuggian enterprise. Not that this in any way proves it, but it does seem symptomatic.