
What’s New In Octarine

Octarine is now at version 0.5, and has been enhanced with New Stuff:


Transform records (and anything else you have extractors/lenses for) with a fluent mapping DSL.


When you have a bean, but want a record, and your key names map (or can be mapped) reflectively onto bean property names:
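The underlying move can be sketched in a stand-alone way using `java.beans` (this is an illustration of the idea, not Octarine's actual implementation):

```java
import java.beans.*;
import java.util.*;

// Stand-alone illustration: read a bean's properties reflectively into a
// name -> value map, from which a record can be built wherever the key
// names match the property names. Not Octarine's actual implementation.
final class BeanProperties {
    static Map<String, Object> of(Object bean) {
        try {
            Map<String, Object> properties = new HashMap<>();
            BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
            for (PropertyDescriptor property : info.getPropertyDescriptors()) {
                properties.put(property.getName(), property.getReadMethod().invoke(bean));
            }
            return properties;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}

// An ordinary bean...
class PersonBean {
    public String getName() { return "Arthur"; }
    public int getAge() { return 42; }
}
// ...whose properties come out as a map with entries name=Arthur and age=42.
```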

Reflective JSON serialisation

You lazy so-and-so, you.

Serialisation/deserialisation of maps

Sometimes the “keys” in a JSON object are dynamic values, such as ids, rather than static property names. In that case, you need to be able to serialise/deserialise a map of values rather than a record. The serialisation/deserialisation libraries have been revamped to support this, and a variety of other cases (lists of maps of lists of records, etc.).


An Extractor<T, V> may be seen as a partial function from T to V: it can only “extract” a value of type V from a value of type T if such a value (or the material from which one can be created) is present. In Octarine, an Extractor<T, V> is a cross between a Function<T, Optional<V>> and a Predicate<T>. It has three methods:

  • V extract(T source) – extracts a value of type V directly from the source (or fails with an exception).
  • Optional<V> apply(T source) – returns either a value of type V extracted from the source and wrapped in an Optional, or Optional.empty if no such value is available.
  • boolean test(T source) – returns true if the source contains the kind of value that we want, and false otherwise.

The obvious example is a Record, which might or might not contain a value for a given Key<T>. We have:
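Here is a minimal, self-contained sketch of the idea (Key and Record below are stand-ins, not Octarine's actual classes):

```java
import java.util.*;

// Stand-in sketch: a typed key behaving as an extractor over a record.
final class Record {
    private final Map<Key<?>, Object> values;
    Record(Map<Key<?>, Object> values) { this.values = values; }
    Object get(Key<?> key) { return values.get(key); }
}

final class Key<V> {
    private final String name;
    private Key(String name) { this.name = name; }
    static <V> Key<V> named(String name) { return new Key<>(name); }

    // Optional<V> apply: empty if the record has no value for this key.
    @SuppressWarnings("unchecked")
    Optional<V> apply(Record source) {
        return Optional.ofNullable((V) source.get(this));
    }

    // V extract: the value itself, or an exception if it is absent.
    V extract(Record source) {
        return apply(source).orElseThrow(() -> new NoSuchElementException(name));
    }

    // boolean test: does the record contain a value for this key?
    boolean test(Record source) {
        return apply(source).isPresent();
    }
}
```

So name.extract(record) yields the value directly, name.apply(record) yields an Optional, and name.test(record) answers the question of presence.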

We can enhance extractors with predicates to look for values matching additional criteria besides existence – for example:

This makes them useful when composing tests on a record:

Which in turn makes them useful when filtering a collection of records:
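All three uses can be sketched with stand-in types (the matching method shown here is assumed for illustration, not Octarine's actual API):

```java
import java.util.*;
import java.util.function.*;
import java.util.stream.*;

// Stand-in sketch of an extractor that is also a predicate.
interface Extractor<T, V> extends Predicate<T> {
    Optional<V> apply(T source);

    @Override
    default boolean test(T source) { return apply(source).isPresent(); }

    // Enhance the extractor with a criterion besides mere existence.
    default Extractor<T, V> matching(Predicate<V> criterion) {
        return source -> apply(source).filter(criterion);
    }
}

final class Example {
    public static void main(String[] args) {
        Extractor<Map<String, Object>, Integer> age =
                m -> Optional.ofNullable((Integer) m.get("age"));

        // Enhanced with a criterion besides existence:
        Extractor<Map<String, Object>, Integer> adult = age.matching(a -> a >= 18);

        Map<String, Object> alice = new HashMap<>();
        alice.put("age", 42);
        Map<String, Object> bob = new HashMap<>();
        bob.put("age", 12);

        // Composing tests on a record (an Extractor is a Predicate):
        Predicate<Map<String, Object>> check = adult.and(m -> m.containsKey("age"));
        System.out.println(check.test(alice));  // true

        // Filtering a collection of records:
        long adults = Stream.of(alice, bob).filter(adult).count();
        System.out.println(adults);  // 1
    }
}
```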

Any OptionalLens<T, V> in Octarine is also an Extractor<T, V>, and any plain old Lens<T, V> can be turned into an Extractor<T, V> by calling Lens::asOptional on it.

Octarine vs Rekord: Design Comparison

Rekord is an excellent Java 7 library by my friend and sometime colleague Samir, which overlaps in several respects with my Java 8 library Octarine. The similarities are due both to a common ancestry (we both used to work on a codebase that made extensive use of Nat Pryce’s make-it-easy in testing) and to an ongoing process of looking in on each other’s code from time to time to see if there’s anything worth pinching. More generally, I think Samir and I have been scratching similar itches: we’ve both suffered under the tyrannical reign of Java Beans, and have both wanted to find better ways of dealing with record types in Java.

One major design difference between the two libraries has to do with their approach to type tagging. A Rekord is always a Rekord<T> of some particular type T, and is always constrained to use keys which are connected to that type:
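A sketch of the shape of those declarations (the Key class below is a stand-in; Rekord's real API differs in its details):

```java
// Stand-in for Rekord-style type-tagged keys: the first type parameter ties
// each key to the type of Rekord it may be used with.
final class Key<T, V> {
    final String name;
    private Key(String name) { this.name = name; }
    static <T, V> Key<T, V> named(String name) { return new Key<>(name); }
}

enum Bread { BROWN, WHITE }
enum Filling { CHEESE, HAM }

interface Sandvich {
    Key<Sandvich, Bread> bread = Key.named("bread");
    Key<Sandvich, Filling> filling = Key.named("filling");
    // A Key<Hair, Style> simply would not type-check here.
}
```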

The first type parameter in each of the Key<Sandvich, T> declarations identifies that key as one that can be used when creating Rekords of type Rekord<Sandvich>. This means that common key names that might occur in multiple contexts are kept separate by the type system. This helps you to avoid constructing a Rekord<Sandvich> containing a Key<Hair, Style> field. The Sandvich type acts as a kind of namespace for keys.

In Octarine, a Record is just a Record: it can contain key-value pairs using any keys you like. There is a reason for this. I wanted Octarine to move away altogether from the ORM pattern of mirroring types in a domain (database tables, for example) in the record types that represented values taken from that domain. The reason is that we are often interested in records that span domains; for example, if I join the Person table to the Address table, to get a record that represents a person together with their current address, that record will contain values drawn from the columns of both tables.

With Octarine, it’s quite legitimate to define collections of keys that refer to the columns in different tables, but to compose records that use keys from several such collections. Disambiguation of keys is done by prefixing the key name with the name of the interface it’s defined in, for example:
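For example (again with stand-in types, not Octarine's actual source), two table-like interfaces can both define an "id" key, and a joined record can carry both:

```java
import java.util.*;

// Stand-in: untagged keys, held in per-table interfaces for namespacing.
final class Key<T> {
    final String name;
    Key(String name) { this.name = name; }
}

interface Person {
    Key<Long> id = new Key<>("id");
    Key<String> name = new Key<>("name");
}

interface Address {
    Key<Long> id = new Key<>("id");
    Key<String> city = new Key<>("city");
}

final class Example {
    public static void main(String[] args) {
        // Person.id and Address.id are distinct keys, disambiguated by prefix.
        Map<Key<?>, Object> joined = new HashMap<>();
        joined.put(Person.id, 1L);
        joined.put(Person.name, "Arthur");
        joined.put(Address.id, 7L);
        joined.put(Address.city, "London");
        System.out.println(joined.size());  // 4 -- four distinct keys
    }
}
```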

Where Octarine uses type tags is in validation – given a Schema<T>, you can use it to extract a Valid<T> from a Record. It’s up to the schema to determine which keys are permitted/required, and with what values. Schemas can also be permissive, doing just enough validation to ensure that a record can be used in the expected way, but not objecting to the presence of unfamiliar key-value pairs. This seems to me to be the right way to deal with data at the edges of a system, where code that works with an older version of a protocol should not necessarily fail just because a newer version adds a few extra pieces of information. From the point of view of some code that works with Persons, the presence of some Address fields is neither here nor there.

Among the merits of Rekord’s approach is that it enlists the help of the compiler in making sure that data held in Rekords is coherent: there is a real sense in which a Rekord has a type, even though it is not an instance of that type in the traditional sense. Octarine trades off a bit of safety in the interests of flexibility, but also provides a mechanism for tagging records with types that can be used to verify that they are suitable for particular purposes. There’s no reason why each library could not be extended to support the other’s more/less permissive approach (as Karg does, for example, with its support for both typed and untyped keys).

Validation in Octarine

A Record in Octarine is a heterogeneous map, whose keys carry type information about their values. The only type guarantee you get is that if a Record contains a Key<T>, then the value pointed to by that key will be of type T:
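A stand-in sketch of that guarantee (not Octarine's actual classes):

```java
import java.util.*;

// Stand-in: a heterogeneous map whose keys carry the type of their values.
final class Key<T> {
    final String name;
    Key(String name) { this.name = name; }
}

final class Record {
    private final Map<Key<?>, Object> values = new HashMap<>();

    <T> Record with(Key<T> key, T value) {
        values.put(key, value);
        return this;
    }

    // The cast is safe because `with` only ever pairs a Key<T> with a T.
    @SuppressWarnings("unchecked")
    <T> Optional<T> get(Key<T> key) {
        return Optional.ofNullable((T) values.get(key));
    }
}
```

Given Key<String> name, record.get(name) is an Optional<String>; asking for it as anything else will not compile.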

Some additional safety can be given by validating a Record against a Schema<T>. If the validation is successful, a Valid<T> can be obtained, which is a Record ornamented with a type tag linking it to the schema that validated it. You can then enlist the help of the compiler in ensuring that code that will only work with a Valid<T> is only ever passed a Valid<T>.

Here’s how it works:
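A reconstruction of the example, using stand-ins for Octarine's Key, Record and Schema types (details assumed):

```java
import java.util.*;
import java.util.function.*;

// Stand-ins for Octarine's Key, Record and Schema types.
final class Key<T> {
    final String name;
    Key(String name) { this.name = name; }
}

final class Record {
    private final Map<Key<?>, Object> values = new HashMap<>();
    <T> Record with(Key<T> key, T value) { values.put(key, value); return this; }
    boolean containsKey(Key<?> key) { return values.containsKey(key); }
    @SuppressWarnings("unchecked")
    <T> Optional<T> get(Key<T> key) { return Optional.ofNullable((T) values.get(key)); }
}

// A Schema is a single lambda: it consumes a record and a consumer of
// validation errors.
interface Schema<T> extends BiConsumer<Record, Consumer<String>> { }

interface Person {
    Key<String> name = new Key<>("name");
    Key<Integer> age = new Key<>("age");

    Schema<Person> schema = (record, errors) -> {
        if (!record.containsKey(name)) errors.accept("name is missing");
        if (!record.containsKey(age)) errors.accept("age is missing");
        record.get(age).filter(a -> a < 0)
              .ifPresent(a -> errors.accept("age must not be negative"));
    };
}
```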

The Person interface here is never instantiated. It has two purposes:

  1. To act as a holder for the Keys and Schema associated with Persons
  2. To be used as a type tag identifying Records that have passed validation as instances of Valid<Person>

The Schema<Person> is created from a lambda expression that consumes a record and a consumer of validation errors, to which it can send strings describing any errors that it detects. In the example above, we test for the presence of two keys, and apply an additional test to the value of the “age” key.

You can call Validation::isValid to find out whether validation has succeeded, and Validation::validationErrors to get a list of all the validation errors that have been detected. Only if the list of validation errors is empty will Validation::get return a Valid<T>; otherwise a RecordValidationException will be thrown.

The tests for the presence of mandatory keys are a bit cumbersome, so you can use a KeySet to check that all the keys belonging to Person are present:
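Sketched with stand-in types (the KeySet API shown here is assumed, and a plain map stands in for the record):

```java
import java.util.*;
import java.util.function.*;

// Stand-in typed key.
final class Key<T> {
    final String name;
    Key(String name) { this.name = name; }
}

// Stand-in KeySet: a collection of keys that can check the presence of all
// of its members in one go, reporting each missing one.
final class KeySet {
    private final List<Key<?>> keys = new ArrayList<>();

    <T> Key<T> add(String name) {
        Key<T> key = new Key<>(name);
        keys.add(key);
        return key;
    }

    void accept(Map<Key<?>, Object> record, Consumer<String> errors) {
        for (Key<?> key : keys) {
            if (!record.containsKey(key)) {
                errors.accept("Missing key: " + key.name);
            }
        }
    }
}
```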

Obviously, the compiler can’t guarantee that validation will succeed, since the tests carried out by the schema are only executed at runtime. However, you can subsequently guarantee that other code will only be passed a record that has been validated, by requiring a type of Valid<T> rather than just a plain Record. This is how Octarine’s validation provides a bridge between the type-unsafe world of external data, and the type-safe world of your program’s domain model.

Enriching is Easy

Today I’d like to talk about a pattern that as far as I know is unique to Java 8. It makes use of the unusual way Java 8 implements lambdas as single-method interfaces, and the support for default methods in interfaces, to produce what I call “enriched lambdas”.

Here’s a simple example: a “Logged function” that logs every call that is made to it:

Now we can add the logging behaviour to any function by assigning or casting it to LoggedFunction rather than Function:
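A reconstruction of the two snippets (the logging format is my own invention):

```java
import java.util.function.Function;

// A Function that logs every call made to it. The lambda supplies
// applyLogged; the default apply method wraps it with logging.
interface LoggedFunction<T, R> extends Function<T, R> {
    R applyLogged(T input);

    @Override
    default R apply(T input) {
        R result = applyLogged(input);
        System.out.println("apply(" + input + ") = " + result);
        return result;
    }
}

final class Example {
    public static void main(String[] args) {
        // The same lambda, but with logging behaviour attached:
        LoggedFunction<Integer, Integer> doubler = x -> x * 2;
        doubler.apply(21);  // logs "apply(21) = 42" and returns 42
    }
}
```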

How does this work? LoggedFunction has only one method without a default implementation, so any lambda that matches the signature of this method can be cast to an instance of LoggedFunction. Instead of overriding the “apply” method of LoggedFunction, the lambda will override the “applyLogged” method, which the default “apply” method implementation delegates to.

Octarine uses this pattern in a variety of ways, notably to build up Schemas, Serialisers and Deserialisers from single lambda expressions:
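The same shape can be seen in a small stand-alone analogue (RowFormatter is invented for illustration, and is not part of Octarine):

```java
import java.util.*;
import java.util.function.*;
import java.util.stream.*;

// The single abstract method supplies the configuration; the default
// methods consume it.
interface RowFormatter<T> {
    // The one unimplemented method: which columns to extract from a row.
    List<Function<T, String>> columns();

    default String format(T row) {
        return columns().stream()
                        .map(column -> column.apply(row))
                        .collect(Collectors.joining(","));
    }
}

final class Example {
    public static void main(String[] args) {
        Function<Map<String, String>, String> name = m -> m.get("name");
        Function<Map<String, String>, String> city = m -> m.get("city");

        // A single lambda configures the whole formatter:
        RowFormatter<Map<String, String>> formatter = () -> Arrays.asList(name, city);

        Map<String, String> row = new HashMap<>();
        row.put("name", "Arthur");
        row.put("city", "London");
        System.out.println(formatter.format(row));  // Arthur,London
    }
}
```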

In this case, the unimplemented method is used to configure the object, and all other methods (with their default implementations) use this method to obtain their configuration.

This pattern is similar in some respects to the “enrich my library” pattern in Scala, which uses implicits – it provides a safe and convenient way to “upcast” simple lambdas to objects with enriched behaviour.