Part 2 - Exploring the Haskell Applicative Functor

This is part of a 3-part series on Functors, Applicatives and Monads in Haskell:


So in the last installment we looked at Functors. This time, we're going to explore what an Applicative is.

Obviously, it's a Haskell typeclass (again), but what actually is an Applicative?

The Wikipedia definition is quite detailed:

In functional programming, an applicative functor is a structure intermediate between functors and monads, in that they allow sequencing of functorial computations (unlike plain functors) but without deciding on which computation to perform on the basis of the result of a previous computation (unlike monads). Applicative functors are the programming equivalent of lax monoidal functors with tensorial strength in category theory.

Phew!? Well, we've not covered Monads yet, so let's not go there just yet. But it is, apparently, something that's a bit more 'powerful' than a Functor and allows sequencing, whereas a Functor could be thought of as just 'being' the result; i.e. the mapping could all happen in parallel, or in any order, and the end result would be the same.

Let's start with the type class to start building the intuition for an Applicative Functor:

class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

Curiously, or not, <*> is sometimes nicknamed the spaceship operator, but it is also called 'apply'. For the Applicative type class, a minimum of two functions need to be implemented for an instance to be complete; all of the other functions in the class can be derived from these two due to the laws. Of course there are laws for the type class!

Before we look at the laws, let's just take stock of the <*> function's shape, and compare it to fmap:

fmap ::   (a -> b) -> f a -> f b
<*>  :: f (a -> b) -> f a -> f b

Note that the infix version of fmap is <$>.

So the difference between fmap and <*> is that for <*> the mapping function is itself 'inside' (or 'lifted into', in Haskell-speak) the functor. Hold that thought whilst we look at the laws for the Applicative type class.
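To make that shape difference concrete, here's a quick sketch of my own (the names viaFmap and viaApply are mine) comparing the two on Maybe:

```haskell
-- fmap takes a plain function; <*> takes a function that is
-- already inside the functor.
viaFmap :: Maybe Int
viaFmap = fmap (+ 1) (Just 41)      -- the (+ 1) is outside the Maybe

viaApply :: Maybe Int
viaApply = Just (+ 1) <*> Just 41   -- the (+ 1) is inside the Maybe

main :: IO ()
main = print (viaFmap, viaApply)    -- both are Just 42
```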

Laws

From the wikipedia page, the laws are:

Identity

    pure id <*> v = v

Composition

    pure (.) <*> u <*> v <*> w = u <*> (v <*> w)

Homomorphism

    pure f <*> pure x = pure (f x)

Interchange

    u <*> pure y = pure ($ y) <*> u

When I first looked at these, my mind largely refused to process what I was seeing (apart from the Identity law), so let's try to unpack them a little.

Identity

This one should be relatively easy. The id function (identity) is:

id :: a -> a

i.e. whatever value you put in, you get the very same value back. So:

pure id <*> v == v
-- or (where f is the Applicative Functor, and v = f v' for some inner value v')
f (a -> a) <*> f v' == f v'

That seems reasonable. Lifting the id function into the applicative functor, and applying it to a value, must return that value.
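We can spot-check the identity law on a couple of concrete Applicatives (these are checks of my own, not a proof):

```haskell
-- pure id <*> v should leave v untouched, whatever the Applicative.
identityMaybe :: Bool
identityMaybe = (pure id <*> Just (5 :: Int)) == Just 5

identityList :: Bool
identityList = (pure id <*> [1, 2, 3]) == [1, 2, 3 :: Int]

main :: IO ()
main = print (identityMaybe, identityList)   -- (True,True)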

Composition

<*> :: f (a -> b) -> f a -> f b
(.) :: (b -> c) -> (a -> b) -> a -> c

-- and the composition law (here '===' meaning 'being the same as')
    pure (.) <*> u <*> v <*> w === u <*> (v <*> w)

So if u is f (b -> c), v is f (a -> b) and w is f a, then we can do the following substitutions:

pure (.) === f ((b -> c) -> (a -> b) -> (a -> c))

pure (.) <*> u === (<*>) (pure (.)) u
               === f ((b -> c) -> (a -> b) -> (a -> c)) applied to f (b -> c)

Therefore:

u :: f (b -> c)
pure (.) <*> u === f ((a -> b) -> (a -> c))

And:

v :: f (a -> b)
pure (.) <*> u <*> v === f ((a -> b) -> (a -> c)) <*> f (a -> b)
pure (.) <*> u <*> v === f (a -> c)

Applying that to w (which is f a), we get:

pure (.) <*> u <*> v <*> w === f c

And that has to obey the equivalence:

pure (.) <*> u <*> v <*> w === u <*> (v <*> w)

Transposing back into the right hand side, we get:

u <*> (v <*> w) === f (b -> c) <*> (f (a -> b) <*> f a)
                === f (b -> c) <*> f b
                === f c

Which is a whole lot simpler. But that's basically the composition law taken care of.
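The law can also be spot-checked with concrete values (my own example, with u, v and w in Maybe):

```haskell
-- u plays the role of f (b -> c), v of f (a -> b), w of f a.
u, v :: Maybe (Int -> Int)
u = Just (* 2)
v = Just (+ 3)

w :: Maybe Int
w = Just 10

lhs, rhs :: Maybe Int
lhs = pure (.) <*> u <*> v <*> w   -- ((10 + 3) * 2) inside Maybe
rhs = u <*> (v <*> w)

main :: IO ()
main = print (lhs, rhs, lhs == rhs)   -- (Just 26,Just 26,True)
```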

Homomorphism

A homomorphism means 'same-shape', which essentially means that two things are homomorphic if they have the same shape. How does this translate into the Applicative Functor law?

    pure f <*> pure x === pure (f x)

Working through it, with fn as a function of type a -> b and a value x (of type a):

pure fn <*> pure x === f (a -> b) <*> f a === f b

-- and

pure (fn x) === f ((a -> b) a) === f b

i.e. the function that takes an a and returns a b when applied using <*> to a value of type a lifted into the functor, must, after the application, have the same 'shape'; i.e. be in the same functor.
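A concrete spot check of my own, with Maybe as the functor:

```haskell
-- Lifting the function and the value, then applying, is the same
-- as applying first and lifting the result.
lhs, rhs :: Maybe Int
lhs = pure (+ 1) <*> pure 41
rhs = pure ((+ 1) 41)

main :: IO ()
main = print (lhs == rhs)   -- True; both are Just 42
```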

Interchange

    u <*> pure y === pure ($ y) <*> u

So the ($ y) may have confused you. Let's look at the signature of ($):

($) :: (a -> b) -> a -> b

So that's just function application, with ($) as the infix operation. e.g.

f a === f $ a

Therefore:

-- assuming y is of type 'a'
($ y) :: (a -> b) -> b

The above says ($ y) is a function that takes a (function that takes an a and returns a b) and runs that function with y, as y has the type a.

e.g.

-- where :t (+2) :: Num a => a -> a
-- i.e. (+2) is a function that adds two to whatever it is applied to.
(+2) 4 = 6
(+2) $ 4 = 6
($ 4) (+2) = 6
-- or
let f x = 2 + x
f 4 = 6
f $ 4 = 6
($ 4) f = 6

So, back to interchange.

    u <*> pure y === pure ($ y) <*> u

u must be of the form f (a -> b) and y must be of type a, so:

(<*>) :: f (a -> b) -> f a -> f b

f (a -> b) <*> f a === f ((a -> b) -> b) <*> f (a -> b)
               f b === f b
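And again, a spot check of my own with concrete values in Maybe:

```haskell
-- u <*> pure y should equal pure ($ y) <*> u.
u :: Maybe (Int -> Int)
u = Just (* 3)

y :: Int
y = 7

lhs, rhs :: Maybe Int
lhs = u <*> pure y       -- apply the lifted function to a lifted 7
rhs = pure ($ y) <*> u   -- apply a lifted 'apply-to-7' to the function

main :: IO ()
main = print (lhs == rhs)   -- True; both are Just 21
```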

So what does the applicative functor let us do?

The real comparison is between a Functor and a Monad, with the Applicative Functor sitting between them. You can think of the f bit as being the context in which the computation is performed. Obviously, the <*> function has access to the context in which the (a -> b) is applied. Maybe and [] (list) are both Applicative Functors, and therefore Functors, as every Applicative is also a Functor.

Maybe Applicative instance

instance Applicative Maybe where
    pure x = Just x
    Nothing <*> _ = Nothing
    Just _ <*> Nothing = Nothing
    Just f <*> Just x = Just (f x)

Why might this be useful?

The Maybe type is often used to represent values that might exist, or rather, use Nothing when a value doesn't exist. Why might a value not exist? Let's say we are parsing a String into an Int. A safe way to do this is with readMaybe (from Text.Read), which returns Nothing if it can't parse the string.

readMaybe :: Read a => String -> Maybe a

readMaybe "123" :: Maybe Int
Just 123
readMaybe "three" :: Maybe Int
Nothing

If you, say, had two strings with numbers and you wanted to add them:

s1 = "1"
s2 = "2"
(+) <$> readMaybe s1 <*> readMaybe s2
Just 3

If either had failed (i.e. was Nothing) then the result would be Nothing, but you wouldn't have to check for this as part of the calculation.
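Here's a minimal runnable version of that example (the helper name addStrings is mine):

```haskell
import Text.Read (readMaybe)

-- If either parse fails, the whole sum is Nothing; no explicit
-- checks are needed in the calculation itself.
addStrings :: String -> String -> Maybe Int
addStrings s1 s2 = (+) <$> readMaybe s1 <*> readMaybe s2

main :: IO ()
main = do
  print (addStrings "1" "2")       -- Just 3
  print (addStrings "three" "2")   -- Nothing
```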

List Applicative instance

For lists, it looks like:

instance Applicative [] where
    pure x = [x]
    gs <*> xs = [g x | g <- gs, x <- xs]

An example will probably help:

[(+1), (+2)]    <*> [1,2,3] -> [2,3,4,3,4,5]
[(+1), (*2)]    <*> [1,2,3] -> [2,3,4,2,4,6]
[(+1), (+(-1))] <*> [1,2,3] -> [2,3,4,0,1,2]

[f, g]          <*> [x,y,z] -> [f x, f y, f z, g x, g y, g z]

But how about:

[(+), (*)] <*> [1,2] <*> [3,4] -> [4,5,5,6,3,4,6,8]

i.e. the first 4 terms are the result of (+), and the last 4 terms are from (*), as in:

[1+3, 1+4, 2+3, 2+4, 1*3, 1*4, 2*3, 2*4]

Okay, that's a bit mad. However, combining fmap and apply we can produce all of the pairwise products of a row and a column (the ingredients of a row-by-column multiplication) as:

(*) <$> row <*> col

And remember, (*) <$> row (in this case) is a list of partially applied multiplications, where the values in row are bound to the l.h.s. of the multiplication. This list of functions is then applied, using the <*> operator, to the col list.
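An aside of my own: the plain list Applicative pairs every function with every value. For an actual row-by-column dot product you want the values paired up positionally, which is what the ZipList wrapper from Control.Applicative does:

```haskell
import Control.Applicative (ZipList (..))

-- ZipList's <*> applies functions to values position by position,
-- so summing the results gives the dot product of row and col.
dotProduct :: Num a => [a] -> [a] -> a
dotProduct row col = sum (getZipList ((*) <$> ZipList row <*> ZipList col))

main :: IO ()
main = print (dotProduct [1, 2, 3] [4, 5, 6])   -- 1*4 + 2*5 + 3*6 = 32
```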

So, again, what is an Applicative Functor

By now, you may be realising that it's a bit abstract. It's certainly a 'thing' in Haskell, and has appeared in other languages, but from the perspective of Category Theory it really is just a particular kind of functor.

For me, it's about what you can do with an Applicative Functor, and what sort of form it takes. You hardly ever start off with <*>. You usually have a function (a -> b) and start applying it as a Functor (with <$>), and then if it needs more values, they can be applied with <*>.

So there's an element of sequencing a function over its arguments, but this is not to be confused with a Monad, which can base the next computation on the previous result, due to how bind (>>=) works.

One particularly useful applicative library is optparse-applicative which uses an Applicative Functor to define and parse options from the command line.

You can't hold 'state' in an applicative, but you do have the context in which the applied function executes, and the applying function (the implementation of <*> for the instance) has access to both that context and the applied function.

e.g. <*> for Maybe 'knows' it's a Maybe and thus can make 'choices' based on whether the value exists or not. Equally for a list, and also for a Parser in the optparse-applicative library. The structure of the Applicative is preserved (as it is the context), and thus can be used. e.g.

For example, f <$> x <*> y <*> z would produce a value of some rich type that several functions could use. This is a Builder-type pattern; e.g. the built value could be evaluated (or 'run'), or examined, or tested, etc.
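A sketch of that Builder-style pattern, using a hypothetical Config record of my own invention, assembled from three Maybe values:

```haskell
-- If any piece is missing, the whole Config is Nothing; the
-- applicative chain threads the Maybe context for us.
data Config = Config
  { host :: String
  , port :: Int
  , user :: String
  } deriving (Show, Eq)

buildConfig :: Maybe String -> Maybe Int -> Maybe String -> Maybe Config
buildConfig h p u = Config <$> h <*> p <*> u

main :: IO ()
main = do
  print (buildConfig (Just "localhost") (Just 8080) (Just "alice"))
  print (buildConfig (Just "localhost") Nothing (Just "alice"))   -- Nothing
```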

Limitations

You can't short-circuit an applicative expression. All the arguments have to be applied, so you can't decide mid-way through a computation to skip the rest of it. This is something a Monad can do. However, with laziness, you can still use applicatives with infinite lists or streams; you just can't abort a computation within an applicative <*> expression.

In Summary

An Applicative Functor is essentially the type class Applicative (in Haskell at least). In Category Theory it corresponds to a lax monoidal functor (with tensorial strength), so really it is just a particular kind of functor.

However, the type class Applicative is useful for expressing computations where preserving structure (abstractly) is useful. Examples of this are Maybe and Either where functions can be applied to 'values in a context'. i.e. a failed value propagates as a failure, and errors can be combined.

However, if a decision to continue needs to be made, or state needs to be held, or any other kind of effect is needed, then the more powerful Monad is required to express this kind of computation within the realm of pure functions.
