To recap, authorization is

> the function of specifying access rights/privileges to resources related to information security and computer security in general and to access control in particular.

This is in contrast to *authentication*, which is the act of showing that someone is who they claim to be.

Authorization is a very important process, especially in a business like CircuitHub where we host many confidential projects. Accidentally exposing this data could be catastrophic to both our business and customers, so we take it *very* seriously.

Out of the box, Servant has experimental support for authentication, which is a good start. `servant-server` gives us `Servant.Server.Experimental.Auth`, which makes it a doddle to plug in our existing authentication mechanism (cookies & Redis). But that only shows that we know *who* is asking for resources - how do we check that they are *allowed* to access those resources?

As a case study, I want to have a look at a particular endpoint: `/projects/:id/price`. This endpoint calculates the pricing options CircuitHub can offer a project, and there are a few important points to how it works:

- The pricing for a project depends on the user viewing it. This is because some users can consign parts so CircuitHub won’t order them. Naturally, this affects the price, so pricing is viewer dependent.
- Some projects are owned by organizations, and should be priced by the organization as a whole. If a user is a member of the organization that owns the project pricing has been requested for, return the pricing for the organization. If the user is not in the organization, return their own custom pricing.
- Private projects should only expose their pricing to superusers, the owner of the project, and any members of the project’s organization (if it’s owned by an organization).

This specification is messy and complicated, but that’s just reality doing its thing.

Our first approach was to try to represent this in Servant’s API type. We start with the “vanilla” route, with no authentication or authorization:

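For reference, the vanilla route would look something like this (the same type as the blocks below, minus the authentication combinator):

```haskell
type API =
  "projects"
    :> Capture "id" ProjectId
    :> "price"
    :> Get '[ JSON ] Pricing
```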
Next, we add authentication:

```
type API =
  AuthProtect CircuitHub
    :> "projects"
    :> Capture "id" ProjectId
    :> "price"
    :> Get '[ JSON ] Pricing
```

At this point, we’re on our own - Servant offers no authorization primitives (though there are discussions on this topic).

My first attempt to add authorization to this was:

```
type API =
  AuthorizeWith ( AuthProtect CircuitHub )
    :> "projects"
    :> CanView ( Capture "id" ProjectId )
    :> "price"
    :> Get '[ JSON ] Pricing
```

There are two new routing combinators here: `AuthorizeWith` and `CanView`. The idea is that `AuthorizeWith` somehow captures the result of authenticating and provides that information to `CanView`. `CanView` itself does some kind of authorization using a type class based on its argument - here, `Capture "id" ProjectId`. The result certainly worked, but I was unhappy with both the complexity of implementing it (which leaves scope to get it wrong) and the lack of actual evidence of authorization.

The latter point needs some expanding. What I mean by “lacking evidence” is that with the current approach, the authorization is essentially like writing the following code:

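Concretely, something like this hypothetical sketch (`isAuthorized` is an invented stand-in for whatever boolean check we’d perform; `doThings` is the handler body):

```haskell
-- Boolean-blind authorization: check a Bool, then throw the evidence away.
authorized <- isAuthorized user projectId
if authorized
  then doThings
  else throwError err401
```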
If I later add more resource access into `doThings`, what will hold me accountable for checking authorization on those resources? The answer is… nothing! This is similar to boolean blindness - we performed a logical check, only to throw all the resulting evidence away immediately.

At this point I wanted to start exploring some different options. While playing around with ideas, I was reminded of the wonderful paper “Ghosts of Departed Proofs”, and it got me thinking… can we use these techniques for authorization?

The basic idea of GDP is to name values using higher-rank quantification, and then - in trusted modules - produce proofs that refer to these names. To name values, we introduce a `Named` type, and the higher-ranked function `name` to name things:

```
module Named ( Named, forgetName, name ) where

newtype Named n a = Named { forgetName :: a }

name :: a -> ( forall name. Named name a -> r ) -> r
name x f = f ( Named x )
```

Note that the *only* way to construct a `Named` value outside of this module is to use `name`, which introduces a completely distinct name for a limited scope. Within this scope, we can construct proofs that refer to these names. As a basic example, we could use GDP to prove that a number is prime:

```
module Prime ( IsPrime, checkPrime ) where

data IsPrime name = IsPrime

checkPrime :: Named name Int -> Maybe ( IsPrime name )
checkPrime named
  | isPrime ( forgetName named ) = Just IsPrime
  | otherwise = Nothing
```

Here we have our first proof witness - `IsPrime`. We can witness whether or not a named `Int` is prime using `checkPrime` - like the boolean-returning `isPrime`, this determines whether a number is prime, but we also get evidence that we’ve checked a *specific* value for primality.

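To see how these pieces fit together, here’s a small self-contained sketch combining the two modules above, with a naive trial-division `isPrime` filled in (an assumption - the post leaves `isPrime` undefined):

```haskell
{-# LANGUAGE BlockArguments #-}
{-# LANGUAGE RankNTypes #-}

-- The Named machinery from the post.
newtype Named n a = Named { forgetName :: a }

name :: a -> (forall name. Named name a -> r) -> r
name x f = f (Named x)

-- The primality witness from the post.
data IsPrime name = IsPrime

-- Naive trial division (an assumption; the post doesn't define isPrime).
isPrime :: Int -> Bool
isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1]

checkPrime :: Named name Int -> Maybe (IsPrime name)
checkPrime named
  | isPrime (forgetName named) = Just IsPrime
  | otherwise = Nothing

main :: IO ()
main =
  -- Name the value 7, then check that specific named value for primality.
  name 7 \named ->
    case checkPrime named of
      Just IsPrime -> putStrLn "7 is prime"
      Nothing      -> putStrLn "7 is not prime"
```

Within the scope of `name`, a `Just IsPrime` result is tied to exactly the value we named - there is no way to mix it up with a proof about a different number.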
This is the whirlwind tour of GDP - I highly recommend reading the paper for a more thorough explanation. Also, the library `justified-containers` explores these ideas in the context of maps, where we have proofs that specific keys are in the map (giving us total lookups, rather than partial lookups).

This is all well and good, but how does this help with authorization? The basic idea is that authorization is itself a proof - a proof that we can view or interact with resources in a particular way. First, we have to decide which functions need authorization - these functions will be modified to require proof values that refer to the function arguments. In this example, we’ll assume our Servant handler is going to itself make a call to the `price :: ProjectId -> UserId -> m Price` function. However, given the specification above, we need to make sure that the user and project are compatible. To do this, we’ll name the arguments, and then introduce a proof that the user in question can view the project:

```
price
  :: Named projectId ProjectId
  -> Named userId UserId
  -> userId `CanViewProject` projectId
  -> m Price
```

But what is this `CanViewProject` proof?

A first approximation is to treat it as some kind of primitive or axiom. A blessed function can postulate this proof with no further evidence:

```
module CanViewProject ( CanViewProject, canViewProject ) where

data CanViewProject userId projectId =
  TrustMe

canViewProject
  :: Named projectId ProjectId
  -> Named userId UserId
  -> m ( Maybe ( CanViewProject userId projectId ) )
canViewProject projectId userId = do
  -- ... lots of database access/IO
  if ...
    then return ( Just TrustMe )
    else return Nothing
```

This is a good start! Our `price` function can only be called with a `CanViewProject` that matches the named arguments, and the only way to construct such a value is to use `canViewProject`. Of course, we could get the implementation of *this* wrong, so we should focus our testing efforts on making sure it’s doing the right thing.

However, the Agda programmer in me is a little unhappy about just blindly postulating `CanViewProject` at the end. We’ve got a bit of vision back from our boolean blindness, but the landscape is still blurry. Fortunately, all we have to do is recruit more of the same machinery to subdivide this proof into smaller ones:

```
module ProjectIsPublic ( ProjectIsPublic, projectIsPublic ) where

data ProjectIsPublic project = TrustMe

projectIsPublic
  :: Named projectId ProjectId
  -> m ( Maybe ( ProjectIsPublic projectId ) )
```

```
module UserBelongsToProjectOrganization
  ( UserBelongsToProjectOrganization, userBelongsToProjectOrganization )
  where

data UserBelongsToProjectOrganization user project = TrustMe

userBelongsToProjectOrganization
  :: Named userId UserId
  -> Named projectId ProjectId
  -> m ( Maybe ( UserBelongsToProjectOrganization userId projectId ) )
```

```
module UserIsSuperUser ( UserIsSuperUser, userIsSuperUser ) where

data UserIsSuperUser user = TrustMe

userIsSuperUser :: Named userId UserId -> m ( Maybe ( UserIsSuperUser userId ) )
```

```
module UserOwnsProject ( UserOwnsProject, userOwnsProject ) where

data UserOwnsProject user project = TrustMe

userOwnsProject
  :: Named userId UserId
  -> Named projectId ProjectId
  -> m ( Maybe ( UserOwnsProject userId projectId ) )
```

Armed with these smaller authorization primitives, we can build up our richer authorization scheme:

```
module CanViewProject where

data CanViewProject userId projectId
  = ProjectIsPublic ( ProjectIsPublic projectId )
  | UserOwnsProject ( UserOwnsProject userId projectId )
  | UserIsSuperUser ( UserIsSuperUser userId )
  | UserBelongsToProjectOrganization
      ( UserBelongsToProjectOrganization userId projectId )

canViewProject
  :: Named userId UserId
  -> Named projectId ProjectId
  -> m ( Maybe ( CanViewProject userId projectId ) )
```

Now `canViewProject` just calls out to the other authorization routines to build its proof. Furthermore, there’s something interesting here: `CanViewProject` doesn’t postulate anything - each case carries a proof of the particular authorization that grants access. This means that we can actually open up the whole `CanViewProject` module to the world - there’s no need to keep anything private. By doing this and allowing people to pattern match on `CanViewProject`, authorization results become reusable - if something else only cares that a user is a superuser, we might be able to pull this directly out of `CanViewProject` - no need for any redundant database checks!

In fact, this very idea can help us implement the final part of our original specification:

> Some projects are owned by organizations, and should be priced by the organization as a whole. If a user is a member of the organization that owns the project pricing has been requested for, return the pricing for the organization. If the user is not in the organization, return their own custom pricing.

If we refine our `UserBelongsToProjectOrganization` proof, we can actually maintain a bit of extra evidence:

```
data UserBelongsToProjectOrganization userId projectId where
  UserBelongsToProjectOrganization
    :: { projectOrganizationId :: Named orgId UserId
       , organizationOwnsProject :: UserOwnsProject orgId projectId
       }
    -> UserBelongsToProjectOrganization userId projectId

withUserBelongsToProjectOrganizationEvidence
  :: UserBelongsToProjectOrganization userId projectId
  -> ( forall orgId. Named orgId UserId -> UserOwnsProject orgId projectId -> r )
  -> r
withUserBelongsToProjectOrganizationEvidence UserBelongsToProjectOrganization{..} k =
  k projectOrganizationId organizationOwnsProject
```

Now, whenever we have a `UserBelongsToProjectOrganization` proof, we can pluck out the actual organization that we’re talking about. We also have evidence that the organization owns the project, so we can easily construct a new `CanViewProject` proof - proofs generate more proofs!

```
price
  :: Named projectId ProjectId
  -> Named userId UserId
  -> userId `CanViewProject` projectId
  -> m Price
price projectId userId = \case
  UserBelongsToProjectOrganization proof ->
    withUserBelongsToProjectOrganizationEvidence proof \orgId ownership ->
      price projectId orgId ( UserOwnsProject ownership )
```

At the start of this post, I mentioned that the goal was to integrate this with Servant. So far, we’ve looked at adding authorization to a single function, so how does this interact with Servant? Fortunately, very little needs to change. The Servant API type is authorization-free, but does mention authentication.

```
type API =
  AuthProtect CircuitHub
    :> "projects"
    :> Capture "id" ProjectId
    :> "price"
    :> Get '[ JSON ] Pricing
```

It’s only when we need to call our `price` function that we need to have performed some authorization, and this happens in the server-side handler. We do this by naming the respective arguments, witnessing the authorization proof, and then calling `price`:

```
priceProject :: User -> ProjectId -> Handler Pricing
priceProject user projectId =
  name ( userId user ) \namedUserId ->
  name projectId \namedProjectId -> do
    mCanViewProjectProof <-
      canViewProject namedUserId namedProjectId
    case mCanViewProjectProof of
      Nothing ->
        fail "Authorization failed"
      Just granted ->
        price namedProjectId namedUserId granted
```

That’s where I’ve got to so far. It’s early days, but the approach is promising. What I really like is that there is almost a virtual slider between ease and rigour. It can be easy to get carried away, naming absolutely everything and trying to find the most fundamental proofs possible. I’ve found so far that it’s better to back off a little bit - are you *really* going to get some set membership checks wrong? Maybe. But a property check is probably going to be enough to keep that function in check. We’re *not* in a formal proof engine setting; pretending we are just makes things harder than they need to be.

In this post, I’ll introduce the `fast-downward` library and show how it can be used to solve planning problems. The name comes from the backend solver it uses - Fast Downward. But what’s a planning problem?

Roughly speaking, planning problems are a subclass of AI problems where we need to work out a *plan* that moves us from an initial state to some goal state. Typically, we have:

- A known *starting state* - information about the world we know to be true right now.
- A set of possible *effects* - deterministic ways we can change the world.
- A *goal* state that we wish to reach.

With this, we need to find a plan:

- A *solution* to a planning problem is a *plan* - a totally ordered sequence of steps that converge the starting state into the goal state.

Planning problems are essentially state space search problems, and crop up in all sorts of places. The common examples are moving a robot around, planning logistics problems, and so on, but they can be used for plenty more! For example, the Beam library uses state space search to work out how to converge a database from one state to another (automatic migrations) by adding/removing columns.

State space search is an intuitive approach - simply build a graph where nodes are states and edges are state transitions (effects), and find a path (possibly shortest) that gets you from the starting state to a state that satisfies some predicates. However, naive enumeration of all states rapidly grinds to a halt. Forming optimal plans (least cost, least steps, etc.) is an extremely difficult problem, and there is a *lot* of literature on the topic (see ICAPS - the International Conference on Automated Planning and Scheduling - and recent International Planning Competitions for an idea of the state of the art). The `fast-downward` library uses the state-of-the-art Fast Downward solver and provides a small DSL for interfacing with it from Haskell.

In this post, we’ll look at using `fast-downward` in the context of solving a small planning problem - moving balls between rooms via a robot. This post is literate Haskell; here’s the context we’ll be working in:

```
{-# language DisambiguateRecordFields #-}

module FastDownward.Examples.Gripper where

import Control.Monad
import qualified FastDownward.Exec as Exec
import FastDownward.Problem
```

If you’d rather see the Haskell in its entirety without comments, simply head to the end of this post.

As mentioned, in this example, we’ll consider the problem of transporting balls between rooms via a robot. The robot has two grippers and can move between rooms. Each gripper can hold zero or one balls. Our initial state is that everything is in room A, and our goal is to move all balls to room B.

First, we’ll introduce some domain-specific types and functions to help model the problem. The `fast-downward` DSL can work with any type that is an instance of `Ord`.

```
data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)

adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA

data BallLocation = InRoom Room | InGripper
  deriving (Eq, Ord, Show)

data GripperState = Empty | HoldingBall
  deriving (Eq, Ord, Show)
```

A ball in our model is modelled by its current location. As this changes over time, it is a `Var` - a state variable.

A gripper in our model is modelled by its state - whether or not it’s holding a ball.

Finally, we’ll introduce a type of all possible actions that can be taken:

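These correspond to the following definitions, taken from the complete listing at the end of this post:

```haskell
-- A ball is a state variable tracking its location.
type Ball = Var BallLocation

-- A gripper is a state variable tracking whether it holds a ball.
type Gripper = Var GripperState

-- The domain-specific actions the solver can report back.
data Action = PickUpBall | SwitchRooms | DropBall
  deriving (Show)
```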
With this, we can now begin modelling the specific *instance* of the problem. We do this by working in the `Problem` monad, which lets us introduce variables (`Var`s) and specify their initial state.

First, we introduce a state variable for each of the 4 balls. As in the problem description, all balls are initially in room A.

Next, introduce a variable for the room the robot is in - which also begins in room A.

We also introduce variables to track the state of each gripper.

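Putting those three steps together, the opening of `problem` looks like this (excerpted from the complete listing at the end of this post; the rest of the definition is elided):

```haskell
problem :: Problem (Maybe [Action])
problem = do
  -- Four balls, all starting in room A.
  balls <- replicateM 4 (newVar (InRoom RoomA))
  -- The robot also starts in room A.
  robotLocation <- newVar RoomA
  -- Two grippers, both initially empty.
  grippers <- replicateM 2 (newVar Empty)
  ...
```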
This is sufficient to model our problem. Next, we’ll define some effects to change the state of the world.

Effects are computations in the `Effect` monad - a monad that allows us to read and write variables, and also to fail (via `MonadPlus`). We could define these effects as top-level definitions (which might be better if we were writing a library), but here I’ll just define them inline so they can easily access the above state variables.

Effects may be used at any time by the solver. Indeed, that’s what solving planning problems is all about! The hard part is choosing effects intelligently, rather than blindly trying everything. Fortunately, you don’t need to worry about that - Fast Downward will take care of that for you!

The first effect takes a ball and a gripper, and attempts to pick up that ball with that gripper.

```
pickUpBallWithGripper :: Ball -> Gripper -> Effect Action
pickUpBallWithGripper b gripper = do
  Empty <- readVar gripper                 -- (1)
  robotRoom <- readVar robotLocation       -- (2)
  ballLocation <- readVar b
  guard (ballLocation == InRoom robotRoom) -- (3)
  writeVar b InGripper                     -- (4)
  writeVar gripper HoldingBall
  return PickUpBall                        -- (5)
```

1. First, we check that the gripper is empty. This can be done concisely by using an incomplete pattern match: `do` notation desugars incomplete pattern matches to a call to `fail`, which in the `Effect` monad simply means “this effect can’t currently be used”.
2. Next, we read where the robot and the ball are.
3. Then we make sure they are both in the same room. Here we couldn’t choose a particular pattern match to use, because picking up a ball should be possible in either room. Instead, we simply observe the location of both the ball and the robot, and use an equality test with `guard` to make sure they match.
4. If we got this far, then we can pick up the ball. The act of picking up the ball is to say that the ball is now in a gripper, and that the gripper is now holding a ball.
5. Finally, we return some domain-specific information to use if the solver chooses this effect. This has no impact on the final plan, but it’s information we can use to execute the plan in the real world (e.g., sending actual commands to the robot).

This effect moves the robot to the room adjacent to its current location.

```
moveRobotToAdjacentRoom :: Effect Action
moveRobotToAdjacentRoom = do
  modifyVar robotLocation adjacent
  return SwitchRooms
```

This is an “unconditional” effect, as we don’t have any explicit guards or pattern matches. We simply flip the current location using the adjacency function.

Again, we finish by returning some information to use when this effect is chosen.

Finally, we have an effect to drop a ball from a gripper.

```
dropBall :: Ball -> Gripper -> Effect Action
dropBall b gripper = do
  HoldingBall <- readVar gripper     -- (1)
  InGripper <- readVar b
  robotRoom <- readVar robotLocation -- (2)
  writeVar gripper Empty             -- (3)
  writeVar b (InRoom robotRoom)      -- (4)
  return DropBall                    -- (5)
```

1. First, we check that the given gripper is holding a ball, and that the given ball is in a gripper.
2. If we got here, then those assumptions hold. We’ll update the location of the ball to be the location of the robot, so first we read out the robot’s location.
3. Empty the gripper.
4. Move the ball.
5. And we’re done! We just return a tag to indicate that this effect was chosen.

With our problem modelled, we can now attempt to solve it. We invoke `solve` with a particular search engine (in this case, A* with landmark-counting heuristics). We give the solver two bits of information:

- A list of all effects - all possible actions the solver can use. These are precisely the effects we defined above, but instantiated for all balls and grippers.
- A goal state. Here we’re using a list comprehension which enumerates all balls, adding the condition that each ball’s location must be `InRoom RoomB`.

```
solve
  cfg
  ( [ pickUpBallWithGripper b g | b <- balls, g <- grippers ]
      ++ [ dropBall b g | b <- balls, g <- grippers ]
      ++ [ moveRobotToAdjacentRoom ]
  )
  [ b ?= InRoom RoomB | b <- balls ]
```

So far we’ve been working in the `Problem` monad. We can escape this monad by using `runProblem :: Problem a -> IO a`. In our case, `a` is `SolveResult Action`, so running the problem might give us a plan (courtesy of `solve`). If it did, we’ll print the plan.

```
main :: IO ()
main = do
  res <- runProblem problem
  case res of
    Solved plan -> do
      putStrLn "Found a plan!"
      zipWithM_
        ( \i step -> putStrLn ( show i ++ ": " ++ show step ) )
        [ 1 :: Int .. ]
        ( totallyOrderedPlan plan )
    _ ->
      putStrLn "Couldn't find a plan!"
```

`fast-downward` allows you to extract a totally ordered plan from a solution, but can also provide a `partiallyOrderedPlan`. This type of plan is a graph (partial order) rather than a list (total order), and attempts to recover some concurrency. For example, if two effects do not interact with each other, they will be scheduled in parallel.

All that’s left is to run the problem!

```
> main
Found a plan!
1: PickUpBall
2: PickUpBall
3: SwitchRooms
4: DropBall
5: DropBall
6: SwitchRooms
7: PickUpBall
8: PickUpBall
9: SwitchRooms
10: DropBall
11: DropBall
```

Woohoo! Not bad for 0.02 secs, too :)

It might be interesting to some readers to understand what’s going on behind the scenes. Fast Downward is a C++ program, yet somehow it seems to be running Haskell code with nothing but an `Ord` instance - there are no marshalling types involved!

First, let’s understand the input to Fast Downward. Fast Downward requires an encoding in its own SAS format. This format has a list of variables, where each variable contains a list of values. The contents of the values aren’t actually used by the solver; rather, it just works with indices into the list of values for a variable. This observation means we can just invent values on the Haskell side and carefully manage mapping indices back and forth.

Next, Fast Downward needs a list of operators, which are ground instantiations of our effects above. Ground instantiations of operators mention exact values of variables. Recounting our gripper example, `pickUpBallWithGripper b gripper` actually produces two operators - one for each room. However, we didn’t have to be this specific in the Haskell code, so how do we recover this information?

`fast-downward` actually performs expansion on the given effects to find out *all* possible ways they could be called, by non-deterministically evaluating them to find a fixed point.

A small example can be seen in the `moveRobotToAdjacentRoom` `Effect`. This will actually produce two operators - one to move from room A to room B, and one to move from room B to room A. The body of this `Effect`, once we inline the definition of `modifyVar`, is:

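Assuming the usual definition `modifyVar v f = readVar v >>= writeVar v . f`, the inlined body is just:

```haskell
readVar robotLocation >>= writeVar robotLocation . adjacent
```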
Initially, we only know that `robotLocation` can take the value `RoomA`, as that is what the variable was initialised with. So we pass this in and see what the rest of the computation produces. This means we evaluate `adjacent RoomA` to yield `RoomB`, and write `RoomB` into `robotLocation`. We’re done for the first pass through this effect, but we gained new information - namely that `robotLocation` *might* at some point contain `RoomB`. Knowing this, we then rerun the effect, but now the first `readVar` gives us two paths:

```
readVar robotLocation
  >>= \RoomA -> writeVar robotLocation RoomB -- if we read RoomA
  >>= \RoomB -> writeVar robotLocation RoomA -- if we read RoomB (adjacent RoomB = RoomA)
```

This shows us that `robotLocation` might also be set to `RoomA`. However, we already knew this, so at this point we’ve reached a fixed point.

In practice, this process is run over all `Effect`s at the same time, because they may interact - a change in one `Effect` might cause new paths to be found in another. However, because `fast-downward` only works with finite-domain representations, this algorithm always terminates. Unfortunately, I have no way of enforcing this that I can see, which means a user *could* make this normalisation process loop forever by writing `modifyVar v succ`, which would produce an infinite number of variable assignments.

CircuitHub is using this in production (and I mean real, physical production!) to coordinate activities in its factories. By using AI, we have a declarative interface to the production process - rather than saying what steps are to be performed, we can instead say what state we want to end up in, and we can trust the planner to find a suitable way to make it so.

Haskell really shines here, giving a very powerful way to present problems to the solver. The industry standard is PDDL, a Lisp-like language that I’ve found in practice to be less than ideal for actually encoding problems. By using Haskell, we:

- Can easily feed the results of the planner into a scheduler to execute the plan, with no messy marshalling.
- Use well known means of abstraction to organise the problem. For example, in the above we use Haskell as a type of macro language – using do notation to help us succinctly formulate the problem.
- Abstract out the details of planning problems so the rest of the team can focus on the domain specific details – i.e., what options are available to the solver, and the domain specific constraints they are subject to.

`fast-downward` is available on Hackage now, and I’d like to express a huge thank you to CircuitHub for giving me the time to explore this large space and to refine my work into the best solution I could think of. This work is the result of numerous iterations, but I think it was worth the wait!

Here is the complete example, as a single Haskell block:

```
{-# language DisambiguateRecordFields #-}

module FastDownward.Examples.Gripper where

import Control.Monad
import qualified FastDownward.Exec as Exec
import FastDownward.Problem


data Room = RoomA | RoomB
  deriving (Eq, Ord, Show)


adjacent :: Room -> Room
adjacent RoomA = RoomB
adjacent RoomB = RoomA


data BallLocation = InRoom Room | InGripper
  deriving (Eq, Ord, Show)


data GripperState = Empty | HoldingBall
  deriving (Eq, Ord, Show)


type Ball = Var BallLocation


type Gripper = Var GripperState


data Action = PickUpBall | SwitchRooms | DropBall
  deriving (Show)


problem :: Problem (Maybe [Action])
problem = do
  balls <- replicateM 4 (newVar (InRoom RoomA))
  robotLocation <- newVar RoomA
  grippers <- replicateM 2 (newVar Empty)

  let
    pickUpBallWithGripper :: Ball -> Gripper -> Effect Action
    pickUpBallWithGripper b gripper = do
      Empty <- readVar gripper
      robotRoom <- readVar robotLocation
      ballLocation <- readVar b
      guard (ballLocation == InRoom robotRoom)
      writeVar b InGripper
      writeVar gripper HoldingBall
      return PickUpBall

    moveRobotToAdjacentRoom :: Effect Action
    moveRobotToAdjacentRoom = do
      modifyVar robotLocation adjacent
      return SwitchRooms

    dropBall :: Ball -> Gripper -> Effect Action
    dropBall b gripper = do
      HoldingBall <- readVar gripper
      InGripper <- readVar b
      robotRoom <- readVar robotLocation
      writeVar b (InRoom robotRoom)
      writeVar gripper Empty
      return DropBall

  solve
    cfg
    ( [ pickUpBallWithGripper b g | b <- balls, g <- grippers ]
        ++ [ dropBall b g | b <- balls, g <- grippers ]
        ++ [ moveRobotToAdjacentRoom ]
    )
    [ b ?= InRoom RoomB | b <- balls ]


main :: IO ()
main = do
  plan <- runProblem problem
  case plan of
    Nothing ->
      putStrLn "Couldn't find a plan!"
    Just steps -> do
      putStrLn "Found a plan!"
      zipWithM_ (\i step -> putStrLn $ show i ++ ": " ++ show step) [1 :: Int ..] steps


cfg :: Exec.SearchEngine
cfg =
  Exec.AStar Exec.AStarConfiguration
    { evaluator =
        Exec.LMCount Exec.LMCountConfiguration
          { lmFactory =
              Exec.LMExhaust Exec.LMExhaustConfiguration
                { reasonableOrders = False
                , onlyCausalLandmarks = False
                , disjunctiveLandmarks = True
                , conjunctiveLandmarks = True
                , noOrders = False
                }
          , admissible = False
          , optimal = False
          , pref = True
          , alm = True
          , lpSolver = Exec.CPLEX
          , transform = Exec.NoTransform
          , cacheEstimates = True
          }
    , lazyEvaluator = Nothing
    , pruning = Exec.Null
    , costType = Exec.Normal
    , bound = Nothing
    , maxTime = Nothing
    }
```

Obviously, interacting with ListenBrainz requires some sort of IO, so whatever API I offer has to live within some sort of monad. Currently, there are three major options:

- *Supply an API targeting a concrete monad stack.* Under this option, our API would have types mentioning some *particular* monad (or monad transformer) `M`.
- *Supply an API using type classes.* This is the `mtl` approach. Rather than choosing which monad my users have to work in, my API can be polymorphic over monads that support accessing the ListenBrainz API.
- *Use an extensible effects framework.* Extensible effects are a fairly new entry that are something of a mix of the above options. We target a family of concrete monads - `Eff` - but the extensible effects framework lets our effect (querying ListenBrainz) seamlessly compose with other effects. Here I’d use `freer-effects`.

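As a sketch, the three styles might give `getListens` the following shapes (hypothetical signatures; the argument lists are elided with `...` as elsewhere in this post):

```haskell
-- Option 1: a concrete monad (transformer) stack M.
getListens :: ... -> M Listens

-- Option 2: an mtl-style type class.
getListens :: MonadListenBrainz m => ... -> m Listens

-- Option 3: extensible effects, via freer-effects.
getListens :: Member ListenBrainzAPICall effs => ... -> Eff effs Listens
```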
So, which do we choose? Evaluating the options, I have some concerns.

For option one, we impose pain on all users who want to use a different monad stack. It’s unlikely that your application is going to be written solely to query ListenBrainz, which means client code becomes littered with `lift`. You may write that off as syntactic, but there is another problem - we have committed to an interpretation strategy. Rather than describing API calls, my library now skips directly to prescribing how to run API calls. However, it’s entirely possible that you want to intercept these calls - maybe introducing a caching layer or additional logging. Your only option is to duplicate my API into your own project, wrap each function call, and then change your program to use your API rather than mine. Essentially, the program itself is no longer a first-class value that you can transform.

Extensible effects give us a solution to both of the above. The use of the `Member` type class automatically reshuffles effects so that multiple effects can be combined without syntactic overhead, and we only commit to an interpretation strategy when we actually run the program. `Eff` is essentially a *free monad*, which captures the syntax tree of effects rather than the result of their execution.

Sounds good, but extensible effects come with (at least) two problems that make me hesitant: they are experimental and esoteric, and it’s unclear that they are performant. By using *only* extensible effects, I am forcing an extensible effects framework on my users, and I’d rather not dictate that. Of course, extensible effects can be composed with traditional monad transformers, but I’ve still imposed an unnecessary burden on my users.

So, what do we do? Well, as Old El Paso has taught us: why don’t we have both?

It’s trivial to support both a monad transformer stack *and* extensible effects by using an `mtl` type class. As I argue in Monad transformers, free monads, mtl, laws and a new approach, I think the best pattern for an `mtl` class is to be a monad homomorphism from a program description, and often a free monad is a fine choice to lift:

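That is, a class shaped something like this sketch, where the choice of the functor `f` is deliberately left open for now:

```haskell
class Monad m => MonadListenBrainz m where
  liftListenBrainz :: Free f a -> m a
```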
But what about `f`? As observed earlier, extensible effects are basically free monads, so we can actually share the same implementation. For `freer-effects`, we might describe the ListenBrainz API with a GADT such as:

```
data ListenBrainzAPICall returns where
  GetListens :: ... -> ListenBrainzAPICall Listens
  SubmitListens :: ... -> ListenBrainzAPICall ()
```

However, this isn’t a functor - it’s just a normal data type. In order for `Free f a` to actually be a monad, we need `f` to be a functor. We could rewrite `ListenBrainzAPICall` into a functor, but it’s even easier to just fabricate a functor for free - and that’s exactly what `Coyoneda` will do. Thus our `mtl` type class becomes:

```
class Monad m => MonadListenBrainz m where
  liftListenBrainz :: Free (Coyoneda ListenBrainzAPICall) a -> m a
```
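For reference, `Coyoneda` is small enough to sketch inline. This mirrors the idea behind the definition in the kan-extensions package (which also provides the `lowerCoyoneda` and `hoistCoyoneda` used in the instances that follow):

```
{-# LANGUAGE GADTs, RankNTypes #-}

-- Coyoneda pairs an f a with a function out of a; fmap just composes
-- onto that function, so Coyoneda f is a Functor even when f is not
data Coyoneda f b where
  Coyoneda :: (a -> b) -> f a -> Coyoneda f b

instance Functor (Coyoneda f) where
  fmap g (Coyoneda k fa) = Coyoneda (g . k) fa

liftCoyoneda :: f a -> Coyoneda f a
liftCoyoneda = Coyoneda id

lowerCoyoneda :: Functor f => Coyoneda f b -> f b
lowerCoyoneda (Coyoneda k fa) = fmap k fa

hoistCoyoneda :: (forall x. f x -> g x) -> Coyoneda f b -> Coyoneda g b
hoistCoyoneda nat (Coyoneda k fa) = Coyoneda k (nat fa)
```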

We can now provide an implementation in terms of a monad transformer:

```
instance Monad m => MonadListenBrainz (ListenBrainzT m) where
  liftListenBrainz f =
    iterM (join . lowerCoyoneda . hoistCoyoneda go) f
    where
      go :: ListenBrainzAPICall a -> ListenBrainzT m a
```

or extensible effects:

```
instance Member ListenBrainzAPICall effs => MonadListenBrainz (Eff effs) where
  liftListenBrainz f = iterM (join . lowerCoyoneda . hoistCoyoneda send) f
```

or maybe directly to a free monad for later inspection:

For the actual implementation of performing the API call, I work with a concrete monad transformer stack:

which is called both by my extensible effects “run” function and by the `go` function in the `iterM` call for `ListenBrainzT`’s `MonadListenBrainz` instance.

In conclusion, I’m able to offer my users a choice of either:

- a traditional monad transformer approach, which doesn’t commit to a particular interpretation strategy, by using an `mtl` type class
- extensible effects

All without extra syntactic burden, a complicated type class, or duplicating the implementation.

You can see the final implementation of `listenbrainz-client` here.

The ReaderT design pattern has been mentioned recently, so where does this fit in? There are two options if we wanted to follow this pattern:

- We require a HTTP `Manager` in our environment, and commit to using this. This has all the problems of providing a concrete monad transformer stack - we are committing to an interpretation.
- We require a family of functions that explain how to perform each API call. This is kind of like a van Laarhoven free monad, or really just explicit dictionary passing. I don’t see this really gaining much on abstracting with type classes.

I don’t feel like the ReaderT design pattern offers anything that isn’t already dealt with above.

`mtl`-like type classes based on the idea of lifting languages for a given effect into larger monad transformer stacks. This approach felt so mechanical to me I set about exploring a way to formalise it, and am happy to announce a new experimental library – `transformers-eff`.

`transformers-eff` takes inspiration from the work of algebraic effects and handlers, and splits each effect into composable programs for introducing effects, and handlers that eliminate these effects. As the name indicates, this work is also closely related to monad transformer stacks, as they provide the implementation of the specific effects. I believe the novelty in my approach is that we can do this entirely within the system of monad transformers, and this observation makes it very convenient to create re-usable effects.

Before looking at an example, I want to start by presenting the core API. First, we have the `Eff` monad transformer:

If you squint, you’ll see that `Eff` has the familiar shape of a *monad transformer* - it transforms a given monad `m`, providing it access to effects described by `f`. As `Eff f m` is itself a monad, it’s possible to stack `Eff`s together. The type parameter `f` is used to indicate which effects this `Eff` transformer talks about.

Next, the library provides a way to eliminate `Eff` by *translating* it into a concrete monad transformer:

```
translate :: (Monad m, Monad (t m), MonadTrans t)
          => (forall x r. f x -> ContT r (t m) x)
          -> Eff f m a
          -> t m a
```

Translations are defined by a single function that is very similar to the type of “lifts” we saw in my previous blog post. The difference here is that the homomorphism maps into `ContT`, which allows the translation to adjust control flow. For many effects it will be enough to simply `lift` directly into this, but it can be useful to inspect the continuation, for example to build non-deterministic computations.

Finally, we have one type class method:

However, this type class is fairly constrained in its instances, so you should read `m` as actually being some sort of monad transformer stack containing `Eff f`.

Let’s dive in and look at some examples.

Last post we spent a lot of time looking at various representations of the reader monad, so let’s see how this looks under `transformers-eff`.

We already have a definition for our language, `r -> a`, as we saw last week. While we could work directly with this, we’ll be interpreting into `ReaderT` so I’ll use the `Reader` newtype for a little extra readability. Given this language, we just need to write a translation into a concrete monad transformer, which will be `ReaderT`:

```
effToReaderT :: Monad m => Eff (Reader e) m a -> ReaderT e m a
effToReaderT = translate (\r -> lift (hoist generalize r))
```

This is a little dense, so let’s break it down. When we call `translate`, we have to provide a function with the type:
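Specialising the signature of `translate` to this effect, that type is presumably (reconstructing the omitted listing):

```
forall x r. Reader e x -> ContT r (ReaderT e m) x
```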

The `ReaderT r m` part is coming from the type we gave in the call to `translate`, that is - the type of `effToReaderT`. We don’t really need to concern ourselves with continuations for this effect, as reading from a fixed environment does not change the flow of control - so we’ll begin with `lift`. We now have to produce a `ReaderT r m a` from a `Reader r a`. If we notice that `Reader r a = ReaderT r Identity a`, we can make use of the tools in the `mmorph` library, which lets us map that `Identity` to any `m` via `hoist generalize`.

We still need a way to easily introduce these effects into our programs, and that means writing an `mtl` type class. However, the instances require almost no work on our behalf *and* we only have to provide two, making this a very quick process:

```
class (Monad m) => EffReader env m | m -> env where
  liftReader :: Reader env a -> m a

instance Monad m => EffReader env (Eff (Reader env) m) where
  liftReader = interpret

instance {-# OVERLAPPABLE #-} EffReader env m =>
         EffReader env (Eff effects m) where
  liftReader = lift . liftReader
```

I then provide a user-friendly API built on this lift operation:

Finally, most users are probably more interested in running the effect rather than just translating it to `ReaderT`, so let’s provide a convenience function to translate and run all in one go:
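The omitted definition was presumably along these lines (a reconstruction; the name `runReader` is assumed):

```
runReader :: Monad m => Eff (Reader e) m a -> e -> m a
runReader eff e = runReaderT (effToReaderT eff) e
```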

In total, the reader effect is described as:

```
class (Monad m) => EffReader env m | m -> env where
  liftReader :: Reader env a -> m a

instance Monad m => EffReader env (Eff (Reader env) m) where
  liftReader = interpret

instance {-# OVERLAPPABLE #-} EffReader env m =>
         EffReader env (Eff effects m) where
  liftReader = lift . liftReader

ask :: EffReader e m => m e
ask = liftReader (Reader id)

effToReaderT :: Monad m => Eff (Reader e) m a -> ReaderT e m a
effToReaderT = translate (\r -> lift (hoist generalize r))
```

We also looked at a logging effect last week, and this can also be built using `transformers-eff`:

```
data LoggingF message a = Log message deriving (Functor)

class (Monad m) => EffLog message m | m -> message where
  liftLog :: Free (LoggingF message) a -> m a

instance Monad m => EffLog message (Eff (Free (LoggingF message)) m) where
  liftLog = interpret

instance {-# OVERLAPPABLE #-} EffLog message m =>
         EffLog message (Eff effects m) where
  liftLog = lift . liftLog

log :: EffLog message m => message -> m ()
log = liftLog . liftF . Log

runLog :: (MonadIO m)
       => Eff (Free (LoggingF message)) m a
       -> (message -> IO ())
       -> m a
runLog eff io =
  runIdentityT (translate (iterM (\(Log msg) -> liftIO (io msg))) eff)
```

The interpretation here is given an `IO` action to perform whenever a message is logged. I could have implemented this in a few ways - perhaps lifting the whole computation into `ReaderT (message -> IO ())`, but instead I have just used `IdentityT` as the target monad transformer, and added a `MonadIO` constraint onto `m`. Whenever a message is logged, we’ll directly call the given `IO` action. As you can also see, I’ve used a free monad as the source language for the effect. This example demonstrates that we are free to mix a variety of tools (here free monads, `MonadIO` and the identity transformer) in order to get the job done.

We saw above that when we introduced our `EffLog` type class, it was immediately available for use alongside `EffReader` effects - and we didn’t have to do anything extra! To me, this is a huge win - I frequently find myself frustrated with the amount of work required when composing many different projects together with `mtl`, and this is not just a theoretical frustration. To provide just one example from today, I wanted to use `ListT` with some Yesod code that required `MonadLogger`. There is obviously no `MonadLogger` instance for `ListT`, and it’s almost unsolvable to provide such an instance without using orphan instances - neither one of those libraries should need to depend on the other, so we’re stuck! If you stay within `Eff`, this problem doesn’t occur.

Many will be quick to point out that in `mtl` it doesn’t necessarily make sense to have all transformers compose due to laws (despite the lack of any laws actually being stated…), and I’m curious if this is true here. In this library, due to the limitation on having to write your effectful programs based on an underlying algebra, I’m not sure it’s possible to introduce the problematic type class methods like `local` and `catch`.

In the `mtl` approach a single monad transformer stack might be able to deal with a whole selection of effects in one go. However, I’ve found that this can actually make it quite difficult to reason about the flow of code. To provide an example, let’s consider this small API:

```
findOllie :: (MonadDb m, MonadPlus m) => m Person
findOllie = do
  x <- dbLookup (PersonId 42)
  guard (personName x == "Ollie")
  return x

type QueryError = String

dbLookup :: (MonadDb m, MonadError QueryError m) => PersonId -> m Person

data DbT m a
instance Monad m => Monad (DbT m)
instance Monad m => MonadDb (DbT m)

runDb :: (MonadIO m) => DbT m a -> m a
```

If we just try and apply `runDb` to `findOllie`, we’ll get

`runDb findOllie :: (MonadError QueryError m, MonadIO m, MonadPlus m) => m Person`

We still need to take care of `MonadError` and `MonadPlus`. For `MonadError` I’ll use `ExceptT`, and for `MonadPlus` I’ll use `MaybeT`:

`runMaybeT (runExceptT (runDb findOllie)) :: IO (Maybe (Either QueryError Person))`

Next, let’s consider a few scenarios. Firstly, the case where everything succeeds -

```
> runMaybeT (runExceptT (runDb findOllie))
Just (Right Person ...)
```

However, that query could fail, which would cause an error

```
> runMaybeT (runExceptT (runDb findOllie))
Just (Left "Table `person` not found")
```

Still as expected. Finally, person 42 might not actually be me, in which case we get

```
> runMaybeT (runExceptT (runDb findOllie))
Just (Left "")
```

Huh? What’s happened here is that we’ve hit the `MonadPlus` instance for `ExceptT`, and because our `QueryError` is a `String` we have a `Monoid` instance, so we were given an “empty” error. This is not at all what we were expecting!
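The empty error can be reproduced in isolation (a minimal sketch, with no dependency on the hypothetical `MonadDb` API):

```
import Control.Monad (mzero)
import Control.Monad.Trans.Except (ExceptT, runExceptT)

-- mzero for ExceptT defers to the Monoid instance of the error type,
-- so with String errors a failed guard turns into Left ""
surprise :: IO (Either String Int)
surprise = runExceptT mzero
```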

While this example is a contrived one, I am very nervous that this accidental choice of instances could happen deep within another section of code, for example where I expect to do some local error handling and accidentally eliminate a chance of failure that I was expecting to deal with elsewhere.

In `transformers-eff` this is not possible, as each `Eff` deals with one *and only one* effect at a time. While this could be done with `mtl` by introducing a separate type class for failure and only adding an instance for `MaybeT`, we would be working around the problem by convention, and I would much rather bake that into the types.

The underlying implementation of `Eff` is built on top of continuations, and due to aggressive inlining, GHC is able to work some serious magic. In fact, in all the benchmarks I’ve produced so far, `Eff` is as fast as `transformers`, and even comes out slightly faster in one (though within the same order of magnitude).

As `Eff` is just another monad transformer, you can stack in other monad transformers. Note that by doing this you may lack the type class instances you need, so explicit `lift`ing might be necessary. I mainly expect this being useful by putting `Eff` “on the top” - for example I can use `Eff` locally within a `Snap` monad computation, provided I eventually run back down to just `Snap`. This is the same pattern as locally using `transformers`.

- **Extensible**. The approach we take should be *open*, allowing us to define new effects.
- **Composable**. It should be possible to mix different effects with well defined, predictable behaviour.
- **Efficient**. We should only have to pay a minimal cost for the use of the abstraction.
- **Terse**. Haskell is generally not verbose, and whatever system we use should allow us to avoid excessive verbosity. The system should work with us; we should not have to work for it.

I would also add a fifth point:

- **Inferable**. Type annotations should not be required for successful compilation.

With this list in mind, what are the current solutions, and how do they measure up?

Starting with the most basic, we can simply choose a concrete monad that does everything we need and work entirely in that – which is usually going to be `IO`. In a sense this is composable – certainly all programs in one monad compose together – but it’s composable in the same sense that dynamically typed languages fit together. Often choosing a single monad for each individual computation is too much, and it becomes very difficult to work out exactly what effects are being used in our individual functions: does this computation use `IO`? Will it throw exceptions? Fork threads? You don’t know without reading the source code.

Building a concrete monad can also be a lot of work. Consider a computation that needs access to some local state, a fixed environment and arbitrary `IO`. This has a type such as
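The omitted type was presumably along these lines (a sketch; `Environment` and `State` are assumed placeholder types, and the `runM` accessor is added for illustration):

```
type Environment = String
type State = Int

-- reader + state + IO, written out by hand as one function
newtype M a = M { runM :: Environment -> State -> IO (a, State) }
```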

However, to actually interact with the rest of the Haskell ecosystem we need to define (at least) instances of `Functor`, `Applicative` and `Monad`. This is boilerplate code and entirely determined by the choice of effects – and that means we should strive to have the compiler write it for us.

To combat this, we can make use of monad transformers. Unlike monads, monad transformers compose, which means we can build larger monads by stacking a collection of monad transformers together. The above monad `M` can now be defined using off-the-shelf components, but crucially we can derive all the necessary type classes in one fell swoop with the `GeneralizedNewtypeDeriving` language extension:

```
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype M a = M (ReaderT Environment (StateT State IO) a)
  deriving (Functor, Applicative, Monad)
```

This saves typing considerably, and is a definite improvement. We’ve achieved more of points 1 and 2 (extensibility and composability) by having both programs *and* effects compose. Point 4 (terseness) is improved by the use of `GeneralizedNewtypeDeriving`. There is a slight risk in terms of efficiency, but I believe if `transformers` would just `INLINE` a few more definitions, the cost can be entirely erased. All of this code will infer as we’d expect, as we’re working entirely with explicit types.

However, while we had to type less to *define* the effects, we have to type more to *use* the effects! If we want to access the environment for example, we can use the `ask` operation from `Control.Monad.Trans.Reader`, but we have to wrap this up in the `M` `newtype`:

However, if we want to retrieve the current state in the computation, we can use `get` from `Control.Monad.Trans.State`, but we also have to `lift` that into the `ReaderT` monad that is wrapping `StateT`:
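The two omitted listings (wrapping `ask`, and `lift`ing `get`) can be sketched together; `Environment`, `State` and the `runM` helper are assumptions for illustration:

```
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
import Control.Monad.Trans.State (StateT, get, evalStateT)

type Environment = String
type State = Int

newtype M a = M (ReaderT Environment (StateT State IO) a)
  deriving (Functor, Applicative, Monad)

-- ReaderT is outermost, so ask needs no lifting
env :: M Environment
env = M ask

-- but get has to be lifted through ReaderT to reach StateT
currentState :: M State
currentState = M (lift get)

runM :: M a -> Environment -> State -> IO a
runM (M m) e s = evalStateT (runReaderT m e) s
```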

This is unfortunate – `lift` is mostly noise that we don’t want to be concerned with. There is also the problem that the amount of `lift`s to perform is tied directly to the underlying definition of `M`. If I later decide I want to layer in the chance of failure (perhaps with `MaybeT`), I now have to change almost *all* code using `lift`, by adding an extra one in!

`lift` is a mechanical operation that is determined by the type of the monad transformer stack and the operation that we want to perform. As we noted, for different stacks the amount of `lift`ing will vary, but it is determined by the type of the stack. This suggests that these `lift`s could be inferred by the use of type classes, and this is the purpose of the monad transformer library – `mtl`.

The `mtl` is a library consisting of type classes that abstract over the operations provided by each monad transformer. For `ReaderT`, we have the `ask` operation, and likewise for `StateT` we have `get` and `put` operations. The novelty in this library is that the instances for these type classes are defined inductively over monad transformer stacks. A subset of the instances for `MonadReader`, for example, show

```
class MonadReader r m | m -> r where
  ask :: m r

instance Monad m => MonadReader r (ReaderT r m) where
  ask = Control.Monad.Trans.Reader.ask

instance (MonadReader r m) => MonadReader r (StateT s m) where
  ask = lift ask
```

We can read this as:

- a *base case*: if the outermost transformer is `ReaderT`, then no `lift`ing has to be performed.
- an *inductive case*: if we know there is a `MonadReader` instance somewhere within the stack (that is, somewhere in the stack we are using `ReaderT`), then the outer monad transformer (in this case `StateT`) is also an instance of `MonadReader`, by simply passing those operations through to the underlying instance via one application of `lift`.

With these instances, the lifting now happens automatically at the use of the respective operations. But not only does it become easier to use the operations, our programs also become more generic and easier to reason about. For example, while `env` previously had the type `M Environment`, it could now generalise to simply
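The omitted signature is presumably:

```
env :: MonadReader Environment m => m Environment
```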

stating that `env` is reusable in *any* computation that has access to `Environment`. This leads to both more options for composition (we’re not tied to working in `M`), but also types that are more expressive of what effects are actually being used by the computation. In this case, we didn’t use `StateT`, so we didn’t incur a `MonadState` type class constraint on `m`.

Type classes open up a risk of losing type inference, and the approach in `mtl` is to use functional dependencies in order to retain it. This comes at a compositional cost – the selected effect proceeds by induction from the outermost monad transformer until we reach the *first* matching instance. This means that even if there are multiple possible matches, the first one encountered will be selected. The following program demonstrates this, and will fail to type check:
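The omitted program was presumably something along these lines (a reconstruction, inferred from the error message that follows): an outer `ReaderT Int` hides the `ReaderT String` we want to `ask` from:

```
broken :: ReaderT Int (ReaderT String IO) String
broken = ask
```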

```
Couldn't match type ‘Int’ with ‘[Char]’
arising from a functional dependency between:
constraint ‘MonadReader String (ReaderT Int (ReaderT String IO))’
arising from a use of ‘ask’
```

When we used `ask`, induction proceeded from the outermost transformer – `ReaderT Int`. This is an instance of `MonadReader`, and due to the functional dependency it will be selected even though it doesn’t contain the `String` that we’re looking for. This manifests as a type error, which can be frustrating.

In practice, I’m not convinced this is really a problem, but in the scenario where environments don’t match up we have a few options:

- Adapt the environment with tools like `mapReaderT` or `magnify` (from `lens`).
- Use `monad-classes`, which uses a little more type level computation to allow this to work. I’m not entirely sure what the story for inference is here, but I think there *may* be a risk of less inference.
- Forgo the functional dependencies, as in `mtl-unleashed`. In this case you really do give up type inference, and I don’t consider it a viable option (it fails to satisfy point 5 in my criteria in the intro).

Interestingly, the generality we gained by being polymorphic over our choice of monad also opens the door to something we couldn’t do with monad transformers, which is to choose a different implementation of the type class. For example, here’s a different implementation of `MonadReader` for `M`:

```
instance MonadReader Environment M where
  ask = do
    env <- M ask
    liftIO (putStrLn "Requesting environment")
    liftIO (putStrLn ("It is currently " ++ show env))
    return env
```

While a slightly contrived example, we see that we now have the ability to provide a different interpretation for `ask` which makes use of the underlying `IO` in `M` by logging whenever a computation looks at the environment. This technique is even more useful when you start defining domain specific effects, as it gives you the option to provide a pure variant that uses mock data, which can be useful for unit testing.

Let’s move away from monad transformer stacks and see what the other options are. One option that’s getting a lot of attention is the use of *free monads*. A free monad is essentially a type of construction that adds just enough structure over some data in order to have the structure of a monad – and nothing extra. We spend our days working with monads, and the reason the approach afforded by free monads is appealing is due to the way that we build them – namely, we just specify the syntax! To illustrate this, let me consider the almost traditional example of free monads, the syntax of “teletype” programs.

To begin with, I have to define the syntax of teletype programs. These programs have access to two operations - printing a line to the screen, and reading a line from the operator.
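The omitted listing presumably looked like this (a reconstruction; the constructor shapes are inferred from the later uses of `GetLine id` and `PrintLine l ()`):

```
{-# LANGUAGE DeriveFunctor #-}

-- the syntax functor for teletype programs; the parameter a is the
-- continuation, which is what lets programs be chained together
data TeletypeF a
  = PrintLine String a
  | GetLine (String -> a)
  deriving (Functor)
```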

This functor defines the syntax of our programs – namely programs that read and write to the terminal. The parameter `a` allows us to chain programs together, such as this `echo` program that prints whatever the user types:

However, this is kind of messy. The free monad construction allows us to generate a monad out of this functor, which provides the following presentation:

```
echo :: Free TeletypeF ()
echo = do
  l <- getLine
  printLine l

getLine :: Free TeletypeF String
getLine = liftF (GetLine id)

printLine :: String -> Free TeletypeF ()
printLine l = liftF (PrintLine l ())
```

This definition of `echo` looks much more like the programs we are used to writing.

The remaining step is to provide an interpretation of these programs, which means we can actually run them. We can interpret our teletype programs by using `STDOUT` and `STDIN` from `IO`:

```
runTeletype :: Free TeletypeF a -> IO a
runTeletype =
  iterM (\op ->
    case op of
      GetLine k -> Prelude.getLine >>= k
      PrintLine l k -> putStrLn l >> k)
```

This rather elegant separation between syntax and semantics suggests a new approach to writing programs – rather than working under a specific monad, we can instead work under a free monad for some suitable functor that encodes all the operations we can perform in our programs.

That said, the approach we’ve looked at so far is not particularly extensible between different classes of effects, as everything is currently required to be in a single functor. Knowing that free monads are generated by functors, we can start to look at the constructions we can perform on functors. One very nice property of functors is that given *any* two functors, we can compose them. The following functors witness three possible ways to combine functors:

```
data Sum f g a = InL (f a) | InR (g a) deriving (Functor)
data Product f g a = Product (f a) (g a) deriving (Functor)
newtype Compose f g a = Compose (f (g a)) deriving (Functor)
```

Assuming `f` and `g` are `Functor`s, all of these are also `Functor`s – which means we can use them to build monads with `Free`.

The most interesting of these constructions (for our purposes) is `Sum`, which lets us choose between two different `Functor`s. Taking a more concrete example, I’ll repeat part of John A. De Goes’ “Modern FP” article. In this, he defines two independent functors – one for programs that can access files in the cloud, and another for programs that can perform basic logging.

```
data CloudFilesF a
  = SaveFile Path Bytes a
  | ListFiles Path (List Path -> a)
  deriving (Functor)

data LoggingF a
  = Log Level String a
  deriving (Functor)
```

Both of these can now be turned into monads with `Free` as we saw before, but we can also combine both of these to write programs that have access to both the `CloudFilesF` API *and* `LoggingF`:

However, in order to use our previous API, we’ll have to perform another round of lifting:

```
-- API specific to individual functors
log :: Level -> String -> Free LoggingF ()
log l s = liftF (Log l s ())

saveFile :: Path -> Bytes -> Free CloudFilesF ()
saveFile p b = liftF (SaveFile p b ())

-- A program using multiple effects
saveAndLog :: Free (Sum CloudFilesF LoggingF) ()
saveAndLog = do
  liftLeft (log Info "Saving...")
  liftRight (saveFile "/data" "\0x42")

-- Lifting operations
liftLeft :: Free f a -> Free (Sum f g) a
liftLeft = hoistFree InL

liftRight :: Free g a -> Free (Sum f g) a
liftRight = hoistFree InR
```

This is a slightly unfortunate outcome – while we’ve witnessed that there is extensibility, without more work the approaches don’t compose particularly well.

Having to lift everything leads us to the need for an `mtl`-like solution in the realm of free monads – that is, a system that automatically knows how to lift individual functors into our composite functor. This is essentially what’s happening in the `extensible-effects` library – as a user you define each individual `Functor`, and then `extensible-effects` provides the necessary type class magic to combine everything together.

We should also mention something on efficiency while we’re here. Free monads have at least two presentations that have different use cases. One of these is extremely easy to inspect (that is, write interpreters for) but has a costly implementation of `>>=`. We know how to solve this problem, but the trade off switches over to being costly to inspect. Recently, we learnt how to perform reads and binds in linear time, but the constant factors are apparently a little too high to be competitive with raw `transformers`. So all in all, there is an efficiency cost of *just working with* a free monad approach.

`mtl` and laws

I want to now return to the monad transformer library. To recap, the definition of `MonadReader` is –

But this alone makes me a little uneasy. Why? I am in the class of Haskellers who consider a type class without a law a smell, as it leaves us unable to reason about what the type class is even doing. For example, it doesn’t require much imagination to come up with nonsense implementations of `ask`:

```
newtype SomeM a = SomeM (StateT Int IO a)
  deriving (Functor, Applicative, Monad)

instance MonadReader Int SomeM where
  ask = SomeM $ do
    i <- get
    put (i + 1)
    return i
```

But then again – who’s to say this is nonsense? Given that we were never given a specification for what `ask` should do in the first place, this is actually perfectly reasonable! For this reason, I set out searching for a way to reason about `mtl`-style effects, such that we could at least get *some* laws.

The `transformers` library also gives us `mtl`-like type classes, one of which is `MonadIO`. However, this type class does have laws as well:

```
-- liftIO . return = return
-- liftIO (f >>= g) = liftIO f >>= liftIO . g
class MonadIO m where
  liftIO :: IO a -> m a
```

These laws are an example of a *homomorphism*. To quote Wikipedia on the subject:

> A homomorphism is a structure-preserving map between two algebraic structures (such as groups, rings, or vector spaces).

In this case the algebraic structure is the monad structure of `IO`. We see that any monad that is an instance of `MonadIO` has the ability to lift `IO` operations, and as this is a homomorphism, the laws state that it will preserve the underlying structure of `IO`.

It’s currently unclear how to apply this type of reasoning to `MonadReader`, given its current definition – `ask` is just a value, it doesn’t even take an argument – so how can we even try and preserve anything?

Let’s take some inspiration from free monads, and consider the effect language for `MonadReader`. If we only have `(Monad m, MonadReader r m)`, then the only thing we can do on top of the normal monad operations is `ask` the environment. This suggests a suitable functor would be:
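The omitted functor is presumably (a reconstruction, inferred from the later `liftF (Ask id)` and `Free (AskF r)` references):

```
{-# LANGUAGE DeriveFunctor #-}

data AskF r a = Ask (r -> a)
  deriving (Functor)

-- wrapped in Free (from the free package), this gives the Ask monad:
-- type Ask r = Free (AskF r)
```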

I can now wrap this up in `Free` in order to write programs with the ability to `ask`:

Now we have an algebraic structure with properties (`Ask r` is a `Monad`) that we would like to preserve, so we can write this alternative form of `MonadReader`:

```
-- liftAsk . return = return
-- liftAsk (f >>= g) = liftAsk f >>= liftAsk . g
class Monad m => MonadReader r m | m -> r where
  liftAsk :: Ask r a -> m a

ask :: MonadReader r m => m r
ask = liftAsk (liftF (Ask id))
```

Et voilà! We now have an equally powerful `MonadReader` type class, except this time we have the ability to reason about it and its instances. If we return to the instance that I was questioning earlier, we can redefine it under the new API:

```
instance MonadReader Int SomeM where
  liftAsk askProgram = SomeM $ do
    x <- get
    out <- iterM (\(Ask k) -> return (k x)) askProgram
    put (x + 1)
    return out
```

Now that we have some laws, we can ask: is this a *valid* definition of `MonadReader`? To check, we’ll use equational reasoning. Working through the first law, we have

```
liftAsk (return a)
  = { definition of return for Free }
liftAsk (Pure a)
  = { definition of liftAsk for SomeM }
SomeM $ do
  x <- get
  out <- iterM (\(Ask k) -> return (k x)) (Pure a)
  put (x + 1)
  return out
  = { evaluate iterM for Pure a }
SomeM $ do
  x <- get
  out <- return a
  put (x + 1)
  return out
  = { monad laws }
SomeM $ do
  x <- get
  put (x + 1)
  return a
```

Already we have a problem. While we can see that this does return the original `a` it was given, it does so in a way that also incurred some side effects. That is, `liftAsk (return a)` is *not* the same as `return a`, so this isn’t a valid definition of `MonadReader`. Back to the drawing board… Now, it’s worth noting that there is an instance that *is* law abiding, but might still be considered surprising:

```
instance MonadReader Int SomeM where
  liftAsk askProgram =
    iterM (\(Ask k) -> SomeM $ do
            x <- get
            put (x + 1)
            k x)
          askProgram
```

Applying the same equational reasoning to this is much easier, and shows that the first law is satisfied

```
liftAsk (return a)
  = { definition of liftAsk }
iterM (\(Ask k) -> SomeM $ do
        x <- get
        put (x + 1)
        k x)
      (return a)
  = { definition of return for Free }
iterM (\(Ask k) -> SomeM $ do
        x <- get
        put (x + 1)
        k x)
      (Pure a)
  = { definition of iterM given Pure }
return a
```

For the second law, I’ll omit the proof, but I want to demonstrate two sessions in GHCi:

```
> let runSomeM (SomeM m) = evalStateT m 0
> runSomeM (liftAsk (ask >>= \r1 -> ask >>= \r2 -> return (r1, r2)))
(0, 1)
> runSomeM (liftAsk ask >>= \r1 -> liftAsk ask >>= \r2 -> return (r1, r2))
(0, 1)
```

So while the answers agree - they probably don’t agree with your intuition! This is only surprising in that we have some assumption of how `Ask` programs should behave. Knowing more about `Ask`, we might seek this further law:

`ask >> ask = ask`

This law can also be seen as a reduction step in the classification of our `Ask` programs, but a `Free` monad is not powerful enough to capture that. Indeed, the documentation of `Free` mentions exactly this:

> A free `Monad` is one that does no work during the normalisation step beyond simply grafting the two monadic values together. `[]` is not a free `Monad` (in this sense) because `join [[a]]` smashes the lists flat.

The law `ask >> ask = ask` follows by normalisation of our “reader” programs, so a free monad will be unable to capture that by construction – the best we can do is add an extra law to our type class. However, what we can also do is play a game of normalisation by evaluation. First, we write an evaluator for `Free (AskF r)` programs:

and then witness that we can reify these `r -> a` terms back into `Free (AskF r) a`:

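Concretely, the evaluator and the reverse direction might look like this (a sketch: `Free` and `AskF` are inlined here so the snippet is self-contained, and are assumed to match the definitions from earlier in the post):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Inlined so the snippet stands alone; assumed to match the earlier definitions.
data Free f a = Pure a | Free (f (Free f a))

data AskF r a = Ask (r -> a)
  deriving (Functor)

-- Evaluate a reader program to its normal form: a plain function r -> a.
runAsk :: Free (AskF r) a -> (r -> a)
runAsk (Pure a)       _ = a
runAsk (Free (Ask k)) r = runAsk (k r) r

-- Reify a normal form back into the free language.
reify :: (r -> a) -> Free (AskF r) a
reify f = Free (Ask (Pure . f))
```

Note that `runAsk` maps `ask >> ask` and `ask` to the very same `r -> a` function, which is precisely where the `ask >> ask = ask` law comes from.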
You should also convince yourself that `(r -> a)` really is a normal form, and you may find the above linked article on this useful for formal proofs (search for “normalisation”). What we’ve essentially shown is that *every* `Free (AskF r) a` program can be expressed as a single `r -> a` function. The normal form of `ask >> ask` is now - by definition - a single `ask`, which is the law we were originally having to state.

As we’ve witnessed that `r -> a` is the normal form of `Free (AskF r) a`, this suggests that we could just as well write:

```
-- liftAsk . return = return
-- liftAsk (f >>= g) = liftAsk f >>= liftAsk . g
class MonadReader r m | m -> r where
  liftAsk :: (r -> a) -> m a
```

(The structure being preserved by the homomorphism here assumes that `(r -> a)` is the reader monad.)

Our strange instance now becomes

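A plausible rendering of that instance (a sketch: `SomeM` is modelled here as a plain `State Int` computation, dropping the inessential `IO` layer; bumping the counter *before* reading it is an assumption chosen to line up with the GHCi results shown below):

```haskell
{-# LANGUAGE FlexibleInstances          #-}
{-# LANGUAGE FunctionalDependencies     #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses      #-}

import Control.Monad.State (State, evalState, gets, modify)

-- The normal-form type class from above.
class MonadReader r m | m -> r where
  liftAsk :: (r -> a) -> m a

newtype SomeM a = SomeM (State Int a)
  deriving (Functor, Applicative, Monad)

runSomeM :: SomeM a -> a
runSomeM (SomeM m) = evalState m 0

-- The strange instance, against the normal form: bump the counter,
-- then apply the lifted function to it.
instance MonadReader Int SomeM where
  liftAsk f = SomeM (modify (+ 1) >> gets f)
```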
With a little scrutiny, we can see that this is not going to satisfy the homomorphism laws. Not only does it fail to satisfy the `return` law (for the same reason), the second law states that `liftAsk (f >>= g) = liftAsk f >>= liftAsk . g`. Looking at our implementation, this would mean that we would have to increase the state based on the number of binds performed in `f >>= g`. However, we also know that `>>=` for `r -> a` simply reduces to another `r -> a` function - the implication being that it’s impossible to know how many binds were performed.

Here a counter example will help convince us that the above is wrong. First, we know `liftAsk (ask >> ask) = liftAsk ask`, because `ask >> ask = ask` by definition.

By the homomorphism laws, we must also have

`liftAsk (ask >> ask) = liftAsk ask >> liftAsk ask`

Combining these, we expect

`liftAsk ask = liftAsk (ask >> ask) = liftAsk ask >> liftAsk ask`

However…

```
> runSomeM (liftAsk ask)
1
> runSomeM (liftAsk (ask >> ask))
1
> runSomeM (liftAsk ask >> liftAsk ask)
2
```

Now we can see that `SomeM`’s current definition of `MonadReader` fails. It’s much harder to write a law-abiding form of `MonadReader Int SomeM` - it will essentially require some *fixed* data throughout the scope of the computation. The easiest fix is of course to change the definition of `SomeM`:

```
newtype SomeM a = SomeM (ReaderT Int IO a)

instance MonadReader Int SomeM where
  liftAsk f = SomeM (fmap f ask)
```

You should convince yourself that this instance is now law abiding - for example by considering the above counter-example, or by performing equational reasoning.

The process we underwent to reach the new form of the `MonadReader` type class extends well to many different type classes, and suggests a new pattern for `mtl`-like type class operations. Here’s a rough framework that I’m having a lot of success with:

To begin, think about the language that your effect will talk about. For the reader monad, we defined the `AskF` functor, and the same can be done for the exception monad, the failure monad, the state monad, and so on. For more “domain specific” operations, a free monad also scales well - one could imagine a language for interacting with general relational databases, with operations to `SELECT`, `UPDATE`, `DELETE`, and so on.

Individual operations are not enough; we also need a way to write programs using this language. This amounts to finding a suitable way to compose these operations together. An easy first approximation is to use a free structure, again – as we started with for the reader monad. In the case of the aforementioned domain specific relational database example, the free monad might be as far as we want to go.

It’s also worth exploring if there is a normal form that more succinctly captures the operations in your language along with equational reasoning. We saw that the normal form of `Free (AskF r) a` was `r -> a`, and the same process can be run for `Free (StateF s) a` - reaching `s -> (a, s)` as a normal form. It’s important to note that if you go through the process of normalisation by evaluation, you should also make sure you can reify your evaluation result back into the original language. To illustrate why, consider the hypothetical relational database language:

```
data DatabaseF a = Query SqlQuery (Results -> a)
  deriving (Functor)

runDb :: Free DatabaseF a -> (DatabaseHandle -> IO a)
runDb m h = iterM (\(Query q k) -> query h q >>= k) m
```

This is fine for an interpreter, but `DatabaseHandle -> IO a` is not a normal form, because we can’t reify these terms *back* into `DatabaseF`. This is important, because working with a normal form means that you can define a whole range of interpreters that see the necessary structure of the original programs. To illustrate one problem with `DatabaseHandle -> IO a`: if we attempted to write a pure interpreter, we would be unable to see which queries were performed in order to produce the data under `a` (not to mention the limitation that working in `IO` would cause).

With your effect language defined, the next step is to define a type class for homomorphisms from this effect language into larger monad stacks. Often this will be a monad homomorphism – much as we saw with `MonadReader` and `MonadIO` – but the homomorphism need not be a monad homomorphism. For example, if your source effect language is a simple monoid, then the homomorphism will be a monoid homomorphism. We’ll see an example of this shortly.

With a type class of homomorphisms, we can now export a cleaner API. For `MonadReader`, this means exporting convenience `ask` operations that are defined in terms of `liftAsk` with the appropriate program in our `AskF` language.

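With the normal-form class, `ask` amounts to lifting the identity function (a sketch; `asks` and the throwaway `Demo` instance are only here to make the example self-contained, and are not part of the post’s API):

```haskell
{-# LANGUAGE FlexibleInstances      #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE MultiParamTypeClasses  #-}

class MonadReader r m | m -> r where
  liftAsk :: (r -> a) -> m a

-- The convenience operations, defined once against the class:
ask :: MonadReader r m => m r
ask = liftAsk id

asks :: MonadReader r m => (r -> a) -> m a
asks = liftAsk

-- A throwaway instance, for demonstration purposes only:
newtype Demo r a = Demo { runDemo :: r -> a }

instance MonadReader r (Demo r) where
  liftAsk = Demo
```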
I also suggest providing a “reference” implementation of this type class. For `MonadReader`, this reference implementation is `ReaderT`. The idea is that users can immediately take advantage of the effect we’re defining by introducing the appropriate monad transformer into their monad stack.

The type class allows them to more efficiently define the operations in terms of existing monadic capabilities (e.g., `IO`), but for many, simply reusing a transformer will be sufficient.

To conclude this article, I want to explore one more application of this pattern, applied to building a logging effect. In fact, it is this very problem that motivated the research for this blog post, and so we’ll end up building the foundations of my `logging-effect` library.

The first step is to identify a language for programs that can perform logging. There’s not much involved here, simply the ability to append to the log at any point in time. Let’s formalise that idea with the appropriate functor:

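A reconstruction of that functor, consistent with the `AppendLogMessage` pattern matched in `runFreeLogging` below:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A log message to append, plus the rest of the computation to run.
data LoggingF message a = AppendLogMessage message a
  deriving (Functor)
```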
This functor is parameterised by the type of log messages. The only constructor for `LoggingF` takes a log message and the rest of the computation to run. We could stop here and lift `Free (LoggingF message) a` programs, but I want to go a bit further and see if there are any other ways to express this. I’ll use normalisation by evaluation again, and see what happens.

```
runFreeLogging :: Free (LoggingF message) a -> (a, [message])
runFreeLogging (Pure a) = (a, [])
runFreeLogging (Free (AppendLogMessage m next)) =
  case runFreeLogging next of
    (a, messages) -> (a, m : messages)
```

We can also take a `(a, [message])` and turn it back into the equivalent `Free (LoggingF message) a`, so `(a, [message])` is another candidate for the language of our logging programs.

But this `a` bothers me. It occurs only in `LoggingF message` to capture the rest of the computation, but never does the result of logging affect the choice of what that next computation is. This suggests that it’s mostly noise, and maybe we can just erase it. This would lead us to have logging programs of the type `[message]`. This type is no longer the right kind for our lifting operation to be a monad homomorphism, which means we have to identify another algebraic structure. Well, lists are certainly a composable structure - they have all the properties of a *monoid*.

With that in mind, we need to consider what it means to be a monoid homomorphism into some monad. First, observe that monads also have monoid-like operations:

```
monadMempty :: Monad m => m ()
monadMempty = return ()

monadMappend :: Monad m => m () -> m () -> m ()
monadMappend l r = l >> r
```

We can now write our lifting type class with the laws of a monoid homomorphism:

```
-- liftLog mempty = mempty (= return ())
-- liftLog (x <> y) = liftLog x <> liftLog y (= liftLog x >> liftLog y)
class MonadLog message m | m -> message where
  liftLog :: [message] -> m ()
```

While we reached this type by normalisation-by-evaluation and then a little bit of fudging, there is another way we could have got here. In a sense, `[]` can be seen as another construction like `Free` - given any type `a`, `[a]` is a free monoid generated by `a`. An easier route to this type class would have been to describe the individual operations in our logging programs by:

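That is, a single operation carries nothing more than the message to append (the constructor and field names here are guesses):

```haskell
-- One logging operation: just the message itself.
newtype LoggingOp message = LoggingOp { loggingOpMessage :: message }
```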
and then using `[]` as our free construction. As `LoggingOp message` ~ `Identity message` ~ `message`, we know we could also use `[message]`, and we’re back at the type class above.

(In my `logging-effect` library I chose a slightly different representation of the free monoid. Theoretically, this is a sounder way to talk about free monoids, but I’m mostly interested in the slight efficiency win by not having to build up lists only to immediately deconstruct them.)

The last steps are to provide polymorphic operations and a reference implementation that satisfies the laws:

```
logMessage :: (MonadLog message m) => message -> m ()
logMessage message = liftLog [message]

newtype LoggingT message m a = LoggingT (ReaderT (message -> IO ()) m a)

instance MonadIO m => MonadLog message (LoggingT message m) where
  liftLog messages = LoggingT (ReaderT (\dispatchLog -> liftIO (for_ messages dispatchLog)))
```

Does this reference implementation satisfy the monoid homomorphism laws required by `MonadLog`?

```
liftLog mempty
= { definition of mempty for lists }
liftLog []
= { definition of liftLog for LoggingT }
LoggingT (\dispatchLog -> liftIO (for_ [] dispatchLog))
= { definition of for_ for [] }
LoggingT (\dispatchLog -> liftIO (return ()))
= { liftIO . return = return }
LoggingT (\dispatchLog -> return ())
= { definition of return for LoggingT }
return ()
```

So far so good!

```
liftLog (x <> y)
= { definition of liftLog for LoggingT }
LoggingT (\dispatchLog -> liftIO (for_ (x ++ y) dispatchLog))
= { for_ distributes over ++ }
LoggingT (\dispatchLog -> liftIO (for_ x dispatchLog >> for_ y dispatchLog))
= { liftIO (f >>= g) = liftIO f >>= liftIO . g }
LoggingT (\dispatchLog -> liftIO (for_ x dispatchLog) >> liftIO (for_ y dispatchLog))
= { definition of (>>=) for LoggingT }
LoggingT (\dispatchLog -> liftIO (for_ x dispatchLog)) >>
LoggingT (\dispatchLog -> liftIO (for_ y dispatchLog))
= { definition of liftLog for LoggingT }
liftLog x >> liftLog y
```

Bingo!

In this post I presented a pattern for building `mtl`-like type classes in a mechanical fashion, and this suggests that maybe some of the details can be automatically dealt with. In the next few days I’ll be presenting my `algebraic-transformers` library, which will show exactly that.

For a while, we’ve had bindings to SDL 2 on Hackage, but these bindings are as close to 1:1 as you can get in Haskell. This results in a library that certainly *can* be used in Haskell, but does not feel particularly like writing ordinary Haskell! A real concern here is that this raises the barrier to entry for those new to either game programming or writing games in Haskell (or both!) - a barrier that I would certainly like to see lowered. To address this, myself and many others have spent the last year working on high-level bindings to abstract away the C-like feel of the existing library, and to present a more Haskell-like interface.

To give you an idea of how things look, here’s a basic application that opens a window, clears the screen, and quits when the user presses ‘q’:

```
{-# LANGUAGE OverloadedStrings #-}
module Main where

import SDL
import Linear (V4(..))
import Control.Monad (unless)

main :: IO ()
main = do
  initialize [InitEverything]
  window <- createWindow "My SDL Application" defaultWindow
  renderer <- createRenderer window (-1) defaultRenderer
  appLoop renderer

appLoop :: Renderer -> IO ()
appLoop renderer = do
  events <- pollEvents
  let eventIsQPress event =
        case eventPayload event of
          KeyboardEvent keyboardEvent ->
            keyboardEventKeyMotion keyboardEvent == Pressed &&
            keysymKeycode (keyboardEventKeysym keyboardEvent) == KeycodeQ
          _ -> False
      qPressed = not (null (filter eventIsQPress events))
  rendererDrawColor renderer $= V4 0 0 255 255
  clear renderer
  present renderer
  unless qPressed (appLoop renderer)
```

Hopefully you’ll agree that the code above is close to idiomatic Haskell.

We’ve tried to be extensive with the bindings, and at the moment the following should (!) all be working:

- Graphics routines have been our highest priority. The bindings should give you full control over window management, access to SDL’s new hardware-accelerated 2D rendering routines, and also the ability to set up an OpenGL context (which is compatible with the `OpenGL` and `gl` libraries).
- SDL’s audio abstraction is available, and the bindings allow you to open audio devices and stream audio data.
- A clean implementation of SDL’s event system, designed for use with pattern matching.
- Access to input devices, including keyboard, mouse, pointer devices, and joysticks.
- A large collection of example code, ported from the popular “lazyfoo” and “twinklebear” tutorials.

The bindings are not 100% exhaustive - we’ve omitted some routines that are already provided by the Haskell runtime, but we also currently lack bindings to the following:

- Force-feedback (SDL’s “haptic” functionality). While we do have some code in the repository here, none of the contributors own a device that is compatible with SDL2 to actually test this work. If you do, please drop us a line and help out!
- Gesture recording for touch screens. We’re currently targeting desktops and laptops, but SDL has support for Android and iOS. Hopefully when GHC is easier to target these devices, we can start to explore these SDL bindings.
- Other SDL2 compatible libraries, such as `SDL2_net` and `SDL2_ttf`. We’d love for these projects to have the same treatment, and are more than happy to host them under the `haskell-game` Github account.

We hope this enables more people to begin building interactive software and games in Haskell. It’s still early days for these bindings, so if you find any bugs (runtime problems or API bugs), or if you find the bindings lacking in any way, please don’t hesitate to open an issue on our issue tracker.

Happy hacking!

`optional-args` library, which provides new types for optional arguments along with heavy syntactic overloading. To follow that, Dimitri Sabadie published a blog post discouraging the use of the currently popular `Default` type class. These are both good discussions, and as with any good discussion, have been lingering around in the back of my head.
Since those discussions took place, I’ve been playing with my point in the FRP-web-framework design space - Francium. I made some big refactorings on an application using Francium, mostly extending so-called “component” data types (buttons, checkboxes, etc), and was frustrated with how much code broke just from introducing new record fields. The Commercial Haskell group published an article on how to design for extensibility back in March, so I decided to revisit that.

It turns out that with a little bit of modification, the approach proposed in designing for extensibility also covers optional arguments pretty well!

First, let’s recap what it means to design for extensibility. The key points are:

- Functions take `Settings` values, which specify a general configuration.
- These `Settings` values are opaque, meaning they cannot be constructed by a data constructor, but they have a smart constructor instead. This smart constructor allows you to provide default values.
- Provide get/set functions for all configurable fields in your `Settings` data type, preventing the use of record syntax for updates (which leaks implementation details).

Regular Haskell users will already be familiar with a pattern that can be seen in point 3: we often use a different piece of technology to solve this problem - lenses. Lenses are nice here because they reduce the surface area of our API - two exports can be reduced to just one, which I believe reduces the time to learn a new library. They also compose very nicely, in that they can be embedded into other computations with ease.

With point 3 amended to use some form of lens, we end up with the following type of presentation. Take a HTTP library for example. Our hypothetical library would have the following exports:

```
data HTTPSettings
httpKeepAlive :: Lens HTTPSettings Bool
httpCookieJar :: Lens HTTPSettings CookieJar
defaultHTTPSettings :: HTTPSettings
httpRequest :: HTTPSettings -> HTTPRequest -> IO Response
```

which might have usage

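The usage example would presumably look something like this, using `&` (reverse application) with lens’s `.~` setter; `aRequest` is a placeholder request value:

```haskell
httpRequest
  (defaultHTTPSettings & httpKeepAlive .~ True)
  aRequest
```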
This is an improvement, but I’ve never particularly liked the reverse function application stuff with `&`. The repeated use of `&` is essentially working in an `Endo` `Writer` monad, or more generally - a state monad. The `lens` library ships with operators for working specifically in state monads (of course it does), so let’s use that:

```
httpRequest :: State HTTPSettings x -> HTTPRequest -> IO Response

....

httpRequest
  (do httpKeepAlive .= True)
  aRequest
```

It’s a small change here, but when you are overriding a lot of parameters, the sugar offered by the use of `do` is hard to give up - especially when you throw in more monadic combinators like `when` and `unless`.

With this seemingly simple syntactic change, something interesting has happened; something which is easier to see if we break open `httpRequest`:

```
httpRequest :: State HTTPSettings x -> HTTPRequest -> IO Response
httpRequest mkConfig request =
  let config = execState mkConfig defaultHttpSettings
  in ...
```

Now the default configuration has moved *inside* the HTTP module, rather than being supplied by the user. All the user provides is essentially a function `HTTPSettings -> HTTPSettings`, dressed up in a state monad. This means that to use the default configuration, we simply provide a do-nothing state composition: `return ()`. We can even give this a name

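presumably along these lines (a sketch of the one-line definition the text describes):

```haskell
import Control.Monad.State (State, execState)

-- A do-nothing configuration: change no settings at all.
def :: State a ()
def = return ()
```

With this in scope, a call like `httpRequest def aRequest` runs with the default settings.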
and voila, we now have the lovely name-overloading offered by `Data.Default`, but without the need to introduce a lawless type class!

To conclude, in this post I’ve shown that by slightly modifying the presentation of an approach to building APIs with extensibility in mind, we keep the main benefit of `Data.Default`. This main benefit - the *raison d’être* of `Data.Default` - is the ability to use the single symbol `def` whenever you just want *a* configuration, but don’t care what it is. We still have that ability, and we didn’t have to rely on an ad hoc type class to get there.

However, it’s not all rainbows and puppies: we did have to give something up to get here, and what we’ve given up is compiler-enforced consistency. With `Data.Default`, there is only a single choice of default configuration for a given type, so you know that `def :: HTTPSettings` will be the same set of defaults *everywhere*. With my approach, exactly what `def` means is down to the function you’re calling and how it wants to interpret `def`. In practice, due to the lack of laws on `def`, there wasn’t much reasoning you could do about what that single instance was anyway, so I’m not sure much is given up in practice. I try to keep to a single interpretation of `def` in my libraries by still exporting `defaultHTTPSettings`, and then using `execState mkConfig defaultHTTPSettings` whenever I need to interpret a `State HTTPSettings` value.

- Benjamin Kovach wrote about rebindable syntax and list comprehensions.
- Andraz Bajt wrote about GHC’s various “deriving” mechanisms.
- ertes took us through the concept of higher rank types, and their implementation in GHC.
- Roman Cheplyaka explained how to use existential quantification.
- Tim Docker showed us how the apparently simple “scoped type variables” extension is both useful and necessary.
- Tom Ellis showed us GHC’s special support for arrow notation.
- Sean Westfall showed us how to use Template Haskell.
- Mathieu Boespflug showed us the brand new static pointers extension, forthcoming in future GHC versions.
- Everyone else who submitted pull requests or otherwise informed me of minor typos.

I feel the guest posts have added a lot of variety to the series, and this year each post has consistently gone above and beyond my expectations, delivering incredibly high quality content. Once again, thank you all for your hard work - 24 DOGE wouldn’t be the same without you!

Over the course of the month, we’ve looked at just over 20 extensions - but as I mentioned in the opening post, the story certainly doesn’t stop there. GHC is full of many more interesting extensions - I was hoping to get on to looking at GADTs and data kinds, but alas - there are only so many days in the month. For an example of how these extensions all interact when we write “real-world” software, readers may be interested in viewing my recent Skills Matter talk - strongly typed publish/subscribe over websockets via singleton types.

I’ve been really happy to see comments this year from people who have learnt about new extensions, seen previous extensions in a different light, or simply formed a deeper understanding of extensions they were already using. While I was a little nervous about the series at the start, I’m now confident it’s been a great success. A huge thank you to everyone who participated in the discussions - as with 24 Days of Hackage in previous years, I feel the discussion around these posts is just as important.

Finally, a thank you to everyone who donated during the series - these tokens of appreciation are greatly appreciated.

To close 24 DOGE, well… a picture speaks a thousand words.

GHC already features quite the zoo of pointer types. There are bare `Ptr`’s (for marshalling to and from foreign objects), `ForeignPtr`’s (smart pointers that allow automatic memory management of the target object), weak pointers (references to objects that are ignored by the garbage collector), and `StablePtr`’s (pointers to objects that are pinned to a specific location in memory). GHC 7.10 will add a new beast to this list: `StaticPtr`, the type of pointers whose value never changes across program runs, even across program runs on different machines. The objects pointed to by static pointers are *static*, much in the same sense of the word as in other programming languages: their value is known at compile time and their lifetime extends across the entire run of the program. GHC 7.10 also comes with a new language extension to *safely* create static pointers: `StaticPointers`.

Why yet another pointer type? And why grace it with yet another extension?

Static pointers turn out to be incredibly useful for distributed programming. Imagine that you have a fleet of networked computers, abstractly called *nodes*. You’d like these nodes to collaborate, say because you also have a fair amount of data you’d like to crunch through, or because some of these nodes provide services to other nodes. Static pointers help solve the age-old question of distributed programming: how can nodes easily delegate tasks to each other?

For most programming languages, this is a thorny question to ask: support for distributing computations comes as an afterthought, so there is no first class support. But there are exceptions: Erlang is one example of a language that has escaped from research labs one way or another and natively speaks distributed. Erlang supports literally sending the code for any native (non-foreign) function from node to node. Delegating a task called `myfun` is a case of saying:

where `There` is a variable containing some node identifier. This capability comes at a cost, however. It is in general hard to share optimized compiled code across a cluster of machines, which may not be running the exact same operating system or have the same system libraries available. So Erlang keeps to comparatively slow, but easy to handle and easy to distribute, interpreted bytecode instead. Moreover, if new code can be loaded into a running program at any moment, or existing code monkey-patched on-the-go, what tools do we have to reason about the resulting state of the program?

Haskell too natively speaks distributed, at least in its bleeding edge GHC variant. But at much lower cost. In a world where complete systems can be containerized using language agnostic technology, and shipped and deployed within minutes across a full scale cluster, do we really need our language runtimes to distribute *code*? Are we willing to accept the compromises involved? Perhaps that is a problem best solved once, for all programs in any language, using the likes of Docker or Rocket. And once our entire cluster is running instances of the same program by dint of distributing containers, all we need is a means to control which computations happen when, and where, by sharing *references* to functions. This works because, if all nodes are running the same program, then they all have access to the same functions.

Turning on `-XStaticPointers` adds a new keyword `static` and a new syntactic form to the language for *safely* creating such references: if expression `e` has type `a`, then `static e` has type `StaticPtr a`.

For example, here’s a program that obtains a static pointer to `fact`, and prints the info record associated with it:

```
{-# LANGUAGE StaticPointers #-}
module Main where

import GHC.StaticPtr

fact :: Int -> Int
fact 0 = 1
fact n = n * fact (n - 1)

main = do
  let sptr :: StaticPtr (Int -> Int)
      sptr = static fact
  print $ staticPtrInfo sptr
  print $ deRefStaticPtr sptr 10
```

The body of a static form can be any top-level identifier, but also arbitrary expressions, *so long as the expression is closed*, meaning that all variable names are either bound within the expression itself, or are top-level identifiers. That is, so long as the value of the expression could in principle be computed statically.

Given a static pointer, we can get back the value it points to using

`deRefStaticPtr :: StaticPtr a -> a`

Notice that we could as well have used a simple string to refer to `fact` in the above program, and constructed a string table, so that if the program were distributed we could have each process communicate strings in lieu of functions to communicate tasks to run remotely, using the string table to map strings back to functions. Something like this:

```
module Main where

import GHC.StaticPtr
import Data.Dynamic

fact :: Int -> Int
fact 0 = 1
fact n = n * fact (n - 1)

computation1 :: IO ()
computation1 = print $ fact 10

stringTable =
  [ ("fact", toDyn fact)
  , ("computation1", toDyn computation1)
  ]

main = do
  send "some-node" "computation1"
```

where one could imagine node “some-node” running something like

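a sketch of that receiving side (`expect` is the assumed messaging primitive from the text, stubbed here so the example is self-contained):

```haskell
import Data.Dynamic

fact :: Int -> Int
fact 0 = 1
fact n = n * fact (n - 1)

computation1 :: IO ()
computation1 = print (fact 10)

stringTable :: [(String, Dynamic)]
stringTable =
  [ ("fact", toDyn fact)
  , ("computation1", toDyn computation1)
  ]

-- Stub for the assumed messaging primitive: in a real system this
-- would receive a string from the network.
expect :: IO String
expect = return "computation1"

serve :: IO ()
serve = do
  name <- expect
  case lookup name stringTable >>= fromDynamic :: Maybe (IO ()) of
    Just action -> action
    Nothing     -> error ("unknown or ill-typed computation: " ++ name)
```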
assuming we have a `send` function for sending serializable values as messages to nodes, and an `expect` function to receive them.

Values in the string table are wrapped in `Dynamic` to make them all have a uniform type (that way a simple homogeneous list can do just fine as a data structure). But there are three problems with this approach:

1. Constructing the string table is error prone: we might accidentally map the string `"fact"` to an entirely different function.
2. No type safety: `fromDynamic` performs a type cast. This cast might fail if the type of the value in the string table doesn’t match the expected type, making the program partial.
3. It is antimodular: each module needs its own string table, which we then need to combine into a global string table for the whole program. If we add any new module anywhere in the program, we need to also modify the construction of the string table - or accidentally forget to do so, which would constitute a bug.

(Some of these problems can be addressed with some clever Template Haskell hackery, but that solution is still fundamentally anti-modular, as well as contrived to use.)

It is for these three reasons that the `StaticPointers` language extension comes in handy. There is no need for manually constructing tables. Constructing and dereferencing static pointers is type safe, because the type of a static pointer is related to the type of the value that it points to. Separate modules are not a problem, because the compiler takes care of collecting the set of all static pointers in a program into its own internal table that it embeds in the binary.

This all sounds rather nice, but the static pointer type is kept abstract, as it should be to ensure safety, so how can we serialize a static pointer to send over the wire, and deserialize it on the remote end to reconstruct the static pointer? The `GHC.StaticPtr` module exports a few primitives to deal with just that. The idea is that each static pointer in a program is assigned a unique key (a `StaticKey`). We can obtain the key for a static pointer using

```
type StaticKey = Fingerprint

-- Defined in GHC.Fingerprint:
data Fingerprint = Fingerprint {-# UNPACK #-} !Word64 {-# UNPACK #-} !Word64
  deriving (Generic, Typeable)

staticKey :: StaticPtr a -> StaticKey
```

The type of keys is concrete (a key is a 128-bit hash), so keys can easily be encoded and decoded on the wire, using the `Binary` type class provided by the binary package:

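Since a `Fingerprint` is just two `Word64`s, a sketch of such a wire codec might be (the `encodeKey`/`decodeKey` names are illustrations, not part of any library):

```haskell
import Data.Binary (decode, encode)
import qualified Data.ByteString.Lazy as LBS
import Data.Word (Word64)
import GHC.Fingerprint (Fingerprint (..))

-- Encode a key as its two Word64 halves; decode rebuilds it.
encodeKey :: Fingerprint -> LBS.ByteString
encodeKey (Fingerprint a b) = encode (a, b)

decodeKey :: LBS.ByteString -> Fingerprint
decodeKey bs = let (a, b) = decode bs in Fingerprint a b
```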
Provided a key, we can map it to a `StaticPtr` using `unsafeLookupStaticPtr :: StaticKey -> Maybe (StaticPtr a)`.

Hold on a minute! This type is telling us that using `unsafeLookupStaticPtr` we can map the key to a static pointer of any type, which we can then `deRefStaticPtr` to a value of arbitrary type… Have we just lost type safety? In GHC 7.10, yes we have! In GHC 7.12, we will have a much safer lookup function:

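Judging by the description that follows, its signature is roughly (a reconstruction, not the shipped API):

```haskell
lookupStaticPtr :: StaticKey
                -> (forall a. Typeable a => StaticPtr a -> b)
                -> Maybe b
```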
(observe that this is a rank-2 type), or equivalently

```
data DynStaticPtr = forall a. Typeable a => DynStaticPtr (StaticPtr a)
lookupStaticPtr :: StaticKey -> Maybe DynStaticPtr
```

This type says: provided a key and a continuation, `lookupStaticPtr` will resolve the key to a static pointer and, if successful, feed it to the continuation. The type of the static pointer is not known a priori, but we can query the type inside the continuation using the supplied `Typeable` constraint. The reason only the unsafe variant will ship in GHC 7.10 is because the safe variant will require a change to the `Data.Typeable` API to be truly safe (see here for details), and because we do not yet store `Typeable` constraints in the internal compiler-generated table mentioned above. In the meantime, this shouldn’t be a problem in practice: higher level libraries like Cloud Haskell and HdPH hide all uses of `lookupStaticPtr` behind an API that does guarantee type safety - it’s just that we have to *trust* that their implementations always call `lookupStaticPtr` at the right type, when ideally we wouldn’t need to entrust type safety to any library code at all, just the compiler.

Static pointers turn out to be surprisingly powerful. As it stands, the language extension nominally only allows sharing references to static values across the wire. But it’s easy to build a lot more power on top. In particular, it would be nice if programs could transmit not just static values over the wire, but indeed (nearly) any odd closure. Consider the following `main`

function:

```
main = do
  putStrLn "Hi! Give me a number..."
  x <- read <$> getLine
  send "some-node" $ closure (static fact) `closureAp` closurePure x
```

The idea (first found in the “Towards Haskell in the Cloud” paper) is to introduce a datatype of closures (which we’ll define concretely later), along with three combinators to create `Closure`

s from `StaticPtr`

s and from other `Closure`

s:

```
data Closure a
closure :: StaticPtr a -> Closure a
closurePure :: Serializable a => a -> Closure a
closureAp :: Closure (a -> b) -> Closure a -> Closure b
```

Notice that this datatype is nearly, but not quite, an applicative functor. We can only lift “serializable” values to a closure, not just any value. Given two existing `Closure`

s, we can create a new `Closure`

by “applying” one to another. Morally, we are making it possible to pass around not just static pointers to top-level values or purely static expressions, but things that represent (partially) applied static pointers. `Closure`

s are not always static: their value may depend on values known only at runtime, as in the example above.

Come to think of it, a `Closure`

very much acts like the closures that one would find deep in the bowels of GHC for representing partially applied functions during program execution. A closure is morally a code pointer paired with an *environment*, i.e. a list of actual arguments. Closures accumulate arguments as they are applied. In our case, the `StaticPtr`

represents a code pointer, and the environment grows every time we `closureAp`

a `Closure`

to something else.

We’ll turn to how `Closure`

is defined in a minute, but first let’s talk about what it really means to be “serializable”:

```
data Dict c = c => Dict

class (Binary a, Typeable a) => Serializable a where
  serializableDict :: StaticPtr (Dict (Serializable a))
```

This class definition says that if a value can be encoded/decoded to a `ByteString`

(see the binary package), and it can be queried for a representation of its type at runtime, then the value is *serializable*. However, serializable values also need to make it possible to obtain concrete “evidence” that the value really is serializable, in the form of a *static dictionary*. The idea is a neat trick. For all serializable values, we want to be able to obtain a static pointer to the evidence (or “dictionary”) associated with a class constraint. Because if we do, then we can “send” class dictionaries across the wire (or at least references to them)! But we can only take the static pointer of a value, so how does one make a dictionary a first-class value? The trick is to define a proxy datatype of dictionaries, using the `ConstraintKinds`

extension (the `Dict`

datatype). Any `Dict`

value is a value like any other, but it embeds a constraint in it, which at runtime corresponds to a dictionary.
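To make the trick concrete, here is a small self-contained sketch (using GADT syntax for `Dict`; the `withDict` helper is a hypothetical name introduced here only for illustration) showing that a `Dict` value really does carry a usable dictionary at runtime:

```
{-# LANGUAGE ConstraintKinds, GADTs, RankNTypes #-}
module Main where

-- A proxy datatype of dictionaries: the constructor captures
-- the constraint c, i.e. its runtime dictionary.
data Dict c where
  Dict :: c => Dict c

-- Hypothetical helper: pattern matching on Dict brings the
-- captured constraint back into scope for the continuation.
withDict :: Dict c -> (c => r) -> r
withDict Dict r = r

-- Evidence that Ints can be compared for equality.
eqIntDict :: Dict (Eq Int)
eqIntDict = Dict

main :: IO ()
main = print (withDict eqIntDict ((3 :: Int) == 3))
```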

For example, any concrete value of `Dict (Eq Int)`

carries a dictionary that can be seen as providing evidence that values of `Int`

type can indeed be compared for equality. For any type `a`

, `Dict (Serializable a)`

carries evidence that values of type `a`

are serializable. Any instance of `Serializable`

makes it possible to query for this evidence - for example:
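The elided example was presumably an instance along these lines (a sketch reusing the `Serializable` class and `Dict` type defined above; it requires the `StaticPointers` extension):

```
instance Serializable Int where
  serializableDict = static Dict
```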

Now we can turn to the definition of `Closure`

and its combinators:

```
data Closure a where
  StaticPtr :: StaticPtr b -> Closure b
  Encoded   :: ByteString -> Closure ByteString
  Ap        :: Closure (b -> c) -> Closure b -> Closure c
  deriving (Typeable)

closure :: StaticPtr a -> Closure a
closure = StaticPtr

closureAp :: Closure (a -> b) -> Closure a -> Closure b
closureAp = Ap

closurePure :: Serializable a => a -> Closure a
closurePure x =
    StaticPtr (static decodeD) `closureAp`
    closure serializableDict   `closureAp`
    Encoded (encode x)
  where
    decodeD :: Dict (Serializable a) -> ByteString -> a
    decodeD Dict = decode
```

(There are many ways to define `Closure`

, but this definition is perhaps most intuitive.)

As we can see from the definition, a `Closure`

is not only a (quasi) applicative functor, but in fact a (quasi) *free* applicative functor. Using the `Ap`

constructor, we can chain closures into long sequences (i.e. build environments). Using `StaticPtr`

and `Encoded`

, we can further make any serializable value a `Closure`

of the following shape:
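In terms of the constructors above, that shape can be reconstructed as:

```
Ap (Ap (StaticPtr sptr_decodeD) csdict) (Encoded bs)
```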

where `sptr_decodeD`

is the static pointer to `decodeD`

, `csdict`

is a static serialization dictionary, and `bs`

is a value encoded as a byte string.

Notice that any concrete `Closure`

type is itself serializable:

```
instance Binary (Closure a) where
  put (Ap (Ap (StaticPtr sptr) dict) (Encoded bs)) =
    putWord8 0 >> put sptr >> put dict >> put bs
  put (StaticPtr sptr) = putWord8 1 >> put sptr
  put (Ap cf cx) = putWord8 2 >> put cf >> put cx

  get = do
    hdr <- getWord8
    case hdr of
      0 -> do sptr <- get
              dict <- get
              bs   <- get
              return $ Ap (Ap (StaticPtr sptr) dict) (Encoded bs)
      1 -> StaticPtr <$> get
      2 -> Ap <$> get <*> get

instance Serializable (Closure Int) where
  serializableDict = static Dict
```

(Note that for most types, manually defined `Binary`

instances as above are unnecessary - any datatype with a `Generic`

instance can have its `Binary`

instance derived automatically).

Therefore, surprisingly, adding just static pointers as a primitive datatype in the compiler is all that’s necessary to be able to conveniently send even nearly arbitrary closures down the wire. It turns out that we don’t need to add full-blown support for serializing arbitrary closures as an extra primitive to the compiler. That can all be done in user space, and with better control by the user on exactly how. The only limitation is that in effect the environment part of the closure needs to be serializable, but that’s a feature: it means that we can statically rule out accidentally serializing closures that capture gnarly things that we *don’t* want to serialize down the wire: think file handles, locks, sockets and other system resources, none of which the remote end would be able to make any sense of.

Static pointers are a lightweight extension to GHC, with direct applications to distributed programming, or in general, any form of pointer sharing across processes with distinct address spaces. As first observed in a seminal paper about distributed programming in Haskell, this extension adds just enough power to the GHC compiler and runtime to conveniently and safely send arbitrary serializable closures across the wire.

Distributed programming in Haskell is a reality today: there are several frameworks, most prominently Cloud Haskell, with several industrial applications. But the `StaticPointers`

extension is brand new, and in fact no compiler release including it has shipped yet! Framework and application support for it is still lagging behind, but you can help. In particular, adding support to distributed-static and distributed-process would be a great step forward in the usability of Cloud Haskell. Other next steps include: adding support for interoperating multiple versions of a program in a cluster, fully implementing `lookupStaticPtr`

(see above), or improving the robustness and speed of message serialization (see for example these great results for an idea of what’s possible here). Those are just some ideas. If you’re interested in participating, the GHC wiki contains quite a few pointers, and the cloud-haskell-developers@ and distributed-haskell@ mailing lists are good places to coordinate efforts. See you there!

Template Haskell is an extension of Haskell 98 that allows for compile-time metaprogramming – allowing one to directly convert back and forth between concrete Haskell syntax and the underlying abstract syntax tree (AST) of GHC. Anyone familiar with Lisp’s macro system will immediately recognize the similarities – though in Haskell, specific datatypes are used to represent an AST that is used to draw and splice back in code fragments. The ability to generate code at compile time allows one to implement macro-like expansions, polytypic programs, user directed optimization (such as inlining), and the generation of supporting data structures and functions from existing data structures and functions.^{1}

In brief, Oxford brackets `[|`

and `|]`

are used to get the abstract syntax tree for the enclosed expression and ‘splice’ brackets `$(`

and `)`

are used to convert from the abstract syntax tree back into Haskell. The Quotation Monad is used to give unique names to the parsed tokens from the supplied Haskell code, and reification can be used to look up the name, type, constructor, and state of an expression, as well as the AST of Haskell types.^{2}

Template Haskell was introduced by Tim Sheard and Simon Peyton Jones in their paper “Template Meta-Programming for Haskell” (The original paper can be found here) in 2002, though it’s changed quite a bit since (see here). It was inspired by C++ templates, though TH is functionally more similar to a macro system. Quasiquotation is often used in conjunction with Template Haskell, but makes up a pretty big section, so I will only briefly describe it here. Only a full article of its own could do quasiquotation justice.

In the wild, Template Haskell is used extensively by Yesod for routing and HTML template binding.^{3} Outside of Haskell, compile-time metaprogramming is used for the creation of Domain Specific Languages (DSLs), typically in the domains of testing and modeling, and generative metaprogramming (compile-time or not) for object relational mapping, typically for mapping database schemas to non-compiled code. And within Lisp, which is famous for its macro system, metaprogramming is used to create syntax extensions (syntactic sugar), such as the syntax used in Lisp comprehensions.^{4}

*All code in this guide was executed with GHCi version 7.6.3 and Template Haskell version 2.9.0.0*

To get started, start up GHCi with the Template Haskell extension by including `-XTemplateHaskell`

, then load the AST datatypes:

```
$ ghci -XTemplateHaskell
Prelude> :m + Language.Haskell.TH
Prelude Language.Haskell.TH>
```

To see the AST syntax of some Haskell code, insert valid Haskell syntax into Oxford brackets and run it through `runQ`

, which runs the quotation in the Q monad (the quotation monad):

```
> runQ [| 1 + 2 |]
InfixE (Just (LitE (IntegerL 1))) (VarE GHC.Num.+) (Just (LitE (IntegerL 2)))
```

If you parse through the parentheses you’ll see the return expression forms a tree – an abstract syntax tree!

Check out the Lift class source, which is what’s being invoked by the Oxford brackets. The Language.Haskell.TH.Syntax module contains the definitions of all the types used in the AST. Using these types, it’s possible to construct any fragment of the Haskell language. Have a look at the Lit data type as an example. Lit stands for literal,

```
data Lit = CharL Char
         | StringL String
         | IntegerL Integer    -- ^ Used for overloaded and non-overloaded
                               --   literals. We don't have a good way to
                               --   represent non-overloaded literals at
                               --   the moment. Maybe that doesn't matter?
         | RationalL Rational  -- Ditto
         | IntPrimL Integer
         | WordPrimL Integer
         | FloatPrimL Rational
         | DoublePrimL Rational
         | StringPrimL String  -- ^ A primitive C-style string, type Addr#
  deriving ( Show, Eq, Data, Typeable )
```

and the tokens it represents make up the literals defined throughout your syntax in the AST, as you can see in our example AST above. Within Language.Haskell.TH.Syntax, 35 generic data types are declared with the Data.Data module. If you’re curious about what the AST syntax refers to, study the source.

The Q monad handles the expression’s typing via context, and also gives it a unique name by appending an integer to the end of the expression name to handle scoping distinction. Quotations are lexically scoped and the Q monad handles this using its naming scheme. (See the user’s guide wiki for a more in-depth explanation of TH’s lexical scoping.)
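For instance, `mkName` always yields the bare name you give it, while `newName` (run in the Q monad) appends a fresh suffix; the exact integer varies from session to session:

```
> mkName "x"
x
> runQ (newName "x")
x_1
```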

Let’s bind the returned AST expression from the example above to Haskell and evaluate it, using the splice brackets:

```
> $( return (InfixE (Just (LitE (IntegerL 1))) (VarE (mkName "+")) (Just (LitE (IntegerL 2)))))
3
```

Ta da, you converted concrete Haskell to an AST and then evaluated it. Though, as you can see, identifiers have to be constructed with the `mkName` function in the AST to evaluate properly.

It’s possible to avoid having to modify the AST to splice it back, but you’ll have to bind it to a variable, as my next example illustrates:

In this example, the Fibonacci sequence is generated using zipWith:^{5}

```
let fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
let fibQ :: Int -> Q Exp
    fibQ n = [| fibs !! n |]
```

Now run `$( ... )`

to execute the expansion:

```
> $(fibQ 22)
17711
```

TH splices in the expression that `fibQ`

represents along with the variable (that is `fibs !! n`

).

Note, expressions and splices can be nested:

```
> $(runQ [| fibs !! $( [| 8 |]) |])
21
```

I’ll explain TH’s syntax next – but after, I’ll show some more impressive examples that show the possibilities of splicing and ASTs.

Template Haskell quotation expressions come with four different parser types, and an extensible fifth optional type that allows one to define their own types of quotations, called quasi-quotations.

- `[| ... |]`, or `[e| ... |]`, generates expression AST syntax; it has the type `Q Exp`.

  ```
  > runQ [| 1 + 2 |]
  InfixE (Just (LitE (IntegerL 1))) (VarE GHC.Num.+) (Just (LitE (IntegerL 2)))
  ```

- `[d| ... |]` generates a list of top-level declaration AST syntax; it has the type `Q [Dec]`.

  ```
  > runQ [d|x = 5|]
  [ValD (VarP x_4) (NormalB (LitE (IntegerL 5))) []]
  ```

- `[t| ... |]` generates type AST syntax; it has the type `Q Type`.

  ```
  > runQ [t|Int|]
  ConT GHC.Types.Int
  ```

- `[p| ... |]` generates pattern AST syntax; it has the type `Q Pat`.

  ```
  > runQ [p|(x,y)|]
  TupP [VarP x_5,VarP y_6]
  ```

- Custom “quasi-quotations” have the form `["quoter"| ... |]`. The “quoter” can be anything except e, d, t, and p, and the token cannot contain spaces. Though, all GHC is doing is determining which parser to use based on the context within the Oxford brackets.^{6}

Quasi-quotation is a big second part to meta-programming. It’s essentially what makes it possible to write DSLs. I’m not going to cover it here since this guide is pretty long as it is, but if you’re interested, there are many guides to using quasi-quotations; find them here, here, and here (this one assumes you’re familiar with Parsec parsing).

An important restriction on Template Haskell to remember is *when inside a splice you can only call functions defined in imported modules, not functions defined elsewhere in the same module.* Quotations and splices have to be defined in separate modules, otherwise you’ll see this error:

```
GHC stage restriction:
`...' is used in a top-level splice or annotation,
and must be imported, not defined locally
```

Though, if you’re just binding variables in GHCi with `let`

, you don’t have to worry about this – only when you’re compiling Haskell.

You’re probably wondering if you can evaluate a Q expression the other way, to see what the splice is evaluating. Of course you can – run the expression through `runQ` followed by `>>= putStrLn.pprint`

to see what an expression with a `Q Exp`

type will evaluate to:

```
> let myExp :: Q Exp; myExp = runQ [| 1 + 2 |]
> runQ(myExp) >>= putStrLn.pprint
1 GHC.Num.+ 2
```

If you want to see the expansion of splices, use the flag `-ddump-splices`

when starting GHCi : `ghci -XTemplateHaskell -ddump-splices`

.

Now let’s test it on another fun example with primes:^{7}

```
let isPrime :: (Integral a) => a -> Bool
    isPrime k | k <= 1    = False
              | otherwise = not $ elem 0 (map (mod k) [2..k-1])
let nextPrime :: (Integral a) => a -> a
    nextPrime n | isPrime n = n
                | otherwise = nextPrime (n+1)
-- returns a list of all primes between n and m, using the nextPrime function
let doPrime :: (Integral a) => a -> a -> [a]
    doPrime n m
      | curr > m  = []
      | otherwise = curr : doPrime (curr+1) m
      where curr = nextPrime n
-- and our Q expression
let primeQ :: Int -> Int -> Q Exp
    primeQ n m = [| doPrime n m |]
```

```
> $(primeQ 0 67)
<interactive>:18:3-13: Splicing expression
primeQ 0 67 ======> doPrime 0 67
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67]
```

Try it on a nested expression, to really see how useful the dump-splices flag is:

```
> $(primeQ ($(primeQ 0 23) !! 3) 167)
<interactive>:20:13-23: Splicing expression
primeQ 0 23 ======> doPrime 0 23
<interactive>:20:3-34: Splicing expression
primeQ ($(primeQ 0 23) !! 3) 167 ======> doPrime 7 167
[7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167]
```

`-ddump-splices`

and `>>= putStrLn.pprint`

should come in handy when debugging.

Now for what is probably the hardest aspect of Template Haskell to understand – reification.

Reification allows one to query the state of Haskell `Name`

s and get information about them. Specifically, `reify` returns a data type called `Info`

– which contains data in a specific format for any `Name`

in Haskell, where the format and information depend on whether it’s being interpreted in a type context or an expression context.

TH introduces two new identifiers specifically for reification: Prefix `Name`

s to be evaluated in an expression context with a single quote, and prefix `Name`

s to be evaluated in a type context with a double quote. Though, `Name`

s must be interpretable within those contexts to be reified. (If you intend to use reify on expressions, don’t use quotes in the names of those expressions – otherwise it won’t parse correctly.)

To use reify on a type, use double quotes:

```
> $(stringE . show =<< reify ''Bool)
"TyConI (DataD [] GHC.Types.Bool [] [NormalC GHC.Types.False [],NormalC GHC.Types.True []] [])"
```

Reifying a type returns the AST as represented by TH; here’s a diagram of the AST of the Bool type from above:

The AST of a simple primitive type like Bool produces a small tree, but when used on types deeper down the module chain, relatively large ASTs will be generated. Try reify on `''Lit`

or `''Exp`

to know what I mean, though reify can work on any Haskell type.

To reify an expression, use single quotes, here’s an example with our `primeQ`

expression from above:

```
> $(stringE . show =<< reify 'primeQ)
"VarI primeQ_1627395913 (AppT (AppT ArrowT (ConT GHC.Types.Int)) (AppT (AppT ArrowT (ConT GHC.Types.Int)) (AppT (ConT Language.Haskell.TH.Syntax.Q) (ConT Language.Haskell.TH.Syntax.Exp)))) Nothing (Fixity 9 InfixL)"
```

As you can see `info`

returns different information depending on whether it’s a type or an expression. A type returns its structure in TH’s AST semantics. An expression returns information regarding its name, type, its constructor, and its fixity.

Use reification of expressions to extract the types associated with the construction of an expression, then reify those types to get its structure in an AST. This allows one to generate the AST of any data type in Haskell – no matter how deep into Haskell it gets.

Reification is very useful from the standpoint of what one can do with an AST to draw and splice back in code fragments within a programming language. Below, in Examples, the second example shows how one can use reify to extract the types from a record’s constructor to write a generic Show function that can generate a `Show`

for any record.

A good example to show what one can do with Template Haskell is a type-safe Haskell version of C’s printf function (from stdio.h):^{8}

*Main.hs*

```
{-# LANGUAGE TemplateHaskell #-}

-- Import our template "printf"
import PrintF (printf)

-- The splice operator $ takes the Haskell source code
-- generated at compile time by "printf" and splices it into
-- the argument of "putStrLn".
main = do
  putStrLn $ $(printf "Hello %s %%x%% %d %%x%%") "World" 12
  putStrLn $ $(printf "Hello %s %s %s %d") "Russian" "with" "Love" 5000
```

*PrintF.hs*

```
{-# LANGUAGE TemplateHaskell #-}
module PrintF where

-- NB: printf needs to be in a separate module to the one where
-- you intend to use it.

-- Import some Template Haskell syntax
import Language.Haskell.TH

-- Possible string tokens: %d %s and literal strings
data Format = D | S | L String
  deriving Show

-- a poor man's tokenizer
tokenize :: String -> [Format]
tokenize [] = []
tokenize ('%':c:rest) | c == 'd' = D : tokenize rest
                      | c == 's' = S : tokenize rest
tokenize (s:str) = L (s:p) : tokenize rest -- so we don't get stuck on weird '%'
  where (p, rest) = span (/= '%') str

-- generate argument list for the function
args :: [Format] -> [PatQ]
args fmt = concatMap (\(f, n) -> case f of
                        L _ -> []
                        _   -> [varP n]) $ zip fmt names
  where names = [ mkName $ 'x' : show i | i <- [0..] ]

-- generate body of the function
body :: [Format] -> ExpQ
body fmt = foldr (\e e' -> infixApp e [| (++) |] e') (last exps) (init exps)
  where
    exps = [ case f of
               L s -> stringE s
               D   -> appE [| show |] (varE n)
               S   -> varE n
           | (f, n) <- zip fmt names ]
    names = [ mkName $ 'x' : show i | i <- [0..] ]

-- glue the argument list and body together into a lambda
-- this is what gets spliced into the haskell code at the call
-- site of "printf"
printf :: String -> Q Exp
printf format = lamE (args fmt) (body fmt)
  where fmt = tokenize format
```

Notice that we had to separate the splicing and the expression definitions in separate modules, as mentioned in the syntax section above.

Compile the following with:

`$ ghc --make Main.hs -o main`

Running `main` will print out:

```
$ ./main
Hello World %%x%% 12 %%x%%
Hello Russian with Love 5000
```

Now for an example that shows what one can do with reify – a Generic Show that can produce a `Show`

for any record type:^{9}

*Main.hs*

```
{- Main.hs -}
module Main where
import Derive
data T = A Int String | B Integer | C
$(deriveShow ''T)
main = print [A 1 "s", B 2, C] -- prints exactly <<[A 1 "s",B 2,C]>>
```

*Derive.hs*

```
{- Derive.hs -}
module Derive where

import Language.Haskell.TH
import Control.Monad

data T1 = T1
data T2 a = T2 a

deriveShow t = do
  -- Get list of constructors for type t
  TyConI (DataD _ _ _ constructors _) <- reify t

  -- Make `show` clause for one constructor:
  --   show (A x1 x2) = "A "++show x1++" "++show x2
  let showClause (NormalC name fields) = do
        -- Name of constructor, i.e. "A". Will become string literal in generated code
        let constructorName = nameBase name
        -- Get variables for left and right side of function definition
        (pats, vars) <- genPE (length fields)
        -- Recursively build (" "++show x1++...++"") expression from [x1...] variables list
        let f []       = [| "" |]
            f (v:vars) = [| " " ++ show $v ++ $(f vars) |]
        -- Generate function clause for one constructor
        clause [conP name pats]                                -- (A x1 x2)
               (normalB [| constructorName ++ $(f vars) |]) [] -- "A "++show x1++" "++show x2

  -- Make body for function `show`:
  --   show (A x1 x2) = "A "++show x1++" "++show x2
  --   show (B x1)    = "B "++show x1
  --   show C         = "C"
  showbody <- mapM showClause constructors

  -- Generate template instance declaration and then replace
  -- type name (T1) and function body (\x -> "text") with our data
  d <- [d| instance Show T1 where
             show x = "text"
       |]
  let [InstanceD [] (AppT showt (ConT _T1)) [FunD showf _text]] = d
  return [InstanceD [] (AppT showt (ConT t)) [FunD showf showbody]]

-- Generate n unique variables and return them in form of patterns and expressions
genPE n = do
  ids <- replicateM n (newName "x")
  return (map varP ids, map varE ids)
```

Compile the following with:

`$ ghc --make Main.hs -o main`

Running `main` will print out:

```
$ ./main
[A 1 "s",B 2,C]
```

This guide was for the most part written by collecting information from other guides on Template Haskell, quasi-quoting, and Lisp macros – from online, wiki, and academic sources. Please check my bibliography to see where each piece came from, so credit can be given where it’s due.

Meta-programming is a powerful programming technique that can allow for the generation of user-generated syntax extensions and DSLs. This is useful in that it can allow a programmer to generate custom code-generating syntax extensions without otherwise having to change the core language. Template Haskell in particular is especially powerful compared with similar programming language constructs (i.e. the C preprocessor, Lisp’s macro system) in that it makes use of ASTs, reification (through a specific function), and – much in the spirit of Haskell – type safety. The examples presented above only scratch the surface of what’s possible with reification – imagine the ability to construct entire systems, use reify to build ASTs, and then swap in and out entire modules, entirely with the use of Template Haskell.

Some questions that have arisen from writing this article are: What are the limits of TH’s data type system? Is it truly possible for TH to represent all of Haskell with the finite set of data types written into the module? Is it possible for future language features to defy this set? What are the limits of meta-programming? TH, macros, and similar meta-programming constructs make it possible to write code that writes code – but are there limits to this? Is it possible to write a macro that can generate a macro, and so on indefinitely?

Don’t forget to check out the API. Everything you need to know, you can for the most part find in the source. Also, TH does in fact have bugs; check the issue-tracking page if you’re dealing with a known issue: see here.

(The MIT License)

Copyright (c) 2014 Sean Westfall

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ‘Software’), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED ‘AS IS’, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Tim Sheard and Simon Peyton Jones, “Template meta-programming for Haskell,” ACM SIGPLAN 2002 Haskell Workshop 3 (October 2002): 1-6, doi: 10.1145/581690.581691↩︎

The Glorious Glasgow Haskell Compilation System User’s Guide, Version 7.8.3, 2007↩︎

Greg Weber, Code that writes code and conversation about conversations, 2011.↩︎

Peter Seibel, *Practical Common Lisp* (Apress, 2005)↩︎

Mike Ledger, “A look at QuasiQuotation,” 2012.↩︎

Sami Hangaslammi, Basic Tutorial of Template Haskell, 2011.↩︎

Peter Seibel, *Practical Common Lisp* (Apress, 2005)↩︎

Template Haskell, Haskell Wiki, last updated October 2014: https://www.haskell.org/haskellwiki/Template_Haskell↩︎