Haskell MOOC


Part 2

by Joel Kaasinen (Nitor) and John Lång (University of Helsinki)

Beware! This is a preview of part 2 of the course. The material presented here is roughly ready, and shouldn’t change too much. However, you should know that:

  • Some exercises might change or be replaced with new exercises for the final release.
  • Feedback on this preview is much appreciated! You can help make part 2 better by testing it out. There are feedback links at the end of each section.

Enjoy!

9 Lecture 9: Recap of Part 1

This lecture goes over the basic parts of Haskell introduced in part 1 of the course: types, values, pattern matching, functions and recursion.

9.1 Types

Remember the primitive types of Haskell? Here they are:

Values | Type | Meaning
True, False | Bool | Truth values
0, 1, 20, -37, … | Int | Whole numbers
'A', 'a', '!', … | Char | Characters
"", "abcd", … | String | Strings, which are actually just lists of characters, [Char]
0.0, -3.2, 12.3, … | Double | Floating-point numbers
() | () | The so-called unit type with only one value

It’s possible to combine these primitive types in various ways to form more complex types. Function types, tuple types and list types are examples of types that combine other types.

Values | Type | Meaning
(1,2), (True,'c'), … | (a, b) | A pair of a value of type a and a value of type b
(1,2,3), (1,2,'c'), … | (a, b, c) | A triple of values (of types a, b and c)
[], [1,2,3,4], … | [a] | List of values of type a
not, reverse, \x -> 1, \x -> x, … | a -> b | Function from type a to type b

There’s one more powerful mechanism for creating more types: Algebraic datatypes (ADTs). Some examples include:
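For instance, one might define datatypes along these lines (a sketch: the definitions are reconstructed to match the example values listed in the table below):

```haskell
data Color    = Red | Green | Blue
data Vector2d = MakeVector Double Double
data Person   = Person Int String
data PairOf a = TwoValues a a
data IntList  = Empty | Node Int IntList
```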

Values of these types include:

Values | Type
True, False | Bool
Red, Green, Blue | Color
MakeVector 1.5 3.2 | Vector2d
Person 13 "Bob" | Person
TwoValues 1 3 | PairOf Int
Empty, Node 3 (Node 4 Empty) | IntList
Nothing, Just 3, Just 4, … | Maybe Int
Nothing, Just 'c', Just 'd', … | Maybe Char
Left "foo", Right 13, … | Either String Int
Nil, Cons True Nil, Cons True (Cons False Nil) | List Bool
Leaf 7, Node 1 (Leaf 0) (Leaf 2), … | Tree Int
MultiTree 'a' [MultiTree 'b' [], MultiTree 'c' []], … | MultiTree Char

You can combine parameterized types in complex ways, for example with something like Either [String->String] (Maybe String, Int).

The names of concrete types start with capital letters. Lowercase letters are used for type variables which indicate parametric polymorphism: functions and values that can have multiple types. Here are some examples of polymorphic function types:
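For reference, here are hypothetical re-implementations with such polymorphic types (the Prelude's own id, fst and head have the same types; the names here are changed to avoid clashing with them):

```haskell
myId :: a -> a
myId x = x

myFst :: (a, b) -> a
myFst (x, _) = x

myHead :: [a] -> a
myHead (x:_) = x
```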

9.1.1 More About Lists

List literals can be written using the familiar [x,y,z] syntax. However, that notation is just a shorthand as lists are actually built up of the list constructors [] and (:). These constructors are also used when pattern matching lists. Here are some examples of lists:

Abbreviation | Full list | Type
[1,2,3] | 1:2:3:[] | [Int]
[[1],[2],[3]] | (1:[]):(2:[]):(3:[]):[] | [[Int]]
"foo" | 'f':'o':'o':[] | [Char], also known as String

There’s also a range syntax for lists:

Range | Result
['a' .. 'z'] | "abcdefghijklmnopqrstuvwxyz"
[0 .. 9] | [0,1,2,3,4,5,6,7,8,9]
[0, 5 .. 25] | [0,5,10,15,20,25]
[x .. y] | everything from x to y
[9 .. 3] | [] (empty, since 9 > 3)
[y, y-1 .. x] | everything from y to x in decreasing order
[9,8 .. 3] | [9,8,7,6,5,4,3]

List comprehensions are another powerful way to create lists:

Comprehension Result
[x^3 | x <- [1..3]] [1,8,27]
[x^2 + y^2 | x <- [1..3], y <- [1..2]] [2,5,5,8,10,13]
[y | x <- [1..10], let y = x^2, even x, y<50] [4,16,36]
[c | c <- "Hello, World!", elem c ['a'..'z']] "elloorld"

In general, [f x | x <- xs, p x] is the same as map f (filter p xs). Also, [y | x <- xs, let y = f x] is the same as [f x | x <- xs]. Any combination of <-, let and [f x | ...] is possible.

Just one more note on the syntax. Recall that (:) associates to the right, e.g. True:False:[] is the same as True:(False:[]). (In fact, (True:False):[] is not even a list, because True:False attempts to add True in front of False which is not a list.)

9.2 Functions

The basic form of function definition is:

For example:

Functions taking multiple arguments are defined in a similar manner. Note what the type of a multi-argument function looks like.

Functions can be polymorphic, can take multiple arguments, and can even take functions as arguments. Here are more examples:
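For instance (hypothetical functions):

```haskell
-- multiple arguments
between :: Int -> Int -> Int -> Bool
between low high x = low <= x && x <= high

-- a polymorphic, higher-order function: takes a function as an argument
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)
```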

More sophisticated functions can be defined using pattern matching. We can pattern match on the constructors of algebraic datatypes like Maybe, and also lists with the constructors [] and (:).

Yet more sophistication can be achieved with guards. Guards let you define a function case-by-case based on tests of type Bool. Guards are useful in situations where pattern matching can’t be used. Of course, guards can also be combined with pattern matching:
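For example, guards combined with pattern matching (a hypothetical function):

```haskell
summary :: [Int] -> String
summary [] = "empty"
summary (x:_)
  | x < 0     = "starts negative"
  | x == 0    = "starts with zero"
  | otherwise = "starts positive"
```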

Case expressions let us pattern match inside functions. They are useful in situations where the result of one function depends on the result of another and we want to match the pattern on the output of the other function:
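A sketch of this situation (the function and its price table are hypothetical; lookup is from the Prelude):

```haskell
priceOf :: String -> Int
priceOf item = case lookup item prices of
                 Just p  -> p
                 Nothing -> 0
  where prices = [("tea", 3), ("coffee", 4)]
```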

Let-expressions enable local definitions. Where-clauses work similarly to lets. For example:
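The same hypothetical function, written once with let and once with where:

```haskell
circleArea :: Double -> Double
circleArea r = let rsquared = r * r
               in pi * rsquared

circleArea' :: Double -> Double
circleArea' r = pi * rsquared
  where rsquared = r * r
```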

Lambda expressions are another occasionally useful syntax for defining functions. Lambda expressions represent anonymous (unnamed) functions. They can be used for defining local functions that are typically used only once.

Note that f x = y is the same thing as f = \x -> y.

Finally, binary operators have sections. Sections are partially applied operators. The section of an operator is obtained by writing the operator and one of its arguments in parentheses. For example, (*2) multiplies its argument by 2 from the right, e.g. (*2) 5 ==> 5 * 2. A fractional number (e.g. a Double) can be inverted with the section (1/), e.g. (1/) 2 ==> 0.5.
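Hypothetical one-liners illustrating a lambda and a section:

```haskell
addOneAll :: [Int] -> [Int]
addOneAll = map (\x -> x + 1)   -- a lambda, used in just this one place

halves :: [Double] -> [Double]
halves = map (/2)               -- a section of the / operator
```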

9.3 Functional Programming

Haskell is a functional programming language, which means that functions can be passed in as arguments and returned from functions. As a programming paradigm, functional programming aims to build programs by combining simple functions together to form larger and larger ones.

The most often presented example of functional programming is functional list manipulation using higher-order functions (functions that take functions as arguments) like map and filter. Here’s one example from part 1:
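A sketch in that spirit (hypothetical function): the squares of the even elements, built from map and filter.

```haskell
evenSquares :: [Int] -> [Int]
evenSquares xs = map (\x -> x * x) (filter even xs)
```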

We also encountered other functional programming patterns in part 1, like partial application:

Also, function composition:

And finally, folds:
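Sketches of these three patterns (the function names are hypothetical):

```haskell
-- partial application: map applied to only its first argument
doubleAll :: [Int] -> [Int]
doubleAll = map (2*)

-- function composition with (.)
countOdds :: [Int] -> Int
countOdds = length . filter odd

-- a fold: combining all elements with an operator
total :: [Int] -> Int
total = foldr (+) 0
```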

9.4 Recursion

To implement a function that uses repetition in Haskell, you need recursion. Haskell has no loops like other programming languages. Here are some simple recursive functions in Haskell:
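For instance, the classic recursive factorial (assuming a non-negative argument):

```haskell
factorial :: Int -> Int
factorial 0 = 1
factorial n = n * factorial (n - 1)
```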

To consume or produce a list you often need recursion. Here are the implementations of map and filter as examples of recursive list processing:
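These are the standard definitions, renamed here to avoid clashing with the Prelude versions:

```haskell
myMap :: (a -> b) -> [a] -> [b]
myMap _ []     = []
myMap f (x:xs) = f x : myMap f xs

myFilter :: (a -> Bool) -> [a] -> [a]
myFilter _ [] = []
myFilter p (x:xs)
  | p x       = x : myFilter p xs
  | otherwise = myFilter p xs
```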

Sometimes, a recursive helper function is needed in case you need to keep track of multiple pieces of data.
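For example, a hypothetical sum that carries a running total in a helper's extra argument:

```haskell
sumWithAccumulator :: [Int] -> Int
sumWithAccumulator xs = go 0 xs
  where go acc []     = acc
        go acc (y:ys) = go (acc + y) ys
```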

Here’s a final example utilizing guards, pattern matching, a helper function and recursion:
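A sketch in that spirit (a hypothetical function): the longest strictly increasing prefix of a list, using guards, pattern matching, a helper and recursion.

```haskell
increasingPrefix :: [Int] -> [Int]
increasingPrefix []     = []
increasingPrefix (x:xs) = x : go x xs
  where
    go _ [] = []
    go prev (y:ys)
      | y > prev  = y : go y ys
      | otherwise = []
```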

9.5 Type Classes

The following functions are parametrically polymorphic:

Parametrically polymorphic functions always work the same way, no matter what types we’re working with. This means that we can’t define a special implementation of id just for the Int type, or fst for the type (Bool, String).

By contrast, ad hoc polymorphism allows different types to have different implementations of the same function. Ad hoc polymorphism in Haskell can be achieved by defining a type class and then declaring instances of that type class for various types. Ad hoc polymorphism is a handy way for expressing a common set of operations, even when the implementation of the operations depends on the type they’re acting on.

Functions that use ad hoc polymorphism have a class constraint in their types. Here are some examples:

A type like Num a => a -> a means: for any type X that’s a member of the Num class, this function has type X -> X. In other words, we can invoke negate on any number type, but not on other types:
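For instance, a hypothetical class-constrained function that works for Int, Double, or any other member of Num:

```haskell
twice :: Num a => a -> a
twice x = x + x
```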

Here’s a summary of some useful type classes from the standard library.

  • Comparison
    • Eq is for equality comparison. It contains the == operator
    • Ord is for order comparison. It contains the ordered comparison operators like < and >=, and functions like max and min.
  • Numbers
    • Num is for all number types. It contains +, -, * and negate.
    • Integral is for whole number types. Most notably, it contains integer division div.
    • Fractional is for number types that support division, /
  • Converting values to strings
    • Show contains the function show :: Show a => a -> String that converts values to strings
    • Read contains the function read :: Read a => String -> a that is the inverse of show

Sometimes, multiple class constraints are needed. For instance here:
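A sketch of such a function (hypothetical): the addition needs Num, while the comparison needs Ord, so both constraints appear in the type.

```haskell
sumIsPositive :: (Num a, Ord a) => a -> a -> Bool
sumIsPositive x y = x + y > 0
```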

Now that we’ve seen some classes and types, let’s look at the syntax of declaring classes and instances. Here are two class definitions:

Consider the following data structures:

All of these have sizes that we can count, but we need to perform the operation differently:

We can also easily declare Eq instances for Numbers and IntList:

However, since the Tree datatype is parameterized over the element type a, we need an Eq a instance in order to have an Eq (Tree a) instance. This is achieved by adding a class constraint to the instance declaration. This is called an instance hierarchy.
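Putting the pieces together, class and instance declarations might look like this (a sketch: Size is a hypothetical class, and the constructor names are illustrative):

```haskell
class Size a where
  size :: a -> Int

data IntList = Empty | Node Int IntList

instance Size IntList where
  size Empty       = 0
  size (Node _ ns) = 1 + size ns

data Tree a = Leaf a | Branch (Tree a) (Tree a)

-- the class constraint "Eq a =>" builds an instance hierarchy:
-- comparing trees requires being able to compare their elements
instance Eq a => Eq (Tree a) where
  Leaf x     == Leaf y     = x == y
  Branch l r == Branch s t = l == s && r == t
  _          == _          = False
```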

9.5.1 Deriving

Some standard type classes, most notably Show, Read, Eq and Ord can be derived, that is, you can ask the compiler to generate automatic instances for you. For example we could have derived all of these classes for our earlier Numbers example.
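For instance, with a hypothetical enumeration type:

```haskell
data Suit = Club | Diamond | Heart | Spade
  deriving (Show, Read, Eq, Ord)
-- the derived Ord follows constructor order: Club < Diamond < Heart < Spade
```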

9.6 Quiz

What is the type of ('c',not)?

  1. [Char]
  2. [Bool]
  3. (Char,Bool -> Bool)
  4. (Char,Bool)
  5. It is a type error.

What is the type of ['c',not]?

  1. [Char]
  2. [Bool]
  3. (Char,Bool -> Bool)
  4. (Char,Bool)
  5. It is a type error.

Which of these is a value of the following type?

  1. X "foo"
  2. Y "foo"
  3. Z (X 1)
  4. X (Z 1)

What is the type of this function?

  1. Maybe a -> a
  2. [Maybe a] -> a
  3. [Maybe a] -> Bool
  4. [Maybe Bool] -> Bool

What is the type of this function?

  1. (Num a, Eq a) => a -> a -> Bool
  2. Num a => a -> a -> Bool
  3. Eq a => a -> a -> Bool
  4. a -> a -> Bool

9.7 Working on the Exercises

Here’s a short recap of how to work on the exercises. The system is the same as for part 1 of the course.

  1. Clone the GitHub repository https://github.com/moocfi/haskell-mooc
  2. Go to the exercises/ directory
  3. Run stack build to download dependencies
  4. Edit the file Set9a.hs
  5. Check your answers by running stack runhaskell Set9aTest.hs
  6. Return your answers on the Submit page
  7. Repeat as necessary. You can work on the exercise sets in any order, and you can return the sets as many times as you want!
  8. You can see total score on the My status page

9.8 Exercises

10 Lecture 10: Reductionism

  • Purity
  • Laziness
  • Haskell evaluation

10.1 Laziness & Purity

Purity and laziness were mentioned as key features of Haskell in the beginning of part 1. Let’s take a closer look at them.

Haskell is a pure functional language. This means that the value f x y is always the same for given x and y. In other words, the values of x and y uniquely determine the value of f x y. This property is also called referential transparency.

Purity also means that there are no side effects: you can’t have the evaluation of f x y read a line from the user - the line would be different on different invocations of f and would affect the return value, breaking referential transparency! Obviously you need side effects to actually get something done. We’ll get back to how Haskell handles side effects later.

Haskell is a lazy language. This means that a value is not evaluated if it is not needed. An example illustrates this best. Consider these two functions:

Evaluating f 1 does not halt due to the infinite recursion. However, this works:
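The two functions and the successful call might look something like this (a sketch, since the original definitions aren't shown here):

```haskell
f :: Int -> Int
f x = f (x + 1)     -- infinite recursion: evaluating f 1 never finishes

g :: Int -> Int
g _ = 42            -- never uses its argument

works :: Int
works = g (f 1)     -- fine under laziness: f 1 is never evaluated
```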

Laziness is not a problem because Haskell is pure. Only the result of the function matters, not the side effects. So if the result of a function is not used, we can simply not evaluate it without changing the meaning (semantics) of a program. Well okay, sometimes we get a terminating program instead of one that goes on for ever, but adding laziness never makes a functioning Haskell program break.

If you’re interested in the theory behind this, check out the Church-Rosser theorem or the Haskell Wiki article Lazy vs. non-strict.

10.2 Equational Reasoning

Referential transparency, the feature that an expression always returns the same value for the same inputs, is a very powerful property that we can leverage to reason about programs.

In a C-style language, we might write a procedure that may not always return the same value for the same arguments:

int c = 0;
int funny(int x) {
  return x + c++;
}

The expression c++ increments the value of c and returns the old value of c. The next time it is evaluated, the value of c has increased by one. This means that depending on the current value of c, funny(0) might return 0, 1, 2, or any other integer value. (It might even return negative values if c overflows!)

In some situations this kind of behaviour with side-effects may be useful, but there are also times when it is more important to be able to reason about the code easily. The advantage of pure functions is that they can be analyzed using basic mathematical techniques. Sometimes applying math to our functions can even reveal simplifications or optimisations we otherwise wouldn’t have thought about.

Consider the following expression:
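The expression under discussion is along these lines (reconstructed; subtract 1 is written where the facts below say (-1), since a bare - would parse as negation):

```haskell
pipeline :: [Int] -> [Int]
pipeline = map (+1) . reverse . map (subtract 1)
```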

This expression can be simplified to just reverse. We begin by establishing some helpful facts (or lemmas). First, suppose that we know

  1. map id === id
  2. map f . map g === map (f.g)
  3. reverse . map f === map f . reverse

The fourth fact that we’re going to need is the following:

  4. (+1) . (-1) === id

We can prove fact 4 by reasoning about how (+1) . (-1) behaves for an arbitrary input x:
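The chain of equations can be written out as follows, with a checkable version alongside (subtract 1 plays the role of (-1)):

```haskell
--   ((+1) . (-1)) x
-- = (+1) ((-1) x)     -- definition of (.)
-- = (x - 1) + 1       -- applying the sections
-- = x                 -- arithmetic
-- = id x              -- definition of id
fact4 :: Int -> Bool
fact4 x = ((+1) . subtract 1) x == id x
```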

Because we didn’t assume anything about x, we may conclude that the above chain of equations holds for every x. Thus,

For those who are familiar with the technique of proof by induction, it is a fun exercise to prove the first three facts also. This course doesn’t discuss induction proofs, though, so don’t sweat if you don’t know induction.

Now, from facts 1-4 it follows that the whole expression is equal to plain reverse.

This course won’t go into details about proving things about programs, but it’s good to know that pure functional programming is very compatible with analysis like this.

10.3 Infinite Lists

The benefits of laziness are best demonstrated with some examples involving infinite lists. Let’s start with repeat 1, which generates an infinite list of 1s. If we try to tell GHCi to print the value repeat 1, it will just keep printing 1s for ever until we interrupt it using Control-C:

However, due to laziness, we can work with infinite lists and write computations that end. We just need to use a finite number of elements from the infinite list. Here are some examples:
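For instance (hypothetical names):

```haskell
firstThree :: [Int]
firstThree = take 3 (repeat 1)    -- [1,1,1]

thousandth :: Int
thousandth = repeat 1 !! 1000     -- indexing into the infinite list is fine too
```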

An infinite list that just repeats one element can sometimes be necessary, but it’s kind of pointless. Let’s look at some more useful infinite lists next. You can use the [n..] syntax to generate an infinite list of numbers, starting from n:

The function cycle repeats elements from the given list over and over again. It can be useful when dealing with rotations or cycles.
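For example (hypothetical names):

```haskell
someNaturals :: [Int]
someNaturals = take 5 [10..]        -- [10,11,12,13,14]

alternating :: String
alternating = take 7 (cycle "ab")   -- "abababa"
```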

10.3.1 Example: Transaction Numbers

As a more concrete example of how cycle is useful, let's look at computing the check digit of Finnish bank transfer transaction numbers (viitenumero). A transaction number consists of any number of digits, followed by a single check digit. To verify the check digit, multiply the other digits (from right to left) by the numbers 7, 3, 1, 7, 3, 1 and so on, and sum the results. If the sum plus the check digit is divisible by 10, the number is valid.

Here’s a concrete example. 116127 is a valid transaction number. The computation goes like this:

digits:       1  1  6  1  2
              *  *  *  *  *
multipliers:  3  7  1  3  7
              3+ 7+ 6+ 3+14 = 33
check digit is 7, 33+7=40 is divisible by 10, valid

Here’s the Haskell code for a transaction number checker. Note how we use the infinite list cycle [7,3,1] for the multipliers.
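A checker in that spirit might look like this (a sketch: the function name is hypothetical, and the input is assumed to be a non-empty list of digits ending with the check digit):

```haskell
valid :: [Int] -> Bool
valid digits = total `mod` 10 == 0
  where
    checkDigit = last digits
    payload    = init digits
    -- right-to-left digits paired with the infinite list cycle [7,3,1]
    total      = checkDigit + sum (zipWith (*) (cycle [7,3,1]) (reverse payload))
```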

10.3.2 Example: Finding a Power

Finally, here’s how you would find the first power of 3 that’s larger than 100.
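A sketch of such a computation: head, filter and map only evaluate as much of the infinite list as is needed.

```haskell
firstBigPowerOf3 :: Int
firstBigPowerOf3 = head (filter (>100) (map (3^) [0..]))
```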

Let’s go through how this works step by step. Note how map and filter are processing the list lazily, one element at a time, as needed. This is similar to how generators or iterators work in languages like Python or Java.

10.4 How does Haskell Work?

Laziness will probably feel a bit magical to you right now. You might wonder how it can be implemented. Haskell evaluation is remarkably simple, it’s just different than what you might be used to. Let’s dig in.

In most other programming languages (like Java, C or Python), evaluation proceeds inside-out. Arguments to functions are evaluated before the function.

Haskell evaluation proceeds outside-in instead of inside-out. The definition of the outermost function in an expression is applied without evaluating any arguments. Here’s a concrete example with toy functions f and g:

Inside-out (normal) evaluation:

Haskell outside-in evaluation:
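The original toy definitions aren't shown here, so here is a hypothetical pair with the behaviour described, together with both evaluation orders as comments:

```haskell
f :: Int -> Int -> Int
f x y = 42              -- uses neither argument

g :: Int -> Int
g z = z * 1000

-- Inside-out:  f (g 1) (1234*1234)
--          ==> f (1*1000) (1234*1234)
--          ==> f 1000 1522756
--          ==> 42
-- Outside-in:  f (g 1) (1234*1234)
--          ==> 42      -- neither 1*1000 nor 1234*1234 is computed
```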

Note how the unused calculations 1234*1234 and 1*1000 didn’t get evaluated. This is why laziness is often helpful.

10.4.1 Pattern Matching Drives Evaluation

Let’s look at a more involved example, with pattern matching and more complex data (lists). Pattern matching drives Haskell evaluation in a very concrete way, as we’ll see. Here are some functions that we’ll use. They’re familiar from the Prelude, but I’ll give them simple definitions.
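Simple definitions along these lines, shadowing the Prelude names for clarity:

```haskell
import Prelude hiding (not, map, length)

not :: Bool -> Bool
not True  = False
not False = True

map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs

length :: [a] -> Int
length []     = 0
length (_:xs) = 1 + length xs
```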

Here’s the inside-out evaluation of an expression:

Here’s how the evaluation proceeds in Haskell. Note how it’s not strictly outside-in, since we sometimes need to evaluate inside arguments to be able to know which pattern is matched.

Note that we didn’t need to evaluate any of the not applications.

Let’s introduce some terminology. We say that pattern matching forces evaluation. When Haskell evaluates something, it evaluates it to something called weak head normal form (WHNF). WHNF basically means a value that can be pattern matched on. An expression is in WHNF if it can’t be evaluated on its top level. This means it either:

  • is a constant, for example: 1
  • has a constructor at the top level, for example: False, Just (1+1), 0:filter f xs
  • is a function, for example: (\x -> 1+x)

The most notable class of expressions that is not in WHNF is function applications. If an expression consists of a function (that is not a constructor), applied to some arguments, it is not in WHNF. We must evaluate it in order to get something pattern matchable.

In the previous example we couldn’t evaluate length (map not (False:[])) at the top level because the argument to length wasn’t in WHNF. When we apply the second equation of map, we get length (not False : map not []), and now the argument to length is in WHNF since there is a constructor, (:), at the top level. This is a bit more evident if we switch from infix to prefix notation and write the argument to length as (:) (not False) (map not []).

In practice, pattern matching is not the only thing that forces evaluation. Primitives like (+) also force their arguments.

Instead of forcing, some sources talk about strictness, we can say for instance that (+) is strict in both arguments.

10.4.2 A Word About Sharing

There’s one more thing about Haskell evaluation. Any time you give a value a name, it gets shared. This means that every occurrence of the name points at the same (potentially unevaluated) expression. When the expression gets evaluated, all occurrences of the name see the result.

Let’s look at a very simple example.
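Assume a squaring function and the expression square (2+2):

```haskell
square :: Int -> Int
square x = x * x   -- the name x occurs twice in the body
-- consider evaluating: square (2+2)
```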

Based on the previous sections, you might imagine evaluation works like the following. The evaluation is first represented textually, and then visually, as an expression tree.

However, what really happens is that the expression 2+2 named by the variable x is only computed once. The result of the evaluation is then shared between the two occurrences of x inside square. So here’s the correct evaluation, first textually, and then visually. Note how now instead of an expression tree, we have an expression graph. This is why Haskell evaluation is sometimes called graph reduction.

As another example, consider the function f below and its evaluation.

Haskell does not compute 1+1 twice because it was named, and the name was used twice. We can contrast this with another function that takes two arguments:
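Sketches of the two situations (hypothetical definitions):

```haskell
-- one name, used twice: in f (1+1), the sum is evaluated only once
f :: Int -> Int
f x = x + x

-- two names for equivalent expressions: in g (1+1) (1+1), both are evaluated
g :: Int -> Int -> Int
g x y = x + y
```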

Here we have two different names for equivalent expressions, and Haskell doesn’t magically share them. Automatically sharing equivalent expressions is an optimization called Common Subexpression Elimination (CSE). You can learn a bit more about CSE in Haskell here.

You can name things via

  • Function arguments
  • let ... in ...
  • where

Combined with laziness, sharing means that a name gets evaluated at most once.

10.4.3 Further Examples

You’ll find below a slightly contrived recursive definition of the function even. It will illustrate the concepts of forcing and sharing.
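A contrived definition along these lines (a sketch; as written it only terminates for non-negative even inputs):

```haskell
import Prelude hiding (even)

even :: Int -> Bool
even x = x == 0 || even (x - 2)
```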

Firstly, note that || forces its left argument, but not its right argument. (In other words, || is strict in its left argument.) This is because we only need to evaluate the left argument of || in order to know which equation applies. This means by extension that even forces its first argument:

  • || forces x==0
  • x==0 forces x

Now let’s evaluate the expression even 2 to WHNF.

Note that with this alternate definition even would not have worked. Can you tell why?

Now we can really understand what’s going on in the infinite list example from earlier. Let’s use these definitions:
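The relevant list functions, written out with their standard definitions (shadowing the Prelude names):

```haskell
import Prelude hiding (head, filter, map)

head :: [a] -> a
head (x:_) = x

filter :: (a -> Bool) -> [a] -> [a]
filter _ [] = []
filter p (x:xs)
  | p x       = x : filter p xs
  | otherwise = filter p xs

map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs
```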

And here we go:

Whew.

10.5 Working with Infinite Lists

Functions that work with lists often have the best performance when they’re written in such a way that they utilize laziness. One way to try to accomplish this is to write list-handling functions that work well with infinite lists.

To write a function that transforms an infinite list, you need to write a function that only looks at a limited prefix of the input list, then outputs a (:) constructor, and then recurses. Here’s a first example.
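For example, an incrementing function in this style (hypothetical name):

```haskell
incrementAll :: [Int] -> [Int]
incrementAll []     = []
incrementAll (x:xs) = x + 1 : incrementAll xs  -- outputs a (:), then recurses
```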

A good heuristic for writing functions that work well with infinite lists is: can the head of the result be evaluated cheaply? Here are two examples of functions that don’t work with infinite inputs. In the case of mapTailRecursive, the problem is that it needs to process the whole input before being in WHNF. In the case of myDrop, the problem is that it uses the function length, which doesn’t work for infinite lists since it tries to iterate to the end of the list.
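Sketches of the two problematic functions (the original definitions aren't shown; these exhibit the failure modes described):

```haskell
-- tail-recursive map: nothing is output until the whole input is consumed
mapTailRecursive :: (a -> b) -> [a] -> [b]
mapTailRecursive f xs = go xs []
  where go []     acc = reverse acc
        go (y:ys) acc = go ys (f y : acc)

-- a drop that calls length, which never finishes on an infinite list
myDrop :: Int -> [a] -> [a]
myDrop 0 xs = xs
myDrop n xs
  | n >= length xs = []
  | otherwise      = myDrop (n - 1) (tail xs)
```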

Pretty much all the list functions in the standard library are written in this form, for example:

10.6 Interlude: Adding Strictness

Remember foldr from part 1? Let’s have a look at its cousin foldl. Here’s the definition of foldl for lists (it’s actually part of the Foldable typeclass and so works for various other types too). While foldr processes a list right-to-left, foldl processes a list left-to-right. To be a bit more exact, foldr associates to the right while foldl associates to the left. Note the difference in the next example:
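For instance, with the non-associative operator (-), the two folds give different answers:

```haskell
rightAssoc :: Int
rightAssoc = foldr (-) 0 [1,2,3]   -- 1 - (2 - (3 - 0)) = 2

leftAssoc :: Int
leftAssoc = foldl (-) 0 [1,2,3]    -- ((0 - 1) - 2) - 3 = -6
```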

Here are the definitions of foldl and foldr:
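The list versions, mirroring the standard definitions (shadowing the Prelude names):

```haskell
import Prelude hiding (foldr, foldl)

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ y []     = y
foldr f y (x:xs) = f x (foldr f y xs)

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl _ y []     = y
foldl f y (x:xs) = foldl f (f y x) xs
```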

Since foldr f y (x:xs) ==> f x (foldr f y xs), evaluation reaches f after just one step. Hence, foldr works nicely with lazy or short-circuiting operations:
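For example (a sketch): (&&) short-circuits, so a foldr can stop early even on an infinite list.

```haskell
allPositive :: Bool
allPositive = foldr (\x acc -> x > 0 && acc) True [5, -1 ..]
-- terminates: as soon as an element is <= 0, (&&) ignores the rest of the fold
```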

However, foldl needs to process the whole list in order to produce a (WHNF) value. The reason is that foldl remains in the leftmost-outermost position for as long as its list argument is non-empty, so lazy evaluation keeps applying foldl. Only after the list becomes empty does the evaluation proceed to simplifying the folded values.

So why use foldl at all? Let’s return to our first fold example again. Now, since + is a strict operation, both types of fold need to build up an expression with lots of +s. The Haskell implementation needs to track this expression in memory, which is why a problem like this is called a space leak.

Now let’s instead look at what happens when we use foldl', a version of foldl that forces its second argument!
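For example (Data.List provides foldl'):

```haskell
import Data.List (foldl')

bigSum :: Int
bigSum = foldl' (+) 0 [1..100000]   -- the sum is accumulated incrementally
```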

Now the work is performed incrementally while scanning the list. No space leak! Sometimes too much laziness can cause space leaks, and a bit of strictness can fix them.

You can find foldl' in the Data.List module, and it works just like this. But how could one implement foldl'? We certainly know by now how to do it for a specific type, say Int. We just add a pattern match on the second argument that doesn’t change the semantics of the function.
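A sketch for Int: the extra literal pattern in the middle equation forces the accumulator to WHNF, but every input still produces the same result.

```haskell
foldlInt :: (Int -> Int -> Int) -> Int -> [Int] -> Int
foldlInt _ z []     = z
foldlInt f 0 (x:xs) = foldlInt f (f 0 x) xs   -- matching the literal 0 forces z
foldlInt f z (x:xs) = foldlInt f (f z x) xs
```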

To write a generic implementation of foldl' we need to introduce a new built-in function, seq. The call seq a b evaluates to b but forces a into WHNF. Here are some examples of using seq in GHCi. To demonstrate what gets evaluated, we use the special value undefined, which causes an error if something tries to evaluate it into WHNF.
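A couple of examples with defined values (forcing undefined with seq would raise an error instead):

```haskell
two :: Int
two = seq 1 2          -- forces 1 (already a value), then evaluates to 2

three :: Int
three = seq (1 + 1) 3  -- forces 1+1 to 2, then evaluates to 3
```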

As an example of using seq in a function, here’s a version of head that doesn’t work for infinite lists (since it evaluates the last element of the list):
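A sketch of such a function (hypothetical name):

```haskell
strictHead :: [a] -> a
strictHead xs = seq (last xs) (head xs)  -- forces the last element before returning the head
```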

Let’s play around with it in GHCi:

Finally, here’s a definition for foldl'. Note how we need to introduce sharing of a new variable, z', to be able to make seq evaluate the new value and then use it in a recursive call. The new definition is also used in a more detailed evaluation of foldl' (+) 0 [1,2,3] below.
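The classic definition looks like this (renamed to avoid clashing with Data.List):

```haskell
myFoldl' :: (b -> a -> b) -> b -> [a] -> b
myFoldl' _ z []     = z
myFoldl' f z (x:xs) = let z' = f z x          -- share the new accumulator
                      in z' `seq` myFoldl' f z' xs  -- force it, then recurse
```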

We won’t dive deeper into this subject on this course, but it’s important that you’re aware that seq exists. You can find more about seq on the Haskell Wiki and learn more about when it is necessary to add strictness in Real World Haskell. Often it’s nicer to use bang patterns instead of seq, as discussed by FPComplete and Real World Haskell.

10.7 Newtype Declarations

Recall lecture 7. Sometimes we need boxed types. There’s a special keyword newtype that can be used instead of data when a boxed type is needed. newtype expects exactly one constructor, with exactly one field. For instance,
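For instance, a single-constructor, single-field wrapper (the Money type is also used in the discussion below):

```haskell
newtype Money = Money Int
  deriving (Show, Eq)
```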

However, the following won’t work, you need data:
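For example, multiple constructors or multiple fields require data (the type names here are illustrative):

```haskell
-- newtype Shape = Circle Double | Square Double  -- rejected: two constructors
-- newtype Point = Point Double Double            -- rejected: two fields
data Shape = Circle Double | Square Double
data Point = Point Double Double
```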

So what’s the difference? In terms of writing code, nothing. You work with a newtype exactly as you would with a data. However, the memory layout is different. Using data introduces an indirection layer (the constructor), but using newtype doesn’t. The indirection for data is necessary to support multiple constructors and multiple fields. An illustration:

This difference has many repercussions. First of all, newtype is more efficient: the type can be said to “disappear” when compiling. The type is still checked though, so you get type safety without any performance impact. Secondly, newtypes are strict. Concretely, this means that Money x is in weak head normal form only if x is in WHNF. This can be witnessed in GHCi:

So when should you use newtype? In general it’s best to use newtype whenever you have a single-field single-constructor datatype. However nothing will go catastrophically wrong if you always use data. The newtype pattern is also often used when you need to define a different type class instance for a type. Here’s an example that defines a number type with an inverted ordering
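A sketch of such a wrapper (hypothetical name): the Ord instance delegates to the ordinary Int comparison with the arguments flipped.

```haskell
newtype Inverted = Inverted Int
  deriving (Show, Eq)

instance Ord Inverted where
  compare (Inverted x) (Inverted y) = compare y x
```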

10.8 Something Fun: Tying the Knot

Now that we know about sharing and path copying, we can make our own cyclic datastructures. Remember the cycle examples from the list lecture?

This is what it looks like in memory:

Earlier it was said that Haskell data forms directed graphs in memory. This is an example of a directed graph with a cycle.

How can we define structures like this? We just give a value a name, and refer to that name in the value itself. That is, the value is recursive or self-referential. This trick is known as tying the knot. A simple example:
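For instance:

```haskell
xs :: [Int]
xs = 1 : 2 : xs   -- the name xs is used inside its own definition
```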

Note how we use the name xs inside the definition of xs. When we make a recursive definition like this, sharing causes it to turn into a cyclic structure in memory.

A more fun example: a simple adventure game where the world is a self-referential structure. Note how the cyclic structure is built with local definitions that refer to each other.
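A sketch of the setup matching the diagram below (room descriptions abbreviated, and the definitions made top-level here for simplicity):

```haskell
data Room = Room String [(String, Room)]

meadow, cave, tunnel, pit, treasure :: Room
meadow   = Room "It's..."    [("Stay", meadow), ("Enter cave", cave)]
cave     = Room "You are..." [("Exit", meadow), ("Go deeper", tunnel)]
tunnel   = Room "This is..." [("Go back", cave), ("Go left", pit), ("Go right", treasure)]
pit      = Room "You fall..." []
treasure = Room "A green..." [("Go back", tunnel)]
```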

Here’s what the world of the game looks like in memory:

               ,-----------------,
               v                 |
         +-----------------------|-----------------+
meadow-->|Room "It's..." ["Stay" o, "Enter cave" o]|
         +---------------------------------------|-+
                                    ^            v
         +--------------------------|----------------+
cave---->|Room "You are..." ["Exit" o, "Go deeper" o]|
         +-----------------------------------------|-+
                                       ^           v
         +-----------------------------|----------------------------+
tunnel-->|Room "This is..." ["Go back" o, "Go left" o, "Go right" o]|<--------,
         +------------------------------------------|-------------|-+         |
                                                    |             |           |
                     ,------------------------------'             |           |
                     v                                            v           |
         +---------------------+                +-----------------------------|-+
pit----->|Room "You fall..." []|    treasure--->|Room "A green..." ["Go back" o]|
         +---------------------+                +-------------------------------+

We’ve now seen three types of recursion. Recursive functions call themselves. Recursive types allow us to express arbitrarily large structures. Recursive values are one way to implement infinite structures.

10.9 Something Fun: Debug.Trace

Even though Haskell is a pure programming language, we can sometimes gain insights by sprinkling in a bit of impurity.

We can use the function trace :: String -> a -> a from the module Debug.Trace to peek into Haskell evaluation. The expression trace "message" x is the same as x, but prints message when it is evaluated (forced). We can use trace to witness the laziness of the || operator:
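A sketch of this experiment:

```haskell
import Debug.Trace (trace)

demo :: Bool
demo = trace "left" True || trace "right" False
-- evaluating demo prints only "left": (||) never forces its right argument here
```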

We can also have a look at when list elements are evaluated. Note how length doesn’t need to evaluate the elements of the list, and sum needs to evaluate all of them. (To be precise, head xs doesn’t actually evaluate the first element of xs, but returns it to GHCi, which evaluates it in order to show it.)

Debug.Trace also offers useful variants of trace. A notable one is traceShowId x which prints show x and evaluates to x. Let’s verify the evaluation of our previous head-filter-map example using traceShowId. Note how even though we map traceShowId over the infinite list [0..], only 6 values are actually evaluated. The last 243 is the returned value, not a trace print.

Debug.Trace is especially useful when you have an infinite recursion bug. Here’s an example:
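A buggy function along these lines (reconstructed from the discussion that follows):

```haskell
sumEverySecond :: Int -> Int
sumEverySecond 0 = 0
sumEverySecond n = n + sumEverySecond (n - 2)
-- sumEverySecond 4 works (4+2+0), but sumEverySecond 5 recurses through
-- 5, 3, 1, -1, ... and never reaches the base case
```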

We can debug this by adding a trace to wrap the whole recursive case.

A ha! The problem is that our recursion base case of sumEverySecond 0 is not enough to stop the recursion.

Finally, a word of caution. Using trace, and especially traceShowId, can cause things that would not otherwise get evaluated to get evaluated. For example:

So feel free to use Debug.Trace when working on the exercises, but try to leave trace calls out of your final answers. Some exercise sets check your imports and disallow Debug.Trace.

We’ll see a more principled way of dealing with side effects in the next lecture!

10.10 Quiz

Which of these statements is true?

  1. reverse . reverse . reverse === reverse
  2. reverse . reverse === reverse
  3. reverse . id === id

Which of these is an infinite list that starts with [0,1,2,1,2,1,2...]?

  1. cycle [0,1,2]
  2. 0:repeat [1,2]
  3. 0:cycle [1,2]
  4. 0:[1,2..]

What’s the next step when evaluating this expression?

  1. head (False : True : [])
  2. head (not True)
  3. head (False : map not (False:[]))
  4. head (not True : map not (False:[]))

Which of these values is not in weak head normal form?

  1. map
  2. f 1 : map f (2 : [])
  3. Just (not False)
  4. (\x -> x) True

Which of these statements about the following function is true?

  1. f is strict in its left argument
  2. f is strict in its right argument
  3. f forces both of its arguments
  4. None of the above

Does this function work with infinite lists as input? Why?

  1. No, because it includes a [] case, which is never reached.
  2. No, because it uses map, which evaluates the whole list.
  3. Yes, because it only looks at the first element of the list before producing a WHNF value.
  4. Yes, because it calls map, which works with infinite lists.

What about this one?

  1. No, because it uses map, which evaluates the whole list.
  2. No, because computing the head of the result needs the whole input list.
  3. Yes, because it doesn’t include a [] case
  4. Yes, because it calls map, which works with infinite lists.

10.11 Exercises

11 Lecture 11: RealWorld -> (a,RealWorld)

11.1 Contents

  • IO

11.2 You’ve Been Fooled!

Forget what we said about functional programming and purity. Actually, Haskell is the world’s best imperative programming language! Let’s start:

Reading input and writing output was easy enough. We can also read stuff over the network. Here’s a complete Haskell program that fetches some words from a URL using HTTP and prints them.

You can find this program in the course repository as exercises/Examples/FetchWords.hs, and you can run it like this:

$ cd exercises/Examples
$ stack runhaskell FetchWords.hs
word: haskell
word: for
word: ever

What’s going on here? Let’s look at the types:
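The relevant types from Prelude are:

```haskell
getLine  :: IO String        -- an IO operation that produces a String
putStrLn :: String -> IO ()  -- a function that returns an IO operation
```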

A value of type IO a is an operation that produces a value of type a. So getLine is an IO operation that produces a string. The () type is the so-called unit type; its only value is (). It’s mostly used when an IO operation doesn’t return anything (but rather just has side effects).

A comparison with Java (method) types might help:

Haskell type Java type
doIt :: IO () void doIt()
getSomething :: IO Int int getSomething()
force :: a -> b -> IO () void force(a arg0, b arg1)
mogrify :: c -> IO d d mogrify(c arg)

IO operations can be combined into bigger operations using do-notation.

11.2.1 Examples

You can find useful IO operations in the standard library modules Prelude and System.IO

Here’s an IO operation that asks the user for a string, and prints out the length of the string.
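A sketch of such an operation (the name query is an assumption):

```haskell
query :: IO ()
query = do
  putStrLn "Write something!"
  s <- getLine                                             -- capture the input
  putStrLn ("You wrote " ++ show (length s) ++ " characters")
```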

The value produced by the last line of a do block is the value produced by the whole block. Note how askForALine has the same type as getLine, IO String:
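For instance, askForALine could be defined like this (a sketch):

```haskell
askForALine :: IO String
askForALine = do
  putStrLn "Please give me a line"
  getLine         -- the last line: its String is what the whole block produces
```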

In addition to IO operations like query you can also run IO operations that produce values, like askForALine, in GHCi. You can use <- to capture the result of the operation into a variable if you want.

If you need to give your operation parameters, you can just make a function that returns an operation. Note how ask has a function type with a ->, just like a normal function. We also use normal function definition syntax to give the parameter the name question.
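A sketch of such a parameterized operation:

```haskell
ask :: String -> IO String
ask question = do
  putStrLn question   -- show the given question to the user
  getLine             -- produce whatever the user answers
```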

11.3 The Subtle return

The Haskell function return is named a bit misleadingly. In other languages return is a built-in keyword, but in Haskell it’s just a function. The return :: a -> IO a function takes a value and turns it into an operation that produces that value.

That doesn’t sound very useful does it? Combined with do-notation it is. Here we return a boolean according to whether the user answered Y or N:
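A sketch of such an operation (the name yesNoQuestion is illustrative):

```haskell
yesNoQuestion :: String -> IO Bool
yesNoQuestion question = do
  putStrLn question
  s <- getLine
  return (s == "Y")   -- wrap the Bool into an IO operation
```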

Note! This means that return does not stop execution of an operation (unlike return in Java or C). Remember that in do-blocks, the last line decides which value to produce. This means that this operation produces 2:

Let’s look at this another way. The do notation allows us to cause a sequence of side-effects, and finally to produce a value.

produceThree = do putStrLn "1"   -- side effect, produces (), which is ignored
                  return    2    -- no side effect, produces 2, which is ignored
                  getLine        -- side effect, produces a String, which is ignored
                  return    3    -- no side effect, produces 3, which is passed on
Prelude> final <- produceThree
1
this line is ignored
Prelude> final
3

Also note that these are the same operation:

Since return is a function, you should remember to parenthesize any complex expressions:

11.4 do and Types

Let’s look at the typing of do-notation in more detail. A do-block builds a value of type IO <something>. For example in

The lastOp must be of type IO X (for some X). The type of foo will also be IO X. Let’s look at an example with parameters next:

The lastOp must be of type Y -> IO X (so that lastOp arg has type IO X). The type of bar will be A -> B -> IO X (and inside bar we’ll have x :: A and y :: B).

If we use return:

The function quux will have type A -> IO B, where x :: A and value :: B.

Let’s look at the typing of <- next. If op :: IO X and you have var <- op, var will have type X. We’ve seen this in many GHCi examples.

The last line of a do cannot be foo <- bar. It can’t be let foo = bar either. The last line determines what the whole operation produces, so it must be an operation (for example, return something).

Here’s a worked example:

The typing rules guarantee that you cannot “escape” IO. Even though <- gives you an X from an IO X, you can only use <- inside a do-block. However, a do-block always builds a value of type IO Y. In other words: you can temporarily open the IO box, but you must return into it. “What happens in IO, stays in IO.”

We’ll talk more about what this means later. For now, it’s enough to know that if you have a function with a non-IO type, like for example myFunction :: Int -> [String] -> String, the function can not have IO happening inside it. It is a pure function.

11.5 Control Structures

For the following examples, we’ll need two new operations.

The usual tools of recursion, guards and if-then-else also work in the IO world. Here’s an IO operation that’s defined using a guard:
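A sketch of such an operation (names and messages are illustrative):

```haskell
printDescription :: Int -> IO ()
printDescription n
  | n < 0     = putStrLn "Negative"
  | even n    = putStrLn "Even"
  | otherwise = putStrLn "Odd"
```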

Here’s an operation that prints all numbers in a list using recursion and pattern matching:
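A sketch of such an operation (the name printList is illustrative):

```haskell
printList :: [Int] -> IO ()
printList []     = return ()   -- nothing left to print
printList (x:xs) = do
  print x                      -- print the first element
  printList xs                 -- then recurse on the rest
```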

Here are two slightly more complicated examples of recursive IO operations. They use the value produced by the recursive call. The operation readAndSum n reads n numbers from the user and prints their sum. The operation ask questions shows each string in questions to the user, reads a response, and returns a list of all the responses.

Additionally, we have some IO-specific control structures, or rather, functions. These come from the module Control.Monad.
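Useful ones include the following. The types are shown here specialized to IO; the real types are more general.

```haskell
when        :: Bool -> IO () -> IO ()        -- run the operation if True
unless      :: Bool -> IO () -> IO ()        -- run the operation if False
replicateM  :: Int -> IO a -> IO [a]         -- run n times, collect results
replicateM_ :: Int -> IO a -> IO ()          -- run n times, discard results
mapM_       :: (a -> IO b) -> [a] -> IO ()   -- run an operation per element
forM_       :: [a] -> (a -> IO b) -> IO ()   -- mapM_ with arguments flipped
```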

Using these, we can rewrite our earlier examples:

11.6 A Word About do and Indentation

It’s easy to run into weird indentation problems when using do-notation. Here are some rules of thumb to help you get it right.

The most important rule of do and indentation is: all operations in a do-block must start in the same column.

Some examples of this rule:

A related rule is: when an operation goes over multiple lines, indent the follow-up lines. If you don’t indent, it’ll look like a new operation!

Here’s one more example, with nested do-blocks, and two different valid indentations.

11.7 Let’s Write a Program

After all these short one-off examples, let’s turn to something a bit longer. Let’s write a program to fetch all type annotations from all .hs files. We use IO operations like readFile and listDirectory to read and find files, but also pure code like map and filter to do the actual processing. First off, here’s a recap of the library operations we’re using:

And here’s the program itself. You can also find it in the course repository as exercises/Examples/ReadTypes.hs.

We can run this program by going to the directory exercises/Examples and running:

The exact output will vary according to the contents of the directory, of course.

11.8 What Does It All Mean?

Let’s return to the functional world. How can we reconcile IO operations with Haskell being a pure and lazy language? Something like putStrLn :: String -> IO () is a pure function that returns an operation. How is it pure? putStrLn x is the same value whenever x is the same. In other words: an operation is a pure description of a chain of side effects. Only executing the operation causes those side effects. When a Haskell program is run, only one operation is executed: main :: IO (). Other operations can be run only by linking them up to main.

When in GHCi, if an expression you type in evaluates to an operation, GHCi runs that operation for you. Here’s a demonstration of the purity of print:

Operations are values just like numbers, lists and functions. We can write code that operates on operations. This function takes two operations, a and b, and returns an operation that asks the user which one they’d like to run.

Using operations specified as parameters lets us write functions like mapM_, which we met earlier. The implementation is a recursive IO operation that takes another IO operation as a parameter. Conceptually complicated, but simple when you read the code:
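Here’s a sketch of how mapM_ could be implemented for IO (named myMapM_ to avoid clashing with the library version):

```haskell
myMapM_ :: (a -> IO b) -> [a] -> IO ()
myMapM_ _ []     = return ()   -- an empty list needs no side effects
myMapM_ f (x:xs) = do
  f x                          -- run the given operation on the first element
  myMapM_ f xs                 -- then recurse on the rest
```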

11.9 One More Thing: IORef

So far the only side-effects we’ve been able to produce in IO have been terminal (getLine, print) and file (readFile, listDirectory) IO. Imperative programs written in Java, Python or C have other types of side effects too that we can’t express in pure Haskell. One of these is mutable (i.e. changeable) state. A pure function can not read mutable state, because otherwise two invocations of the same function might not return the same value.

The Haskell type IORef a from the module Data.IORef is a mutable reference to a value of type a
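The core operations on IORefs are:

```haskell
newIORef    :: a -> IO (IORef a)            -- create a reference
readIORef   :: IORef a -> IO a              -- read the current value
writeIORef  :: IORef a -> a -> IO ()        -- replace the value
modifyIORef :: IORef a -> (a -> a) -> IO () -- apply a function to the value
```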

Here are some examples of using an IORef in GHCi:

Here’s an example of using an IORef to sum the values in a list. Note the similarity with an imperative loop.
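A sketch of such a summing operation (the name sumList is illustrative):

```haskell
import Data.IORef

sumList :: [Int] -> IO Int
sumList xs = do
  ref <- newIORef 0                        -- total := 0
  mapM_ (\x -> modifyIORef ref (+ x)) xs   -- for x in xs: total := total + x
  readIORef ref                            -- return total
```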

Using IORef isn’t necessary most of the time. Haskell style prefers recursion, arguments and return values. However, real-world programs might need an IORef or two occasionally.

11.10 Summary of IO

A value of type IO X is an IO operation that produces a value of type X when run. Operations are pure values. Only running the operation causes the side effects.

IO operations can be combined using do-notation:

The return x operation is an operation that always produces value x. When x :: a, return x :: IO a.

Useful IO operations:

11.11 Quiz

What is the type of this IO operation?

  1. String -> IO String
  2. IO Int
  3. String -> IO Int
  4. IO String -> IO Int

Which of these lines could be used in place of ????

  1. q <- getLine
  2. return (y++z)
  3. return [q]
  4. ans <- return [y,z]

What values does blorg [1,2,3] print? That is, for which values x does it call print x? The value produced by blorg doesn’t count.

  1. It prints 1, 2, 3
  2. It prints 1, 2, 3, 6
  3. It prints 3, 2, 1
  4. It prints 3, 2, 1, 6

Which of these can a function of type Int -> IO Int do?

  1. No function of this type can be defined.
  2. Return a constant value.
  3. Run the IO operation it has been given and return its value.
  4. Query the user for a number and return it.

Which of these can a function of type IO Int -> Int do?

  1. No function of this type can be defined.
  2. Return a constant value.
  3. Run the IO operation it has been given and return its value.
  4. Query the user for a number and return it.

11.12 Exercises

12 Lecture 12: fmap fmap fmap

12.1 Contents

  • Functors

12.2 Functors

12.2.1 Preserving Structure

Remember the map function for lists? Here’s the definition again:
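This is the standard definition; the import merely avoids a clash with the Prelude’s own map.

```haskell
import Prelude hiding (map)

map :: (a -> b) -> [a] -> [b]
map g []     = []
map g (x:xs) = g x : map g xs
```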

It applies a function g :: a -> b to each element of a list of type [a], returning a list of type [b]. Another way to express the type of map would be (a -> b) -> ([a] -> [b]). This is the same type because -> associates to the right. The extra parentheses emphasize the fact that map converts the function g :: a -> b into a function map g :: [a] -> [b]. This means that map is a higher-order function that transforms functions into functions.

As map is parametrically polymorphic, its definition doesn’t depend on the type of values stored in the list. Thus, every function of type a -> b is converted into a function of type [a] -> [b] using exactly the same logic. Using the definition above, we can see that:

What’s notable here is that map preserves the structure of a list. The length of the list and the relative positions of the elements are the same. The general idea is demonstrated in the picture below.

Mapping a function to a list
Mapping a function to a list

Let’s see if we can find other similar functions. A value of type Maybe a is kind of like a list of length at most 1. Let’s map over a Maybe! Can you see the similarity with the definition of map?
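A sketch of such a function (named mapMaybe here for the analogy; note that this is not the same as the Data.Maybe function of the same name):

```haskell
mapMaybe :: (a -> b) -> Maybe a -> Maybe b
mapMaybe g Nothing  = Nothing        -- compare: map g []     = []
mapMaybe g (Just x) = Just (g x)     -- compare: map g (x:xs) = g x : ...
```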

Here too, the structure of the value is preserved. A Nothing turns into a Nothing, and a Just turns into a Just. Here too, we can think of the type as (a -> b) -> (Maybe a -> Maybe b), converting (or “lifting”) a normal function into a function that works on Maybes.

One more example: consider binary trees.
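Using an illustrative binary tree type (the constructor names are assumptions):

```haskell
data Tree a = Leaf | Node a (Tree a) (Tree a)

-- Apply a function to every value in the tree, keeping its shape.
mapTree :: (a -> b) -> Tree a -> Tree b
mapTree g Leaf         = Leaf
mapTree g (Node x l r) = Node (g x) (mapTree g l) (mapTree g r)
```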

A binary tree might look like this:

A binary tree
A binary tree

After mapTree f the tree would look like this:

A binary tree after mapping
A binary tree after mapping

12.2.2 The Functor Class

Now we have three different structure-preserving mapping functions. Three similar operations over different types. Could we write a type class to capture this similarity?

A naive attempt at writing a type class runs into problems. If we try to abstract over Maybe c, we can’t seem to write the right type for the map operation. We’d need to be able to change the type parameter c somehow.

Luckily Haskell type classes have a feature we haven’t covered before. You can write classes for type constructors in addition to types. What does this mean? Let’s just have a look at the standard type class Functor that does what we tried to do with our Mappable.
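Here is the class, slightly simplified (the real class has one more method with a default implementation):

```haskell
class Functor f where
  fmap :: (a -> b) -> f a -> f b
```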

Note how the type parameter f is a type constructor: it’s being passed a and b arguments in different parts of the type of fmap. Now let’s see the instance for Maybe.
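The instance, as in the standard library:

```haskell
instance Functor Maybe where
  fmap g Nothing  = Nothing
  fmap g (Just x) = Just (g x)
```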

Now fmap has the right type and we can implement it like mapMaybe! Note how we’ve declared instance Functor Maybe instead of instance Functor (Maybe a). The type Maybe a isn’t a functor, the type constructor Maybe is.

The type constructor for lists is written []. It’s special syntax, just like other list syntax. However if the type [a] was written List a, the type constructor [] would mean List.

Here’s the final of our examples, as a Functor instance.

Sidenote: the term functor comes originally from a branch of mathematics called category theory. However, to work with Haskell you don’t need to know any category theory. As you progress in learning Haskell, you might get interested in category theory, and it can be a valuable source of new ideas for programming. Category theory can feel intimidating, so it’s good to know that you can get on fine without it. For now, when you see functor, you can just think “something I can map over”, or perhaps “a container”.

Let’s zoom out a bit. When we have an instance Functor MyFun we know that we can map a type X into a new type MyFun X (since MyFun is a type constructor), but also that we can lift a function f that takes an X argument into a function fmap f that takes a MyFun X argument! So you could say we’re mapping both on the type level and the value level.

Oh right, one more thing. Once you’ve gotten the hang of fmap you might find yourself using it quite a bit. For code that uses fmap heavily it can be nice to use its infix alias, <$>. Consider the symmetry between $ and <$> in these examples:

12.3 Lawful Instances

What is this “preserving of the structure” that was mentioned above exactly? The following two functor laws are expected to hold for any Functor instance f (though unfortunately Haskell compilers can’t enforce them):

  1. fmap id === id
  2. fmap (f . g) === fmap f . fmap g

Don’t worry if that sounded abstract! The first law says that a functor maps id :: a -> a into id :: f a -> f a. (id is the identity function, meaning that id x = x.) Let’s be concrete and see how it works for the list [1,2,3]:

On the other hand,

Hence, the result of fmap id [1,2,3] was the same as the result of id [1,2,3], so the first functor law holds in this case. It’s not hard to show that the first functor law holds for any list whatsoever.

The first functor law is really a very simple proposition if you think about it. It just says that you can either fmap id or just apply id directly without a noticeable difference.

How about the second functor law? For lists, consider what happens if we fmap the function negate . (*2) (remember, negate maps x to -x and (*2) multiplies its argument by 2):

Let’s consider the right-hand side of the second functor law in this case:

The second functor law turns out to hold in this particular case. In fact, it holds in all cases (exercise!).

12.4 Sidenote: Kinds

Remember that Functor was a class for type constructors. If we try to define an instance of Functor for a type, we get an error:

The error message talks about kinds. Kinds are types of types. A type like Int, Bool or Maybe Int that can contain values has kind *. A type constructor has a kind that looks like a function, for example, Maybe has kind * -> *. This means that the Maybe type constructor must be applied to a type of kind * to get a type of kind *.

We can ask GHCi for the kinds of types:

If we ask GHCi for info about the Functor class, it tells us that instances of Functor must have kind * -> *:

Here are some examples of even more complex kinds.

You won’t bump into kinds that much in Haskell programming, but sometimes you’ll see error messages that talk about kinds, so it’s good to know what they are.

12.5 Foldable, Again

We briefly covered the class Foldable, which occurs in many type signatures of basic functions, in part 1. For example:

From these type signatures we can see that Foldable, just like Functor, is a class for type constructors (things of kind * -> *). The essence of Foldable is to be a class for things you can fold over. The class definition could be as simple as
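That minimal version would be:

```haskell
class Foldable t where
  foldr :: (a -> b -> b) -> b -> t a -> b
```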

However, for performance reasons, the class contains many methods (you can see them yourself by running :info Foldable in GHCi!). When defining an instance of Foldable, it’s enough to define just foldr.

Another way of thinking of the Foldable class is that it is about processing elements left to right. In other words, if Functor is the class for containers, Foldable is the class for ordered containers.

As an example, let’s implement Functor and Foldable for our own pair type.

Now, we can use Pair almost wherever we can use a list:

Other types we’ve met that are Foldable include Data.Map and Data.Array.

12.6 Recap

So, to summarize, a functor is a type constructor f and the corresponding Functor f instance such that fmap satisfies the two functor laws. These laws assert that fmap must preserve the identity function and distribute over function composition. More informally, fmap lifts a function g :: a -> b operating on values to one operating on containers: fmap g :: f a -> f b. Basically all well-behaving data structures in Haskell are functors.

12.7 Quiz

What’s the type of fmap?

  1. a -> b -> f a -> f b
  2. (a -> b) -> f a -> f b
  3. Functor f => a -> b -> f a -> f b
  4. Functor f => (a -> b) -> f a -> f b

Which code snippet completes the next Functor instance?

  1. fmap f (Things x ys) = Things (f x) [f x]
  2. fmap f (Things x ys) = Things (f x) (map f ys)
  3. fmap f (Things x ys) = Things (f x) ys
  4. fmap f (Things x ys) = f (Things x ys)

What’s the kind of [a]?

  1. *
  2. * -> *
  3. [a]

What’s the kind of Foo?

  1. *
  2. * -> *
  3. Foo

What is the value of foldr (-) 1 (Just 2)?

  1. -1
  2. 1
  3. Just -1
  4. Just 1

Which code snippet completes the next Foldable instance?

  1. foldr f z (Things x ys) = f x z
  2. foldr f z (Things x ys) = foldr f x ys
  3. foldr f z (Things x ys) = f x (foldr f z ys)
  4. foldr f z (Things x ys) = foldr f z (x:ys)

12.8 Exercises

13 Lecture 13: A Monoid in the Category of Problems

  • Monads

In this lecture we’ll build up to the concept of a monad using a number of examples. By now you should be familiar with all the Haskell features needed for understanding monads.

Monads are a famously hard topic in programming, which is partly due to weird terminology, partly due to bad tutorials, and partly due to trying to understand monads too early when learning Haskell. Monads are introduced this late in the course in an attempt to make understanding them easier.

If you find this lecture hard, don’t despair, many others have found the topic hard as well. There are many many productive Haskell programmers who have managed to understand monads, so the task is not hopeless.

One final word of caution: monads, like functors, are a concept originally from a branch of mathematics called Category Theory. However, and I can’t stress this enough, you do not need to know anything or even care about category theory to understand monads in Haskell programming. Just like one can work with object-oriented programming or functional programming without knowing the theory of objects or functions, one can work with monads without understanding the math associated with them. Category theory can be a rewarding topic for a functional programmer, but it’s not a mandatory one.

13.1 Example 1: Maybes

When working with many Maybe values, the code tends to become a bit messy. Let’s look at some examples. First, we combine some functions returning Maybe String. Note the nested case we need in stealSecret: not fun to write.

Next up, we modify a list of pairs. We use the Maybe-returning function lookup from the Prelude. Here we have an if inside a case instead of a nested case.

This type of code is pretty common, and usually repeats the same pattern: if any intermediate result is Nothing, the whole result is Nothing. Let’s try to make writing code like this easier by defining a chaining operator ?>. The chaining operator takes a result and the next step of computation, and only runs the next step if the result was a Just value.
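Here’s a definition of the chaining operator, as described above:

```haskell
(?>) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing ?> _ = Nothing   -- a failure stops the whole chain
Just x  ?> f = f x       -- a success gets fed to the next step
```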

The chaining operator streamlines our examples nicely. Note how we can define simple helper functions that take care of one step of the computation instead of writing one big expression.

Here’s another example: safe list indexing built from safeHead and safeTail:

PS. Note that ?> associates to the left, as is the default in Haskell. That means that op ?> f ?> g means (op ?> f) ?> g. The alternative, op ?> (f ?> g), would not even type check!

Sidenote: this ?> operator expresses the if-result pattern that’s very common in other languages. Here is how one would write op val ?> f in Python and Java.

The difference between the if-result pattern and our ?> is that we use the Nothing value to explicitly signal failure, instead of relying on the fact that any variable can be None (or False) in Python, or that any Object reference can be null in Java.

13.2 Example 2: Logging

Let’s explore the concept of chaining with another example: logging. The type Logger represents a value plus a list of log messages (produced by the computation that produced the value).
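A sketch of the Logger type (the accessor names are illustrative):

```haskell
data Logger a = Logger [String] a
  deriving Show

getVal :: Logger a -> a
getVal (Logger _ a) = a

getLog :: Logger a -> [String]
getLog (Logger s _) = s
```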

Here’s a login function that logs some details about the usernames and passwords it processes. Note how we run into complicated code in login when we need to handle multiple Logger values.

Let’s try to streamline this code by defining a chaining operator for Logger. The important thing when doing multiple Logger operations is to preserve all the logs. Here’s a chaining operator, #>, and an example of how it can be used to log some arithmetic computations.
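A sketch of the chaining operator:

```haskell
(#>) :: Logger a -> (a -> Logger b) -> Logger b
Logger la a #> f =
  let Logger lb b = f a    -- run the next step on the value
  in Logger (la ++ lb) b   -- keep the log entries from both steps
```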

We can streamline login quite a bit by using #>. Note how we don’t need to worry about combining logs together. Also note how we use a lambda expression instead of defining a helper function.

To ramp things up a bit, let’s use Logger in a recursive list processing function. Here’s a logging version of filter. Note how the code chains a log message before the recursive call in order to keep the order of log entries nice.

13.3 Example 3: Keeping State

In the previous example we just wrote some state (the log). Sometimes we need computations that change some sort of shared state. Let’s look at accounts in a small bank. We’ll first define a datatype for the state of the bank: the balances of all accounts, as a map from account name to balance.

Here’s how we can deposit some money to an account. We use the function adjust from Data.Map to modify the map.

Withdrawing money is a bit more complicated, since we want to handle some special cases like the account not existing, or the account not having enough money. We use the library function findWithDefault to help us along.

Finally, let’s write a function that takes at most 100 money from one account, splits the money in half, and deposits it in two accounts. Pay attention to how we need to carefully thread the different versions of the bank, bank, bank1, bank2 and bank3 to make sure all transactions happen in the right order.

Code like this turns up often in Haskell when you’re doing serial updates to one value, while also performing some other computations on the side. It’s easy to make a mistake, and the type system won’t help you if you e.g. reuse the bank1 value. Let’s rewrite share so that we don’t need to refer to the bank itself. We can again use the same chaining idea to accomplish this.

Let’s see how chaining works with these bank operations.

Note how a value of type BankOp represents a process that transforms the bank. The initial state of the bank must be supplied using runBankOp. This makes sense because BankOp transformations can be composed, unlike Bank states. Having to use runBankOp makes the distinction between defining operations and executing them clearer.

Now that we’re familiar with manipulating BankOp values, we can implement share as a BankOp. We implement a helper distributeOp to make the code a bit neater.

That was pretty clean wasn’t it? We don’t need to mention the bank at all, we can almost program as if in an imperative language while staying completely pure.

You can find all of this code in the course repository under exercises/Examples/Bank.hs.

13.4 Finally: The Monad Type Class

We’ve now seen three different types with a chaining operation:

Just like previously with map and Functor, there is a type class that captures this pattern. Note that Monad is a class for type constructors, just like Functor.
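Here’s the class, slightly simplified (in modern GHC, Monad additionally has Applicative as a superclass, and return has a default implementation):

```haskell
class Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b   -- "bind", the chaining operation
  (>>)   :: m a -> m b -> m b          -- chaining that ignores the first result
  return :: a -> m a                   -- an operation that just produces a value
```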

There are some additional operations in Monad too:

Recall that the Functor class was about a generic map operation. Similarly, the Monad class is just about a generic chaining operation.

The expression operation >>= next takes a monadic operation operation :: m a, and does some further computation with the value that it produces using next :: a -> m b. If this feels too abstract, just recall how chaining works for Maybe:

13.5 Maybe is a Monad!

Here’s the full Monad instance for Maybe and some examples.
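The instance, slightly simplified (in modern GHC the instance goes via Functor and Applicative, and return defaults to pure):

```haskell
instance Monad Maybe where
  Nothing >>= _ = Nothing   -- just like Nothing ?> _ = Nothing
  Just x  >>= f = f x       -- just like Just x  ?> f = f x
  return x = Just x
```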

Here are the stealSecret and increase examples rewritten with monad operations. The changes are ?> to >>= and Just to return.

13.6 The Return of do

Here’s an example of what a complex monad operation might look like.

Let’s see what happens when we transform this code a bit. First off, let’s inline the definitions.

Due to lambda expressions continuing to the end of the expression, we can omit the parentheses. Let’s also indent differently.

Now we can notice the similarity with do notation. The do block below is actually the same code!

To clarify, do notation is just nicer syntax for the monad operations (>>= and >>) and lambdas. Here’s how do notation gets transformed into monad operations. Note! The definition is recursive.

Here’s safeNth using do notation:

Here is increase one last time, now with do notation

13.7 Logger is a Monad!

We should be able to write a Monad instance for Logger ourselves, by setting >>= to #>. However, due to recent changes in the Haskell language we must implement Functor and Applicative instances to be allowed to implement the Monad instance. Functor we’ve already met, but what’s Applicative? We’ll find out later. Let’s implement the instances:

We don’t need the nomsg operation any more since it’s just return. We can also reimplement the annotate operation using monad operations.

Here are the compute and filterLog examples rewritten using do-notation. Note how nice filterLog is with do-notation.

13.8 The State Monad

Haskell’s State monad is a generalized version of our BankOp type. The State type is parameterized by two types, the first being the type of the state, and the second the type of the value produced. State Bank a would be equivalent to our BankOp a. You can find the State monad in the module Control.Monad.Trans.State of the transformers package. Here’s a simplified implementation of State.

Note how we declare an instance Monad (State s). We’re using a partially-applied type constructor because instances of Monad can only be declared for type constructors that take one more type parameter. This might be a bit clearer if you look at how m, Maybe and State occur in the type of >>= below.

class Monad m where
  (>>=) :: m a -> (a -> m b) -> m b

instance Monad Maybe where
  (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b

instance Monad (State s) where
  (>>=) :: State s a -> (a -> State s b) -> State s b

Let’s look at some examples of working with State. To start off, let’s consider computations of type State Int a, which represent working with a single counter.

Note how a value of type State s a represents a process that transforms the state (just like BankOp). The initial state must be supplied using runState. Again, having to use runState makes the distinction between defining operations and executing them clearer.

A state can replace an accumulator parameter when processing a list. Here are two examples: finding the largest element of a list, and finding values in a list that occur directly after a 0.

13.9 The Return of mapM

The control structures from the IO lecture work in all monads. Here are their real types.

As we can see here, we can use mapM over all of the monads we’ve met so far:

Some more examples:

Here’s filter reimplemented using the State monad:

We can write our own operations that work for all monads. This is made possible by type classes, as we’ve seen before. If you only use monad operations like return and do-notation, the type system will infer a generic type for your function.

We can use these generic operations in each of our example monads:

13.10 Monads are Functors

One useful operation hasn’t yet been introduced: liftM.

The liftM operation makes it easy to write code with pure and monadic parts.

Does the type of liftM look familiar? It’s just like the type of fmap! In fact, it’s easy to define a functor instance for a monad: just set fmap = liftM. Since every Monad needs to be a Functor these days, modern Haskell style prefers fmap over liftM.

13.11 One More Monad

The list monad (that is, the Monad instance for []) represents computations with multiple return values. It’s useful for searching through alternatives. Here’s a first example. For every x we produce both x and -x:

We can filter out unsuitable values by producing an empty list:

If we’re using do-notation the list monad starts to look more like a looping construct:

More interesting example: find all the pairs in a list that sum to k

A final, more complex example. We find all palindromes from a string using the list monad, and then find the longest one.

Here’s the surprisingly simple implementation of the list monad:
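Slightly simplified (modern GHC defines return via pure):

```haskell
instance Monad [] where
  return x = [x]              -- a single result
  xs >>= f = concatMap f xs   -- run f on every result, collect all outcomes
```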

We’ve actually seen the list monad previously in the guise of list comprehensions. Compare this reimplementation of findSum to the earlier one that uses do-notation.

13.12 Oh Right, IO

As you’ve probably guessed by now, IO is a monad. However, the implementations of the IO type and the instance Monad IO are compiler built-ins. You couldn’t implement the IO monad using just standard Haskell, unlike the Maybe monad, the State monad and the other monads we’ve seen.

However, true side effects fit the monad pattern just like State and Maybe. Just like with other monads, we’re separating the pure definitions of operations from the process of running the operations. As a bonus, you can use all the generic monad operations (mapM and friends) with IO.

Here are some examples of writing IO using monad operations.

13.13 Monads in Other Languages

Once you’ve gotten familiar with the concept of a monad, you’ll start seeing monadlike things in other languages too. The most well-known examples of this are Option types, Java Streams and JavaScript promises. If you know these languages or concepts from before, you might find this section illuminating. If you don’t, feel free to skip it.

13.13.1 Optionals

Many languages have an option type. This type is called Optional&lt;T&gt; in Java, std::optional&lt;T&gt; in C++, Nullable&lt;T&gt; in C#, and so on. These types often have behaviour resembling the Haskell Maybe monad, for example:

  • In Java, Optional.flatMap corresponds to >>=: it lets you apply a Function&lt;T, Optional&lt;U&gt;&gt; to an Optional&lt;T&gt; and get an Optional&lt;U&gt;.
  • In C#, binary operations get automatically lifted to Nullable types. For example, a + null becomes null.

13.13.2 Streams

Java Streams have a monadlike API too. Streams are about producing many values incrementally. Just like with Optional, the method Stream.flatMap lets us take a Stream<T>, combine it with a Function<T,Stream<U>> and get a Stream<U>.

As an example, if lines is a Stream<String>, words takes a String and returns a Stream<String> and readInt takes a String and returns an Integer, we can write:

This corresponds to the following Haskell list monad code:

13.13.3 Promises

There is much disagreement about whether Promises in JavaScript really are monads or not. However, some similarities are obvious.

First, consider the similarities between Promise.then and >>=. Both take an operation (promise or monadic operation), and combine it with a function that returns a new operation.

Next, let’s consider the similarities between async/await and do-notation. Both are nicer syntaxes for working with the raw Promise.then or >>= mechanisms. We reimplement concatPromises using async/await, and concatMonadic using do-notation. Their behaviour stays the same.

13.14 Monads: Wrap-up

  • The Monad type class is a way to represent different ways of executing recipes
    • failure (Maybe)
    • logging
    • state
    • nondeterminism (the list monad)
    • IO
  • You can write monad code in two equivalent ways:
    • Using the Monad class operations (>>=, >>) directly
    • Using do-notation
  • When M is a monad, values of type M a are operations that produce a result of type a
  • Monads are a design pattern and a library (mapM etc)
    • Using common abstractions makes it easier to understand code
    • Reading a State operation is easier than deciphering a complicated recursion with state
  • Everything you can do with monads, you can also do without them
    • Exception: IO
    • Using a monad often simplifies code
  • Warning: the internet is full of tutorials trying to explain monads using a simple analogy
    • In my experience, this doesn’t work
    • What works is using different monads and slowly getting used to the concept

13.15 Sidenote: Standard Haskell

This and the previous lecture have covered many parts where the GHC version of Haskell differs from standard Haskell 2010. Here’s a short list of the changes GHC has made, just so you know:

  • length, sum, foldr etc. generalized to work on Foldable instead of just lists
  • Functor and Applicative are superclasses of Monad
  • The fail method has been moved from the Monad type class to its own MonadFail class

13.16 Quiz

What is the expression equivalent to the following do block?

  1. z >> \y -> s y >> return (f y)
  2. z >>= \y -> s y >> return (f y)
  3. z >> \y -> s y >>= return (f y)

What is the type of \x xs -> return (x : xs)?

  1. Monad m => a -> [a] -> m [a]
  2. Monad m => a -> [m a] -> [m a]
  3. a -> [a] -> Monad [a]
  4. None of the above

What is the type of \x xs -> return x : xs?

  1. Monad m => a -> [a] -> m [a]
  2. Monad m => a -> [m a] -> [m a]
  3. a -> [a] -> Monad [a]
  4. None of the above

What is the type of (\x xs -> return x) : xs?

  1. Monad m => a -> [a] -> m [a]
  2. Monad m => a -> [m a] -> [m a]
  3. a -> [a] -> Monad [a]
  4. None of the above

13.17 Exercises

14 Lecture 14: Let’s Use Some Libraries!

Now that you know monads, you know pretty much enough Haskell to start writing real programs that use libraries to do useful things. This lecture will go over some examples of libraries that are commonly used in such real programs. Using these libraries is also a good way to practice using monads, reading docs, and understanding type errors.

14.1 Text and ByteString

So far, we’ve been using the Haskell String type to work with strings. However, String is just [Char], a linked list of characters. This is horribly inefficient, both in terms of memory and in terms of time. Once we move beyond processing short strings and start processing whole files or network requests, a more efficient string type becomes a must.

There are two types that are used as replacements for String, with slightly different semantics:

  • Data.Text represents a sequence of Unicode characters, just like String, only more efficient. Used when dealing with text.
  • Data.ByteString represents a sequence of bytes. Used when dealing with binary data.

Additionally, both of these types come in lazy and strict variants. The docs for Data.Text summarize the difference well:

The strict Text type requires that an entire string fit into memory at once. The lazy Text type is capable of streaming strings that are larger than memory using a small memory footprint… Each module provides an almost identical API…

All of these types (Text and ByteString, strict and lazy) offer pack and unpack functions for converting from and to plain Strings. The types also come with specialized versions of familiar list functions like reverse, take, map and so on.

14.1.1 Examples with Text

Let’s go through a short GHCi session demonstrating the use of Data.Text. As the documentation says, the Data.Text module is designed to be imported qualified. We can convert a String into a Text with the function T.pack. Note how a value of type Text gets printed just like a String.

We can use the functions from Data.Text to operate on values of Text. Many of these are named like their counterparts for Strings or lists from Prelude.

A useful detail is that Text has a Monoid instance that glues Text values together. You can also use the functions T.append and T.concat.

If you want to write a recursive function that pattern matches on a Text like you would on a String, you can use the function T.uncons :: T.Text -> Maybe (Char, T.Text) to split a Text into a head and a tail. Here’s a simple example:

14.1.1.1 Strictness and Laziness

Note that Data.Text implements the strict Text type. You need to use Data.Text.Lazy for the lazy variant. As mentioned earlier, one difference between these two types is that the strict type does not work for infinite strings:

Another practical problem that sometimes crops up is that you can end up with a mismatch between strict and lazy Texts when using libraries. You can usually fix this by using toStrict or fromStrict as needed.

14.1.2 Examples with ByteString

We can walk through pretty much the same GHCi session using ByteString instead of Text. However, note how the ByteString is built up from Word8 values and not Char values. A Char can represent an arbitrary Unicode code point, like the character 'Å', but a Word8 represents a byte: a number from 0 to 255. Unfortunately and somewhat confusingly, ByteString values get printed like Strings.

The same caveats apply to the differences between strict and lazy ByteString as for Text:

14.1.3 Sidenote: Encodings

You might be wondering why on earth we have both Text and ByteString. The difference is subtle but real. When we operate on Text we operate character by character, regardless of what those characters are and how they are encoded. When we operate on ByteString we operate on bytes, regardless of what those bytes represent.

Characters, numbers, and data structures are abstractions that help us humans deal with complex programming tasks. The computer memory is essentially just an enormous sequence of bytes. The machine doesn’t care how we interpret those bytes. The essential difference between Text and ByteString is how the bytes are grouped and interpreted.

To illustrate this difference, we’ll look at the UTF-8 text encoding. A text encoding is a method for representing characters as bytes. UTF-8 can represent all the millions of characters defined by Unicode. Because bytes can only store values from 0 to 255, this means that a character can get encoded into multiple bytes. The bits and bytes of the UTF-8 string “Ha∫keλ!” can be interpreted in various ways:

(If you see different characters in the picture and in “Ha∫keλ!”, it means that your browser is either interpreting the encoding incorrectly or the font you’re using doesn’t support all of the characters.)

If we read the same stream of bits using a different encoding, we’ll see other characters. For example, decoding the bytes above using the Latin-1 text encoding would produce a different, garbled-looking string of characters.

By the way, when dealing with raw binary data, it’s often convenient to use the hexadecimal number system, which uses the sixteen symbols 0, 1, …, 9, A, B, …, F to represent the sixteen possible values of four bits. We don’t need hexadecimal in this course, but you can check out Wikipedia if you’re interested in learning more about it.

We can explore the same example using code. The function Data.Text.Encoding.encodeUtf8 :: Text -> ByteString encodes the characters in a Text into bytes in a ByteString using UTF-8.

If we are processing ASCII text, that is, characters that can be represented using single bytes, we can use Text and ByteString interchangeably. There are namespaces Data.ByteString.Char8 and Data.ByteString.Lazy.Char8 that offer functions that operate on ByteStrings using Char values instead of Word8. However, care must be taken to make sure all the characters really are plain ASCII characters, or surprising things will happen.

14.2 Monads: Recap

The next libraries we’ll look at work inside the IO monad. Here’s a short recap of what we learned about monads in the last lecture.

  • When M is a monad, values of type M X are operations that can be executed to produce a value of type X.
  • Monadic operations can be implemented using
    • the methods of the Monad type class (return, >>=, >>),
    • do-notation,
    • and library functions like mapM.
  • Unlike in other languages, return is not a keyword and does not cause an operation to stop executing. Instead, return x is the operation that always produces x and does nothing else.
  • This is what do-notation looks like:
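As a small reminder, here is a do block in the Maybe monad next to its desugared form with >>= (the names are made up for illustration):

```haskell
-- do-notation:
example :: Maybe Int
example = do
  x <- Just 3
  y <- Just 4
  return (x + y)

-- the same operation written with >>= directly:
example' :: Maybe Int
example' = Just 3 >>= \x -> Just 4 >>= \y -> return (x + y)
```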

14.3 Writing an HTTP Server: WAI and Warp

Sometimes it feels like everything in the world happens over HTTP and Web APIs. Your web browser, your smartphone apps, your bank, your coffee pot, and even your doorbell, all talk to servers using the HTTP protocol.

Let’s look at how we can set up a simple HTTP server in Haskell. The standard low-level components for this are called WAI and Warp. WAI, the Web Application Interface, gives us a way to define how HTTP requests are handled. Warp is a simple HTTP server that runs the logic we have defined using WAI. That probably sounds a bit abstract right now, but a simple example will help.

The file exercises/Examples/HelloServer.hs implements an HTTP server that always responds with “Hello World!”. You can try it out by going to the exercises/Examples directory and running it with stack runhaskell HelloServer.hs. After that you can visit http://localhost:3421 in your browser to see the response from the server.

Let’s look at the types in this example. There’s a lot going on here. First off, Application is a type alias for a thing that implements the logic of a web server. The run function from Warp can run Applications:
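Sketching the relevant signatures from the wai and warp documentation:

```haskell
type Application =
  Request -> (Response -> IO ResponseReceived) -> IO ResponseReceived

run :: Port -> Application -> IO ()
```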

We’ll talk about what Request and Response are soon, but from this type we can see that an Application is an IO operation that takes as arguments a request of type Request, and an IO operation respond :: Response -> IO ResponseReceived. Arguments like respond are called callbacks in many contexts: they allow us to call back to the library that called the application. The Application operation has to produce the same special ResponseReceived type that respond produces. You can think of this type as a token proving that respond was called by the Application.

That might sound intimidating, but looking at the code, things are relatively simple: our server is an Application that takes two parameters: request and respond.

WAI uses lots of types like Port, Request, Response, Status to represent HTTP concepts. It’s useful to look these up in the documentation when you bump into them. For example Port is just an alias for Int. As another example, we can see the responseLBS function has the type

where Status is defined in Network.HTTP.Types.Status, ResponseHeaders is a type alias for [Header] from Network.HTTP.Types.Header, ByteString is a lazy ByteString, and the result type Response is defined by Network.WAI.

Finally, note that we take a shortcut and use the function Data.ByteString.Lazy.Char8.pack to convert a String into a ByteString. This only works for ASCII text.

A web server that always responds with the same text isn’t that interesting. Let’s look next at how we can give different responses to different requests. There are many parts in an HTTP request, but for this lecture we’re going to focus on the path. In a URL like http://example.com/abcd/ef/file, the /abcd/ef/file part is the path. WAI has the function

This gives us the path of the requested URL, split at the / characters.

The file exercises/Examples/PathServer.hs implements a web server that has three different pages:

As before, you can run the server by going into the exercises/Examples directory, and running stack runhaskell PathServer.hs.

14.4 Working With a Database: sqlite-simple

After implementing an HTTP server, we can participate in the global graph of applications talking to each other that’s called the internet. But what use is talking if we can’t remember? A real application needs to be able to persist data even when it is restarted. A common way to accomplish this is to use a database.

There are many different kinds of databases, but arguably the most widely used simple database is SQLite. SQLite is a library that lets you store data in a file and process it using SQL, the Structured Query Language. With SQLite there is no need to run a separate database server a la PostgreSQL or MySQL.

In case you’re not familiar with SQL, don’t worry, you won’t need to write any queries of your own in the exercises. If you’d like to learn a bit of SQL now, there are lots of tutorials on the web. See W3Schools, SQL Zoo or Codecademy.

There are many Haskell libraries for using SQLite, but here we’ll look at one called sqlite-simple. Let’s explore the library in GHCi for a bit.

All the functions live inside Database.SQLite.Simple. You can open a database by giving a filename to open, which is an IO operation that produces a Connection.

To run an SQL query you can use the IO operation query_, which takes a Connection and a Query, and produces a list of results. The Query type is just a simple newtype around Text. The result type of query_ is polymorphic: you can read any type that satisfies the FromRow type class from the database. If this feels confusing, compare it to the type of read: Read a => String -> a. The FromRow class is like Read for this database: it represents types that can be read out of the database. Anyway, let’s read the number 1 from the database:
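A hypothetical GHCi session along these lines (the database file name is made up; the OverloadedStrings extension is needed for the Query string literal) might look like:

```haskell
GHCi> :set -XOverloadedStrings
GHCi> import Database.SQLite.Simple
GHCi> conn <- open "test.db"
GHCi> query_ conn "SELECT 1;" :: IO [[Int]]
[[1]]
```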

By the way, all of these initial examples use simple SELECT x, y, z; queries that just return constant data. We’ll worry about actual tables in the database later.

Without the type signature, we get an error from GHCi, which can’t decide which type we want to read out from the database:

Before we go on, let’s have a closer look at FromRow. If you’ve bumped into SQL before, you know that an SQL query returns a number of rows, and each row consists of a number of values (also called columns). To be able to interpret the result of an SQL query into Haskell data, we need a way to interpret these values and rows. Thus sqlite-simple defines two classes, FromField and FromRow, and a bunch of instances like the following. (You can find these instances in the docs or by asking GHCi with :info FromRow etc.)

In essence, basic Haskell datatypes fulfill the FromField class, and various Haskell collections fulfill the FromRow class. Our earlier example was using the FromRow [a] and FromField Int instances to get a [[Int]] out of query_. Here’s a simple query that uses some other datatypes:

What happens if the SQL and Haskell types don’t match? Well, you get a runtime error, just like if you try to invoke read "True" :: Int.

To mirror the FromRow and FromField classes, sqlite-simple also defines the ToRow and ToField classes for writing into the database. Here’s the type of the query function, which allows us to use parameterized queries.

And here are some instances for ToRow and ToField:

Parameterized queries use a ? character to denote slots where parameters can be passed in. Here’s a simple example:

That’s pretty much all you need to know about sqlite-simple: open, query_, query, FromRow, ToRow. Oh right, one more thing. There are the functions execute and execute_ if you don’t need the result of the query. They’re useful for inserting things into the database, for example.

You’ll find an example program that uses sqlite-simple to maintain a phone book under exercises/Examples/Phonebook.hs. The program keeps the phonebook in a file called phonebook.db and works like this (run from the exercises/Examples directory in the course repository):

$ stack runhaskell Phonebook.hs
(a)dd or (q)uery?
a
Name?
bob
Phone?
1234
$ stack runhaskell Phonebook.hs
(a)dd or (q)uery?
a
Name?
bob
Phone?
5678
$ stack runhaskell Phonebook.hs
(a)dd or (q)uery?
a
Name?
samantha
Phone?
1357
$ stack runhaskell Phonebook.hs
(a)dd or (q)uery?
q
Name?
bob
2 numbers:
["1234"]
["5678"]

PS. If you’re devastated by the lack of compile-time type checking for SQL queries, you can have a look at some of the more advanced SQL libraries for Haskell like Beam or Opaleye. This course is using sqlite-simple for simplicity and to avoid dwelling on the details of SQL too much.

14.5 Exercises

15 Lecture 15: You’re Valid Even Without Monads

15.1 Introduction to Applicatives

The Applicative type class is a middle ground between Functor (which you can’t do that much with) and Monad (which pretty much allows you to write arbitrary programs). Reasons to use Applicative instead of Monad include:

  • Performance: since Applicative allows fewer operations, it can be optimized better than Monad.
  • Simplicity: an Applicative interface is easier to reason about.
  • Necessity: sometimes there is just no way to define a Monad instance for your type, but there is an Applicative instance. This is rare.

So what is an Applicative? Let’s look at a definition.

So an Applicative is a Functor that allows us to build singleton values via pure, and combine two values into one using liftA2. This adds a lot of power compared to a bare functor. Computations using functors are necessarily linear: fmap :: (a -> b) -> f a -> f b takes in one functorial value, and outputs another. By contrast, pure takes in no functorial value and outputs one, and liftA2 takes in two and returns one.

Sidenote: the term Applicative comes from the term Applicative Functor, which sounds like it comes from Category Theory, but was actually introduced in a programming paper.

That’s enough abstract mumbo jumbo for now. Let’s look at what sort of computations we can express using Applicative operations (and fmap). We’ll start with the Maybe applicative. Here’s a streamlined definition:
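The two operations can be sketched as standalone functions (the real instance lives in the standard library):

```haskell
-- pure for Maybe wraps a value in Just
pureMaybe :: a -> Maybe a
pureMaybe x = Just x

-- liftA2 for Maybe: combine two values if both exist,
-- otherwise propagate the failure
liftA2Maybe :: (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
liftA2Maybe f (Just x) (Just y) = Just (f x y)
liftA2Maybe _ _        _        = Nothing
```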

You’ll see the definition uses the same type of failure propagation as the Monad Maybe instance. Let’s use this when parsing monetary values:
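A reconstruction of the parsing example might look like this. The names (Money, Currency, parseCurrency, parseAmount) follow the surrounding text, but the details are assumptions made for illustration:

```haskell
import Control.Applicative (liftA2)
import Text.Read (readMaybe)

-- Hypothetical reconstructions of the example types:
data Currency = EUR | USD deriving (Show, Eq)
data Money = Money Int Currency deriving (Show, Eq)

parseCurrency :: String -> Maybe Currency
parseCurrency "e" = Just EUR
parseCurrency "$" = Just USD
parseCurrency _   = Nothing

parseAmount :: String -> Maybe Int
parseAmount = readMaybe

-- liftA2 combines the two parses: if either fails, the whole parse fails
parseMoney :: String -> String -> Maybe Money
parseMoney amount currency =
  liftA2 Money (parseAmount amount) (parseCurrency currency)
```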

That worked out nicely. However, if we try to expand on this we soon run into the limits of Applicative. For example, consider this sumMoney function that sums Money values but fails if they are not in the same currency:

We can’t apply it to two Maybe Money values using Applicative operations. We need the Maybe monad for that:

If we try to use liftA2, we’re stuck with a Maybe (Maybe Money) type. In addition, we can now get two different types of failures: Nothing and Just Nothing, depending on what level the error happens on. This would be a clear case for switching to a Monad instance.

liftA2 sumMoney (parseMoney "123" "e") (parseMoney "45" "e")
  ==> Just (Just (Money 168 EUR))
liftA2 sumMoney (parseMoney "123" "e") (parseMoney "45" "$")
  ==> Just Nothing
liftA2 sumMoney (parseMoney "123" "e") (parseMoney "xxx" "e")
  ==> Nothing

Sidenote: the name liftA2 sounds a bit cumbersome, but it’s an analogy with the liftM, liftM2 and so on functions for monads. Recall that liftM was just fmap, so perhaps liftA2 should’ve been called fmap2.

15.2 The List Applicative

Let’s have a look at the Applicative instance for another Functor we’ve seen. The Applicative instance for the list functor goes through all possible combinations of values (just like the list monad). Here’s the instance:
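Written as standalone functions, the instance behaves like this (a sketch; the real instance is built in):

```haskell
-- pure for lists: a singleton list
pureList :: a -> [a]
pureList x = [x]

-- liftA2 for lists: apply f to every combination of an element
-- from xs and an element from ys
liftA2List :: (a -> b -> c) -> [a] -> [b] -> [c]
liftA2List f xs ys = [ f x y | x <- xs, y <- ys ]
```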

And here’s an example: generating some phrases.
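A sketch of such a phrase generator (the example words are made up):

```haskell
import Control.Applicative (liftA2)

-- Every combination of an adjective and a noun:
phrases :: [String]
phrases = liftA2 (\adjective noun -> adjective ++ " " ++ noun)
                 ["small", "big"]
                 ["cat", "dog"]
```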

15.3 New Operators

There are a handful of operators for Applicatives that are quite handy. They are <$>, <*>, <* and *>.

Let’s start with <$>, which is just an infix version of fmap:

That’s kinda nice on its own, but it really gets to shine when combined with this Applicative operator:

The type tells you what <*> does: it’s function application lifted to an applicative. Here are some standalone examples:

The real magic happens when we combine <$> and <*>: we can then lift functions of arbitrarily many arguments to an Applicative!
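For example, in the Maybe applicative:

```haskell
-- <$> applies a pure function, and each <*> supplies one more argument:
ex2 :: Maybe Int
ex2 = (+) <$> Just 1 <*> Just 2                               -- two arguments

ex3 :: Maybe Int
ex3 = (\a b c -> a + b + c) <$> Just 1 <*> Just 2 <*> Just 3  -- three arguments
```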

What’s going on here? Let’s step through the evaluation. The key is that each <*> partially applies one more argument to the function.

Perhaps looking at the types will make it clearer:

The next two operators are a bit simpler:

You can compare the types to a more familiar operator:

What the operators <* and *> mean is: run both of these operations, but only keep one result. The arrow points to the result that’s kept:
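For instance, in the Maybe applicative:

```haskell
exLeft, exRight, exFail :: Maybe Int
exLeft  = Just 1 <* Just 2    -- keep the left result
exRight = Just 1 *> Just 2    -- keep the right result
exFail  = Just 1 <* Nothing   -- both operations still run, so failure propagates
```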

These operators might seem trivial, but they’re useful when combining checks. For example:

Now that we’ve seen all these operators, we can understand the full definition of Applicative. All the operators have definitions in terms of liftA2, so it’s enough to define liftA2 and pure when implementing an Applicative instance.

15.4 The Validation Applicative

Let’s look at an Applicative that’s a bit more interesting than Maybe or lists. Often in programming we need to validate some inputs from the user. In these cases it’s useful to gather together all the errors that the input might have. The file exercises/Examples/Validation.hs implements the Validation datatype:

The Applicative instance for Validation works like this:

Note how in contrast to the Maybe Applicative, we have many different kinds of failures.

Here’s a worked example that introduces some helpers and then uses them to congratulate somebody on their birthday:

Oh right, here are the Functor and Applicative instances for Validation:
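A reconstruction of the type and its instances might look like this. The constructor names (Ok, Errors) are assumptions based on the surrounding text, and the Applicative instance is written via <*> rather than liftA2; the key point is that two failures combine their error lists:

```haskell
-- Hypothetical reconstruction of the Validation type:
data Validation a = Ok a | Errors [String]
  deriving (Show, Eq)

instance Functor Validation where
  fmap f (Ok x)      = Ok (f x)
  fmap _ (Errors es) = Errors es

instance Applicative Validation where
  pure = Ok
  Ok f      <*> Ok x      = Ok (f x)
  Errors e1 <*> Errors e2 = Errors (e1 ++ e2)  -- collect all the errors
  Errors e1 <*> _         = Errors e1
  _         <*> Errors e2 = Errors e2
```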

The definition of liftA2 for Validation shows that the errors are collected together left-to-right. This can be seen in the example above where the expression liftA2 congratulate checkedName checkedAge outputs the error ("Name too long") from checkedName first, and the error ("Too old") from checkedAge last.

15.5 Validating Lists: traverse

So far we’ve dealt with fixed-size things and Applicatives: we’ve applied a function of two or three arguments to some things. What if we have an arbitrary amount of inputs? What if we need to validate a list?

Let’s look at some ways to implement a function like this:

As always, when working with lists, pattern matching and recursion are usually the way to go. Here’s a recursive solution:
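A sketch of the recursive pattern, written here generically over any Applicative so it can be tried out with Maybe as well:

```haskell
-- Validate every element of a list, combining the results
-- with the Applicative operations.
validateAll :: Applicative f => (a -> f b) -> [a] -> f [b]
validateAll _     []     = pure []
validateAll check (x:xs) = (:) <$> check x <*> validateAll check xs
```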

It’s a bit of a chore to always spell out a recursion like this. If we were working in a Monad we could just use a helper like mapM:

The equivalent of mapM for Applicative is called traverse. It’s a member of the type class Traversable:

That’s one heck of a type signature, so let’s simplify it a bit. Lists are Traversable, so we can specialize this type into:

That looks like what we need! For Applicatives that are also Monads, traverse is just another name for mapM:

But for our Validation, which isn’t a Monad, traverse is exactly what we want:

Note how traverse for Validation collects all the errors together, in the order they occur in the original list.

PS. In fact, Validation is one of the few examples of an Applicative that can’t be a Monad. Can you figure out why?

15.6 Sidenote: Traversable

So what things are Traversable? Many familiar structures. Here are some examples:

So Traversable is a type class for all sorts of containers, kind of like Foldable. Indeed, if you look at the definition, Traversable is a subclass of Foldable. It also turns out that both traverse and mapM are methods of the class!

It can be hard to keep the types straight here. Let’s go back to the type of traverse:

Here we have two functors: t and f. The t functor is also a Foldable and a Traversable and the f functor is also an Applicative. The traverse function lets us run f-operations inside a t-container.

Don’t worry if that feels abstract. In practice, you pretty much always use traverse on lists.

15.7 Dealing with Failure: Alternative

If you play around with applicatives a bit you’ll start to notice some limits to their power. For example, when writing parsers like we did in the parseMoney example, it would be nice to be able to try a couple of different parsers and take any non-failure result. This is easy enough to write for a concrete Applicative like Maybe, as can be seen below.
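A sketch of such a function for Maybe:

```haskell
-- Return the first result that isn't a failure.
eitherOf :: Maybe a -> Maybe a -> Maybe a
eitherOf (Just x) _ = Just x
eitherOf Nothing  y = y
```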

How could we generalize eitherOf? We can’t give it the type Applicative f => f x -> f x -> f x, because then the implementation would effectively need to be something like eitherOf a b = liftA2 something a b, but then eitherOf Nothing (Just x) would be Nothing (since that’s how the Maybe Applicative instance works)!

It turns out we need a new type class: Alternative. An Alternative adds two operations to Applicative: empty means no results, and <|> means combining results.

Now we can rewrite our parsing code using general operations:

We can also pick which Alternative we run our parser in to get different behaviours. Maybe gives us only one result, while [] gives us all possible results.

The Alternative instances for [] and Maybe are unsurprising:
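Their behaviour can be seen in action: Maybe keeps the first success, lists concatenate all results.

```haskell
import Control.Applicative (Alternative (..))

-- Maybe: <|> returns the first Just, empty is Nothing
firstSuccess :: Maybe Int
firstSuccess = Nothing <|> Just 2 <|> Just 3

-- Lists: <|> is list concatenation, empty is []
allResults :: [Int]
allResults = [1, 2] <|> [3]
```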

The Validation type is also an Alternative. The instance collects together all error messages, just like the Applicative instance did.

Here’s a final example: validating contact information, which is either a phone number or an email address.

Note how here, just like in previous examples, the errors are collected left-to-right: the errors from validateEmail come before the errors from validatePhone. The error from checkDigits comes before the error from checkLength.

15.8 Sidenote: Applicatives in Context

15.8.1 Why Applicatives?

There are multiple reasons for learning Applicatives, even if they do not provide any additional power over Monads. First off, as discussed in Lecture 13, the GHC standard library nowadays makes Applicative instances for all Monads compulsory. Thus a working Haskell programmer is bound to see lots of Applicative instances.

Secondly, you’ll bump into Applicative operators a lot even in Monadic code. Expressions like f <$> x <*> y <*> z can be useful in many Monadic contexts. Also, since the Traversable type class is built in terms of Applicative, you’ll often use applicative operations with it.

Thirdly, Applicatives are an excellent exercise in understanding functional design patterns. They marry the Functor pattern with the Monoid pattern, and Alternative brings in yet another Monoid-like dimension. Being able to efficiently work with Applicatives will make working with further abstractions like monad transformers or lenses easier.

Lastly, there are a couple of types that are Applicatives but not Monads. Validation is one example, and a very practical one at that. Without an understanding of Applicative, we’d have no way to recognize and generalize operations over types like this. Another such type is ZipList.

15.8.2 Applicatives in the Wild

Even though we only looked at some very simple and concrete Applicatives in this lecture, there are plenty of Haskell libraries that use Applicatives for big things. Here are some examples.

The same sort of idea as our Validation Applicative has been implemented in the validation and either libraries.

There are multiple parser libraries that use Applicatives. For example, regex-applicative, optparse-applicative, yamlparse-applicative, json-stream, and so on.

15.8.3 Monads and Applicatives

So what’s the relationship between a Monad and an Applicative? If an Applicative is also a Monad, the following laws hold:

When working in a Monad, you can freely mix Applicative and Functor with Monad operations. As an example, let’s rewrite mapM until it only uses applicative operations, and thus get an implementation for traverse. This is our starting point:

GHCi tells us it only works for Monads:

Let’s apply the pure === return and the liftA2 laws from above:

Ta-da! Now myMapM works for any Applicative:
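The end result of the rewrite can be sketched like this:

```haskell
import Control.Applicative (liftA2)

-- mapM rewritten using only Applicative operations:
-- this is essentially traverse for lists.
myMapM :: Applicative f => (a -> f b) -> [a] -> f [b]
myMapM _ []     = pure []
myMapM f (x:xs) = liftA2 (:) (f x) (myMapM f xs)
```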

15.9 Quiz

What’s the type of x in liftA2 (&&) Nothing x?

  1. Applicative f => f Bool
  2. Applicative Bool
  3. Maybe Bool

How many elements does liftA2 f xs ys have when xs and ys are lists?

  1. length xs + length ys
  2. length xs * length ys
  3. min (length xs) (length ys)

Which of these expressions is equivalent to liftA2 f x y? There might be multiple correct answers.

  1. f <$> x <*> y
  2. f <*> x <*> y
  3. f <*> x <$> y
  4. fmap f x <*> y
  5. pure f <*> x <*> y

If f :: X -> Maybe Y and xs :: [X], which of these expressions has a type different from the others?

  1. fmap f xs
  2. traverse f xs
  3. map f xs

For which Applicative do the expressions pure x <* pure y and pure x <|> pure y have different results?

  1. Maybe
  2. []
  3. Validation

15.10 Exercises

16 Lecture 16: Odds and Ends

This final lecture will go over some minor topics that didn’t fit in anywhere else. You’ve finished all the hard parts of the course. Now it’s time to sit back, relax, and enjoy some cool Haskell!

16.1 Testing with QuickCheck

One of the benefits of purity is that pure functions are easy to test: you don’t need to set up any global state, you can just pass in arguments and check that the result is ok. In this section, we’ll take a quick tour of the property-based testing library QuickCheck, which has also been used for checking that your answers to exercises are right on this course!

Let’s look at testing a (faulty) implementation of reverse. You can find this and the following examples in the file exercises/Examples/QuickCheck.hs.

We can write an individual test case using the === operator from QuickCheck:

We can ask QuickCheck to run them in GHCi:

So far so good. However, this isn’t really what QuickCheck was made for. QuickCheck was designed for property-based testing, where you state a property your code should have, and QuickCheck runs your code with randomized inputs, checking the property every time. What would be a simple property that a correct implementation of reverse has? Reversing a list twice should certainly give us back the same list. Let’s write it out:
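The property might look like this (the name propRevTwo is my own; rev repeated for self-containment; note that the pair-swapping bug survives a double application, so this property passes):

```haskell
import Test.QuickCheck

rev :: [a] -> [a]
rev []       = []
rev [x]      = [x]
rev (x:y:zs) = y : x : rev zs

-- Reversing twice should be the identity.
propRevTwo :: [Int] -> Property
propRevTwo xs = rev (rev xs) === xs
```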

Our Property has an argument, which means that QuickCheck will generate random values and run the test. We can use the verboseCheck function to see which values are run. We can also give a parameter to the test ourselves if we wish to check a specific value.

Even this property didn’t catch the bug in our implementation. Let’s try another one. Here’s a property about how rev (xs ++ ys) behaves. You might want to take a moment to convince yourself that it should hold for a correct rev function.
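A sketch of that property (the name is my own):

```haskell
import Test.QuickCheck

rev :: [a] -> [a]
rev []       = []
rev [x]      = [x]
rev (x:y:zs) = y : x : rev zs

-- For a correct reverse: reversing a concatenation reverses
-- the parts in swapped order.
propRevApp :: [Int] -> [Int] -> Property
propRevApp xs ys = rev (xs ++ ys) === rev ys ++ rev xs
```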

Let’s see if it holds for our implementation:

Finally, a failure! There’s a bit to unpack here. First off, QuickCheck tells us the arguments with which the property failed: they are [0,0] and [1]. We can check this ourselves:

Next up, what does “after 5 tests and 3 shrinks” mean? One of the cool things about QuickCheck is that when it finds a failure, it tries some related values in order to find a nicer, smaller failure. We can see this in action with verboseShrinking, which prints out all the failures QuickCheck goes through:

QuickCheck went down from a counterexample of [4,1,-1,4,4] all the way to [1,0,0]. Pretty sweet!

16.1.1 Modifiers

Sometimes you need to limit the values QuickCheck generates. Your function might not work on all inputs, for instance. Let’s try writing a test for last.

In this case, we can fix the test just by switching to another input type. QuickCheck defines the NonEmptyList type (not to be confused with Data.List.NonEmpty!), which is just a wrapper for a normal list. However, when generating values of NonEmptyList, QuickCheck won’t generate empty lists.
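The broken and fixed versions might look like this (property names are my own):

```haskell
import Test.QuickCheck

-- Fails: last [] is a runtime error, and QuickCheck will generate [].
propLast :: [Int] -> Property
propLast xs = last xs === head (reverse xs)

-- Fixed: a NonEmptyList never wraps an empty list.
propLastFixed :: NonEmptyList Int -> Property
propLastFixed (NonEmpty xs) = last xs === head (reverse xs)
```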

There are other modifiers like this, for example Positive for positive numbers, NonNegative for non-negative numbers, or SortedList for a sorted list. Here’s an example of a more complex test. We check that the nth element of cycle xs is correct. Both of the modifiers are needed, since !! doesn’t work with negative inputs, and cycle [] is an error.
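A sketch of such a test, using both modifiers:

```haskell
import Test.QuickCheck

-- NonEmpty rules out cycle [], NonNegative rules out negative indices.
propCycle :: NonEmptyList Int -> NonNegative Int -> Property
propCycle (NonEmpty xs) (NonNegative n) =
  cycle xs !! n === xs !! (n `mod` length xs)
```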

16.1.2 Generators and forAll

Sometimes we need to limit the range of inputs to a test even further. As a simple example, here’s a test that Data.Char.toUpper changes the character passed to it:
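Such a test might be written like this (hypothetical name; =/= is QuickCheck’s inequality counterpart of ===):

```haskell
import Data.Char (toUpper)
import Test.QuickCheck

-- Fails for digits, punctuation, uppercase letters, ...
propToUpperChanges :: Char -> Property
propToUpperChanges c = toUpper c =/= c
```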

Of course, it only changes lowercase letters. How can we write a test for that? There is no Lowercase modifier available that would work like Positive or NonEmptyList. We need to fall back to explicit generation of values using forAll:
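One way to write it, using forAll with the elements generator (property name is my own):

```haskell
import Data.Char (toUpper)
import Test.QuickCheck

-- Only generate lowercase letters, which toUpper always changes.
propToUpperChangesLower :: Property
propToUpperChangesLower =
  forAll (elements ['a'..'z']) (\c -> toUpper c =/= c)
```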

Perfect! Let’s look at the types to see what’s going on here.

There are some new types here. A value of type Gen a is a generator for values of type a. We’ll talk a bit more about Gen in the next section; for now you’ll see a couple of functions that return Gens so that we can use them with forAll. The elements function, as you might have guessed, takes a list and returns a generator that picks one of the list’s elements at random.

The Testable type class is the same one that the quickCheck function uses. It exists so that quickCheck can test types like [Int] -> Bool -> Property in addition to just plain Property values.

In addition, some simple types like Bool have a Testable instance, so that you can write tests using normal Haskell predicates instead of ===:
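For example, a property written as a plain predicate (a hypothetical example; since [Int] -> Bool is Testable, quickCheck accepts it directly):

```haskell
-- Reversing a list preserves its length.
propReverseLength :: [Int] -> Bool
propReverseLength xs = length (reverse xs) == length xs
```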

Getting back to forAll, we can use forAll to write more complex tests. Here’s a test that checks that sort xs has the same elements as xs. Note how we use NonEmptyList to guarantee that the forAll has some elements to pick from.
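A sketch of such a test (the name is my own):

```haskell
import Data.List (sort)
import Test.QuickCheck

-- Pick a random element of xs and check it survives sorting.
propSortKeepsElements :: NonEmptyList Int -> Property
propSortKeepsElements (NonEmpty xs) =
  forAll (elements xs) (\x -> x `elem` sort xs)
```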

16.1.3 Further with QuickCheck

We’ve only just scratched the surface of QuickCheck. Here are some pointers to some things you’ll find useful when you start writing larger QuickCheck tests.

Sometimes the output from QuickCheck isn’t quite verbose enough. You can add your own lines to the output by using the counterexample combinator:

As an example, let’s add logging of the input to rev to propRevTwo:
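It might look like this (rev repeated for self-containment; the extra string is printed whenever the property fails):

```haskell
import Test.QuickCheck

rev :: [a] -> [a]
rev []       = []
rev [x]      = [x]
rev (x:y:zs) = y : x : rev zs

propRevTwo :: [Int] -> Property
propRevTwo xs =
  counterexample ("rev input: " ++ show xs) (rev (rev xs) === xs)
```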

As you might have guessed, Gen is a Monad. You can write your own generators by combining the generators defined by QuickCheck. You can check what your generators output using sample.

Closely related to generators is the Arbitrary type class. Arbitrary is how QuickCheck generates all those inputs automatically.

If you’re writing tests for custom types, you either need to use forAll, or implement an Arbitrary instance. Here’s what happens if you’re missing an instance:

Here are the two ways to fix it:
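A sketch of both fixes for a hypothetical custom type (the type Suit and the property names are my own):

```haskell
import Test.QuickCheck

data Suit = Hearts | Spades | Diamonds | Clubs
  deriving (Show, Eq)

-- Fix 1: supply a generator explicitly with forAll.
propSuitForAll :: Property
propSuitForAll =
  forAll (elements [Hearts, Spades, Diamonds, Clubs]) $ \s ->
    s `elem` [Hearts, Spades, Diamonds, Clubs]

-- Fix 2: write an Arbitrary instance, after which quickCheck
-- can generate Suits on its own.
instance Arbitrary Suit where
  arbitrary = elements [Hearts, Spades, Diamonds, Clubs]

propSuit :: Suit -> Bool
propSuit s = s `elem` [Hearts, Spades, Diamonds, Clubs]
```

With the instance in place, quickCheck propSuit just works.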

16.2 Phantom Types

Boo! There’s a ghost in the type system! Let’s have a look at what phantom types can do for you.

Phantom types are types that don’t take any values. They are related to newtypes (see lecture 10) in that both are a way to add additional type checking without affecting the evaluation of the program at all.

Let’s use phantom types to track which currency an amount of money is in. We define the phantom types EUR and USD (note how they don’t have any constructors!), and the parameterized type Money a that doesn’t use the type parameter a for anything. Then we can define two constants, one in euros and the other in dollars. You can find all of the code from this section in the file exercises/Examples/Phantom.hs.
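A minimal sketch of those definitions:

```haskell
data EUR    -- phantom types: no constructors,
data USD    -- so they have no values

-- The parameter `currency` is not used on the right-hand side.
data Money currency = Money Double
  deriving (Show, Eq)

dollar :: Money USD
dollar = Money 1

twoEuros :: Money EUR
twoEuros = Money 2
```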

Note how the type signatures for dollar and twoEuros are doing all the work here. An expression like Money 1 has the polymorphic type Money currency if we don’t restrict it to a more specific type. We give dollar and twoEuros more limited types explicitly. This is analogous to defining something like one :: Int; one = 1, since the constant 1 has the polymorphic type Num p => p but we give it a more restricted type.

Now that we have some constants, we can write functions that operate on them. Let’s start with a function scaleMoney that multiplies an amount of money by a number. The currency stays constant. Here too, the type signature is doing the work: without the type signature, Haskell would infer a type of Double -> Money a -> Money b.
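A sketch of scaleMoney (types repeated so the snippet stands alone):

```haskell
data EUR
data USD
data Money currency = Money Double deriving (Show, Eq)

-- The signature forces the result currency to match the input.
scaleMoney :: Double -> Money currency -> Money currency
scaleMoney factor (Money amount) = Money (factor * amount)
```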

Next up: adding two amounts that are in the same currency. We get a nice type error if we try to add values in two different currencies.
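A sketch of addMoney:

```haskell
data EUR
data USD
data Money currency = Money Double deriving (Show, Eq)

-- Both arguments and the result share one currency.
addMoney :: Money currency -> Money currency -> Money currency
addMoney (Money a) (Money b) = Money (a + b)

-- addMoney (Money 1 :: Money EUR) (Money 2 :: Money USD)
--   ==> rejected by the type checker: the currencies differ
```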

As before, the type signature is crucial. Here’s the same implementation with an unrestricted type. Now we can add anything to anything!
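That unrestricted version might look like this:

```haskell
data EUR
data USD
data Money currency = Money Double deriving (Show, Eq)

-- This is the type Haskell infers without a signature:
-- no currency safety at all.
addMoney' :: Money a -> Money b -> Money c
addMoney' (Money x) (Money y) = Money (x + y)
```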

We can keep going with this approach, and define currency conversions. We define the type Rate that uses phantom types to track the currencies it’s translating between. The types of convert and invert are restricted to have the properties we want. There’s also an unrestricted version of the convert function to let you compare the types.
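A sketch of Rate and the conversion functions:

```haskell
data EUR
data USD
data Money currency = Money Double deriving (Show, Eq)

-- Rate from to: a multiplier for converting `from` into `to`.
data Rate from to = Rate Double deriving (Show, Eq)

convert :: Rate from to -> Money from -> Money to
convert (Rate r) (Money amount) = Money (r * amount)

invert :: Rate from to -> Rate to from
invert (Rate r) = Rate (1 / r)

-- The unrestricted version loses the currency bookkeeping:
convert' :: Rate a b -> Money c -> Money d
convert' (Rate r) (Money amount) = Money (r * amount)
```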

Note! The words currency, from, to and so on in the previous examples are just type variables. There’s nothing special going on with them. We could just as well have given invert a type like Rate a b -> Rate b a without any change in type safety.

This approach has clear benefits: phantom types give us type errors for invalid code. Compared to defining lots of concrete types like data MoneyEur = MoneyEur Double, we only need to implement functions like scaleMoney and addMoney once, and we can define polymorphic, reusable concepts like Rate. You can contrast this approach with the Boxing section of Lecture 7.

However, phantom types also have downsides. Without advanced tricks we can’t really handle currencies that are defined at runtime (for example: reading an amount from a user). It’s also easy to end up in a place where you start needing language extensions like Generalized Algebraic Datatypes, type families and other type-level programming constructs. Eventually you’re all the way in the world of dependent typing.

So what are good applications for phantom types? They shine when you need to track some simple but crucial information that is known at compile time. An example that’s somewhat better than currencies is tracking whether inputs from the user have been sanitized to prevent attacks like SQL injection or cross-site scripting.

We can use the types Input Safe and Input Unsafe to track whether strings are safe for passing into the database or not. If our module only exports the makeInput function, and not the Input constructor, the type system ensures that any inputs must pass through the escapeInput function at some point before going into a database function like addForumComment.
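A minimal sketch of this design (the escaping logic here is a placeholder; the point is the types):

```haskell
data Safe
data Unsafe

newtype Input a = Input String
  deriving Show

-- Wrapping user input always produces an Unsafe value.
makeInput :: String -> Input Unsafe
makeInput = Input

-- Placeholder escaping: the only way to produce an Input Safe.
escapeInput :: Input Unsafe -> Input Safe
escapeInput (Input s) = Input (filter (/= '\'') s)

-- Database functions accept only sanitized input.
addForumComment :: Input Safe -> IO ()
addForumComment (Input s) = putStrLn ("inserting comment: " ++ s)
```

If the module exports makeInput but not the Input constructor, addForumComment (makeInput s) is a type error, while addForumComment (escapeInput (makeInput s)) is fine.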

16.3 Simultaneity

16.3.1 Parallelism

One of the great things about purity is that it makes parallelism, computing many things at the same time, very easy. Let’s have a look at how we can do this in Haskell. First off, we need to start a new GHCi that has parallel execution enabled. The simplest way is:

$ stack ghci --ghci-options "+RTS -N"

Next up, let’s define a very naive version of the Fibonacci function (remember Lecture 1?), enable performance statistics with :set +s, and see how long it takes to compute five values of the function:
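The naive function might look like this (the base cases in Lecture 1’s version may differ slightly):

```haskell
-- Exponential-time Fibonacci: recomputes subproblems over and over.
fib :: Integer -> Integer
fib 0 = 1
fib 1 = 1
fib n = fib (n-2) + fib (n-1)
```

In GHCi, after :set +s, evaluating something like map fib [25..29] prints the time taken.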

Now let’s bring in the module Control.Parallel.Strategies that defines ways to evaluate values in parallel. We’ll use the parList rseq strategy to evaluate all the elements of the list to WHNF, in parallel.
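A sketch of how that looks, assuming the parallel package (which provides Control.Parallel.Strategies) is available:

```haskell
import Control.Parallel.Strategies (parList, rseq, using)

fib :: Integer -> Integer
fib n = if n < 2 then 1 else fib (n-2) + fib (n-1)

-- `using` attaches a strategy to a value:
-- here, evaluate each list element to WHNF in parallel.
parallelFibs :: [Integer]
parallelFibs = map fib [25..29] `using` parList rseq
```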

That’s almost two times as fast on the 2-core machine this example was run on. Pretty nice. The coolest thing here is that we were able to define the computation (map fib ...) completely separately from the evaluation strategy (parList rseq), separating what to compute from how to compute it.

16.3.2 Concurrency

Computer science makes the distinction between parallel and concurrent computations. Parallel computations are those that just run separate independent computations in parallel (in other words, parallelism is pure). Concurrent computations are those where there are multiple interacting threads of computation. Concurrency usually involves threads, locks, messages and deadlocks.

In addition to great tooling for parallelism, Haskell also has good tooling for concurrency via threads. Since concurrency is all about side effects, concurrent computations happen in the IO Monad. The classic example of threading is two threads, one printing a stream of As and the other a stream of Bs. Here it is in Haskell:
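A sketch of the example (the course file may differ in details; the exact interleaving of the output depends on the scheduler):

```haskell
import Control.Concurrent (forkIO)
import Control.Monad (replicateM_)

concurrency :: IO ()
concurrency = do
  -- start a background thread printing As...
  _ <- forkIO (replicateM_ 40 (putStr "A"))
  -- ...while the current thread prints Bs
  replicateM_ 40 (putStr "B")
```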

Prelude Control.Concurrent> concurrency
AABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABABB

The operation forkIO :: IO () -> IO ThreadId takes an IO operation and starts running it in the background. It produces a ThreadId that can be used to e.g. terminate the thread.

If we want to add actual communication between threads, we can use abstractions like MVar (a mutable thread-safe variable) or Chan (a queue).

Here’s a simple example where one thread writes values to an MVar and another waits for them and prints them. An MVar works like a mailbox: it is either empty or full. Calling takeMVar on an empty box waits for the box to get filled (with a putMVar). Symmetrically, trying to putMVar into a full box waits until the box is empty.
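A sketch of such a producer/consumer pair:

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM_)

mvarExample :: IO ()
mvarExample = do
  box <- newEmptyMVar
  -- producer thread: puts 1, 2, 3 into the box, one at a time
  _ <- forkIO (forM_ [1..3 :: Int] (putMVar box))
  -- consumer: takeMVar blocks until the producer has put a value
  replicateM_ 3 (takeMVar box >>= print)
```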

16.4 Exercises

  • Set16a: QuickCheck
  • Set16b: Phantom types
  • No exercises for parallel or concurrent Haskell, sorry!

16.5 Where to Go From Here?

Congratulations! You’ve reached the end of this two-part course on Functional Programming in Haskell. What next? You definitely know enough Haskell to keep learning on your own. The online Haskell community is very friendly, and there are lots of blog posts and other content explaining advanced techniques and features. You can find lots of interesting stuff by following community blogs, newsletters and forums, for example.

Just keep writing Haskell, studying things (like libraries and tools) when you bump into them, and slowly accumulate experience. Lots of the things you learn with Haskell will be transferable to other languages like TypeScript, Elm, Rust or F#.

Finally, here is an incomplete list of things that got left out of this course, but are worth looking into:

  • Language features
  • Abstractions
  • Tooling
  • Libraries
    • Parsec (parsing): RWH
    • Scotty (simple web framework), Aeson (json): blog
    • Servant (fancy web framework with phantom types)
  • Category theory
    • Many Haskell abstractions are based on Category theory
    • Category theory can be a valuable source of new ideas for programming
    • Category theory can feel intimidating, so it’s good to know that you can get on fine without it
    • Bartosz Milewski has lots of good material, for example Category Theory for Programmers
    • The Haskell Wiki and Wikibook have sections on category theory

16.6 Acknowledgements

This course was made possible by Nitor who donated hours and hours of Joel’s working time for this project. Thank you!

Thanks to the whole Haskell Mooc team, especially

  • John Lång for help on the material
  • Antti Laaksonen for setting up the course and helping with arrangements
  • Topi Talvitie for the exercise checking infrastructure

Thanks to all the students who patiently waited for part 2 and reported errors in the material & exercises!