Haskell
In the Haskell series, I was planning on moving on to some monad-related
material. But a reader wrote in and asked me to write another
post on data structures, focusing on a structure called a
zipper.
A zipper is a remarkably clever idea. It's not really a single data
structure, but rather a way of building data structures in functional
languages. The first mention of the structure seems to be a paper
by Gerard Huet in 1997, but as he says in the paper, the structure was likely
in use in functional code well before that; it's just that no one had thought to formalize it
and write it up. (In the…
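To make the idea concrete, here is a minimal sketch of a zipper over a plain list. It's a simplified illustration (Huet's paper develops the idea for trees), and names like `fromList` and `put` are invented just for this example:

    -- A list zipper: the elements to the left of the focus (stored in
    -- reverse order), and the focus together with everything to its right.
    data ListZipper a = ListZipper [a] [a] deriving (Show)

    fromList :: [a] -> ListZipper a
    fromList xs = ListZipper [] xs

    -- Move the focus one step to the right, if possible.
    right :: ListZipper a -> Maybe (ListZipper a)
    right (ListZipper ls (x:rs)) = Just (ListZipper (x:ls) rs)
    right _                      = Nothing

    -- Move the focus one step to the left, if possible.
    left :: ListZipper a -> Maybe (ListZipper a)
    left (ListZipper (l:ls) rs) = Just (ListZipper ls (l:rs))
    left _                      = Nothing

    -- Replace the element at the focus.
    put :: a -> ListZipper a -> ListZipper a
    put x (ListZipper ls (_:rs)) = ListZipper ls (x:rs)
    put x (ListZipper ls [])     = ListZipper ls [x]

The point is that movement and local edits are constant-time, because the structure keeps "where you are" explicit instead of re-walking the container from the root on every update.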
(This is an edited repost of one of the posts from the earlier
version of my Haskell tutorial.)
(This file is a literate Haskell script. If you save it
as a file whose name ends in ".lhs", it's actually loadable and
runnable in GHCi or Hugs.)
Like any other modern programming language, Haskell has excellent support
for building user-defined data types. In fact, even though Haskell is very
much not object-oriented, most Haskell programs end up being centered
around the design and implementation of data structures, using constructions
called classes and instances.
In this post, we'…
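As a small taste of what such a definition looks like (this is just an illustrative type, not necessarily the one the post goes on to build):

    -- A user-defined algebraic data type: a binary tree with values at the leaves.
    data Tree a = Leaf a
                | Node (Tree a) (Tree a)
                deriving (Show, Eq)

    -- Functions over the type are written by pattern matching on its constructors.
    size :: Tree a -> Int
    size (Leaf _)   = 1
    size (Node l r) = size l + size r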
(This is a revised repost of an earlier part of my Haskell tutorial.)
Haskell is a strongly typed language. In fact, the type system in Haskell
is both stricter and more expressive than any type system I've seen for any
non-functional language. The moment we get beyond writing trivial
integer-based functions, the type system inevitably becomes visible, so we
need to take the time now to talk about it a little bit, in order to
understand how it works.
One of the most important things to recognize about Haskell's type system
is that it's based on type inference. What that means is that in…
(This is a heavily edited repost of the first article in my original
Haskell tutorial.)
(I've attempted to write this as a literate Haskell program. What that
means is that if you just cut-and-paste the text of this post from your
browser into a file whose name ends with ".lhs", you should be able to run it
through a Haskell compiler: only lines that start with ">" are treated as
code. The nice thing about this is that this blog post is itself a
compilable, loadable Haskell source file - so I've compiled and tested
all of the code in here in exactly this context.)
Haskell is a functional…
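For instance, in a literate source file a sentence of commentary and a definition can sit side by side, and only the ">" lines are compiled. A tiny made-up example:

    This line is ordinary commentary, so the compiler ignores it.

    > double :: Integer -> Integer
    > double n = n * 2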
Way back, about three years ago, I started writing a Haskell
tutorial as a series of posts on this blog. After getting to
href="http://scienceblogs.com/goodmath/2007/01/haskell_a_first_step_into_mona_1.php">monads, I moved on to other things. But based on some recent
philosophizing, I think I'm going to come back to it. I'll start by explaining
why, and then over the next few days, I'll re-run revised versions of old
tutorial posts, and then start new material dealing with the more advanced
topics that I didn't get to before.
To start with, why am I coming back to Haskell? What changed…
It's been a while since I've written about any data structures. But it just so happens that I'm currently working on implementing a really interesting and broadly useful data structure, called a Rope.
A bit of background, to lead in. I've got this love-hate relationship with
some of the development tools that Rob Pike has built. (Rob is one of the Unix
guys from Bell Labs, and was one of the principal people involved in both the
Plan9 and Inferno operating systems.) Rob has implemented some amazing
development tools. The two that fascinate me are called Sam and Acme. The
best and…
Since I mentioned the idea of monoids as formal models of computation, John
Armstrong made the natural leap ahead, to the connection between monoids and monads - which are
a common feature in programming language semantics, and a prominent language feature in
href="http://scienceblogs.com/goodmath/goodmath/programming/haskell/">Haskell, one of my favorite programming languages.
Monads are a category theoretic construction. If you take a monoid, and use some of the constructions we've seen in the last few posts, you can build a meta-monoid; that is,
a monoid that's built from…
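As a concrete Haskell touchstone for that connection, here is a small illustrative sketch: lists are both a monoid and a monad, and the monad's `join` is just the monoid operation applied one level up.

    import Control.Monad (join)

    -- Lists form a monoid: mempty is [], mappend is (++).
    monoidExample :: [Int]
    monoidExample = mappend [1, 2] (mappend [3, 4] mempty)   -- [1,2,3,4]

    -- Lists also form a monad: join collapses a list of lists,
    -- which is the same combining operation used across one level of nesting.
    monadExample :: [Int]
    monadExample = join [[1, 2], [3, 4], []]                 -- [1,2,3,4]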
So, after our last installment, describing the theory of monads, and the previous posts, which focused on representing things like state and I/O, I thought it was worth taking a moment to look at a different kind of thing that can be done with monads. We so often think of them as being state wrappers; and yet, that's only really a part of what we can get from them. Monads are ways of tying together almost anything that involves sequences.
In previous parts of this tutorial, we've seen the Maybe type. It's a useful type for all sorts of situations where there may or may not be a value. For example, a…
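A minimal sketch of the kind of chaining this enables (the `safeDiv` helper here is invented purely for the illustration):

    -- A division that can fail.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- Two possibly-failing steps, sequenced in the Maybe monad: if either
    -- division fails, the whole computation is Nothing, with no explicit
    -- error checks in sight.
    average2 :: Int -> Int -> Int -> Int -> Maybe Int
    average2 a b c d = do
      q1 <- safeDiv a b
      q2 <- safeDiv c d
      return ((q1 + q2) `div` 2)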
As long as I'm doing all of these basics posts, I thought it would be worth
explaining just what a Turing machine is. I frequently talk about things
being Turing equivalent, and about effective computing systems, and similar things, which all assume you have some clue of what a Turing machine is. And as a bonus, I'm also going to give you a nifty little piece of Haskell source code that's a very basic Turing machine interpreter. (It's for a future entry in the Haskell posts, and it's not entirely finished, but it does work!)
The Turing machine is a very simple kind of theoretical computing…
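As a rough sketch of the shape such an interpreter can take (an illustration only, with made-up names like `Tape`, `Transition`, and `step`, not the code from the post):

    -- Types for a one-tape Turing machine.
    data Direction = MoveLeft | MoveRight deriving (Show, Eq)

    -- The tape: symbols left of the head (nearest first), the symbol
    -- under the head, and symbols to the right.
    data Tape = Tape [Char] Char [Char] deriving (Show)

    -- A transition function: given the current state name and the symbol
    -- under the head, maybe produce a new state, a symbol to write, and a
    -- direction to move; Nothing means "halt".
    type Transition = (String, Char) -> Maybe (String, Char, Direction)

    -- Write a symbol at the head and move it, extending the tape with
    -- blanks (' ') as needed.
    writeAndMove :: Char -> Direction -> Tape -> Tape
    writeAndMove c MoveLeft  (Tape (l:ls) _ rs) = Tape ls l (c:rs)
    writeAndMove c MoveLeft  (Tape []     _ rs) = Tape [] ' ' (c:rs)
    writeAndMove c MoveRight (Tape ls _ (r:rs)) = Tape (c:ls) r rs
    writeAndMove c MoveRight (Tape ls _ [])     = Tape (c:ls) ' ' []

    -- One step of the machine.
    step :: Transition -> String -> Tape -> Maybe (String, Tape)
    step trans state tape@(Tape _ c _) = do
      (newState, newSym, dir) <- trans (state, c)
      return (newState, writeAndMove newSym dir tape)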
As promised, I'm finally going to get to the theory behind monads. As a quick
review, the basic idea of the monad in Haskell is a hidden transition function - a
monad is, basically, a state transition function.
The theory of monads comes from category theory. I'm going to assume you know
a little bit about category theory - if you have trouble with it, go take a look
at my introductory posts here.
Fundamentally, in category theory a monad is built out of a category with a particular kind of
structure. Start with a monoid, viewed as a category with one object. That category has a collection of
arrows which (obviously) are from…
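On the Haskell side, that categorical structure surfaces as (roughly) the following class, written here with primed names so it doesn't clash with the real Prelude definitions, together with the three laws every instance is expected to satisfy:

    class Monad' m where
      return' :: a -> m a
      bind    :: m a -> (a -> m b) -> m b   -- the Prelude calls this (>>=)

    -- The monad laws, stated as equations:
    --   bind (return' a) f   ==  f a                          (left identity)
    --   bind m return'       ==  m                            (right identity)
    --   bind (bind m f) g    ==  bind m (\x -> bind (f x) g)  (associativity)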
Time for more monads. In this article, I'm going to show you how to implement a very simple
state monad - it's a trivial monad which allows you to use a mutable state
consisting of a single integer. Then we'll expand it to allow a more interesting
notion of state.
Let's get the trivial stuff out of the way. To start off, we'll use a simple fixed
state type, which is just a wrapper for a single integer:
> data State = State Int deriving (Show)
A monad over that would be written:
> data SimpleStateMonad a = SSMon (State -> (a, State))
To understand that, just remember that the…
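A sketch of how the instances for that type might be filled in (modern GHC also requires Functor and Applicative instances before the Monad one; this is one plausible way to write them, not necessarily how the post continues):

> instance Functor SimpleStateMonad where
>    fmap f (SSMon run) = SSMon (\s -> case run s of (a, s') -> (f a, s'))
>
> instance Applicative SimpleStateMonad where
>    pure a = SSMon (\s -> (a, s))
>    (SSMon runF) <*> (SSMon runA) =
>        SSMon (\s -> case runF s of
>                       (f, s1) -> case runA s1 of
>                                      (a, s2) -> (f a, s2))
>
> instance Monad SimpleStateMonad where
>    (SSMon run) >>= f =
>        SSMon (\s -> case run s of
>                       (a, s1) -> case f a of
>                                      SSMon next -> next s1)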
The biggest nightmare for most people learning Haskell is monads. Monads are the
key to how you can implement IO, state, parallelism, and sequencing (among numerous other things) in Haskell. The trick is wrapping your head around them.
On the most basic level, they're actually pretty easy to understand. Think about something like IO in purely functional terms. When you write a function that performs an IO action, what are you really
doing in terms of functions? For the moment, ignore all practical concerns, and just think in the
pure abstract: what happens when you do an output operation?…
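One standard way to picture it, as a toy model rather than as how GHC really implements IO: pretend every I/O action is a pure function that takes the entire state of the world and hands back a result plus a new world.

    -- A toy "world" that only remembers what's been printed.
    data World = World { printed :: [String] } deriving (Show)

    -- An "output action" in this model is just a pure function on worlds.
    putLine :: String -> World -> ((), World)
    putLine s (World out) = ((), World (out ++ [s]))

    -- Sequencing two outputs means threading the world through by hand;
    -- monads exist largely to do this threading for us.
    twoLines :: World -> ((), World)
    twoLines w0 = ((), w2)
      where
        (_, w1) = putLine "hello" w0
        (_, w2) = putLine "world" w1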
One thing that we've seen already in Haskell programs is type
classes. Today, we're going to take our first real look
at them in detail - both how to use them, and how to define them. This still isn't the entire picture around type classes; we'll come back for another look at them later. But this is a beginning - enough to
really understand how to use them in basic Haskell programs, and enough to give us the background we'll need to attack Monads, which are the next big topic.
Type classes are Haskell's mechanism for managing ad hoc polymorphism (overloading) on top of parametric polymorphism. Parametric polymorphism
is a…
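Here's a tiny illustrative class and instance, just to fix the shapes in your mind (the names `Container` and `Stack` are invented for the example):

    -- A class declares an interface that many types can support.
    class Container f where
      empty    :: f a
      insert   :: a -> f a -> f a
      contents :: f a -> [a]

    -- An instance says how one particular type supports that interface.
    newtype Stack a = Stack [a]

    instance Container Stack where
      empty               = Stack []
      insert x (Stack xs) = Stack (x : xs)
      contents (Stack xs) = xs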
So, we've built up some pretty nifty binary trees - we can use the binary tree both as the
basis of an implementation of a set, or as an implementation of a dictionary. But our
implementation has had one major problem: it's got absolutely no way to maintain balance. What
that means is that depending on the order in which things are inserted into the tree, we might
have excellent performance, or we might be no better than a linear list. For example, look at
these trees. As you can see, a tree with the same values can wind up quite different. In a good insert order, you can wind up with a…
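A sketch of the effect, with a bare-bones tree and two insertion orders (illustrative code, not the post's own implementation):

    data BST = Empty | Node BST Int BST deriving (Show)

    insert :: Int -> BST -> BST
    insert x Empty = Node Empty x Empty
    insert x t@(Node l v r)
      | x < v     = Node (insert x l) v r
      | x > v     = Node l v (insert x r)
      | otherwise = t

    depth :: BST -> Int
    depth Empty        = 0
    depth (Node l _ r) = 1 + max (depth l) (depth r)

    insertAll :: [Int] -> BST
    insertAll = foldl (flip insert) Empty

    -- Same seven values, very different shapes:
    degenerate, bushy :: BST
    degenerate = insertAll [1, 2, 3, 4, 5, 6, 7]    -- depth 7: effectively a linked list
    bushy      = insertAll [4, 2, 6, 1, 3, 5, 7]    -- depth 3: nicely bushy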
In Haskell, there are no looping constructs. Instead, there are two alternatives: there are list iteration constructs (like foldl which we've seen before), and tail recursion. Let me say, up front, that in Haskell if you find yourself writing any iteration code on a list or tree-like structure, you should always look in the libraries; odds are, there's some generic function in there that can be adapted for your use. But there are always cases where you need to write something like a loop for yourself, and tail recursion is the way to do it in Haskell.
Tail recursion is a kind of recursion…
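For example, here's a sum over a list written with a tail-recursive helper (a small sketch; in practice you'd reach for foldl' from Data.List first):

    -- The recursive call to go is the very last thing it does, so nothing
    -- is left pending on the stack between calls. (To guard against thunk
    -- build-up you'd also force the accumulator, e.g. with seq.)
    sumList :: [Integer] -> Integer
    sumList xs = go 0 xs
      where
        go acc []     = acc
        go acc (y:ys) = go (acc + y) ys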
Last time around, I walked through the implementation of
a very simple binary search tree. The implementation that I showed
was reasonable, if not great, but it had two major limitations. First,
it uses equality testing for the search, so that the implementation is only really suitable for use as a set; and second, because it's such
a trivial tree implementation, it's very easy for the tree to become
highly unbalanced, resulting in poor performance.
Today, we'll look at how to extend the implementation so that our BST
becomes useful as a key/value dictionary type. We'll take two different…
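The core of the change is easy to sketch (my own illustrative types here, not necessarily the route the post takes): store key/value pairs in the nodes, and compare only on the key.

    data Dict k v = Tip
                  | Branch (Dict k v) k v (Dict k v)
                  deriving (Show)

    insertKV :: Ord k => k -> v -> Dict k v -> Dict k v
    insertKV k v Tip = Branch Tip k v Tip
    insertKV k v (Branch l k' v' r)
      | k < k'    = Branch (insertKV k v l) k' v' r
      | k > k'    = Branch l k' v' (insertKV k v r)
      | otherwise = Branch l k v r            -- same key: replace the value

    lookupKV :: Ord k => k -> Dict k v -> Maybe v
    lookupKV _ Tip = Nothing
    lookupKV k (Branch l k' v r)
      | k < k'    = lookupKV k l
      | k > k'    = lookupKV k r
      | otherwise = Just v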
For this post, I'm doing a bit of an experiment. Haskell includes a "literate" syntax mode. In literate mode, any text that doesn't start with a ">" sign is considered a comment. So this entire post can be copied and used as a source file; just save it with a name ending in `".lhs"`. If this doesn't work properly, please post something in the comments to let me know. It's more work for me to write the posts this way, so if it's not working properly, I'd rather not waste the effort. I've tested it in both Hugs and GHCi, and everything works, but who knows what will happen after a pass…
Haskell is a strongly typed language. In fact, the type system in Haskell is both stricter and more
expressive than any type system I've seen for any non-functional language. The moment we get beyond
writing trivial integer-based functions, the type system inevitably becomes visible, so we need to
take the time now to talk about it a little bit, in order to understand how it works.
One of the most important things to recognize about Haskell's type system is that it's based on *type
inference*. What that means is that in general, you *don't* need to provide type declarations. Based
on how you…
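For instance, this definition carries no type declaration at all, and the compiler still works out a fully general type for it:

    -- No type signature given; GHC infers one.
    addPair (x, y) = x + y

    -- Asking GHCi shows what was inferred:
    --   ghci> :type addPair
    --   addPair :: Num a => (a, a) -> a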
I wasn't really sure of quite how to start this off. I finally decided to just dive right in with a simple function definition, and then give you a bit of a tour of how Haskell works by showing the different ways of implementing it.
So let's dive right in and take a look at a very simple Haskell definition of the factorial function:
fact n = if n == 0
         then 1
         else n * fact (n - 1)
This is the classic implementation of the factorial. Some things to notice:
1. In Haskell, a function definition uses no keywords. Just "`name params = impl`".
2. Function…
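Two of the alternative stylings look roughly like the sketches below, using pattern matching and guards (the primed names are only there to keep the versions distinct):

    -- The same function written with pattern matching on the argument...
    fact' :: Integer -> Integer
    fact' 0 = 1
    fact' n = n * fact' (n - 1)

    -- ...and with guards.
    fact'' :: Integer -> Integer
    fact'' n
      | n == 0    = 1
      | otherwise = n * fact'' (n - 1)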