Philosophizing about Programming; or "Why I'm learning to love functional programming"

Way back, about three years ago, I started writing a Haskell
tutorial as a series of posts on this blog. After getting to monads
(http://scienceblogs.com/goodmath/2007/01/haskell_a_first_step_into_mona_1.php),
I moved on to other things. But based on some recent
philosophizing, I think I'm going to come back to it. I'll start by explaining
why, and then over the next few days, I'll re-run revised versions of old
tutorial posts, and then start new material dealing with the more advanced
topics that I didn't get to before.

To start with, why am I coming back to Haskell? What changed since the
last time I wrote about it?

I last wrote about Haskell three years ago, when I was still working for
IBM. In the time since then, I've been working for Google. It's been a very
enlightening couple of years. Instead of working on isolated research
code-bases, I'm working in a truly massive code-base. I've regularly
written code that will be read by at least dozens of other engineers, and
I regularly read code written by hundreds of other people.

At Google, we generally program in three languages: C++, Java, and Python.
None of them are functional languages: they're all state-heavy, imperative,
object-oriented languages. But the more I've read and written code in this
code-base, the more I've found that functional code is the best way of
building large things. When I look at a piece of code, if the code is
basically functional, I've found that it's much easier to understand,
much easier to test, and much less likely to produce painful bugs.
It's gotten to the point where when I see code that isn't functional, I cringe
a little. Almost everything that I write ends up being at least mostly
functional; in the places where I use non-functional code, it's because the
language and compiler aren't up to the task of keeping the code efficient.

Writing functional code in non-functional languages is, obviously,
possible. I do it pretty much every day. But it's not easy. And it's far
less clear than it would be in a real proper functional language. As
I said above, I sometimes need to compromise for efficiency; and sometimes,
the language just isn't expressive in the right way to let me do things in the
way that a functional programmer really would.

Back when I started the original Haskell tutorial, I was rather skeptical
about Haskell. Functional languages have not, traditionally, been used for
large, complex systems. There were lots of claims made about functional
languages, but not much strong evidence to back them up.

My experiences over the last few years have convinced me that the
functional approach is, really, the correct one. But why Haskell? As I've
mentioned before, I'm an obsessive programming language guy. I know way the
hell too many programming languages. In the functional realm, I've learned not
just Haskell, but also the strict typed family (SML and OCaml), the Lisp
family (Common Lisp, Scheme), other lazy languages (Clean, Miranda, Hope), and
hybrid functional languages (Erlang, Clojure). And in all of those languages,
I haven't seen any that were both as clear as Haskell and as good at managing
complexity. I'm convinced that for an awful lot of complex
applications, Haskell is the right choice.

This is, in many ways, a direct contradiction of what I said when
introducing Haskell the first time around. Back then, I said that the fact
that Haskell was referentially transparent wasn't important. Referential
transparency is another way of saying that the language is mathematically
functional: that every expression in the language is a function from
its inputs to its outputs, with no hidden parameters, no hidden
mutable state that can change the result of a call.
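
To make that concrete, here's a tiny sketch (a toy example of my own): because `double` has no hidden state, any call to it can be replaced by its result, anywhere, without changing the program's meaning.

>double :: Int -> Int
>double x = x + x
>
>-- Guaranteed equal: substituting the expression for the variable
>-- (or vice versa) can't change the result.
>pairA :: (Int, Int)
>pairA = (double 3, double 3)
>
>pairB :: (Int, Int)
>pairB = let y = double 3 in (y, y)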

At the time, I said that I thought that the most common argument in favor
of referential transparency was silly. You see, people talk about referential
transparency being good because it allows you to do formal reasoning about
programs. It's close to impossible to reason about programs in languages
like C++, where you've got things like mutable pointers to functions
that contain implicit, persistent mutable state. But a lazy
functional language like Haskell is something you can reason about. At the time,
my argument was that people don't really reason about real, non-trivial
programs, and that real complex systems would still be impossible
to reason about if they were written in Haskell.

I was wrong. Since then, I've done rather a lot of reasoning
about programs. Sometimes that's been in the context of dealing
with concurrency, when I've got a strange, intermittent bug which
I can't reliably reproduce, and so formally reasoning about the possible
behaviors of the system was the only way to figure out what was going
on. Other times, I've been working on things that are just too expensive
to debug - once they've shown that they can fail, you can't deploy
test runs on a thousand machines to see if, maybe, you can reproduce the problem
and generate a useful stack trace. Even if the cost of deploying a known
buggy program weren't too expensive, sorting through stacks from
a thousand machines to figure out what was going on isn't feasible. So
I've wound up coming back to formal reasoning.

You can do formal reasoning about programs written in
non-functional languages. But you've got to start by making assumptions - and
if those assumptions are wrong, you end up wasting a huge amount of time. The
style of the program has a huge impact on that: in general, the more
functional the programming style, the easier it is to work out a valid set of
assumptions to allow you to analyze the program. But no matter
what, if the language itself is hostile to that kind of reasoning,
you're going to have a much harder time of it than if you were using
a language that was designed for reasoning.

Languages like Haskell, which have referential transparency, were designed
to be analyzed and reasoned about. What referential transparency does is buy
you the ability to make very strong basic assertions about your system:
assertions, not assumptions. In a functional language, you know that
certain axioms are true. For example, you know that no one could have spawned
a thread in the wrong place, because you can only create a thread in a
threading monad; code that doesn't have access to that monad can't acquire
locks, send messages, or spawn threads. If you use something like software
transactional memory, you know that no one could have accidentally mutated
something outside of a transaction - because it's impossible.
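
To make that concrete, here's a minimal sketch (a toy example of my own, not production code): the counter below lives in a TVar, so it can only be touched inside an STM transaction, and only IO code can spawn the thread.

>import Control.Concurrent (forkIO)
>import Control.Concurrent.STM
>
>increment :: TVar Int -> STM ()
>increment t = readTVar t >>= writeTVar t . (+ 1)
>
>main :: IO ()
>main = do
>    counter <- newTVarIO 0
>    -- forkIO lives in IO; pure code has no way to spawn threads.
>    _ <- forkIO (atomically (increment counter))
>    atomically (increment counter)
>    -- retry blocks this transaction until both increments have landed.
>    total <- atomically $ do
>        n <- readTVar counter
>        if n < 2 then retry else return n
>    print total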

I've still got some qualms about Haskell. On one hand, it's a very elegant
language, and the functional nature of it makes it a beautiful glue language.
Some of the most beautiful, clear, elegant code I've ever seen is written in
Haskell - and that's not because it was written by exceptional programmers,
but because the nature of Haskell as a language can make things clearer than
many other programming languages.

But it's not all good. Haskell has some serious problems. In particular,
it's got two issues that worry me enough that I'm still a bit hesitant to
recommend it for a lot of applications. Those two are what I call lazy
confusion, and monad complexity.

By lazy confusion, I mean that it's often extremely difficult to predict
what's going to happen in what order in a Haskell program. You can say what
the result will be, but you can't necessarily say what order the steps will
happen in. That's because Haskell uses lazy evaluation, which means
that no computation in Haskell is really evaluated until its result
is used. You can write Haskell programs that generate infinitely long lists -
but it's not a problem, because no element of the list is ever evaluated until
you try to use it, and you'll never use more than a finite number of elements.
But lazy evaluation can be very confusing: even Haskell experts - even people
who've implemented Haskell compilers! - sometimes have trouble predicting what
code will be executed in what order. In order to figure out the computational
complexity of algorithms or operations on data structures, people often
wind up basically treating the program as if it were going to be
evaluated eagerly - because analyzing the laziness is just too
difficult. Laziness is not a bad thing; in fact, I'm pretty
convinced that very frequently, it's a good thing, which can make
code much cleaner and clearer. But the difficulty of analyzing
it is a major concern.

Monad complexity is a very different problem. In Haskell, most code is
completely stateless. It's a pure functional language, so most code can't
possibly have side effects. There are no assignments, no I/O, nothing but pure
functions in most Haskell code. But state is absolutely essential. To quote
Simon Peyton Jones, one of the designers of Haskell: "In the end, any program
must manipulate state. A program that has no side effects whatsoever is a kind
of black box. All you can tell is that the box gets hotter." The
way that Haskell gets around that is with a very elegant concept called
a monad. A monad is a construct in the program that allows you
to create an element of state, and transparently pass it through a sequence
of computations. This gives you functional semantics for a stateful
computation, without having to write tons of code to pass the state
around. So, for example, it lets you write code like:

>-- putStrLn prints the string as-is; print would add quote marks.
>fancyHello :: IO ()
>fancyHello = do
>    putStrLn "What is your name?"
>    x <- getLine
>    putStrLn (concat ["Hello ", x])

Great, huh? But there's a problem: there is an object that conceptually
contains the state being passed between the steps of the "do" construct.

The reason that that's a problem is that there are multiple different
monads, to represent different kinds of state. There are monads for mutable
arrays - so that you can write efficient matrix code. There are monads for
parsing, so that you can write beautiful parsers. There are monads for IO, so
that you can interact with the outside world. There are monads for interacting
with external libraries written in non-functional languages. There are monads
for building graphical UIs. But each of them has a packet of state that needs
to be passed between the steps. So if you want to be able to do more than one
monadic thing - like, say, write a program with a GUI that can also read and
write files - you need to be able to combine monads. And the more monads you
need to combine, the more complicated and confusing things can get.
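
To give a flavor of what that looks like, here's a minimal sketch using the StateT transformer from the mtl library: it layers an integer counter on top of IO, so the action can both print and thread state at the same time.

>import Control.Monad.State
>
>countedGreet :: StateT Int IO ()
>countedGreet = do
>    n <- get
>    lift (putStrLn ("Greeting number " ++ show n))
>    put (n + 1)
>
>-- Run the combined action twice, starting the counter at 0.
>main :: IO ()
>main = evalStateT (countedGreet >> countedGreet) 0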

I'll come back to those two problems in more detail in both the revised
tutorial posts, and the new posts that I'll be writing.


I've been using Haskell (only my second functional language after Lisp, the non-strictness of which left a bad taste in my mouth) for some mathematics-oriented programming and it really is elegant. Mostly.

A big problem I'd add to your list - in fact, the biggest problem I've found in writing complex (from a computational standpoint) code - is memoization. Haskell makes it very hard to control the memoization of functions. Effectively you have functions (never memoized) and lists (always memoized). If you want to do something in-between, such as a temporally-local cache-like object, you'll need to write your own monad and glue to make a function appear memoized. Tiny changes, such as moving a lambda function to a "where" clause, can have major runtime and memory usage effects.
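
To illustrate the contrast with a toy sketch: the first fib below recomputes everything (exponential time); backing the same recurrence with a list makes the list cells themselves the cache.

slowFib :: Int -> Integer
slowFib n = if n < 2 then fromIntegral n
            else slowFib (n - 1) + slowFib (n - 2)

-- Same recurrence, but each element of the lazily-built list is
-- computed once and then shared on every later lookup.
memoFib :: Int -> Integer
memoFib = (table !!)
  where
    table = map f [0 ..]
    f n = if n < 2 then fromIntegral n
          else memoFib (n - 1) + memoFib (n - 2)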

I have some other caveats, such as the lack of true parameterized types, the limited scope options, and other more specific topics, but in general I find writing Haskell very productive and pretty fun. Finding an interesting optimization or a new way to compose functions gives a lot of "Ah-ha!" moments.

The learning curve is pretty brutal though!

I can recommend trying to digest the things pozorvlak has to say about combining monads - based on a paper by Eugenia Cheng that connects them up with knot theory and interesting braided/symmetric monoidal higher category stuff:

http://pozorvlak.livejournal.com/73533.html
http://pozorvlak.livejournal.com/84293.html

The gist of it is that the way Haskell is doing monad transformers is suboptimal, and a better way will need to start caring about knot-theoretic considerations.

As a longtime programmer (C++, Java, C#) I wholeheartedly concur with the observations you've made about non-functional programming languages.

I'm very curious about monads; in fact, as soon as I've posted this I'll be off to read up about them. In my experience, if there's a way to abuse a concept then programmers will abuse it (e.g. entire programs written as a single class). What potential exists for the abuse of monads?

@ Nate : Careful with those 'Ah-ha' moments! You might owe Oprah a royalty.

Over on Rosetta Code (where we compare how one may do things in various languages), we have a list of languages on the site that support the functional programming paradigm. We have nearly 250 examples of Haskell comparisons with other languages, if you want to take something you're familiar with and investigate how to do it in the unfamiliar.

(disclaimer: Rosetta Code is my site, but I really think the links are relevant in this discussion; I've been reading this blog now for a year or two, I think...)

I'm not sure exactly what you mean by "A monad is a construct in the program that allows you to create an element of state, and transparently pass it through a sequence of computations," but superficially it seems a somewhat narrow view of monads; there are some for which an explanation in terms of threading a state would be quite contrived, e.g. the Continuation-Passing-Style or the List monad.
Even explaining IO as threading the state of the world is a bit problematic (even if GHC implements IO as a State monad which threads around a token, essentially by cheating and introducing side-effects in its low-level internal language).
Another explanation is that IO actions represent side-effects like an AST for an imperative language, and that the runtime system then interprets them and executes the side-effects for us.

I find these especially good introductions to monads:
http://www.haskell.org/haskellwiki/Monads_as_computation
http://www.haskell.org/haskellwiki/Monads_as_containers

On another note, I really appreciated this post. Being able to reason about your code is the main focus of the Haskell community, but hearing it recognized as valuable by someone who works on large systems in quite different languages sounds rather different :)

And yeah, the two problems are there. In particular, lazy evaluation lets you decouple the generation of structures from their consumption in your source code, but not during evaluation, so there's still not as much compositionality as one would hope.

By Andrea Vezzosi (not verified) on 10 Nov 2009 #permalink

@3:

Of course they can be abused!

The easiest way is by just using them everywhere. You can type every function with the IO monad, in which case Haskell turns into a very awkward imperative programming language - with all of the faults, all of the weaknesses, all of the intractability of a lousy imperative language.

Looking forward to the updated series; I missed it the first time around. I use Erlang as my primary language now (supplanting Python), but I learned FP on Haskell. I loved the language, but ran out of steam when I got out of the realm of toy problems and tried to use it in the "real world". But I think it really helps to get a handle on FP in Haskell if you're planning on learning Erlang or OCaml. It takes a while to bend your head around the concepts, and it's easier to do that without getting into the lower-level distractions of other languages.

The "monad complexity" problem you describe is exactly why I stepped back from learning Haskell and have prolonged my sojourn in OCaml territory. I want to eventually get back to Haskell, but I for whatever reason I find that issue really bothers me.

By UncleOxidant (not verified) on 10 Nov 2009 #permalink

Most programs' jobs are to manage state, so I'm opposed to a strategy that works by making state more difficult to deal with in order to encourage programmers to use less of it, which is all that the "functional" languages I've seen do. Rather, I prefer tactics which deal with state directly and turn reasoning about it into a tractable problem.

If a function mutates local variables, but does not mutate global state, then for all practical purposes it is just as good as a stateless function. From the caller's perspective it is stateless.

If you avoid using global variables (or singletons), but instead use dependency injection, you allow the caller to control what state an object or function touches. Even if the total amount of state in your program is large, you can guarantee that f(&x) will only mutate x.

Better language support for programming styles that localize state might be desirable, but I often feel that FP languages go too far in this regard, and make state manipulation more difficult in cases where it would be perfectly fine.

Add me to the long list of functional programming loyalists who really hope that the monad complexity issue is resolved or mitigated. Having five million slight variations of the state monad is a black mark on Haskell and the antithesis of clean, generic, extensible design.

Still love it anyways.

By Matt Skalecki (not verified) on 10 Nov 2009 #permalink

@9:

There are definitely domains in which functional programming seems (and almost certainly is) unnecessarily awkward. However, to keep your "f(x)" terminology, the problems start to arise when, for example, f1 calls f2 and f3, and f2 calls f4 and f5, and both f3 and f5 call f. Now it starts getting much harder to reason about the behavior of the call chain. If I find a bug that seems to come from f2, I have to worry about the state of x -- did it change due to f1->f3->f or because of f1->f2->f5->f? If multiple threads were involved, did I test all possible combinations of schedules to ensure that f(&x) worked no matter what happened up to this point?

Mutable state means that you have more to worry about, but in return, it makes certain algorithms much easier to express or to express efficiently. Like most things in the oh so cruel world that just won't let me have a free lunch, it's a trade off.

By Deon Garrett (not verified) on 10 Nov 2009 #permalink

Brendan Miller @ #9:

Just to clarify:

If a function mutates local variables, but does not mutate global state, then for all practical purposes it is just as good as a stateless function. From the caller's perspective it is stateless.

That's only true if "local variables" does not include static variables (which are, in C at least, global state in disguise).

In the .NET world, "command/query separation" seems to be among the major buzzwords. Every method should either return information about the current state (query) or change that state (command); queries should never change the objects they're querying.

In the course of my own programming I've found monads to be an awkward substitute for directly stateful computations. I like the syntax of Haskell otherwise, but in practice I usually find myself reverting to OCaml/F# or Lisp.

In the .NET world, "command/query separation" seems to be among the major buzzwords. Every method should either return information about the current state (query) or change that state (command); queries should never change the objects they're querying.

Which is a great principle to apply to one's programming, but it continues to piss me off that slavish adherence to c/q separation makes it impossible to write a proper "pop" method for a stack.

Yeah yeah, I know, you query what's on top of the stack and then command it to be popped. It works fine, and is arguably less error-prone. I don't give a shit. Call me a traditionalist, but when you pop a value off a stack, you get the value and you remove it from the stack. I guess I'm a bit religious about it.

"A traditional pop operation is defined as being a union between a command and a query. Any attempt to redefine the traditional pop operation is a threat to our time-honored programming values!"

Oh man, am I that guy? :/
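
(To be fair, in a functional setting the tension dissolves entirely, because pop just hands back both the value and the remaining stack. A sketch in Haskell, since that's the language at hand:)

pop :: [a] -> Maybe (a, [a])
pop []       = Nothing                 -- popping an empty stack fails
pop (x : xs) = Just (x, xs)            -- the value *and* the new stack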

*You can do formal reasoning about programs written in non-functional languages. But you've got to start by making assumptions - and if those assumptions are wrong, you end up wasting a huge amount of time.*

Even in a functional language, you still have to make assumptions about the interpreter/compiler of the language, and about the hardware on which it runs.

#1 http://blog.sigfpe.com/2009/11/memoizing-polymorphic-functions-with.html

#3 & #0
IMHO you've misunderstood the monad concept, just as all of us did at first. For example, "... Haskell turns into a very awkward imperative programming language" is false; in that case Haskell remains a purely functional language, but all the code uses one monadic interface.

"A monad is a construct in the program that allows you to create an element of state, and transparently pass it through a sequence of computations."
The monads have nothing to do with stateful computations or side-effects computations. In fact, you can do side-effects computations without monads http://www.soi.city.ac.uk/~ross/papers/Applicative.pdf A monad is just a class type used to represent computations, i.e. a computational interface. For example, we can use the maybe monad (the instance that adds the monad interface to maybe datatype) to represent computations that can fail http://en.wikipedia.org/wiki/Monad_%28functional_programming%29#Maybe_m…
We can use List monad for build lists or for get backtracking computations for free http://www.randomhacks.net/articles/2007/03/12/monads-in-15-minutes. We can do stateful pure computations with the state monad http://en.wikipedia.org/wiki/Monad_%28functional_programming%29#State_m… and so on.
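
To make the Maybe example concrete, a small sketch: each step either produces a value or aborts the whole chain, and no state is threaded anywhere.

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- If either division hits zero, the whole result is Nothing.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
    q <- safeDiv a b
    safeDiv q c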

So, what is the magic behind the IO monad? The answer is that there is no magic. What is the only way to do a side-effecting computation in a purely functional language?

myAction :: RealWorld -> RealWorld
myActionWithParams :: (RealWorld, a) -> (RealWorld, b)

The only way to do it is for one of the parameters of your function to be "the whole world" (or your whole system, or the universe if you prefer), so you can change it and return a new world.

And what does this have to do with the IO monad?

ghci> :i IO
newtype IO a
  = GHC.IOBase.IO (GHC.Prim.State# GHC.Prim.RealWorld
                   -> (# GHC.Prim.State# GHC.Prim.RealWorld, a #))
        -- Defined in GHC.IOBase
instance Monad IO -- Defined in GHC.IOBase
instance Functor IO -- Defined in GHC.IOBase

Yes, the IO monad is just a state monad threading the RealWorld state (note: the implementation isn't quite that simple, but this is enough for the programmer), so you can change the state of your world.

Actually, monads aren't complex; just start learning from the beginning (Functors -> Applicative Functors -> Monads -> Arrows).

By Anonymous (not verified) on 10 Nov 2009 #permalink

If you write `main = fancyHello` then it is the business of the compiler to make an executable which, when called, will be manipulating state like crazy, of course. But then what you have done by writing "main =" is say: make my computer one that does the fancyHello thing. By itself fancyHello is just an action-kind - a general item like a number or a well-ordering.

And just as `map (\x -> x*x) [1..10]` is a way of defining the sequence of the first ten squares, so the definition `print "What is your name" >> getLine >>= print . ("Hello, " ++)` is a way of defining an action-kind from more primitive action-kinds (and functions from things to action kinds). Haskell treats these action-kinds like numbers, lists and trees, though even experts implicitly deny this.

It sounds strange, but the action-kind thus defined is one that people themselves frequently do, and intend to do, and are sometimes doing, but then get interrupted, and so on -- namely, asking what someone's name is, in English, and then greeting them accordingly. It's just that if you submit to the discipline of Haskell definition, then you can make your computer do it too.

Hidden state and so on are beside the point. Something (sort of) like `print "What is your name" >> getLine >>= print . ("Hello, " ++)` would be an adequate representation of a possible action even if there were no Haskell compiler, and no computers at all, just as `map (\x -> x*x) [1..10]` would be a perfectly good representation of the list of the first ten squares even apart from the possibility of machine evaluation. Similarly `putStrLn "Hello"` (or rather `say "Hello"` or *saying "Hello"*) is something that happens millions of times a day.

A Haskell compiler merely asks you to define the action or action-kind that it is to compile, i.e. make your computer apt to do. The definition must of course accord with the usual lambda-calculus principles. A compiler for an imperative language doesn't ask you for a definition of the proposed action, but rather for how you would emulate a slave-driver who is trying to get someone to do it.

@9:

If a function mutates local variables, but does not mutate global state, then for all practical purposes it is just as good as a stateless function. From the caller's perspective it is stateless.

Indeed, and Haskell's type system is rich enough to express this constraint. See http://www.haskell.org/haskellwiki/Monad/ST#A_few_simple_examples
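
A small sketch of what that buys you: the mutation below is real, but runST guarantees it can't leak out, so sumST is an ordinary pure function.

import Control.Monad.ST
import Data.STRef

sumST :: [Int] -> Int
sumST xs = runST $ do
    acc <- newSTRef 0                      -- a genuinely mutable cell
    mapM_ (\x -> modifySTRef acc (+ x)) xs -- imperative-style loop
    readSTRef acc                          -- the only thing that escapes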

For a language that has all of the good features of Haskell that you mention, as well as a default strict order of execution (with optional explicit lazy evaluation) and a way to do away with monads (by tracking mutability and side effects in the type system), have a look at the Disciple language and its only (not yet complete) compiler implementation, DDC:

http://www.haskell.org/haskellwiki/DDC

and Ben Lippmeier's PhD thesis:

http://cs.anu.edu.au/~Ben.Lippmeier/project/thesis/thesis-lippmeier-sub…

As that wiki page suggests, this is not yet an industrial strength compiler, but the language and the concepts show great promise.

Hi Mark,

I found your blog months ago searching for Haskell information. It is with great pleasure that I see you posting Haskell related posts again. I have a question.

You say that Haskell stands out from the plethora of functional programming languages. You say the functional approach looks like the way to go for managing complexity. But you can't really recommend Haskell because of lazy confusion and monad complexity. Is that to say the functional approach is doomed, since its "best" candidate language can't be recommended? Are there more recommendable languages, say Scheme or Clojure?

It seems to me everybody religiously praises Haskell but never chooses it to build large systems. Does Haskell even have a future?

By François Leclerc (not verified) on 10 Nov 2009 #permalink

Hi Mark,

Do you have an opinion on the Scala language? More generally, do you have an opinion on multi-paradigm languages in general? Scala seems to mix imperative and functional very nicely, which should be good when you want to integrate a functional style gradually with a large existing imperative code base.

Hi Mark,

I have to repeat the question of #18: do you recommend Haskell? Or better yet: will you start using it in your everyday work?

I fell in love with Haskell and the functional paradigm back in my college days. Not only is the code much more elegant and intuitive, but when you get your code to pass the type-checker you can have a higher level of confidence in its correctness than you would get when compiling in some other language. About monads, I would say it takes a while to really understand them, but once you do, you see the beauty of it.

However, despite it being probably my favorite language, I unfortunately never use it; instead I use Java. Why?

- All the libraries I needed to use in the last few years always had a Java version available (sometimes along with C/C++ or Python, but never Haskell).

- I have a nice IDE available (Eclipse) with all the nice features I need (and many more).

- And maybe the most important: when I'm on a collaborative project, everybody is coding it in Java.

So if you're going to use Haskell, will you use it just for a formal specification and then write the implementation in something else, or actually do the final implementation in Haskell?

Off-topic, but still related to programming...

Please tell us about Go! (did you know, what do you think, have you used it, will you use it, will you write about it here, etc.)

1. A print view would be much appreciated.
2. What exactly do you mean by "formal reasoning about programs"?

@20:

Would I recommend Haskell? Depends on the project. There are still some things that are very awkward to write in Haskell. For a lot of projects, yes.

Would I use it for my everyday work? Given the choice, yes. Unfortunately, I don't have that choice. We've got rules in place about what languages are appropriate for production systems - and that limits me to C++, Java, or Python. (I can't even use Go, yet.)

One of the things that I'm going to be working on a lot in the coming months is basically a workflow scheduler. I think it would be *so* much better if I could write it in Haskell, even given the complexity of the foreign-function interface stuff I would need to do. But it'll end up in either C++ or Java.

@18:

I didn't say that I wouldn't recommend Haskell. There are very few programming languages that I would recommend without any reservations.

The lazy complexity thing is a tradeoff. There are some huge advantages to lazy languages, but laziness has its cost. So you need to figure out the balance point for your own work. It's a lot like type systems. Using a language with a really strong type system has some great advantages -- but it can also make some things very awkward. It can be a great tool for managing complexity, but it can also force you to do some things in a more complex way.

Monads are a bit more complicated. I think that monads are an immature language feature. Right now, I think that before you choose to use Haskell for an application, you need to look carefully at whether the complexity of the monad combinations you'll need will outweigh the benefits. For an awful lot of applications, even though the monads are a bit complicated, the advantages of Haskell outweigh the complexity of the monads. For others, the monad complexity cost can still get to be overwhelming.

I don't think that that's a permanent thing. If you look back to the early days of Haskell, they started off without monads. Things got much better with the addition of monads. Some of the monad combinator work has helped make monad-related stuff less complex. I've got every confidence that over time, someone will work out better ways of doing things that will get it under control.

I came into contact with functional programming (Miranda) during my first year of college, when I was still flirting with the possibility of studying computer science (I'm now a mathematician). The main advantage of FP, I thought then and still do, is that it forces you to look at a program from a completely different angle. As such it's a great educational tool (especially with a student crowd who all have some basic high-school programming experience and are already fixed in the imperative mindset). Once you are capable of looking at a problem in a functional way, you can often implement this in your language of choice, resulting in clearer, more efficient, more easily testable programs. It's not specifically the language, but more the way of thinking that's important.

In my experience, using Haskell is like programming while a man with a big stick watches you. If he doesn't approve of your code, he hits you with the stick. If he does approve of your code, he still hits you with the stick - just not as hard.

Which is not to say I disliked Haskell; sometimes that whapping with a stick could feel almost like a soothing massage, and I sometimes miss it when I'm in a more imperative language.

@30: I highly recommend RWH (Real World Haskell). Chapter 18 on monad transformers addresses some of Mark's concerns and made sense of the idea of a monad transformer for me for the first time.

@26:

Your project will probably end up in Java, probably the most popular language today. Wouldn't that make Clojure - with its tight Java integration, and free of laziness confusion and monad complexity - a recommendable functional language for large systems? I feel Haskell is not practical enough. It's not a day-to-day language. Only a handful use it as such - Galois Inc., for example. Haskell needs strong support from a large company. For example, Google - why not? - could write a nice IDE, maybe tweak the language to simplify monads, etc.

By François Leclerc (not verified) on 11 Nov 2009 #permalink

One small meta-nit. Any experienced programmer can learn the basics of a language from a familiar family (e.g., strongly-typed imperative languages) fairly quickly, but it takes time to really understand a new language. Otherwise you get things like "Fortran in Ada" or "Pascal in C" code. Shudder.

The consensus among my coworkers is that it takes about 6 months to really understand a language (e.g., naturally using anonymous inner classes for Java's Comparator instead of writing a 2-line class that's only used in one place), 2 years to really understand the standard libraries (official and de facto) to the point where you're using the right classes and methods instead of just what you're familiar with, and 5 years to reach the point where you know many of the secondary libraries, e.g., Java's Jakarta Commons codec library.

It doesn't surprise me that your view of Haskell has changed so much with more experience. But why does this seem to surprise you?

"C++, Java, and Python. None of them are functional languages: are all state-heavy, imperative, object-oriented languages."

Python not functional? I mean, it's not purely functional, but it's recognized as being functional (functions as 1st-class citizen, closures, map, filter and reduce).

@2:

Mikael, the stuff pozorvlak mentioned about objects in Mnd(Mnd(C)) doesn't immediately give rise to anything better than a monad transformer.

You still need monads in the Kleisli category of a Kleisli category to start using the construct, and it doesn't immediately follow that the desirable properties of the building-block monads in question still hold. Basically, building a monad in Mnd(C) is equivalent to finding an appropriate distributive law.

The other post, in which he hypothesized a couple of years back that monad transformers must be related to distributive laws, is correct, but that is a result known since Mark Jones wrote a paper back around '93 on the topic and provided combinators for gluing together pointed functors and monads into bigger monads using distributive laws. That work basically provided the basis for the modern notion of a monad transformer.

Support for that style of distributive-law-based monad is available inside of category-extras, though I don't know that anyone has ever bothered to use it. ;)

To start with, I'm a big fan of teaching everyone to use a functional language at some point, and I love Lisp.

However, your arguments for FP and Haskell in particular are wholly unconvincing.

First, the comparison of Haskell and other languages that are elegant to us abstract-math types with Python, Java and C is totally misleading. Those languages tend to deliberately strip out the powerful features (usually closure/coroutine related) that enable elegant, compact code but introduce abstractions that perplex some. A lot of the defects in Python, Java and C can be addressed with Ruby, Smalltalk or a vast array of other less popular but more powerful languages.

Secondly, the complexity of monads directly undermines your argument about the ease of reasoning about Haskell code. While formal analysis may be easier with monads, as a practical matter it's harder to reason about state wrapped up as a monad than about directly expressed state. Especially if the primary task of the program is to manipulate state, monads just make it worse.

Also one has to be careful to compare the total complexity of programs accomplishing the same ends. It's easy to gain simplicity per line/function by forcing the programmer to write lots of code to work around the language.

Additionally, I think the mandatory strong typing of Haskell is a real downside. All forms of strong typing inevitably force you to subvert the system or utilize some horribly complex typing construct (kinds, GADTs, etc.). Moreover, despite the religious conviction of strong-typing proponents, types are often simply not a good fit for the kind of program annotation that would best catch errors. I think something *resembling* optional typing is desirable, but I don't think we've quite found it yet.

Finally, when it comes to reasoning about parallelism, that is a matter of the abstractions for parallelism offered by the language, and there are many options for both imperative and functional languages.

@14: command/query separation is nice as long as you don't manage thread-safe data structures, because there it becomes, in many cases, a bug. If your stack has to be a concurrent data structure (like in java.util.concurrent), so that you can use it without acquiring a specific lock first, you cannot split a call to pop() into a call to top() and one to removeTop(), because that would allow another thread to be scheduled between the two calls.
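
(In Haskell, incidentally, STM sidesteps this: the read and the write below happen in one transaction, so no other thread can interleave between them. A sketch:)

import Control.Concurrent.STM

-- An atomic pop over a TVar-held stack: value and removal in one step.
atomicPop :: TVar [a] -> STM (Maybe a)
atomicPop tv = do
    xs <- readTVar tv
    case xs of
        []     -> return Nothing
        y : ys -> writeTVar tv ys >> return (Just y)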

@36: GADTs and higher-rank types are not horrible at all. They may be complex to understand, but they're very useful. I've been in need of even more complex constructs in Haskell for somewhat practical motivations.
(OK, it's more accurate to say that it was during abstract research starting from practical concerns, but we still want people to use this.)