What do functional languages (LISP / ML) offer that Julia doesn't?

I picked up some Julia fairly recently. As far as I can see, Julia supports functional programming somewhat better than Python. For example, anonymous functions (lambdas) are not limited to one line, and there is the function pipeline operator |>, which is convenient for composing functions. Still, I’m wondering if there are good reasons to learn “full-fledged” functional languages like LISP, OCaml, and Haskell. I hear people promoting these languages all the time, but I have never studied them.

From some superficial reading online, one of the distinguishing characteristics of these “full-fledged” functional languages is the use of linked lists as the default data structure, as opposed to arrays in Julia. In OCaml and Haskell, arrays come from the standard library rather than the core language, while in Julia it’s the exact opposite: you cannot use linked lists unless you reach for e.g. DataStructures.jl or write your own implementation. What’s the advantage of using linked lists everywhere, rather than only for special problems that require fast insertion and deletion?

Also, a common feature of functional languages is tail call optimization, which promotes a heavily recursive programming style. For what kind of problems is this better than Julia’s programming style?


Actual functional programmers may weigh in, but my sense of the consensus on linked lists is that no one really thinks they’re all that. I’m pretty sure they aren’t going to top anyone’s list of reasons they prefer Haskell or OCaml or whatever.

Your point about tail call optimization is much closer to the mark; the design patterns in the functional world tend to be very different from those in Julia or Python. They typically use recursion and pattern matching in place of loops and conditionals. Problems involving tree-like data structures are often especially amenable to this style; for example, Pandoc is written in Haskell.

My favorite explanation of this point is this Quora answer by Tikhon Jelvis. He details a particular example (infinite quadtrees) which illustrates the advantages of algebraic data types and laziness quite convincingly, IMO.

Julia does have MLStyle.jl and Lazy.jl, so it is possible to borrow some of the elements of functional programming when you want to. But it isn’t really idiomatic Julia, and the lack of TCO (among many other things) keeps Julia from being a compelling alternative to a Haskell programmer who wants to keep doing things the way they’re done in Haskell. But that’s fine: Julia is for those who find that alternatives aren’t quite doing it for them.


IMO, TCO is a really cool idea but is actually a mistake. Any code that needs TCO can be fairly easily written as a loop, so it’s not that big of an advantage. Furthermore, TCO relies on discarding stack frames when tail calls happen. This sounds good, but especially in complicated cases it makes debugging much harder. Finally, TCO isn’t an optional feature. If you are using a language that has it, you can’t ever expect to run code with it disabled, or you will get stack overflows from packages and standard library functions that assume TCO works.
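To illustrate the first point, here is a hedged sketch in Julia (function names invented for the example): a tail-recursive sum and the loop it mechanically translates to.

```julia
# A tail-recursive sum and its loop equivalent (illustrative names).
# Julia does not guarantee TCO, so the recursive version can overflow
# the stack for large enough n; the loop never will.
sumto(n, acc = 0) = n == 0 ? acc : sumto(n - 1, acc + n)  # tail call

function sumto_loop(n)
    acc = 0
    while n > 0
        acc += n
        n -= 1
    end
    acc
end
```

Both compute 1 + 2 + … + n; the loop is essentially the shape a TCO-capable compiler would produce from the recursive version.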


Well, Haskell doesn’t really have loops, so they better have TCO. :slight_smile:


In Haskell at least, lists are often used as iterators rather than as a data structure for storage (see for example this Reddit thread, in particular this answer).


I have no experience with Haskell & Co, but I wonder whether their pure functional approach has performance impacts. In this approach it is not possible to modify data structures (like arrays) in place, which is less efficient due to allocations, right?
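For concreteness, the distinction being asked about looks like this in Julia (a trivial sketch; the performance question is about how often a pure language forces the second form on you):

```julia
# In-place vs. allocating updates in Julia.
xs = ones(3)
xs .*= 2        # mutates xs in place, no new array allocated
ys = xs .* 2    # allocates a fresh array; xs is untouched
```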

I think placing Lisp in the same cohort as OCaml and Haskell is a bit of a misconception.

(Common) Lisp is at heart a dynamically typed, imperative language with first-class functions. Julia is in many ways just a flavor of Lisp with a more familiar syntax.

ML and Haskell are statically type-checked “real” functional languages, which makes the experience very different. Linked lists are a minor difference IMO, strong type guarantees being a more prominent change.

The main advantage of FP is the feeling that “if it compiles, it works” (or, at least, it won’t suddenly crash at runtime). One of the things FP enforces is exhaustive condition checks in certain cases. Another is a more formal definition of traits via type classes. Say, if you define a numeric type, ideally you’d like to make sure all the arithmetic, comparisons, and whatnot are defined for it. Julia doesn’t check for that; Haskell does.

So, the major advantage of FP is that the type system is designed to let the compiler verify that the programmer has handled all edge cases.
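To make the numeric-type point concrete in Julia terms (the Modular type below is invented for this example): nothing stops you from defining a Number subtype with only part of its arithmetic, and the missing operation only surfaces as an error at runtime, where Haskell’s Num class would require it up front.

```julia
# A numeric type with incomplete arithmetic: Julia accepts it silently.
struct Modular <: Number
    x::Int
end

Base.:+(a::Modular, b::Modular) = Modular(a.x + b.x)

Modular(2) + Modular(3)      # works: Modular(5)
# Modular(2) * Modular(3)    # compiles, but errors at runtime: `*` was never defined
```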

This Reddit answer gives the opinion of someone with experience in both Julia and Haskell on the pros and cons of each (TL;DR: both are great; which one wins depends on the specific task).


@sswatson Thanks for linking Tikhon Jelvis’s Quora answer. Some examples, like the lazy quadtree, seem fairly difficult to reproduce in Julia. I think the package Lazy.jl only implements lazy lists, not lazy trees. On the other hand, I found it fairly easy to reproduce the example of simplifying arithmetic expressions in Julia, and the code is just as short. A stripped-down version of the Haskell code, using algebraic data types and pattern matching, is

data Expr = Var String 
          | Lit Integer 
          | Expr :+: Expr
          | Expr :*: Expr  deriving (Show, Eq)

simplify :: Expr -> Expr 
simplify expr 
  | nextStep == expr = expr 
  | otherwise       = simplify nextStep 
  where nextStep = case expr of 
          Var x -> Var x 
          Lit n -> Lit n 
          (Lit n1 :+: Lit n2) -> Lit (n1 + n2)
          (Lit n1 :*: Lit n2) -> Lit (n1 * n2) 
          (e1 :+: e2) -> simplify e1 :+: simplify e2
          (e1 :*: e2) -> simplify e1 :*: simplify e2

My Julia code is

abstract type Operator end
Math = Union{Operator, Integer}

struct Add <: Operator
    a :: Math
    b :: Math
end

struct Multiply <: Operator
    a :: Math
    b :: Math
end

evaluate(x::Integer) = x
evaluate(x::Add) = evaluate(x.a) + evaluate(x.b)
evaluate(x::Multiply) = evaluate(x.a) * evaluate(x.b)

Now if you run

formula = Multiply(2, Add(4, 3))
evaluate(formula)

you’ll get 14, as expected. Of course, the Haskell version is still more flexible, since you can extend the code to treat 0 and 1 differently from other numbers, as in the original Quora post, while Julia only dispatches on types, not values. Please let me know if you see any other pitfalls in the above Julia code!
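Regarding dispatching on values: while multiple dispatch alone can’t distinguish 0 and 1 from other integers, ordinary conditionals inside each method can. A hedged sketch (the type definitions are repeated so the snippet stands alone; unlike the Haskell original, this does a single recursive pass rather than iterating to a fixed point):

```julia
# Simplification with special cases for 0 and 1, via plain conditionals.
abstract type Operator end
Math = Union{Operator, Integer}

struct Add <: Operator
    a :: Math
    b :: Math
end

struct Multiply <: Operator
    a :: Math
    b :: Math
end

simplify(x::Integer) = x

function simplify(x::Add)
    a, b = simplify(x.a), simplify(x.b)
    a == 0 && return b                      # 0 + e => e
    b == 0 && return a                      # e + 0 => e
    a isa Integer && b isa Integer ? a + b : Add(a, b)
end

function simplify(x::Multiply)
    a, b = simplify(x.a), simplify(x.b)
    (a == 0 || b == 0) && return 0          # 0 * e => 0
    a == 1 && return b                      # 1 * e => e
    b == 1 && return a                      # e * 1 => e
    a isa Integer && b isa Integer ? a * b : Multiply(a, b)
end
```

For example, simplify(Multiply(1, Add(0, 5))) collapses straight to 5.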


The chief advantage of Haskell, for a beginner certainly, is that it forces a strict functional style. Once you’ve got the hang of Haskell, once you’ve coded the application or library, it is obvious how you’d also code it in F# and Clojure. I initially had real problems with F# and Clojure, because I tried coding in a similar style to C and Fortran, and it’s possible but going very much against the grain. After working with Haskell, they are much easier to use.

Compared to Julia, Haskell again forces a different way of thinking about the problem. You start thinking about using map, filter, reduce and higher order functions. You write smaller functions, on the whole; and differentiate between pure and impure functions, even in languages which, unlike Haskell, don’t enforce the distinction.

Functional languages tend to use lists because of immutability. A list is “changed” by prepending an item to the existing immutable list, leaving the old list intact and continuing with the new one. An array is a mutable construct at heart, and although functional languages can handle arrays (particularly F# and Scala, which allow mutable data), they don’t feel like a natural fit. Julia, Fortran, MATLAB - these are more natural for array processing.

Looping constructs like ‘for’ require the loop counter to be mutable. Some functional languages can handle this, some cannot. Recursion, moving on to a new immutable value each time around the loop, is a better fit for a functional language. Of the recursion styles, tail call optimisation (TCO) ensures that the stack won’t overflow. After a bit of practice, I find tail recursion easier to reason about than more straightforward recursion. Perhaps that’s just me…


Comparing Haskell (and similar languages) to Julia, I wonder if it is possible to “dispatch on the empty list”.
For example, a map function can be defined in Haskell with a couple of equations (link):

mymap f [] = []
mymap f (x:xs) = f x : mymap f xs

Could this be done in Julia? That is, separate the recursion termination into another function definition, based on the fact that the input list is empty?

Yes, it’s possible for functions on tuples:

mymap(f, ::Tuple{}) = ()

mymap(f, (x, xs...)) = (f(x), mymap(f, xs)...)

It is not possible to do the same with mutable collections because then “empty” is a state the collection may be in, not a value that can be associated with a specific “empty” type.


It makes perfect sense to use tuples, since functional programming does not mutate its arguments (no side effects).
I don’t know whether the compiled code would be efficient, but it’s good to know that Julia’s syntax supports this style of (pure) functional programming.
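For reference, a self-contained variant with a usage example (renamed mymap2 to avoid clashing with the definition above, and using Base.tail instead of argument destructuring):

```julia
# Tuple map via recursion; the empty-tuple method is the base case.
mymap2(f, ::Tuple{}) = ()
mymap2(f, t::Tuple) = (f(first(t)), mymap2(f, Base.tail(t))...)

mymap2(x -> x + 1, (1, 2, 3))  # (2, 3, 4)
```

Since the tuple’s length is part of its type, the compiler can in principle unroll the whole recursion for small tuples.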

What’s the advantage in using linked lists everywhere, rather than for special problems which require fast insertion and deletion?

They are inherently well suited to pure functions (functions which don’t mutate their inputs). Imagine something like this, in Julia-esque code, where we pretend Julia arrays are linked lists:

xs = [4, 8, 5]
ys = pushfirst(xs, 10)
zs = pushfirst(xs, 20)

For this to work in normal Julia, you would have to allocate a new array for ys and zs. But if xs were a linked list, you could just attach a different element to the front: the [4, 8, 5] part would be shared by xs, ys, and zs. What this gives you is an effective way of working with large immutable data structures without terrible performance penalties. Adding a couple of elements to a huge list with millions of elements does not require massive memory allocation and duplication.
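This sharing can be demonstrated with a minimal cons list in Julia (a hedged sketch; the type and function names are invented for the example, not DataStructures.jl’s API):

```julia
# Minimal immutable singly linked list demonstrating structural sharing.
abstract type List{T} end
struct Nil{T} <: List{T} end
struct Cons{T} <: List{T}
    head::T
    tail::List{T}
end

# Non-mutating pushfirst: O(1), reuses the existing cells.
pushfirst(xs::List{T}, x) where {T} = Cons{T}(x, xs)

xs = Cons{Int}(4, Cons{Int}(8, Cons{Int}(5, Nil{Int}())))
ys = pushfirst(xs, 10)
zs = pushfirst(xs, 20)
ys.tail === xs  # true: the [4, 8, 5] cells are shared, not copied
```

Both ys and zs are new one-cell allocations whose tails point at the very same xs, which is safe precisely because nothing can mutate it.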

If you just reflect a bit on this and play with some code, I think you will see that linked lists simply work really well with a programming paradigm focused on immutability and recursion.

However, this paradigm does not work well with things like matrices, where you need lots of random access, which is why turning Julia into a pure functional language and centering its data structures around linked lists never made any sense. From what I have seen, most Haskell code dealing with such cases becomes really messy. Right tool for the job and all that.