Does the Julia community suffer from the “Lisp curse” (whatever it means)?

The Lisp curse is the idea that a more expressive, more powerful language like Lisp never becomes as popular as less expressive ones. Maybe because of a lack of standardization, maybe because the expressiveness makes it easier to make a mess, maybe because it’s harder to document? We don’t know whether the Lisp curse is valid, nor its true cause, so it’s just a guess. Does it apply to Julia?


  • Julia is really expressive.
  • Some complain about correctness and documentation issues, maybe because multiple dispatch, a more powerful paradigm, is also harder to document and test.
  • The Julia ecosystem is composed of many small independent libraries rather than a few big ones. Some libraries are maintained by a handful of developers, or even a single one, and stop being maintained once that developer is gone. Documentation may also be less complete, because a small team sees things from fewer perspectives. And each library imposes its own conventions, which can differ from library to library.


  • Julia culture often embraces challenge, which means things don’t get “too easy”. The Greedy Julians don’t want to do something for less; they want to do more for the same. For example, they want to do source-to-source AD, which brings lots of difficulties. Or they want to optimize something to a ridiculous level. These can add difficulty, sometimes artificially, but they do make a library feel like something worth sharing.
  • Julia makes writing good generic code easy, which makes libraries easier to both write and use.
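As a small illustration of that last point (a toy function, not from any real package), one generic definition can serve many container and element types at once:

```julia
# One generic mean; works for any indexable collection of numbers,
# with no per-type code.
function mymean(xs)
    s = zero(eltype(xs))   # accumulator matches the element type
    for x in xs
        s += x
    end
    return s / length(xs)
end

mymean([1, 2, 3])          # Vector{Int}
mymean(1:10)               # ranges work too, with no extra methods
mymean(Float32[1.0, 2.0])  # Float32 arithmetic is preserved in the loop
```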

What do you think? The way I see it, I want expressive languages like Julia to truly succeed. So if there is a problem, we need to know about it.


I usually find myself cursing (well, I do not actually curse, but it sounds good in the context of the Lisp curse) at either myself or the language when I fail to write enough tests.

I think that aggressively imposing a test-writing discipline would do a lot of good for the future of the ecosystem (and also help avoid the Lisp curse, whatever it means).


For the uninitiated, the essay. It would be funnier if it were less true. It is usually summarized as:

Lisp is so powerful that problems which are technical issues in other programming languages are social issues in Lisp.

Good programmers in a language find most concepts from elsewhere easy to re-implement in their language of choice. When it is easy enough, nobody bothers to package or document it, either because the task really was that easy, or because packaging and documenting it is a pain. Then you get a profusion of mediocre blobs of code that sort of fit the bill, but are buggy and undocumented, because the author doesn’t really care about it, or doesn’t have time anymore.

All languages suffer from this to some extent - it is not a “yes, cursed” or “no, not cursed” answer. Sure, Julia is very powerful, and it may be true that there are many disposable implementations of things floating around, but it seems like the curators of the language have really gone out of their way to make code easy to package and document.

I’d say whether the curse takes root depends largely on the groups of people involved, and on the vagaries of fate, more so than on the expressiveness of a language.


When people talk about Lisp being powerful, they usually mean expressive power: using macros, one can write really compact code, and build up friendly surface syntax for whatever extension they desire.
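Julia has a (hygienic, more restricted) flavor of the same facility. A toy sketch, not from any real library, of a macro adding small surface syntax:

```julia
# A toy macro adding surface syntax for swapping two variables.
macro swap(a, b)
    esc(:(($a, $b) = ($b, $a)))
end

x, y = 1, 2
@swap x y   # expands to (x, y) = (y, x)
```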

Given this outstanding expressive power, it is puzzling why Lisp is not more widespread. The “Lisp curse” mentioned above is a theory based on social factors. But seasoned programmers (including Lispers) know that great expressive power in a language does not mean that solutions are composable and/or performant. Yes, it is easy to write an OOP extension to Scheme, but it will not compete with optimized C++, and it will not necessarily compose with some orthogonal extension someone else wrote.

The innovation of Julia is that it allows one to write composable, performant, expressive code, based on its type system and compilation model (new ingredient), multimethods, and macros.
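A minimal sketch of that composability (hypothetical types, not from any real package): a generic algorithm written once against an informal interface works with types defined independently, and each call specializes on the concrete types it sees:

```julia
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Square <: Shape
    s::Float64
end

# Each type plugs into the generic `area` interface via dispatch.
area(c::Circle) = pi * c.r^2
area(sq::Square) = sq.s^2

# Written once against the interface; composes with any future Shape,
# including ones defined later in other packages.
total_area(shapes) = sum(area, shapes)

total_area([Circle(1.0), Square(2.0)])  # pi + 4
```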

Incidentally, there are three additional lesser known Lisp curses.

  1. Arguing about obscure points of the CLHS: there are necessarily corner cases that were not covered in detail, so is something a bug, or not? This resulted in a lot of acerbic 200-response mega-threads on Usenet.

  2. Having a standard, instead of a single reference implementation. This has some advantages, but at this point it is very unlikely that incremental improvements will ever end up in a new standard, so Common Lisp proper is frozen in 1994 forever.

  3. Attracting people who think it is some form of super-powerful ancient magic that will solve otherwise intractable problems, make P = NP, model the brain, etc. Nope, it is just a programming language.

Julia seems to be immune to all of these at the moment.


In the past I have said “good enough for jazz” here, meaning that you get something good enough for the analysis/simulation you need to run.
However, I will now make a more apt reply to your point. There is a lot of discussion on this forum regarding optimization because there are many use cases being presented and discussed for Julia.
For instance, yesterday I think there was a request about web applications, with a detailed discussion of the cores and threads being used.
Then look at the long-running discussion on small executables.

My point being that the replies to a wide range of requests for use cases will inevitably involve dives into optimization.

Maybe you are right though - the reply could be “use this library or package - it is good enough for jazz”

Well, not all of them, it would seem. :wink:


I don’t think the essay mentions performance, although you could include that. The problem with Lisp libraries for new language features was that they tended to be hacked-together, poorly documented, bug-ridden, and incomplete, so most Lisp implementations of critical features just never caught on. There were just never polished, professional packages for OOP in Scheme, the way there were in Java or C++.

One good overview of the situation explains:

I predict that while there will arise packages that try to address some of these issues, they will be in disagreement about what to do, they will be niche, without good core language support, and therefore not really solve the problem.

Yes, I think we do. We really love multiple dispatch, but invalidations make Julia difficult to use due to compilation and recompilation. My sense is that we should be more judicious about how we use multiple dispatch, and perhaps separate it out into dedicated packages.

For example, currently we have most types declared in StaticArraysCore.jl and the overloaded methods in StaticArrays.jl.

StaticArraysCore.jl is not very useful by itself; all the useful methods are in StaticArrays.jl. What if we implemented package-scoped versions of methods, such as StaticArraysCore.length?

module StaticArraysCore
    # ...
    length(a::StaticArrayLike) = prod(Size(a))::Int
    # ...
end

module StaticArrays
    # ...
    @inline Base.length(a::StaticArrayLike) = StaticArraysCore.length(a)
    # ...
end

With this, I could use methods in StaticArraysCore.jl specifically for StaticArrays without worrying about invalidating methods and causing recompilation elsewhere. I can still access the StaticArraysCore.length implementation without overloading Base.length.

The main reason I can think of for why we don’t do this is that StaticArraysCore.length is not very useful on its own, since all the rest of our APIs depend heavily on multiple dispatch. However, it would allow someone to implement methods for StaticArrays without relying on multiple dispatch.

The overall problem is that features such as multiple dispatch introduce complexity. Some simplicity in Julia would go a long way.


This may be a naive question, but isn’t this an example of single dispatch? My impression was that multiple dispatch necessarily involves two or more objects: self, and others. Is the point more about dynamic dispatch in general?
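For what it’s worth, the distinction can be sketched with toy functions (nothing from StaticArrays): single dispatch picks a method from one argument’s type, while multiple dispatch uses the types of all arguments:

```julia
# Single dispatch: method chosen by the type of the one argument.
describe(x::Integer) = "an integer"
describe(x::AbstractString) = "a string"

# Multiple dispatch: method chosen by the types of *all* arguments.
collide(a::Integer, b::Integer) = "int-int"
collide(a::Integer, b::AbstractString) = "int-string"
collide(a::AbstractString, b::Integer) = "string-int"

collide(1, "x")  # "int-string"
```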


I’m not sure: are you answering “yes, we do” to the question of whether we “suffer from the Lisp curse”? Which I think is about code correctness (or too many implementations).

The “and recompilation” part I don’t understand. If you precompile your code, then that should be enough (assuming it’s all compiled, and you’re not calling it with alternative types, since code is generic by default). I don’t clearly see a need for recompiling, just for more compiling when a type isn’t already precompiled. I know of the concept of “invalidations”, which is maybe what you had in mind, but it’s also confusing to me. It seems you could keep and reuse already-precompiled code. Does this only happen because of inlining (which could be turned off across package boundaries)? Aggressive inlining could be an opt-in option; currently we only have inlining on or off, with no middle ground. [Python C-extension packages are compiled once and never recompiled, so it seems we could also do without.]

Sorry, perhaps that was bad example.

Yes. The point is that we have dynamic dispatch and a tendency to overload Base methods directly. Rather, I think we should consider programming as if we did not have any dispatch, and then integrate dispatch via method forwarding only.

I believe this is the general statement of the Lisp curse:

Lisp is so powerful that problems which are technical issues in other programming languages are social issues in Lisp.

If we use methods whose concrete types cannot be fully inferred and that are commonly overloaded, we risk our compiled code cache being invalidated, forcing recompilation. Julia has the superpower of multiple dispatch, but perhaps we are abusing it.
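A toy sketch of the mechanism (hypothetical functions): compiled code is specialized against the method table as it existed at compile time, so defining a more specific method later can invalidate that code and force recompilation:

```julia
f(x) = "fallback"
g(x) = f(x)        # compiled g may bake in assumptions about f's methods

g(1)               # compiles g(::Int); returns "fallback"

f(x::Int) = "int"  # a more specific method: the cached g(::Int) is invalidated

g(1)               # recompiled against the new method table; returns "int"
```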

Not “fully inferred” (do you mean not type-stable?); then invalidated, “forcing recompilation”. I meant it forces recompilation now, but is it forced in principle, or could the recompilation be avoided? If it could, then it’s a technical issue, just one not yet solved. Besides, I’m not sure it’s a “social issue”; recompilation just works. Maybe people complain a bit, so it’s social in that sense.

It really should not take 30 seconds to load a package with precompilation, and yet we see instances such as this:


It’s an avoidable problem (so currently a social problem in some sense, yes, though latency is not what I had in mind for the curse; still, you have to do something, rather than “just wait”).

Is it fully precompiled? It seems to me it’s not, and that this is being worked on (why Julia doesn’t fully precompile by default is another story). If you use that package in an app and compile it with PackageCompiler.jl, will it run faster? I think not, but I’m not sure, i.e. will that eliminate the recompilation?

I was curious to look into whether anything could be done, but after a very long download and a very long precompile, it failed, and now it always fails quickly with:

julia> @time using ParameterEstimation
[ Info: Precompiling ParameterEstimation [b4cd1eb8-1e24-11e8-3319-93036a3eb9f3]
WARNING: Method definition isapprox(IntervalSets.AbstractInterval{T} where T, IntervalSets.AbstractInterval{T} where T) in module IntervalSets at /home/pharaldsson/.julia/packages/IntervalSets/viB6k/src/IntervalSets.jl:144 overwritten in module DomainSets at /home/pharaldsson/.julia/packages/DomainSets/aafhp/src/domains/interval.jl:52.
  ** incremental compilation may be fatally broken for this module **

WARNING: Method definition isapprox(IntervalSets.AbstractInterval{T} where T, IntervalSets.AbstractInterval{T} where T) in module IntervalSets at /home/pharaldsson/.julia/packages/IntervalSets/viB6k/src/IntervalSets.jl:144 overwritten in module DomainSets at /home/pharaldsson/.julia/packages/DomainSets/aafhp/src/domains/interval.jl:52.
  ** incremental compilation may be fatally broken for this module **

ERROR: LoadError: UndefVarError: `preprocess_ode` not defined


Could you file an issue with DomainSets? This should be resolved.

This seems like a real problem, but also unrelated to the original Lisp curse, which is about how Lisp’s expressiveness creates social problems by attracting the kinds of people who can’t work well with others.

Lisp doesn’t suffer from particularly long compile times because it’s typically not JIT compiled; Julia’s decision to JIT compile into LLVM (an infamously slow compiler) is unrelated to how dynamic it is.

Right, but it’s also not about JIT. Lisp compiles fast because it is interpreted, or, if compiled, not as aggressively as Julia. LLVM is slow, but need not be: you can choose the optimization level, and inlining is the main problem. If you never inlined, it would not be bad to compile each function separately. Julia allows that, and someone else could try running that package with -O0, -O1, and/or --inline=no, which is what I was going to do:

I filed an issue, and I can’t performance-tune this, or even try, until I figure out how to get it to install and run at all:

Lisp compiles fast because it is interpreted, or, if compiled […]

Hi, my 2c: yes, implementations like SBCL or CCL (the latter known for its fast compilation speed) compile to native code. CLISP compiles to bytecode, but it isn’t the most used one nowadays.