What don't you like about Julia for "serious work"?

If this happens only at the last step, that is, in the exact doubling that would grow the vector to use more RAM than I have available, then it is entirely reasonable. It may trip up someone expecting the algorithm to be oblivious to the amount of RAM available, but it will probably do more good than harm. I remember getting worried because at some point the discussion was “what if we stop doubling and start using a smaller growth constant after the vector takes more than 1% of the total memory?” That rule was much more arbitrary, and at the time I was working with algorithms whose vectors easily grew to 10% of system memory (a single vector, that is). So I was somewhat worried at the moment.
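
For the curious, here is a minimal sketch of the exact-doubling strategy under discussion (the type and function names are made up; this is not Julia's actual implementation):

```julia
# Minimal sketch of exact capacity doubling: when the buffer is full,
# allocate twice the space and copy. Appending n elements is O(n) amortized,
# but the final doubling can briefly demand far more RAM than expected.
mutable struct GrowVec
    buf::Vector{Int}
    len::Int
end
GrowVec() = GrowVec(Vector{Int}(undef, 1), 0)

function push_grow!(v::GrowVec, x::Int)
    if v.len == length(v.buf)
        newbuf = Vector{Int}(undef, 2 * length(v.buf))  # exact doubling
        copyto!(newbuf, 1, v.buf, 1, v.len)
        v.buf = newbuf
    end
    v.len += 1
    v.buf[v.len] = x
    return v
end

v = GrowVec()
foreach(i -> push_grow!(v, i), 1:1000)
```

The RAM concern is visible here: during the last reallocation, the old buffer and a new one twice its size both exist at once.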

A large part of this seems to be the old-but-very-legitimate static vs. dynamic typing debate. We’re not likely to solve it once and for all here. If Go is your favorite language, you will hate Julia, and I’m comfortable with that.

I see “metaprogramming is confusing so a language is better without it”. Ok fair point, but we can’t exactly just delete all the metaprogramming features. Putting “no metaprogramming” in your style guide is totally fine, but if you want to use a language where it doesn’t exist at all, you obviously have to use a different language. Maybe julia 2.0 should not have macros? I guess it would make my life a lot easier, but it would make a lot of users quite unhappy. It’s not even only about macros — you can do a lot of “clever, tricky” programming just with higher-order functions, so trying to enforce “pedestrian” code seems too costly to me.
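
To illustrate that last point, here is a small made-up example of the kind of indirection you get from higher-order functions alone, no macros involved:

```julia
# Function composition via foldr: entirely macro-free, yet a reader still
# has to trace through closures to see what `pipeline` actually computes.
compose(fs...) = x -> foldr((f, acc) -> f(acc), fs; init=x)

# Applied right-to-left: subtract 10, take the absolute value, stringify.
pipeline = compose(string, abs, x -> x - 10)
```

Here `pipeline(3)` returns `"7"`, which you can only tell by unwinding the fold in your head; banning macros would not ban this style.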

Interfaces are a really common topic of discussion, and I think at this point we’re determined to do something about it in Julia 2.0 (even if it requires breaking changes).

Declarative (rather than imperative) code is generally considered more robust, or at least easier to analyze and verify. In Go, you write imperative code inside functions, but the overall organization (the definition of packages and functions) is declarative. In Julia, this organization itself is imperative.

I agree, and I’m willing to do something about this. It is very common to run code at the top level, and that is arguably useful, but that does not necessarily need to extend to include itself. I remember an issue somebody filed once essentially saying “julia is bad because the include function exists; please delete it”. Personally I don’t understand being so bothered by some functionality merely existing, but I don’t love include all that much either.

Fair warning though, if eval still exists you can write include easily. And that’s the trouble with a lot of these design philosophy issues: it’s hard to pull on a thread without the whole sweater unraveling. E.g. as soon as you want to statically check for one kind of thing, you have to statically check basically everything to guarantee that the first kind of check is possible. Or as soon as you allow some amount of reflection and reuse of the language, you get eval and include and all that.
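
As a concrete illustration of that last point, here is a sketch (with a hypothetical name) of rebuilding include out of eval:

```julia
# If eval and a parser are available, include is essentially a one-liner:
# read the file, parse all of it, and evaluate it in the target module.
my_include(m::Module, path::AbstractString) =
    Base.eval(m, Meta.parseall(read(path, String); filename=path))
```

So removing include while keeping eval would not actually remove the capability.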

For many of the type-checking-related issues, the best path forward I see right now is to use JET.jl (an experimental code analyzer for Julia that requires no additional type annotations) and figure out how to build workflows around it. At this point this is largely a UI/UX problem. If you want type errors, ok, we can give you some type errors (:slight_smile:), but the challenge is how you convey what kinds of errors you’re interested in, and how and when we invoke the tool. I think it’s very doable, but ideas, UI polish, and documentation are needed here.
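
For reference, a minimal session with JET as it exists today (assuming the package is installed; the buggy function is made up):

```julia
using JET  # assumes JET.jl is installed in the environment

losses(xs) = sum(x -> x - "offset", xs)  # bug: subtracting a String from a number

# Reports the problem at analysis time, without running the code;
# the report flags the missing -(::Int64, ::String) method.
@report_call losses([1, 2, 3])
```

The open question in the text above is exactly when and how a tool like this gets invoked in a normal workflow.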

We had lots of reusable internal modules and tens of thousands of lines of Julia code. Eventually we had to give it up and move to Python.

I’m sorry to hear that, and I won’t question what works best for you. I’m curious about the real-time bit, though. I agree Julia is not a real-time language, but how does Python solve that? Is it just a matter of refcounting vs. GC, and the workload being such that interpreter overhead doesn’t matter? Or did you rewrite bits in C?


Thanks for chiming in with your thoughts! Glad to hear that this stuff is being considered.

I’m definitely not a Go fan, but I was wondering how much Julia can offset some of those trade-offs and still work.

Glad to hear about interfaces. What is your thinking on traits (and method ambiguities)?

About the discussion of traits in this thread, one thing I haven’t seen mentioned is that you can’t dispatch on a container of traited objects.

Also, I’d really like it if we kept metaprogramming :slight_smile:


[Revise] Yes, where we could. However, for most of the things we were doing it was less confusing to start the session again, so we were sure what code was being used. Most of our stuff is GPU (CUDAdrv.jl), and we also did a lot of Distributed CPU stuff, so I’m not sure we were able to make Revise work with all of that.


[How does Python solve the realtime problem?] I can see how I might have confused you there, sorry. It doesn’t. We gave up and went back to the standard thinking of “prototype your signal-processing code in a researchy vector language in the {MATLAB, Python, Julia} vein, and then, once it appears to work offline, make a hard real-time C port”. We had hoped that Julia would let us break out of that mould and run our researchy code for demo/eval purposes. It did some of the time, but it took quite a bit of babysitting.


Slow compile times.

I wonder how long ago you stopped using Julia. In the past year, compile times have improved dramatically. Of course, even if compile times are sufficiently short now, that doesn’t solve your other problems.

buffers of audio data are not allocated on the stack

I’m tempted to ask about passing preallocated buffers. But I’m sure you were well aware of the options and tried them.


It’s only worse when compared to static languages like Go, Rust, Swift, Kotlin… If you don’t have much experience with this type of language, it’s hard to explain the warm feeling you get when the code compiles. The stricter the type system, the warmer the feeling :slight_smile:

But Julia probably has lots of room to improve in this regard. To take an example from my previous post: I could not find which methods I had to implement to satisfy the IO interface. Well, @contradict found it for me. But what if the Julia developers add a new method to this interface, let’s say a method that is only used when you show an Array{T} where T<:IO with MIME type text/html? When and how will you realize you need to adapt your code? Most likely when it crashes in the face of the particular user who hits that corner case.

In Go, this list of methods would be written out explicitly in an interface definition. And if they added a method in Go 2.0, my code using T as an IO would refuse to compile until I implemented the new method for T.

Such explicit interfaces could be added to Julia. The compiler could then tell me it’s not properly implemented, as soon as I use it with my type T, even if the missing method is not actually used.
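
In the meantime, something in this spirit can be approximated at runtime in today's Julia with hasmethod. The helper and the required-method list below are illustrative only, not an official specification of any Base interface:

```julia
# Check up front that a type defines a given set of methods, instead of
# discovering a MethodError later in some rarely-exercised code path.
function implements(::Type{T}, required) where {T}
    missing_methods = [f for (f, argtypes) in required if !hasmethod(f, argtypes)]
    isempty(missing_methods) || error("$T is missing methods: $missing_methods")
    return true
end

struct MyStream end
Base.eof(::MyStream) = true

implements(MyStream, [(eof, Tuple{MyStream})])            # passes
# implements(MyStream, [(read, Tuple{MyStream, Int})])    # would throw
```

A compiler-enforced version would of course catch this before the code even runs, which is the point being made above.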

Indeed, but my example was from the standard library, not from an obfuscated code contest… And I’m not saying that code is bad, just pointing out a downside of expressiveness.

I think Julia makes it too easy, and therefore too likely, for some types of projects.

A language could offer every tool and leave it to the users to use them wisely. Modern C++ is a bit like that. But is it good design? There’s something true in the quote “Perfection is achieved when there is nothing left to take away”. For example, Go was designed in part by taking things away from C++, see this essay from one of the designers. I don’t think it would be the best approach for Julia, but in my opinion it makes Go a good choice for some types of projects.

And can you really “leave it to the user” to be wise in practice? My personal experience is that it doesn’t work well, especially for large projects with heterogeneous teams. It’s nicer and more reliable to have discipline enforced by the compiler. And internal code styles don’t help when you want to use open source projects.

Also, the benefits of a coding style increase with the size of the team, so they’re largest when the constraints are part of the language: the style is automatically enforced for the whole ecosystem.


@jeff.bezanson thanks for the thoughtful reply.

Agreed. I think both languages are lovely when used for the right job, but they do have different sweet spots… For me Julia has basically replaced:

  • Matlab, R, Python, Bash (in non-trivial cases) and Awk, i.e. all the scripting languages
  • C, C++ and Fortran when used for numeric performance

That’s already impressive…

Go on the other hand has replaced Java, C, C++ and some Python for server software, middleware and tooling.

I see “metaprogramming is confusing so a language is better without it”. Ok fair point, but we can’t exactly just delete all the metaprogramming features. Putting “no metaprogramming” in your style guide is totally fine, but if you want to use a language where it doesn’t exist at all, you obviously have to use a different language. Maybe julia 2.0 should not have macros?

I meant it as an illustration that “more expressive” also has a cost, so it’s a design trade-off rather than an obvious choice.

Adding metaprogramming to Go would probably be a bad idea given its objectives and application domain (especially once it finally gets generic types).

For Julia I think metaprogramming is a net positive. It’s unusually useful in Julia’s domains, for example to implement DSLs that approach the “paper syntax” of various scientific fields.

So contrary to the other points, I’m not sure there is much to improve here. I’m actually happy that Julia has macros. But it’s still a reason why I prefer Go for some projects.

Interfaces are a really common topic of discussion and I think at this point we’re determined to do something about it in Julia 2.0 (even if it requires breaking changes). […] I agree and I’m willing to do something about [module code organization].

It’s awesome to hear that :smiley:

Regarding top-level code: I suspect __init__ is (or could be made) a sufficient replacement for most use cases?
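
For instance, a small sketch of the pattern I have in mind (module name made up):

```julia
# Runtime side effects move out of top-level statements and into __init__,
# which Julia runs each time the module is loaded, after precompilation.
module MyPkg

const RUNTIME_INFO = Dict{String,Any}()

function __init__()
    # Values like these must not be baked in at precompile time:
    RUNTIME_INFO["pid"] = getpid()
    RUNTIME_INFO["nthreads"] = Threads.nthreads()
end

end # module
```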

Thinking out loud about my “dream” Julia 2.0, it’s basically these two points (interfaces and declarative module structure), plus first-class support for:

  • static analysis
  • full pre-compilation at the application level

(And of course there’s this time-to-first-plot thing, but that’s improving so fast I’m not sure it’ll be much of an issue in a year or two :slight_smile: )


Sure, my experience is with Fortran, and I do not know if one can get more strict than that. It is certainly scary when one leaves that behind and starts assigning values without declaring variables first, but one gets used to it, and the benefits are so great that I no longer remember my Fortran with warm feelings.

But I see that you agree that Julia is a good replacement for those languages.

That was my one and only experience and interest. Thus it is likely that I don’t even see the issues that may be important in another kind of project.


I have a somewhat similar feeling about Python compared to C++ (although the latter provides an amazing amount of control that is sometimes very valuable): Python is my first choice anywhere I can get away with it. But with Julia I’m still at a bit of a crossroads compared to Python. Yes, it offers much higher performance while writing similar high-level code, using a nice type system and well-designed packaging. But the time-to-first-plot latency, and the delayed errors that come with dynamic typing, muddy the water somewhat in terms of development experience (as mentioned earlier, I need to try Revise). Hopefully that will improve even beyond 1.6, perhaps with some help from the LLVM devs.


Would it partly solve your problem if you could add @checked_precompile foo(::T, ...) for methods in your packages? @checked_precompile could, at a minimum, ensure that precompile returns true and that the inferred return type isn’t Union{} (which means it will throw an error when run). I’m not certain how comprehensively this would catch errors but it would be a start. (JET would be more comprehensive, though.) As a bonus you’d have everything precompiled so your startup latency would be lower.
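
A rough sketch of what such a check could do under the hood, written here as a plain function with a hypothetical name:

```julia
# Precompile a signature and flag methods whose inferred return type is
# Union{}, which means any call to them is guaranteed to throw.
function checked_precompile(f, argtypes::Tuple)
    precompile(f, argtypes) || error("precompile failed for $f")
    for rt in Base.return_types(f, argtypes)
        rt === Union{} && error("$f inferred as Union{} for $argtypes: it will throw when run")
    end
    return true
end

add(x, y) = x + y
checked_precompile(add, (Int, Int))          # passes

always_throws(x) = error("boom: $x")
# checked_precompile(always_throws, (Int,))  # would fail the Union{} check
```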

You’d have to add the specific directives yourself (a list of things you want to precompile), although there is SnoopCompile to help. But of course, in statically-compiled languages you have to maintain a cmake or whatever build script, and perhaps those are similar tradeoffs.


There is a way to speed up the debugger a lot: Debugging extremely slow - #4 by tim.holy

It needs people willing to donate their time. A lot of it. We got as far as we are now through some pretty massive time donations, but if it’s going to get better then others who care need to step forward. If you’re thinking, “that’s not going to be me!”, well, now you understand why this fundamental flaw is not being addressed.


What is @checked_precompile?

help?> @checked_precompile
  No documentation found.

  Binding @checked_precompile does not exist.

julia> @checked_precompile
ERROR: LoadError: UndefVarError: @checked_precompile not defined
in expression starting at REPL[8]:1

It’s a hypothetical feature. A brainstorm.


OK, I feel a bit dumb, but it was not obvious, and it seemed quite useful.


I played with something similar here; e.g. this checks that the inferred return type is Int16:

@checked Int16 function add(x::Int8, y::Int16)
    x + y
end

In a more complicated version you can test a predicate on all combinations of subtypes of the arguments (which quickly gets out of hand):

@checked O -> O == promote_type(T, K) function add(x::T, y::K) where {T<:Integer, K<:Integer}
    x + y
end

(This would throw an error because adding two Bools gives an Int64.)

On that topic I find the Linter in VS code quite useful, it often finds methods or variable errors.


I haven’t looked into precompilation at all at this point, so it’s a bit hard to judge if your suggestion would help. What does jump out a bit is that you write “for methods in your packages”. At this point I only have a single package, which is not really large (around 3.5k lines), nor do I see (logical) ways to cut it into separate packages to do per-package precompilation. I suspect what you’re suggesting would not help when developing a single package, as the whole package is recompiled anyway when loading it and the source has changed? It also feels a bit too fine-grained, in having to specify particular methods to precompile, instead of just a complete source file like in C++. But I’m going to give SnoopCompile a go, I’m really curious what comes out for my code.

On this particular point I’m not sure what you mean. For a project consisting of just a bunch of C++ files that link against some libraries, the CMake file(s) would be fairly straightforward: define the input source files, the compile and link flags, the libraries to link against, and the output executable. And there’s no reason to make it any more complex to aid development (by default you already get release/debug/etc. build targets, which can easily be switched). The dependency graph, maintained automatically by the build system based on scanning the C++ sources and the CMake files, makes sure compilation is done incrementally, only for the source files that actually changed. So compilation times are always minimized, and the developer can influence them further through how he/she splits the sources into multiple files.

I guess for C++ and similar statically compiled languages it is just a fundamentally different approach to compilation, where the sources for a single “package” are compiled incrementally and cached at some level of granularity (e.g. one object file per C++ source file). Whereas with Julia, as far as I can see, the smallest unit for caching compiled output is a whole package.

Fair enough. So maybe it is a focus, just a resource-constrained one. Is there a page/post/group dedicated to addressing these sorts of issues?

One of the ugly things with C and C++… But I don’t think that’s generally the case for “modern” statically-typed languages.

For example in Go, each directory is a package and all the .go files are automatically compiled, with conventions for special cases: files ending in _test.go are only compiled when testing, files ending in _<os>.go or _<arch>.go only when building for that particular OS or architecture. For more complex cases, each source file can have build directives at the top: a file beginning with // +build linux,386 darwin is only built for i386 Linux and for macOS. Dependencies are simply listed in a “module” file similar to our Project.toml.

Some projects add a Makefile to deal with Docker images, generated files, etc. but most don’t need it. Real build scripts are mostly needed when using C or C++ libraries :slight_smile:

I think the situation is similar in Rust: all .rs files are included by default, with special treatment for files in bin/, examples/, tests/… and conditional compilation is achieved by setting attributes on functions, variables, etc. directly in the code.

I guess something similar could work for Julia, as hinted at by Jeff when talking about doing away with include!