1.0 adoption path?

First of all: thank you (and to all of the people working on Julia). I hope you don’t take people’s frank statements about the state of the development experience personally, as everyone understands that language stability is required before a debugger. Our worry was simply that you wouldn’t be given enough time to do it prior to a v1.0 release, and that worry has been eliminated.

For what it is worth (and from someone who really likes inspecting code in the debugger), I think any time spent making the debugger and IDE experience better on v0.6 is time wasted. We have lived this long without a debugger; we can make it until 1.0.

I have seen a lot of complaints about the amount of time spent loading code, how Pkg.update() is a deal-breaker, etc. My general feeling is that many of them are missing the point. The code has to be compiled, and the fancier the metaprogramming, the more time it will take.

The question is: (1) when; and (2) what visual feedback do users get when they are doing Pkg.update() and using... Those are psychological issues, or questions of when to precompile, that can be handled in the alpha and beta phase of the release. My general feeling is: you can take as much time as you want with a Pkg.update() or JuliaPro installation as long as you spew out a lot of nonsensical filenames to let people know it is working. If you just precompile as much as possible at that point, so that using is fast afterwards, then people can walk away from their computers for 15 minutes each time. MATLAB takes a good half-hour to install, and Stata takes a huge amount of time each time it updates. But people are visually/mentally prepared for both.

But mostly: thanks again to you and everyone else for doing such hard and generally thankless work, with a bunch of free-riders like me who mostly seem like ingrates. Happy holidays.


Not sure if you’ve tried out Pkg3 yet, but it is significantly faster, so Pkg.update() generally takes no time at all ;).


Alas, I am more of the “wait for the beta” type, and can’t justify time spent tinkering before then. Pkg3 sure looks promising. I flipped through the source and output, and have a few concrete suggestions based on my Pkg2 experience that I can tell haven’t changed. Is that best posted in a new discourse thread, or in an issue in Pkg3 repository?

The Pkg3 repository would be best.

I posted https://github.com/JuliaLang/Pkg3.jl/issues/89 and https://github.com/JuliaLang/Pkg3.jl/issues/90 for free-disposal.


It wouldn’t be catastrophic if there were a 0.8 or even a 0.9. Julia would be better for it, I would even say.


Wise human learns from others.

The experience with the Python 3 adoption path should warn us. It really takes only a few days to port an individual package, but more than 10 years to port the whole ecosystem.

But they underestimated Python’s popularity, which is probably not our problem now…

AFAICT Python 3 was a major rehaul of the whole language. If you have a specific Julia language feature in mind that you imagine could cause similar problems, please explain what you mean.

Otherwise, I see no reason for alarmist extrapolation. Even major changes, like the “arraypocalypse”, did not cause hiccups anywhere near that timeframe.


I don’t see (and the Python core developers didn’t see either) any big feature changes that caused the >10 year delay.

Everything could be rewritten. It is not a language problem but an industry problem:

  1. it costs money
  2. it costs time (see 1)
  3. it costs human resources (see 1 and 2)
  4. it can create new bugs (see 1-3)
  5. it can revive old bugs
  6. you have to wait for new, buggy versions of the libraries you depend on (see 1-5)
  7. packages which depend on your packages have to wait for your new, buggy versions (do you see? :slight_smile: )
  8. some packages have no maintainers (or the people who understand them are too busy with other problems), or have new maintainers who are not experts
  9. edit (I forgot probably the most important): the old solution works for many (= it costs them no money)

One big difference from Julia 1.0 is that Python 3 was slower and didn’t bring any killer feature! :smiley:
The second is that Julia’s ecosystem is significantly smaller. :neutral_face:

But I am not arguing against going to 1.0!!!

I just wanted to push back on one possible oversimplification: that if a single package can be ported in a few days, then the transition is no big deal.

There is a nonlinear effect because dependencies have to be updated first. So the more packages you have under your umbrella, the faster this window grows, faster than linearly. In JuliaDiffEq it doesn’t take days to port; I really wish it did. It can be a month-long process or so to actually get things right. You’ll notice that after each update it’s “quite easy” to find new bugs for about two months, and this is because updating takes a lot of time and effort. It’s nowhere near Python’s update problems, but it’s a very real amount of man-hours.
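The ordering effect can be sketched with a toy model (hypothetical package names, not real registry data): a package can only be ported one “wave” after all of its dependencies, so update latency grows with the depth of the dependency chain, not just the number of packages.

```python
# Toy model of ecosystem upgrades: a package can only be ported one
# "wave" after all of its dependencies have been ported, so the update
# latency of a package grows with the depth of its dependency chain.
deps = {
    "PkgA": [],
    "PkgB": ["PkgA"],
    "PkgC": ["PkgA", "PkgB"],
    "PkgD": ["PkgC"],
}

def update_wave(pkg, memo={}):
    """Earliest wave in which pkg can be ported to the new version."""
    if pkg not in memo:
        memo[pkg] = 1 + max((update_wave(d) for d in deps[pkg]), default=0)
    return memo[pkg]

waves = {p: update_wave(p) for p in deps}
print(waves)  # {'PkgA': 1, 'PkgB': 2, 'PkgC': 3, 'PkgD': 4}
```

PkgD is only one package, but it cannot even start until three waves of upstream porting (and bug-shaking) have finished.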

Looking at the current state of Julia, I’m not sure how much it matters anymore, but it seems like most of the “must have” next features (debuggers, compiler optimizations, precompilation overhaul, etc.) are not language-breaking. That’s why 1.0 is a great idea: it’s a signal that we can stop updating code to new Julia versions every few months and start getting these cool new features, which don’t require updates!


I somewhat agree; however, looking back at the journey from 0.3 to 0.7, which I monitored, language features and unfortunately syntax changes were (IMHO) triggered by development on these usability/ecosystem topics (like the debugger, etc.), so let’s see.

The big issue was upgrade deadlock. Instead of making it possible for packages to support both Python 2 and 3 at the same time, the Python devs provided the 2to3 tool, hoping that the entire ecosystem would upgrade from Python 2 to 3 all at once. But upgrade effort turned out not to be what stymied the transition. Packages still needed to support Python 2 because most of their users were still using it. And since most packages stayed on Python 2, users did as well – if anything they depended on was stuck on version 2, they basically had to. This created a deadlock where most packages and most users were stuck on version 2 indefinitely.

This deadlock wasn’t broken until relatively recent Python versions made it possible to support both 2 and 3 at the same time. Packages gradually started to do this, which eventually allowed users to upgrade once all the packages they depend on supported Python 3 and they did the maintenance work required to port their applications to the new version. Once most users upgraded – which is only now starting to happen – packages have finally been able to start to drop Python 2 support.
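For concreteness, the straddling code that eventually broke the deadlock looked something like this (a minimal sketch; many real packages used the six library for the same purpose):

```python
# Straddling idiom: one source file that runs unchanged on both
# Python 2 and Python 3.
from __future__ import print_function  # print() is a function on Python 2 too

try:
    from urllib.parse import urlparse  # Python 3 location of the module
except ImportError:
    from urlparse import urlparse      # Python 2 location of the module

print(urlparse("https://julialang.org").netloc)  # julialang.org
```

Once a package could be written this way, it no longer had to choose between its Python 2 users and its Python 3 users, which is exactly the property the 2to3 one-shot-conversion approach lacked.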

Worsening the upgrade deadlock situation for Python was the fact that so much of the ecosystem is written in C. The C API also broke compatibility but the 2to3 tool didn’t help with that. This situation seems to have been particularly painful in the SciPy ecosystem, which took unusually long to switch over. Somewhat ironically, it was probably easier to make C code support both versions of Python by using a messy but effective nest of #ifdef conditional compilation directives.

All of this is just my interpretation of the Python transition process as an outsider. I haven’t developed any Python packages and I wasn’t involved, so take it with a grain of salt and please forgive me for any inaccuracies.

The fundamental problem stated in generic terms is that packages can’t drop old version support until their users have upgraded to a new version, but users can’t upgrade until the packages they depend on support the new version. Since this process takes some time – and the larger the ecosystem, the longer it takes – this is a deadlock unless packages can support both the old and new versions during the transition. The main lesson for Julia (and other languages) is:

Always make it possible for packages to support both the previous and next versions of a language for long enough to let the ecosystem transition.

That’s what the Compat package allows and it works quite well. Having real macros helps a great deal. The only “hard breaks” we’ve had have been when we’ve introduced new syntax that was previously not parseable, which has only happened (IIRC) with the new type declaration syntax (type => mutable struct, immutable => struct, etc.) introduced in 0.6. To allow for that transition, we had to allow both old and new syntaxes in 0.6 and then deprecate the old syntax in 0.7, forcing packages to choose between supporting 0.5 and 0.7. (The old syntax will be an error in 1.0.) However, by the time 0.7/1.0 comes out, all actively developed packages (by definition almost) and the vast majority of users will already have upgraded to 0.6 so dropping support for 0.5 won’t be an issue. The main concern for the 1.0 transition is to support 0.6 and 0.7/1.0 at the same time for long enough for the ecosystem to catch up. I suspect the transition will be shorter than usual because of the promise that we’re done for the time being. [Also: FemtoCleaner]


I have barely started to look into Julia: I have liked what I see in that Julia already has packages that are stronger than what I find in Python and MATLAB. Still, I have delayed really digging into the language since it has been in transition.

To me as a newcomer, it is an advantage if v. 0.7/1.0 is frozen and released as soon as possible – I can have patience with a reasonable period of bug fixing as long as I can start to play with it knowing that the key syntax is fixed. A quick release of v. 0.7/1.0 will also help authors of Intro to Julia books to finish their books, and help explain how to use the language (several publishers seem to have put publications on hold until v. 0.7/1.0 is out). Good intro books are key to the success of a language!

A couple of questions/observations:

  • I’ve been using Python 2.7 with Jupyter for a couple of years, and have played around with IJulia – a notebook tool is useful when learning a language (but doesn’t really replace an editor/debugger).

  • Jupyter Lab is “imminent” (latest release date of beta version is January 5, 2018?) – so I’m curious if there are any plans to update IJulia to IJulia Lab…? Perhaps editor/debugger could be a plug-in in Jupyter Lab – it seems like the Jupyter Lab people think along such lines.

  • In Python, there is a function which responds with all reserved keywords in Python – useful when learning a new language. Is there a similar function in Julia?

  • As a long term MATLAB user, I appreciate that Julia has copied things from MATLAB. But I appreciate the possibility to also improve on MATLAB, as indicated by the discussion on whether function expm() can be renamed to exp(), since MATLAB’s function exp() effectively is exp.(x) in Julia… In a similar vein:
    – I assume that the Control package of Julia has copied MATLAB’s LTI system. But why not improve on it? MATLAB’s LTI model essentially assumes finite dimensional state space model with allowance for time delays. But time delays are really just representations of scalar advection PDEs. Why not generalize the model to include linear DAEs and PDEs?
    – Is it possible to combine the DifferentialEquation package with AD to automatically extract such a generalized LTI model (DAE, PDE, PDAE, etc.) at a specified operating solution, and combine this with the Control package? How should that be done?

Anyway, I appreciate the hard work you guys do towards releasing v. 0.7, and look forward to start using Julia more actively.


As I understand it, IJulia already works with JupyterLab, and has for some time. JupyterLab uses the same ZMQ messaging protocol as the Jupyter notebook to “talk” to language back-end kernels, so it should work as-is with existing kernels like IJulia. Not many people have used it yet, though, so please feel free to bang on it and file issues as they arise.

Not that I’m aware of. Julia’s parser is written in Lisp, though, and you can get it to print the reserved-words list by running julia --lisp and then typing reserved-words. Currently this gives

(begin while if for try return break continue function macro quote let local
 global const do struct abstract typealias bitstype type immutable module
 baremodule using import export importall end else catch finally true false)

(I’d like to get access to the Lisp interpreter via Julia, but it has been on a back-burner for a while: https://github.com/JuliaLang/julia/issues/18029)
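(For comparison, the Python facility presumably being referred to above is the standard library’s keyword module:)

```python
# Python exposes its reserved words through the standard library.
import keyword

print(keyword.kwlist)            # the full list of reserved words
print(keyword.iskeyword("for"))  # True
```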

Yes, there’s a difference for notebooks and IDEs. For an IDE, I use Juno.


There’s also an extension for VS Code which is popular.

I know there’s this package which is currently undocumented:

Usually you avoid solving PDEs if you can. We have delay equation solvers specifically for this case:


And because the interpolation allows for grabbing things like the derivative, you can make your current derivative depend on the past derivatives, etc. MATLAB probably built theirs on top of ddesd for this reason, but of course this doesn’t cover all problems so this should be one of many methods available.

You can AD through f functions (we already do this for our implicit solvers) or you can also AD through the entire ODE solver. So yes, you can. What are you looking to do? If you want to discuss this more, you may want to go to the JuliaDiffEq channel since this is getting a little off topic from the 1.0 thread.


Thanks for quick response.

The point with a generalized LTI is twofold, I guess.
1.a. LTI models are typically used for simulating step and impulse responses in MATLAB’s control toolbox. I assume similar possibilities are relevant for a Julia control package.
1.b. LTI models are also used for operating on the Laplace transform of a system, typically plotting Bode diagrams (amplitude/magnitude in dB vs. frequency, and phase angle vs. frequency), Nyquist diagrams, and Nichols diagrams. For this use, differential equations solvers are not normally used.

  2. Most distributed parameter systems are more complicated than advection/time delay. As an example, various configurations of heat exchangers lead to 1 PDE or 2 coupled PDEs. When linearized, these do not give time delays. Furthermore, it is normally easy to find analytic transfer functions which look much more complex than time delays (they may include hyperbolic functions of s, etc.); alternatively, these transfer functions can be computed by solving boundary value problems for each frequency (the Laplace transform variable s is set to j*omega, where j is sqrt(-1) and omega is the frequency in rad/time unit).

Anyway, most distributed models are not advection models, and thus lead to transfer functions which may differ widely from time delays.
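The contrast can be seen numerically (a toy sketch, not from any control package): evaluating along s = j*omega, a pure time delay e^(-s*tau) has unit magnitude at every frequency, while a diffusion-like hyperbolic transfer function rolls off.

```python
# Compare the frequency-response magnitude of a pure time delay with a
# diffusion-like hyperbolic transfer function, evaluated at s = j*omega.
import cmath

def delay(s):
    """Pure time delay exp(-s*tau) with tau = 1."""
    return cmath.exp(-s * 1.0)

def diffusive(s):
    """Toy diffusion-like transfer function 1/cosh(sqrt(s))."""
    return 1 / cmath.cosh(cmath.sqrt(s))

def mag(G, omega):
    """Magnitude |G(j*omega)| of transfer function G at frequency omega."""
    return abs(G(1j * omega))

for w in (0.01, 1.0, 100.0):
    # The delay stays at magnitude 1 (it only shifts phase);
    # the hyperbolic transfer function decays as frequency grows.
    print(w, mag(delay, w), mag(diffusive, w))
```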

OK – I guess this discussion fits better in a more specialized forum :-). I’m interested in getting involved somewhat in a discussion of a control package. I have looked briefly at the one developed at KTH/Stockholm; that looks like an excellent start. I’ll try to cook up a simple example of what I have in mind and send separately (both on control package and DiffEq linearization).

Thanks for useful suggestions!

At this point, there seems to be a variety of LTI libraries directly and as components of other packages. See Introducing a new Control Toolbox for Julia with Interactive root locus for some discussions. My guess is that post v0.7 freeze it might make sense to collect everyone who uses those sorts of control libraries, and see if a standardized interface is possible.


Much like small PRs are easier to review, small, frequent breaking changes are easier to adapt to. The changelog for 0.7 makes me shudder.

MATLAB’s annual release cycle is by far superior to Julia and Python’s desire to remain completely backwards compatible for years at a time.

It would be great if we just stopped the triage now, pushed out an alpha 0.7 ASAP (like tomorrow), and in 9 months did a 0.8 if there are still changes to be made.

Weren’t you arguing for waiting years to release 1.0 recently? Perhaps I’m misremembering.