[Article] Dear Google Cloud: Your Deprecation Policy is Killing You

Who said you are terrible at it? You guys are pretty amazing.

Straw man again?

Really?

I said that one sentence poorly for sure, but if you look at everything else?

Honestly, this whole thread reads to me like concern trolling. I get that's likely not your intention, but that's how it reads to me.

3 Likes

Dude. I read the article. Did you read it?

I just think it would be pretty awesome if Julia could ensure v1.0 code works with v2.0 (but maybe with some deprecations for what would otherwise be breaking changes). I know that everyone cares a lot about this and 99.9% of v1.0 code will work with v2.0. We are still far away from v2.0 and that is all I really wanted to say. I never meant to imply anyone here is bad.

You are asking for a guarantee that 1.0 code will run on a new release named "Julia 2.0". If we guaranteed that, we wouldn't call that new release version 2.0. We'd call it version 1.x. (There's sometimes a misconception that 2.0 is what follows version 1.9, but that's not the case. We will happily progress to version 1.10 and 1.11 and on and on if there's still no burning need for breaking changes.)

12 Likes

That's what all the concern-mongering in this thread reads like :man_shrugging:

Regarding multiple dispatch and deprecations: yes, for simple renames and changes to argument types, you can do deprecations pretty easily. But those are the least interesting kinds of changes for 2.0. There's an issue on GitHub with a bunch of name changes, but you'll note that core devs aren't exactly flooding it with changes. Perhaps there are better names for some things, but that kind of name game doesn't really interest me. We could certainly pick a few better names and put compatibility shims in a backwards-compat package or something like that, but meh.
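For concreteness, a minimal sketch of what such a rename deprecation looks like (the names `oldname` and `newname` are made up for illustration):

```julia
# Hypothetical rename: `oldname` becomes `newname` in a new release.
# A deprecation shim keeps old call sites working while warning users.
newname(x) = 2x  # the function under its new name

function oldname(x)
    Base.depwarn("`oldname` is deprecated, use `newname` instead.", :oldname)
    return newname(x)
end

oldname(3)  # still works; emits a warning when run with --depwarn=yes
```

Cheap to write, but as the next paragraph notes, every such shim is an extra name and method that the parser and compiler have to carry around.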

Keeping deprecations isn't without cost. Even for the simple ones, all those extra names and methods take up compilation time and increase the size of method tables. There was a marked performance improvement when we ditched all the deprecation baggage going from 0.7 to 1.0. There are also really complex syntax and semantics deprecations that are much harder because they complicate the parser and/or the compiler. By 0.7 those had gotten really bad and made the parser and compiler much harder to work on than necessary. Deleting those was pure joy and allowed improvements that would have been really hard to do with the deprecations gumming up the works.

The truly interesting kinds of changes are precisely the ones that are really hard to deprecate. For example, if we want to introduce more immutable data structures and let the compiler take advantage of that, then APIs that currently return mutable data would change to return immutable data. That's a breaking change: if someone was calling sin.(v) and then mutating the result, their code will break. But that's fairly rare: usually people mutate arrays only while constructing them, and later, when doing mathy stuff, work with them as values, i.e. apply functions to transform one immutable value into another. You can provide a shim that acts like 1.0 and "unfreezes" the returned arrays before returning them, but if everyone keeps using that, what's the point? It seems much better to break this and tell people to fix their code by adding unfreeze calls where necessary. We can even do it automatically. But if all the code in the ecosystem keeps doing the 1.0 behavior, then there's no benefit from the change.
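A rough sketch of the scenario being described; `FrozenVector` and `unfreeze` are invented names for illustration, not real Julia APIs:

```julia
# Stand-in for a hypothetical immutable array return type in "2.0".
struct FrozenVector{T} <: AbstractVector{T}
    data::Vector{T}
end
Base.size(v::FrozenVector) = size(v.data)
Base.getindex(v::FrozenVector, i::Int) = v.data[i]
# Deliberately no setindex!: mutating a FrozenVector is an error.

# The 1.0-compatibility shim: hand back a mutable copy.
unfreeze(v::FrozenVector) = copy(v.data)

v = FrozenVector([1.0, 2.0, 3.0])
w = unfreeze(v)  # old code inserts this call before mutating
w[1] = 0.0       # fine: w is an ordinary Vector
```

The point above is that if every caller mechanically wraps results in `unfreeze`, the compiler gains nothing from the immutability.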

Another example is that there has been some work done at Northeastern on the tractability of Julia's type system. It seems like contravariant type bounds make subtyping undecidable. But it also seems like we can limit the kinds of lower bounds that are allowed and eliminate the undecidability. In practice we suspect that hardly anyone ever uses type bounds in this problematic way. But changing this is technically breaking. It may turn out that relying on this is so rare that we can reasonably change it in a 1.x release, but what if that's not the case? Then this would be a prime candidate for a 2.0 change: break some obscure usages of parametric types but in exchange guarantee that Julia's subtyping algorithm will always terminate.
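For readers who have never written one, here is what a lower type bound looks like; this rarely used construct is the kind whose unrestricted use is suspected of making subtyping undecidable (`f` is a made-up example function):

```julia
# `T >: Int` constrains the type variable T from below:
# only element types that are supertypes of Int match.
f(x::Vector{T}) where {T >: Int} = length(x)

f(Any[1, 2, 3])      # works: Any is a supertype of Int
f(Int[1, 2, 3])      # works: Int >: Int
# f(Float64[1.0])    # MethodError: Float64 is not a supertype of Int
```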

Another breaking change that we would like to pursue is enabling threading and parallelism by default. This is breaking because there's a lot of code out there that is very thread-unsafe and will break if Julia changes to doing everything in parallel by default. It's possible that we can coax the ecosystem into being sufficiently threadsafe to make this change in a 1.x version, but I'm a bit skeptical of that possibility. We may have to change this in 2.0 and just tell everyone "If you want to use Julia 2.0, you need to make your stuff threadsafe." Just pull the band-aid off quickly and then move forward into a brave new, parallel world.
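To make "thread-unsafe" concrete, a minimal sketch of the kind of code that breaks under parallelism, next to a safe counterpart:

```julia
using Base.Threads

# Unsafe: unsynchronized read-modify-write on shared state.
function racy_count(n)
    count = 0
    @threads for i in 1:n
        count += 1  # data race when more than one thread runs this
    end
    return count
end

# Safe: the same loop with an atomic counter.
function atomic_count(n)
    count = Atomic{Int}(0)
    @threads for i in 1:n
        atomic_add!(count, 1)
    end
    return count[]
end

atomic_count(10_000)  # always 10_000; racy_count may come up short
```

Code like `racy_count` runs correctly today with a single thread, which is exactly why flipping parallelism on by default would break it.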

I could go on, but hopefully you get the picture. Julia 2.0 is emphatically not about tedious renames.

44 Likes

Thanks for this Stefan.

4 Likes

Yes, I forgot about higher generalizations of numbers, which is just one more reason it would be a breaking change. (It would affect very few people, except maybe people like you; to me octonions are just a very interesting theoretical curiosity I've never seen actually used, and quaternions are also rare but would survive this "breaking" change in the cases I know of.) I was just trying to show what's under discussion, not taking a side; I haven't looked much into that issue.

2 Likes

I would say the comparison to Google is wrong from the beginning: it is clear (at least to me) that every Google service costs money, and if after some time not enough users are using a service, it is closed. That's the business model (in short) of Google. No business (advertising) => no service.

That can't be compared to Julia the language. There is no business model on the language itself. It is not money, or keeping up employees, or similar money-related arguments that result in breaking changes. It is only technical arguments that would result in breaking changes. The question is: does it make Julia (much) better? If yes and breaking changes are needed, do it for the sake of a prosperous future.

Comparing that with something based on money-making leads to wrong conclusions.

(PS: I am not judging good (technical) or bad (money), just pointing at a significant difference)

4 Likes

But then Google needs to decide what it wants to be:
an advertising company or a reliable service provider.

1 Like

FWIW, I read the article. It is a rant about Google discontinuing some service, but not very clearly written. 90% of it is filler material like

Google is like Lady Ascot in Tim Burtonā€™s Alice in Wonderland:

Lady Ascot: Alice, do you know what I fear most?

Alice Kingsley: The decline of the aristocracy?

Lady Ascot: Ugly grandchildren.

It is still unclear to me how this business decision is "killing" Google.

If you post links to writings like this, please don't be surprised if readers are uncertain about what you are actually saying, or don't even waste their time reading what you linked.

In retrospect, it might have been better to just phrase your concerns in your own words.

4 Likes

I've seen a message of this kind recently on Twitter and it's disturbing me.
This thread arises from two different cultural visions: one embraces change, the other values stability (and the in-between is a burden of issues). I respect all visions, but I'm clearly radical in favor of change.

Let's fight stereotypes: breaking changes are good; they're a symbol of innovation and one consequence of progress.

I've been using Julia since version 0.3, and deeply appreciate all the changes made since then. I believe changes are part of the informal contract of this community, one of its roots (or they should be), because the core of Julia is to be a "fresh approach to technical computing", and you can't keep it fresh without breaking its rules.
Since the beginning the core developers have made smart moves (e.g. immutability by default), and whatever the future decisions, they have my full trust and support. I'd like to encourage the community to support them in continuing to experiment, and not be blocked by the fear of breaking changes and the burden of maintaining backward compatibility.

Keeping compatibility is also a way to kill a project. I realized it with Python 2 to 3: two years after the release of Python 3, I had to port to Python 3 myself an active project my lib depended on, because that project's team was reluctant to change. It took me one hour of work without knowing the code beforehand, so it was clearly bad faith and a wince at change. The debates around Python 3 motivated me to leave Python more than, for example, the lack of FP tools did, because they put Python in the category of dead languages. I don't want to see this happen to Julia.

Everything is constantly changing; if Julia is not, the world won't stop changing anyway. So stick to the needs and forgo compatibility, please.

3 Likes

Most things don't need to break in order to improve. Things that cause massive breakage are things like changing print x to print(x) (I'm looking at you, Python 2->3). Those changes are completely unnecessary, and what @StefanKarpinski was saying was that we took a really hard look at all of that syntax and now the ship has sailed. Nobody is bound by "not breaking": there are some things people are looking into that might require breakage, and Julia 2.0 will be all about those features that require breakage. However, breakage for no reason is a big no-no, and most things don't require breakage, like:

  1. Improving compile times
  2. New parallel primitives
  3. More compiler optimizations
  4. New libraries, kernels, AD, etc.
  5. Exposing the compiler for new AD systems
  6. etc.

So the focus has been on v1.x releases mostly because… most things don't need breaking changes in order to work.

8 Likes

Honestly, I can understand why Python devs wanted to change print from an operator with special syntax to just a function. But they blew the transition. When Python 3.0 came out it was not possible to write the same Python code and have it work on both Python 3.0 and 2.7, which were the current releases at the time. That forced library developers to choose between supporting Python 2 where all the users were and Python 3 where there were none. The choice is obvious: you keep supporting 2. The users in turn had a choice: use Python 2 where all the libraries are or Python 3 where there are none. The choice is also obvious. So everyone stayed on Python 2 indefinitely.

Forcing this choice on library developers isn't necessary, fortunately: give them a way to have one code base that works for both the old version and the new version, even if it's kind of ugly. In the case of the print statement versus function, that just means allowing both in the same code base, even a mix of the two in one application. That's fairly trivial to do since they have totally different syntax. There does seem to have been a way to do it with a future import, but that presumably didn't work on 3 (maybe I'm wrong, in which case print is a bad example, but there were others).

If it's possible to support both the old and new versions, library developers will support both. Moreover, if you provide tools to help automate the transition then it can be really quick! This part the Python devs got right with the 2to3 tool. This work is also parallelizable, since there are lots of libraries but also lots of developers to do the work. So an ecosystem can upgrade surprisingly quickly with the right tooling and motivation. Now consider the users in this scenario: over time it becomes more and more reasonable for them to choose the newer version as it supports more libraries. If it also has nice new features and/or is faster, then it's pretty enticing.

In line with this theory, the Python ecosystem didn't start to rapidly transition from 2 to 3 until they made new releases of both 2.x and 3.x that allowed library authors to support both at the same time. For some reason it took them almost a decade to do that, and I'm not entirely sure why.

8 Likes

Exactly: stick to the needs. You should feel safe to make breaking changes if the world (knowledge, trends…) has breaking changes.
For Python, I think there was a polarization of the debates which reinforced positions; people tried to force the old syntax back by refusing to move.
That's why I'm radical, and insist on the fact that innovation is in Julia's core.

An unlikely example: tomorrow a major new mathematical theory is based on 0-indexed arrays, all of maths moves to 0-indexed arrays, and it becomes the standard.
Should Julia change to 0-indexed arrays?
There's no good answer, but I hope you would, even if it can be a pain to check all arrays for that change.
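(As an aside, Julia's AbstractArray interface already allows 0-based indexing without any breaking change; OffsetArrays.jl packages this up. A minimal hand-rolled sketch, with a toy `ZeroVector` type invented for illustration:)

```julia
# A toy 0-indexed vector built on the AbstractArray interface.
struct ZeroVector{T} <: AbstractVector{T}
    data::Vector{T}
end
Base.size(v::ZeroVector) = size(v.data)
Base.axes(v::ZeroVector) = (0:length(v.data) - 1,)
Base.getindex(v::ZeroVector, i::Int) = v.data[i + 1]

v = ZeroVector([10, 20, 30])
v[0]  # the first element lives at index 0
```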

1 Like

I think @apieum is trying to say that you should not shy away from making some kind of breaking change in v2. Of course, we agree with everything you say (don't make unnecessary breaking changes and try to be conservative if possible). But just don't let that mindset limit you too much from considering new ideas.

2 Likes

Just FYI, Python from 2.6 could use print as a function (see: doc)

Example:

from __future__ import print_function
print(1, 2)  # without the first line, Python 2 prints a tuple instead of two numbers here

Which is to say something like:

Be careful. The Python 2->3 transition could be a lesson from the past. But it probably needs more than a sketchy view.