You’ve entirely summarized my feelings toward the language over the last year. Thanks for this thorough explanation about tooling and unreliability. A recent example is the lag in REPL TAB completion (v1.12.4). Although it seems to be fixed in the next release, it’s hard to persevere as the language grows when the user experience is randomly and periodically degraded like this.
I love the mix of runtime performance, interactivity, and built-in linear algebra. But after too much struggle, we reach a point where we wonder: is it worth it? Should I go back to the good ol’ reliable-but-bulky-and-expensive MATLAB?
Are there many people who rely on REPL TAB completion? I’ve never used it. If most people use VS Code, the completion experience there is kind of OK. And the newly released JetBrains plugin is even more polished on the completion front. I’d encourage everyone to try it out and interact with the developer to make it even better.
I don’t want to diverge too much from the main thread, but yes, it’s a new feature, and once you get used to it, you get spoiled. I rely on it heavily to explore the fields of a specific struct, or the sub-modules of a module. In a perfect world we users should not rely too much on the fields, but hey, we live in an imperfect world (as I’ve heard frequently in the pythonic world, we are all consenting adults; private attributes are for the feeble XP). Also, if the tooling, e.g. IntelliSense completion in VS Code, were working as intended (I assume issues with the LSP), REPL completion would not be as crucial as it is right now.
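For anyone who hasn’t tried it: typing `p.` then TAB in the REPL lists the fields, and there are programmatic fallbacks for when completion lags (`Point` is a made-up type here):

```julia
struct Point   # made-up example type
    x::Float64
    y::Float64
end

p = Point(1.0, 2.0)
# p.<TAB> in the REPL completes to the fields; these calls work even
# when completion itself is lagging:
fieldnames(Point)   # (:x, :y)
propertynames(p)    # (:x, :y), and respects custom getproperty overloads
names(Base)         # exported names of a module, handy for sub-module spelunking
```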
All this rambling to say that I need to test the new JetBrains plugin. It seems awesome, but I’d need to learn a new IDE once more, which may explain why I’m procrastinating on this.
It eventually got an official history. The global/local difference in variable scoping prevents unintentional reassignment of a global variable we forgot about in an earlier evaluated file or in the session history; we don’t need this protection for local variables because they’re visibly contained in one expression, unless we’re abusing macros to paste variable assignments. Before v1, this applied to some scopes but not others (hard/soft), and v1.0 applied it to all scopes for consistency. But people complained about not being able to write or paste code between local and global scopes with equivalent behavior, and pushed to bring back the pre-v1 behavior, so v1.5 restored the pre-v1 rules for the REPL and notebooks as a compromise.
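For readers who haven’t hit this, the compromise behaves like so (as of v1.5+):

```julia
s = 0
for i in 1:3
    s += i   # soft scope: in the v1.5+ REPL (and notebooks) this updates
             # the global s, matching the pre-v1 behavior
end
s  # 6 in the REPL; in a non-interactive file the same loop instead warns
   # that the assignment is ambiguous, treats s as a new local, and throws
   # an UndefVarError, keeping the v1.0-style protection for globals
```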
The motivation for a unique explicit declaration is consistency: an assignment in a local scope that doesn’t declare the variable as new or global must find the variable in the nearest outer scope, whether local or global. Two declarations in a scope is a potentially static error, but it needs to be a runtime error for evals and conditional declarations. Obviously I didn’t think this through, and there are reasonable debates over this: accidentally omitting an explicit declaration would silently reassign an outer, possibly global, variable, and the declaration needs to be easy to write to avoid bloat (imagine writing code with only explicit local statements now). I don’t remember who suggested it, but one idea I liked was a reduced almost-inverse of BCPL: x=… would be strictly reassignment, and x:= would be shorthand for a typed declaration x::Any=. But that’s very breaking; in v1, type declarations on the left or right sides of assignments don’t declare new variables.
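If I’m remembering the suggestion right, the semantics would look something like this (the := syntax is hypothetical and doesn’t parse in any Julia release; the function shows the current v1 rules for contrast):

```julia
# Current v1 rules: plain assignment declares a new local.
function f()
    x = 1          # declares a new local x
    global g = 2   # globals are the one case needing an explicit keyword
    return x
end

# The suggested syntax (hypothetical, not valid Julia):
#   x := 1   # declare a new variable, shorthand for x::Any = 1
#   x = 2    # strict reassignment: an error unless x was already declared
#            # in this scope or some enclosing (local or global) scope
```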
It’s good for critical tools to live outside the core. Modularity means people can choose what they use, and parts can be swapped out. However, they can still be developed in tandem and promoted together for a better user experience.
I share your sentiment. I’m fairly certain my packages from 2-3 years ago will simply not run on the latest release.
This feeling definitely sucks, not just because fixing them takes more than a few minutes (so I probably won’t do it), but because it feels like all my past effort has been wasted. My code was working at some point, then BAM, somebody changed something, and now it’s broken for good.
I have seen this too. The allocs were only present when running from `] test`, and even weirder, the number of allocs would change every time I re-ran `] test`! This suddenly started appearing in 1.12.3. How do you even debug something like that?
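Not a full answer, but one way to start poking at it: reproduce the test environment interactively so you can compare against `] test` directly. TestEnv.jl is a community package; `suspect_call` and `input` below are placeholders.

```julia
# Run from the package root; this loads the package's test dependencies
# without going through `] test`, so the two setups can be diffed.
using TestEnv
TestEnv.activate()
include("test/runtests.jl")   # same tests, but in an inspectable session

using BenchmarkTools
@btime suspect_call($input)             # placeholder: compare allocs here
@show @allocations suspect_call(input)  # Julia ≥ 1.9: allocs for one call
```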
This statement was somewhat frustrating to read. I’m not sure why it should matter whether a tool is explicitly aimed at developers or not. For example, ForwardDiff.jl isn’t really a tool for developers of the language either and that’s perfectly fine. I could even argue that the REPL itself isn’t a developer-facing tool in that sense (Julia isn’t developed in the REPL, nor are most large packages), yet it still receives a lot of care and attention.
One of the reasons Python has become so widely adopted is that it successfully attracted beginners and people who were simply curious about programming. Today it’s a de facto educational tool in universities around the world. Its low barrier to entry makes it very approachable before users dive into more technical details.
In that light, Pluto plays a very positive role. It lets people try Julia with minimal friction, and that alone creates a meaningful stake for Julia developers. From a Julia ecosystem perspective, that seems like a clear win: making it easier for more people to try Julia ultimately benefits the language and its developers as well.
I feel you’ve missed my point. I was not making a normative statement about how I wish Julia were developed; I was talking about how it is developed in practice (at least as far as I can tell; note I am not a core language developer by any means).
I know you’ve made it very clear in other threads how you wish the language development were handled. The reality is that development of the language is highly decentralized, and there is no Czar barking orders and coordinating things according to some master plan.
The language’s development happens much more along the lines of individual people taking on projects that they themselves decide are important to work on. These people are often not deciding to put extra effort or attention towards tools they don’t use or care about, even if they think the tool is somehow “important”.
There really does not seem to be much thinking along the lines of
“if we develop ___, ___, and ___, it would have ____ effect on the community and help grow the language user-base”.
At least as far as I can tell, the thinking appears much shorter term, less strategic, and less coordinated than that.
Clearly something has to be done here. The question, given the apparent lack of coordination, is how do we reach some sort of agreement on what should happen, and then how do we make people actually do the thing there’s some apparent consensus about?
Sidenote, but this is a rather amusing example you chose:
because it highlights exactly the point I was trying to make (but probably I wasn’t clear enough).
Autodiff is super important to many, many people in the Julia community, and yet almost all of our autodiff tools except ForwardDiff.jl keep breaking. Zygote.jl was the latest fatality.
So why is ForwardDiff.jl working fine while Zygote.jl isn’t? Because ForwardDiff.jl is built on public, stable APIs, whereas all the “exciting” AD systems are built on the ever-shifting sands of internal compiler APIs.
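To make that concrete: forward mode only needs operator overloading on the public numeric interface. A toy sketch of the underlying dual-number trick (this is the general idea, not ForwardDiff.jl’s actual implementation):

```julia
# Toy forward-mode AD via dual numbers: everything here uses only
# public, documented Julia features (structs, method overloading).
struct Dual <: Number
    val::Float64   # primal value
    der::Float64   # derivative with respect to the input
end

Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
Base.:*(a::Number, b::Dual) = Dual(a * b.val, a * b.der)

f(x) = 3 * x * x + x
derivative(f, x) = f(Dual(x, 1.0)).der

derivative(f, 2.0)   # 13.0, matching f'(x) = 6x + 1
```

Reverse-mode systems like Zygote.jl instead rewrite IR through compiler hooks, which is exactly the surface that keeps shifting underneath them.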
ForwardDiff.jl’s stability has nothing to do with the language devs going out of their way to accommodate it, and a lot of people who rely on AD tools share frustrations very similar to the ones expressed by Fons above. Just a few weeks ago there was a kvetch-fest on the #autodiff Slack channel where people were making similar points and wondering if there’s some way to convince the devs to stabilize the compiler internals, or at least communicate better when they do decide to change how the internal compiler APIs work.
I guess some folks are more used to a file-based development workflow, where whatever runs is automatically saved to files. That is probably more common among library developers. In a REPL-based workflow, there is a separate step: going back through the REPL history to save the working code. Sometimes missing one or two things in the copy-paste will keep the code from running in a new session.
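Worth noting that the REPL already keeps a transcript by default, which makes that salvage step less risky (the path below is the default; setups can differ):

```julia
# The REPL logs everything you type to a history file; mining it is a
# low-tech way to recover working code from a session after the fact.
histfile = joinpath(homedir(), ".julia", "logs", "repl_history.jl")
print(read(histfile, String))   # or just open it in an editor
```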
Can someone explain to me how semver works? I thought the entire point of 1.0 was that there are no breaking changes, meaning that if your code worked on 1.9 it should also work exactly the same on 1.12. The changes between 1.9 and 1.12 may contain new features and underlying improvements to the compiler and so on, but they shouldn’t break a package.
That’s only true of public, documented APIs. There is no guarantee that undocumented behaviors or undocumented internal functions and data structures will be stable.
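Right, and since 1.11 there’s even language-level support for marking that boundary. A small sketch (MyLib is a made-up module):

```julia
# Julia ≥ 1.11: names can be exported or marked `public`; everything
# else is internal and carries no stability guarantee across releases.
module MyLib                 # made-up example module
    export stable_api        # exported: part of the public API
    public also_stable       # public but not exported (1.11+ keyword)
    stable_api() = internal_detail() + 1
    also_stable() = 2
    internal_detail() = 41   # internal: free to change in any release
end

Base.ispublic(MyLib, :stable_api)       # true
Base.ispublic(MyLib, :internal_detail)  # false
```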
Yeah, I feel that’s pretty much the standard and recommended way to work with Julia for the majority of devs, which is why this quote had me rolling my eyes pretty hard.
The people developing Julia definitely use the REPL as their main interface to the language, and if the REPL didn’t work, a lot of their work would grind to a halt. It’s an essential developer tool. This might not be the case in other languages, but it’s certainly the case in Julia.
The point I was trying to make above was that if more devs relied on Pluto to do their work instead of a REPL, they’d probably open more PRs that help Pluto.
I got around the REPL by using DaemonMode.jl and a shell script in order to get an AI coder to work with a persistent Julia session. The shell script takes a .jl file or Julia code and pipes it into the daemon.
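Roughly like this, for anyone curious (a minimal sketch of the setup described; flags and file names are illustrative):

```julia
# Server side: start once and leave it running; it holds the warm session.
using DaemonMode
serve()   # DaemonMode.jl's server loop, listening on a local port

# Client side, which is what the shell script wraps, e.g.:
#   julia --startup-file=no -e 'using DaemonMode; runargs()' some_script.jl
# runargs() sends the file named on the command line to the daemon, which
# runs it in the persistent session and streams the output back.
```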
But I agree with you that the REPL is essential to how most people develop in Julia. I used to do that, and my students still do (Shift+Enter in VS Code). It works fine.