Is my understanding of Julia correct?

Not me. I came for expressiveness, lack of OOP, and JuMP. Julia enables me to express my programming ideas better than any of the other languages I have tried over 40 years of programming (C, C++, Python, Limbo, JavaScript).

Sort of, in that rebinding variables, as in a = 1; a = "1", is valid code;
but not so in that code like a = 1 + "1" will fail, unlike say PHP, where it evaluates to 2, or JavaScript, where it evaluates to “11”.
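That distinction can be seen directly at the REPL; a minimal sketch (my example, not from the original post):

```julia
a = 1        # `a` is bound to an Int
a = "1"      # rebinding to a String is fine: values have types, not variables
@assert a == "1"

# Mixed number/string addition has no `+` method defined, so it throws:
try
    1 + "1"
    @assert false  # unreachable: the line above must throw
catch err
    @assert err isa MethodError
end
```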

9 Likes

Julia is dynamically typed. For more info, see the answer at

2 Likes

Does this really have to do with dynamic typing? To me this seems like something that could happen in either type system; it depends more on how the language implements type promotion.

Julia does allow 1 + 1.0, which seems like pretty much the same thing as 1 + "1" at the core: it allows you to add two different types.
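To make the promotion point concrete, a small sketch using Base's documented promotion machinery:

```julia
# Julia defines explicit promotion rules between numeric types, so both
# operands of 1 + 1.0 are converted to a common type before the addition.
x, y = promote(1, 1.0)
@assert (x, y) === (1.0, 1.0)                  # both promoted to Float64
@assert promote_type(Int, Float64) == Float64
@assert 1 + 1.0 === 2.0                        # Int + Float64 -> Float64
```

No such promotion rule exists between `Number` and `String`, which is why `1 + "1"` is a `MethodError` rather than a silent coercion.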

Was also going to suggest the same discussion as the above poster :slight_smile:

2 Likes

Not only “could” happen, but is very easy, just add a method:

julia> import Base: +

julia> +(x::Number, y::String) = x + parse(Float64, y)
+ (generic function with 209 methods)

julia> 1 + "1.0"
2.0

Julia doesn’t do it because it wants to be consistent with the meaning of the operations in a more general sense; +, for instance, is meant to represent additions that are actually well defined.

2 Likes

OK. I kinda knew I was wrong :slight_smile:

1 Like

Note that in Python (also a dynamic language) this is likewise not allowed, just as in Julia.

Allowing such operations really opens a big can of worms; I do not understand this design decision of PHP and JS (fortunately, for the latter this can be mitigated by using TypeScript).

I think it should be said that most of this is a moving target: most of the things mentioned here already have PRs that greatly improve them, or work in progress.

  1. Julia is getting major GC changes.
  2. The GC will be used a lot less with things like EscapeAnalysis removing a lot of allocations in the fairly near future (the PRs look fairly complete)
  3. Per-module union splitting controls are already in master for v1.8
  4. Vectors are moving to the stack when provably possible (PRs for this kind of stuff have already merged in some contexts for v1.8)
  5. StaticCompiler now supports the Julia runtime and some things like array allocations, so AOT is a lot better and continuing to improve. Given its current pace of change, “but very difficult” is not a good summary since it seems to have already handled the things you mentioned (this month, see the PRs that merged).
  6. Precompilation is changing dramatically with the caching of external code instances. This was the main thing lopped off of .ji’s before, and once Tim’s PR merges there will be no restriction there. Things not precompiling will generally not be an issue anymore (if you properly use calls in using, etc.); instead the issue will be load times. There are two ideas I know of for decreasing load times (“merging” usings to stop doing so many invalidation checks, and native code caching). Going down this path, I tend to think that PackageCompiler and sysimages will be a stopgap solution only needed until this is handled.
  7. PackageCompiler works rather well. We’re using it in many industrial use cases and it never seems to be the issue. You might not want to use it in day-to-day development, but making (relocatable) system images and compiling binaries for others to use is something we have pretty much down these days, albeit with a big binary.

I think this case is mostly handled by caching external code instances now, though: the type instability at the function barrier meant inference couldn’t know it would hit the concrete method already compiled in RecursiveFactorization.jl, but now it would just be added to what’s precompiled. That would increase load time, but that’s not as big a deal. So some of the “necessary” hacks may be unnecessary by v1.8.

17 Likes
  • Julia is getting major GC changes.

Do you happen to have a link to a corresponding github issue or so? Just interested in reading a bit about this.

It was discussed at the last CAJUN meeting.

To all in this thread discussing dynamic typing “but not like PHP or JavaScript”: while not a perfect taxonomy, I suggest considering two distinct dichotomies, static vs. dynamic typing and strong vs. weak typing.

Static vs. dynamic is basically the question of whether it is the variables/bindings that have types (which often allows for some compile-time type checking), or the values themselves that have types (which often only allows for runtime type checking).

Strong vs. weak is about how much implicit conversion the language allows. Statically typed languages are often strongly typed, but dynamic languages can go both ways, and currently most modern languages go the strong route (PHP and JavaScript are the exceptions). In JavaScript, for example, 1 == '1' is true (for much more, see the video at the end). Weak typing is a convenience for very script-oriented languages, but it often causes considerable headache when you try to build anything larger than a script.
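In Julia terms (a small illustration of my own, not from the thread), “dynamic but strong” means cross-type comparisons don’t coerce and every conversion must be spelled out:

```julia
@assert (1 == "1") == false   # no implicit coercion, unlike JavaScript's ==
@assert parse(Int, "1") == 1  # string -> number must be explicit
@assert string(1) == "1"      # number -> string must be explicit
```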

Here is one tutorial that makes this distinction: Typing: dynamic vs. static and weak vs strong | Intro to Programming

Also, I recommend the short but memorable WAT video (comedic video about how Ruby/JavaScript weak typing makes things very strange): Wat

EDIT: I should be working on my thesis, but before that, I found what is maybe the most iconic example of weak typing, this one from JavaScript:

> "2" + 1
'21'

> "2" - 1
1
7 Likes

The GC will be used a lot less with things like EscapeAnalysis removing a lot of allocations in the fairly near future (the PRs look fairly complete)

Just to point out that Julia has had some form of escape analysis for a long time. What is exciting about @aviatesk’s work is the possibility of interprocedural escape analysis, and of using escape analysis to power more optimizations in the higher-level compiler.

Vectors are moving to the stack when provably possible

This would be much easier with the Buffer change discussed a while back; the reason Arrays currently don’t partake in quite a few optimizations is that they are special. Prem did some preparation work for having AllocOpt (the LLVM-based escape analysis) reason about small arrays, but the timeline for landing this is indeterminate.

7 Likes

The host process manages the memory of the accelerator device. In Julia (as other ML frameworks do as well) we use some form of pooling to make allocating device memory faster, but you still run into the issue that GPU memory is a resource whose release is delayed by the GC.

In effect we implement reference counting for GPU allocations with Julia’s finalizers. If those finalizers don’t run in a timely fashion, it can look like the GPU is running out of memory. As a consequence, when we fail to allocate GPU memory we force a general GC run in the hope that this will return memory to the pool.
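The retry-on-failure pattern described above can be sketched as follows; `alloc_with_gc_retry` and `fake_alloc` are hypothetical names for illustration, not the actual CUDA.jl internals:

```julia
# Sketch: if a device allocation fails, force a full GC run so pending
# finalizers can return memory to the pool, then retry once.
function alloc_with_gc_retry(alloc::Function, nbytes::Integer)
    try
        return alloc(nbytes)
    catch err
        err isa OutOfMemoryError || rethrow()
        GC.gc(true)            # full collection; runs finalizers that free device memory
        return alloc(nbytes)   # a real allocator may loop, trim the pool, etc.
    end
end

# Simulated usage: an allocator that fails on its first call, as if the pool
# were exhausted until finalizers returned memory.
calls = Ref(0)
fake_alloc(n) = (calls[] += 1; calls[] == 1 ? throw(OutOfMemoryError()) : zeros(UInt8, n))
buf = alloc_with_gc_retry(fake_alloc, 8)
```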

2 Likes

I think so. I’ve heard a lot of talk about relying on the optimizer for static computation, but it doesn’t seem to me like as clean a distinction as relying on the type system. I’d like to better understand what programming model we’re headed toward for static computation, and how this relates to things like generated functions, which seem to me like the canonical example of static computation and are expressed entirely in terms of the type system.
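A minimal illustration of that point (my example, not the poster’s): a generated function’s body runs at compile time, sees only the *types* of the arguments, and returns an expression that becomes the compiled method body for that type combination:

```julia
@generated function tuple_sum(t::Tuple)
    n = length(t.parameters)   # tuple length is known statically from the type
    ex = :(t[1])               # assumes a non-empty tuple
    for i in 2:n
        ex = :($ex + t[$i])    # unrolled at compile time: t[1] + t[2] + ...
    end
    return ex
end

tuple_sum((1, 2.0, 3))   # computes 6.0 with no runtime loop
```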

There’s a lot more to say about this, but I’m on a family trip until tomorrow. So I’ll save it for after I return, and after this topic splits (assuming that happens).

1 Like

I had a chance to discuss it with @aviatesk in person a few days ago, and it seems like one of the major blockers at the moment is codegen and possibly GC support (if stack objects can reference heap objects). In retrospect, I think it would have been possible to start from a more dynamic approach where escapes of manually declared stack objects are detected at run time. It would have had similar difficulties with codegen and GC. Of course, automatic stack allocation used across call boundaries definitely requires inter-procedural analysis, and the fact that we have EA now is super exciting. But, to me, this also illustrates the challenge (and the exciting part) of designing features for Julia. We need to co-design the compiler, runtime, and language altogether, and the dynamic aspect of Julia requires us to think very differently (in the sense that we can’t simply incorporate ideas from other existing projects and research).

I know very well that I don’t have to tell you this, @vchuravy, but I still think it’s worth bringing up, since the main discussion about the future of Julia tends to revolve around adding more stuff to the compiler/optimizer. This certainly is a very exciting part. But I think we should pay a similar amount of attention to language/library features as well, so that Julia programs have more predictable performance while still being easy to write and maintain.

13 Likes

It’s super convenient, but error prone. It’s appropriate for a ‘hacking together’ script, but nothing else. It’s also the behaviour of other shell script languages like PowerShell:

1 + "2"    # 3, the string is converted to a number
"1" + 2    # "12", the number is converted to a string

Many languages allow you to add a number to a string (doing an implicit number → string conversion, but not string → number). For example, VBA and C# do this.

The other extreme is Haskell, where there is no implicit conversion, so you have to do more work. On the other hand, it is a more rigorous approach. Is this a better trade-off?
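For comparison, Julia also sits near the explicit end: string concatenation is `*`, and both conversion directions must be written out (my illustration, not from the post above):

```julia
@assert "1" * string(2) == "12"   # number -> string, then concatenation
@assert 1 + parse(Int, "2") == 3  # string -> number, then addition
```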

major blockers at the moment is codegen and possibly GC support (if stack objects can reference heap object). In retrospect, I think it would have been possible to start from a more dynamic approach where escapes of manually-declared stack objects are detected at run-time.

What about GPU code and GPU AD? Allocations are an even larger problem there, where Julia often still lags behind mainstream systems. Forgive me for not knowing the details well, but does your mention of the GC, which cannot see GPU allocations, imply that this is a CPU-only solution?

Julia tends to revolve around adding more stuff to the compiler/optimizer. This certainly is a very exciting part. But I think we should pay a similar amount of attention to the language/library features as well, so that Julia programs have more predictable performance while still being easy to write and maintain.

Yes, and those of us using Julia for ML feel this acutely, where the optimizer subset has far outpaced the language semantics. The workable semantics for fast GPU+AD are hard to meet,
or even virtually non-existent for more complex models, which often require hand-rolled rrules.

The approach of designing new dynamic semantics for every new demand on the compiler (and even this is hard because of compatibility issues, I guess) seems like it will always be playing catch-up, because it’s comparatively easier to write passes.

This will be especially true with the upcoming compiler plugin work, which is hoped to be composable and easily accessible from user-land. Is there something analogous to this for language semantics or the type system, so that people who write passes can also co-design those with semantics or invariants? Do Haskell language extensions provide a useful guide here?

If Julia’s dynamic nature makes it hard, would it be feasible to explore dynamically and then declare locally static regions? Even with current optimizations like conventional devirtualization, hitting the right semantics can be hard for users, so opt-in static regions (if possible?), as opposed to different dynamic semantics, sound like they might have additional benefits.

There’s talk about finding a good static subset of Julia semantics, but unless it has prophetic foresight, does that just punt the problem to when we make new compiler demands of it? So maybe it’s better to think about how we can make that programmable. We’d have to give up certain global properties like decidability and soundness, but I think it’s worth the trade-off.

Edit: am I just describing macros?

By allocation do you mean dynamic (heap) device memory? I guess using refcounting as Python does would help make releases predictable (and enable other interesting things), but that’s very different from stack memory management.

I’m definitely not the right person to answer this (I’m playing with GPUs just a little bit, and not at all with AD), so my imagination is limited. I can only think of two (not mutually exclusive) directions: defining a static sub-language, or bringing more pieces of the runtime to the GPU. The former has been brought up several times, but I think the latter is also important. It’d be interesting to incorporate recent advances in dynamic data structures on GPUs into Julia and use them to implement some subsets of the runtime (dynamic memory management). There is also already some work on hostcall (Hostcall · Issue #440 · JuliaGPU/CUDA.jl · GitHub), which is another way to bring dynamism to the GPU side. You’d probably want to get rid of dynamism when using GPUs in production, but I can imagine it’s still useful for incrementally building up prototypes and gradually removing dynamism from the critical parts.

But I think it’s fair to say that these are very hard problems that go way beyond simple “feature requests.” They require a concentrated effort by motivated and talented individuals. So I don’t think just discussing this leads us anywhere :stuck_out_tongue: other than increasing the chance of capturing the attention of people who may want to work on this in the future (which, I guess, is some small positive impact).

2 Likes

I think the AlgebraicJulia folks would agree, as they also make heavy use of type-domain computation to coax generated functions, and this is critical to the success of Catlab. I doubt adding more nuances to another optimizer-defined sublanguage, and hoping users hit them when composing their own code, would be as useful.

See Evan Patterson: Principles and pitfalls of designing software for applied category theory - YouTube for more on that and future research directions. cc @epatters @jpfairbanks in case they have any specific thoughts here.

1 Like

Related: Should `StaticInt` be in Base? · Issue #44518 · JuliaLang/julia · GitHub

1 Like