Parametric inheritance woes vs traits

I’ve noticed some discussion of issues with Julia’s parametric re-usability for array programming (see Flux’s issues, for example, on rewriting functions for the GPU), but I haven’t personally encountered these issues, since I have little need for exotic types.

Some of the discussion here: This is also very easy in Keras / Tensorflow using the FloatX parameter, or spec... | Hacker News touches on similar concerns, such as the following in response to @ChrisRackauckas:

[engineering a problem using Julia’s abstract type system] required a huge amount of backend code to set up the abstraction and make lots of built-in types that adhere to the abstraction. And in cases when the abstraction fails to offer the exact type of extensibility needed (which is most of the time unless you’re authoring yet another highly abstracted library that can tie its use cases to that underlying abstraction, which is never in practice), then it was wasted effort, and “no overhead” is a false description, because you still have to dig into the guts of all the stuff that gets auto-generated if you plugged into the abstraction and change the mechanism of how it gets auto-generated for your special case, or else (usually easier), just write separate data structures outside of the abstraction vortex and have a few small converters or helpers that marshal your custom data type into and out of the abstraction for the really tiny amount of auto-generated features that actually matter to the use case.

The “but it requires zero lines of code” thing is so misleading once you hit real use cases where the choices of how the abstraction auto-generates things end up being unusable for some specific situation.
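
For context, “plugging into the abstraction” here means something like subtyping AbstractArray and implementing its informal interface, after which Base’s generic code works unchanged. A minimal sketch (my own illustration, not from the linked thread):

```julia
# Minimal sketch of the pattern being criticized: subtype AbstractArray,
# implement size and getindex, and generic Base code (sum, collect,
# broadcasting, ...) works without further effort.
struct OneHotVector <: AbstractVector{Bool}
    idx::Int
    len::Int
end

Base.size(v::OneHotVector) = (v.len,)
Base.getindex(v::OneHotVector, i::Int) = i == v.idx

v = OneHotVector(3, 5)
sum(v)       # 1, via the generic AbstractArray fallback
collect(v)   # Bool[0, 0, 1, 0, 0]
```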

Is this a common experience when using custom types in complex hierarchies, or are these only specific instances of abstraction mismatch?

Either way, can the situation be helped by a move away from OO and towards a system like @andyferris’s Traitor.jl in 2.0?
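
For reference, the kind of trait dispatch I mean can already be hand-rolled today with the “Holy trait” pattern; as I understand it, Traitor.jl mostly aims to make this idiom more ergonomic. A rough sketch (trait names made up for illustration, not Traitor.jl’s actual API):

```julia
# Hand-rolled "Holy trait": dispatch on a trait value instead of the
# nominal type hierarchy. All names here are illustrative.
abstract type StorageStyle end
struct DenseStorage <: StorageStyle end
struct LazyStorage  <: StorageStyle end

# Types opt in by defining the trait function, independently of their
# position in the abstract type hierarchy.
StorageStyle(::Type{<:Array}) = DenseStorage()
StorageStyle(::Type)          = LazyStorage()   # conservative default

# The entry point re-dispatches on the trait value.
describe(x) = describe(StorageStyle(typeof(x)), x)
describe(::DenseStorage, x) = "dense path for $(typeof(x))"
describe(::LazyStorage,  x) = "generic path for $(typeof(x))"

describe([1, 2, 3])   # "dense path for Vector{Int64}"
describe(1:3)         # "generic path for UnitRange{Int64}"
```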

This is just us taking too long to find out we were getting trolled. Some anonymous person on the internet is saying that it’s impossible to do X, Y, and Z even though we have already done X, Y, and Z. Instead of talking about all of the subjective points about how “possibly easy maybe Cython could in theory” etc., I should’ve just said “we’ve already done it. Here’s an example. Please provide code that does the same thing using X” and just left it at that.

We don’t need to talk about the future subjective merits of the language; that just invites trolling and long-winded discussions where someone who doesn’t actually understand the language gets to sound just as smart to an uninformed reader. Instead, we should avoid quagmires like this by pointing to what we’ve already done. Julia is at the point now where our generic algorithms lead to cool new results, and we can just showcase them instead of talking subjectively about the merits of Julia for generic programming. This change of narrative is a broader mental flip I/we need to make in a world with post-v1.0 Julia.

Julia isn’t just a newcomer with a possible user base in the future: writing packages in Julia already gives you a larger audience than you can handle bug reports for (more than 2 million downloads). Julia isn’t a language that could possibly be fast for real scientific projects with high parallelism; we already have examples like Celeste where it WAS fast. Julia’s ML libraries aren’t subjectively good enough for people to adopt in the future; they already benchmark very well, and there is an entire zoo of models showing Flux in use (most of them not in the actual model zoo). Julia’s database libraries already parallelize and work out-of-core. JuMP already interfaces with more optimization tools than modeling languages outside of Julia do, because developers of optimization methods have started coding their newest methods, like Juniper, directly in Julia for use with JuMP. DifferentialEquations.jl already supersets the methods of most other libraries, with both bindings and native Julia implementations, and the native implementations win out in most benchmarks.

I messed up in this thread but that’s my plan for the next.


I will just say this: the comment by mlthoughts2018 referred to in the OP does not match my experience in Julia at all.

It does take a little bit of experience to create truly “generic” code in Julia - you have to think a bit about what assumptions your methods are making. If you make no attempt at prescience, well, yeah, your code won’t be plug-and-play. But honestly, that post sounds like my experience in some other languages/environments…
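
Concretely, “thinking about your assumptions” mostly means not hard-coding Float64 or Array; a hypothetical sketch (not code from any package):

```julia
# Hypothetical example: by using eltype, similar, axes, and views instead of
# Float64, Array, and 1:size(A, 2), the same method works for Array, views,
# offset arrays, and other AbstractMatrix subtypes.
function column_norms(A::AbstractMatrix)
    out = similar(A, float(eltype(A)), axes(A, 2))
    for j in axes(A, 2)
        out[j] = sqrt(sum(abs2, @view A[:, j]))
    end
    return out
end

column_norms([3.0 0.0; 4.0 1.0])   # [5.0, 1.0]
```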

As for Traitor.jl: very occasionally we find there’s a concept that is better served by a trait, so we go back and refactor that trait into the dispatch pattern in the backend (like when IndexStyle was added to Base). The main thing Traitor aims to achieve is to make multiple dispatch a bit more generic so that such a refactor becomes unnecessary. There are multiple similar ideas floating around (multiple inheritance, or an “automatic” interface system, as opposed to “nominal” traits). But I think Chris R has more-or-less proven we don’t need those to do great things with what we’ve got.
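
For concreteness, declaring the IndexStyle trait for a custom array looks like this (a sketch in the spirit of the example in the Julia interface docs, not code from this thread):

```julia
# One extra line declares the trait, and Base's generic iteration and
# reduction machinery takes the linear-indexing fast path.
struct Squares <: AbstractVector{Int}
    n::Int
end

Base.size(s::Squares) = (s.n,)
Base.getindex(s::Squares, i::Int) = i * i
Base.IndexStyle(::Type{Squares}) = IndexLinear()   # the trait declaration

sum(Squares(10))   # 385
```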
