Well, no one is forcing anyone to use multiple dispatch. You can just leave all input arguments but the first untyped. Asking for less typing seems a bit at odds with the rest of your argument, though.
Now iterate has an arguably sensible definition for Any (exactly as print has). And, by the way, this is exactly the same definition as we all have for x::Number. Does it make the situation any better? I’d argue that from a user perspective it’s a worse outcome (code doing weird stuff, silently) than a verbose MethodError, even though from a “math is now correct” perspective things got a bit better.
We are very much on the same page here, that implementing meaningless functions for Any indeed makes everything worse. In the case of show there is a meaningful default behavior though, so it’s fine to define it that way.
What I would like to be able to do is express that a function is in fact only sensibly defined for a certain set of types that can all do certain things, but which might have a non-nominal relationship. There are a lot of things that are iterable, but it doesn’t make sense for all of them to subclass AbstractArray. You can’t express those kinds of sets with single inheritance, as you might depend on multiple different pieces of functionality.
You can hack it with multiple inheritance, but that’s a very dirty solution. The clean way would be grouping types by the functionality defined for them. In Rust that’s called traits, in Haskell it’s type classes, and in Mypy it’s protocols. And that’s usually what I want to express anyway: I don’t really care about hierarchies (if applied incorrectly, they don’t make any sense), but only whether the thing I want to put into a function has everything needed for the function to work.
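For completeness: Julia can approximate this today with the so-called Holy traits pattern, where an ordinary type stands in for the trait and dispatch happens on a trait function. A minimal sketch (all names here are made up for illustration):

```julia
# A hypothetical "iterability" trait encoded as types (the Holy traits pattern).
abstract type IterabilityTrait end
struct IsIterable <: IterabilityTrait end
struct NotIterable <: IterabilityTrait end

# Types opt in explicitly; the default is NotIterable.
isiterable(::Type) = NotIterable()
isiterable(::Type{<:AbstractArray}) = IsIterable()
isiterable(::Type{<:Tuple}) = IsIterable()

# Dispatch on the trait, not on the nominal hierarchy.
mysum(x::T) where {T} = mysum(isiterable(T), x)
mysum(::IsIterable, x) = sum(x)
mysum(::NotIterable, x) = error("mysum requires an iterable argument")

mysum([1, 2, 3])   # 6
mysum((1, 2, 3))   # 6, even though Tuple is not an AbstractArray
```

The downside, as discussed in this thread, is that the opt-in is manual and the trait carries no checkable contract about which methods must actually exist.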
Meaningfulness of a particular definition is in the eye of the beholder. You, for example, think that Base.iterate(x::Any) = throw(MethodError(iterate, x)) is meaningless/wrong/mathematically incorrect, etc., but by the principle of charity you shouldn’t assume that this is a universal fact. After all, designing a language as consistent as Julia is a major undertaking, and the designers surely must have had something in mind when taking this decision.
And my opinion (derived from working briefly on a large C++ codebase) is that trying to impose restrictions at the type level (by introducing another Turing-complete language inside templates) leads to less readable errors (and screens of them :P) than simply failing on the non-existent method somewhere. But this is just my opinion.
I think I understand what you mean. Julia’s emphasis on generic programming (with types being filled in at run time during compilation) is one of its greatest strengths and makes it an incredibly productive language. However, it comes with the same costs that other languages with generic programming have struggled with as well: how do you deal with a situation where the type provided does not match the capabilities expected?
C++ for example has had this problem since templates were first introduced. If you call a template function with a type that doesn’t fit, you might just end up getting screens upon screens of template compiler error salad that’s cryptic to the point of being useless. In C++ the problem is alleviated to some degree by the fact that all of these errors occur at compile time, so if it compiles you know your program is correct (in that respect at least). But the community still invested a huge amount of effort into better tooling, documentation and programming conventions to reduce the pain of template programming. And since even that wasn’t enough, C++20 introduced Concepts, which for the first time make it possible to describe exactly which interface a template parameter needs to provide.
In Julia we are in a similar situation but in some aspects made worse by the language’s dynamic nature. As the OP mentioned, as opposed to the situation in a static language we will only find out at runtime whether an object we put into a generic function fulfils the requirements. What’s more, it’s not trivial to make sure that every possible situation where an error might occur is tested. Sure - as someone mentioned - since we are talking about runtime errors it is pretty straightforward to simply catch them and deal with them when they occur, but I would still say the situation is far from ideal. And I think as Julia matures and its use for production code increases this problem is going to get worse (due to larger applications, more complex types and greater need for reliable code).
I’m not sure a full-blown Concept language as C++ provides it would be a suitable solution for Julia, but I think some way to reliably declare and test “interfaces” that goes beyond standard unit testing would be extremely useful.
Just to clarify: Julia is actively developing, and there are new tools, like JET.jl, which help with this kind of question. For example:
# file demo.jl
struct Foo end
foo = Foo()
for x in foo
    println(x)
end
In the Julia REPL:
julia> using JET
julia> report_and_watch_file("demo.jl"; annotate_types = true)
[toplevel-info] virtualized the context of Main (took 0.046 sec)
[toplevel-info] entered into demo.jl
[toplevel-info] exited from demo.jl (took 3.158 sec)
═════ 1 possible error found ═════
┌ @ demo.jl:5 Base.iterate(foo::Foo)
│ no matching method found for call signature (Tuple{typeof(iterate), Foo}): Base.iterate(foo::Foo)
└─────────────
So, nowadays it is possible to catch this sort of error despite the dynamic nature of Julia. Also, as one can see, these errors can be caught without an extra type system.
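And once JET has pointed at the missing method, the report goes away by actually implementing the iteration interface for the type. A minimal sketch (the choice of yielding 1, 2, 3 is arbitrary, just for illustration):

```julia
struct Foo end

# Implement Julia's iteration interface: yield 1, 2, 3, then stop.
Base.iterate(::Foo) = (1, 2)   # first value, next state
Base.iterate(::Foo, state) = state > 3 ? nothing : (state, state + 1)

for x in Foo()
    println(x)   # prints 1, 2 and 3
end
```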
This looks like a pretty cool project that covers (or aims to cover) most of the debugging issues mentioned above.
The other part of the problem is of course documentation and design discipline (yes, I am slightly moving goal posts here…). Every generic function implicitly describes an interface that its (non-typed) arguments have to conform to. A way to formally describe that interface (as done by e.g. concepts in C++ or ABCs in OOP with runtime polymorphism) would a) ensure that users know which functions need to exist or be implemented and b) nudge me as the author of that function to keep that interface as minimal as possible. That said, I am genuinely not sure if there is a way to implement some form of “declared interfaces” that is at the same time useful and general and compatible with Julia’s overall philosophy of dynamism and low ceremony.
seems really close to the answer. The only problem I know of is how to dispatch if an object has two traits that both implement the same function but neither is more specific. Having the user decide may be the best choice but it could be too much work, so there needs to be a way for users to move that work into libraries that can be shared.
Fair enough. Let me try to rephrase it as a feature instead. And since we’ve been on the enumerate example for too long: take for example this excerpt from this blog:
In the end, the requirements for a type to work in the out-of-place format can be described as the ability to do basic arithmetic (+,-,/,*), and you add the requirement of having a linear index (or simply having a broadcast! function defined) in order to satisfy the in-place format.
That is very valuable information for someone defining a new type to use with the library. The definition of, for example, ODEProblem, however, looks like this:
so you can’t see any of that very valuable information in it. Wouldn’t it be cool if you could annotate that information directly in the function? That would save you from the trial-and-error approach of testing your type on every possible interaction.
And wouldn’t it be even cooler, if you could also test that annotation? In that case, if new requirements for your types arise, you can immediately see that!
For me, being able to write types in such a way would considerably increase how expressive I can be. I could tell my users exactly what they would need to do in order to be compatible with my library, for which I now have to resort to documentation, which can easily get out of date.
Hopefully, formulated in this way, people can see that doing it this way does not in fact reduce the flexibility of the functions - as they couldn’t be executed with non-compliant types anyway - but instead increases the amount of information library authors can communicate to their users.
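Lacking language support, one hedged approximation of such a testable annotation today is a runtime check built from hasmethod. The function name below is hypothetical, not the actual DifferentialEquations.jl API:

```julia
# Hypothetical check for the "out-of-place" requirements quoted above:
# the state type must support basic arithmetic (+, -, *, /).
function check_oop_interface(::Type{T}) where {T}
    for op in (+, -, *, /)
        hasmethod(op, Tuple{T,T}) ||
            error("$T must implement $op(::$T, ::$T) to be used out-of-place")
    end
    return true
end

check_oop_interface(Float64)   # passes, Float64 has all four operations
```

Note that hasmethod only proves a method with that signature exists, not that it behaves correctly, so it is weaker than a C++ Concept, but it turns the trial-and-error into one early, readable failure.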
Cheers and thanks for the lively discussion once again
There are many oversights that get made during a major project and this may simply be a mistake that hasn’t been fixed yet. I’m actually pretty sure there’s a github issue where the authors said that for this case but I can’t find it at the moment.
Have you looked at this package: Motivation · BinaryTraits.jl? There is a @check macro for that. But anyway, as far as I understand, this type of checking will never be like in static languages; basically it’s pretty close to an interface test function written by the trait author.
I think the reasonable strategy for trait/interface checking in Julia is some test macro or function that provides a quick way to check interface compatibility, to avoid potential errors in long-running code.
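Such a quick check can be as simple as a plain @testset that users run against their own types; a hypothetical sketch:

```julia
using Test

# Hypothetical conformance check: does x behave like an iterable collection?
function test_iteration_interface(x)
    @testset "iteration interface for $(typeof(x))" begin
        @test hasmethod(iterate, Tuple{typeof(x)})   # entry point exists
        @test length(collect(x)) == length(x)        # length agrees with iteration
        @test first(x) == first(collect(x))          # first element is consistent
    end
end

test_iteration_interface([1, 2, 3])
test_iteration_interface(1:5)
```

Run once in a package’s test suite, this catches a missing method before it surfaces hours into a long computation.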
Given the nature of Julia it’s not that different from compile-time checking in practice; it is more a matter of getting used to dynamic languages.
Compile-time checks could become more beneficial if a static subset of Julia appears and static binaries are adopted, however. And yet, would that be that different from JET?
As a feature proposal that sounds good, and as you can see there is plenty of activity in the community to address this need. None of these efforts is mature and/or widely accepted enough to be included in the language, though (and there are plenty of arguments for keeping such features in separate packages).
More standard (in Julia) is to define “interfaces”, i.e. a common set of functions that some objects share; see e.g. iteration, AbstractArray, AbstractDict and so on. While I’d like to see more of these efforts (what is needed for MyCrazyFloat to behave like AbstractFloat?), I don’t need them to be imposed at the type/signature level, or to read them directly in code. To me such descriptions are best read in the documentation, in natural language, rather than from templates. (Again, my limited experience with C++ may compound my aversion to the latter :))
As I understand the problem now it is more of
I need a way to tell my users (and myself) what do I expect from inputs to my software
The proposed static annotation is one of many solutions (which comes with the price of making code less readable). Another one is writing good documentation (price: keeping docs and code in sync). Let me add another solution to the problem: define your specification/protocols in a lightweight package and create a testset to test it. That’s the solution I myself used for GitHub - kalmarek/GroupsCore.jl: Interface for abstract groups
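To sketch what I mean (a toy example, not the actual GroupsCore.jl API): the spec package declares the required functions and ships a reusable conformance testset that downstream packages run on their own types.

```julia
# Toy version of a "spec in a lightweight package"; all names are hypothetical.
module MonoidSpec

using Test

# The interface: a conforming type must implement these two functions.
function op end
function unit end

# Reusable conformance testset that downstream packages run on their types.
function test_conformance(::Type{T}, a::T, b::T) where {T}
    @testset "MonoidSpec conformance for $T" begin
        @test op(a, b) isa T        # closure
        @test op(a, unit(T)) == a   # right identity
        @test op(unit(T), a) == a   # left identity
    end
end

end # module

# A downstream package opts in and tests itself:
MonoidSpec.op(x::Int, y::Int) = x + y
MonoidSpec.unit(::Type{Int}) = 0
MonoidSpec.test_conformance(Int, 3, 4)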
There it is: a versioned, dependable and testable specification! But of course we’ll see how it fares in the future.