I would love to be able to use keywords with positional arguments. It’s one of my favorite “code should be self-documenting” features.
When I started out with Julia, I also thought that was a missing feature (especially coming from Python), but the idea is really not compatible with multiple dispatch: different methods can use different names for the positional arguments. So, allowing keywords for positional arguments would be very breaking, and thus won’t happen.
This would be a grotesque mis-feature. I fervently hope that nightmare vision will never torment my soul again.
That’s fine; it just means the positional argument names you use have to be the ones allowed by the method you’re calling. It works just the same as with keyword arguments.
Then don’t use it. But many of us like self-documenting code like we’re used to in Python. I personally can never remember the right order for the arguments in functions like Turing.sample.
That’s not how it works. If this were ever implemented, everyone would be forced to deal with it, that’s one of the things that makes it so evil.
You can always add a method accepting only keyword arguments and forwarding those to positional arguments. I’ve certainly done that on occasion in my own packages. You could even do something crazy like dynamically inspecting the methods table and selecting a method based on matching names (although I wouldn’t recommend it). But the compiler just isn’t going to be able to help you with any of this: you’ll always end up with runtime-dispatch.
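For illustration, here is a minimal sketch of that forwarding pattern with a hypothetical mysolve function (the names mysolve, A, b, and maxiter are made up for this example):

```julia
# Hypothetical solver with positional arguments
mysolve(A, b; maxiter=100) = (A, b, maxiter)

# Opt-in keyword-only method that forwards to the positional one.
# The accepted names are fixed by the author here, not inferred by the language.
mysolve(; A, b, kwargs...) = mysolve(A, b; kwargs...)

mysolve(1.0, 2.0)        # positional call
mysolve(A=1.0, b=2.0)    # keyword call, forwards to the same method
```

The keyword method has a different (keyword-only) signature, so it coexists with the positional one; the cost is that the author must maintain the forwarding method by hand.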
The compiler could handle it easily (see KeywordCalls.jl, which gives 0-cost keyword dispatch). The fact that keywords force runtime dispatch is because of a heuristic in the Julia compiler: the compiler assumes keyword arguments are unimportant, so it doesn’t need to compile separate methods for every possible keyword.
It’s evil to have to read sample(rng=Xoshiro(0), sampler=NUTS(), model=my_loglikelihood) instead of sample(Xoshiro(0), NUTS(), model)?
Even if you don’t like it, then just don’t allow it in your style guide. All I’m saying is that you’re never going to get people to switch over from Python if this feature is missing; it makes Julia code substantially less readable than Python. Readable isn’t the same thing as pretty: readable code requires users to be explicit about what they’re doing, and using kwargs instead of positional arguments forces users to document their code.
of all the things I’ve read someone claim will doom Julia, this is one of the silliest. besides being IMO a much more confusing misfeature than status quo, it’s also largely irrelevant
I also just don’t really understand your argument. why would kwargs be synonymous, or even correlated, with documentation?
do you really want to write subset(table=df, label=:day, value="Sunday")?
You chose an example where keywords read well, but most functions have arguments that don’t have any reasonable names. It’s evil to have to read +(x=1, y=2), because that’s the can of worms you are opening. Every arbitrarily named argument is now part of the user interface, and every change you want to make for the sake of internal implementation reasons is now a breaking change.
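A small sketch of that hazard, using a made-up area function: if keyword-style calls to positional arguments were allowed, a purely internal rename would break callers even though the public behavior never changed.

```julia
# Version 1 of a package:
area(width, height) = width * height
# Callers could then write area(width=2, height=3).

# Version 2 renames the parameters purely for internal clarity:
area(w, h) = w * h

# area(2, 3) still works, but area(width=2, height=3) would now fail,
# even though the function's public behavior is identical.
area(2, 3)
```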
Interesting, thanks for your insight here – it’s the first answer to this question of mine I’ve seen, so bear with me as I ask a few clarifying follow-up questions:
- Is the “different methods can use different names” just an issue when namespace clashes exist, such as
using Krylov
using IterativeSolvers
# stiff_matrix = ..., force_vector = ..., etc.
cg( A=stiff_matrix, b=force_vector )
Where we pretend, for this question’s purpose, that the cg method from Krylov.jl and IterativeSolvers.jl have different names for those arguments.
- In the actual case, where both Krylov.jl and IterativeSolvers.jl have the same names and number of positional arguments, it would seem like this wouldn’t be “the” issue.
- Are the “methods” you’re referring to from multiple dispatch of a single package’s method? It would seem to me that Julia could simply “ignore” the keyword on positional arguments. For example, it’s not clear to me how:
import LinearAlgebra
# stiff_matrix = ..., mass_matrix = ..., etc
e = LinearAlgebra.eigvals( A = stiff_matrix, B = mass_matrix )[1]
would have issues doing multiple dispatch.
It can also be a safer work process in terms of entering argument names when writing code. For example, in Rstudio when you work with a multiple dispatch method like print(), if you hit the tab key it shows you all of the arguments and their corresponding documentation. This will only show you the arguments for print.POSIXct. You can write a fair amount of code in a dynamic language with multiple dispatch, with just the tab key.
d = Sys.time()
print(d, <hit tab and select documented arguments>
Yes. See arguments of:
Code is read much more often than it is written, so plan accordingly - The Old New Thing
"Indeed, the ratio of time spent reading versus writing is well over 10 to 1. We are constantly reading old code as part of the effort to write new code. …[Therefore,] making it easy to read makes it easier to write
Quote by Robert C. Martin: “Indeed, the ratio of time spent reading versus ...”
I’m not saying that I would want it to be required (wouldn’t want to force my worldview on others) but rather supported as an option.
This is similar to my annoyance with devs using from numpy import * or using LinearAlgebra or using namespace std;, especially in documentation / tutorials. It makes it difficult for a new user, or a different developer three years later, to know what functions come from which packages. I prefer being explicit, if not pedantic, when I write code.
I’m not disputing that code should be optimized for readability
I’m disputing that this proposal achieves that aim
yes, now this is a major problem I have as well. I agree that the magic namespace loading paradigm is not good for readability. I think it is a completely independent issue though to this named positional arg thing. I don’t think the two are remotely similar
We’re really going on a tangent here… Someone should probably move this to a new thread.
But no, this has nothing to do with namespace clashes. It’s very likely that the same package will define multiple methods for the same function, using different names for the positional arguments.
Or, other packages will extend an existing function and use different names. For example, I’m usually writing packages in the context of quantum mechanics, so if I define a type that represents a Hamiltonian operator, I’ll most likely define a method
LinearAlgebra.eigvals(H; kwargs...) for it. Or, maybe in another part of the library, it’s a data structure for a Liouvillian, and I’ll have LinearAlgebra.eigvals(L; kwargs...).
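As a concrete sketch of such an extension (the Hamiltonian wrapper type here is made up for illustration):

```julia
using LinearAlgebra

# Hypothetical wrapper type for a Hamiltonian operator
struct Hamiltonian
    mat::Matrix{Float64}
end

# Extend eigvals for the new type; the natural parameter name here is H, not A
LinearAlgebra.eigvals(H::Hamiltonian) = eigvals(H.mat)

# Callers just write eigvals(x); the parameter name in the method never matters
eigvals(Hamiltonian([2.0 0.0; 0.0 3.0]))
```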
Obviously, people should be able to call eigvals(A) without having to worry about what the type of A is and whether the author extending it used A or H or L or something else. It would be quite limiting if eigvals(A) / eigvals(A=object) extended only to methods that used A. I mean, sure, you could design a language like that, but it would be very different from how it works in Julia.
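To make the dispatch problem concrete, here is a hypothetical function f with two methods whose positional parameters have different names; a keyword-style call would have to choose between matching the name and matching the type:

```julia
f(x::Int) = "the Int method, parameter named x"
f(s::String) = "the String method, parameter named s"

f(1)        # dispatches on type: Int method
f("hello")  # dispatches on type: String method

# If f(x = "hello") were legal, should it select the method whose
# parameter is named x (which only accepts an Int), or the method
# that accepts a String (whose parameter is named s)? Neither answer
# is compatible with how dispatch works today.
```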
In any case, this would be a very breaking change to the fundamental design of Julia, and thus won’t happen (since there’s no breaking 2.0 version planned, and I doubt even a 2.0 would touch anything so fundamental).
The answer to your wish is to improve the performance of kwargs.
But how readable will this be when you want to mix four libraries with related functionalities that all use different naming schemes for the same kind of argument? Perhaps this works for monoliths like SciPy, but I doubt it would be either pretty or readable; rather, it would be extremely confusing in an ecosystem of many small packages.
This is one of the top 3 or 4 complaints about the issue that I hear people bring up, and probably the single biggest feature I miss when I use Turing instead of PyMC (along with the closely related array-labeling ecosystem). Those definitely take up a substantial amount of time. I think it takes me about 50% longer to write a model in Turing than it does in PyMC because of this one issue. I just spend so much more time chasing down bugs in Julia that would be extremely obvious if everything could be clearly labeled.
It seems like Julia devs still haven’t learned the lessons of TensorFlow. TensorFlow/Keras blew a 4-year lead to PyTorch because PyTorch was slightly more readable, despite TensorFlow being orders of magnitude faster under some circumstances. I originally learned Julia because macros let me write more readable/obvious/explicit code (y ~ Normal() instead of y = pymc.Normal()), not for any performance reason.
The LSP server could provide that information. For instance, the TypeScript LSP server does it.
The only way to do that would be to make TTFP explode by having us compile methods for every possible combination of keyword arguments. Then, we would have to replace positional arguments for every function in Julia with keyword arguments.