Absolutely not. This would be only for those who actually want this sort of abomination in their code.
It’s your proposal, not mine. I want to give users that choice; you’re saying we should make keyword arguments faster and then use them to replace positional arguments.
…
I’m pretty confident there are more differences between the libraries than just “PyTorch is slightly more readable”
This is one of the top 3 or 4 complaints about the language that I hear people bring up
they’ll get over it. this is one of those complaints that’s mostly just fear of the unfamiliar rather than an actual issue with the language. I would categorize it with all those who call Julia doomed because of 1-based indexing
Your suggestion implies forcing everyone to make their variable names part of the API. It’s incredibly intrusive and messy.
Variable names are internal implementation details; they should not be leaked like that. That’s what keywords are for.
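To illustrate with a toy example: today, renaming a positional parameter is a purely internal edit, whereas renaming a keyword breaks callers; the proposal would make every rename a potentially breaking change.
f(x) = 2x            # renaming x to, say, v is invisible to callers: f(3) still works
g(; scale) = 2scale  # renaming scale breaks every caller that writes g(scale = 3)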
Turing and other packages where this makes sense can (and in some cases should) set up something like
using Random

function sample(; # keyword args only
    rng=Random.default_rng(),
    logdensity,
    sampler,
    N_or_isdone,
    kwargs...,
)
    return sample(rng, logdensity, sampler, N_or_isdone; kwargs...)
end
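This could then be called with the keywords in any order, e.g. (values here are hypothetical, borrowing names from later in the thread):
sample(sampler = NUTS(), logdensity = my_loglikelihood, N_or_isdone = 1000)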
This can certainly make high-level functions easier to use, and produce more readable code, but it can also come at a performance cost, and doesn’t generally fit with arbitrary functions or interfaces.
If the developers of Turing don’t want to support such an API, I wouldn’t think it unreasonable for you to make such a definition yourself, e.g., at the top of a Jupyter notebook. Strictly speaking, it’s type piracy, but there’s nothing wrong with defining your own convenience functions in a local environment.
Not particularly. People liked PyTorch because eager mode was more intuitive and produced readable error messages, and also because it was more readable than TensorFlow. These are the main reasons any ML developer will give if you ask them why they switched.
It’s been 3 years since I learned Julia and if it weren’t for the package manager I’d go back to Python for this feature alone. I like it when my models are readable and I can check them for bugs.
You can easily implement your own keyword-only wrappers to get exactly the behavior you want. If you’re worried about it being type piracy, or don’t want to define it over and over again in every notebook, put it in a separate package.
The high-level automated solver interface is the funniest thing. I never use it. The other devs never use it. In fact, many of the devs dislike it because it naturally has a bit of overhead (and some type instability due to how it currently chooses the algorithm, etc.).
But every time I talk to random users, it’s the best feature: with DifferentialEquations.jl you just call solve(prob), so much easier than learning ode45 and ode15s and trying to learn what “stiff” is. And when I see “semi-power users” choose algorithms, they often choose a worse one than the default algorithm.
I almost got rid of it in like 2018 because it was too suboptimal in my mind. But it’s more optimal than what most people would do, so it’s a killer feature.
How can I give a million likes to this
Chris for BDFL 2024
Excuse my naivete, but I had assumed that the “Julian” approach to something like what you’ve described would be to simply add a type to the positional argument:
function LinearAlgebra.eigvals( A::Hamiltonian )
    # content
end

function LinearAlgebra.eigvals( A::Liouvillian )
    # content
end
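For concreteness, a minimal runnable sketch of that dispatch-on-type idea (Hamiltonian and Liouvillian here are hypothetical wrapper structs, not from any particular package):
using LinearAlgebra

# The structure lives in the type, not in the variable name
struct Hamiltonian
    H::Matrix{ComplexF64}
end
struct Liouvillian
    L::Matrix{ComplexF64}
end

LinearAlgebra.eigvals(A::Hamiltonian) = eigvals(Hermitian(A.H))  # exploit Hermitian structure
LinearAlgebra.eigvals(A::Liouvillian) = eigvals(A.L)             # generic dense eigenvalues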
I hadn’t even considered that it was possible to “overwrite” a positional argument’s name in a Julia method. I can accept that my desire might require a Julia 2.0, and thus will never happen, but it is slightly disappointing to me that that functionality was preferred over allowing LinearAlgebra.eigvals( A=stiff_matrix ) usage. I’m sure there are many more functionalities and capabilities enabled beyond just this single tradeoff, but if it were simply “one or the other”, then it seems more natural to follow the form common in mathematical writing:
A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector $\mathbf{v}$ that obeys
$$\mathbf{A}\mathbf{v} = \lambda \mathbf{B}\mathbf{v},$$
where $\mathbf{A}$ is a sparse positive-definite matrix and $\mathbf{B}$ is a sparse diagonal matrix.
$\mathbf{A}$ might in fact be a stiffness matrix (or an assembly operator), and $\mathbf{B}$ a mass matrix (operator), but the equation (function) isn’t fundamentally different if you write it as $\mathbf{K}\mathbf{d} = \lambda \mathbf{M}\mathbf{d}$. In my worldview, the information about the structure of $\mathbf{A}$ and $\mathbf{B}$ should be communicated in the type (i.e., where _ is a …), not in the variable name.
All that being said, I view this more as a “nice-to-have” than a “blocker”; I just use comments in my code to get around readability issues when I see them:
# Solve Aϕ=λBϕ, where A = stiff_matrix, B = mass_matrix
Arpack.eigs( stiff_matrix, mass_matrix; nev=1, which=:SR, sigma=1.0 )[1]  # [1]: the eigenvalues (Julia is 1-based)
It seems obvious what’s going on in this case. Is there another example where named arguments would help?
The Julia community could also adopt Plasma’s motto: “Simple by default, powerful when needed.”
Well actually, the problem is that it has a 50/50 shot at being the wrong order for NUTS() and model (I can never remember which comes first).
I mean, as someone who didn’t write @ParadaCarleton’s code, when I saw
sample(rng=Xoshiro(0), sampler=NUTS(), model=my_loglikelihood)
I immediately was able to read what he wrote:
“Ok, so use Xoshiro(0) as the random number generator, NUTS() is the sampler, and use my_loglikelihood as the model.”
If instead I see:
sample(Xoshiro(0), NUTS(), my_loglikelihood)
Then I need to do the following:
- Search Google (or my repo) to find which package(s) contain a sample() method, so I can then
- Read the API reference doc for sample() to see what the arguments are.
Of course, if I’m somewhere with no / poor internet connection and the docs are only available online, then I’m stuck.
The first example is just so much easier for me to read.
I don’t think anybody is disputing the potential value of designing an API to make heavy use of keyword arguments; however, that goal has (almost) nothing to do with the specific design proposal of making positional arguments named.
I would conjecture that for nearly all functions that would benefit from a kwarg-only API, the extra overhead of a single function call (and of defining foo(; kwargs...) = foo(values(kwargs)...)) is completely negligible.
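For concreteness, a minimal sketch of that forwarding pattern (foo is a made-up example; note that this particular one-liner splats the keyword values in the order the caller wrote them, so it does not by itself make calls order-independent):
foo(a, b) = a - b                          # existing positional method
foo(; kwargs...) = foo(values(kwargs)...)  # keyword front end: one extra call

foo(1, 2)          # -1
foo(a = 1, b = 2)  # -1
foo(b = 2, a = 1)  # 1 (values splat in the given order, so this calls foo(2, 1))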
You mean the case when you’re writing it rather than reading it?
Tangentially, Python has also found it valuable to have the option to specify positional-only arguments: PEP 570 – Python Positional-Only Parameters | peps.python.org
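For comparison, a small Julia sketch of the status quo (h is a made-up function): every positional argument is already positional-only, and every keyword is keyword-only.
h(x; y = 1) = x + y   # x can only be passed positionally, y only by keyword
h(2)                  # 3
h(2; y = 10)          # 12
# h(x = 2) throws a MethodError: x cannot be passed by keyword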
I’ve tackled this before (and so have a lot of other much smarter people), and no, using keywords to specify positional arguments out of order really relies on a function having 1 method for each unique set of names. Simple example:
foo(a::Int, b::Float64) = a+b
foo(b::Float64, a::Int) = b-a
@keywordedpositional foo(b=1.5, a=2) # ambiguity error
KeywordCalls.jl doesn’t raise an error in such a case; it uses the position of the keywords, which defeats the purpose of using keywords to shuffle positional arguments and adds an undesired layer of complexity to dispatch (an f(::Int, ::Int) type signature is not enough; the keyword names are additionally needed to determine which 1 of the 2 methods is called).
# Adapting front page example to have >1 method for arguments (a,b)
using KeywordCalls
f(nt::NamedTuple{(:b, :a)}) = println("Calling f(b = ", nt.b,",a = ", nt.a, ")")
f(nt::NamedTuple{(:a, :b)}) = println("Calling f(a = ", nt.a,",b = ", nt.b, ")")
@kwcall f(b,a) # writing f(a, b) makes no difference here
#^ but if only 1 method exists, the order should match that method
f(a=1,b=2) # Calling f(a = 1,b = 2)
f(b=2,a=1) # Calling f(b = 2,a = 1)
Was also just thinking about that. E.g., R matches unnamed arguments by position, but named ones by name:
f <- function(x, y = 1) { ... }
# Can be called as
f(1.0) # same as f(x = 1.0, y = 1)
f(y = 2, 3) # same as f(x = 3, y = 2)
f(y = 2, x = 1) # same as f(x = 1, y = 2)
Arguably, this can become a bit confusing when mixing positional and named arguments arbitrarily. On the other hand, it raises the following question for Julia:
julia> (; a = 1.0, b = 2) |> typeof
NamedTuple{(:a, :b), Tuple{Float64, Int64}}
julia> (; b = 2, a = 1.0) |> typeof
NamedTuple{(:b, :a), Tuple{Int64, Float64}}
Should this actually be the same type (and value, for that matter)? I.e., should (; b = 2, a = 1.0) == (; a = 1.0, b = 2) be true?
no, but the associated function calls with those NamedTuples as kwargs should be equivalent
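A quick illustration of that distinction (g here is a made-up example):
g(; a, b) = (a, b)

nt1 = (; a = 1.0, b = 2)
nt2 = (; b = 2, a = 1.0)

g(; nt1...) == g(; nt2...)   # true: keyword splatting binds by name, so order is irrelevant
nt1 == nt2                   # false: NamedTuple equality is sensitive to field order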