No, because that syntax is already parseable & used in some macros in the ecosystem, so reassigning that would be breaking.
It does not return a partially applied function; it returns an anonymous function that has some arguments already set.
Having it appear as a function call is beneficial because that’s exactly what it represents. Having distinct syntax for the same thing is confusing.
I’ve followed this discussion for a while now, and since it’s very clear by now that /> and \> have no benefit over the _-lambdas (implementation-wise, they also require special-casing in the parser), I fail to see how having them in addition to _ is useful.
I absolutely think autocomplete should be a priority! Possibly the top one! But as outlined in this post, I’ve begun to think that all that’s necessary to inform a good autocomplete is the existence of a piping operator, and first-class treatment of partial function application: by making it a proper part of the language, instead of some random package, the effort to make autocomplete work with it will be justified. We already have the former; it seems just a matter of getting the latter.
The one thing I’m trying to figure out is how to make _ placeholder partial application syntax return a function of zero arguments. That seems to be the only remaining leg up that the frontfix/backfix operators have over it.
Option 1
Where _... is used to slurp args, this could do the trick:
my_func(x) = x
this_is_spartaaahh = my_func(:aaahh!, _...)
this_is_spartaaahh()
Or similarly, slurping kwargs:
zip_zip_zip = my_func(:zip!; _...)
zip_zip_zip()
Not technically a partial function of zero arguments, but it at least allows zero arguments.
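For reference, here is a sketch of what the positional example might lower to in current syntax (this is the proposal’s intent as I read it, not any real lowering; the keyword version would be analogous with kwargs...):
this_is_spartaaahh = (args...) -> my_func(:aaahh!, args...)
this_is_spartaaahh()  # == my_func(:aaahh!) == :aaahh!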
Option 2
A second idea could be this:
wen_moon = my_func(:🚀🚀🚀, ^_)
wen_moon()
This borrows from the vibe of regular expressions to indicate “no placeheld arguments”.
Option 3
A third idea:
turtle_turtle = my_func(_=:🐢🐢)
turtle_turtle()
That is to say, allow _ to consume the = operator and then trigger creation of a partial function.
This option could require deeper changes to the parser, so I’m less confident in it.
In any event, it seems there could be multiple syntax options that would allow underscore placeholder partial application syntax to create a function which can be called with zero arguments, i.e., a “partially applied” function that’s fully applied.
This means that the syntax fully generalizes, fixing across any argument position and for all numbers of arguments, and I can feel good throwing my full support behind it.
Since underscore partial application would make piping much more powerful and useful, after getting it properly into the language the next to-do is to get an operator that’s cleaner-looking and less awkward to type than |>.
Basically yes, that fixes most of the problem with Base’s |>.
But now we have this idea of first/last generalization of the pipe operator, and it feels kind of compelling. I think Julian programmers have an affinity for that generalization concept, and this />, \> notation seems to me to have a bit more of it than the Scala-style underscore placeholder.
Is “|> with _” equivalent to “/> and \>” as a feature of the language?
Wow @generated is like having a superpower in your back pocket. Thanks @CameronBieganek for that!
Also: its performance is great. I think I was mistaken previously.
Third-pass code for a typed, arbitrary-index partial application functor
struct Fix{F,fixinds,V<:Tuple,KW<:NamedTuple}
    f::F
    fixvals::V
    fixkwargs::KW
    Fix{F,fixinds,V,KW}(f, fixvals, fixkwargs) where {F,fixinds,V,KW} = begin
        orderok(a, b) = a < b || (a > 0 && b < 0) # not a perfect test ... just want args ordered left to right
        length(fixinds) > 1 && @assert all(orderok.(fixinds[begin:end-1], fixinds[begin+1:end]))
        new{F,fixinds,V,KW}(f, fixvals, fixkwargs)
    end
end

Fix{fixinds}(f, fixvals; fixkwargs...) where {fixinds} =
    Fix{typeof(f), fixinds, typeof(fixvals), typeof((; fixkwargs...))}(f, fixvals, (; fixkwargs...))

@generated (f::Fix{F,fixinds,V,KW})(args...; kwargs...) where {F,fixinds,V,KW} = begin
    combined_args = Vector{Expr}(undef, length(fixinds) + length(args))
    args_i = fixed_args_i = 1
    for i ∈ eachindex(combined_args)
        # a fixed value claims slot i either by positive index (counting from the
        # left) or by the equivalent negative index i - length - 1 (from the right)
        if any(==(fixinds[fixed_args_i]), (i, i - length(combined_args) - 1))
            combined_args[i] = :(f.fixvals[$fixed_args_i])
            fixed_args_i = clamp(fixed_args_i + 1, eachindex(fixinds))
        else
            combined_args[i] = :(args[$args_i])
            args_i += 1
        end
    end
    :(f.f($(combined_args...); kwargs..., f.fixkwargs...))
end
Call Fix{fixinds::Tuple}(f, fixvals::Tuple; fixkwargs...) to construct the functor.
- fixinds is a tuple of indices counting from the left (e.g. (1, 2, 3)); any indices counting from the right are negative (e.g. (1, 2, 3, -3, -2, -1)). Index 1 is the left-most argument, -1 the right-most.
Fixed keyword arguments override keyword arguments given at the call. Not sure if this is the right decision.
There is no check that the number of arguments or keyword arguments fits a profile; the combined argument list simply grows with the number of arguments passed at call time, with new arguments filling in the middle between the positively-indexed and the negatively-indexed fixed arguments.
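To illustrate that middle-filling behavior, a hypothetical example (the wrap name is mine):
wrap = Fix{(1, -1)}(string, ("<", ">"))  # fix the outermost argument slots
wrap("a")             # string("<", "a", ">") == "<a>"
wrap("a", "b", "c")   # string("<", "a", "b", "c", ">") == "<abc>"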
This could use some more road testing, for sure
FixFirst(f,x) is created by Fix{(1,)}(f, (x,)) which isa Fix{<:Any, (1,)}. It is presumed that such an object would be created by f(x, _...).
FixLast(f,x) is created by Fix{(-1,)}(f, (x,)) which isa Fix{<:Any, (-1,)}. It is presumed that such an object would be created by f(_..., x).
In many locations where Base.Fix2 is used, people will probably use f(_, x), which will create a Fix{<:Any, (2,)} object. When calling a function with two arguments, the fact that Fix{<:Any, (2,)} behaves as Fix{<:Any,(-1,)} means the type signature of a partial function which does the intended task is not unique. For the people who care about the types of the object, not sure if this matters.
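For concreteness, a couple of hypothetical equivalences with Base’s fixers, using the type above:
add3 = Fix{(1,)}(+, (3,))  # behaves like Base.Fix1(+, 3); would come from +(3, _...)
add3(4)                    # +(3, 4) == 7
eq2 = Fix{(2,)}(==, (2,))  # behaves like Base.Fix2(==, 2); would come from ==(_, 2)
eq2(2)                     # ==(2, 2) == true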
In terms of what can be done, yes, what can be done with piping + “1 call” underscore partial application is a superset of what can be done with /> and \> (assuming that a _... slurp is incorporated into underscore syntax).
The primary difference is that underscore syntax doesn’t assume which argument you will pipe into; it is manually typed. This is why its functionality is a superset, but it can also be less convenient.
Addressing this, I believe autocomplete will be useful for discovering functions that accept the argument’s type (most likely as a first or last argument), and it can automatically insert the underscore into the appropriate argument position.
How will this actually make it easier to autocomplete? Due to multiple dispatch, and the first argument not being special at the language level, there is no distinguishing feature to take advantage of here. Not with Base.Fix1, not with /> or \>, and not with _.
To have really good autocomplete in julia, you need to know the argument types being passed in, which means at least a partial run of type inference to select possible methods. Special syntax for fixing the first (or any argument, really) in place does not help with actually deciding whether a method taking the supposed number of arguments even exists in the first place and thus can’t be the deciding factor for whether or not some autocomplete should/can show the method.
Why? _ as proposed in the PR is literally a placeholder for an argument. It’s not at all related to creating functions that don’t take anything.
Also, there is already syntax for anonymous functions taking zero arguments:
julia> foo(x) = x+1
foo (generic function with 1 method)
julia> () -> foo(2)
#3 (generic function with 1 method)
julia> ans()
3
But seeing as this is completely unrelated to piping-like workflows (there’s no argument to pass in after all), I don’t see how that should have an impact on the _ PR or this discussion at all.
_... already means something - slurp the splatted arguments and ignore them:
julia> foo(x; _...) = x
foo (generic function with 1 method)
julia> foo(1; bar=1, baz=2)
1
Your proposal about _... feels like it needlessly requires the argument list of a function to be declared at its callsites, instead of at the definition of the function itself. This adds mental load when parsing what a given expression does, since now you have to read the whole call to even figure out how many arguments the resulting function takes.
This is not a problem with regular _ because there is no concept of “splatting something I ignore”. So this:
foo = bar(_, b, baz; _...)
Quite literally already means
foo = (x,y) -> bar(x, b, baz; y...)
There is no contextually dependent different semantic of _ here and adding one seems really confusing to me. It would also prevent expressing “I want to splat keyword arguments” with this syntax, which seems a bit odd to me to disallow.
Your computer (most likely) is clocked in the gigahertz range, meaning one instruction every ~nanosecond or faster. Getting a result in that small range is VERY likely to mean that the compiler completely folded any sort of computation away and just inlined the return of that constant. Your benchmark is not representative.
Again, to do that autocomplete needs to know the type of the object and at least has to run type inference, which does not really make sense to do when the function you’re currently writing does not parse correctly, seeing as it’s literally incomplete syntax you wish to complete.
I’m all in favor of improving autocomplete, but please, let’s stay realistic and acknowledge the true problems autocomplete faces, instead of shoehorning in a feature that does not add anything to solve those core problems. Your proposal has moved from “I want to write/autocomplete code like in an OOP language” to “oh, this fancy syntax can do unrelated thing X as well!”, which to me just seems like you’re trying to sell the syntax instead of digging into why autocomplete with the existing semantics is hard. Surface-level syntax changes have no impact on that hardness, and it is the reason previous proposals to make Julia more OOP-like syntax-wise have failed.
Argument positions may not be dictated by specific language features the way class methods are in an OO language, but it is still typical to place arguments in certain positions because it’s good practice. A rudimentary autocomplete could assume the chained object will likely take the first argument position, or the last; this would cover maybe 80% of use cases.
Moreover, and more importantly, it would allow more tightly specialized methods to float to the top of the list. When working with a distribution d = Beta(1, 2), I should be able to type d |> and see that specialized methods such as pdfsquaredL2norm, which specialize on an argument ::Beta, appear. I shared further thoughts here:
In recognition that underscore placeholder syntax is good for chaining, but is actually sugar for partial function application, we should think through how to make it do that job well too. Because if we don’t, we will end up with what was almost a solution to a bunch of other problems, but wasn’t quite good enough because we didn’t think it through. Maybe it’s just the engineer in me, but I have a bias toward trying to solve problems as generally as possible whenever I can.
This is only true when it’s an lvalue. Same could be said about _ in those situations.
When talking about underscore placeholder syntax, the discussion is about how to treat _ when it’s an rvalue, as an argument to function calls. Placeholder syntax builds out a partial function and uses the position of the _ as a placeholder. I simply propose that _... allow similar treatment, but for varargs.
The opposite, actually. For example, f=my_func(x, _...) would allow me to call f(a, b, c) and it would execute my_func(x, a, b, c). The way I’m proposing it, placeholder _... does not signify arguments you ignore, but vararg arguments you will fill in.
Thus, my_func(x, _..., y; _...) is a function (args...; kwargs...) -> my_func(x, args..., y; kwargs...).
I think your confusion is over my example, which showed that the partial function this_is_spartaaahh = my_func(:aaahh!, _...) could be called with zero arguments as this_is_spartaaahh(). This is not because _... signified ignoring arguments, but instead, simply an artifact of the fact that varargs are allowed to have zero length. You could easily make a vararg partial function which doesn’t permit zero arguments, e.g. f=my_func(x, _, _, _...) which would require at least two arguments.
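In current syntax, that last example would correspond to something like:
f = (a, b, rest...) -> my_func(x, a, b, rest...)  # hypothetical lowering of my_func(x, _, _, _...)
# f() and f(1) throw a MethodError; f(1, 2), f(1, 2, 3), ... all work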
Yes, I believe this is the point. I don’t want any runtime computation whatsoever for something that just calls another function.
If the IDE can’t determine the type of the variable that you just typed because it’s part of an incomplete blob, and therefore cannot autocomplete, then how do the OOP guys get functioning autocomplete?
I don’t have any domain-specific knowledge of autocomplete, but I’ll walk you through my thinking.
Take the example above, d = Beta(1, 2), a type from Distributions.jl. The object d has been declared to have type Beta, so when I type d |> the IDE should know that a) this is a Beta object and b) I’m about to pass it to a function (and most likely a partial function); so it should determine the available methods such as kldivergence and invlogccdf to show. After passing it through some transformation functions, if they are type-stable, the IDE should still know the type and be able to determine the available methods.
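As a rough sketch of the method query the IDE would need to run, assuming it somehow already knew the type of d:
using Distributions, InteractiveUtils
d = Beta(1, 2)
methodswith(typeof(d); supertypes=true)  # methods dispatching on a Beta-typed (or supertype) argument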
To me, it feels like the missing link to a respectable autocomplete is the ability to tell the IDE, in a way that is core to the language (and therefore worth the time and effort to develop around), what type of object I am going to call a method on before I begin looking for methods.
Am I on the right track, or am I deluded? If I’m deluded, what is it that OOP languages have that Julia doesn’t, that allows their autocomplete to work?
Literally the only reason the compiler was able to fold your computation was because the input to your Fix thing was part of the benchmarking expression. Here, let me help you out:
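Presumably the point is something along these lines (a sketch with BenchmarkTools, not the original post’s benchmark):
using BenchmarkTools
f = Fix{(1,)}(+, (1,))
@btime $f(2)      # all inputs are compile-time constants: likely folds to a constant
x = Ref(2)
@btime $f($x[])   # an opaque input defeats constant folding and times the real call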
It’s not better. The only case where it could be better is if everything is a tuple, always, which is just not a realistic use case.
To me your reply just shows that you’re ignorant about what you’re actually defining and how that actually works. This is not convincing.
This again requires the autocomplete to know that d is a Beta, which is information you don’t have. Constructors are (sadly) not even required to return an object of their type, meaning you have to run type inference, as I’ve repeated multiple times now, to even begin checking which methods can possibly receive d.
Again, _ has no relation whatsoever to the “no argument” case, because you can’t pipe anything into functions that don’t take any arguments. There is no generalization possible here: the 0-argument case is not a generalization of the 1-argument case, nor of the n-argument case.
This already has syntax in the form of f(d...) = my_func(x, d...), which is MUCH clearer about what’s going on than having to read the second part, since ... already has an established meaning.
Because they are in a statically typed language where the type of every single object is determined just by having a method/function in the first place. In a statically typed language, not knowing the type of a variable is a compiler error. It is not in julia.
It cannot, because Beta at the syntax level is just another function that can return an object of any type. Running at least type inference is required to know this.
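For example (a contrived type, purely to illustrate the point):
struct Weird end
Weird(x) = x > 0 ? "positive" : -1  # an outer constructor is free to return anything
Weird(2)   # "positive", a String
Weird(-2)  # -1, an Int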
Whether or not the function was created with _ or not has no bearing on which methods are selected. To the compiler, these two things are almost exactly the same, save for the name:
julia> f = (a,b) -> a+b
#3 (generic function with 1 method)
julia> g(a,b) = a+b
g (generic function with 1 method)
Whether f was created like f = _ + _ or with the explicit anonymous function syntax is irrelevant - the compiler never even sees the difference.
This already exists - annotate your types on every variable. Of course, this completely removes the ability of your code to be reused & generic, which is what you typically get in a static language. If you don’t want to do that, and instead run type inference on every step, you AT LEAST require partial type inference to run up to the invalid expression - but again, that has NOTHING to do with whether you have fancy syntax for defining partially applied functions or not.
I will just repeat myself from above: You want a specific syntax and ascribe to that syntax mythical abilities that have nothing to do whatsoever with the actual semantic problems underneath. OOP languages can have their easy autocomplete on their syntax because they are statically typed; not the other way around.
First, even if you disagree with a proposal, please do not use language like that. Well-written and detailed proposals like this deserve to be heard and discussed in good faith, even if they are not implemented.
Simply lowering something without too much transformation is common in Julia. Consider e.g. [] (lowered to hvcat or variants, depending on content).
I think that lowering \> and /> with the proposed associativity and precedence rules into a neutrally named generic function would be great. Then a package could take that and implement FixFirst and FixLast as described, or, if it pleases, do something completely different.
We could do all the Julia benchmarks and include compile time if you prefer.
Is running type inference problematic?
Never said it did.
The first bit of information you need, before you can begin your search for methods that dispatch on your object, is a) what the object is, and b) the fact that you are about to call a function on it. That’s what piping + fancy partial application syntax provide, in an order that’s convenient (and therefore accessible and likely to be used) for the human-machine interaction in question.
I’m not claiming to solve autocomplete, not by a long shot. I’m just hoping to get over one of the first hurdles so that it can be in reaching distance.
Perhaps I shall inform this person that they are not allowed to use autocomplete in Python, because it’s not statically typed.
The thread now mixes several things:
1. the proposed syntax itself,
2. claims about the necessity and feasibility of autocompletion,
3. various side-discussions about performance and implementation issues.
Personally, I think that the interesting part is 1., while 2. and 3. are distractions that sidetrack readers from something that would be great to add to the language in its essential form (a pair of operators with the proposed syntax and precedence, lowering into a generic function for which packages could then define methods).
Sorry, but you missed that I jumped ship, and am now in support of essentially the proposal of #24990 (see reasoning here), with proposal for a truly generalized Fix type (description here). Still a rough draft.
In short, if done right, underscore placeholder syntax can lower into typed partial application functors more generally (and more legibly) than my original proposal for \> and />, while treating the parser with greater kindness.
Not necessarily, but right now there is no capability to run it on broken/incomplete expressions. That has to be built first, after which you’re still left with the question of what to do in functions that don’t have their types annotated, where type inference won’t help in the slightest because it’ll end up as giving you every possible function the IDE could know about.
And I’m arguing that no, they don’t solve either problem. You need (at least partial) type inference to know what type an object is. Once you’ve created an object, of course you’re going to pass it to a function (or return it, but I’ll assume that this is not wanted), what else are you going to do? Even operators are just functions in julia. Pretty much anything can be made callable and is thus potentially the thing to use the object on.
That’s ok - but do see that the surface level syntax is not the “first hurdle” to overcome and that it doesn’t solve the fundamental problem of “there is not enough information available to the IDE to autocomplete with”.
In fact, if you start writing a function name, editors using LanguageServer.jl are already suggesting potential names, so autocomplete in that regard already works - even without type inference. It does need that additional context of “here’s the first few characters” to filter out potential matches, but that’s already how autocomplete in other languages works anyway.
Polemic rhetoric aside, you could also just ask them how they continue to develop one of the big plotting packages in Julia, Makie.jl, if the autocomplete is so unbearable. They’re a main contributor, after all, and even published a paper about it.
That post is also 4(!) years old by now and a lot has changed since then. Atom/Juno was still widely used. There was no VSCode extension. LanguageServer.jl was barely beginning to start to use the very first versions of SymbolServer.jl, which is what’s used right now for providing the symbols for autocompletion in the first place. I don’t think the thread is representative of the problems faced today.
In regards to python’s autocomplete - try this and tell me what you get:
[sukera@tempman ~]$ python3
Python 3.10.8 (main, Oct 13 2022, 21:13:48) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo:
...     x = 1
...
>>> Foo().<TAB>
or this:
[sukera@tempman pytest]$ python3
Python 3.10.8 (main, Oct 13 2022, 21:13:48) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class Foo:
...     def test(self):
...         return 1
...
>>> Foo().tes<TAB>
I can press <TAB> as many times as I like, I can’t get it to autocomplete. I can get that to work in ipython, mind you, but the exact same also already works in our REPL. What I can’t get to work even in ipython is then writing a function:
def baz(x):
    x.<TAB>
and it won’t autocomplete. How could it? It has no idea what type x might be. This is the exact problem I’m trying to convey is hard. It will do it if I write def baz(x: Foo):, because then it knows the type again, which is exactly the behavior I see in julia with e.g. VSCode and LanguageServer.jl once it knows the type - just like in any other statically typed language. Annotating the type however removes exactly that genericity we so desire for dynamic workflows.
Was not intending anything; just a poor choice of words. Perhaps “rigamarole” is closer to what I meant.
I think the proposal to have a bunch of currying capacity is a good idea; I just think the piping situation is basically a syntactic transform and should be attacked at that level.
Indeed, I’m glad to see someone bring this up; the assertion that piping is somehow going to improve autocomplete seems a bit optimistic, to put it mildly.
In an IDE like VSCode, if you type something like
foo |> TAB
under what circumstances does it gain any information over just
TAB
alone? Only when it knows the type of foo and can search through all the methods of all the functions that are specialized for that type OR take a generic type for the first argument. The number of functions that take a generic type for a first argument will be very high because of generic duck typing in Julia, so it isn’t going to be super helpful. You’ll often enough wind up with 26,000 options.
Furthermore, there will be many if not most cases where the type of foo can’t be figured out, such as:
function bar(foo, baz)
    foo |> TAB
What it might do is help with top-level scripts where global vars are being used. But if you care in any way about performance, you’re still writing function barriers for top-level scripts. Oddly, the function barrier helps the compiler but leaves the IDE with no idea what’s up.
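A minimal sketch of that last point (the names here are mine):
data = rand(Bool) ? [1, 2, 3] : [1.0, 2.0, 3.0]  # element type only known at runtime
process(xs) = sum(abs2, xs)  # function barrier: the compiler specializes per call,
                             # but the IDE sees an unannotated xs and knows nothing
process(data)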
@Sukera If I understand your concern correctly, you are saying that adding a syntax like this does not fundamentally solve the problem of autocomplete since it’s still necessary to do type inference. That makes sense to me. However, it does seem plausible to me that syntax like this could make for a more convenient autocomplete UI experience at the user level, conditional on further engineering work under the hood—would you agree with that?
I think multiple contributors to the discussion have expressed what is also my sentiment very well, that the original proposal seems like a surprisingly elegant pair of operators, purely for syntactic convenience, and does not have to necessarily have the full power of being a perfectly consistent currying functor etc. etc.
They would not be the first operators to bind more tightly than function calls; both type annotation (::) and broadcasting (.) do so, and I believe the do keyword does as well.
So, it seems there are many who would be excited about syntactic sugar for FrontFix and BackFix (or FrontPipe and BackPipe?).
Most of the concerns I’m seeing seem to revolve around either 1. some of the desired power of the operator, or 2. some of the claimed second-order benefits of the operator, when neither of these points is actually the key selling feature, which is a nice way to thread objects through some common functional patterns.
What if the transformation is the following, using the same $ as function application notation?
any_expr /> any_callable becomes args -> any_callable $ {any_expr, args...}, and vice versa for backfix. Thus, for a one-argument function foo, x /> foo() == x |> foo == x \> foo().
Using the example that breaks the powerful bells-and-whistles version from @CameronBieganek:
[4, 9, 16] \> map $ {sqrt} \> filter $ {iseven}
No, I do not. \> or /> only exclude functions that don’t take any arguments at all from being possible matches, which aren’t considered for piping in the first place - they don’t take arguments after all. The potential engineering work by itself can inform possible method completions, but those are not going to be improved/further culled by knowing that one of the arguments to be piped is of a given type (and if it can, only in very limited cases, where the first/last argument is not already of the same type in all methods and dispatch is used for disambiguation).
The trouble with your proposal is that I can make any object callable, and that is not information available just from parsing:
julia> struct FooBar end
julia> (f::FooBar)(x) = "A `FooBar` instance got called with argument $(x)!"
julia> foobar = FooBar()
FooBar()
julia> foobar(42)
"A `FooBar` instance got called with argument 42!"
So to resolve that, you already need to know the boundary of where to stop resolving/applying the call you want to move around, which is the most contentious point of the _ proposal (Julia will happily parse any symbol followed directly by parentheses as an Expr(:call, ...), but that has no bearing on whether or not the call will succeed; you either need to know ahead of time from some other information, or pick semantics for when to stop moving calls around).
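To make the boundary question concrete (hypothetical underscore code, shown as comments since it is not valid syntax today):
# Where should the implied anonymous function end? Both readings are plausible:
#   g(h(_))  could mean  g(x -> h(x))   (stop at the innermost enclosing call)
#   g(h(_))  could mean  x -> g(h(x))   (keep widening to the outer call)
# The _ PR has to pick one rule; its tight "one call" rule yields the former.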
Another problem is that composing that with |> (which we can’t remove; that would be breaking) is pretty hard - you can’t easily change its lowering because it already has defined semantics:
julia> Meta.parse("a |> b |> c |> d")
:(((a |> b) |> c) |> d)
julia> Meta.parse("a |> b |> c |> d") |> dump
Expr
  head: Symbol call
  args: Array{Any}((3,))
    1: Symbol |>
    2: Expr
      head: Symbol call
      args: Array{Any}((3,))
        1: Symbol |>
        2: Expr
          head: Symbol call
          args: Array{Any}((3,))
            1: Symbol |>
            2: Symbol a
            3: Symbol b
        3: Symbol c
    3: Symbol d
Possibly? The problem is that we don’t have any notion of “function application operator” and thinking of foo() as doing that is not correct, as (I think) has been pointed out either somewhere far above or in the other thread.
If you want to see it as just a syntax transform, the best way to test your idea out is to write a macro to do it, since that’s exactly what macros already do. Whether or not such a transform is wanted/useful in Base is then a different discussion, unrelated to whether the idea works in principle.
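For instance, a minimal sketch of a front-fix transform as a macro (the name @fixfirst is hypothetical, and keyword-argument handling is omitted for brevity):
macro fixfirst(value, call)
    call isa Expr && call.head === :call ||
        error("@fixfirst expects a function call expression")
    # splice the piped value in as the first positional argument
    esc(Expr(:call, call.args[1], value, call.args[2:end]...))
end

@fixfirst 4 string("a", "b")  # == string(4, "a", "b") == "4ab"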