Fixing the Piping/Chaining Issue

as opposed to

filter(iseven,map(sqrt,[4,9,16]))

or something like:

[4,9,16] |> ComposeChain(Curry(map,sqrt),Curry(filter, iseven))

@dlakelan I can only speak for myself, but I would prefer a threading syntax over any of those other options. I usually default to Chain.jl in these situations.

@chain [4, 9, 16] begin
    map(sqrt, _)
    filter(iseven, _)
end

But if into / over-like syntax existed I would probably use that.

I very much enjoy thinking functionally, but when I type something like this
filter(iseven,map(sqrt,[4,9,16]))

I will quite literally type it backwards right-to-left starting with the innermost function call. It is not super ergonomic but it’s how my brain works haha. I have full conviction that somewhere buried in this (long) thread we have the pieces to build a useful piece of syntax to make these patterns more ergonomic.

Indeed, I really like Chain and it’s imported by DataFramesMeta.jl so I wind up with it by default almost everywhere I’d want it.

But if we want syntax then I think the right thing to do is propose an alternative macro:

@piperator [4,9,16] |> map(sqrt,_) |> dothing(_,[1,2,3]) |> somesuch(a,_,b)

In fact I think we already have Pipe.jl that does this right?

and if you want a curried function, how about

@currierandives f(_,b) |> q(a,_,c) |> filter(bar,vcat(b,_))

Which results via syntax transformation in something like:

x -> filter(bar, vcat(b, q(a, f(x, b), c)))

I like Chain.jl a lot, and I have nothing against Pipe.jl. But I can’t help but come back to one of @uniment 's opening statements:

Wouldn’t it be nice to type

x = mystruct into foo(arg1, arg2) into bar(arg3, arg4)

As a built-in? Even using these macros, we need to remember to import them everywhere; it’s still a few more characters to type; we need to trust that they will continue to be maintained; and other packages will not necessarily be designed to interoperate smoothly by default with whatever syntax these third-party macros choose.

Obviously the following comparison is extreme maybe to the point of uselessness, but imagine if all we had were while loops and you had to use a third-party macro to run a for loop. This is somewhat how I feel about the current underpowered state of currying/piping.

2 Likes

That’s quite impressive. Perhaps they can update their original question with the answer, as I have not yet found it.

Let’s see if we can do with Julia what we can do with Python…

Python:

Let’s define a class:

class Foo:
    x = 1
    def bar(self):
        return self.x

Tab-complete shows the property x as well as the member method bar.

Julia:

The Julian way of writing this is to recognize that Foo’s member method bar probably generalizes across a range of other Foo-like types, so we define an AbstractFoo type and specialize our methods on it. (We could specialize to Foo, and that would do exactly the same thing as having a class member method, but that’s not as Julian.) First we write Foo-specialized methods which directly access its fields, and then write methods of AbstractFoo to call them.

abstract type AbstractFoo end
Base.@kwdef struct Foo <: AbstractFoo
    x = 1
end
getx(foo::Foo) = foo.x
bar(foo::AbstractFoo) = getx(foo)

Because we have syntax sugar for getproperty (namely, the . dot), and because I have typed Foo(), autocomplete knows to call propertynames on this object (or something similar). Because I have typed the object description, and typed a dot, autocomplete has the information it needs to help me discover x.

However, it’s not Julian to access the object’s properties directly; the preferred idiom is instead to call methods on it. So let’s find those methods.

Unfortunately, the situation isn’t so good for helping me discover either bar(::Foo), or getx(::Foo):

What I claim is simple: that one day, autocomplete will recognize that I intend to call a function that specializes on a Foo, and it will help me find it. However, because the pipe operator can only call single-argument functions, at the moment such an autocomplete would not be very useful; we need a preferred partial application technique first, so that we can capture the whole range of possible method signatures.

Unless, of course, we should just solve the problem by making every method a dotted member method:

abstract type AbstractFoo end
Base.@kwdef struct Foo <: AbstractFoo
    x = 1
    bar = function(self) self.x end
end
Base.getproperty(x::AbstractFoo, n::Symbol) = begin
    if getfield(x,n) isa Function
        return (a...;k...)->getfield(x,n)(x,a...;k...)
    end
    getfield(x,n)
end
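Here it is in action (repeating the definitions above so the snippet runs standalone); `foo.bar` evaluates to a closure that inserts `foo` itself as the first argument:

```julia
abstract type AbstractFoo end
Base.@kwdef struct Foo <: AbstractFoo
    x = 1
    bar = function(self) self.x end
end

# Route property access through getfield; wrap Function-valued fields in a
# closure that passes the object itself as the first argument.
Base.getproperty(x::AbstractFoo, n::Symbol) = begin
    if getfield(x, n) isa Function
        return (a...; k...) -> getfield(x, n)(x, a...; k...)
    end
    getfield(x, n)
end

foo = Foo()
foo.x      # 1, a plain field access
foo.bar()  # 1, dispatched through the closure returned by getproperty
```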

Now we finally get method discoverability in Julia, but it shouldn’t be this hard (or entirely non-Julian) to do.

That’s not the problem I care about, because I do not care about solving impossible problems. I would have no time left for living life if I did.

You can consider a “member method” to be simply a function which is specialized to exactly the concrete type of the class. When hoping to tab-complete on member methods, I’m trying to find the most specialized functions! Why should I wish to hit tab to see all the methods which are not specialized to this object, nor to any object like it? I can just start typing random function names anyway.

Recall from inception the purpose of Julia, which is essentially to be a scripting language which compiles: to have the benefits of quick dev time and then quick run time. Scripting languages are quick and easy to develop in, partly because of not needing to assign types to objects, sure, but also because if you want to scratchpad something you can just casually start scratchpadding. You can hop into a REPL and experiment, and when you’ve found the appropriate methods, algorithms, and fragments of glue code you can copy-paste back into the editor; or you can hit CTRL+ENTER in VSCode, etc. It’s part of the preferred development style.

For sections of code where arguments are fully generic to ::Any, and if you’re also not scratchpadding the code and getting global vars from it, I don’t see why anyone would expect autocomplete to help.

I think these are two separate questions. If this is really a concern, then lobbying to get Chain.jl and Pipe.jl into Base seems reasonable. At the moment I don’t think there’s a real concern here. Both Chain.jl and Pipe.jl are under some kind of MIT license, it would always be possible for Julia Base or the julialang organization to absorb them in the future.

IMHO no. Why do we need special syntax for this when macros are perfect for special syntax? And the _ syntax is actually more understandable, IMHO. It’s also going to be annoying as hell to write macros that work with this syntax, and you would never again be able to use into or over as variable names, so it would break code that already uses them. Again I think it’s “julia is no longer at that stage” territory.

2 Likes

Note that the work of creating the @currierandives macro is about 90% done by Pipe.jl; you just need to add an anonymous-function wrapper to the output:

julia> using Pipe

julia> @macroexpand @pipe f(_,b) |> q(a,_,c) |> filter(bar,vcat(b,_))
:(filter(bar, vcat(b, q(a, f(_, b), c))))
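For the curious, the remaining 10% is small. Here is a self-contained sketch that does not reuse Pipe.jl internals (`subst` and `rewrite` are names I made up for illustration): it performs the same `_` substitution stage by stage, then wraps the whole pipeline in an anonymous function:

```julia
# Replace every bare `_` in an expression tree with `val`.
subst(ex, val) = ex === :_ ? val :
    ex isa Expr ? Expr(ex.head, (subst(a, val) for a in ex.args)...) : ex

# Walk `a |> b |> c` (left-associative), feeding each stage's rewritten
# expression into the next stage's `_`.
function rewrite(ex, val)
    if ex isa Expr && ex.head == :call && ex.args[1] == :(|>)
        rewrite(ex.args[3], rewrite(ex.args[2], val))
    else
        subst(ex, val)
    end
end

macro currierandives(ex)
    x = gensym(:x)
    esc(:($x -> $(rewrite(ex, x))))
end
```

So `@currierandives f(_, b) |> q(a, _, c)` expands to (modulo the gensym) `x -> q(a, f(x, b), c)`.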

@adienes What do you think?

I am leaning to support #24990, with the minor modification of supporting _... for varargs slurping to improve the underscore syntax’s generality.

@adienes It sounds like what you are proposing is basically “front pipe” and “back pipe”, implemented at the syntax level. I actually proposed that in the currying PR a couple years ago. :slight_smile:

It would be possible to have a syntax-level front pipe \> and back pipe \>> such that the parser translates

a \> f(b, c)

into

f(a, b, c)

and translates

c \>> f(a, b)

into

f(a, b, c)

The front and back pipe “syntactic operators” (I just made up that term) would be parsed with left associativity, so

a \> f(b, c) \> g(d, e)

would be equivalent to

(a \> f(b, c)) \> g(d, e)

which the parser would translate to

g(f(a, b, c), d, e)

But of course there are some weaknesses to this approach. As far as I know, if you only have syntactic front and back pipes, there’s no easy way to express something like this:

b |> f(a, _, c) # currying PR
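Side note: the front/back translation above can be approximated at runtime today with ordinary higher-order helpers (`front` and `back` are hypothetical names for illustration, not Base functions), at the cost of a closure instead of a pure syntax rewrite:

```julia
# front(f, args...): pipe the incoming value in as f's FIRST argument.
front(f, args...) = x -> f(x, args...)

# back(f, args...): pipe the incoming value in as f's LAST argument.
back(f, args...) = x -> f(args..., x)

[4, 9, 16] |> back(map, sqrt) |> back(filter, iseven)  # [2.0, 4.0]
"hello" |> front(replace, "l" => "L")                  # "heLLo"
```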
3 Likes

Ah so this is why you were so critical of my original proposal; you had already been a proponent of something nearly identical :wink: I don’t know though, underscore placeholder syntax is growing on me.

I’m currently trying to imagine how to express combining multiple objects, something like…

([1, 2, 3], ", ") |> join(_, _)

I know this doesn’t work; I’m trying to think of what the syntax could look like.

3 Likes

Simple:

([1, 2, 3], ", ") |> Splat(join(_, _))
help?> Splat
search: Splat splitpath display displaysize displayable redisplay

  Splat(f)

  Equivalent to

      my_splat(f) = args->f(args...)

  i.e. given a function returns a new function that takes one argument
  and splats its argument into the original function. This is useful as
  an adaptor to pass a multi-argument function in a context that expects
  a single argument, but passes a tuple as that single argument.
  Additionally has pretty printing.

  │ Julia 1.9
  │
  │  This function was introduced in Julia 1.9, replacing
  │  Base.splat(f).
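Concretely, since `join` itself is the function being splatted, no underscore syntax is needed here at all; `Base.splat` works on versions before 1.9 too:

```julia
# Splat turns the incoming tuple into positional arguments for join.
result = ([1, 2, 3], ", ") |> Base.splat(join)  # "1, 2, 3"
```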
5 Likes

Thanks for this! I’m not a fan of the flow involved with ?(x, y<tab>, so it’d be neat if this could be an autocomplete option when constructing Tuples of arguments… or maybe FrankenTuples…

Just a quick confirmation that the new Fix functor behaves as expected: running @code_llvm confirms it still compiles to the same thing.

julia> f() = map([1,2,3]) do x x+1 ; end
f (generic function with 1 method)

julia> g() = map(x->x+1,[1,2,3])
g (generic function with 1 method)

julia> h() = FixLast(map, [1,2,3])(x->x+1)
h (generic function with 1 method)

julia> i() = Fix{(2,)}(map, ([1,2,3],))(x->x+1)
i (generic function with 1 method)
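The `Fix`/`FixLast` definitions live in the PR under discussion; a minimal sketch of `FixLast` with the same call behavior (my own reconstruction for readers following along, not the PR’s code) would be:

```julia
# FixLast fixes the TRAILING arguments; the call supplies the leading one.
struct FixLast{F,A<:Tuple}
    f::F
    args::A
end
FixLast(f, args...) = FixLast(f, args)
(c::FixLast)(x) = c.f(x, c.args...)

h() = FixLast(map, [1, 2, 3])(x -> x + 1)  # == map(x -> x + 1, [1, 2, 3])
```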

result

julia> @code_llvm f()
;  @ REPL[17]:1 within `f`
; Function Attrs: uwtable
define nonnull {}* @julia_f_2255() #0 {
pass.2:
  %gcframe9 = alloca [4 x {}*], align 16
  %gcframe9.sub = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 0
  %0 = bitcast [4 x {}*]* %gcframe9 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(32) %0, i8 0, i32 32, i1 false)
  %1 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 2
  %2 = bitcast {}** %1 to { {}* }*
  %3 = call {}*** inttoptr (i64 1699154720 to {}*** ()*)() #3
; ┌ @ array.jl:126 within `vect`
; │┌ @ array.jl:679 within `_array_for` @ array.jl:676
; ││┌ @ abstractarray.jl:840 within `similar` @ abstractarray.jl:841
; │││┌ @ boot.jl:468 within `Array` @ boot.jl:459
      %4 = bitcast [4 x {}*]* %gcframe9 to i64*
      store i64 8, i64* %4, align 16
      %5 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 1
      %6 = bitcast {}** %5 to {}***
      %7 = load {}**, {}*** %3, align 8
      store {}** %7, {}*** %6, align 8
      %8 = bitcast {}*** %3 to {}***
      store {}** %gcframe9.sub, {}*** %8, align 8
      %9 = call nonnull {}* inttoptr (i64 1698961392 to {}* ({}*, i64)*)({}* inttoptr (i64 294838384 to {}*), i64 3)
      %10 = bitcast {}* %9 to i64**
      %11 = load i64*, i64** %10, align 8
; │└└└
; │┌ @ array.jl:966 within `setindex!`
    %12 = bitcast i64* %11 to <2 x i64>*
    store <2 x i64> <i64 1, i64 2>, <2 x i64>* %12, align 8
    %13 = getelementptr inbounds i64, i64* %11, i64 2
    store i64 3, i64* %13, align 8
; └└
; ┌ @ abstractarray.jl:2933 within `map`
; │┌ @ array.jl:716 within `collect_similar`
    store {}* %9, {}** %1, align 16
    %14 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 3
    store {}* %9, {}** %14, align 8
    %15 = call nonnull {}* @j__collect_2257({}* nonnull %9, { {}* }* nocapture readonly %2) #0
    %16 = load {}*, {}** %5, align 8
    %17 = bitcast {}*** %3 to {}**
    store {}* %16, {}** %17, align 8
; └└
  ret {}* %15
}

julia> @code_llvm g()
;  @ REPL[18]:1 within `g`
; Function Attrs: uwtable
define nonnull {}* @julia_g_2258() #0 {
pass.2:
  %gcframe9 = alloca [4 x {}*], align 16
  %gcframe9.sub = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 0
  %0 = bitcast [4 x {}*]* %gcframe9 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(32) %0, i8 0, i32 32, i1 false)
  %1 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 2
  %2 = bitcast {}** %1 to { {}* }*
  %3 = call {}*** inttoptr (i64 1699154720 to {}*** ()*)() #3
; ┌ @ array.jl:126 within `vect`
; │┌ @ array.jl:679 within `_array_for` @ array.jl:676
; ││┌ @ abstractarray.jl:840 within `similar` @ abstractarray.jl:841
; │││┌ @ boot.jl:468 within `Array` @ boot.jl:459
      %4 = bitcast [4 x {}*]* %gcframe9 to i64*
      store i64 8, i64* %4, align 16
      %5 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 1
      %6 = bitcast {}** %5 to {}***
      %7 = load {}**, {}*** %3, align 8
      store {}** %7, {}*** %6, align 8
      %8 = bitcast {}*** %3 to {}***
      store {}** %gcframe9.sub, {}*** %8, align 8
      %9 = call nonnull {}* inttoptr (i64 1698961392 to {}* ({}*, i64)*)({}* inttoptr (i64 294838384 to {}*), i64 3)
      %10 = bitcast {}* %9 to i64**
      %11 = load i64*, i64** %10, align 8
; │└└└
; │┌ @ array.jl:966 within `setindex!`
    %12 = bitcast i64* %11 to <2 x i64>*
    store <2 x i64> <i64 1, i64 2>, <2 x i64>* %12, align 8
    %13 = getelementptr inbounds i64, i64* %11, i64 2
    store i64 3, i64* %13, align 8
; └└
; ┌ @ abstractarray.jl:2933 within `map`
; │┌ @ array.jl:716 within `collect_similar`
    store {}* %9, {}** %1, align 16
    %14 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 3
    store {}* %9, {}** %14, align 8
    %15 = call nonnull {}* @j__collect_2260({}* nonnull %9, { {}* }* nocapture readonly %2) #0
    %16 = load {}*, {}** %5, align 8
    %17 = bitcast {}*** %3 to {}**
    store {}* %16, {}** %17, align 8
; └└
  ret {}* %15
}

julia> @code_llvm h()
;  @ REPL[19]:1 within `h`
; Function Attrs: uwtable
define nonnull {}* @julia_h_2261() #0 {
pass.2:
  %gcframe9 = alloca [4 x {}*], align 16
  %gcframe9.sub = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 0
  %0 = bitcast [4 x {}*]* %gcframe9 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(32) %0, i8 0, i32 32, i1 false)
  %1 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 2
  %2 = bitcast {}** %1 to { {}* }*
  %3 = call {}*** inttoptr (i64 1699154720 to {}*** ()*)() #3
; ┌ @ array.jl:126 within `vect`
; │┌ @ array.jl:679 within `_array_for` @ array.jl:676
; ││┌ @ abstractarray.jl:840 within `similar` @ abstractarray.jl:841
; │││┌ @ boot.jl:468 within `Array` @ boot.jl:459
      %4 = bitcast [4 x {}*]* %gcframe9 to i64*
      store i64 8, i64* %4, align 16
      %5 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 1
      %6 = bitcast {}** %5 to {}***
      %7 = load {}**, {}*** %3, align 8
      store {}** %7, {}*** %6, align 8
      %8 = bitcast {}*** %3 to {}***
      store {}** %gcframe9.sub, {}*** %8, align 8
      %9 = call nonnull {}* inttoptr (i64 1698961392 to {}* ({}*, i64)*)({}* inttoptr (i64 294838384 to {}*), i64 3)
      %10 = bitcast {}* %9 to i64**
      %11 = load i64*, i64** %10, align 8
; │└└└
; │┌ @ array.jl:966 within `setindex!`
    %12 = bitcast i64* %11 to <2 x i64>*
    store <2 x i64> <i64 1, i64 2>, <2 x i64>* %12, align 8
    %13 = getelementptr inbounds i64, i64* %11, i64 2
    store i64 3, i64* %13, align 8
; └└
; ┌ @ Untitled-6:4 within `FixLast`
; │┌ @ Untitled-6:4 within `#_#24`
; ││┌ @ abstractarray.jl:2933 within `map`
; │││┌ @ array.jl:716 within `collect_similar`
      store {}* %9, {}** %1, align 16
      %14 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 3
      store {}* %9, {}** %14, align 8
      %15 = call nonnull {}* @j__collect_2263({}* nonnull %9, { {}* }* nocapture readonly %2) #0
      %16 = load {}*, {}** %5, align 8
      %17 = bitcast {}*** %3 to {}**
      store {}* %16, {}** %17, align 8
; └└└└
  ret {}* %15
}

julia> @code_llvm i()
;  @ REPL[20]:1 within `i`
; Function Attrs: uwtable
define nonnull {}* @julia_i_2264() #0 {
pass.2:
  %gcframe9 = alloca [4 x {}*], align 16
  %gcframe9.sub = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 0
  %0 = bitcast [4 x {}*]* %gcframe9 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(32) %0, i8 0, i32 32, i1 false)
  %1 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 2
  %2 = bitcast {}** %1 to { {}* }*
  %3 = call {}*** inttoptr (i64 1699154720 to {}*** ()*)() #3
; ┌ @ array.jl:126 within `vect`
; │┌ @ array.jl:679 within `_array_for` @ array.jl:676
; ││┌ @ abstractarray.jl:840 within `similar` @ abstractarray.jl:841
; │││┌ @ boot.jl:468 within `Array` @ boot.jl:459
      %4 = bitcast [4 x {}*]* %gcframe9 to i64*
      store i64 8, i64* %4, align 16
      %5 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 1
      %6 = bitcast {}** %5 to {}***
      %7 = load {}**, {}*** %3, align 8
      store {}** %7, {}*** %6, align 8
      %8 = bitcast {}*** %3 to {}***
      store {}** %gcframe9.sub, {}*** %8, align 8
      %9 = call nonnull {}* inttoptr (i64 1698961392 to {}* ({}*, i64)*)({}* inttoptr (i64 294838384 to {}*), i64 3)
      %10 = bitcast {}* %9 to i64**
      %11 = load i64*, i64** %10, align 8
; │└└└
; │┌ @ array.jl:966 within `setindex!`
    %12 = bitcast i64* %11 to <2 x i64>*
    store <2 x i64> <i64 1, i64 2>, <2 x i64>* %12, align 8
    %13 = getelementptr inbounds i64, i64* %11, i64 2
    store i64 3, i64* %13, align 8
; └└
; ┌ @ Untitled-6:60 within `Fix`
; │┌ @ Untitled-6:60 within `#_#7`
; ││┌ @ Untitled-6:60 within `macro expansion`
; │││┌ @ abstractarray.jl:2933 within `map`
; ││││┌ @ array.jl:716 within `collect_similar`
       store {}* %9, {}** %1, align 16
       %14 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 3
       store {}* %9, {}** %14, align 8
       %15 = call nonnull {}* @j__collect_2266({}* nonnull %9, { {}* }* nocapture readonly %2) #0
       %16 = load {}*, {}** %5, align 8
       %17 = bitcast {}*** %3 to {}**
       store {}* %16, {}** %17, align 8
; └└└└└
  ret {}* %15
}
1 Like

That’s surprising and unusual. Do you know when it might not be done (except for errors/exceptions)? In the docs I find this constructor-line as part of a struct:

OrderedPair(x,y) = x > y ? error("out of order") : new(x,y)
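For context, that line is an inner constructor; the manual’s full example reads:

```julia
# From the Julia manual's Constructors section: the inner constructor
# either validates and calls new, or throws.
struct OrderedPair
    x::Real
    y::Real
    OrderedPair(x, y) = x > y ? error("out of order") : new(x, y)
end
```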

It returns an instance of OrderedPair, or:

julia> OrderedPair(2, 1)
ERROR: out of order
[..]

julia> typeof(ans)  # Strangely for one such case I got OrderedPair
DataType

I believe that in e.g. C++, constructors must construct an object of their class. We might want to restrict Julia to guarantee the same, i.e. that a constructor returns its own type (or throws an exception); and even if not, couldn’t we assume it for autocompletion purposes?

Can’t autocomplete be feasible, and useful, if it just returns the most likely operations? Even if the list is very long and we show all of it, it could at least be sensibly ordered. I actually care more about that (nr. 2) than about the operators (nr. 1). Is something like this available in VS Code or elsewhere (i.e. to look up methods, as meant here)?

1 Like

I think this is a good attitude to take toward autocomplete: where a conflict arises between accuracy and usefulness, unlike a compiler, the right choice is often to err toward usefulness. It’s like a Google search; being great most of the time is better than being perfect never.

This is a fine thing for an IDE, it’s not a good principle for a language.

Here’s a fun way to do currying that doesn’t require a macro or changes to the parser. It’s basically the same as using a FixArgs functor, but with nicer syntax.

Here is how it works. To curry a function foo, wrap it in c(), and then call c(foo) the same way that you would call a function under the underscore currying PR, except use .. instead of _.

struct FreeArg end
const .. = FreeArg()

function c(f)
    function(arg_specs...)
        function(free_args...)
            _, args = foldl(arg_specs; init=(free_args, ())) do (free_args, args), arg_spec
                if arg_spec == ..
                    arg, free_args... = free_args
                else
                    arg = arg_spec
                end
                (free_args, (args..., arg))
            end
            f(args...)
        end
    end
end

Here it is in action:

[4, 9, 16] |>
    c(map)(sqrt, ..) |>
    c(filter)(iseven, ..)

It works fairly well for piping, but not so well for small anonymous functions with operators. E.g., compare the following underscore anonymous functions

_.x
_ > 2

to how they would look with the c() function:

c(getproperty)(.., :x)
c(>)(.., 2)
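As an aside (not part of the original post), current Base already covers these two particular shapes without any new syntax, via `Base.Fix2`:

```julia
# Fix2 fixes the second argument of a two-argument function.
gt2  = Base.Fix2(>, 2)             # behaves like `_ > 2`
getx = Base.Fix2(getproperty, :x)  # behaves like `_.x`

gt2(3)                # true
getx((x = 1, y = 2))  # 1
```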

I haven’t done any performance testing. Of course this approach could be modified so that c(foo) returns a FixArgs functor instead of an anonymous function.

If I hijack Base.adjoint, I can make the syntax look almost exactly like the underscore syntax:

[4, 9, 16] |>
    map'(sqrt, ..) |>
    filter'(iseven, ..)
Code for `adjoint` as currying operator
function Base.adjoint(f)
    function(arg_specs...)
        function(free_args...)
            _, args = foldl(arg_specs; init=(free_args, ())) do (free_args, args), arg_spec
                if arg_spec == ..
                    arg, free_args... = free_args
                else
                    arg = arg_spec
                end
                (free_args, (args..., arg))
            end
            f(args...)
        end
    end
end

If you prefer, you can use □ (typed \square[TAB]) instead of ..:

const □ = FreeArg()

[4, 9, 16] |>
    map'(sqrt, □) |>
    filter'(iseven, □)
6 Likes

I thought about hijacking adjoint too! It looks like a super clean way to indicate partial evaluation.

My concern was that there could be some callable objects for which a mathematical adjoint or transpose has meaning.

Yeah, hijacking adjoint is not something Base would do, but it might be alright in a package. :slight_smile:

1 Like