Fixing the Piping/Chaining Issue

Yes. The second argument to |> must always be a single-argument function. :wink:

Hence why I think it’s so lame.

Agreed that it’s lame, and yeah, I think what we really want is @chain. We don’t want any of this runtime baloney; it’s a parse-time problem (I guess it’s really a code-lowering-time problem).

Just as an aside, you can do @less 5 |> sin which opens the source right in the REPL, or @edit to open it in your preferred text editor, or @which to just get the location of the definition for the method you’re calling:

julia> @which 5 |> sin
|>(x, f) in Base at operators.jl:911
3 Likes

I agree with @dlakelan that @chain is just too good. It would take a lot to get me to move away from Chain.jl, especially due to the @aside macro-flag.
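For readers who haven’t used it, here’s a minimal Chain.jl sketch (the values are illustrative; @aside is the flag mentioned above, which runs a side computation without breaking the chain):

```julia
using Chain

# @chain threads the value through each call, inserting it as the
# first argument (or wherever _ appears).
result = @chain [1, 2, 3, 4] begin
    filter(iseven, _)
    @aside println("after filter: ", _)   # side effect; chain continues
    map(x -> x^2, _)
    sum
end
# result == 20
```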

The only gripe I have with @chain is that it requires you to know you want to pipe something at the beginning of the expression. In R I can do

t_new = foo(t) %>% # Crap, I forgot something
    bar(x, y, z) # parsed as bar(t, x, y, z)

but with @chain that would require going to the top of the expression.

A 2.0 change that would resolve this problem is some notion of backwards-looking macros, so that I could do

t_new = t @chain begin 
    foo()
end

This change would open up room for lots of currying exploration that isn’t hard-coded into the language.

1 Like

The OP proposal addresses this complaint.

t_new = t /> foo()

as does regular piping, but let’s not talk about that.

But it does so with a bunch of runtime machinery and fundamental changes to the parser.

In essence, what I’m hearing is that you and I agree we kind of expect a |> b(c,d,e) to be syntactic sugar for b(a,c,d,e), but it turns out it’s not: it’s a functional programming construct that is only usable if b(c,d,e) = x -> b(x,c,d,e).

The problem is we have a syntax problem, and a half-assed functional programming runtime solution in |>.
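Concretely, the mismatch looks like this in the REPL (a quick sketch; recall log(2, x) is log base 2):

```julia
julia> 5 |> sin                 # fine: sin is unary
-0.9589242746631385

julia> 100 |> log(2)            # NOT log(2, 100): log(2) evaluates first
ERROR: MethodError: objects of type Float64 are not callable

julia> 100 |> x -> log(2, x)    # you must curry by hand
6.643856189774724
```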

Your OP solution appears to mix these paradigms. It requires a parsing change, but it implements a bunch of currying when in fact what is really needed is just syntax transformation.

1 Like

Incorrect. When types are stable, it’s done at compile time.

I’m not a parser guy, but if @bertschi got it to work in a simple macro using MacroTools (which I modified slightly and shamelessly copypasted) then this doesn’t feel correct.

But in any case,

Is this so wrong? Make a better partial function type, and then make syntax sugar which invokes it?

Well, it requires some change in the parse/macroexpand/lower pipeline, unless you want to type @|> or something to enable the functionality.

But you don’t make syntax sugar (i.e., at parse/macroexpand/lower time); you create a function call which evaluates to a struct.

These construct a structure…

a \> f(b)

will evaluate to a FixLast structure.

Which I don’t, hence why this is a language proposal instead of an announcement of a new macro.

As mentioned in the OP,

I was editing my reply while you were replying, not sure if we crossed.

Note above that
a \> f(b) doesn’t evaluate to f(b,a); it evaluates to FixLast(f,a), a structure.

1 Like

Correct, a \> f(b) evaluates to FixLast(f,a)(b).

If a is type stable, then this is handled by the compiler and construction does not occur at runtime. If the FixLast(f,a) object is otherwise never used, then it is never even allocated, and the entire thing is fully equivalent to f(b,a).
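For concreteness, here is a minimal sketch of what such a FixLast functor could look like (this is an illustration with the semantics described above, not the OP’s exact definition):

```julia
# FixLast(f, x) is a callable struct that appends x as the *last* argument,
# so FixLast(f, a)(b) == f(b, a).
struct FixLast{F,T} <: Function
    f::F
    x::T
end

(c::FixLast)(args...) = c.f(args..., c.x)

FixLast(map, [1, 2, 3])(x -> x + 1)   # map(x -> x + 1, [1, 2, 3]) == [2, 3, 4]
```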

Hmm… and this is where I STRONGLY disagree with your proposal.

You are saying that

“a \> f(b)” should parse to :((a \> f)(b)) unlike everywhere else in the language where f(b) means “apply the function f to b”.

My proposal is, if you’re going to futz with the parser, at least solve the actual problem… make

“a \> f(b)” just parse to :(f(b,a)) and be done with it (or I’d propose /> for that, because of the “leaning right” principle).

And then tell people that \> and /> are special syntax.
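A purely syntactic version of this is easy to sketch as a macro (@pipefirst is a hypothetical name; this variant places the piped value first, as /> would):

```julia
# Hypothetical macro: @pipefirst a f(b, c) rewrites to f(a, b, c) at
# macro-expansion time; no runtime object is ever constructed.
macro pipefirst(val, call)
    call isa Expr && call.head == :call || error("expected a function call")
    esc(Expr(:call, call.args[1], val, call.args[2:end]...))
end

@pipefirst 4 log(16)   # expands to log(4, 16) == 2.0
```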

2 Likes

Check out the do statement :wink:

I find the do statement awkward mostly, but at least it’s called out: essentially, foo(bar) do x ; ... end is syntactic sugar for (i.e., lowers to) foo(x -> ..., bar).

You are proposing that \> and /> are functions (not syntactic sugar), but that they parse differently than all other function calls.

And I’m saying the real issue is a desire for syntactic sugar, and so let’s just make syntactic sugar.

Here we see that the do syntax lowers to the same thing as anonymous function syntax:

julia>  f() =  map([1,2,3]) do x x+1 ; end
f (generic function with 2 methods)

julia> @code_lowered f()
CodeInfo(
1 ─      #13 = %new(Main.:(var"#13#14"))
│   %2 = #13
│   %3 = Base.vect(1, 2, 3)
│   %4 = Main.map(%2, %3)
└──      return %4
)

julia> g() = map(x->x+1,[1,2,3])
g (generic function with 1 method)

julia> @code_lowered g()
CodeInfo(
1 ─      #15 = %new(Main.:(var"#15#16"))
│   %2 = #15
│   %3 = Base.vect(1, 2, 3)
│   %4 = Main.map(%2, %3)
└──      return %4
)

Looks like this thread could become even longer than the previous one … amazing how syntax always gets people and no wonder that Lisp newsgroups are full of flame wars :wink:

  1. The hacky (and buggy) macro of mine actually does change the parsing by pulling apart the function call in the parsed expression and moving the function into the previous expression.
  2. I still view the change to the parser as an issue of precedence though, i.e., function application can also be considered yet another infix operator. Indeed, Haskell explicitly inserts such an operator for the same reason as here, namely to change the usual precedence:
foo x . baz y z    # usually application parses as (foo x) compose (baz y z)
foo x . baz y $ z  # parses as ((foo x) compose (baz y)) z

Thus, $ is just an explicit marker for the otherwise implicit function application operator, yet with a different precedence, i.e., binding less strongly than function composition (.).
A similar idea is proposed here: /> and \> bind more strongly than function application. It is correct, though, that no such operators currently exist in Julia. In a sense, all infix operators are syntactic sugar and require special handling by the parser. In contrast to Haskell, Julia has a fixed set of infix operators and rules for parsing them (not all of which are currently used) and does not allow user-defined variations with specified associativity and precedence.

2 Likes

Hah, right? Truly kicking a hornet’s nest. :sweat_smile: Apparently Discourse thinks it’s a “good topic,” but it feels like dodging machine-gun fire.

I wonder if it is perhaps similar to spoken language, which humans instinctively use to classify each other into “in-groups” and “out-groups.” Language is closely tied to identity and fiercely guarded.

I have called them syntax sugar, but I have also called them operators (which in Julia are functions, so therefore I have called them functions ipso facto). Please forgive me for not being precise with my vocabulary.

Your issue, it seems, is whether the result of this syntax sugar is a functor object of type Union{FixFirst, FixLast}, or the result of a fully applied function. Considering that, in a chain of function calls, your suggestion would behave exactly the same as the OP’s (and therefore any costs associated with a learning curve would be unchanged), I believe there is no downside to it being a functor object, but there is an upside: it is now a functor that can be passed to other functions, which can even specialize on its type. So I advocate for it being a functor object.

Case in point, consider your f and g functions, defined as

julia> f() = map([1,2,3]) do x x+1 ; end
f (generic function with 1 method)

julia> g() = map(x->x+1,[1,2,3])
g (generic function with 1 method)

Let h be:

julia> h() = FixLast(map, [1,2,3])(x->x+1)
h (generic function with 1 method)

Here we see that using h compiles to the same thing as f and g:

julia> @code_llvm f()
;  @ REPL[5]:1 within `f`
; Function Attrs: uwtable
define nonnull {}* @julia_f_402() #0 {
pass.2:
  %gcframe9 = alloca [4 x {}*], align 16
  %gcframe9.sub = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 0
  %0 = bitcast [4 x {}*]* %gcframe9 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(32) %0, i8 0, i32 32, i1 false)
  %1 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 2
  %2 = bitcast {}** %1 to { {}* }*
  %3 = call {}*** inttoptr (i64 1699154720 to {}*** ()*)() #3
; ┌ @ array.jl:126 within `vect`
; │┌ @ array.jl:679 within `_array_for` @ array.jl:676
; ││┌ @ abstractarray.jl:840 within `similar` @ abstractarray.jl:841
; │││┌ @ boot.jl:468 within `Array` @ boot.jl:459
      %4 = bitcast [4 x {}*]* %gcframe9 to i64*
      store i64 8, i64* %4, align 16
      %5 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 1
      %6 = bitcast {}** %5 to {}***
      %7 = load {}**, {}*** %3, align 8
      store {}** %7, {}*** %6, align 8
      %8 = bitcast {}*** %3 to {}***
      store {}** %gcframe9.sub, {}*** %8, align 8
      %9 = call nonnull {}* inttoptr (i64 1698961392 to {}* ({}*, i64)*)({}* inttoptr (i64 269475952 to {}*), i64 3)
      %10 = bitcast {}* %9 to i64**
      %11 = load i64*, i64** %10, align 8
; │└└└
; │┌ @ array.jl:966 within `setindex!`
    %12 = bitcast i64* %11 to <2 x i64>*
    store <2 x i64> <i64 1, i64 2>, <2 x i64>* %12, align 8
    %13 = getelementptr inbounds i64, i64* %11, i64 2
    store i64 3, i64* %13, align 8
; └└
; ┌ @ abstractarray.jl:2933 within `map`
; │┌ @ array.jl:716 within `collect_similar`
    store {}* %9, {}** %1, align 16
    %14 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 3
    store {}* %9, {}** %14, align 8
    %15 = call nonnull {}* @j__collect_404({}* nonnull %9, { {}* }* nocapture readonly %2) #0
    %16 = load {}*, {}** %5, align 8
    %17 = bitcast {}*** %3 to {}**
    store {}* %16, {}** %17, align 8
; └└
  ret {}* %15
}

julia> @code_llvm g()
;  @ REPL[6]:1 within `g`
; Function Attrs: uwtable
define nonnull {}* @julia_g_414() #0 {
pass.2:
  %gcframe9 = alloca [4 x {}*], align 16
  %gcframe9.sub = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 0
  %0 = bitcast [4 x {}*]* %gcframe9 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(32) %0, i8 0, i32 32, i1 false)
  %1 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 2
  %2 = bitcast {}** %1 to { {}* }*
  %3 = call {}*** inttoptr (i64 1699154720 to {}*** ()*)() #3
; ┌ @ array.jl:126 within `vect`
; │┌ @ array.jl:679 within `_array_for` @ array.jl:676
; ││┌ @ abstractarray.jl:840 within `similar` @ abstractarray.jl:841
; │││┌ @ boot.jl:468 within `Array` @ boot.jl:459
      %4 = bitcast [4 x {}*]* %gcframe9 to i64*
      store i64 8, i64* %4, align 16
      %5 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 1
      %6 = bitcast {}** %5 to {}***
      %7 = load {}**, {}*** %3, align 8
      store {}** %7, {}*** %6, align 8
      %8 = bitcast {}*** %3 to {}***
      store {}** %gcframe9.sub, {}*** %8, align 8
      %9 = call nonnull {}* inttoptr (i64 1698961392 to {}* ({}*, i64)*)({}* inttoptr (i64 269475952 to {}*), i64 3)
      %10 = bitcast {}* %9 to i64**
      %11 = load i64*, i64** %10, align 8
; │└└└
; │┌ @ array.jl:966 within `setindex!`
    %12 = bitcast i64* %11 to <2 x i64>*
    store <2 x i64> <i64 1, i64 2>, <2 x i64>* %12, align 8
    %13 = getelementptr inbounds i64, i64* %11, i64 2
    store i64 3, i64* %13, align 8
; └└
; ┌ @ abstractarray.jl:2933 within `map`
; │┌ @ array.jl:716 within `collect_similar`
    store {}* %9, {}** %1, align 16
    %14 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 3
    store {}* %9, {}** %14, align 8
    %15 = call nonnull {}* @j__collect_416({}* nonnull %9, { {}* }* nocapture readonly %2) #0
    %16 = load {}*, {}** %5, align 8
    %17 = bitcast {}*** %3 to {}**
    store {}* %16, {}** %17, align 8
; └└
  ret {}* %15
}

julia> @code_llvm h()
;  @ REPL[7]:1 within `h`
; Function Attrs: uwtable
define nonnull {}* @julia_h_417() #0 {
pass.2:
  %gcframe9 = alloca [4 x {}*], align 16
  %gcframe9.sub = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 0
  %0 = bitcast [4 x {}*]* %gcframe9 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(32) %0, i8 0, i32 32, i1 false)
  %1 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 2
  %2 = bitcast {}** %1 to { {}* }*
  %3 = call {}*** inttoptr (i64 1699154720 to {}*** ()*)() #3
; ┌ @ array.jl:126 within `vect`
; │┌ @ array.jl:679 within `_array_for` @ array.jl:676
; ││┌ @ abstractarray.jl:840 within `similar` @ abstractarray.jl:841
; │││┌ @ boot.jl:468 within `Array` @ boot.jl:459
      %4 = bitcast [4 x {}*]* %gcframe9 to i64*
      store i64 8, i64* %4, align 16
      %5 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 1
      %6 = bitcast {}** %5 to {}***
      %7 = load {}**, {}*** %3, align 8
      store {}** %7, {}*** %6, align 8
      %8 = bitcast {}*** %3 to {}***
      store {}** %gcframe9.sub, {}*** %8, align 8
      %9 = call nonnull {}* inttoptr (i64 1698961392 to {}* ({}*, i64)*)({}* inttoptr (i64 269475952 to {}*), i64 3)
      %10 = bitcast {}* %9 to i64**
      %11 = load i64*, i64** %10, align 8
; │└└└
; │┌ @ array.jl:966 within `setindex!`
    %12 = bitcast i64* %11 to <2 x i64>*
    store <2 x i64> <i64 1, i64 2>, <2 x i64>* %12, align 8
    %13 = getelementptr inbounds i64, i64* %11, i64 2
    store i64 3, i64* %13, align 8
; └└
; ┌ @ REPL[4]:1 within `FixLast`
; │┌ @ REPL[4]:1 within `#_#2`
; ││┌ @ abstractarray.jl:2933 within `map`
; │││┌ @ array.jl:716 within `collect_similar`
      store {}* %9, {}** %1, align 16
      %14 = getelementptr inbounds [4 x {}*], [4 x {}*]* %gcframe9, i64 0, i64 3
      store {}* %9, {}** %14, align 8
      %15 = call nonnull {}* @j__collect_419({}* nonnull %9, { {}* }* nocapture readonly %2) #0
      %16 = load {}*, {}** %5, align 8
      %17 = bitcast {}*** %3 to {}**
      store {}* %16, {}** %17, align 8
; └└└└
  ret {}* %15
}

Indeed, I think adding new language features is something that shouldn’t be taken lightly; if we opened the floodgates to new operators, new operator precedences, and new operator associativities, things could become incomprehensible quickly, like a Tower of Babel.

I do believe new language features (especially when they challenge “normal” behavior) should be accepted only if they solve an important and common problem. And I believe there is no closed-form solution to the question, “what is important and common enough?” At the end of the day, it is indeed subjective. I think I would use the fixing operators more frequently than the do statement, so there’s that.

I have made an effort to devise a proposal that improves multiple activities (chaining, autocomplete, and partial evaluation resulting in meaningfully typed functors), and attempted to show that a) these are indeed problems worth solving, and b) this proposal solves them better than alternative approaches. My hope is that the case I laid out is compelling.

From where I sit, the most compelling counter-case to my proposal remains the competing proposal for underscore syntax. So I hope we can steer the conversation back toward debating that.

1 Like

There are definitely downsides.

julia> f() = map(x->x+1,a)
f (generic function with 2 methods)

julia> @code_lowered f()
CodeInfo(
1 ─      #17 = %new(Main.:(var"#17#18"))
│   %2 = #17
│   %3 = Main.map(%2, Main.a)
└──      return %3
)

julia> h() = FixLast(map,a)(x->x+1)
h (generic function with 1 method)

julia> @code_lowered h()
CodeInfo(
1 ─ %1 = Main.FixLast(Main.map, Main.a)
│        #19 = %new(Main.:(var"#19#20"))
│   %3 = #19
│   %4 = (%1)(%3)
└──      return %4
)

Try @code_llvm on these two

julia> @code_llvm f()
;  @ REPL[33]:1 within `f`
define nonnull {}* @julia_f_1150() #0 {
top:
  %0 = alloca [2 x {}*], align 8
  %gcframe2 = alloca [3 x {}*], align 16
  %gcframe2.sub = getelementptr inbounds [3 x {}*], [3 x {}*]* %gcframe2, i64 0, i64 0
  %.sub = getelementptr inbounds [2 x {}*], [2 x {}*]* %0, i64 0, i64 0
  %1 = bitcast [3 x {}*]* %gcframe2 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(24) %1, i8 0, i32 24, i1 false)
  %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #3
  %ppgcstack_i8 = getelementptr i8, i8* %thread_ptr, i64 -8
  %ppgcstack = bitcast i8* %ppgcstack_i8 to {}****
  %pgcstack = load {}***, {}**** %ppgcstack, align 8
  %2 = bitcast [3 x {}*]* %gcframe2 to i64*
  store i64 4, i64* %2, align 16
  %3 = getelementptr inbounds [3 x {}*], [3 x {}*]* %gcframe2, i64 0, i64 1
  %4 = bitcast {}** %3 to {}***
  %5 = load {}**, {}*** %pgcstack, align 8
  store {}** %5, {}*** %4, align 8
  %6 = bitcast {}*** %pgcstack to {}***
  store {}** %gcframe2.sub, {}*** %6, align 8
  %7 = load atomic {}*, {}** inttoptr (i64 140354160639512 to {}**) unordered, align 8
  %8 = getelementptr inbounds [3 x {}*], [3 x {}*]* %gcframe2, i64 0, i64 2
  store {}* %7, {}** %8, align 16
  store {}* inttoptr (i64 140355629522336 to {}*), {}** %.sub, align 8
  %9 = getelementptr inbounds [2 x {}*], [2 x {}*]* %0, i64 0, i64 1
  store {}* %7, {}** %9, align 8
  %10 = call nonnull {}* @ijl_apply_generic({}* inttoptr (i64 140355395385472 to {}*), {}** nonnull %.sub, i32 2)
  %11 = load {}*, {}** %3, align 8
  %12 = bitcast {}*** %pgcstack to {}**
  store {}* %11, {}** %12, align 8
  ret {}* %10
}

julia> @code_llvm h()
;  @ REPL[35]:1 within `h`
define nonnull {}* @julia_h_1152() #0 {
top:
  %0 = alloca [2 x {}*], align 8
  %gcframe2 = alloca [3 x {}*], align 16
  %gcframe2.sub = getelementptr inbounds [3 x {}*], [3 x {}*]* %gcframe2, i64 0, i64 0
  %.sub = getelementptr inbounds [2 x {}*], [2 x {}*]* %0, i64 0, i64 0
  %1 = bitcast [3 x {}*]* %gcframe2 to i8*
  call void @llvm.memset.p0i8.i32(i8* noundef nonnull align 16 dereferenceable(24) %1, i8 0, i32 24, i1 false)
  %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #3
  %ppgcstack_i8 = getelementptr i8, i8* %thread_ptr, i64 -8
  %ppgcstack = bitcast i8* %ppgcstack_i8 to {}****
  %pgcstack = load {}***, {}**** %ppgcstack, align 8
  %2 = bitcast [3 x {}*]* %gcframe2 to i64*
  store i64 4, i64* %2, align 16
  %3 = getelementptr inbounds [3 x {}*], [3 x {}*]* %gcframe2, i64 0, i64 1
  %4 = bitcast {}** %3 to {}***
  %5 = load {}**, {}*** %pgcstack, align 8
  store {}** %5, {}*** %4, align 8
  %6 = bitcast {}*** %pgcstack to {}***
  store {}** %gcframe2.sub, {}*** %6, align 8
  %7 = load atomic {}*, {}** inttoptr (i64 140354160639512 to {}**) unordered, align 8
  %8 = getelementptr inbounds [3 x {}*], [3 x {}*]* %gcframe2, i64 0, i64 2
  store {}* %7, {}** %8, align 16
  store {}* inttoptr (i64 140355395385472 to {}*), {}** %.sub, align 8
  %9 = getelementptr inbounds [2 x {}*], [2 x {}*]* %0, i64 0, i64 1
  store {}* %7, {}** %9, align 8
  %10 = call nonnull {}* @ijl_apply_generic({}* inttoptr (i64 140355570978032 to {}*), {}** nonnull %.sub, i32 2)
  store {}* %10, {}** %8, align 16
  store {}* inttoptr (i64 140355629522536 to {}*), {}** %.sub, align 8
  %11 = call nonnull {}* @ijl_apply_generic({}* nonnull %10, {}** nonnull %.sub, i32 1)
  %12 = load {}*, {}** %3, align 8
  %13 = bitcast {}*** %pgcstack to {}**
  store {}* %12, {}** %13, align 8
  ret {}* %11
}

The point is that because it lowers to something with an intermediate functor, in cases where the compiler doesn’t know the types it will further obscure the code. I think this will generally result in poorer performance for no added value.

I had forgotten to define the structs you propose and the evaluation function…

It looks like it compiles to similar stuff even when the type is unknown. I guess I don’t know how much potential for problems there is with the intermediate functor… when does it fail to compile down to the same thing as plain application? I don’t know.

The only reason to have the intermediate functors IMHO is if you have some condition where you want to use them:

foo = a \> b(x) /> d
#...do stuff here
foo(e,f) # evaluates to what d(b(x,a),e,f) would give, or something?

I’m finding it extremely hard to keep track of which operator fixes the first argument and which fixes the last, etc.

Just trying to figure that out, I can guarantee I won’t be using this syntax. It’s really hard compared to:

@chain a begin
  b(x,_)
  d(e,f)
end
2 Likes

I appreciate you trying to find ways in which my proposal will not work. It’s good to prove out the idea and make sure it’s not terrible. :+1:

It’s better to have them and not need them, than to need them and not have them.

When chaining function calls, you don’t store the functor and it’s never allocated, so the fact that it’s a functor is completely transparent to the user. But when constructing a partially applied function to pass to another method (which is what people currently use Base.Fix1 and Base.Fix2 for), it’s nice to have a typed object that the method can specialize on.
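This is the role Base.Fix1 and Base.Fix2 play today; for example:

```julia
# Base.Fix2(f, y) is a typed functor with Base.Fix2(f, y)(x) == f(x, y).
geq3 = Base.Fix2(>=, 3)

filter(geq3, [1, 2, 3, 4, 5])   # [3, 4, 5]

# Because the functor has a concrete type, downstream methods can
# dispatch on (and specialize for) it:
typeof(geq3)                     # Base.Fix2{typeof(>=), Int64}
```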

Consider this example from the OP (it doesn’t capitalize on the functor’s type, but it does make use of it being a callable object):

Indeed, constructing partial functions for passing as arguments is a different use case from chaining, but I don’t see a problem in allowing the same operator to solve both problems.

After all, language features should be as general as possible, within the constraint that they actually do provide satisfactory solutions to the problems they set out to solve.

As with any novel syntax feature, it takes practice. You don’t have practice yet, so don’t feel bad.

The remainder of this objection is in my estimation largely a matter of familiarity, which is impossible for an operator that doesn’t even exist yet.

That said, I feel like I have a pretty good idea how this will play out. If this syntax gets adopted into the language and autocomplete starts to work with it, I will pay you actual, physical money, in the denomination of your choice, to not use it. (Conditional on you continuing to use the language, of course.)

1 Like

Note that I have zero objection to creating a set of functors that represent different common currying situations. I’m sure I’d use the heck out of those.

Really, my concern is with the parsing and evaluation of the special operators and their interaction with function application. In particular, I don’t think Julia even has the notion of () being an “operator”: f() is currently parsed as a call expression, not as f and ().

julia> foo = Meta.parse("f(a,b,c)")
:(f(a, b, c))

julia> foo.head
:call

julia> foo.args
4-element Vector{Any}:
 :f
 :a
 :b
 :c

1 Like

Just a small comment, I think there might be some precedent for operators that bind more tightly than function calls, e.g. ::

julia> f(x) = 2*x
f (generic function with 1 method)

julia> g(x)::Float64 = 2*x
g (generic function with 1 method)

julia> code_lowered(f)
1-element Vector{Core.CodeInfo}:
 CodeInfo(
1 ─ %1 = 2 * x
└──      return %1
)

julia> code_lowered(g)
1-element Vector{Core.CodeInfo}:
 CodeInfo(
1 ─ %1 = Main.Float64
│   %2 = 2 * x
│   %3 = Base.convert(%1, %2)
│   %4 = Core.typeassert(%3, %1)
└──      return %4
)

Not sure if this counts though.

2 Likes