Is it possible to splat into ccall?

As the title says – is it possible? A straightforward attempt doesn’t seem to work:

julia> ccall(:pow, Cdouble, (Cdouble, Cdouble), (2, 10)...)
ERROR: syntax: more types than arguments for ccall around REPL[65]:1

The same question, but relating to @ccall, was asked here:

But the answer is not exactly appealing:

Translating this to my example above:

julia> @eval @ccall pow($((:($x::Cdouble) for x in (2, 10))...))::Cdouble
1024.0

I suppose I’d even be fine with it being a little ugly, but the need for @eval is a showstopper.

Mildly hacky, but maybe to your taste:

julia> abstract type ccaller{fun, retT, argT} end

julia> @generated ccaller{fun, retT, argT}(arg...) where {fun, retT, argT} =
           :(ccall(fun, $retT, ($((argT isa Tuple ? argT[i] : argT for i=eachindex(arg))...),), $((:(arg[$i]) for i=eachindex(arg))...)))

julia> ccaller{:pow, :Cdouble, :Cdouble}(2, 10)
1024.0

Splattable:

julia> ccaller{:pow, :Cdouble, ((:Cdouble, :Cdouble)...,)}((2, 10)...)
1024.0

No, you kind of can’t do this (except for hacks!). And the presence of hacks is a sign that something “might be wrong” and you should be doing it a different way.

Can you explain your actual use case? pow isn’t a very realistic example, because you can just index the args instead

args = (2,10)
ccall(:pow, Cdouble, (Cdouble, Cdouble), args[1], args[2])

Presumably there’s some reason why you don’t want to do it this way?


Well, the canonical example would be varargs functions in C (e.g. as linked in my original post):

I used pow as an example because I wanted to focus on the syntax without adding the complications that come with varargs. Using @uniment’s method, the linked printf example becomes

julia> ccaller{:printf, :Cint, (:Cstring, ntuple(_ -> :Cint, 5)...)}("%d %d %d %d %d\n", (1:5)...);
1 2 3 4 5

without @eval – much better!

But to be honest, that isn’t even how I came across this question. I just have some C functions (in an external library) with annoyingly long argument lists (9 arguments) that I wanted to call and validate/transform the arguments for. Instead of copy-pasting the same code 9 times to handle each argument, I tried to pack them into a tuple and process them in a loop instead. Since I then already have the arguments in a tuple, the easiest (and least error-prone!) way of passing them to ccall would be splatting. Of course I could also just index all 9 arguments individually, but that just seems stupid when splatting exists! We’re not writing C here after all.

So just for illustration, what I wanted to write was

function external_func_wrapper(args::NTuple{9, Integer})
	for x in args
		validate(x) || error(x)
	end
	return ccall(
		:external_func, Cdouble,
		(Cint, Cint, Cint, Cint, Cint, Cint, Cint, Cint, Cint),
		(transform(x) for x in args)...
	)
end

But since that’s not allowed, I’d have to change it to

function external_func_wrapper(args::NTuple{9, Integer})
	for x in args
		validate(x) || error(x)
	end
	arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9 =
		(transform(x) for x in args)
	return ccall(
		:external_func, Cdouble,
		(Cint, Cint, Cint, Cint, Cint, Cint, Cint, Cint, Cint),
		arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9
	)
end

which makes me list the entire indexed argument list twice, instead of not at all if splatting were allowed.

I don’t really agree with this. I’d even almost say it’s the opposite: The fact that “hacks” like @uniment’s method exist demonstrates that there isn’t any fundamental reason why this shouldn’t work. Clearly, Julia has all the information necessary to determine whether the call is valid and carry it out. It’s just that ccall arbitrarily rejects this specific syntax. As you say yourself in that same thread I linked above:

What @generated is doing here is circumventing the restriction the compiler usually enforces, of having the number of declared arguments match the number of passed arguments. Arguably, this is a bug in how @generated interacts with ccall and should be fixed.

The reason it’s not allowed is that, in general, what you wrote may lead to calling an external function with a different number of arguments than it expects, which can lead to all kinds of issues. Splatting alone cannot check at compile time that the counts will match, since the length of the splatted object is not guaranteed to be statically known.

For example, printf just trusts that there are the same number of arguments passed to it as are declared in the format string. Not passing enough can lead to security vulnerabilities. ccall (ordinarily) enforcing that such a mismatch cannot happen from the julia side is a feature, not a bug. For me at least, that’s enough of a “fundamental reason” not to want this.

I’m aware of the importance of ensuring that functions are called with the correct signatures. There is nothing Julia can do to enforce this – when calling external functions, it’s the programmer’s responsibility to ensure that the function signature is defined correctly in Julia and that, for example, arguments correctly matching the format string are passed to functions like printf. And in C, this is just how it works – there is no way for printf to verify how many arguments were actually passed to it.

When it comes to making sure that the number of argument types matches the number of argument values, there is no reason that this has to happen at the syntax level or at compile time – in the simplest case, ccall could just check at runtime whether they match and throw an exception if they don’t. But this is really a flaw in the design of ccall: why are the argument types and values passed separately in the first place? If they were passed together (which is exactly how it works with @ccall), there wouldn’t even be any need to check that the lengths match!
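
For reference, this is what the paired form looks like with @ccall – each value carries its own type annotation, so there is no separate count that could get out of sync (same pow example as before):

julia> @ccall pow(2::Cdouble, 10::Cdouble)::Cdouble
1024.0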

What I think is going on here is that this might be an accidental artifact of history rather than a conscious design decision. As @c42f explains in the thread I linked, ccall was created at a very early stage:

Perhaps splatting in its current form just didn’t exist at the time, and ccall simply incorrectly interprets a splatting expression as a single argument? The error message is definitely incorrect and misleading:

ERROR: syntax: more types than arguments for ccall around REPL[65]:1

Also, note that in my example, it’s guaranteed even at compile time that the correct number of arguments is passed, because args is of type NTuple{9, Integer}.


The error you see is a syntax error, because ccall needs special casing during parsing/lowering so that it can be passed through the compiler and handled as a foreign call, and so that the calls to cconvert and unsafe_convert can be inserted. The parser cannot see what type the splatted object eventually ends up as, and it certainly doesn’t know that it will be an NTuple{9, Integer}. Since it fails at the syntax level, there is no later type-based compiler magic you can do to fix this; you need to (at minimum) change how ccall is handled in parsing & lowering.

@ccall is the “escape hatch” that we use right now in lieu of modifying the parser to make ccall more special - not to mention that any change to the parser there needs to be backwards compatible with the existing syntax, so as not to introduce a breaking change. The relevant Scheme code in the parser is here and here. There is a mention of ... in there, so it does already seem to be aware of varargs to some extent, but not so far as to allow splatting of arguments (likely in part because we’d still need to know how many arguments we actually want to pass at link time, if I’m not mistaken). This is something the @generated approach avoids via sleight of hand, by not splatting at runtime but during compilation, when the @generated function is compiled for the passed-in (known and fixed-length) arguments.
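
A minimal sketch of that sleight of hand (with a hypothetical pow_splat, not from the thread): inside a @generated function the splatted arguments are only visible as types, so their number is a compile-time constant and the emitted ccall has a fixed arity:

@generated function pow_splat(args...)
    # `args` here is a tuple of *types*; its length is a compile-time constant
    n = length(args)
    n == 2 || return :(throw(ArgumentError("pow takes exactly 2 arguments")))
    # emits ccall(:pow, Cdouble, (Cdouble, Cdouble), args[1], args[2]) with fixed arity
    :(ccall(:pow, Cdouble, (Cdouble, Cdouble), $((:(args[$i]) for i in 1:n)...)))
end

pow_splat((2, 10)...)   # 1024.0 – the splat is resolved before the ccall is generated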

This doesn’t seem to match with @c42f’s statement in the linked thread:

Not sure who’s right here? :wink:

Yes, this is what I’m saying: This shouldn’t fail at the syntax level. The length check should instead be done at compile- or runtime.

What I’m arguing for is making ccall less special. Normal function calls in Julia don’t behave like this syntactically, and given that it looks the same, it would be better for ccall to behave as closely to a normal function call as possible.

The ... in the code refers to specifying varargs in the type signature, as in

julia> ccall(:printf, Cint, (Cstring, Cint...), "%d %d %d\n", 1, 2, 3);
1 2 3

which is yet another special-case syntax that only exists within the context of ccall.

I don’t know if “link time” refers to something else in the context of Julia, but in C, you can link to functions perfectly fine without even specifying any function signature. And of course, with varargs you’ll also never know how many arguments are going to be passed, and this number can change, too. So I’m not sure what the problem would be here.

And the result is a clearer, more consistent, and more flexible syntax, even within the constraints of the current implementation of ccall. Wouldn’t it be better if ccall worked like this?

julia> @macroexpand @ccall printf("abcd"::Ptr{Cchar})::Cvoid
quote
    local var"#1#arg1root" = Base.cconvert(Ptr{Cchar}, "abcd")
    local var"#2#arg1" = Base.unsafe_convert(Ptr{Cchar}, var"#1#arg1root")
    $(Expr(:foreigncall, :(:printf), :Cvoid, :(Core.svec(Ptr{Cchar})), 0, :(:ccall), Symbol("#2#arg1"), Symbol("#1#arg1root")))
end

julia> Meta.@lower ccall(:printf, Cvoid, (Ptr{Cchar},), "abcd")
:($(Expr(:thunk, CodeInfo(
    @ none within `top-level scope`
1 ─ %1 = Core.apply_type(Ptr, Cchar)
│   %2 = Base.cconvert(%1, "abcd")
│   %3 = Core.apply_type(Ptr, Cchar)
│   %4 = Base.unsafe_convert(%3, %2)
│   %5 = $(Expr(:foreigncall, :(:printf), :Cvoid, :(Core.svec(Core.apply_type(Ptr, Cchar))), 0, :(:ccall), :(%4), :(%2)))
└──      return %5
))))

They both use the same Expr(:foreigncall) under the hood and both use the same cconvert/unsafe_convert duality, because they work the same. They are just different interfaces to the same thing. Like I said, @ccall is the same as ccall, just in macro form.

The length check cannot in general be done at compile time. A macro like @ccall, just like regular ccall, doesn’t know anything about the type of the object you end up splatting. There’s nothing you can check at that level other than the raw syntax, and since the FFI in julia uses the Expr(:foreigncall) mechanism (which you can’t use at runtime, as Expr as used here is a compile-time concept), you either have to rework how the compiler does FFI calls, or change parsing/lowering.

Well, it’s an FFI call - it cannot, and should not, behave the same, I think. There are some limitations you have to account for due to how the C ABI works here and due to the safety the compiler provides (or wants to provide), and at what level :person_shrugging: I for one am thankful that ccall checks that I can’t even accidentally introduce an argument mismatch here, and that @ccall provides the same guarantee by requiring all arguments to have an annotation (so no splatting can happen there).

Yes, and that explicitly exists to allow calling of vararg functions (which already require all of the variadic arguments to have the same type). I guess splatting could be made to work there if you declare the argument types as varargs, but I’m really unsure of how this would work on a type level. Since vararg requires all of those arguments to have the same type, we’d have to be able to enforce that any object you’d splat there ends up as a collection of the type specified in the signature. That seems like a lot of additional checking.

I don’t know, because relying on runtime information for a (usually/currently) compile time safety feature seems really shaky to me. There isn’t really a whole lot the compiler can trust here, because allowing splatting would mean relying on e.g. eltype to be correct for the given iterable. In other cases, trusting an implementation of a user function to be correct at that level was decided not to be worth it, e.g. in discussions about whether or not fallback implementations for AbstractArray should use @inbounds or not (trusting the user-type to use @boundscheck and @propagate_inbounds correctly).
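
To make that concrete, here’s a hypothetical iterable Liar (purely illustrative, not from the thread) whose eltype claims one thing while iteration produces another – exactly the kind of mismatch the compiler would have to trust never happens:

struct Liar end
Base.eltype(::Type{Liar}) = Cint                 # claims it yields Cint...
Base.length(::Liar) = 3
Base.iterate(::Liar, state=1) =
    state > 3 ? nothing : ("oops", state + 1)    # ...but actually yields Strings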

Yes, I know. I’m just saying that the ccall interface is really strange and unusual because nothing else in the language works like this, while @ccall is just another macro.

Yes, that would obviously require a change to the language (even if just by adding a new mechanism while keeping the old ccall). Perhaps I should say it more explicitly – when I said above that

I didn’t mean that it must be possible to somehow work around this in the current implementation of the language, but rather, that it should be straightforward to modify the implementation in such a way to make this possible.

That’s another thing I don’t get: Vararg functions can have arguments with arbitrary types (see printf!). This requirement only seems to be there because anything else couldn’t easily be made to fit the existing ccall syntax. And it seems like all it does is turn off the length check! After all, since I explicitly have to individually pass the arguments to ccall anyway, instead of using ..., I could just add however many argument types I need to make them match the argument values.

Python doesn’t seem to have a problem with having FFI calls as perfectly normal function calls:

>>> from ctypes import *
>>> libc = CDLL("libc.so.6")
>>> printf = libc.printf
>>> printf(b"%d %g %s\n", 7, c_double(1.1), b"abc")
7 1.1 abc

Now obviously the technical details are completely different in Python, but what I’m saying is that syntactically, it should work the same. If that perhaps requires multiple steps or a macro, I think that’s still better than a weird special construct with unclear rules that behaves like nothing else in the language.

Again, I’m not saying that there shouldn’t be error checking, just that it should be done in a way that’s consistent with the rest of the language.

But all the checks are still there even with the @generated approach, since currently everything eventually has to go through ccall! The fact that it’s possible to make the syntax more powerful using @generated just proves that the existing ccall syntax arbitrarily limits something that’s already possible.

And why would it be necessary to check eltype? cconvert and unsafe_convert are already called for all arguments. If something of the wrong type is passed, this will just give an error.
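
For example (output abbreviated), passing something that can’t be converted to the declared type already fails in the cconvert step, before anything reaches C:

julia> ccall(:pow, Cdouble, (Cdouble, Cdouble), "two", 10)
ERROR: MethodError: Cannot `convert` an object of type String to an object of type Float64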


Well, changing how ccall lowers and the whole Expr(:foreigncall) machinery that’s underpinning ccall unfortunately isn’t straightforward. Granted, at least ccall is still exposed as a regular Expr(:call) in the parsed code and it’s only the lowering stage that would need to change, but at the very least, once you actually want to emit the correct call into C, you need to know how many arguments you’ll end up passing, and you better have the correct number or you’ll get very nasty vulnerabilities.

Let’s take a look at what Expr(:foreigncall) needs right now:

foreigncall Statically-computed container for ccall information. The fields are:

  • args[1] : name
    The expression that’ll be parsed for the foreign function.
  • args[2]::Type : RT
    The (literal) return type, computed statically when the containing method was defined.
  • args[3]::SimpleVector (of Types) : AT
    The (literal) vector of argument types, computed statically when the containing method was defined.
  • args[4]::Int : nreq
    The number of required arguments for a varargs function definition.
  • args[5]::QuoteNode{Symbol} : calling convention
    The calling convention for the call.
  • args[6:5+length(args[3])] : arguments
    The values for all the arguments (with types of each given in args[3]).
  • args[6+length(args[3]):end] : gc-roots
    The additional objects that may need to be gc-rooted for the duration of the call. See Working with LLVM for where these are derived from and how they get handled.

To generate all those arguments to the Expr(:foreigncall), you need to know how many arguments you end up passing. To know that, you need to be able to trust the length of the iterable object, whose type you don’t know during parsing. The @generated approach above gets around that by deferring the actual generation of the ccall/Expr(:foreigncall) to a point where you do already know the argument types of the call – information you just don’t have with the current ccall interface. Changing that is tricky, because it’s the documented API we guarantee for ccall, and it’s something a user-written macro has access to, so any change there has the potential of being breaking, unfortunately.

Just adding a late expansion of the object/a new function barrier à la the @generated approach breaks that Expr(:call) interface we currently guarantee, but without that you can’t generate all those cconvert/unsafe_convert calls you need to ensure safety (and to be able to call the correct function), due to only knowing the syntax (and possibly the type, depending on the stage you want to do this in, which may not be enough). The only option left would be to insert a check based on the eltype of the splatted argument (assuming you do know the type), or actually collecting the splatted argument and checking each object for conformity individually in case of a type instability, before being able to hand them off to ccall (which would be quite redundant, since ccall would then have to do that internally again to not break the existing interface).


I mean, all of that is totally possible if we can totally reinvent ccall, but I shudder at the thought of what happens when there’s a mismatch between what length reports and what is actually produced when you iterate that object. You might end up with values being used by C that aren’t rooted in the GC, which the GC then happily frees while C accesses it - a classic use after free. That’s the sort of thing that just cannot happen when you give the syntactic guarantee of having live objects, due to them being used explicitly in the ccall (modulo objects you have to GC.@preserve manually due to passing pointers to them into C…). Not to mention that all the current machinery we do have is working purely in the syntactic domain, and isn’t dynamic to the extent that it can call back into user code to figure out how many things will be passed to the C function.
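
As an aside, this is the kind of manual rooting the GC.@preserve remark refers to – a sketch using memset, where the buffer has to be kept alive explicitly because C only ever sees a raw pointer:

buf = Vector{UInt8}(undef, 16)
GC.@preserve buf begin
    p = pointer(buf)   # raw pointer; on its own it does not keep `buf` alive
    ccall(:memset, Ptr{Cvoid}, (Ptr{Cvoid}, Cint, Csize_t), p, 0, length(buf))
end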

All in all, to me it doesn’t seem worth it to add all of this complexity, for a (seemingly) minor benefit in convenience. The whole compiler pipeline has the information you’re looking for, but not any individual piece (and adding more mechanisms that act like @generated is bound to make the compiler devs unhappy, from what I heard them say about @generated and the headaches it can cause).

I feel like this isn’t really going anywhere. I keep saying that the FFI interface in Julia could be improved, and you keep dismissing that by pointing to current implementation details (although I do appreciate the links to the relevant parts of the code!).

So just require it to be a tuple, where the length is given by the type parameter (no need to trust length). The argument-types argument to ccall is already required to be a literal tuple expression, so that would even be consistent in some sense. I’d be happy to be able to use splatting even if it means I have to have the arguments in a tuple. My main complaint is that it’s not possible at all to pass the arguments in some kind of container.
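
For illustration, the arity really is available from the tuple type alone, without having to trust a user-defined length:

julia> fieldcount(NTuple{9, Cint})
9

julia> fieldcount(Tuple{Cstring, Cint, Cint})
3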

Perhaps the easiest way to have a more flexible and consistent interface would be to add a dynamic_ccall function, where everything is checked and done at runtime like in Python (which would obviously be less efficient than Julia’s current ccall!).


I agree that it’s technically feasible to implement some version of this! Though it would be a fair amount of work, and I don’t think it’s an improvement, because it papers over a lot of possible footguns. Getting foreign calls right is hard enough without adding splatting. Explicit is good here!

But if you disagree, it should be possible to build this feature without modifying Julia itself - by implementing your own macro @myccall which lowers the surface syntax (including splatting) down to a special type and passes that to the @generated function as demoed in the second post here by @uniment.

This is basically how it would need to work if it was a builtin feature - in general not knowing the type of the thing being splatted, we need runtime dispatch on that type which will then generate the associated code to marshal the splatted arguments and conform to the ABI of the foreign function. It’s all a huge footgun though.

A less scary version would be requiring the thing being splatted to be annotated with its length in some way - then it’s quite easy to write a macro which expands to ccall with the appropriate number of arguments. As a purely syntactic transformation this seems a lot less dangerous. It could be part of @ccall in principle. Though on balance I still feel it’s a feature to have to be very explicit about the argument list in FFI calls.
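
A rough, untested sketch of that idea (a hypothetical @ccall_unroll, not an existing macro): the argument types are written as a literal tuple, so the arity is known purely syntactically and the expansion is just an ordinary ccall with indexed arguments:

macro ccall_unroll(fun, rettype, argtypes, argsex)
    # `fun` is expected to be a literal symbol like :pow; `argtypes` a literal tuple of types
    Meta.isexpr(argtypes, :tuple) || error("argument types must be a literal tuple")
    n = length(argtypes.args)
    args = esc(argsex)
    # expands to ccall(fun, rettype, argtypes, args[1], ..., args[n])
    :(ccall($fun, $(esc(rettype)), $(esc(argtypes)), $((:($args[$i]) for i in 1:n)...)))
end

args = (2, 10)
@ccall_unroll :pow Cdouble (Cdouble, Cdouble) args   # 1024.0 if everything lines up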



How is this even considered valid Julia? :rofl:


I’m not dismissing it, and I’m sorry if it comes across that way. I’m giving references and justifications for why the current interface is the way it is, what its limitations are and how they came to be, and why I think that, due to those tradeoffs, this is unlikely to change (not to mention the big amount of work it would entail - which, if someone wants to implement it, these are the places you’ll have to touch, at minimum).

There’s another possible limitation of making ccall dynamic to that extent - it requires the runtime (and potentially even codegen) to be available to generate the specialized call, at least in the @generated version and probably even in a “taking the length of a tuple” version, due to likely requiring dynamic dispatch (you can of course technically specialize the implementation to emit all the right things to avoid that, but that would entail making this more special again…). On its own that’s pretty ok, but it is a downside in a static compilation context, which is becoming more important in julia every day (think pkgimages or PackageCompiler, or a custom sysimage). Calls involving such a dynamic ccall cannot be precompiled if the length of the tuple is unknown (and I’m unsure whether they can be with @generated), so there’s definitely a bunch of unknown edge cases that would need to be worked out.

Well, yes and no. It’s true that julia internally can assume the type parameter layout of a tuple since it owns the type, but from a higher level it’d have to go through length again. You could make ccall a builtin function so people can’t add methods to it, but I imagine the underlying implementation of that builtin would be quite ugly (not to mention, it would be a guaranteed call into the julia runtime before finally ending up in the library you ultimately want to call). The current ccall doesn’t have that limitation, because everything already happens in the generated code once the call is emitted, not at runtime.

It’s required to be a tuple expression because that’s one of the few per-element typed containers Core has available that early in the game. The tuple (as shown above) ultimately ends up as a Core.svec, which even your hypothetical would likely need to end up as (unless you want every such ccall to dynamically allocate an array that can’t ever be eliminated, due to crossing the FFI border :grimacing: ). It ending up as a Core.svec during lowering, resulting from a literal tuple, also means we don’t have to trust length or the type parameter of a tuple at all.

Less hacky, maybe to your taste:

julia> ccaller(fun, retT, argT, args...) = ccaller(Val(fun), retT, argT, args...)
       ccaller(fun::Val, retT, argT, args...) = ccaller(fun, retT, Tuple{argT...}, args...)
       @generated ccaller(::Val{fun}, ::Type{retT}, ::Type{argT}, args...) where {fun, retT, argT<:Tuple} = let;
           length(argT.parameters) == length(args) || error("bruh not cool")
           quote @ccall $fun($((:(args[$i]::$(argT.parameters[i])) for i=1:length(args))...))::$retT end
       end
ccaller (generic function with 4 methods)

julia> ccaller(:pow, Cdouble, (Cdouble,Cdouble), 2, 10)
1024.0

Splattable:

julia> ccaller(:pow, Cdouble, ((Cdouble,Cdouble)...,), (2, 10)...)
1024.0

Wow, amazing! I didn’t think it’d actually be possible to turn this into a standard Julia function! And there doesn’t seem to be any overhead either (at least with these simple examples)! This even seems like something that’d be great to have in the standard library. Would you mind if I turn this into a package?

I have a few small questions/comments, too:

  • Why did you use let – just to avoid having to type function? :smiley:
  • Similarly, is there any particular reason why you used quote/end instead of :()?
  • Sticking with ccall instead of @ccall would eliminate the need to manually check the lengths, because ccall does that anyway.
  • It seems like you get infinite recursion when passing something that’s not a tuple as argT.

So my version of this would be

@generated function Ccall(
	::Val{func}, ::Type{returntype}, ::Type{argtype}, args...
) where {func, returntype, argtype <: Tuple}
	return :(ccall(
		func, returntype, ($((x for x in argtype.parameters)...),),
		$((:(args[$i]) for i in eachindex(args))...)
	))
end
Ccall(func::Val, returntype, argtype::Tuple, args...) =
	Ccall(func, returntype, Tuple{argtype...}, args...)
Ccall(func, returntype, argtype::Tuple, args...) =
	Ccall(Val(func), returntype, argtype, args...)
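
(A usage sketch, assuming the definitions above and mirroring the ccaller examples:)

args = (2, 10)
Ccall(:pow, Cdouble, (Cdouble, Cdouble), args...)   # expected to give 1024.0, as with ccaller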

Would you mind if I turn this into a package?

Be my guest!

Why did you use let – just to avoid having to type function? :smiley:

Yup lol, force of habit

Similarly, is there any particular reason why you used quote/end instead of :()?

Nope, either is fine

Sticking with ccall instead of @ccall would eliminate the need to manually check the lengths, because ccall does that anyway.

Yeah but then you’d rely on ccall; I wanted to write something that could conceivably replace it because ccall seems hacky as all heck to me

It seems like you get infinite recursion when passing something that’s not a tuple as argT.

Oops! Fixed :pray:

OK, cool!

Hm, I thought @ccall was something like a wrapper around ccall, but they actually both just turn into Expr(:foreigncall). Makes sense though, since @ccall has more functionality than ccall. In that case, @ccall really just seems like the better option in all cases!
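
For instance, @ccall has a dedicated varargs syntax, where the arguments after the semicolon are the varargs:

julia> @ccall printf("%d %d %d\n"::Cstring; 1::Cint, 2::Cint, 3::Cint)::Cint;
1 2 3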


Yes, they use the same underlying functionality, as I pointed out above:

This is out of necessity, because there’s no other way to communicate to the compiler that you want to do an FFI call. Anything more than generating an Expr(:foreigncall) requires modifying codegen and adding a custom compiler pass, which is what I tried to get across in my posts above. Maybe that got lost in translation somewhere.