[ANN] MutableArgumentContracts.jl (experimental, not yet registered)

Good question. I wanted to avoid any confusion with the ! negation operator, even though it’s not an issue on the function-definition side. Also, the convention for mutating functions is for ! to appear after the name. In a speculative 2.0, Julia could potentially formalize ! as a suffix that invokes similar functionality without the extra syntax overhead, e.g. foo!(x!, y!::DenseVector, z).

2 Likes

I think that should work for dispatch, because subtyping of the type-parameter constraints implies subtyping of the iterated unions. For example, AbstractVector is a supertype of both Vector and UnitRange, and X{S} where S<:AbstractVector is a supertype of both X{Vector} and X{UnitRange}. This isn’t exactly parameter covariance, as the type constraints are not the parameters, and TypeVar parameters like S are not types, let alone subtypes. Involving iterated unions and type constraints does complicate things, but I don’t think that’ll be much of an issue. The 2-tuple idea and the iterated-union idea add about the same extra hurdles to get to the important type, e.g. Tuple{Ref, Integer}.parameters[2] versus Ref{<:Integer}.var.ub to get to Integer.
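To make the subtyping claim concrete, here is a small sketch; X is a hypothetical parametric type standing in for the one under discussion, not anything from the package:

```julia
struct X{S} end  # hypothetical stand-in for the parametric type under discussion

# Subtyping the constraint implies subtyping the iterated union:
@assert Vector <: AbstractVector && UnitRange <: AbstractVector
@assert X{Vector} <: (X{S} where S<:AbstractVector)
@assert X{UnitRange} <: (X{S} where S<:AbstractVector)

# Both representations take one extra step to reach the important type:
@assert Tuple{Ref, Integer}.parameters[2] === Integer  # 2-tuple idea
@assert (Ref{<:Integer}).var.ub === Integer            # iterated-union idea
```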

The exception for mutations of obvious and possibly default arguments is documented here:

Functions related to IO or making use of random number generators (RNG) are notable exceptions: Since these functions almost invariably must mutate the IO or RNG, functions ending with ! are used to signify a mutation other than mutating the IO or advancing the RNG state. For example, rand(x) mutates the RNG, whereas rand!(x) mutates both the RNG and x; similarly, read(io) mutates io, whereas read!(io, x) mutates both arguments.
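A minimal illustration of that convention using the Random stdlib, with an explicit RNG so the state mutation is visible:

```julia
using Random

rng = MersenneTwister(1234)  # explicit, seeded RNG
x = zeros(3)

rand(rng)      # advances the RNG state; returns a freshly allocated value
rand!(rng, x)  # advances the RNG state *and* overwrites x in place
```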

My personal notation is to keep ! for callables that mutate inputs and to suffix the variables for said inputs with !!. I don’t really use this much, because the existing convention and descriptive names are almost always good enough; I’ve only done this when I really need the variable to stick out to me.
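A hypothetical illustration of that notation (the function and names below are made up for the example, and ! is legal inside Julia identifiers after the first character):

```julia
# `!` on the callable marks that it mutates; the `!!` suffix marks
# which arguments are the ones being mutated.
function scale_to_unit_sum!(v!!::AbstractVector)
    v!! ./= sum(v!!)  # in-place: the caller's array is overwritten
    return v!!
end

v = [1.0, 1.0, 2.0]
scale_to_unit_sum!(v)  # v is now [0.25, 0.25, 0.5]
```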

1 Like

You can use Bumper.jl for the cases I think you have in mind. [And heap allocations aren’t actually slow; they are rather fast, they just imply GC pressure. The optimizer could, as in Mojo, automatically free allocations in many cases, doing what Bumper does manually.]
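A minimal sketch of what using Bumper.jl looks like, assuming its @no_escape/@alloc API and that the buffer never leaves the block:

```julia
using Bumper

# Sum over a scratch buffer that lives in a bump allocator rather than
# the GC heap; the buffer must not escape the @no_escape block.
function sum_scratch(n)
    @no_escape begin
        buf = @alloc(Float64, n)  # bump-allocated, reclaimed when the block exits
        buf .= 1.0
        sum(buf)
    end
end
```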

1 Like

Even if we discount the time taken by the GC, typical heap allocations take longer, and less predictably bounded, time than stack allocations do, because the allocator must find large enough blocks of free memory. Heap allocations can get much faster when limitations on how you allocate can be communicated, as with Bumper.jl.
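One quick way to see the stack/heap difference is Base's @allocated macro: an isbits Tuple can stay on the stack, while a Vector goes through the heap allocator.

```julia
stack_alloc() = (1.0, 2.0, 3.0)   # isbits Tuple: no heap allocation needed
heap_alloc()  = [1.0, 2.0, 3.0]   # Vector: always heap-allocated

measure() = (@allocated(stack_alloc()), @allocated(heap_alloc()))
measure()                          # first call pays compilation cost
stack_bytes, heap_bytes = measure()
```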

2 Likes