Promotion and literals


#1

So, crazy idea time (I am not sure whether this is a good idea):

It is somewhat painful to deal with non-machine-sized values. For example, an increment of an Int16 reads as x += Int16(1). I always thought that this was unavoidable; but we now have x^4 and x^(-4), which are lowered differently than a generic call ^(x::T, n::Int).

The crazy idea would be to lower x + 4 into +(x, convert(typeof(x), 4)). Advantage: shorter code. I could write x += 1 for increments and have it do the right thing. Most of the time, when I add a literal to something, I do not want to promote. The same goes for multiplication, and the same holds for Float32.
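To make the status quo concrete (this is real current behavior, not the proposal):

```julia
x = Int16(1)
typeof(x + Int16(1))  # Int16: same-type arithmetic does not promote
typeof(x + 1)         # Int (Int64 on 64-bit): the literal 1 is an Int

# Under the proposed lowering, x + 1 would instead become
# +(x, convert(typeof(x), 1)) and the result would stay Int16.
```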

Disadvantage: as with the literal pow, this requires people to be very careful with associativity and parentheses, and might be too error-prone:

4^(-1)
# 0.25
4^(-2+1)
# ERROR: DomainError with -1:
# Cannot raise an integer x to a negative power -1.

# Hypothetical, under the proposed lowering:
x::Int16
x += 2            # still Int16
x = x + 1 - 1     # still Int16; + and - are left-associative
x = x + (1 - 1)   # now Int64: (1 - 1) is evaluated as Int before the addition

In total I think this is a bad idea (simple predictability of rules before conciseness or convenience), but the new negative integer literal powers already break this anyway; and then, such a change would be awfully convenient.


#2

https://github.com/stevengj/ChangePrecision.jl is perhaps relevant.


#3

Also relevant: https://github.com/JuliaGPU/CUDAnative.jl/issues/25 (which includes a possible fix using everyone’s favorite package from the future, Cassette.jl).


#4

We’ve actually discussed this in the past. It’s not super crazy and could basically be handled by emitting LazyParse("1") style objects that parse themselves in appropriate precision if involved in arithmetic (which e.g. would make it easy to work with literals and decimal floats). However, there was quite a bit of concern that this would over-tax the compiler on the one hand, and be confusing on the other hand (e.g. f(1) wouldn’t be able to dispatch to f(::Int) anymore). Of course you can fix that problem by breaking referential transparency, but that’s always extremely controversial (the literal pow stuff seems to have gone ok, and I do think there’s a stronger argument that it may be ok there, but still let’s avoid doing too much of that).
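A minimal sketch of the LazyParse idea (the type, its name, and the promotion rule are all hypothetical; the dispatch problem described above is exactly that LazyParse("1") is not an Int):

```julia
# Hypothetical: a literal that defers parsing until its context is known.
struct LazyParse
    str::String
end

# Parse in the precision of the other operand.
Base.:+(x::T, lit::LazyParse) where {T<:Number} = x + parse(T, lit.str)
Base.:+(lit::LazyParse, x::T) where {T<:Number} = x + parse(T, lit.str)

x = Int16(5)
x + LazyParse("1")  # Int16(6): no promotion to Int64
```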


#5

One idea I’d had, that would preserve information (unlike the current parser, where floating point values are converted to Float64 and the original string is lost, which limits the usefulness of things like ChangePrecision.jl), would be to have the parser emit something like:
Expr(:literal, 0.035), Expr(:literal, 0.035, "0.035000000000000006"), or Expr(:literal, 0.035, "35e-3"), respectively, for Meta.parse("0.035"), Meta.parse("0.035000000000000006"), and Meta.parse("35e-3"). The second argument would only need to be emitted when printing the literal value would not reproduce exactly the string that was parsed (i.e. when the input is not in "canonical" form). Retaining the original form of literals could be done only if a keyword were passed to `parse`, and the retained forms could be `SubString`s (of the input string) so as not to allocate a lot of extra space for string literals.

This :literal expression type could also be used to solve the type instability of Meta.parse, allow it to simply extend Base.parse, and be more consistent with the other parse methods.
Instead of Meta.parse, you could have parse(Expr, str), which would always return something of type Expr, instead of sometimes returning a type that has a literal representation, such as String, Int, or Float64.
(Returning Expr(:literal, ...) would really only be necessary at the top level; other literal values (if canonical, or if no keyword to preserve information were passed) could simply remain plain values, as happens currently.)
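For reference, the current type-unstable behavior of Meta.parse that this would address:

```julia
Meta.parse("0.035")    # 0.035  :: Float64 — a bare value, not an Expr
Meta.parse("\"hi\"")   # "hi"   :: String
Meta.parse("x + 1")    # :(x + 1) :: Expr
```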


#6

Hmm. After seeing that I’m not alone with this problem, and the fact that Rust appears to have a workable solution, I am more partial to getting something less verbose. Writing e.g. x += Float32(1.0) is not just lengthier; it is very un-generic if the code suddenly runs with x::Float64, and I am not sure how well it plays with things like AD. And x += typeof(x)(1) is not necessarily better.
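For the increment case specifically, Base already offers a generic spelling via one/oneunit, which names the type only implicitly:

```julia
x = Float32(0.0)
x += one(x)      # one(x) === 1.0f0, so x stays Float32

y = Int16(5)
y += one(y)      # stays Int16
```

This covers ±1 but not arbitrary literals like x += 4, which still need the verbose typeof(x)(4) or oftype(x, 4).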

Would literal + literal stay a literal? (Probably not, but I could imagine dispatch for basic arithmetic; one would probably want to make sure that inlining does not propagate literal-ness, because inlining would then create semantic differences.)
Would const x = literal make x stay a literal? Could one get this via const x::literal = ...?

Regarding dispatch of f(1): couldn’t there be a magical default dispatch f(x::literal) = f(eval_literal(x)), where eval_literal might even be changeable for code blocks, à la ChangePrecision.jl? How would one handle type instability? That is, if we don’t know typeof(x) at compile time, then x+1 propagates the instability (so no accidental eventual type stability by convergence to 64-bit), and might need a run-time call to eval_literal (becoming even slower!); on the other hand, having the semantics depend on inference is somewhat ugly.


#7

There’s a strong tension between dynamic, polymorphic behavior and static, monomorphic behavior here. In a language like Rust, which is static and affords less polymorphism than Julia, you can do this much more easily. In Julia, since the question of what f(...) does depends so much on what exactly ... evaluates to, it’s really problematic if what ... evaluates to depends on what f is. Since + and other arithmetic operations are just function calls with special syntax, that is essentially what this boils down to. So we’ve stuck with a hard and fast rule that the meaning of ... doesn’t change based on where it appears (aside from scope, of course).

The way I think of the literal power business is that ^2 is simply a different operation than ^. It’s slightly confusing, since x^2 looks like ^(x, 2) but is actually (^2)(x); but if you think about it this way, it makes sense.
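This is visible in the lowering: x^2 becomes a call to Base.literal_pow with the exponent lifted into the type domain, so (^2) really is its own operation:

```julia
# x^2 lowers to Base.literal_pow(^, x, Val(2)); the exponent is a
# compile-time constant available for dispatch.
Base.literal_pow(^, 3, Val(2))   # 9
Base.literal_pow(^, 2, Val(-1))  # 0.5, via the Val{-1} method
```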


#8

I have not been following this closely, but is this special casing still needed after improved constant propagation and folding?


#9

Are you referring to the literal powers? If so, then half and half. Here are the motivating use cases:

  1. Allow 2^-1 to produce a Float64 while 2^2 is an Int.
  2. Allow (2m)^2 to predictably produce a unitful quantity with m^2 units.

The latter can be addressed by constant propagation – anywhere that literal powers solve the problem, constant propagation can also help. Since m^n is inherently type unstable, these are no better or worse in the non-literal case.

The former case of 2^-1 is different. We could define Int^Int to have return type Union{Int, Float64} depending on the value of the argument. But then we’re introducing a type instability into every bit of code that uses the ^ function on integers. What we really want – and what we currently have – is for Int^Int to remain always an Int in the generic case, but to allow Int^-1 and other negative literal exponents to return a Float64.
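The asymmetry is easy to check in the REPL:

```julia
2^2     # 4   :: Int — the generic Int^Int method stays Int
2^-1    # 0.5 :: Float64 — literal exponent, routed through literal_pow

n = -1
# 2^n would throw a DomainError: the generic method cannot
# return a Float64 without making every caller type-unstable.
```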


#10

Thanks for the explanation.

julia> struct Foo end

julia> Base.literal_pow(::typeof(^), ::Foo, ::Val{p}) where p = 
    "this is exciting and terrifying at the same time (with a value of $p)"

julia> Foo()^-1
"this is exciting and terrifying at the same time (with a value of -1)"

#11

Although, it might make more sense for negative integer powers, if integer powers are not going to be type stable anyway, to return a Rational instead of a Float64. For example, x^-2 could return Rational{typeof(x)}(1, x^2).
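Today that behavior is available manually via //, and a hedged sketch of the proposal could be tried on a custom wrapper type (overloading literal_pow for Int itself would be type piracy; MyInt and its methods below are hypothetical, for illustration only):

```julia
# The manual spelling available today:
x = 3
1 // x^2            # 1//9 :: Rational{Int}

# Hypothetical: opt a wrapper type into Rational-returning
# negative literal powers.
struct MyInt
    val::Int
end
Base.:^(a::MyInt, n::Integer) = MyInt(a.val^n)
Base.literal_pow(::typeof(^), a::MyInt, ::Val{p}) where {p} =
    p >= 0 ? MyInt(a.val^p) : Rational(1, a.val^(-p))

MyInt(3)^-2   # 1//9
```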