Floats or Integers or Vectors with specific ranges?

If you want to extend a function from another module, you have to preface the name with the module (see the error message).
For example, instead of show, I had to say Base.show.

In your case, that means (you also need to define convert methods, so Julia knows how to do the conversion):

Base.promote_rule(::Type{Int}, ::Type{FiveToTwenty}) = Int
Base.convert(::Type{Int}, x::FiveToTwenty) = Int(x.val)
FiveToTwenty(14) + 2
FiveToTwenty(14) + 20
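
Put together, a minimal runnable sketch might look like this (the `FiveToTwenty` definition below is my assumption of the wrapper type being discussed, not code from this thread):

```julia
# Hypothetical wrapper for integers in 5:20, as discussed above.
struct FiveToTwenty <: Unsigned
    val::UInt8
    function FiveToTwenty(val)
        @assert 5 <= val <= 20
        new(UInt8(val))
    end
end

# Teach Julia how to convert and promote, so mixed arithmetic works:
Base.convert(::Type{Int}, x::FiveToTwenty) = Int(x.val)
Base.promote_rule(::Type{Int}, ::Type{FiveToTwenty}) = Int

FiveToTwenty(14) + 2    # promotes both operands to Int, giving 16
FiveToTwenty(14) + 20   # likewise, giving 34
```

The generic `+(x::Number, y::Number)` falls back to `+(promote(x, y)...)`, which is why defining just these two methods is enough for mixed arithmetic.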

The output is a regular Int (64-bit on 64-bit systems).
That doesn’t work for showing, though.
Honestly, I haven’t messed with displays for custom types nearly enough, so I’m not really sure what the best way to do things is. If you delete the <: Unsigned part, it will display automatically as:

julia> struct FiveToTwenty3
           val::UInt8
           @inline function FiveToTwenty3(val)
               @boundscheck begin @assert val >= 5 && val <= 20 end
               new(UInt8(val))
           end
       end

julia> FiveToTwenty3(8)
FiveToTwenty3(0x08)

julia> isbits(ans)
true

A fallback show for numeric types doesn’t seem to be defined. The above is normal, although not especially pretty.
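
If you want to keep `<: Unsigned`, one option (though I’m not sure it’s the canonical one) is to define `Base.show` for the type yourself. A sketch, using a hypothetical `FiveToTwenty4` analogous to the struct above:

```julia
# Hypothetical variant that keeps the Unsigned supertype.
struct FiveToTwenty4 <: Unsigned
    val::UInt8
    function FiveToTwenty4(val)
        @assert 5 <= val <= 20
        new(UInt8(val))
    end
end

# Without this method, displaying a FiveToTwenty4 throws a MethodError,
# since there is no fallback show for custom Unsigned subtypes.
Base.show(io::IO, x::FiveToTwenty4) = print(io, "FiveToTwenty4(", Int(x.val), ")")

FiveToTwenty4(8)   # now displays as FiveToTwenty4(8)
```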

If you’re adding a lot of new types, you can import the function once and then drop the Base. prefix:

import Base: promote_rule
promote_rule(::Type{Int}, ::Type{FiveToTwenty}) = Int

In Julia, you can get range types like

julia> typeof(1:20)
UnitRange{Int64}

julia> typeof(linspace(0.3, 17, 100))
StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}
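
Both of these are lazy range types rather than materialized arrays; they store only the endpoints (and step), not the elements. A small sketch — note that on Julia 1.0 and later, linspace was replaced by range:

```julia
# UnitRange stores just first and last; nothing is allocated.
r = 1:20
length(r), first(r), last(r)          # (20, 1, 20)

# The modern spelling of linspace(0.3, 17, 100):
l = range(0.3, stop = 17, length = 100)
length(l)                              # 100
first(l), last(l)                      # the endpoints are hit exactly
```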

I love Julia’s innovations.
Multiple dispatch is incredible.
I think it can take a while to get used to, but the way it lets you get low- or zero-overhead, performant abstractions with extreme generality… just mind-blowing!

Like, about a month ago I was working on a problem, and my answers were nonsense.
The functions involved inverting and multiplying a few (ill-conditioned) matrices. Simply broadcasting BigFloat over the inputs with BigFloat.(...) → everything worked, the answers were right, and I had derivative-enhanced quadrature rules.
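
To illustrate the pattern (this is a made-up example, not the original problem): a Hilbert matrix is notoriously ill-conditioned, and the generic linear-algebra code runs unchanged at higher precision once the inputs are BigFloats.

```julia
using LinearAlgebra

# Solve H x = b for a known solution x = ones(n).
n = 12
H = [1 / (i + j - 1) for i in 1:n, j in 1:n]   # cond(H) ≈ 1e16

# Float64 path: forming b and solving both happen at 64-bit precision.
x64 = H \ (H * ones(n))

# BigFloat path: broadcast BigFloat over the input, then run the SAME code.
Hbig = BigFloat.(H)
xbig = Hbig \ (Hbig * ones(BigFloat, n))

# The high-precision solve recovers x = ones(n) far more accurately:
Float64(norm(xbig .- 1)) < norm(x64 .- 1)
```

The same `\` works for both element types because the generic LU factorization is written against the abstract Number interface — this is the zero-overhead generality mentioned above.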

More impressive of course is how automatic differentiation “just works” on most code, or how @ChrisRackauckas got a bunch of functions to also work on GPUArrays…
