[RFC] What should the arithmetic within FixedPointNumbers be?

:smile:

3 Likes

There is also the generated function solution

@generated myop(a, b) = :($(Symbol(DEFAULT_ARITHMETIC, :_myop))(a, b))

which will compile depending on DEFAULT_ARITHMETIC to the correct function on first use:

DEFAULT_ARITHMETIC = :wrapping
myop(1, 2) # calls `wrapping_myop(1, 2)`

edit: though on second thought there are caveats for composability
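One such caveat, sketched here with toy stand-in backends: the generator runs once per argument-type signature and its result is cached, so a later change to a non-const DEFAULT_ARITHMETIC is silently ignored:

wrapping_myop(a, b) = a + b   # toy backends, for illustration only
saturating_myop(a, b) = a - b

DEFAULT_ARITHMETIC = :wrapping
@generated myop(a, b) = :($(Symbol(DEFAULT_ARITHMETIC, :_myop))(a, b))

myop(1, 2)                        # 3: the generator bakes in wrapping_myop
DEFAULT_ARITHMETIC = :saturating
myop(1, 2)                        # still 3: the generated body was cached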

1 Like

This “conflict” meant a conflict in “what you want in common notations”. Although the inductive and deductive approaches are two sides of the same coin, I was trying to articulate what we want to do and what we don’t want to do.


Changing my question, under what conditions would you give the green light for releasing FixedPointNumbers v0.9.0?

I’m trying to avoid rework. This isn’t a matter of my workload; rather, changing an API once it is public will cause a lot of confusion across the dependency chain.

I’m considering the following steps

  • Choose Option 2
  • Suspend work on CheckedArithmeticCore
  • Move arithmetic definitions/implementations into a “single” submodule (FixedPointArithmetic)
  • Make wrapping_* and saturating_* private.

Please note that these steps do not dictate our long-term direction.

1 Like

I’ll give another nod to FPGA development. At least for the use cases I’ve worked on, I’d want something like Fixed{Int21,11}, for example. I don’t know if there’s a package providing weird-sized integers. I’d also want full control over the rounding mode and overflow behavior of every single operation, so a global setting probably wouldn’t work. Finally, I’d want to just specify the output type for each op. Yes, all very low-level stuff, but still much more convenient than VHDL :stuck_out_tongue:.

Of course I currently have no plans to implement bit-true firmware modeling in Julia and don’t know if this use case is in-scope for FixedPointNumbers.jl.

For image processing not on hardware, I usually take the view suggested by @stevengj and treat fixed point (and even Float16) as an “at rest” format and promote to floating point for any number crunching. Otherwise a lot of hard thinking is usually required.
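For example, promoting once at the boundary and doing all the arithmetic in floating point (float on an 8-bit Normed gives Float32):

using FixedPointNumbers

a = N0f8[0.2, 0.8]   # “at rest” storage format
b = float.(a)        # Vector{Float32}: promote once at the boundary
sum(b)               # 1.0f0, no overflow worries during the number crunching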

3 Likes

I’m interested in Fixed numbers from a DSP perspective (probably none of this applies to Normed). There are a few inconsistencies here, like the following.

julia> x1, x2 = 0.5Q0f7, 1.75Q1f6
(0.5Q0f7, 1.75Q1f6)

julia> x1 + x1   # fails
-1.0Q0f7

julia> x2 + x2  # fails
-0.5Q1f6

julia> x1 + x2   # widens
2.25Q8f7

julia> x1 * x1   # OK
0.25Q0f7

julia> x2 * x2  # fails
-0.94Q1f6

There are no perfect options here. Any proposal I can think of to fix these issues has downsides. The best solution may be to have no auto-widening anywhere and let the user manage their own word sizes.

Maybe an operation could have an output type specified with something like

add(Q9f6, 1.23456Q1f14, 100.0Q7f8)

which outputs 101.23Q9f6 instead of 101.23456Q17f14. The function would automatically truncate/round, and wrap/saturate according to settings, in the most efficient way possible, and it’s the user’s responsibility to manage their own settings.
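A minimal sketch of such an operation, assuming a truncate-and-wrap policy (this 3-arg add is hypothetical, not part of the package):

using FixedPointNumbers

# Hypothetical 3-arg add: form the exact sum in a 64-bit integer at the
# finer fractional scale, shift to the destination scale (arithmetic shift,
# i.e. truncation toward -inf), and wrap into the destination's raw type.
function add(::Type{Fixed{T,f}}, x::Fixed{Tx,fx}, y::Fixed{Ty,fy}) where {T,f,Tx,fx,Ty,fy}
    g = max(fx, fy)                              # common fractional width
    s = (Int64(reinterpret(x)) << (g - fx)) +
        (Int64(reinterpret(y)) << (g - fy))      # exact sum at scale 2^-g
    raw = g >= f ? s >> (g - f) : s << (f - g)   # rescale to 2^-f
    reinterpret(Fixed{T,f}, raw % T)             # wrap into the destination
end

add(Q9f6, 1.23456Q1f14, 100.0Q7f8)  # ≈ 101.23Q9f6 (exactly 6479/64)

A saturating variant would clamp raw to typemin/typemax of T instead of wrapping, and a rounding variant would add half an LSB before the shift.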

Another thing that would be nice to remove is the restriction that for word size w and a Qxfy number, x+y=w-1. It would be good to make that an inequality instead.

I realise this is all low priority as most users seem to be interested in Normed types for image processing.

1 Like

I find this idea appealing as a user because I think it’s good for users to be able to specify their behavior in a granular way.

On the other hand, some users don’t like having to call float() manually (as is currently necessary).

Implicit and explicit conversions can coexist, but mediation is required. In the case of the 3-arg methods, conflicts over the operator symbols may not be much of a problem, since we cannot use infix notation anyway.

Now, adding 3-arg wrapping_*/saturating_* methods to specify the destination type doesn’t bother anyone at the moment. (I don’t think the 3-arg checked_* methods are necessary right now. There is little advantage in going to the trouble of specifying a destination type which can cause an overflow, at extra cost. Perhaps they are only useful for debugging the 3-arg wrapping_*.)

Therefore, we still need to discuss the usability and the interface to simplify the notation, but I am thinking of adding the 3-arg methods. However, as a practical matter, the signed Normed needed to fully represent x::N0f8 - y::N0f8 is not yet supported. (https://github.com/JuliaMath/FixedPointNumbers.jl/pull/143) I think the 3-arg methods are valuable with (and only with) support for signed Normed / unsigned Fixed.

However, changing Normed{T <: Unsigned, f} to Normed{T <: Integer, f} will result in vicious bugs in code that assumes Normed is unsigned. We can’t emit a depwarn because such code continues to work properly for unsigned Normed.

I can draw up a roadmap to support the 3-arg methods and unsigned Fixed / signed Normed step by step. (Of course, I’ll need your help to make that roadmap a reality.) The problem with supporting them is that it is time-consuming, but from a different perspective, that provides time for discussions about usability across the ecosystem, including the downstream packages.

Or is it better to hard switch to a particular method without prolonging the discussion?

Or is anyone planning to introduce a game changer (e.g. a revamped arithmetic system in Julia’s Base/Core, a new “pixel type” support in JuliaImages, etc.) in the next six months to a year or so?

xref: https://github.com/JuliaMath/FixedPointNumbers.jl/issues/142#issuecomment-699642981

coming late to the party…

TL;DR

  • checked OK for CPU, useless for DSP

  • v0.9 : op(x, y, Val(:checked))

  • ? : operators wrapped as op(dst_type, x, y), op!(dst, x, y)

  • v0.11 : standard operator symbols use overflow behaviour defined implicitly via operand types (and possibly result type)

  • future : bit-true modeling via masks

  • far future : a profile macro to track floating point values to give hints on good candidate Fixed types

I’m also from a DSP / FPGA perspective. Imho the package should start “low-level” (don’t overburden releases) for low-level use cases (DSP), and further versions could include more elaborate features.
The Readme should make clear that the package is aimed at resource-efficient implementation on digital hardware, not at exact arithmetic (Rational exists for that; it also has a performance discussion and wants to keep exactness).

I see uses for a checked behaviour which widens the memory location only when needed. Some of you have discussed it a lot. You want to boost performance but also keep accuracy when needed, sacrificing a portion of the performance. This is a valid use case. But it is for general CPUs, not for DSPs;
this is a potentially conflicting goal / purpose of the package and should be documented / discussed clearly!

Real DSP hardware has fixed registers; even on FPGAs you “can’t” change the register width at runtime, you specify it in the design phase. (For the package, reconfigurable computing is far out of scope…)
The ideal user / use case I have in mind is me :joy: :
Implement relatively complex algorithms in Julia with Floats (as a reference), refactor in Julia to Fixed, simulate overflow behaviour with fixed sizes, cross-compile / synthesize to the target platform.
The only use I could imagine for checked behaviour here is a very naive fixed-point simulator: you would know, for your inputs, which operations overflowed and would have propagated expensive wide registers through your calculations. A bit-true emulation, therefore, is an extremely useful feature for high-level DSP / FPGA design. You end up with some “uncommon” bit widths, or aim for 18, 24, 40 bits (common in DSPs). See posts / user stories by @chrisvwx @bhawkins
Note that these simulations are not efficient by themselves; they are a tool for designing efficient implementations in hardware!
For the far future, bit-true types and a profile / simulation macro would be a dream. Dealing with checked is easy: error on the first FixedPoint{..., checked}, as it will not work in a hardware implementation.
There is no harm in using / implementing checked behaviour, we can stay friends.


at 1.)
Granularity is a must-have, so expose 3-op functions of the form op3(dst_type, x, y) to closely resemble RISC R-type mnemonics. For low-level people this is the most convenient, imho. Likewise 2-op forms op2(dst_type, src), e.g. bitwise not or conversion.
Another variant is in-place operations, provided that the result variable is preallocated. I assume that with meta-programming / generators the effort for the in-place variants will be negligible (no experience on my side here).
Exposed convenience methods add(dst_type, x, y) could be wrappers for forms like add(x, y, Val(:checked)), as sketched below.
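A toy sketch of that layering, using plain integers instead of Fixed so it is self-contained (the name add3 and the behavior symbols are hypothetical):

# Primitive: destination type plus an explicit behavior selector.
add3(::Type{D}, x, y, ::Val{:wrapping})   where {D<:Integer} = (x + y) % D
add3(::Type{D}, x, y, ::Val{:saturating}) where {D<:Integer} =
    clamp(x + y, typemin(D), typemax(D)) % D

# Convenience wrapper: same signature minus the behavior selector.
add3(::Type{D}, x, y) where {D<:Integer} = add3(D, x, y, Val(:wrapping))

add3(Int8, 100, 100)                    # -56 (wraps modulo 2^8)
add3(Int8, 100, 100, Val(:saturating))  # 127 (clamped to typemax(Int8))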

at 1.), 2.), 4.)
My favourite would be type parameters for the behavior, so you could set a default behavior for some subsequent computations / infix operators. Imho propagating the behavior is convenient up to the point where you want full control via an operation or a type conversion. For checked, do you also want to propagate the checks, or stop at 64 bits?
In my experience the overflow behavior is the same in one “functional unit” and could change at “interfaces”.
The goal is to make the common case convenient and I consider pipelined calculations / for loops common. (e.g. all accumulators in an IIR filter are of the same type, before and afterwards one may need another overflow behavior).

The “behavior type parameter” would determine the behavior when the variable is written to (conversion, 3-arg). Operators would be defined only for matching behaviors, with type inference from the operands (2-arg, can be infix). Standard operators (+, -, *, /, %) can be used in infix notation, maybe even near-standard ones for bit manipulation.
Thus :+1: for add(x, y, Val(:checked)) as a starting point.

No promotion between different behavior parameters: error, to inform the user that some thinking is needed (a dispatch sketch follows the examples below). Imho implementation of the clear cases can start first; those which need some discussion can wait for the next releases. Int18 and Int24 can wait for the future.

Fixed{..., sat} + Fixed{..., wrap} => error # not defined / allowed
Fixed{Int8, f1, wide} * Fixed{Int16, f2, wide} => Fixed{Int24, f1+f2, wide} # ok, fractional bits add
Fixed{Int8, f1, wide} + Fixed{Int8, f1, wide} => Fixed{Int9, f1, wide} # wide avoids overflow
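A self-contained dispatch sketch of the “no promotion between behaviors” rule, with a toy MyFixed type standing in for the real one (all names hypothetical):

abstract type OverflowBehavior end
struct Sat  <: OverflowBehavior end
struct Wrap <: OverflowBehavior end

struct MyFixed{T<:Integer, f, B<:OverflowBehavior}
    raw::T
end

# Same-behavior addition is defined per behavior parameter...
Base.:+(x::MyFixed{T,f,Wrap}, y::MyFixed{T,f,Wrap}) where {T,f} =
    MyFixed{T,f,Wrap}(x.raw + y.raw)  # native integer overflow wraps
Base.:+(x::MyFixed{T,f,Sat}, y::MyFixed{T,f,Sat}) where {T,f} =
    MyFixed{T,f,Sat}(clamp(widen(x.raw) + widen(y.raw), typemin(T), typemax(T)) % T)

# ...while mixed behaviors have no method at all, so Julia raises a
# MethodError: the “error to inform the user” comes for free.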

We really have to discuss promotion and widening:

Fixed{Int8, f1, sat} + Fixed{Int8, f1, sat} => Fixed{Int8, f1, sat} # ok
Fixed{Int8, f1, sat} + Fixed{Int8, f2, sat} => error # adding two values
# with different scaling is not defined; safer than promotion

Fixed{Int8, f1, sat} * Fixed{Int8, f2, sat} => Fixed{Int8, f1+f2, sat} # ok, scaling adjusted
# if that is not ok, one would use an explicit call here
z = multiply(Fixed{Int8, x.f + y.f, sat},  x, y) # only T, T -> T supported for all operations

# DISCUSS
Fixed{Int8, f1, sat} * Fixed{Int16, f2, sat} => Fixed{Int16, f1+f2, sat} # promote to the biggest
Fixed{Int8, f1, sat} * Fixed{Int16, f2, sat} => Fixed{Int24, f1+f2, sat} # this is actually widening behaviour!
Fixed{Int8, f1, sat} * Fixed{Int16, f2, sat} => error # most restrictive and transparent

If widening is specified, the result type will differ from the operand types.
The scaling behavior has to be discussed; imho implicit adjustment of the scaling is common and should be the default, with the 3-arg form as a workaround when you want a different scaling.
The user has to split a line with multiple calculations into sub-expressions when she wants to combine different behaviors.

I somehow dislike setting the behaviour per code block as the only way to do it, but I am unsure why. Maybe I don’t see how this could easily be refactored incrementally from a floating-point reference implementation.

The package hasn’t had a lot of recent development, but now there’s a new possibility: use Preferences.jl to let the user configure the desired behavior of the package. This would be a compile-time preference, so there would be no overhead to making it configurable (other than needing to recompile everything if you change it).
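For concreteness, a sketch of how such a compile-time preference could gate the default (the preference key “arithmetic” and the name default_add are hypothetical; wrapping_add/saturating_add/checked_add are the per-behavior functions discussed above):

# Inside the package: read the preference once, at compile time.
using Preferences
const DEFAULT_MODE = @load_preference("arithmetic", "checked")

@static if DEFAULT_MODE == "wrapping"
    const default_add = wrapping_add
elseif DEFAULT_MODE == "saturating"
    const default_add = saturating_add
else
    const default_add = checked_add
end

# In a user's project, to pick a different default (takes effect after
# the package is recompiled):
#   using Preferences, FixedPointNumbers
#   set_preferences!(FixedPointNumbers, "arithmetic" => "saturating")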

I’d be inclined to choose option 5 (all math operations promote to float) as the default, because it’s the safest behavior and causes errors only in cases where results are truly nonsensical (e.g., trying to convert(N0f8, 1.1)). But Preferences would make it very easy to configure different behavior, and even picking different behavior in different Projects.

The use of preferences is a very interesting idea, but the big problem with this global setting is spooky action at a distance.
FixedPointNumbers.jl is used by many other packages, each of which has a different view of how fixed numbers should behave: they share only the name and the storage layout, but differ in the expected arithmetic.

In image processing, converting to floats so that nothing ever overflows/underflows is natural. In prototyping code that will run on an FPGA, this is not what we want: we need the arithmetic to be true fixed point, with very fine control over every operation. Using two such high-level libraries in one project, with a single global setting for which arithmetic to use, will break the assumptions of at least one of them, becoming a huge source of correctness bugs.

Either we need to give up using FixedPointNumbers for both use cases and have a separate library for each arithmetic, or we need a much more complex way to select the desired operations, with explicit macros similar to how the @inbounds macro works.

Or maybe we can discuss a change in the core Julia language where each package can specify the preference setting at its import/using statement:

using FixedPointNumbers @preference promote_float=true

which would effectively split FixedPointNumbers into two different packages with different pre-cached DLLs loaded. I’m just speculating here, with no strong opinion other than that the proposed Preferences approach is not a viable solution.

2 Likes

Why not promote, e.g., N0f8 * N0f8 -> N0f16 (from UInt8 to UInt16)? Isn’t N0f8 one of the most important types there? And do similarly for N0f16.

Except for /, div, rem, etc.; those could go to Float64 (or Rational), as with the regular integers. And are they common enough to worry about in speed-critical code?

We could actually do: N0f8 + N0f8 → N0f9

For accumulation, go to N57f8, which is bad since it doesn’t fit in 64 bits? Or just to N56f8 (with or without overflow checking)?

Continuing the pattern, N0f64 * N0f64 would go to N0f128, but neither type is actually much used? And except for that last promotion, all the promotions are very fast and fit in one (64-bit) register. Likely N0f128 (or even N0f64) shouldn’t provide any operations, and only be there to scale back into a smaller format. Or possibly go to BigFloat?

I’m basically thinking of the fact that e.g. Fixed{Int8,7} only goes up to ≈ 0.992, but N0f8 goes to 1.0. Does that matter?

Preferences are Project-specific. So you can have, e.g., one global default setting and then customize it as desired in each project environment. It’s even stackable by directory: if you put a bunch of projects in separate folders inside an FPGA_projects folder, you can set the LocalPreferences in the FPGA_projects folder and it should propagate to all the specific environments below it.

Does that address your concern, or are there still issues we’d need to address?

Because

acc = zero(eltype(list))  # e.g. N0f8
for x in list
    acc += x              # if + promotes (widens or goes to float), acc changes type across iterations
end

is now type-unstable.

I think the issue is that, like pretty much all global state, it isn’t composable, so if PkgX has some functionality that requires one choice and PkgY has some functionality that requires another choice, they can’t be used together if that choice must be made globally. I think I don’t really know enough about uses of fixed point arithmetic to know if this is a common situation though.

In my experience though, code initially developed as application/user-code/scripts/etc can often end up getting promoted to library-level code as a codebase develops, which makes me wary of requirements like “intermediate packages don’t get to make this choice, only the end-user does”.

3 Likes

You can always add this to your library:

using Preferences, FixedPointNumbers

if load_preference(FixedPointNumbers, "fpn_promotion", "overflow") != "overflow"
    error("to use this library, FixedPointNumbers must be using :overflow arithmetic")
end

If the only worry about introducing a type parameter is breaking downstream packages, you could just introduce a new supertype and make the current ones an alias for that type with a particular type parameter, no?
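A sketch of that migration trick (standalone toy; FixedB and the behavior parameter are hypothetical, and Fixed here shadows the real type):

# New type with an extra behavior parameter B...
struct FixedB{T<:Integer, f, B} <: Real
    raw::T
end

# ...and the old name kept as a parametric alias, so existing code that
# writes Fixed{Int8,7} keeps compiling and gets the current behavior.
const Fixed{T,f} = FixedB{T, f, :checked}

Fixed{Int8,7} === FixedB{Int8, 7, :checked}  # true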

I don’t understand the attraction of using Preferences here. If the code for all of the different arithmetic semantics is going to be present in the package anyway, dispatching on a type parameter seems like it will be the same amount of code as enabling it conditionally, and is more composable and flexible.

5 Likes

How does that work? +(param, x, y)? What fraction of arithmetic operations are written that way? Especially in packages where img .- background may or may not use fixed-point arithmetic? Would you really want to write every operation that way?

No, you have different methods for +(x::Fixed{:checked}, y::Fixed{:checked}) vs +(x::Fixed{:saturating}, y::Fixed{:saturating}) vs. …, i.e. you dispatch on a new type parameter of the fixed-point type(s).

It sounds like you’re proposing to implement all these methods anyway in a Preferences-based approach, just without a type parameter and instead putting an if statement around them to decide which version is included at load time. Why not use dispatch instead?

3 Likes

Sure, we can use dispatch. But it would be nice to control the default, and that’s orthogonal to having these type parameters.

The issue is, what type does load("myimg.jpg") return? What about workflows like

  1. Set to a safe value (e.g., promote to floating point)
  2. Test my application and make sure it’s free of errors
  3. Set to overflow for optimal performance
  4. Run benchmarks etc

In that case the global setting is very nice, and local to my task. But anyone who wants fine-grained control can get it with the type parameters.

But we can put that setting on the IO packages. I guess the key place you’d still like to have it for FPN is in what N0f8(x) builds when you don’t specify the overflow behavior.