Is it possible to force use of Int32 instead of Int64?

Is there any command that will force the use of Int32 instead of Int64?

If you do your math with Int32s you will get Int32 as the result. Is your question about how to create Int32 literals, or something else?

I mean, there are packages. I am sure that Int32 is enough for me, but the packages use Int64.

I would like to force the system to use Int32 instead of Int64.

For type-stable functions in packages, passing Int32 into them will result in Int32 being used throughout. You can’t force all system math to be in Int32 (nor should you), since things like pointers have to be 64 bits. Is there a specific package you’re worried about here?
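
For instance (a quick illustrative sketch, not taken from any particular package):

f(x) = x * x + one(x)    # written without committing to an integer width

typeof(f(3))             # Int64 on a 64-bit build
typeof(f(Int32(3)))      # Int32: the input type is kept throughout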

I was just asking in general about the possibility.

I think it would be useful.

One option is to use the 32-bit version of the Julia binary.

Is there a specific motivation to use Int32? I am not certain, but I think Int32 does not necessarily yield better performance than Int64 on a 64-bit system. In fact, it may be worse.


Are you talking about being able to tell the parser that number literals should be parsed as Int32? So that

n = 3

becomes

n = Int32(3) 

?

I would guess that you might be able to achieve this for a code block with some macro, but probably not inside other peoples’ packages.
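
Something along these lines might work for a block you control (a rough, hypothetical sketch; the names are made up, and it only touches literals written inside the block itself):

int32ify(x) = x isa Int ? Int32(x) : x
int32ify(ex::Expr) = Expr(ex.head, map(int32ify, ex.args)...)

macro int32_literals(ex)
    esc(int32ify(ex))     # rewrite Int literals in the expression to Int32
end

n = @int32_literals 3 + 4
typeof(n)                 # Int32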

Why do you want this? I doubt that it would be useful for anything except storage of large arrays.

I think the question is motivated by the memory limitations exposed in this thread:

It helps both with memory and with speed.

But it’s still not clear what you actually want. Is it for specifying types when loading dataframes, or for parsing literals in code, or something else?


I just wonder if it is possible to have some overall system option to switch all computation into 32-bit mode.

The closest thing I can think of is re-defining Int to mean Int32. It would help when someone writes

Vector{Int} 

but not for

Vector{Int64} 
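
Very roughly, and only for code you write yourself, that could look like this (a hypothetical sketch; it does not change what Base or packages do internally):

module My32BitCode             # made-up module name

const Int = Int32              # shadows Base.Int inside this module only

v = Vector{Int}(undef, 3)      # Vector{Int32}
w = Vector{Int64}(undef, 3)    # still Vector{Int64}

end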

This sounds like an issue with the CSV package, and you should get help on that. Changing ‘system integers’ to be 32-bit doesn’t sound like a great idea.
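
If the underlying problem is really about reading CSV files, CSV.jl lets you ask for narrower column types when loading (a hedged sketch; the file and column names below are made up):

using CSV, DataFrames

# request Int32 for specific columns instead of the default Int64
df = CSV.read("data.csv", DataFrame;
              types = Dict(:id => Int32, :count => Int32))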


With the 32-bit Julia binaries Int is Int32. So you could try that.


The problem is that it’s not entirely clear what you mean by that. Every program starts out with some kind of literal numbers, like a = 2.72. Then you apply some functions to those, like sin(a). By default, a will be a Float64, and the result will then be a Float64. If you start out with a Float32, like a = 2.72f0, then sin(a) will be a Float32. The same goes for the vast majority of functions in Julia, be they system functions like sum, maximum, +, and *, or the functions you write yourself. Typically you get back what you put in.
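
For instance:

a64 = 2.72            # Float64 literal
a32 = 2.72f0          # Float32 literal

typeof(sin(a64))      # Float64
typeof(sin(a32))      # Float32: the narrower type is kept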

One exception is when you compute something like a + b, where a is a Float32 and b is a Float64: a will be promoted to Float64, and you get back a Float64. The same thing happens if you compute something like sin(Int32(2)); you get back a Float64. But putting integers into real-valued functions isn’t good practice anyway, even though there is an explicit conversion from Int32 to Float64 somewhere in Julia for this case.
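
For example:

typeof(2.72f0 + 1.0)     # Float64: the Float32 operand is promoted
typeof(sin(Int32(2)))    # Float64: integer input, floating-point result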

Ordinarily it will suffice if you explicitly use Float32 and Int32 for your number literals; most functions will then stick to those types. That is actually one of the main features of Julia: your functions are compiled for the types you call them with.

Beyond your own functions, you certainly do not want internal pointer arithmetic and the like to be done with 32-bit types on a 64-bit machine.


I’d discourage that. The 32-bit builds of Julia will use only 8 floating-point registers, while x86_64 systems all have 16 or 32 floating-point registers.

This will make a lot of code run slower, due to far more frequent and severe register spills, and because smaller blocking parameters mean far more loads/stores are needed in general.

EDIT:
As an example of how extreme this can be, here is the official 64-bit 1.4.1 binary:

julia> using LoopVectorization, BenchmarkTools, LinearAlgebra

julia> BLAS.set_num_threads(1)

julia> M = K = N = 120;

julia> A = rand(M,K); B = rand(K,N); C = rand(M,N); C2 = A * B;

julia> function AmulB!(C, A, B)
           @avx for n ∈ axes(C,2), m ∈ axes(C,1)
               Cmn = zero(eltype(C))
               for k ∈ axes(B,1)
                   Cmn += A[m,k] * B[k,n]
               end
               C[m,n] = Cmn
           end
       end
AmulB! (generic function with 1 method)

julia> @benchmark AmulB!($C,$A,$B)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     34.795 μs (0.00% GC)
  median time:      34.931 μs (0.00% GC)
  mean time:        35.010 μs (0.00% GC)
  maximum time:     90.112 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark mul!($C2,$A,$B)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     81.226 μs (0.00% GC)
  median time:      81.454 μs (0.00% GC)
  mean time:        81.670 μs (0.00% GC)
  maximum time:     127.942 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> C ≈ C2
true

julia> versioninfo()
Julia Version 1.4.1
Commit 381693d3df* (2020-04-14 17:20 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-8.0.1 (ORCJIT, skylake)

And the official 32-bit version:

julia> using LoopVectorization, BenchmarkTools, LinearAlgebra

julia> BLAS.set_num_threads(1)

julia> M = K = N = 120;

julia> A = rand(M,K); B = rand(K,N); C = rand(M,N); C2 = A * B;

julia> function AmulB!(C, A, B)
           @avx for n ∈ axes(C,2), m ∈ axes(C,1)
               Cmn = zero(eltype(C))
               for k ∈ axes(B,1)
                   Cmn += A[m,k] * B[k,n]
               end
               C[m,n] = Cmn
           end
       end
AmulB! (generic function with 1 method)

julia> @benchmark AmulB!($C,$A,$B)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     72.075 μs (0.00% GC)
  median time:      72.274 μs (0.00% GC)
  mean time:        72.468 μs (0.00% GC)
  maximum time:     125.262 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark mul!($C2,$A,$B)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     235.576 μs (0.00% GC)
  median time:      235.826 μs (0.00% GC)
  mean time:        236.387 μs (0.00% GC)
  maximum time:     280.883 μs (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> C ≈ C2
true

julia> versioninfo()
Julia Version 1.4.1
Commit 381693d3df* (2020-04-14 17:20 UTC)
Platform Info:
  OS: Linux (i686-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz
  WORD_SIZE: 32
  LIBM: libopenlibm
  LLVM: libLLVM-8.0.1 (ORCJIT, skylake)