The speed of light is an integer. Why should we care?

Since 1983, the speed of light has been defined to be exactly 299,792,458 meters per second. So it seems natural to define it as an integer when doing calculations in Julia, for example to compute the Planck length. This is how it can be done in the Julia REPL:

c = 299_792_458;  # the speed of light
G = 6.67430e-11;  # Gravitational constant
h = 6.62607015e-34;  # Planck's constant
h_bar = h / (2*pi);  # reduced Planck's constant
planck_length = sqrt(G*h_bar/c^3)

As a result, we immediately get 3.5793584153084007e-32, which at first sight looks like it could be the Planck length.

However, Wikipedia says that the Planck length is equal to 1.616255e-35 meters.

You have probably already realized that the reason for this is integer overflow when raising c to the third power. Moreover, it is easy to fix by redefining c as a float:

c = 2.99792458e8

After doing so, we get 1.6162550244237053e-35 for the Planck length, which agrees with the value stated on Wikipedia.
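The silent wraparound in c^3 can also be made to fail loudly. A minimal sketch using the checked arithmetic that ships in `Base.Checked` (the variable names here are mine, not from the original post):

```julia
using Base.Checked: checked_mul

c = 299_792_458                  # Int64
c2 = checked_mul(c, c)           # c^2 = 89875517873681764 still fits in Int64
overflowed = try
    checked_mul(c2, c)           # c^3 exceeds typemax(Int64), so this throws
    false
catch e
    e isa OverflowError
end
println(overflowed)              # true
```

With checked multiplication the cube raises an `OverflowError` instead of quietly producing a wrapped value.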

However, this is not so easy for an average Physics student, and it is worrying how easily such an error can be made in Julia.

Yes, I do know that this is done for speed and that the possibility of integer overflow is mentioned in the documentation.

However, it is quite worrying that Julia produces such a result even without any warning.

At the very least, this means that the Julia REPL cannot be used as a scientific calculator by an average student.

Unfortunately, I do not know whether this can be fixed at the design level. If not, this example should be placed at the very beginning of the official Julia documentation so that every single Julia user is warned about this possibility: a brief warning about abstract integer overflow is not enough.


This isn’t a serious answer, but one way to deal with this is to use the unit system where c=1. Numerical problem solved?


As is the case for most other programming languages using machine integers, this is not unique to Julia. Julia is a programming language and not a scientific calculator. As such, it does require you to learn and understand some things up front.

This only works until you realize that each individual has something different they think is important to warn users about, and there is only one beginning of the documentation. Thankfully the documentation is very readable, so reading it is a good start before using Julia.


idk man, what if you just use scientific notation all the way?

julia> c = 3e8

julia> G = 6.67430e-11;  # Gravitational constant

julia> h = 6.62607015e-34;  # Planck's constant

julia> h_bar = h / (2*pi);  # reduced Planck's constant

julia> planck_length = sqrt(G*h_bar/c^3)

Also, if you think about it, 3e8 is also an integer, :man_shrugging:
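Indeed, 3e8 stores an exact integer value, just in a Float64; a quick check:

```julia
@assert isinteger(3e8)       # the value held by 3.0e8 is an exact integer
@assert 3e8 == 300_000_000   # equal to the Int64 it denotes
@assert 3e8 isa Float64      # but the type is floating point, so ^ won't wrap around
```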


The section on integers and floating-point numbers starts by listing the minimum and maximum values for each built-in numeric type. This is followed by a section in the docs dedicated to overflow.


Shameless plug. I’d recommend that average Physics students use PhysicalConstants.jl:

julia> using PhysicalConstants.CODATA2018

julia> sqrt(NewtonianConstantOfGravitation * (PlanckConstant / 2pi) / SpeedOfLightInVacuum ^ 3)
1.6162550244237053e-35 J^1/2 s kg^-1/2

You can also get the exact value in arbitrary precision with

julia> sqrt(big(NewtonianConstantOfGravitation) * (big(PlanckConstant) / pi / 2) / big(SpeedOfLightInVacuum) ^ 3)
1.616255024423705286500047697249314156917498590079876463568563666973739718761242e-35 J^1/2 s kg^-1/2
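The same `big(...)` promotion works on the raw constant itself: once c is a BigInt, the cube is computed exactly instead of wrapping. A small sketch:

```julia
c = big(299_792_458)           # BigInt: arbitrary-precision integer
@assert c^3 > typemax(Int64)   # the exact cube is far outside Int64's range
@assert c^3 == c * c * c       # no wraparound anywhere in the computation
```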

It is an unfortunate trade-off between performance and safety.
However, I think that the decision to default to fast integers in Julia is correct; most of the time there is no overflow risk.
Still, it would be great if future CPUs had hardware support for zero-cost overflow-safe integers.

They already have hardware support for an integer type that supports values up to 2^53 (about 9e15), which is enough for most purposes.
When these integers “overflow” they turn into Inf, which contaminates downstream computations, so you know something went wrong.
Furthermore, while every integer is supported only up to 2^53, much larger values are still representable, at the cost of no longer representing every integer. For example, at 1e18 the spacing between representable integers is 128.
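These properties of Float64 (the hardware type being described) are easy to verify in the REPL:

```julia
@assert maxintfloat(Float64) == 2.0^53       # every integer up to 2^53 is exactly representable
@assert 2.0^53 + 1 == 2.0^53                 # ...and 2^53 + 1 is the first one that is not
@assert nextfloat(1.0e18) - 1.0e18 == 128.0  # at 1e18 the spacing between floats is 128
@assert 1.0e308 * 10 == Inf                  # "overflow" saturates to Inf instead of wrapping
```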

I think this type is good enough for most computational purposes.


In theory, it might be possible to detect integer overflow involving constants without any runtime cost (see my proposal here). So code like

const c = 299_792_458;  # the speed of light
planck_length = sqrt(G*h_bar/c^3)

could give a warning.

However, this might actually make things more confusing, because users who see this warning might assume that integer overflow always results in a warning, and then write code that gives overflow at runtime.


Perhaps just include this information in the warning then? In general, I think more features are better than fewer and then helping the user to use them most appropriately.


Python is also a programming language and not a scientific calculator. However, as such, it provides all the functionality of a scientific calculator and does not require the user to constantly verify that no integer overflow happened anywhere during the whole program execution before trusting the result.

Julia was positioned as a language for scientific computing, meant to completely replace Python in this domain. However, with such an approach it will always remain a niche language, used only when Python with all its infrastructure proves too inefficient. And at that stage it will be too late to rewrite the whole project in Julia anyway.

In general, it will go as follows:

  1. One morning: “I just need to make some short calculations and I don’t want to bother myself with the integer overflow issues. So, I run Python interpreter. By the way, it starts quicker.” :slight_smile:

  2. A few hours later: “Well, it is becoming a bit complicated. Let’s save all this into a Python script.”

  3. A few months later: “Finally, we prototyped all our calculations! But it is a bit slow. Well, let’s vectorize it and use special packages.”

  4. Half a year later: “Everything is fine now. However, we still have a few bottlenecks here and here. What? Rewrite everything in Julia? Are you crazy? The whole project is already up and running! We just need to rewrite these two small places with Cython/C++/Fortran or just use Numba!”


So does Julia; you just have to read the manual. No one has ever said that Julia solves all your problems without requiring any effort or thought. If integer overflow is a big problem for you, you may consider using a package that provides checked integer types.

I bet you can find a lot of people in this forum who disagree, but if Julia does not suit your needs, you’re free to either abandon it or help to improve it. In particular, if you find yourself in the scenario you outlined, then by all means use Cython/C++/Fortran or just use Numba (in which case you still need to worry about integer overflow).


Python is also a programming language and not a scientific calculator. However, as such, it provides all the functionality of a scientific calculator and does not require the user to constantly verify that no integer overflow happened anywhere during the whole program execution before trusting the result.

Not the case if you use numpy, which also overflows silently, just like Julia:

>>> import numpy as np
>>> x = np.array([10**10])
>>> x**2
array([7766279631452241920], dtype=int64)

numpy is the bread and butter of most scientific work done in Python, so everything you said about Julia applies to it as well. And if you insist on using Python’s built-in integers, the performance will be so bad that real-world performance-critical work is effectively impossible.


Actually, I was in that situation just last week, writing a function that had large integers as intermediate results.

I still went with Julia, though. I just wrote my function, without worrying about types, and then called it with BigInts.

Later, I wanted to find out at which point overflow became a problem, and it was super easy to write a for-loop that compared the result of Int to BigInt.
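A sketch of that kind of comparison loop. Here `f` is a hypothetical stand-in for the actual function (the original post does not show it), and the step size is arbitrary:

```julia
f(x) = x^3                               # hypothetical stand-in for the real computation
first_bad = 0
for n in 1_000_000:1_000_000:10_000_000
    if f(n) != f(big(n))                 # Int64 result no longer matches the exact BigInt one
        global first_bad = n             # global needed at top level of a script
        break
    end
end
println(first_bad)                       # 3000000: (3e6)^3 = 2.7e19 exceeds typemax(Int64) ≈ 9.2e18
```

Comparing the machine-integer result against the BigInt result pinpoints where overflow first kicks in.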

(In the end it turned out that the best approach was to use Float64 to represent the integers in my specific case. The binary representation of the large integers had a lot of zeros at the end.)


I find that the value of any physical constant is best thought of in scientific notation from the start, e.g. c = 2.998e8 (defaults to Float64) instead of a nine-digit integer, since the order of magnitude is shown explicitly. This is especially true for SI units, where commonly used constants span about 50 orders of magnitude.

Just because some constants can be represented as integers doesn’t mean they should be. In a quick calculation, I find it a bit unwieldy to define c as an integer since it would require specifying all digits when I only care about the first few digits and order of magnitude, and the answer will be limited by the precision of G anyway.


299792458 is less than 2^53, so it is exactly represented as a Float64 value:

julia> 299_792_458 == 299792458.0
true

So, there is absolutely no reason not to use Float64 to represent this exact integer constant, and doing so greatly reduces the possibility of catastrophic overflow in subsequent calculations.
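Concretely, writing the constant as a Float64 literal with the same exact value keeps the whole calculation comfortably inside the float range; a quick check:

```julia
c = 2.99792458e8        # exactly the same value as the integer 299_792_458
@assert c == 299_792_458
@assert isfinite(c^3)   # ≈ 2.694e25, far below floatmax(Float64) ≈ 1.8e308
```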



This is also what Chris was alluding to in the post

but it was hidden between the lines… :slight_smile:


To be honest, I think the problem starts with physics students not being taught basic CS, or with the lack of any requirement for it during a physics degree. We often have bachelor’s, master’s (even PhD) students who show up having never used anything other than Excel to create plots or even fit(!) models to them. I think basic CS (which includes understanding at least how primitive data is represented in a computer) is as necessary as understanding integrals and other “basic math”. These (math and computers) are our main tools; that’s a fact.
Luckily, I see some movement and changes in recent months and years (at FAU Erlangen we started requiring a programming course for physics bachelor’s students last year).

…also, if a student calculates 3.5793584153084007e-32 as the Planck length and doesn’t notice that something is horribly wrong (downstream), that would be kind of worrying :wink: They should learn to question their own results, as well as their tools and practices, and how to debug them.

I don’t think that it’s Julia who should take over the education in this case.

Edit: btw, it’s rather the meter that has been defined as the path traveled by (monochromatic) light in vacuum in 1/299 792 458 of a second, not the other way around. :see_no_evil:


I totally agree.

We use the speed of light in vacuum a lot. Although our calculations won’t cause an integer overflow, most of us will habitually write it as “c0 = 299792458.0” or simply “c0 = 299792458.”.

This is true for both Julia and Python users. Those who trained in Python started with Python 2.x, where 3/2 still gives you 1 as in C, unless you import division from __future__. Julia automatically promotes integers to floating point in this case.

So I don’t see it as a big problem to tell students that integers and floating-point numbers are handled differently in computers, and that for the majority of use cases it’s just easier to always use floating-point numbers when working with physical constants. In addition, it’s important to put units either in comments/docstrings or in the variable name. Or, to handle both, use a dedicated package like PhysicalConstants.jl with unit handling, which is a more organized and scalable approach with a little bit of overhead.

That said, I think having an optional integer-overflow warning that can be turned off may not be a bad idea for Julia.