How to show a floating point number without truncating, when the number can be expressed with a finite number of digits?

Today I attended the keynote talk by Stephen Wolfram at JuliaCon 2023, where he showed how Julia functions can be called inside Mathematica.

He also showed an example where Julia evaluates 4^45 as 0.
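For context, a quick sketch of why this happens (my addition, not from the talk): Int64 arithmetic silently wraps modulo 2^64 on overflow, and 4^45 == 2^90 is a multiple of 2^64, so the wrapped result is 0.

```julia
# Int64 arithmetic wraps modulo 2^64 on overflow:
4^45                                    # gives 0, since 2^90 mod 2^64 == 0

# The Base.Checked functions raise an error instead of silently wrapping:
# Base.Checked.checked_mul(2^60, 2^30)  # would throw OverflowError
```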

I figured out that 4.0^45 works, but the number is shown in an approximate form

julia> 4.0^45
1.2379400392853803e27

How to show it exactly in Julia?

For example, Mathematica produces

In[1]:= 4^45

Out[1]= 1237940039285380274899124224

You need to use “big” numbers:

julia> big"4"^45
1237940039285380274899124224

Is there any way to make the “big” number the default type for a particular session?

There isn’t an easy way to make big numbers the default, no. (EDIT: see the following post for something you might consider sufficiently easy, although I don’t know the full scope of where it applies the transformation, i.e., it may be REPL-only.) I suppose someone could write a macro that wraps all constants in a code block in big, but I can’t point you to one that I know of.
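For what it’s worth, such a macro is not hard to sketch. The name @bigify and the implementation below are mine (untested against edge cases like non-decimal literals); it just walks the expression tree and wraps Int and Float64 literals in big():

```julia
# Hypothetical @bigify macro: recursively wrap Int/Float64 literals in big().
bigify(x) = x isa Union{Int,Float64} ? :(big($x)) : x
bigify(ex::Expr) = Expr(ex.head, map(bigify, ex.args)...)

macro bigify(ex)
    esc(bigify(ex))
end

@bigify 4^45  # a BigInt, no overflow
```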

But a few strategically placed big"123" or big(123) (the latter only working if the number is representable as a basic type to begin with) will probably accomplish what you need. BigInt/BigFloat numbers take priority over smaller types, so once they’re part of a particular operation the output will also be big. For example,

julia> big"4"^45
1237940039285380274899124224

julia> 4^big"45"
1237940039285380274899124224

both give a BigInt result.
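The promotion rules are what make this work: any arithmetic mixing a big number with a machine-size one produces a big result, so a single big(...) propagates through a computation. A quick illustration:

```julia
# Mixed arithmetic promotes to the wider "big" type:
typeof(big(4) + 1)      # BigInt
typeof(2.0 * big"3")    # BigFloat (Float64 and BigInt promote to BigFloat)
typeof(big"2.0" + 1)    # BigFloat
```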


Since you’re new here, I’m going to show you how you can get what you want with Julia “for the session” (at least to turn Julia into a sort of advanced calculator).

It’s good to know of @Mason’s awesome project, also for other things, e.g. ReplGPT, which builds on it:

julia> using ReplGPT
REPL mode ChatGPT_mode initialized. Press } to enter and backspace to exit.

>Why does Julia not support arbitrarily big numbers by default (and how to change that)?

See below for that. But for big numbers, i.e. with ReplMaker.jl: go to “Example 3: Big Mode” at the link above and copy the few lines there to enable it.
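For convenience, the snippet in Example 3 is roughly the following (quoted from memory; check the ReplMaker.jl README for the authoritative version). It regex-replaces numeric literals with big"..." string macros before parsing the input line:

```julia
using ReplMaker

# Rewrite numeric literals as big"..." before parsing (roughly Example 3):
function Big_parse(str)
    Base.parse_input_line(replace(str,
        r"[+-]?\d+(\.\d+)?" => s -> "big\"$s\""))
end

initrepl(Big_parse,
         prompt_text = "BigJulia> ",
         start_key = '>',
         mode_name = "Big-Mode")
```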

Now press > at the REPL to enter Big-Mode:

BigJulia> 4^45
1237940039285380274899124224

BigJulia> typeof(ans)
BigInt

BigJulia> 4.0^45
1.237940039285380274899124224e+27

BigJulia> typeof(ans)
BigFloat

BigJulia> 4.0^245  # Note: BigInt gives you arbitrarily long integers, but BigFloat, as here, has limited precision; you can extend the precision, though.

BigJulia> 1 + 1
2

BigJulia> typeof(ans)  # You are asking for this slower type, but note that when most of your numbers are not huge, using big is overkill.
BigInt

BigJulia> 1+1  # for some reason Example 3 has this bug: it needs a space after the plus, so apparently it's not much used, only a demo; it can probably be fixed easily, or at least avoided with the space...
ERROR: Base.Meta.ParseError("cannot juxtapose string literal")
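As noted in the comment above, BigFloat precision (256 bits by default) can be extended. A minimal sketch using setprecision, which scopes the higher precision to one computation:

```julia
# BigFloat defaults to 256 bits of significand; raise it locally:
x = setprecision(BigFloat, 1024) do
    BigFloat(4)^245   # 2^490, exactly representable at this precision
end

precision(x)  # 1024
```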

I believe this does that, and it can also be used for big types:

I.e. you can change the default (literal) types, not just in the REPL. And while you shouldn’t need to (I fully support non-big types by default, and non-overflow-checked arithmetic by default, since it’s faster), it is a bit of a problem, mostly for ^, which is why I would want integer ^ changed in Julia 2.0 to return Float64. You can also use the following (which “fixes” this in a different way):

julia> using ReplGPT

ChatGPT> Why does Julia not support big numbers by default?
  OpenAI API key not found! Please set with ReplGPT.setAPIkey("<YOUR OPENAI API KEY>") or set the environment variable OPENAIAPIKEY=<YOUR OPENAI API KEY>

[This should be easy to configure; you can also ask ChatGPT directly, as I did in my faked answer above, since I was trying ReplGPT for the first time. Anyway, I know the answer: it’s in Julia’s documentation, and ChatGPT also has the right answer.]

The answer started as follows (it was otherwise good and slightly different), until I changed the question to “arbitrarily big numbers”:

As of my last update in September 2021, Julia actually does support big numbers by default.

I then tried:

Why does Julia not support arbitrarily big numbers by default?

and got:

As of my last update in September 2021, Julia, like many other programming languages, does not support arbitrarily big numbers by default for performance and memory efficiency reasons.

When a programming language uses fixed-size numeric types like Int64 (64-bit signed integer) or Float64 (64-bit floating-point number), it can perform calculations much faster because hardware architectures have native support for these fixed-size types. [long but good part of answer skipped]

On the other hand, supporting arbitrarily big numbers (arbitrary-precision arithmetic) comes at a cost in terms of performance and memory usage. When you work with arbitrarily big numbers, the calculations require more time and memory, as the language needs to perform operations on potentially large numbers that don’t fit in a fixed-size data structure.

Julia, as a high-performance language, aims to strike a balance between efficiency and flexibility. By default, it uses fixed-size types for integers and floating-point numbers to provide fast computation for the vast majority of use cases, where numbers fall within a reasonable range. For most scientific and engineering applications, the standard fixed-size numeric types (Int64, Float64) are sufficient. […]

In summary, Julia chooses fixed-size numeric types by default to achieve high-performance computation. If you require arbitrary-precision arithmetic, you have the option to explicitly use BigInt and BigFloat to handle large numbers at the cost of potentially reduced performance and increased memory usage.

I did obtain an OpenAI API key from OpenAI Platform as the docs instructed, but I got

ERROR: HTTP.Exceptions.StatusError(429, "POST", "/v1/chat/completions", HTTP.Messages.Response:
HTTP/1.1 429 Too Many Requests
    "error": {
        "message": "You exceeded your current quota, please check your plan and billing details.",
        "type": "insufficient_quota",
julia> a = 4.0^45
1.2379400392853803e27

julia> b = BigInt(a)
1237940039285380274899124224

julia> print(b)
1237940039285380274899124224

julia> print(BigInt(4.0^45))
1237940039285380274899124224
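One caution about this trick (my note, not spelled out above): it only recovers the exact integer because 4.0^45 == 2^90 is a power of two and hence exactly representable in Float64. For most large integers, the Float64 value has already been rounded, so the round-trip through BigInt does not give the true result:

```julia
# Exact case: 2^90 is a power of two, exactly representable in Float64:
BigInt(4.0^45) == big"4"^45      # true

# Inexact case: 10^30 needs ~70 significand bits, more than Float64's 53:
BigInt(10.0^30) == big"10"^30    # false -- the float was already rounded
```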