Hi everyone
I’m new to programming, and Julia is essentially the first language I’m learning.
I do not understand the meaning of the following text. I would be grateful if you could explain it to me.
The size of the binary data item is the minimal needed size, if the leading digit of the literal is not 0. In the case of leading zeros, the size is determined by the minimal needed size for a literal, which has the same length but leading digit 1.
— https://docs.julialang.org/en/v1/manual/integers-and-floating-point-numbers/
Welcome to the language and to programming! That sentence is a bit inscrutable (I probably wrote it long ago). It means that:
- 0x1 and 0x12 are UInt8 literals,
- 0x123 and 0x1234 are UInt16 literals,
- 0x12345 and 0x12345678 are UInt32 literals,
- 0x123456789 and 0x1234567890adcdef are UInt64 literals,
- etc.
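If you want to see this for yourself, you can paste the literals into the REPL and ask for their types; it should look roughly like this:

julia> typeof(0x1), typeof(0x12)
(UInt8, UInt8)

julia> typeof(0x123), typeof(0x1234)
(UInt16, UInt16)

julia> typeof(0x12345), typeof(0x12345678)
(UInt32, UInt32)

julia> typeof(0x123456789)
UInt64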
Even if there are leading zero digits which don’t contribute to the value, they count for determining the storage size of a literal. So 0x01 is a UInt8 while 0x0001 is a UInt16. Hope that makes sense.
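Concretely, checking those two literals in the REPL should give something like:

julia> typeof(0x01)
UInt8

julia> typeof(0x0001)
UInt16

julia> 0x01 == 0x0001   # same value, different storage size
true

The two literals compare equal because they represent the same value; only the storage size differs.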
Btw, if you want to go all in and make your first open source contribution, you could take my explanatory text here and add it to the manual right after that paragraph
I really appreciate your answer.
I have another question. I do not know whether I should ask it here or create a new topic.
My question: Why does Julia convert 0x001 to 0x0001, or 0x00001 to 0x00000001?
I’m really embarrassed to say this, but my English is not very good, and I do not understand what you mean here:
Btw, if you want to go all in and make your first open source contribution, you could take my explanatory text here and add it to the manual right after that paragraph
I apologize again for asking this.
It doesn’t “convert” really, it’s just that 0x0001 is how the UInt16 representing the integer value 1 is printed. Each hexadecimal digit is four bits (2^4 = 16), so eight bits (aka a byte) is two hex digits. So a UInt8, which is eight bits, is exactly two hexadecimal digits. Similarly, a UInt16 is 16 bits, which is exactly four hex digits. Julia prints unsigned values with the full number of hex digits they can represent, even if the leading digits are zeros. This has a few benefits:
- If you cut and paste the printed form of an unsigned number back into the REPL (interactive prompt), you get the same value and type back (illustrated just below);
- If you print a bunch of these values they always have the same width, so they tend to align nicely in the printed output.
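To make the first point concrete, here is roughly how the literals from your question behave in the REPL:

julia> 0x001          # three digits: already a UInt16
0x0001

julia> 0x00001        # five digits: already a UInt32
0x00000001

julia> typeof(0x0001)  # pasting the printed form back gives the same type
UInt16

Nothing is being converted: the three-digit literal is a UInt16 from the start, and 0x0001 is simply how a UInt16 with value 1 is displayed, so pasting the printed form back gives the same value and type.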
We treat unsigned and signed integers differently based on this observation: when people are working with unsigned integers, they are often interested in specific bit patterns, which are easier to understand in hexadecimal. When people care more about integer arithmetic, they tend to use signed types, which can also represent negative values, and they want to see the values in decimal so that’s how we print signed integers. When numbers are printed in decimal, the nice correspondence between digits and bits doesn’t work because 10 isn’t a power of 2, so we just print signed integers normally without any leading zeros. This means that a signed integer value can lose its type when you copy it and paste it back into a program, but that’s usually not a big issue since most people have 64-bit machines these days and 2^63 is plenty big for most things.
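For comparison, here is the same small value stored in signed and unsigned types (a quick sketch):

julia> Int16(255), UInt16(255)
(255, 0x00ff)

julia> UInt8(1), UInt32(1)
(0x01, 0x00000001)

julia> Int64(1)
1

The signed values print in plain decimal with no padding, while each unsigned value is padded to the full width of its type.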
I do not know whether I should ask it here or create a new topic.
Since it’s closely related, it’s better to ask in this thread. If it was a totally different topic it would be better to start a new thread.
I apologize again for asking this.
Don’t worry at all. That’s what this forum is for!
That was a very good explanation. Now I understand exactly what happens.
I do not know how to thank you.
I can only say thank you so much.
God bless you.