Hello everyone,
First of all, I’m sorry if this is a stupid question; I’m fairly new to Julia and not a seasoned programmer or computer scientist at all. But I’m scratching my head a little over something.
I’m not obsessed with the speed of programming languages at all, but I’d really like to understand this one.
So, after watching a random YouTube video that compared the execution speed of a simple program counting from 0 to 1 billion in Python and C++, I got curious and wanted to test how well Julia would perform in this regard, since it is somewhat hailed for its speed (compared to Python, at least).
I also found this article where Logan Kilpatrick attempted the same thing:
https://juliazoid.com/no-julia-is-not-34-000-000-000-times-faster-than-python-f63e956313d7
I did not use any of Julia’s benchmarking tools/libraries but merely ran “time julia 1billion.jl” on the command line. Let’s set aside the fact that this is probably not a good way to compare execution speed across languages, because of Julia’s rather long startup time.
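As a rough sanity check on that startup cost, I figure timing an essentially empty Julia invocation should show the fixed overhead on its own, something like:

time julia -e ""

(the -e "" just evaluates an empty expression, so this should mostly measure startup).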
However, the thing that struck me, and that I don’t quite understand, is the following:
function count()
    n = 0
    while n < 1_000_000_000
        n += 1
    end
end
count()
On my machine this program has a reasonable, even quite impressive, execution time of about 150 ms.
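For what it’s worth, timing the call from inside a Julia session should exclude the startup cost entirely. A minimal sketch, assuming the (external) BenchmarkTools.jl package is installed:

using BenchmarkTools

@time count()    # built-in macro; the first call also includes compilation time
@btime count()   # from BenchmarkTools.jl; reruns the call and reports a minimum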
However, if I don’t wrap the counting loop in a function but instead run the program at top level, like this:
n = 0
while n < 1_000_000_000
    global n += 1
end
it should basically be the same thing, but this version takes about 75 seconds to execute, which is even slower than a Python script with the same code.
Is it really to be expected that this simple counter runs about 500 times slower when not wrapped in a function? And what is the important takeaway here for writing real-world applications at some point?
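For completeness: from what I gather in the Performance Tips, anything that makes n a local variable should avoid the untyped-global problem. A minimal sketch of the same loop in a let block, which I would expect to perform roughly like the function version (I haven’t verified this carefully):

let n = 0    # n is local to the let block, not a global
    while n < 1_000_000_000
        n += 1
    end
end

But even if that is the standard fix, I’d still like to understand why the global version is so much slower.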