"Really wants" is subjective. When I teach, I want the students who really want (or need) it to learn and enjoy it, but that is generally easy. It is harder to make things enjoyable and profitable for those who want (or need) it less. My suggestions are hopefully a way to catch the attention of those students as much as possible.
I really cannot evaluate that, and I trust your judgment. I liked many of the things I have viewed and read, but I have no point of comparison because I never used those resources in other languages. I think this feedback is important anyway, because there might be people and companies willing to invest in that. To be honest, I myself have written quite extensive materials for the courses I teach, but they are not for the general public.
A "problem" is defined after someone makes a claim, and that "problem" prevents achieving the claim. There is no expectation that Python has high execution speed, whereas there is such an expectation of Julia, based on Julia's claims about speed. This makes Julia's time-to-first-plot an important issue compared to Python's execution speed.
While I would also love to see startup times further improved, where does Julia make a claim about its time-to-first-plot speed? It's the runtime speed it makes claims about.
Thank you for introducing this training course. It would be great if there were a topic where anyone who prepares a training course could add a link to it.
I would agree that Julia's speed (fast execution, but with a significant compilation penalty up front) can more easily be misleading for beginners than Python's (snappy for anything lightweight or anything done in compiled libraries behind the scenes, slow for anything heavy written in Python itself). I would also agree that Julia's latency issues are much more visible to beginners than Python's. In fact, I would bet that quite a few Python users never even notice Python's slowness, because for a wide range of use cases the differences aren't meaningful.
However, it is also true that people really, really want to make Python faster, and a prodigious amount of effort has gone into doing just that. So my original point does apply to both of the examples mentioned, regardless of any semantic disagreement we may have about the scope of the word "problem".
Although faster execution speed could help the many businesses using Python, it is wrong to expect high execution speed from a programming language whose goal is clear syntax, or something other than speed. In general, we can't expect something that was never claimed.
I could've sworn we added a weaknesses/"why not Julia" section… somewhere… (to either the main website or the docs)… following a thread similar to this one, to help set expectations. Maybe it's still a PR? Anyone with better search-fu or memory on this one?
This is the second point I wanted to talk about, and thank you for mentioning it.
Most Python users, as you said, don't face the problem of execution speed because, for a wide range of use cases, the differences aren't meaningful. Although @lungben referred to the JetBrains Python Developers Survey 2020 and mentioned that "higher runtime speed" is the 2nd item on users' wishlist, we don't have any information about their main problem with Python. Besides, this problem does not occur for the wide range of users who don't need higher speed (although all agree that higher speed is better than lower!). But Julia's long initial start time is a problem for anyone who starts using it. If it were a problem only for a small group that needed a particular feature, it wouldn't be a big deal.
Thank you, you are quite right about my confusion. I did read the posts you mentioned and am now slightly less confused. I still don't know what the difference between AbstractVector{<:Real} and Vector{<:Real} is, though. I had declared it as Vector{Real}, which obviously didn't work.
A major difference is scaling:
Run times scale with the size of the problem: doubling the sample size results in double the run time.
Compile times, in contrast, are constant. They are always a few seconds (depending only on the software you are using), independent of the calculation's sample size. Thus they become insignificant for numerically challenging work. On the other hand, they are rather annoying if your calculations themselves are very quick.
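To make the scaling point concrete, here is a minimal sketch (the function and problem sizes are placeholders, not from the thread): the first call pays the one-off compilation cost, while subsequent calls scale with the problem size.

```julia
# Summing square roots: the work is proportional to n.
f(n) = sum(sqrt(i) for i in 1:n)

@time f(10^6)      # first call: run time plus one-off compile time
@time f(10^6)      # second call: pure run time
@time f(2 * 10^6)  # roughly double the pure run time of the previous call
```

Running this in a fresh session makes the constant compile overhead visible in the first `@time`, while the last two calls show the linear scaling of the run time itself.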
Probably this should be split into a new topic. But if you use AbstractVector, your function will accept, for example, views, StaticArrays, and other array types which are not Vectors in the strict sense:
julia> f(x::Vector) = 1
f (generic function with 1 method)
julia> g(x::AbstractVector) = 1
g (generic function with 1 method)
julia> x = [1,2,3];
julia> f(@view x[1:2])
ERROR: MethodError: no method matching f(::SubArray{Int64,1,Array{Int64,1},Tuple{UnitRange{Int64}},true})
Closest candidates are:
f(::Array{T,1} where T) at REPL[1]:1
Stacktrace:
[1] top-level scope at REPL[3]:1
julia> g(@view x[1:2])
1
julia> using StaticArrays
julia> x = zeros(SVector{3,Float64});
julia> f(x)
ERROR: MethodError: no method matching f(::SArray{Tuple{3},Float64,1,3})
Closest candidates are:
f(::Array{T,1} where T) at REPL[1]:1
Stacktrace:
[1] top-level scope at REPL[8]:1
julia> g(x)
1
Concerning Real vs. <:Real, I like my way of explaining this, for obvious reasons.
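For completeness, the invariance behind Vector{Real} vs. Vector{<:Real} can be shown with a short sketch (h and h2 are placeholder names, not from the thread). Julia's type parameters are invariant, so Vector{Int} is not a subtype of Vector{Real}, even though Int <: Real:

```julia
h(x::Vector{Real}) = 1     # only accepts a vector whose element type is exactly Real
h2(x::Vector{<:Real}) = 1  # accepts Vector{T} for any T <: Real

h2([1, 2, 3])      # works: Vector{Int} <: Vector{<:Real}
h(Real[1, 2, 3])   # works: the literal Real[...] constructs a Vector{Real}
# h([1, 2, 3])     # MethodError: Vector{Int} is not Vector{Real} (invariance)
```

This is why Vector{Real} "obviously didn't work" above: a vector of Ints never has the element type Real, so the method does not match.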
Honestly, I feel like Python gets too much credit here. It is easy to end up with Python installs that have serious usability problems. I regularly wait 5-20 seconds before I can first type in IPython. I don't really know why, because it varies from system to system, and it's not worth my time to debug a transient problem on one system that isn't a 100% show stopper (I work on many very similar, but slightly different, computer systems running very similar, but slightly different, pieces of equipment). I regularly (many times in the past few years) have issues where matplotlib is nearly unresponsive or hard-freezes my terminal. At least some of the blame may fall on the somewhat opaque security settings my organization applies to our machines. I always see these threads that take it as a given that the Python world has plotting and interactivity nailed, and it is not true. It's mostly true, but I see exceptions on a regular basis.
Once you get familiar with a tool, and you come to understand its deficiencies and find workarounds for those deficiencies, they stop bothering you as much. You get used to them and sometimes even stop noticing them.
Later, if you try out a different tool that has a different set of tradeoffs, strengths, and weaknesses, you'll maybe notice the strengths, but you'll also definitely notice the weaknesses. When you compare that new tool to the old tool, it becomes very difficult to make objective comparisons because you've trained yourself to not notice the problems with your old tool.
This effect is why a lot of Python programmers will brush off complaints about speed, expressiveness, or composability and say "nah, it's not that big a deal". It's also why a lot of Julia programmers (myself included) will just brush off complaints about compiler latency, documentation, and such.