At first I wasn’t very concerned about this example. It’s typical of examples meant to illustrate rampant incorrectness.
In general the function is called like `map!(c, a, b)`. In the examples in the docstring, the input and output are clearly not aliased; in fact `c` is freshly initialized. Many numerical libraries work like this, and it is very common to read something like “output must not be aliased to input”. So if I read the doc for `map!(c, a, b)` and thought, “well, it didn’t say I can’t alias, I guess I’ll just do it and move on”, I’d be asking for trouble. If I hadn’t read the docstring, I’d never assume aliasing is safe; I’d check first, or ask, or something.
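For concreteness, here is the shape of the two call patterns (the doubling function is just a placeholder of mine):

```julia
a = [1, 2, 3]

# The documented pattern: a freshly allocated, non-aliased output buffer,
# as in the docstring examples.
c = similar(a)
map!(x -> 2x, c, a)   # c is now [2, 4, 6]

# The undocumented pattern: the output aliases the input.
# Nothing in the docstring says whether this is allowed or what it does.
map!(x -> 2x, a, a)
```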
If this were the end of the story, I’d say it’s not such a big deal.
The bigger problem, apparently, is that:
- behavior with aliasing is not documented;
- behavior with aliasing was not properly tested;
- behavior with aliasing changed in a (non-breaking) release.
Now there’s nothing I can do to avoid a bug in my code, even if I’m careful.
It’s possible that documenting it more carefully might have avoided this: it would have made it less likely that adequate tests were overlooked and that a breaking change would be introduced.
Note that although the previous behavior has been restored, it’s still not documented.
More generally: I won’t put much stock in the idea that Julia is really buggy based on anecdotes. I’d have to see some attempt at an analysis. And discussion of which languages you’re comparing to.
I have long had a question related to this. I’m not in a position to let other people use my code: everything I write is very small and ad hoc. But some of my tools may grow enough to be interesting to other people in the future.
So, I wonder: why am I writing `for j in 1:size(arr,2); for i in 1:size(arr,1)`, assuming that array indices start from 1 and are contiguous, even when I don’t need to assume that?
So, my question is, why don’t you recommend writing everything as general as possible if that doesn’t increase the size of the code too much and if that doesn’t reduce the capability of the functions you are writing?
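For what it’s worth, the generic version is often barely longer than the 1-based one: `axes` (or `eachindex`) makes a loop valid for any `AbstractArray`, including offset arrays. A sketch (the function is a made-up example of mine):

```julia
# Sums all entries of a matrix without assuming 1-based, contiguous indices.
function mysum(arr::AbstractMatrix)
    s = zero(eltype(arr))
    for j in axes(arr, 2), i in axes(arr, 1)
        s += arr[i, j]
    end
    return s
end

mysum([1 2; 3 4])   # → 10
```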
Using abstract types requires significantly more testing than using concrete types.
I wonder if you can elaborate on that? Do you mean that testing is more difficult even if you test only on the few concrete types that you would be using anyway if you wrote a concrete interface?
. . . Anyway, we need some textbook or document that describes “the best practices”. Everything about generality seems confusing.
This is a great list @adienes, thanks. Care to try tackling some of these yourself? Those who notice problems from such issues are usually best-motivated to start fixing them. And thanks for the several pull requests, too!
I understand your frustration, but these are internal organizational tools that have very clearly defined processes and requirements. This does not mean the issue is not valued. It doesn’t mean that leading team members don’t care about correctness. It doesn’t mean it won’t get fixed. It doesn’t mean that its fix (or mitigation or …) won’t get into 1.10, either.
There are currently 25 bugs in the issue tracker with the label “correctness issue”. Two of them are on the list of bugs to be fixed for 1.10, which is nice… But it also means we probably need to wait another 6 years, or 12 releases, until all of them are fixed…
An easy, and pretty reasonable, solution to the aliasing “bug” would be to add a note to all such functions saying “The mutated argument must not alias any other argument”. I personally wouldn’t expect a call that involves aliasing between arguments to behave in any particular way, unless explicitly stated in the docstring.
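Concretely, a one-line admonition would go a long way. A sketch of what such a note could look like (the wording below is mine, not the actual docstring):

```julia
"""
    map!(f, dest, a)

Apply `f` to each element of `a` and store the results in `dest`.

!!! warning
    `dest` must not alias any of the input collections; the behavior
    of an aliased call is unspecified.
"""
```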
It seems that the issue has been open since January '21 - and the behavior was observed even in 1.5.3.
I wouldn’t expect the issue to suddenly become release-blocking.
However, some low-hanging fruit is more worrying: I understand the bug fix not being a release-blocking priority, but why not add the minimal documentation to help people avoid the issue in the meantime?
On the same note - it has been known for a while that `@async` is to be avoided in favor of `Threads.@spawn`, to the extent that we have a clear warning against using `@async`; yet new users reading the manual are presented with learning material that contradicts that documentation.
Failing at these easy-to-achieve goals, which seem to reflect caring about new users, is what worries me: experienced users can get along and just avoid these traps (having access to a kind of special lore, or simply being more familiar with the up-to-date documentation), while new Julia users (who mostly read the manual before going into more depth) are actually taught to use the language the wrong way.
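For reference, the difference between the two macros is roughly this (a sketch; the workload is a placeholder):

```julia
using Base.Threads

# Discouraged: `@async` creates a task pinned to the thread that spawned
# it, so a busy main thread can starve it.
t1 = @async sum(rand(10^6))

# Preferred: `Threads.@spawn` lets the scheduler run the task on any
# available thread.
t2 = Threads.@spawn sum(rand(10^6))

fetch(t1), fetch(t2)
```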
Again, you’re misreading what it means to be (or not be) on a particular milestone release. Being placed on a milestone means that it’s a release blocker — that it must be fixed for the release. Not being on a milestone does not mean the converse.
I understand how it feels, especially when we encounter issues that are messing up our use cases.
My personal pain point: I don’t feel at all comfortable with scenarios where the responsiveness of tasks running HTTP listeners (on their own dedicated thread) is actually dependent on the main thread not doing work - it is somehow mind-blowing.
However, I wouldn’t say that the leading team doesn’t care about the issue (I am more concerned that I didn’t do a good enough job to convey the practical implications of the issue). I have no idea if there are plans to fix that in the near future - the only clear thing is there are signs that awareness exists, and the root cause is known.
In fact, I am convinced that, even if only as a matter of being proud of their work, the leading team is clearly focused on doing a great job - but we cannot expect their priorities to be perfectly aligned with those of individual language users (and where there is such alignment, we cannot expect them to assign the same level of urgency to individual issues).
Many people here are pointing out that there is an easy (partial) fix for some of these issues, namely adding doc warnings. In fact, the fix is so easy that it doesn’t take being a “leading team member”:
My two cents is that we would all be much better off if we poured our frustrations into PRs, however small, rather than just complaints.
I agree - but there is no clear threshold between severe matters and the simple PRs you mention. For example, I understand that there is somebody actively involved in rewriting the manual (sorry, I don’t have a reference at hand). At that point, if I want to contribute and invest some time in fixing obvious outdated stuff in the manual, I have no idea if that work is for nothing.
Also, although I feel pretty confident at this point in a few areas of the language, I feel that the manual should reflect the very best practices that are vetted by those who can add a kind of official expert stamp to that work.
And this adds even more weight to my point - the manual, which should reflect the best practices and put newcomers on the right track, is outdated and recommends practices prohibited by the documentation.
So it is not very clear when we should just push for changes as outsiders, and when we should become part of some active force for change. There is no clear guideline on that.
However - I think complaining should be welcomed: if this language is going to grow and (also) be used in areas that depart from technical/mathematical computing, we should expect an increase in the complaints/PR ratio. And I would say that would be a very good sign.
The point I was trying to make is that most of the people in this conversation are not “outsiders”, far from it. We may not have merge rights for the Julia repo, but I think we often underestimate our ability to help. And by “we” I mean the active Julia users that like the language enough to keep reading when a thread gets past 50 messages.
To me, the vetting is the PR review. If my contribution is not worthy, it will be rejected or just languish in the endless pit of despair where code goes to die. So usually when I open a PR, my primary concern is not complete exactness: I mostly try to get things moving.
If someone is rewriting the manual, that’s amazing news, but it doesn’t mean development has to pause. And it’s probably someone careful enough to merge the latest changes done by other people into their branch. At least I would suppose so.
Maybe I didn’t use the right word there - I meant a kind of Julia repo outsiders.
It would be nice to see this kind of message delivered by the core developers - and that doesn’t imply I don’t appreciate this coming from you.
Not sure if this holds for many other Julia users, but my admiration for these people is also somehow intimidating when it comes to starting PRs.
Obviously, the PR review is literally a vetting process. But writing manual content feels special to me: it would be a great learning experience, because “good enough” is not enough, and the contributor should feel morally obligated to ensure both complete exactness and that no relevant information is left out.
Imagine that the manual might be the first contact with the language for many new users.
But enough about this - I think we are on the same page, and even more - an unsuccessful PR would still be a learning experience (if not just ignored or rejected without a stated reason).
This thread already spawned a topic (the reverse branch). I don’t insist on spawning the “how to contribute to Julia repo if you suffer from acute impostor syndrome” topic.
I learned important aspects of Julia by [hesitantly] submitting a PR and then working with the helping hands that be in PR-land to take that through to acceptance. It took some determination; at that time, eagerness worked best.