You keep repeating that Base cannot check every single package. Nobody is saying that this is the solution, so I don’t know why you focus so much on this approach. That’s why the public/private modifiers exist and are implemented at the compiler/interpreter level.
So, in your opinion, it makes sense for a package maintainer to check every. single. function. used in any PR?
Yes, and in fact I would go so far as to say that you should write tests for code you write yourself and expect to rely on:
```
# search all commits for the phrase `modules_warned_for`
[sukera@tempman Genie.jl]$ git log -Smodules_warned_for
commit 1980b485f4f8b74de33cf5db8eb48cae38888fc6
Author: Adrian Salceanu <adrian@stipple.app>
Date:   Sun Aug 15 19:37:22 2021 +0200

    Bug fixes, cleaning
```
This change was not introduced via a PR, but by you yourself as core maintainer of your package, as far as I can tell from your own public git history. It is your responsibility to ensure that the code & libraries you use adhere to the standards you set for your own code, such as not relying on internal, undocumented functionality that is not part of the documented API of julia.
The commit that removed this was merged into the julia repo on February 4th, 2022, which left plenty of time to catch the breakage in CI if you’re running nightly (or even just the release candidate, which is exactly what it’s there for).
Sure, but shouldn’t we help make that easier for devs? I really don’t understand why anyone would argue against access annotations (which could then easily be checked during testing, similar to deprecation warnings or ambiguity checking).
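For reference, the ambiguity check that many packages already run in their test suite looks roughly like the sketch below (`MyPackage` is a placeholder); a check for access annotations could presumably plug into the same spot:

```julia
# Existing pattern in test/runtests.jl: fail the suite if any method
# ambiguities are introduced. A hypothetical access-annotation check
# could be wired into the test suite in the same way.
using Test
using MyPackage   # placeholder for the package under test

@testset "method ambiguities" begin
    @test isempty(Test.detect_ambiguities(MyPackage))
end
```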
@Sukera
I don’t remember the exact context as it was a while ago, but the solution was actually from a contributor. Maybe it was in an issue or on some other forum.
However, how is that relevant to my question? The question remains: do you have a solution that avoids a maintainer having to check every function used in a PR?
@Tamas_Papp
Again, beside the point. The fact that an issue is eventually caught doesn’t mean it can be (easily) solved. When the exception popped up, it forced a breaking change in Genie and in all the Genie user apps.
But yes, agreed, maybe the tests also don’t catch it (they should, but apparently they don’t), which is all the more reason to have this at the compiler level.
Sorry, I lost track of what your point actually is, since you raised so many in this discussion, and you keep changing them, declaring replies to previous ones irrelevant or beside the point.
In any case, the bottom line is that if you use internals, it is your responsibility to ensure that they keep working. It is prudent to at least test for the symbol being defined, which you didn’t do (many packages do this). Tooling for this could perhaps be improved, but it already is possible.
Also, it is quite unfortunate that you only provided concrete details around the 60th reply. This makes it difficult to suggest concrete solutions.
@Tamas_Papp Sorry that I have followed up the discussion and replied to the various comments. Next time I’ll try to foresee the answers and objections of all the responders and introduce everything in the first post.
OK - so you suggest that all the packages assert that all the functions used in a piece of code actually exist?
Nope. I am suggesting the following: if I really have to use undocumented internals from either Base or another module, I would just put an `@assert Base.isdefined` like the example above in the unit tests.
This gives me an early warning when they are removed.
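A minimal sketch of that, in `test/runtests.jl` (`modules_warned_for` being the internal from the git log above):

```julia
# Assert that the Base internal we rely on still exists, so its removal
# shows up as a CI failure instead of a runtime error for users.
using Test

@testset "Base internals we rely on" begin
    @test isdefined(Base, :modules_warned_for)
end
```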
No, because that is not a problem, that is how it’s supposed to work.
As a maintainer of a package, if I want to maintain a certain level of quality or want to give certain guarantees to my users, I have to take care what code I use and from where I get it. I can’t just e.g. copy code 1:1 from StackOverflow or from wherever else I get it and expect it to be up to the standards established/required in the context of my package. You just have to look at the myriad of supply chain attacks that have recently come into fashion in the npm world to see why that is a bad idea.
Code review is a core part of accepting PRs in open source work; just blindly accepting patches without checking whether they’re fine for you in the long run is, from my POV, irresponsible.
No one here is suggesting that such a thing is not needed. Heck, I myself linked, very early on in this thread, a comment of mine requesting a clearer way to mark something as API, irrespective of whether it’s exported or not.
HOWEVER - that is not the same as a public/private modifier on the compiler level, which would categorically, absolutely deny access to private variables, as those modifiers are commonly understood.
As for the argument “but it could warn instead of error when accessing a private variable/method/function/module!” - we’d be in the exact same place we are right now. Someone would ignore the warning and use the thing anyway, the thing gets removed, and a big thread ensues about how this should have been prevented by the compiler, when the clear & unmistakable solution is to not rely on internals in the first place & to write tests for those that you do, so that you can take action should they be removed.
See how we’re going in circles? We’re arguing about the same exact thing as 12 messages into this thread originally.
Is it though? It has a docstring, but a search in the docs doesn’t turn up a result. Personally, I can’t decide if it’s API or not. That makes it difficult to adopt the position “if it is not documented, expect breakage”.
@Sukera politely disagree. Of course it’s normal to review a PR, but it’s infeasible to expect that all the functions in Base (or other modules) are manually checked to see if they are exported.
But yes, I see your point. You agree with the need for tools, not necessarily the compiler. Perfectly fine.
However, I don’t understand what the issue is with having public/private modifiers. Without these it’s impossible to develop certain types of features. The most basic one: applications that can be extended by plugins. Since there is no encapsulation, any code can (accidentally or not) access code in other modules and interfere with secrets or access sensitive functionality (think of code triggering functions in a payment module, for example).
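A minimal sketch of what I mean (module and names are hypothetical): non-exported names in Julia are only left out of `using`’s namespace, not hidden, so any code in the same process can still reach them:

```julia
# "Internal" by convention only; nothing is exported from this module.
module Payments
    const API_KEY = "sk_live_example"                 # hypothetical secret
    charge(amount) = "charged $amount using $API_KEY"
end

# Plugin code elsewhere can still call and read the internals directly
# via qualified names:
Payments.charge(100)   # invokes the "internal" function
Payments.API_KEY       # reads the "internal" secret
```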
Why in the world would anybody want to be able to access “private” APIs when everybody on this page agrees that it shouldn’t be done anyway?
Important distinction here: Access modifiers in memory-unsafe languages DO NOT help with security at all. Period. They can help with safety though (in the sense that they make it harder to write code that uses functions it shouldn’t be using).
I don’t think so. Code review precisely means “checking whether the new code is up to the standards of the existing code base” (whatever your standards are).
That’s a tall claim! Your concrete example of a payment provider with plugins (to me at least) already fails the most basic rule of security: don’t run untrusted code in the same environment as your sensitive application, i.e. review the code you use before you use it, to standards appropriate to your application & the guarantees you want to make. At the very least, you can require some third-party-provided secret (i.e., not stored in the code) that cannot be read by a malicious plugin, and secure your API access that way, by authenticating any access to sensitive parts through that token.
There is no magic feature that will make your code “secure”; the compiler will not catch your logic bugs.
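Roughly, the token idea sketched out (all names here are hypothetical, and the actual authentication happens on the provider’s side, not in the compiler):

```julia
# The sensitive call requires a secret supplied by the operator at deploy
# time rather than stored in the code; the payment provider rejects any
# request that doesn't carry a valid token.
module Payments

struct Token
    value::String
end

function charge(token::Token, amount)
    # placeholder for the real request, e.g. an HTTP call that sends the
    # token as a bearer header for the provider to verify
    isempty(token.value) && error("no payment token configured")
    return "charge request for $amount submitted"
end

end # module

# Supplied out-of-band (environment, vault, ...), never committed to the
# repo; the call below errors unless PAYMENT_API_TOKEN is actually set.
token = Payments.Token(get(ENV, "PAYMENT_API_TOKEN", ""))
Payments.charge(token, 100)
```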
We’re not saying that it shouldn’t ever be done, but that IF you need to do it to accomplish your task, that you accept the responsibility of it potentially breaking down the road. That’s been my position since I first stated it in my third reply in this thread, at the very top:
This of course comes with the understanding that it is also your responsibility to check whether the thing you want to use is actually appropriate & available to use.