`using` is for simplified interactive REPL use. Sometimes, simplified language overlaps. That’s why we have namespaces. The fact that this can happen doesn’t mean we have to abandon all simplicity; that’s just silly. There’s no reason to start writing `Base.+`, `Base.kron`, etc. everywhere (like NumPy/SciPy…), but there’s also no reason to expect that everyone can label a function `f` with no ambiguity. Instead, `using` should give a good REPL experience for a single package. But when building libraries from multiple packages together, it can be better to `import` exactly what you need to get the job done. Larger projects need to be more explicit at the cost of usability, while interactive work needs simplicity. These are competing workflows which both have a solution in Julia, and fully advocating for always doing one or the other will never feel right (which is why we have both!).
I think it would be valuable to think this option through in more detail (although maybe in a new thread or GitHub issue). C++ does something like this with its overload resolution.
My initial guess is that it would require a set of rules that work pretty well 95% of the time, but are fairly complicated and thus cause quite a bit of confusion 5% of the time. Regardless, the rock-solid solution for package developers would still probably be to avoid `using`, as witnessed by the Google C++ Style Guide: “Do not use using-directives (e.g. `using namespace foo`).” On the other hand, this approach could still be a win in code that doesn’t need to be of ‘production’ quality.
I think Kristoffer’s point is a good one. Since functions are just like other values in Julia, what would ‘merging’ (in certain cases) imply regarding whether `A.f === B.f`?
Similar advice and clarifications on usage would be great in the modules documentation.
It seems to me that function merging can be done in just one line, for those who want it.
For example, let’s say that both modules A and B provide the function `foo`, so I can only import one of them:

```julia
import A: foo
import B
```

…but when the imported `foo` doesn’t have a method for my object, I want to use the `foo` from B, without having to specify `B.foo` each time. Then I can do:

```julia
foo(args...) = B.foo(args...) # merge B.foo into the imported A.foo
```

(This could easily be shortened even further to something like `@merge B foo`.)
Of course I have to decide which function to import first, in case there are overlapping methods.
I could also continue the chain with

```julia
B.foo(args...) = C.foo(args...) # merge C.foo into the other foos
```

etc.
Or am I missing something?
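For what it’s worth, here is a self-contained sketch of the idea (modules `A` and `B` and their types are invented for illustration); the merge works because the `Vararg` fallback is less specific than any of A’s own methods:

```julia
module A
    struct TA end
    foo(::TA) = "A's foo"
end

module B
    struct TB end
    foo(::TB) = "B's foo"
end

import .A: foo   # pick A's `foo` as "the" foo
import .B

# the one-line merge: fall back to B.foo when A.foo has no matching method
foo(args...) = B.foo(args...)

foo(A.TA())   # "A's foo"  (A's own, more specific method wins)
foo(B.TB())   # "B's foo"  (handled by the fallback)
```

Note that this adds a `Vararg` method to `A.foo` itself, so any other caller of `A.foo` now sees the fallback too.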
If this were true, no communication between people would be possible.
If there were no humans involved in programming then I would agree with this. The only question then would be what does this program cause the computer to do? But humans write and read programs and a programming language is a human-computer interface. Programming language design is not an abstract mathematical endeavor, it is applied psychology. The real question when designing a programming language (or an API or a library) is:
Will programs do what the people writing and reading them expect them to do?
And this question is all about meaning. From the computer’s perspective, of course it doesn’t matter if a function called `mean` is a semantically muddled mess that sometimes computes a summary statistic and sometimes determines a personality trait. The program has some bits which are run through another program which produces bits which cause the computer to do something. Meaning never enters into it.
But if that was all there was to programming, we’d all just write machine code and we wouldn’t be having this discussion. The real question about a program isn’t what it actually does, but whether what it does is what the person who wrote the program intended for it to do. And just as importantly whether what it does is what a person reading the code would expect it to do. This is where meaning comes into play: the code only does what you expect it to do if all the words and constructs in it mean what you think they mean. That is why meaning is not only not irrelevant, it is the essential problem in programming language design.
To me, the fact that meaning is so fundamental to Julia programming means that we’re focused on exactly the right thing. Meaning is the only part of communication that really matters—the rest is just boilerplate and noise. The fact that the most human, subjective, and hard to pin-down aspect of communication—meaning—is so central in Julia means that we’ve put the unimportant parts of programming in the background where they belong and are focused on what actually matters.
I think the best approach may be to have an “explicit import bot” that goes around making PRs to packages to turn `using X` into explicit imports of the names in `X` that are actually used. That way people can write code with `using` for convenience, but the bot would make the meanings explicit for them and also future-proof their code.
Since this has been referred to a few times as a “simple approach” it might be helpful to explain why it’s not so simple and cannot be implemented in the way that this wording suggests.
Auto-qualification would work something like this: if `f(x)` appears in code where `f` could mean `A.f` or `B.f`, Julia decides which one is intended based on the type of `x`, and the code acts as if that’s what was written. This probably seems obvious to someone coming from a static language with method overloading, and C++ does this exact thing for namespace collisions. However, that approach doesn’t work at all in a dynamic language with first-class functions.
First of all, you cannot statically replace `f` with one of `A.f` or `B.f` and get correct dynamic behavior. The proposal means that if you see

```julia
f(a)
f(b)
```

where only `A.f` can apply to `a` and only `B.f` can apply to `b`, then Julia would pretend that this is what had been written:

```julia
A.f(a)
B.f(b)
```
So far so good. But what if someone refactors this code as a loop:

```julia
for x in (a, b)
    f(x)
end
```

This loop should behave in exactly the same way, but now there’s only one call to `f`. Should `f` be replaced with `A.f` or `B.f`? Neither answer is correct. Instead, `f` must be a new function which calls `A.f` in some cases and `B.f` in other cases.
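To make the contrast concrete, here is a minimal sketch (modules `A`, `B`, and their types are invented) of what the hand-written merged `f` looks like; because it is an ordinary first-class function, the loop refactoring is unproblematic:

```julia
module A
    struct TA end
    f(::TA) = "A.f"
end

module B
    struct TB end
    f(::TB) = "B.f"
end

# a genuine "merged" function: a new first-class `f` that dispatches to
# A.f or B.f based on the *runtime* type of its argument
f(x::A.TA) = A.f(x)
f(x::B.TB) = B.f(x)

a, b = A.TA(), B.TB()

# one call site, but the right method is picked per value at runtime
results = [f(x) for x in (a, b)]   # ["A.f", "B.f"]

# and since `f` is a value, it can be passed to higher-order functions
map(f, [a, b])                     # ["A.f", "B.f"]
```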
So suppose we disallowed that: unqualified calls to `f(x)` are only allowed in cases where `f` could be replaced with one of `A.f` or `B.f`. That sounds fine in principle, but we cannot actually know this in a dynamic language because we don’t generally know what the possible types of `x` are. Type inference can sometimes prove things about the types of expressions, but unlike static languages, that’s not part of the semantics of the language; it’s just an optimization. In a static language, you can do static method overloading based on the types of variables and expressions because those are part of the behavior of the language. In Julia, however, expressions don’t have types, values do, and the behavior only depends on the actual type of a value, not the type of the expression that produces that value.
Then there’s the problem of higher-order functions. In `map(f, v)`, how would you resolve whether `f` means `A.f` or `B.f`? There isn’t even an argument to base the choice on. The element type of `v` doesn’t help either since:
- We don’t know it statically.
- It could have element type `Any` but only contain elements to which `A.f` applies, or elements to which `B.f` applies, or some mix of both.

Moreover, `map` is just a function, not a built-in. How would the compiler know that it needs to do some special argument disambiguation for the first argument instead of just passing arguments in the normal way?
The bottom line is that in a dynamic language with first-class functions, the “merged `f` function” needs to be a first-class function object; you can’t just do some kind of static rewrite like method overloading in a static language.
I will summarize once more how I see the problem for clarity.
Suppose you have a package A which defines 10 methods for the function `degree`. Then with `using A`, everything works well: you can use all methods of `degree` unqualified. And this has nothing to do with the “meaning” of the methods; you could well have in the package a method `degree` for polynomials and another method `degree` for permutations, and you can use them without qualification provided they come from the same package.
If now you decide to put 5 of the methods for `degree` in a new package B (perhaps a subset of the methods with the “same” meaning), suddenly `using A` and `using B` no longer allows the unqualified use of the name `degree`.
I find this:
- Inconsistent: I do not see why the decision to split the methods into two packages should lead to a change of behaviour.
- Not conducive to good design: it incites you to coalesce packages into artificially big ones to avoid this problem.
How can you program against a function if you don’t know its meaning? The core of programming is expressing your intent to the computer. This has everything to do with the meaning of a function.
Note that many functions are coded in completely generic terms, against `Any` arguments, and then have specializations for particular types. There’s no way you can support such an idiom if a more specific type overrides that fallback to mean something completely different because another package is using the same name for something else.
@Jean_Michel, I’m confused, because it seems like there are two straightforward solutions that already exist in Julia to address the problem as stated, and which have been mentioned above a few times.
If you own one of the packages (for example, B) then you can put:

```julia
import A: degree
```

inside of module B. No more problem.
If you don’t own either package then you can do the following:

```julia
using A
using B
degree(x::TypeFromA) = A.degree(x)
degree(x::TypeFromB) = B.degree(x)
```

And this will resolve the ambiguity when calling `degree` on the respective types. (Though maybe this isn’t best practice, that doesn’t stop you from personally using this approach to solve the problem as you see it in your own code.)
Is the issue that you want that to happen automatically? It might be obvious how this would work for simple examples, but I understand the point of the naysayers above to be that it isn’t so clear how this would work in general. Is your point something like “I’m not sure how this should be fixed, but I find it a major pain to do this manually”? What’s an example where one of the solutions above is hard to implement?
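For concreteness, here is a runnable sketch of the first solution (all module and type names are invented): B extends A’s `degree` instead of defining its own, so there is only ever one function, which also answers the `A.f === B.f` question above:

```julia
module A
    export degree
    struct Poly end
    degree(::Poly) = 3
end

module B
    import ..A: degree   # extend A's function instead of shadowing it
    export degree        # re-exporting the same function is fine
    struct Perm end
    degree(::Perm) = 5
end

using .A, .B   # no clash: both modules export the identical `degree`

degree(A.Poly())          # 3
degree(B.Perm())          # 5
A.degree === B.degree     # true: one function with methods from both
```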
```julia
using A
using B
degree(x::TypeFromA) = A.degree(x)
degree(x::TypeFromB) = B.degree(x)
```
I prefer the workaround where both packages by convention extend `Base.degree`, and to make this work the user puts in his `.juliarc`:

```julia
if !isdefined(Base, :degree)
    @eval Base function degree end
end
```
The reason for that is that I think the only reason the current state of affairs is livable is that quite a few names are already defined in Base. Imagine a world (like some languages strive for) where `Base` was really minimal, and `*` was not defined in `Base`. There would be a package `Numbers` where `*` is defined for `Int` and `Float`, and a package `LinearAlgebra` where `*` is defined for matrices. Well, this is hardly desirable in Julia, because then every use of `*` would have to be qualified! If you replace `*` by a new operator which is defined independently in two packages, you may see the problem: one of the packages has to depend on the other even if there is no logical reason for that.
I prefer that everything depends on `Base`, adding names to it if needed. But rather than a workaround, I would prefer a solution like the one proposed by @greg_plowman.
This came up on Slack recently, with respect to statistical `fit!` functions. This is an interesting case, because StatsBase.jl and OnlineStats.jl don’t really need to depend upon each other, but both want to use the word `fit!` in a compatible manner.
At the same time, though, SkLearn.jl also defines its own `fit!`, but instead of using rows as its observations (like the former two packages do), it uses columns!
Now in the first case, it seems obvious that the two methods could be merged. You can write your own code that calls `fit!` and doesn’t really care if you’re doing your stats immediately or incrementally. But if that third method gets merged, too, now suddenly you do need to care that you’re not doing your stats via SkLearn, or things will go sideways fast.
There is a pain point here, but automatic method merging isn’t going to be the solution, nor is pushing everything into `Base`. Lots of words have been written about why those aren’t going to be the solution by many authors above. If you’re still not convinced, I encourage you to try re-reading those posts. For now there are those workarounds, and we may develop other alternatives in the future.
One of the thoughts I’ve had for a long time now is being able to have shared namespaces like `StatsBase` but without actually needing to have a common package. I can imagine saying `import "Stats": mean`, which would provide a shared `mean` function based on the string `"Stats"`. Any package that declares `mean` this way would extend the same `mean` function.
Yes, you are not thinking of this from the perspective of a library user. A matlab or python user wouldn’t even get to the point where they understand the issue of why they can’t have two functions of the same name, let alone explain to them what on earth that merging is about and why it is necessary…
I am not worried about package developers, who are capable of doing all sorts of tricks and probably should never be doing `using` anyways.
It is the tutorials and sample code which are the issue, not the documentation. The issue is that a relatively nontechnical user can take two sets of code with `using`, try to run both of them in the same script, and get utterly confused about why it doesn’t work.
```python
>>> def f(x):
...     return x
...
>>> def f(x, y):
...     return x + y
...
>>> f(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: f() takes exactly 2 arguments (1 given)
>>> np = 2
>>> import numpy as np
>>> np + np
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'module' and 'module'
```
I think Python programmers understand this perfectly well.
That example is clearly ambiguous. Nobody is suggesting the ability to “guess the meaning”, but rather that if the “meaning” is unambiguous from the types being passed in, then there is no reason to require redundant namespaces. If there is any whiff of ambiguity, then it should force the user to choose.
Sorry, you are right, that was ambiguous. I meant coming from a single-dispatch language, where there is no reason that you can’t have two `f`s with different types.
If you do `from numpy import *` and `from mypkg import *`, and there is a function called `f` in `numpy` and one called `f` in `mypkg`, then you cannot use `f` unqualified, right?
In general, I think it’s best not to pretend that there are no new concepts to learn when going to a different language. That’s the whole point of having different languages.
I think I see the point you guys are making (“Hey, isn’t it weird that functions not in `Base` are so much harder to extend between packages, because then everyone has to decide which one to extend”) and that it’s a perfectly valid one. However, I see this as only a fairly minor awkwardness, which pales in comparison to the confusion that would result from evaluating every new method in `Base` by default. I have never once found this issue to be a significant difficulty.