Yes, I was wondering the same thing as @aplavin. It does seem like using duck typing in Python helps to address at least some of the manifestations of the expression problem.
Now write this in such a way that it's extensible for other colour subtypes, e.g. `HSV`, `CMYK`, etc. Furthermore, you want a system such that people who write their own colour subtype will also be able to interface with the `add` and `norm` functions.
Duck typing helps initially, but only so far.
I completely agree with you. It's just that the short two-line example is not enough to illustrate multiple dispatch vs the alternatives.
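For concreteness, a duck-typed version of the kind of two-line example being discussed might look like this (the `RGB` class and field names are my assumptions, not code from the thread):

```python
# Hypothetical RGB colour type; duck typing only requires .r, .g, .b fields.
class RGB:
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

# External functions: any object exposing .r/.g/.b fields "quacks" correctly.
def add(c1, c2):
    return RGB(c1.r + c2.r, c1.g + c2.g, c1.b + c2.b)

def norm(c):
    return (c.r**2 + c.g**2 + c.b**2) ** 0.5

print(norm(add(RGB(3, 0, 0), RGB(0, 4, 0))))  # 5.0
```

This works for any sufficiently ducklike type, which is exactly why the short example alone can't show where it breaks down.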
Right. So that's one of the other "solutions" to this problem in my talk: don't use OOP at all, just use external functions. Which does work, but then you can no longer specialize or dispatch on the first argument, `c1` and `c` (let alone `c2`). Moreover, the argument follows this trajectory:
- Here's a problem that is hard in OOP.
- Solution: don't use OOP at all, just use functions.
That is a solution, but it does not counter the fact that the problem is a problem for OOP; it instead shows that you can solve the problem by abandoning OOP entirely. In general, in functional programming it is easy to add new operations to existing types: functions don't live inside of types like methods do in OOP, so you can just define a new external function to add an operation to an existing type. Problem solved. However, you have the opposite difficulty in the functional approach: it's hard to make existing operations apply to new types. This is easy to do in OOP: you just subclass whatever the operation is defined on and add a specialized method.
Consider this `add` function that you just defined in Python. It works great if `c1` and `c2` are ducklike enough to quack in response to accessing the `.r`, `.g` and `.b` fields. But what if you wanted to make the `add` function work on something less ducklike that needed a different implementation? You need to edit the original `add` function and add an if/else type check for each new implementation that you want to support. Which works, but isn't extensible: you need to modify the function for each new type you want to support. This is exactly the kind of problem that OOP was introduced to solve. Multiple dispatch doesn't have any problem here: you can define `add` like that and then just write more specialized methods as needed. No fuss, no muss.
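A minimal sketch of the non-extensible if/else pattern described above (the type names `RGB` and `Gray` are illustrative, not from the thread):

```python
# Hypothetical colour types with different internal representations.
class RGB:
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

class Gray:  # less ducklike: has no .r/.g/.b fields
    def __init__(self, v):
        self.v = v

def add(c1, c2):
    # Every new, less ducklike type forces another edit to this one function:
    if isinstance(c1, Gray) and isinstance(c2, Gray):
        return Gray(c1.v + c2.v)
    # ...one more branch per supported combination of types...
    return RGB(c1.r + c2.r, c1.g + c2.g, c1.b + c2.b)
```

The function works, but third parties cannot teach it about their own colour types without modifying its source.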
In summary, the expression problem presents as different problems for object-oriented and functional programming paradigms:
- In OOP, it presents as difficulty adding new operations to existing types, which is easy when just using functions;
- In functional programming, it presents as difficulty making existing operations apply to new types, which is easy in OOP.
The magic of multiple dispatch is that both aspects are unproblematic: you can add new operations to existing types and extend existing operations to new types with equal ease and simplicity.
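Python has no built-in multiple dispatch, but the idea can be sketched with a registry keyed by the tuple of argument types; all names here are illustrative, and a real system (like Julia's) would also handle subtyping and specificity:

```python
# Minimal multiple-dispatch sketch: a table from (type, type) to implementation.
_impls = {}

def dispatch(*types):
    def register(f):
        _impls[types] = f
        return f
    return register

def add(x, y):
    # Look up the implementation based on BOTH argument types.
    return _impls[(type(x), type(y))](x, y)

class RGB:
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

@dispatch(RGB, RGB)
def _(x, y):
    return RGB(x.r + y.r, x.g + y.g, x.b + y.b)

# New operations on existing types AND new methods for existing operations
# can both be registered later, without ever editing `add` itself.
```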
The problem with this is that it's becoming stylish to duck type external functions because abstract typing is too restrictive, since there is no multiple inheritance. So we're back in Python land, to some extent.
So it's harder to specialize behavior, even with traits, since they aren't first class (they can't be used in type parameters, triangular dispatch, etc.).
So we have things like `istable`, or worse, just trying to do something. I'd rather not make wrapper types to specialize behavior; I want different kinds of tables, but also to have my own type hierarchy.
Edit: One example is the manifolds ecosystem. Lots of things can be seen as manifolds, but there's an abstract type, so it's an island in a sense.
Also, I assume trait hierarchies give rise to serious method ambiguities, but I haven't tried yet.
Really? C++ multi-methods have been a well-known part of C++ OOP techniques for a long time. See, for example, Pirkelbauer, P., Solodkyy, Y. and Stroustrup, B., "Open Multi-methods for C++", 2007.
I think this post may get at why it's so hard to come up with a single definitive example where it's clear that you need multiple dispatch. Because languages like Python have both methods and external functions, each of which is good at addressing one side of the expression problem, but each of which has serious trouble with the other. So for any particular problem, you can just say "ah, but you should have been using methods" or "you should have been using functions". The issue is that you don't know in advance which problem is going to occur, and they may both occur, in which case there is no option that's good for all situations. Is someone going to want to add new operations to one of the types you've defined? Or are they going to want to make one of the operations you've defined apply to a new type? In a hybrid language like Python, you can pick one or the other, but once you've picked, then one thing will be easy and the other will be hard. It's possible that you picked wrong and the other design would have led to fewer problems: you made something a method when it should have been a function, or vice versa.
Compare this with Julia, where there's no choice between function and method. All functions are generic and can have methods added to them. You define a type and then you define a function that operates on that type. That's it. There's no other way to do it. You can't make the wrong choice because there is no choice. Other people can extend your function to apply to new types if they want to, and they can define new functions that operate on your types if they want to. This is what makes code reuse so straightforward: there is no wrong choice that can be made that prevents it.
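Python's standard library does expose half of this picture via `functools.singledispatch`: an external function that third parties can extend to new types, but which dispatches on the first argument only. The `describe` function here is a made-up illustration:

```python
from functools import singledispatch

@singledispatch
def describe(x):
    # Fallback implementation for types with no registered method.
    return "something"

# Later, other code can extend the existing operation to a new type...
@describe.register(int)
def _(x):
    return "an integer"

# ...but dispatch inspects only the FIRST argument's type, so there is
# no way to specialize on a combination of argument types.
print(describe(3))     # an integer
print(describe("hi"))  # something
```

So even in the hybrid language, the function side of the design can be extended by outsiders only along one axis, which is the asymmetry the post describes.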
It's a bit unclear what you're saying here. Is the argument that C++ is a single-dispatch language and there exist implementations of multimethods for C++, therefore multimethods are single dispatch?
That's what it seems like, but it's clearly… why do I even go on the internet lol.
Anyways, there's no reason for multimethods to be part of single dispatch syntax, and Bjarne thinks the Julia way of doing it is the right way to express it as of 2019:
> Unified function call: The notational distinction between `x.f(y)` and `f(x,y)` comes from the flawed OO notion that there always is a single most important object for an operation. I made a mistake adopting that. It was a shallow understanding at the time (but extremely fashionable). Even then, I pointed to `sqrt(2)` and `x+y` as examples of problems caused by that view. With generic programming, the `x.f(y)` vs. `f(x,y)` distinction becomes a library design and usage issue (an inflexibility). With concepts, such problems get formalized. Again, the issues and solutions go back decades. Allowing virtual arguments for `f(x,y,z)` gives us multimethods.
So the updated version, from the mouth of the author of that paper, is: the OOP syntax was a bad idea and it should all be about multimethod semantics on `f(x,y,z)`. So (a) multimethods are not OOP and (b) OOP syntax is not necessary or sensical to use with multimethods.
Indeed. It should also be pointed out that while it's currently normal to conflate OOP with using classes, this is not how the coiner of the term thought about OOP, and indeed structs + multiple dispatch are considered a form of object oriented programming by many.
For instance, the Common Lisp Object System (CLOS) is very similar to Julia's dispatch type system and in its day was accepted as object oriented. Today, OOP is a bit of a dirty word in many circles, so the Julia community has not put much work into reminding people that by many definitions, Julia IS object oriented. In fact, many will say things like "Julia is not object oriented".
I think this is strategically sound, but also a little unfortunate. I don't think we should allow the likes of Java, Python and C++ to have a monopoly on defining what is meant by the very useful concept of Object Oriented Programming.
Instead, I'd rather we focused on saying things like "Julia does not use classes or concrete inheritance, and we think those are bad ideas".
I want to say that multi-methods can be quite natural in single-dispatch languages. OOP provides you with basic building blocks, and by applying some well-known patterns (known for more than 20 years) you can build very complex concepts quite efficiently.
For me it is too strong to say that classical OOP does not support "something", because OOP, besides its core, includes a lot of well-known patterns for many problems. And these patterns are a natural part of the modern OOP methodology.
I think you are confusing functional languages and purely functional languages. For example, Haskell claims to be purely functional, and therefore side-effects do not exist (ok, in fact they exist, it would be impossible for them not to exist, but they are very strongly restricted to their own world, and most of the code is pure).
I really advise looking at Haskell; it was my preferred language before Julia and now is tied with it. The "no variable can be used twice" bit is also present there, but the fact is, Haskell has no variables at all (variables imply state, and side-effects); it is constants and function parameters all the way down, XD.
What delighted me the most in Haskell was how function typing synergized with the search system. Many times, I wanted a generic higher-order function that I had no idea what name it could have (or which package it could be in), but I knew (i) that it probably existed; (ii) the number of times each generic type parameter would appear in the higher-order function, and what the signature of the function I wanted to pass as a parameter would be. Almost every time I made a search, I would find the generic higher-order function I wanted.
A language that has multi-methods is, by definition, not a single-dispatch language. It's very hard to read that paper as being about how awesome single dispatch is when the entire introduction is talking about the limitations of single dispatch and how elegantly multiple dispatch solves them. It then goes on to propose an extension to C++ which makes it a multiple dispatch language.
> For me it is too strong to say that classical OOP does not support "something", because OOP, besides its core, includes a lot of well-known patterns for many problems. And these patterns are a natural part of the modern OOP methodology.
OOP is not a super well-defined term, so if you want to include multiple dispatch in OOP, then sure. All Turing-complete programming languages can be implemented in each other, so by that line of reasoning all languages have all features and we should all just program in machine code.
In reality, some things are easier in some languages than others. Design patterns are the common ways that people compensate for the limitations of the languages they use. The fact that there exist design patterns that allow you to solve some aspects of the expression problem in single dispatch languages doesn't negate that there was a problem that needed solving in the first place.
That is really cool. It would be possible in Julia as well: you would run type inference on all the code in all registered packages and then build an index of the type maps of all the methods, which you could use to look up what methods might provide the mapping of types that you want. It would be a little fuzzier than in Haskell, but still doable!
Yes, everyone has their own experience. However, I agree with @pixel27, because I worked extensively with Java in different companies before academia, and the advantages of OO are not so great. Actually, the majority of design patterns in an OO language like Java are not needed in a language like Julia or Python, because functions can be parameters of other functions; see Design Patterns in Dynamic Languages.
I have spent many years teaching OO design, showing all the problems that inheritance has, and how composition is usually a better option. In these cases, the code using multiple dispatch is very similar, mainly just changing `object.method(...)` => `method(object, ...)`.
You're right about that. I suggest you look at https://github.com/tk3369/BinaryTraits.jl; for that type of problem it could be useful.
That would be great, but I think it worked so well in Haskell because functions in Haskell are very polymorphic, though not as much as in Julia: there are no multiple methods (not without extensions, I think; there is pattern matching, but this only cares about values). What changes are the types of the parameters, the type parameters inside the parametric types of the parameters, and the types in the signatures of the parameter functions, but not, for example, the total number of arguments. And each function did not have its own type like in Julia, but a single most generic possible type like `id :: a -> a`, or `(+) :: Num a => a -> a -> a`, or (for a higher-order example) `map :: (a -> b) -> [a] -> [b]`. So the signature on the function parameters was either inferred to something very tight (because of all the restrictions I pointed out above) or it was manually written to be "just right" (allowing the generality desired while giving as much information for a possible search as possible).
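As a rough Python rendering of the kind of signature being described (a hypothetical analogue only, since Python's generics are far looser than Haskell's and carry no search system):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Rough analogue of Haskell's  map :: (a -> b) -> [a] -> [b].
# The type variables record how often each generic parameter appears,
# which is exactly the information a signature-based search would use.
def my_map(f: Callable[[A], B], xs: list[A]) -> list[B]:
    return [f(x) for x in xs]

print(my_map(len, ["ab", "c"]))  # [2, 1]
```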
I really like how the two languages have gone very deep on their own sets of trade-offs (Julia centering on multiple dispatch and Haskell focusing on purity) and delivered distinct but enjoyable experiences to their users.
Incidentally, I find it somewhat puzzling that multiple dispatch was/is not used more widely.
Historically, AFAIK the concept was invented in the 1980s in Common Lisp; the fact that you could just add it to the language shows how powerful Lisp is. But after that, for decades it only showed up in niche languages (Dylan is probably the most well-known of these), or people tried to add it to mainstream languages and more or less failed to gain traction for the idea.
Then, after a long winter, Julia is designed with multiple dispatch baked in, combined with a powerful parametric type system.
What happened in the meantime? How was the concept practically forgotten? I can't explain this rationally.
I think the corporate factor had a big role in this sad story:
LISP languages promote a functional paradigm (they don't necessarily enforce it) and tend to be more memory-hungry. But memory was limited and expensive back then, so LISP never really took off (also probably because of failures in AI research in the 60s and 70s). Most of the concepts it introduced, however, lived on.
LISP had garbage collection and multimethods and supported OOP, but OOP was somehow distorted over the years into the syntax that most people are familiar with at the moment. Probably big corporations like Sun, Apple and Microsoft had something to do with this, when they were battling in the 90s. They didn't need a flexible language like LISP, probably because they were afraid of fragmentation, with everyone inventing their own dialects. They needed something restrictive, controllable and modular for the huge teams of developers that would come and go over the years.
Java was more user-friendly than C++. Couple that with massive marketing and you have an industry trend. Corporate backup is always a strong engine, just look at Go.
Why didn't they add multiple dispatch from the beginning in Java or C#? Probably because they didn't need it in an era where people were chasing GUIs, and the single-dispatch `obj.method()` call was more than enough to achieve that.
Going from what I remember reading about this (on this board, I think), two things come to mind: it's hard to implement multiple dispatch and make it efficient, and it isn't that beneficial unless you go all in. Optional multiple dispatch is less useful, and if it's slow as well, people will opt out, or not opt in.