Occasionally someone will be annoyed that Julia requires an actual true or false value in conditionals, whereas many dynamic languages (including Lisps) allow arbitrary values in conditions and have rules about what counts as “truthy” or “falsey” in that context. Many static languages even allow a degree of this: C traditionally doesn’t have a boolean type and instead uses non-zero integer values to indicate truth and zero to indicate falsehood. Julia could easily do something along these lines by having a Base.istruthy(x) generic function that types can “register” themselves with; conditionals would implicitly call Base.istruthy(cond) to decide which branch should be evaluated.
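Here’s a minimal sketch of what that mechanism could look like. To be clear, istruthy and the @truthy_if macro are invented names for illustration; nothing like them exists in Base, and a real implementation would hook into lowering rather than going through a macro:

```julia
# Hypothetical generic function that types could "register" methods with.
# None of these names exist in Base; this is only a sketch of the idea.
istruthy(x::Bool)           = x
istruthy(x::Number)         = !iszero(x)      # C-style rule: non-zero is true
istruthy(x::AbstractString) = !isempty(x)     # Python-style rule: non-empty is true
istruthy(x::AbstractArray)  = !isempty(x)
istruthy(::Nothing)         = false

# A stand-in for what the compiler would do: route conditions through istruthy.
macro truthy_if(cond, body)
    return :(istruthy($(esc(cond))) ? $(esc(body)) : nothing)
end

@truthy_if "hello" println("non-empty string counts as true")
@truthy_if 0       println("never printed under the C-style rule")
```

The mechanism itself would be easy enough to add; the rest of this post is about why the rules it would encode are arbitrary.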
Proponents of truthiness will generally argue that it’s obvious which values are truthy and which are falsey. What’s interesting about that line of reasoning is that even though it’s supposedly obvious, different languages completely disagree on what is or isn’t truthy. Most languages with truthiness have followed C’s example and consider zero to be false and non-zero values to be true. But not all of them! Consider Clojure (I just saw this on Hacker News), which considers zero to be truthy because “0 is not ‘nothing’, it’s ‘something’”. Which is a perfectly valid line of reasoning, and highlights just how arbitrary truthiness is. Apparently Common Lisp also considers zero to be truthy, so I guess Clojure followed CL rather than C. So languages with truthiness can’t even agree on whether zero is true or false! If you think you’re safe if you just avoid weird old languages like Lisp, think again: Ruby follows Lisp here and considers zero to be true.
Even if we could all agree that zero is false (which apparently we can’t), what about other integers? In languages that copied C, they’re all truthy. But does that really make the most sense? Asking if something is true/false is generally expressed as converting a value to the boolean type, which is often viewed as a 1-bit integer. If you want to do it explicitly, you write bool(x) in Python, for example. But when you convert an integer type to a smaller integer type, you typically truncate and keep the trailing bits. By that logic, shouldn’t the last bit of an integer dictate its value when converted to boolean? That is, we would consider all odd numbers true and all even numbers false? That makes at least as much sense to me as zero versus non-zero. I can imagine an alternate history where that was the common rule and everyone would be aghast if you couldn’t write if (n) {/*odd*/} else {/*even*/}. “What do you mean I have to write n % 2 in order to check for parity? It’s such a basic operation!” I suspect that our actual history only transpired because jz and jnz (jump if zero, jump if not zero) happened to be chosen as assembly instructions. But we could just as easily have had je and jo (jump if even, jump if odd) as assembly instructions that jump based only on the last bit of a register.
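To make that concrete, here are the two competing conversion rules written out in Julia (the function names c_style and truncation_style are made up for the comparison); note that Julia’s actual Bool(x) refuses to pick a side and only accepts 0 and 1:

```julia
# Two equally arbitrary ways to "convert" an integer to a boolean
# (the names are invented for this comparison):
c_style(x::Integer)          = !iszero(x)   # C's rule: zero is false, everything else is true
truncation_style(x::Integer) = isodd(x)     # keep the last bit: even is false, odd is true

c_style(2), truncation_style(2)    # (true, false) -- they disagree on every even non-zero value
c_style(3), truncation_style(3)    # (true, true)

# Julia itself refuses to choose: Bool(0) is false, Bool(1) is true,
# and Bool(2) throws an InexactError rather than guessing.
```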
Another fun thing that Lisps don’t even agree on: in Common Lisp the empty list () is false but in Scheme it is true. And of course we have the same schism in more modern languages too: [] is false in Python and true in Ruby.
More fun and games with truthiness, because I just can’t stop now. Older versions of Python considered midnight, and only midnight, to be a false time. After a great deal of arguing and waffling, this was deemed to be enough of a footgun that it was changed in Python 3.5 (breaking change in a minor version, anyone?). But isn’t the real issue here the notion that it’s somehow better to write if t: to check whether a time is not midnight than to explicitly write if t != midnight:? Doesn’t that suggest that if you want to check whether a number is non-zero you should write if x != 0: rather than just if x:, and if you want to check whether an array is non-empty, you should write if len(a) > 0: rather than just if a:?
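For comparison, this is roughly what those checks look like in Julia, where the explicit form is the only option (using the standard Dates library for the midnight example):

```julia
using Dates

x = 42
a = Int[]          # empty vector
t = Time(9, 30)

x != 0          && println("x is non-zero")       # rather than relying on `if x:`
!isempty(a)     && println("a is non-empty")      # rather than relying on `if a:`
t != Time(0, 0) && println("t is not midnight")   # rather than relying on `if t:`

# There is no truthiness fallback: `if x ... end` with an Int condition throws
#   TypeError: non-boolean (Int64) used in boolean context
```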
More, but with strings… In Python only the empty string is false and non-empty strings are true. In Ruby all strings are true. In PHP the empty string is false and non-empty strings are true… except for the string '0', which is also false. This makes a little sense if you imagine that the string is first parsed as a number which is then compared to zero, but that’s not what’s happening, since the strings '00' and '0.0' are both true…
I could go on; I haven’t even mentioned JavaScript or Perl. But I’ll stop. You hopefully get the point: it’s almost as if these languages were just making up random shit and then claiming that it’s obvious. If you want to know whether a number is zero, compare it to zero. If you want to know whether a string or array is empty, check whether it’s empty.
Just say no to truthiness.