On the arbitrariness of truth(iness)

Another language (or rather a whole family of languages) where the numeric values of true and false don’t follow the C convention is the Unix shell (e.g. bash). Admittedly, this convention applies to program exit codes rather than variables’ values, but still:

$ true; echo $?
0
$ false; echo $?
1
$ if true; then echo yes; fi
yes

That is, only an exit code of zero is “true”.

6 Likes

It also allows you to write mean(rand(1000) .> 0.5) to compute proportions, which I find quite convenient. I’m not sure what the use case would be for treating Bool as Int1.
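For instance, a small sketch of this convenience (the data here is made up; sum/length is used so the snippet doesn’t need the Statistics stdlib, but it’s equivalent to mean):

```julia
xs = [0.3, 0.7, 0.9, 0.1]

# Bools promote to 0/1 in arithmetic, so sum counts matches
count_above = sum(xs .> 0.5)                # 2
proportion  = sum(xs .> 0.5) / length(xs)   # 0.5, same as mean(xs .> 0.5)
```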

1 Like

Especially since you can truncate to one bit at the end of any sequence of ring operations and get the same answer.
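To spell that out with a toy example (my own illustration): doing the ring operations in plain integer arithmetic and truncating to one bit at the end agrees with the Bool ring ops (xor as addition, and as multiplication).

```julia
a, b, c = true, false, true

# integer arithmetic, then truncate to one bit ...
lhs = (a + b * c) % 2
# ... matches the Boolean ring directly
rhs = xor(a, b & c)

lhs == rhs   # true
```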

Since @jar1 thrust this conversation into discussing Boolean algebra, a field in which they recently changed my mind, I wanted to share some of these new thoughts.

TLDR: Identifying false/true with 0/1 is arbitrary and awkward; it’s more coherent to use -\infty/+\infty instead.

Like the truthy and falsy values of the OP, the choice to consider 0 falsy and 1 truthy is similarly arbitrary, and as this thread demonstrates it makes for an awkward algebra. It’s awkward in other ways too: for example, and (\wedge) and or (\vee) are commonly considered analogous to multiplication and addition respectively, with operator precedences reflecting that, yet they distribute over each other in both directions (in contrast, numeric addition does not distribute over multiplication!).

In these Unified Algebra essays 1 and 2, compsci prof Hehner at U of T argues that, to integrate Boolean algebra cleanly into our algebra of numbers, the numeric value corresponding to truth should be \top (the top of the number lattice, usually positive infinity), and the numeric value of falsity should be \bot (the bottom of the number lattice, usually negative infinity). Instead of the conjunction \wedge and disjunction \vee operators, use \downarrow (min) and \uparrow (max) respectively.

Then, where p,q\in\{\bot,\top\}, instead of p\wedge q (p and q) you’d write p \downarrow q (p min q), instead of p \vee q (p or q) you’d write p \uparrow q (p max q), and instead of p \veebar q (p xor q) you’d write p \ne q. Also, instead of p \implies q (p implies q) you’d write p \le q (p order q), instead of p \iff q (p iff q) you’d write p=q, and instead of writing De Morgan’s laws as \neg(p\wedge q)\iff\neg p\vee\neg q and \neg(p\vee q)\iff\neg p\wedge\neg q, they’d be written: -(p\downarrow q)=-p\uparrow -q and -(p\uparrow q)=-p\downarrow -q. Notice that all the operators—min, max, negation, and comparisons—are in common with regular numeric algebra.
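A minimal Julia sketch of this correspondence (the names TOP and BOT are mine, standing in for \top and \bot; this is just the ±Inf encoding, not anything from Hehner’s essays verbatim):

```julia
const TOP = Inf    # ⊤, truth: top of the number lattice
const BOT = -Inf   # ⊥, falsity: bottom of the number lattice

p, q = TOP, BOT

min(p, q) == BOT           # p ∧ q (and) becomes min
max(p, q) == TOP           # p ∨ q (or) becomes max
-p == BOT                  # ¬p (not) becomes numeric negation
(p <= q) == false          # p ⟹ q becomes p ≤ q; here ⊤ ⟹ ⊥ fails
-min(p, q) == max(-p, -q)  # De Morgan, inherited from numeric algebra
```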

Mathematically, I like that this concept unifies Boolean algebra with numeric algebra. Epistemologically, I like that whatever discomfort people have toward the concept of infinity, they should rightly have toward the attainability of certain truth (an attitude of humility). And practically, with a background in analog chips, I like that this concept is reflective of the circuits we design to convert a continuous signal into a binary one: comparators, circuits which ideally have infinite gain about their decision point, whose output saturates at finite V_L and V_H out of practical necessity but which otherwise could be thought of as returning V_\bot or V_\top. Also, the designs of and and or gates are complementary; it’s strange for one to correspond to an operator with higher precedence than the other (mobility \mu_e/\mu_h notwithstanding).

It’s a major departure from popular languages though, so I don’t think we’ll see it anytime soon, but dropping other silly notions of truthy and falsy is a good start. Maybe in Julia 3.0 :wink:

6 Likes

What you are describing appears equivalent to a fuzzy logic, i.e., an extension of Boolean logic to real truth values in [0, 1]. Common choices include p \land q = \min(p, q), \; p \lor q = \max(p, q) or p \land q = p \cdot q, \; p \lor q = p + q - p \cdot q.
In general, \land is replaced by a so-called t-norm and \lor by the corresponding t-conorm, conforming with De Morgan’s laws under the negation \lnot p = 1 - p, i.e., p \; \text{t-conorm} \; q = 1 - ((1 - p) \; \text{t-norm} \; (1 - q)).
The version you state above is equivalent to the max/min version under the monotone mapping p \mapsto \log(\frac{p}{1-p}) from [0, 1] to the extended reals, i.e., identifying 0 and 1 with negative and positive infinity respectively.
In any case, not all laws of Boolean logic carry over to the extension: e.g., the law of excluded middle p \lor \lnot p \equiv T \; \forall p is lost, which was a major motivation for developing fuzzy logic in the first place.
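For concreteness, the two t-norm/t-conorm pairs mentioned above can be sketched as follows (function names are my own, not a library API):

```julia
# Gödel (min/max) pair
and_min(p, q) = min(p, q)
or_max(p, q)  = max(p, q)

# product pair
and_prod(p, q) = p * q
or_prod(p, q)  = p + q - p * q

neg(p) = 1 - p

# excluded middle fails once intermediate truth values exist
p = 0.5
or_max(p, neg(p))    # 0.5, not 1
or_prod(p, neg(p))   # 0.75, also not 1
```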

Another approach is taken by the J programming language where and and or are extended as lcm (least common multiple) and gcd (greatest common divisor) respectively – I’m not sure about the rationale/implications of that choice though.

3 Likes

Not quite. Indeed, it carries similarities, such as using \max and \min, and it involves a mapping [0,1]\rightarrow\mathbb R (which exact mapping I’m not sure, but p\mapsto\log\left(\frac{p}{1-p}\right) seems reasonable). But it still has only two values, and the law of excluded middle remains: for p \in \{\bot, \top\}, p\uparrow -p = \top.

In Hehner’s formulation, if you wish to lose the law of excluded middle, the first thing to do would be to introduce a zero value such that p\in\{\bot,0,\top\}, making it a three-valued algebra. Or for full fuzziness, let p\in\mathbb R.
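Sticking with the ±Inf encoding, a tiny sketch (my own) of how introducing a middle value breaks excluded middle:

```julia
excluded_middle(p) = max(p, -p)   # p ∨ ¬p in the min/max encoding

excluded_middle(Inf)    # Inf (⊤): the law holds
excluded_middle(-Inf)   # Inf (⊤): the law holds
excluded_middle(0.0)    # 0: neither ⊥ nor ⊤, so the law fails
```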

1 Like

Not exactly, there’s a pretty solid connection: Boolean algebra - Wikipedia

3 Likes

Good point, it looks like that Wikipedia article needs to be edited :wink:

Should have been more precise: Any fuzzy logic reduces exactly to Boolean logic when restricted to the truth values \bot and \top (usually represented as 0 and 1, or negative and positive infinity in your case). The law of excluded middle (and possibly other laws as well) is lost as soon as additional values, e.g., 0.5 or 0.1, are introduced.
What I wanted to stress is that there are many ways (fuzzy logics) of extending Boolean logic, i.e., logics identical on \top and \bot alone but with different properties on other values. In the end, the algebra on real numbers is not a Boolean algebra and every embedding has its pros and cons. On the other hand, just identifying \top and \bot with 1 and 0 does not aim to embed the Boolean algebraic structure in any meaningful way. Instead, I tend to think of it as a short-hand notation for the indicator function or Iverson bracket, which is quite handy for arithmetic involving Boolean conditions, e.g., abs(x) = x * (x > 0) - x * (x < 0).
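Making that short-hand runnable (named myabs here only to avoid shadowing Base.abs):

```julia
myabs(x) = x * (x > 0) - x * (x < 0)   # Bools promote to 0/1

myabs(-3)   # 3
myabs(5)    # 5

# the same trick makes Boolean vectors act as indicator functions
xs = [-2, 3, -1, 4]
sum(xs .* (xs .> 0))   # 7: sum of the positive entries
```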

5 Likes

I should mention another philosophical benefit of identifying logical \{\bot,\top\} with numeric \{-\infty, +\infty\}.

In the field of communication, detecting a binary symbol roughly entails demodulation and filtering of some sort, followed (for phased arrays) by a projection from a high-dimensional space to a scalar, followed by a comparison against a threshold.

A natural question should be: what is the ideal binary symbol? Well, it’d be the one that can be subjected to any degree of noise pollution as it propagates through the channel and still maintain zero transmission errors: there should be infinite spacing between the symbols. The natural answer is a signal taking values in \{\bot,\top\}.

In practice one must of course trade off the various limitations that don’t permit real signals to have infinite (or negative) power or bandwidth, but the notion of an ideal is generally a useful mental tool.

This is a good perspective—identifying logical \{\bot,\top\} with \{0,1\} isn’t the most philosophically coherent, but it makes the Iverson bracket implicit, which is handy practically. It means Boolean vectors implicitly serve the role of indicator functions, which is also handy.

Maybe if Julia had a nice indicator function/explicit Iverson bracket, we could work around this. The parsing rules around numeric literals make it difficult to use 1 as an Iverson bracket. Then again, the implicit Iverson bracket sure is handy for Code Golf.

There’s a bold 𝟏 (\bfone):

julia> 𝟏(x::Bool) = x ? 1 : 0
𝟏 (generic function with 1 method)

julia> 𝟏(3 < 4)
1

I didn’t get this point. What does it look like in code?

1 Like

I suppose there’s also a hypothetical future in which Julia’s parsing rules change such that numeric literals are callable. Then it’d be trivial to define:

(x::Number)(y::Number) = x*y # for backwards-compatibility
(x::Number)(b::Bool) = b ? x : zero(x) # for indication

In addition to 1(p::Bool) serving as an indicator, this concept also has the advantage(?) that the product (a+b)(c+d) would become valid.

Suppose I wish to calculate the sum of non-negative values of xs. Since indicating is implicit, I might write:

(xs .≥ 0)' * xs

If Bool were identified with infinite values, we’d need an explicit indicator and would be forced to write a longer expression such as:

1.(xs .≥ 0)' * xs
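For comparison, the shorter form already works today thanks to the implicit Bool-to-Int promotion (made-up data; `'` takes the adjoint, so this is a dot product):

```julia
xs = [-2.0, 3.0, -1.0, 4.0, 0.0]

(xs .≥ 0)' * xs   # 7.0: the 0/1 indicator vector dotted with xs
```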
1 Like