Philosophy of programming

What is the philosophy of programming when syntax rests on the assumption that something resolves to certain mathematical formulae, and all the code around it is arranged so that the pretension holds, even though the content of the formulae was never meant for a computer?


I don’t understand what you mean. Can you elaborate or give examples? Splitting your question into smaller sentences and fixing mistakes would help, too.


A close example, with almost trivial content, would be that Pi in a computer is not exactly Pi but a rounded number - yet we build everything around it to make it look like Pi. Many things rely on some trickery to make them behave like something else, so that they appear to be that thing.

It is also inevitable that we could use any programming object for something else, but it is mathematical to use each of them for the correct thing.

(moved to Offtopic. Please kindly refrain from posting vague stuff like this in Internals & design, it does not belong here.)


I definitely agree that programming language design is more philosophical than many give it credit for… but I’m still not clear on what sort of discussion you’d like to have here.


Regarding this particular point, in Julia pi is pi, not a rounded number:

julia> typeof(pi)
Irrational{:π}

When it enters a calculation, the result is in most cases indeed converted to another, “rounded” type:

julia> typeof(2 * pi)
Float64

But in theory it is possible to define functions for pi that would give results with arbitrary precision.


You can, for example, ask for the built-in value pi to any precision (up to typemax(Clong)):

julia> convert(BigFloat, pi)

julia> setprecision(1024)

julia> convert(BigFloat, pi)

julia> setprecision(4096)

julia> convert(BigFloat, pi)

I would say the philosophy of programming is to make the computer DO SOMETHING. Sometimes that something would take too long, or require too much memory or disk space. In those situations you either wait for the hardware to improve, or you make sacrifices in the code and approximate what you want the computer to do within the constraints of time and hardware.

It seems to me that much of what you describe is in line with transcendental idealism as described by Kant. We create models of things not as they are in reality, but as we perceive them. We are limited not by reality (e.g. the true value of pi) but by our perceived models (our use of pi in calculations).

I think that Julia’s multiple dispatch + type system works really well in large part because it allows our models (perceptions) to be expansive, yet ambiguous, and all the while running a computation in reality that maps very closely to those models–resolving ambiguity becomes a task for the language and libraries instead of the user.

For example, our imperfect mental model of the + method is sufficient for us to call 1 + 5.5 and be perfectly satisfied with a result of 6.5. Yet the underlying reality of representing numbers in bits is far more complex, and a closer look reveals a daunting depth of complexity, not to mention adding two separate representations (floating point and integer). Impressive for such an outwardly simple arithmetic statement. This phenomenon occurs in any domain. How can a tool reflect reality and also be a tool for incomplete, ambiguous thought?
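The machinery hidden behind that one call can be glimpsed directly in the REPL. This is only a sketch of the promotion step (all names below are from Base):

```julia
# Before adding an Int and a Float64, Julia promotes both arguments
# to a common type, then dispatches to the same-type method of +.
x, y = promote(1, 5.5)
println((x, y))                      # (1.0, 5.5): the Int was widened
println(promote_type(Int, Float64))  # Float64
println(1 + 5.5)                     # 6.5, computed in Float64 arithmetic
```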

With C, you are able to closely reflect the reality of the machine. But you are able to do little else–how far can your mental models reach when the underlying domain is always the machine? It is the job of the user to manage all operations correctly.

With Haskell near the other end of the spectrum, you have a tool with which you can express precise, expansive mental models. The underlying domain is whatever you decide, no questions asked (so long as it is consistent and pleases the compiler). Your mental models, however, are minimally related to the instructions that actually run on the machine.

OOP languages like Java, C++, or TypeScript attempt a balance, but in my opinion the “Object Oriented Programming” that we saw emerge in the early 2000s was a misguided, incomplete attempt at Alan Kay’s vision. Kay was interested in the “object” as a manifestation of Platonic ideas or Leibniz’ monads. What came about was far from that. The expressive power provided by the “class” system in these recent languages explodes in complexity as the mental models necessarily evolve to match reality. What begins as a simple expression in one’s mental map becomes a soup of interfaces and classes, whose ingredients are objects which have outlived their names.

I think Julia finds a sweet spot here, relative to what exists today. A user can build up layers of abstraction without a combinatorial explosion of behavior resulting from any nontrivial type hierarchy. With good method names, our mental models can extend far and wide and we can often expect behavior consistent with those models without knowing, or needing to know, exactly what our computation is doing. Julia’s “magical moment” is a testament to this.

Still, we are sometimes forced into dealing with the reality of the machine, if our library author’s mental map does not match our own. To be sure, there is often no replacement for assembly or C, but for most users such a language is impractical. In a high level language, it seems, the machine is best left ignored, inaccessible to the user (e.g. Python, other interpreted languages). The crux of the two-language problem. But by granting access to Julia’s mechanism for resolving ambiguity (multiple dispatch) the user can hook in where necessary to their own abstractions and tune into the reality of the machine. We are never too far from either end.
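As a small, hypothetical illustration of that hook: a user-defined type can extend an existing generic function, and from then on any code written against that function accepts the new abstraction (the Meters type below is invented for this example):

```julia
# A hypothetical wrapper type, defined only for illustration.
struct Meters
    value::Float64
end

# Extending Base.:+ is the hook: dispatch now routes any addition of
# two Meters values through our method, so generic code using + works.
Base.:+(a::Meters, b::Meters) = Meters(a.value + b.value)

total = Meters(1.5) + Meters(2.0)
println(total.value)   # 3.5
```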


The computer would give the same display with any other calculation. For example, you could find another mathematical method giving you pi with precision down to the last decimal place, but then discover that, with the meaning of this symbol, you suddenly break your consistency and have to perform an operation which shows that, in another O space, the result is completely different.

Another example is Int versus Int64. The meaning of “Int” is more or less philosophical, as it means “what a natural number means for your generation of languages”, and the philosophy follows from whether it wants to be compatible with your computer, your language, or some other sense.

For example, type-limit constants seem constant for the Int type, but across generations of languages they really are not - typemax(Int) seems firm, but typemax(Int64) is a much stronger constant.

I am not sure that’s philosophical in any sense — it is just a technical question.

Surely we can call everything philosophical, but then the term becomes rather vacuous.


Int is just an alias for either Int32 or Int64 depending on how big array indices are on your hardware. Integer is an abstraction which includes all kinds of integer-like types. Not sure what the philosophical issue is here.
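On a 64-bit build this aliasing can be checked directly (assuming 64-bit Julia; on a 32-bit build Int is Int32 instead):

```julia
# On a 64-bit Julia build, Int is literally the same type as Int64,
# so typemax(Int) and typemax(Int64) are one and the same constant.
println(Int === Int64)                   # true on 64-bit builds
println(typemax(Int) == typemax(Int64))  # true on 64-bit builds
println(Int <: Integer)                  # true: Int is a concrete subtype
```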


Kant seems to have said that philosophy cannot be learned, that we can at most learn to philosophize. This comforts all of us who found that learning philosophy in school was useless.
