Version 1.0 of the Nim Programming Language released

I think Nim and Julia are in a way mirroring each other, coming from different starting points.

Julia starts as a dynamic language due to its scientific-computing heritage, which requires rapid prototyping and REPL-first experimentation.
However, to solve what is commonly known as the 2-language problem (prototype in Python, Matlab or R, productionize in C, C++ or Fortran), it also provides an efficient JIT (provided types can be inferred). This sometimes pushes Julia code towards specifying types (a more static style) to help the compiler.

Nim, on the other hand, starts as a static language due to its focus on safety. But it also has a focus on expressiveness, which shows in the dynamic-like constructs the language allows.
For instance, Nim has a very strong appeal to game developers. Not unlike scientific computing, game dev also has a 2-language problem: the game needs to be in a static language for hardware reasons, platform support, security/obfuscation, memory use, etc. But games often also need a scripting language for scripting high-level behaviours, or to offer a modding language to modders. This has traditionally been Lua, Lisp or a custom language (it's a rabbit hole on Youtube), but Nim has a unique proposition there:

  • Nim compiler can be used as a library (as soon as this regression is fixed):
    import compiler/[nimeval, llstream]
    
    proc evalString(code: string, moduleName = "script.nim") =
      # Wrap the source string in a stream the compiler understands
      let stream = llStreamOpen(code)
      # Locate the standard library so the interpreted code can use it
      let std = findNimStdLibCompileTime()
      var intr = createInterpreter(moduleName, [std])
      intr.evalScript(stream)
      destroyInterpreter(intr)
      llStreamClose(stream)
    
    let runtimeString = "echo \"Hello World\""
    evalString(runtimeString)
    
  • It also provides hot-code reloading, so you can hot-patch code while a program is running.
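    A minimal sketch of how this might be wired up, assuming the program is compiled with --hotCodeReloading:on (performCodeReload comes from the hotcodereloading module; the loop shape and module layout are illustrative, not the one true way):
    
    import std/[hotcodereloading, os]
    
    proc update() =
      # In a real program the reloadable logic lives in imported modules;
      # edit and recompile them while the process keeps running.
      echo "tick"
    
    while true:
      update()
      performCodeReload() # apply any newly recompiled modules
      sleep(1000)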

In short:

  • Julia starts within a strongly dynamic domain and wants to allow the speed of static languages when it matters without their ergonomic flaws.
  • Nim targets static domains where strong memory, speed and dependency-free guarantees are required and still wants to allow the flexibility of dynamic languages.
11 Likes

I haven't used Nim for a serious project so far, but playing with it, it "feels" extremely user friendly, which is impressive for a statically compiled language. From what I understand, a key practical difference (concerning the discussion above on multiple dispatch) is that even though there is some type inference in Nim, the return type of any proc must be explicitly provided.

Some of the multiple dispatch in Julia would simply not work under these premises. In many functions the type of the input determines the type of the output (which means, if the compiler is smart enough, it will run fast), but it's very hard for the developer to know this mapping explicitly. For example, if my function takes an AbstractArray and I call similar inside the function body (which often, but not always, preserves the type), it's very hard to write down the type of AbstractArray that I get. I suspect some of the discussion above was also along these lines:

Let’s say one makes the return type optional, and lets the compiler figure it out, giving error if the compiler can’t figure it out. Then changing the inference heuristics in the compiler could break code.

Indeed I'm curious to see how things develop as one adds more and more array types optimized for different things in Nim.

2 Likes

Actually it doesn't: you can use a generic return type like SomeInteger, SomeNumber or T, or even auto, which leaves no constraint on the return type.
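A quick sketch of those options (the procs are invented for illustration):

proc double[T: SomeNumber](x: T): T = x + x # input and output tied together by a typeclass
proc halve(x: float): auto = x / 2.0        # return type left entirely to inference

echo double(3)   # 6   (T = int)
echo double(2.5) # 5.0 (T = float)
echo halve(9.0)  # 4.5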
For example, one of the early issues I had when creating my tensor library for Nim was how to iterate over arbitrarily nested arrays (length known at compile time) or sequences (dynamic size) like [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]].
I could tell the compiler the depth of the nesting so that it would retrieve the type of the elements at that depth, but that wouldn't be very ergonomic. So instead I let it recursively call the iterator until it can't, and deduce the type from there:

iterator flatIter*[T](s: openarray[T]): auto {.noSideEffect.} =
  ## Inline iterator on any-depth seq or array
  ## Returns values in order
  for item in s:
    when item is array|seq: # This is a compile-time "if" and doesn't exist at runtime
      for subitem in flatIter(item):
        yield subitem
    else:
      yield item
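
A small usage sketch (the nested literal is just for illustration):

let nested = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]
for x in flatIter(nested):
  echo x # prints 1 through 12, one value per line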
3 Likes

I am not a Nim user, so correct me if I make a mistake.
I think SomeInteger and SomeNumber are not generic types; they are actually sum types (generic types usually refer to types parameterized by type variables).
And what you say, "auto which leaves no constraint on the return type", is not precise. It just tells the compiler to do the type inference for the programmer, which implies the program must still be well-typed. What would be more appropriate here is any.
I think piever (who you replied to) wanted to express that, no matter what annotation (auto, SomeFloat, etc.) you use, you give the compiler more or less some information, while in Julia annotating return types is not a must.

What’s to stop you declaring all function return types as auto? Why not just make that the default?

They are generics with constraints (we call them typeclasses). Those are not materialized at runtime and only exist in the compiler.

Sum types are called object variants and compile down to C tagged unions (with additional compile-time and runtime checks to ensure you don’t access the wrong field).
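
For example, a minimal object-variant sketch (the Shape type is invented for illustration):

type
  ShapeKind = enum
    skCircle, skRect

  Shape = object
    case kind: ShapeKind # the discriminator becomes the tag of the C union
    of skCircle:
      radius: float
    of skRect:
      width, height: float

proc area(s: Shape): float =
  case s.kind
  of skCircle: 3.14159 * s.radius * s.radius
  of skRect: s.width * s.height

let c = Shape(kind: skCircle, radius: 1.0)
echo area(c) # 3.14159
# Reading c.width here would raise a field-access defect at runtime,
# one of the checks mentioned above.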

Yes, auto leaves the inference to the compiler as described here. That said, a sound type system is supposed to prevent logic bugs, so if your program doesn't compile, either it is buggy or the compiler is buggy.

Actually, Nim is also able to model complex relationships similar to Haskell's monadic constructs; an example is Emmy: GitHub - andreaferretti/emmy

type
  AdditiveMonoid* = concept x, y, type T
    x + y is T
    zero(T) is T
  AdditiveGroup* = concept x, y, type T
    T is AdditiveMonoid
    -x is T
    x - y is T
  MultiplicativeMonoid* = concept x, y, type T
    x * y is T
    id(T) is T
  MultiplicativeGroup* = concept x, y, type T
    T is MultiplicativeMonoid
    x / y is T
  Ring* = concept type T
    T is AdditiveGroup
    T is MultiplicativeMonoid
  EuclideanRing* = concept x, y, type T
    T is Ring
    x div y is T
    x mod y is T
  Field* = concept type T
    T is Ring
    T is MultiplicativeGroup
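
As a rough usage sketch (the zero overload for int and the total proc are my own additions, for illustration), such a concept can then serve as a generic constraint:

proc zero(T: typedesc[int]): int = 0 # int now satisfies `zero(T) is T`

proc total[T: AdditiveMonoid](xs: seq[T]): T =
  result = zero(T)
  for x in xs:
    result = result + x

echo total(@[1, 2, 3]) # 6: int fulfills the AdditiveMonoid concept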

Regarding Any, Nim actually has that, and it also works at runtime. But it was only used for the multimethods and channels AFAIK, so it's being deprecated to simplify the runtime: std/typeinfo

Nothing. Actually, in the past, input parameter types were completely optional as well, but that was removed. I guess it caused (perf?) issues for inference.

The standard library prefers explicit types; they act as documentation and as a guarantee/safety measure.

I still don't understand. Are these "typeclasses" the same as those in Haskell? When I see SomeFloat in Nim's documentation:

SomeFloat = float|float32|float64

That's why I think they are sum types. And also when I see:

# Vector copy
proc cublas_copy*[T: SomeFloat](
  n: int; x: ptr T; incx: int;
  y: ptr T; incy: int) {.inline.}=

  check cublasSetStream(cublasHandle0, cudaStream0)

  when T is float32:
    check cublasScopy(cublasHandle0, n.cint, x, incx.cint, y, incy.cint)
  elif T is float64:
    check cublasDcopy(cublasHandle0, n.cint, x, incx.cint, y, incy.cint)

You declare T to be the "typeclass" SomeFloat, and do some conditional branching according to the type of T (and I think those branches get removed during compilation, because you say they only exist at compile time). But shouldn't a typeclass apply to any type that implements all the methods of the typeclass? (For example, in order to become an instance of the typeclass Functor in Haskell, we just need to implement the map function.) In the code above, it seems that T can only be float, float32 and float64, which is not open.

No, it's not the same as a typeclass in Haskell. Haskell typeclasses are a better match for Nim concepts (for example the AdditiveMonoid).

Functors from the concepts docs:

import sugar, typetraits

type
  Functor[A] = concept f
    type MatchedGenericType = genericHead(f.type)
      # `f` will be a value of a type such as `Option[T]`
      # `MatchedGenericType` will become the `Option` type
    
    f.val is A
      # The Functor should provide a way to obtain
      # a value stored inside it
    
    type T = auto
    map(f, A -> T) is MatchedGenericType[T]
      # And it should provide a way to map one instance of
      # the Functor to an instance of a different type, given
      # a suitable `map` operation for the enclosed values

import options
echo Option[int] is Functor # prints true

So I think a typeclass in Nim is actually still a sum type, but one the compiler can know at compile time? Though they can't be materialized (sum types in Julia can't be materialized either), and the programmer can use them for static dispatch (which is exactly what Union in Julia does).

You can see them as such, though in the community we just call them generics as in practice that distinction is really small.

Open:

type DataLayout = enum
  rowMajor
  colMajor

type Matrix[T] = object
  m, n: int
  layout: DataLayout
  data: seq[T]

Closed (sum-type like):

type DataLayout = enum
  rowMajor
  colMajor

type Matrix[T: SomeFloat] = object
  m, n: int
  layout: DataLayout
  data: seq[T]
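
For illustration (my own sketch, not from the post above): with the closed declaration, instantiating with a non-float element type is rejected at compile time.

var a = Matrix[float32](m: 2, n: 2, layout: rowMajor,
                        data: @[1.0'f32, 2.0'f32, 3.0'f32, 4.0'f32])
# var b = Matrix[int](...) # compile-time error: int is not SomeFloat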

No, I use "open" and "closed" to mean "we can add subtypes to a type at any time (especially after defining the type), or we can create many instances of a particular thing". For example, I say Functor is open because I can add instances to this typeclass.
That's why I say it seems that a "typeclass" in Nim is closed. What I mean is that I can't add a subtype (for example, if I have defined float128 and float256 types, I can't make them part of the "typeclass" SomeFloat). I think this is not allowed, because it would break code such as cublas_copy above.
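
A minimal sketch of that limitation (Float128 here is a hypothetical stand-in type):

type Float128 = distinct array[2, uint64] # hypothetical wider float

# SomeFloat cannot be reopened to admit Float128 after the fact;
# the closest option is declaring a new, still-closed typeclass:
type MyFloats = SomeFloat | Float128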

And also, you said

That said, a sound type system is supposed to prevent logic bugs, so if your program doesn't compile, either it is buggy or the compiler is buggy.

The problem is that we first need to ensure that the type system is sound. That's why StefanKarpinski asked you how Nim does type inference and for the concrete specification of Nim's type system. As far as I know, many type systems are not sound at all, even static type systems that were believed for a long time to be sound (for example, Java's and Scala's). The Hindley-Milner type system is delicate and fragile, so basically adding anything interesting to it will break it. Nowadays I think it's nearly impossible to say firmly (and prove) that a complex enough type system (with generic types, typeclasses, subtyping and other fancy things) is sound, and worse, it might not be sound at all.
That's why StefanKarpinski emphasizes that Julia is dynamic: failure of type inference in a static type system leads to rejection of the program even though it is runnable, while in a dynamic type system (at least in Julia) it only affects performance; the program is still legal.

1 Like

I think that’s a difference in audience and philosophy. In Nim if the compiler is not able to type check, it will not compile. Personally, I think it’s a feature.

I think of the type system as this person (hopefully a friend) constantly looking over your shoulder who from time to time says "Look, I don't really know what you are trying to do, but keep up the good work; just a quick note: here you wanted bananas but now I got tomatoes, and I don't want to bend reality too much". :wink:

Obviously, when you are in the research phase, being constantly reminded to take care of your types might disrupt your flow, but Nim is on the low end of type annoyance, and that's something you can learn to work with.

1 Like

I think that’s a difference in audience and philosophy. In Nim if the compiler is not able to type check, it will not compile. Personally, I think it’s a feature.

A beginner in Julia (especially someone not from computer science) will almost always write Julia code that is not optimized and might actually be as slow as R/MATLAB. For this reason, I think it would’ve been great if every variable in Julia needed to be annotated and there was some sort of static type checking.

On the other hand, the dynamic nature (the program runs as long as it's legal, the REPL) is incredibly useful and handy in research.

1 Like

I think that’s a difference in audience and philosophy. In Nim if the compiler is not able to type check, it will not compile. Personally, I think it’s a feature.

That's what StefanKarpinski wants to say. Whether a program can be checked and whether a program is well-typed should be a property of the type system, not of how the compiler implements type inference.
If a type system is unsound but a compiler "manages to do some type checking" in this system, then the implementation is actually wrong and it won't completely satisfy the specification of the type system.
In Julia, types are as important as those in static type systems; researchers also care about types because performance matters. The problem is that it's impossible to completely infer generic types in such an expressive type system. That's exactly why Nim needs concept, typeclass and generate: to constrain types so the compiler gets enough information to do static type inference. But many functions in Julia are so generic that they are not constrained by any interface or concept. The real reason researchers don't annotate types is not entirely that we don't want to be "constantly reminded to take care of your types"; it's that multiple dispatch allows a function to have multiple methods returning different types, so you can't annotate the return type, because any annotation might be wrong (you don't know the return type, since it depends on which method is called). This is much more powerful than a typeclass (or concept), because typeclasses constrain every method to have the same type scheme (for example, every map needs to be (a->b)->[a]->[b], even if a, b can range over all types; a map of type (a->b->c)->[a]->[b]->[c] is not allowed, the latter actually being the function zipWith). Types in Julia are always open, while in Nim you have algebraic types, sum types and typeclasses, which are all closed. Open type systems make it impossible to do static type checking just once, ahead of time. So we have to give up some of the type system's static properties in exchange for its expressiveness.

5 Likes

I used to love this Julia feature, up until the point where I needed to compile my code into a small self-contained binary that I can just execute and expect to work, which I can easily do in Nim while still maintaining very expressive code and very fast compile times. Actually, most of the time Nim's compile time + execution happens to be faster than the first JIT compilation of some Julia code on a function call.

This is the use case that I think currently Julia fails at, while doing very well for anything regarding prototyping and REPL based coding.

5 Likes

I used to love this Julia feature, up until the point where I needed to compile my code into a small self-contained binary that I can just execute and expect to work, which I can easily do in Nim while still maintaining very expressive code and very fast compile times.

No offense. I believe what you say is generally true: Nim is much more dynamic than Haskell (without extensions) and can still get smoothly compiled. I'm not a professional Nim programmer, but I can still see:

  # Materialize the algorithm for float32
  # ----------------------------------------------
  generate foobar:
    proc foobar(a: Tensor[float32], b, c: Tensor[float32]): Tensor[float32]

The signature of foobar is proc foobar(a, b, c: Fn): Fn, so this is exactly the same situation as in Julia: there is no way for the compiler to infer this type (because a, b can potentially be any legal type), and you have to supply a concrete type to force the creation of an instance of this function. In Julia, we can already use precompile to do so.
Having written many words under this post, what I want to point out is that some dynamic behaviours can never be statically analyzed unless we get further information (so keeping a runtime environment, like LLVM or the JVM, is necessary to get that information).

REPL and prototyping are two reasons why Julia should be dynamic, but that's not the whole story. I think (this is my own opinion) the co-creators of Julia intended the type system to be this dynamic because it's a suitable abstraction for scientific problems, and they don't use it for safety checking (at least, that's not so important, and it won't alert you if the compiler encounters an ill-typed program). This is the fundamental difference between Julia and other languages. Static languages lift information to types, so the compiler can use that information to do a lot of static analysis. In order to do such analysis as much as possible, types need to get "complex" so they can hold more info. So I guess that's why there is no concept, typeclass or type family in Julia: these things are actually guarantees of the type system; semantically they can't make the type system more expressive at all, while in Julia (abstract) types are designed to cooperate with multiple dispatch, so expressiveness is important. Maybe outside this field, such as systems programming or game programming, we don't need such generic abstraction, and it's fine to restrict the type system to make it static (for example, devirtualize all multiple dispatches); with this restriction our programs still look expressive and the compiler will be happier to do type inference.

But anyway, making a type system static will still somehow weaken its expressiveness. So although both Julia and Nim have something like generic types, sum types, any, etc., they are not a 1:1 map. It seems that dispatch vs. overloading is only a terminology difference, but supporting dynamic dispatch needs runtime support, while static overloading doesn't add any essential expressive power to the type system (it's actually syntactic sugar, but I admit it's convenient).

An open type system is a game changer: if we want to extend types, we can't avoid it. Many OO languages actually have open types, because they allow the user to extend a type as long as they implement all the virtual methods of the class. But when you generate a binary, this finite type world is actually closed, since the compiler can run through the whole program to gather all the information it needs, so the types appear closed. (However, the class loader in Java is essentially an open-type mechanism, so there is no way to analyze it ahead of time.) I think Julia can do this too, but removing the LLVM and Base dependencies is a little hard (can Java run without the JVM? I think some basic functions, such as IO, still need runtime support, so it's not so easy), and I hope the developers can solve this problem one day.

Many Julia developers benefit from this system and many features are widely used in libraries.
After a long time I have come to realize that being dynamic is not just because we want a REPL or convenience, but also because the expressive type system (generics and unions), multiple dispatch and open types are bound together tightly, which forces the type system to be dynamic. Just as some Julia users in this post like ChrisRackauckas have said, many static languages (with some exceptions, including dependently typed languages) bypass dynamic issues as if they don't exist. Making types closed makes multiple dispatch less useful, because we can always pattern-match (or use a "typeclass" in Nim to do some kind of specialization at compile time), and then the language becomes more predictable. And without multiple dispatch (remember that Julia's multiple dispatch can return different types), open types would become something like typeclasses in Haskell (we need to ensure they can be applied to some function), or something extremely dynamic like Python's types (because you have to do if/elseif to specialize by hand). So the different parts reflect and correspond to each other; they are not placed together randomly. This also makes compilation hard, but to get the full power of the whole type system we have to give that up (and I think Julia was initially not designed to be compiled). Just like, in order to gain the full power of macros in Lisp, we have to write lots of parentheses everywhere.

4 Likes

I think as long as your functions are type stable, you don't need to annotate types to get optimized code. Of course, depending on his/her background, a beginner may be prone to writing overly dynamic code.

1 Like

Performance issues aside, I’d take type errors from the compiler before type errors at runtime every day of the week. I’m sort of hoping runtime type errors can be eliminated (perhaps optionally) by the static compiler.

Readability, to say nothing of making sure changes to the implementation don’t break the interface.
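
A minimal sketch of that second point (the mean proc is invented for illustration):

proc mean(xs: seq[float]): float =
  # The explicit `float` return type documents and enforces the interface:
  # if the body were changed to return another type, compilation would fail
  # here instead of breaking every caller that relied on inference.
  var total = 0.0
  for x in xs:
    total += x
  total / xs.len.float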

3 Likes