Proper way of organizing code into subpackages

Hi all,

I am in the process of converting a fairly complicated proprietary simulation model that I wrote in Python to Julia 1.5.3. The code is broken into around 20 different sub packages on the file system, grouped by purpose for organization and reusability. Each sub package also contains a tests folder with the unit tests for the modules inside that sub package. In Python this is very easy: each .py file is a module, each directory is a package containing these modules, all imports are done explicitly (i.e. from foo_dir.foo import stuff at the top of bar.py), and you can configure PyTest to systematically find and execute all unit tests in the entire project folder tree. I have spent a couple of days trying to figure out how to do all of this properly in Julia. A couple of things I have tried:

A. Use include("relative path to foo.jl") directly at the top of bar.jl

Pro:

  • Very easy to use
  • Very easy to understand.
  • Both Juno and VSCode intellisense play well with this method.

Con:

  • If foo.jl moves on the file system, updating the include path is initially simple. However, this quickly becomes a pain if a lot of barN.jl files include foo.jl all over the place. Python actually suffers from the very same problem, but PyCharm tends to refactor this automatically, so it’s not a super big issue.

  • Can cause redefinition of global constants in foo.jl if include("foo.jl") is called multiple times. There is no control over what gets included from foo.jl and what doesn’t, which means name collisions can happen easily. This basically limits the approach to very small projects. This is a deal breaker.

B. Wrap the code in foo.jl in module Foo ... end and export the desired public stuff from foo.jl. Import with using .Foo: x, y or import .Foo: x, y

Pro: AFAIK, none! More on this in the Con section.

Con:

  • This only “looks” more like what Python does, but in order to actually import/using the module it either requires include("foo.jl") just like A (and therefore inherits all of A’s cons) or adding the code’s directory to the load path.

  • Actual names from Foo can live under all kinds of weird prefixes if done without Reexport and the @reexport macro. This feels hacky and, tbh, pretty unintuitive.

C. Build sub packages into actual packages. Run generate foo_dir/foo in Pkg mode and dev foo_dir/foo in bar’s environment. Any sub package (e.g. bar) that wants to use/extend foo can do import Foo.x, Foo.y etc. in its code.
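
For concreteness, the generate/dev workflow looks roughly like this in the Pkg REPL (just a sketch; the directory names are placeholders, not from the actual project):

pkg> generate subpackages/Foo      # creates subpackages/Foo/Project.toml and subpackages/Foo/src/Foo.jl
pkg> activate .                    # activate the parent project's environment
pkg> dev ./subpackages/Foo         # track Foo by path, so edits to it are picked up

julia> import Foo                  # Foo is now loadable from the parent environment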

Pro:

  • This is by far the best-behaved solution and the one I am leaning towards. In practice everything pretty much behaves how I want it to because, AFAIK, it’s kind of like “installing” the Foo package into the parent environment (in dev mode, but whatever).

  • I haven’t tried out setting up all the unit tests for different sub packages yet but I imagine it’s pretty much the same process as setting up the unit tests for just a single package.

Con:

  • Environments everywhere. Every sub package gets its own environment with its own dependencies (i.e. 20 different Manifest.toml and Project.toml files at various locations in the project repository). This also makes me worried about the possible clusterfuck I might have to deal with when some foo.jl inevitably has to move out of, or get merged into, another (new) sub package during development.

  • It’s a pain to generate the proper file system tree for the sub packages. First of all, the source files for each sub package now live under the subpackage_dir/src/ directory instead of at the subpackage root dir. TBH I can live with this. The other problem is that in order to generate the package at the correct file system location you have to be careful about which environment is currently activated. This is quite a bit more thinking, running commands, and swapping environments than I’d like for something that should be very simple.

  • VSCode intellisense seems to play badly with this. I am guessing this might have something to do with the cache its language server generates being keyed on the version number of the package. Since the sub package isn’t really a package being distributed over some registry, its version number won’t be updated for different releases either (and therefore the cache doesn’t get regenerated when, say, foo.jl changes). Again, just a guess. At least Juno seems to do intellisense properly under this method.

It’s entirely possible I have missed something simple and obvious. Is there a recommended way to deal with this?

15 Likes

Yeah, definitely a deal breaker. And, in fact, it’s even worse than you’ve said, because any types you define in foo.jl will be re-defined each time it is included, resulting in incompatible types with the same name. Every file must be included no more than once, period.
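
To make the incompatible-types problem concrete, here is a made-up illustration (file and type names are invented):

# foo.jl defines a type:
struct Point
    x::Float64
end

# bar1.jl
module Bar1
include("foo.jl")   # defines Bar1.Point
end

# bar2.jl
module Bar2
include("foo.jl")   # defines Bar2.Point, a *different* type with the same name
end

# A method written for Bar1.Point will not accept a Bar2.Point, even though
# both definitions came from the same source file.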

This sounds pretty appealing, although I agree it adds some complexity. I think an important question that only you can answer is: is each sub-module truly an independent entity? That is, is it something you imagine working on completely in isolation, managing its own dependencies and perhaps even installing on its own?

If yes, then this makes sense: the sub-module is its own package, therefore it must know what it depends on (and therefore it must have its own folder, its own Project.toml, its own tests, etc).

If no, then perhaps this sub-module is just a logical chunk of some larger project, but not something you’d actually want to install by itself. In that case, I’d propose something like what JuMP and many other projects do, in which there are some sub-modules but they are all included exactly once by the main JuMP.jl file, e.g. https://github.com/jump-dev/JuMP.jl/blob/master/src/JuMP.jl and a sub-module here: https://github.com/jump-dev/JuMP.jl/blob/master/src/Containers/Containers.jl

This avoids all the extra src folders and .toml files, since it treats the sub-modules as parts of a greater whole rather than standalone projects. It should also work better with VSCode’s intellisense.
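
In case it helps, a minimal sketch of that include-exactly-once layout (all names here are invented, not taken from JuMP):

# src/MyProject.jl
module MyProject

include("Containers/Containers.jl")   # defines the sub-module MyProject.Containers
include("solvers.jl")                 # plain code that lives directly in MyProject
using .Containers                     # optionally bring the sub-module's exports into MyProject

end # module

# src/Containers/Containers.jl
module Containers
export Grid
include("grid.jl")   # defines Grid
end # module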

I suspect the latter is closer to what you’ll want–after all, unless you actually had setup.py or requirements.txt files in each of your Python project’s sub-folders, those components are not really independently installable either.

Really the only downside of the latter approach is that it doesn’t provide an automatic way to test only the subset of code in one of those modules. Projects using the structure I proposed above (like JuMP) almost always organize their test folder to match the hierarchy of src, which allows you to test specific chunks by including only whatever subset of those files corresponds to the module of interest. I agree this isn’t as nice as being able to pytest foo.bar, but I have found it to work well enough.
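
A sketch of what that mirrored test layout could look like (file names are hypothetical):

# test/runtests.jl
using Test
@testset "MyProject" begin
    include("Containers/runtests.jl")   # mirrors src/Containers
    include("solvers_tests.jl")
end

# To test just one chunk, include only the corresponding file, e.g.
# julia --project -e 'include("test/Containers/runtests.jl")'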

11 Likes

Tips: include()

  • Do not use include() to “include” a package into your project.
  • Actually, include() loads code from a file. I don’t think Python has such a low-level command. In C/C++, the closest analogue is probably the preprocessor’s #include directive (except that Julia’s include() also evaluates the code as it is read in).
  • include() was meant to load code stored in subfiles (not submodules) - allowing you to break your solution up into multiple files.
  • Only include() files that are directly part of your current package/project.
  • Do not include() files from other “software modules” (projects).

Tips: module

  • module MyMod is not a “software module” as you likely think of it. It is simply a global namespace.
  • You can have more than one module per file, and can have multiple files per module (because they are namespaces, not “software modules”).
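
A small illustration of both points (names are arbitrary):

# one file, two namespaces
module Geometry
struct Point
    x::Float64
    y::Float64
end
end # module Geometry

module Physics
mass(p) = 1.0
end # module Physics

# ... and one namespace assembled from several files:
module BigModule
include("part1.jl")   # whatever these files define ends up in BigModule's namespace
include("part2.jl")
end # module BigModule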

Tips: Package

  • A package is probably what you should think of when you intend on developing a “software module”.

Tips: import MyPkg/using MyPkg

  • Yes, import or using is the thing to do if you import a separate package.
  • When you import MyPkg (or using MyPkg), Julia first checks if it is already loaded.
  • If not already loaded, Julia checks in its package “locations” (Project.toml+LOAD_PATH folders), and loads it.
  • import MyPkg (or using MyPkg) then returns a reference to the package’s associated module (i.e. namespace).
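
In code, the difference looks like this (MyPkg and solve are placeholder names):

import MyPkg          # loads MyPkg if needed; its names are used qualified, e.g. MyPkg.solve(x)
using MyPkg           # same loading step, but MyPkg's exported names (e.g. solve) also become usable unqualified
import MyPkg: solve   # loads MyPkg and binds only the name solve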
16 Likes

Tips: directory structure/LOAD_PATH

If you are starting such a large project (approx. 20 modules), I suggest taking advantage of LOAD_PATH instead of using pkg> add or pkg> dev. It is the easiest way to learn Julia development, and you can migrate to the pkg> system when you are closer to publishing your solution to Julia’s General registry (if you so desire).

Quick dev: 1 file per “software module”:

If you are migrating from Python, you might want to try the 1 file per “software module” solution:

my_package_repo
├── Module1.jl
├── Module2.jl
├── Module3.jl
...
├── ModuleN.jl

NOTE:

  • I myself have never tried this “single file” module solution, but it is supposed to work.
  • Don’t forget that the code from each Modulei.jl file must be encased in a module Modulei ... end block.
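
For instance, Module1.jl could look like this (the contents are placeholders):

# my_package_repo/Module1.jl
module Module1

export greet

greet() = println("hello from Module1")

end # module

With my_package_repo on LOAD_PATH (see below), using Module1 will then find and load this file.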

Add my_package_repo to LOAD_PATH

You just have to make sure you first add:

push!(LOAD_PATH, "/path/to/my_package_repo")

You can set this variable in your ~/.julia/config/startup.jl or set it from a shell environment variable:
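For example, a startup.jl entry could be (the path is a placeholder):

# ~/.julia/config/startup.jl
push!(LOAD_PATH, "/path/to/my_package_repo")

Alternatively, the JULIA_LOAD_PATH environment variable can be set before launching Julia (e.g. export JULIA_LOAD_PATH="/path/to/my_package_repo:" in a Unix shell; if I remember the docs correctly, the trailing separator leaves an empty entry that Julia expands to its default load path).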

If you do this, you should no longer use pkg> add Module1. LOAD_PATH takes care of making it available to your project.

Typical package directory structure:

But to be able to migrate to the pkg> system more smoothly, I suggest you use the proper Julia directory structure from the start. Among other benefits, this structure (optionally) includes test/ folders for CI tests, and more clearly groups together package solutions that are split across multiple files.

my_package_repo
├── Module1
│   ├── Project.toml (optional)
│   ├── src
│   │   ├── Module1.jl
│   │   ├── Module1SubFile1.jl
│   │   └── Module1SubFile2.jl
│   └── test
│       └── runtests.jl
├── Module2
│   └── src
│       └── Module2.jl
├── Module3
│   └── src
│       └── Module3.jl
...
└── ModuleN
    └── src
        └── ModuleN.jl

Again, I re-iterate:

  • Don’t forget that the code from each Modulei.jl file must be encased in a module Modulei ... end block.
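
For example, Module1/src/Module1.jl would look something like this (the contents are placeholders):

# my_package_repo/Module1/src/Module1.jl
module Module1

include("Module1SubFile1.jl")   # the subfiles' code becomes part of Module1
include("Module1SubFile2.jl")

end # module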
7 Likes

Understanding modules

Here are a few threads you might want to read:

Multi-package repositories

If you truly are going to write multi-package repositories, you might want to look at the following:

I would guide you on how to use them, but I don’t find the current solution practical (maybe I just don’t understand well enough how to use it).

4 Likes

Thanks @rdeits @MA_Laforge for your input! I will do a bit more reading and try some things out.

Actually, the good news is that as of literally today, you can now get Python-style from syntax in Julia. (It’s even a little bit better, as it doesn’t have a couple of the edge-case warts Python has.) Then you can organise things pretty much like you would in Python, without the various complexities described above.

The package for doing so is FromFile.jl. (It hasn’t been registered yet, so look up how to install from GitHub, but it’s fully tested and as far as we know bug-free!) If you’re curious, FromFile.jl is a draft implementation for Issue 4600, where there is an ongoing discussion about how to solve the exact issue you’re describing.
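
Roughly, usage looks like this (a sketch from memory of the README; the file and function names are made up, so double-check against the repository):

# bar.jl
using FromFile
@from "foo.jl" import myfunc   # loads foo.jl (once) and imports myfunc from it

myfunc()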

5 Likes

There’s not much point in rewriting a Python package into Julia if you’re just going to do a word-for-word translation. I would step back to see the bigger picture. Ask yourself these questions:

  • Are there similarities among the sub-packages that can be codified into a generic interface?
  • Are there common operations (methods) that appear across different sub-packages?

The single most important feature of the Julia language is multiple dispatch. Generic programming and multiple dispatch are the core of the language. Dividing your code into dozens of modules works against this. Try to find generic functions that make sense for your domain and then overload them as needed. Whereas large Python packages contain complicated trees of nested modules, Julia packages are relatively flat in order to leverage generic functions and multiple dispatch.

Suppose your Python code contains methods like this:

pkg.module1.module2.classA.foobar()
pkg.module3.module4.classB.foobar()

Please do not create doubly nested modules in your Julia translation of this code. Do this instead:

struct A
    # stuff
end

struct B
    # stuff
end

function foobar(a::A)
    # ...
end

function foobar(b::B)
    # ...
end

Note that I mean that the above chunk of Julia code should be at the top-level. In other words, the only module that it is inside of is the module for your overall package.

8 Likes

Thanks for the advice, but I am already pretty aware of the mechanical differences between the two languages in that regard. The point of the rewrite is runtime performance, and the rewrite is certainly not a word-for-word translation, precisely because of multiple dispatch and the “data only” OO model in Julia. A significant chunk has already been rewritten (in a Julian way) and benchmarked against its Python counterpart. However, I do want to preserve the overall code organization structure already adopted by the Python code, because the structure represents (nested) logical components of the problem it is trying to solve. After some experimentation I think it is possible - more on this later.

1 Like

Here’s the solution I tested and will probably adopt. Basically it’s “add all possible paths in this project to LOAD_PATH and modularize all necessary entry points”.

  1. Create a setpaths.jl that contains the following:
if !isdefined(Main, :__setpaths__)
    # Guard so this file can safely be included more than once.
    global const __setpaths__ = true

    # Collect every directory under `root` (including `root` itself) as an absolute path.
    function getpaths(root::String=@__DIR__)
        return collect(abspath(entry[1]) for entry in walkdir(root))
    end

    # Append each directory to LOAD_PATH so `using`/`import` can find modules there.
    function setpaths(paths::Vector{String})
        for path in paths
            push!(LOAD_PATH, path)
        end
    end

    paths = getpaths("$(your project root here)")   # substitute your project root
    setpaths(paths)
end

The code is guarded so it can safely be included multiple times. Include this file somewhere at the top of any entry-point script.

  2. Basically just wrap each module entry point’s code in module ... end and use the modules directly in other code like an installed package module (without the . prefix) wherever you like. I also found out that under this method, a) the module entry point must be stored in a .jl file named after the module. For example, if the entry point code is module Foo ... end then the file must be named Foo.jl. b) An entry point can only contain one module definition. Not sure why.
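
To make step 2 concrete, a sketch with made-up names:

# <project>/claims/Foo.jl  -- the file name matches the module name
module Foo
export process_claim
process_claim(x) = x
end # module

# anywhere else in the project, after setpaths.jl has run:
using Foo            # found via LOAD_PATH; no dot prefix, no include
process_claim(42)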

Pro:

  • (Almost) all of the benefits of the sub-packages-via-generate/dev technique, with none of the complexity.

Con:

  • Modules are all “flat” in the global namespace, so if you have a lot of nested structure you can become lost with no idea where to find the code. Then again, even if you go with the sub-package technique you will have the same issue.

  • Breaks all intellisense/autocomplete known to man when not developing within a module. Combined with the above it can be kinda devastating.

Can you give us a hint of the domain of your application? With more context, it’s possible that we might be able to recommend an alternative way of organizing your code.

1 Like

I can’t really change the organization structure too drastically, because then the junior programmers who are not as well versed in the design of the solution will become entirely lost when they can no longer use the more familiar Python code base as a reference. But let me give a very vague example demonstrating how the structure came to be.

Suppose you are trying to solve a problem. Let’s say part of the problem involves finding out how much an insurance company will pay for a particular claim. There are different types of claims. Regardless of the specific type of claim, they all must follow a flow of procedures. However, the specifics of the procedures can be different. Yet every procedure still shares certain similarities.

This leads to the following natural nested structure:

Project
├── some_file_for_project.jl
└── Claim
    ├── some_file_for_claim.jl
    └── Procedure
        └── some_file_for_procedure.jl
Note that it makes sense for Procedure to live under Claim because it is about procedures specifically related to the Claim and nothing else. Also, it makes it much easier for the person reasoning about the solution to organize the code because it has a close correspondence to the logical structure of the problem.
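
For what it’s worth, one way this tree could map onto Julia (using the include-exactly-once pattern suggested earlier; whether Claim and Procedure should be modules at all is still the open question) would be:

# some_file_for_project.jl
module Project
include("Claim/some_file_for_claim.jl")       # defines Project.Claim
end

# Claim/some_file_for_claim.jl
module Claim
include("Procedure/some_file_for_procedure.jl")  # defines Project.Claim.Procedure
end

# Claim/Procedure/some_file_for_procedure.jl
module Procedure
# procedures specific to Claim
end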

Yes, see https://github.com/julia-vscode/julia-vscode/issues/307

Ok, note that Julia packages can and do use sub-directories to organize the code, but usually without introducing modules and sub-modules. See, for example, the following:

Of course there are some packages that do introduce a few sub-modules (though not one for every single file). It’s worth taking a look at what they do:

5 Likes

Thanks for the list! I am going to take a closer look at ForneyLab.jl since this one seems to have the deepest nesting.

2 Likes

I actually did try FromFile already. I ran into an issue when I was trying out some weird imports from higher up in the filesystem hierarchy. I will see if I can reproduce the problem and figure out whether it’s me or an actual issue.

1 Like

Please do let us know! If there is a problem then we should like to fix it.

Is the approach of loading things from files as opposed to packages and modules possibly going in the wrong direction? How do we know we can trust stuff that can be pulled out of files? The file may be a script for all we know.

If we’re trying to access a file’s contents, we should already know whether or not it is a script. (And I’d note that the same is already true of the current approach when accessing a file’s contents via include.)

Yeah rest assured I am not advocating for the ability to import from any arbitrary files, just user source files under the project root folder. If the user can’t figure out which file he should/shouldn’t import despite having direct access or even having written the code himself then I’d say he has no business fooling with the code in the first place.

Also, I am pretty sure FromFile does not deny the user the ability to manually specify modules as usual (and subsequently import said module using @from). That is probably the proper way to use it.