Parametric Markdown comments?


Following the recommendations given in Documentation · The Julia Language, I’d like the in-line Markdown documentation of each function to also automatically include information on

  • the computer brand, type and model
  • the operating system version
  • the Julia version

currently in use.

Is there a way to retrieve this information programmatically and insert it in the documentation, so that I do not have to update each and every function when I change computers, upgrade the OS, or update Julia?

Ideally, I’d like to be able to use some Markdown syntax (like an environment variable) to fetch such information at the time of last change, or publication (e.g., on GitHub), to provide metadata on the computing environment used during the most recent update of the functions.

Thanks in advance, Michel.


Hey @mmv

Interesting question! I tried my hand at a very quick proof of concept which I think gets close to what you are looking for. Here is how it works:

Suppose you have a package you are creating. As a first step, you can create some package specific global variables as follows:

module MyGreatPackage

using Suppressor

# Capture the text that versioninfo() prints and pick out the lines we need
captured_version_info = @capture_out versioninfo()

global julia_version = ""
global os_version = ""

for l in split(captured_version_info, "\n")
    if contains(l, "Julia Version")
        global julia_version = strip(l)
    elseif contains(l, "OS:")
        global os_version = strip(l)
    end
end

# Code that goes into your package

end # module

In my case, the two global variables now contain the following:

julia> julia_version
"Julia Version 1.8.0"

julia> os_version
"OS: Linux (x86_64-linux-gnu)"
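As an aside, the same two strings can be assembled without capturing printed output at all, since Base exposes this information directly; a minimal sketch (the formatting differs slightly from what versioninfo() prints):

```julia
# Base exports the running Julia version as VERSION (a VersionNumber), and
# the Sys module describes the host kernel and architecture, so no output
# capture is needed.
julia_version = "Julia Version " * string(VERSION)
os_version = "OS: " * string(Sys.KERNEL) * " (" * Sys.MACHINE * ")"

println(julia_version)  # e.g. "Julia Version 1.8.0"
println(os_version)     # e.g. "OS: Linux (x86_64-linux-gnu)"
```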

Going from there, say I want to incorporate this into the docstring of a function within my package. What I could do is the following:


"""
    animal_sound(animal)

Print the sound an animal makes!

# Function Metadata

- $julia_version
- $os_version
"""
function animal_sound(animal)
    if animal == "Cow"
        println("Moo!")
    elseif animal == "Cat"
        println("Meow!")
    end
end

Which produces a rendered docstring in the REPL help mode (?animal_sound), with the captured Julia and OS version strings listed under “Function Metadata”.

You could extend this to capture more information as needed. Using this approach, whenever the package is used on a computer, this information can be reported per function; and if you deploy the docs with Documenter.jl, information about the deployment’s build system gets built into the website documentation automatically.

Admittedly, the above is a bit contrived, but I think it could be the start of an answer to your question and should be somewhat future-proof. Nonetheless, hope it helps!

Also, out of curiosity, for what purpose are you trying to do this? For testing purposes?


~ tcp :deciduous_tree:


@TheCedarPrince: Sorry for the delay in answering. That solution works fine: thanks a lot!

The reason I am interested in this feature is because I’m currently using a MacBook Pro (Intel) but will probably move to a newer M-based machine sooner or later. I also see that the Julia language itself is still evolving. So I’d like to keep a record of which computing environment is used when a function was last developed, modified, tested or deployed.

Ideally, it would be nice to keep track of all such environments where the function worked as intended, but that would become much more complex, as such a record should (1) only be implemented when the function is fully tested in each environment, and (2) be added to the historical record only if it differs from previous entries… Perhaps a more straightforward approach would be to set up a process where the function is systematically tested on different machines, operating systems and Julia versions simultaneously to generate the “official” documentation, but that could only be implemented in an institutional context with substantial resources…

Thanks again for this neat solution.


I believe there is a better way to do what you have in mind. Here is one option that works fairly well for most Julia projects:

Set up a “continuous integration” environment for your project. If you host it on GitHub, that is particularly easy: you just add a configuration file to the repository; GitHub detects its presence and runs tests or benchmarks as prescribed by that file. These tests and/or benchmarks are executed for every single change to the code (e.g. every git push to the repository), for any version of Julia you want, for most operating systems you would probably care about, and for most common CPU architectures.

Here is an example of such test config file:
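For a Julia package, a minimal sketch using the standard julia-actions workflow steps might look like this (the version and OS matrix values below are illustrative and can be adjusted):

```yaml
# .github/workflows/CI.yml -- illustrative; adjust versions and platforms.
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        julia-version: ['1.6', '1.8']
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/checkout@v3
      - uses: julia-actions/setup-julia@v1
        with:
          version: ${{ matrix.julia-version }}
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1
```

Every push then runs the package’s test suite once per combination in the matrix.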

This might seem like a lot of boilerplate; however:

  • it is probably less than the custom solution you have in mind
  • it is “standard”, so it is easy to get help from this community, and it will keep working in the future
  • it uses the “continuous integration” servers provided by GitHub, so you get to test hardware configurations you do not own

You would probably also like to employ the git blame command (also available as a pretty webpage on GitHub for each code file) to see when something in your code was last changed.

If you want to get fancier, you can even keep records of the benchmark results. For instance, I keep a separate repository, purely of benchmark records, that shows how the library has evolved across Julia versions: QuantumClifford benchmarks. These are generated by a fairly simple plotting script that reads the benchmark logs and uses Makie and DataFrames to make the plots. However, this is just a custom solution, not something as widespread as the tools covered above.


@Krastanov: Wow! That is indeed a really professional approach… I’ll keep that in mind and implement it once my IDL codes to process NASA remote sensing data have been successfully converted to Julia on my laptop: there is a steep learning curve ahead…

Thanks a lot for alerting me about those public resources.


Ah! Thanks for giving more insight into your rationale! I was a bit perplexed by your approach: although fully legitimate, it seemed quite challenging to carry out manually. To add to @Krastanov 's excellent answer on continuous integration (CI), which I use for a variety of packages and have had great success teaching to my student assistants, one thing that may make things even easier for Julia-based projects (i.e. analyses, visualizations, etc., not full packages that you would need to develop) is the notion of the Project.toml and Manifest.toml.

These not only preserve the state of a computing environment (to borrow your phrasing) but also let you specify quite strictly where and how your project should run on someone else’s machine. For projects specifically, I suggest these fantastic packages:

  • DrWatson.jl - package to help with structuring and managing a Julia project
  • PackageCompatUI.jl - interactive tool to quickly build compatibility requirements for Julia packages and/or projects
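To illustrate the reproducibility point: on another machine, activating a project directory and instantiating it installs exactly the dependency versions recorded in its Manifest.toml. A minimal sketch using the Pkg standard library (the temporary directory here stands in for a real project checkout):

```julia
using Pkg

# Stand-in for a checked-out project directory containing
# Project.toml and Manifest.toml.
project_dir = mktempdir()

Pkg.activate(project_dir)  # make it the active environment
Pkg.instantiate()          # install the exact versions pinned in Manifest.toml

@show Base.active_project()  # path of the now-active Project.toml
```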

Finally, here is an example of a Project.toml I have set up for a specific Julia package of mine:

name = "OMOPCDMCohortCreator"
uuid = "f525a15e-a73f-4eef-870f-f901257eae22"
authors = ["Jacob Zelko <>"]
version = "0.2.1"

[deps]
DBInterface = "a10d1c49-ce27-4219-8d33-6db1a4562965"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
FunSQL = "cf6cc811-59f4-4a10-b258-a8547a8f6407"
TimeZones = "f269a46b-ccf7-5d73-abea-4c690281aa53"

[compat]
DBInterface = "2.5"
DataFrames = "1.3"
FunSQL = "0.10"
TimeZones = "1.9"
julia = "1.6"

The [deps] section is generated automatically, while the [compat] section and the top section before [deps] are written by me. Using this configuration, I have been able to develop a suite of dozens of tests that run via CI, not only on different operating systems but also on different versions of Julia on each of them.
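Tying this back to the original question, the TOML standard library can read these files programmatically, e.g. to surface a package’s version or compat bounds in generated documentation; a sketch using an abbreviated inline fragment of the file above:

```julia
# Parse Project.toml metadata with the TOML stdlib. A real script would use
# TOML.parsefile("Project.toml"); an inline fragment keeps this self-contained.
using TOML

project = TOML.parse("""
name = "OMOPCDMCohortCreator"
version = "0.2.1"

[compat]
julia = "1.6"
""")

println(project["name"], " v", project["version"])      # OMOPCDMCohortCreator v0.2.1
println("requires Julia ", project["compat"]["julia"])  # requires Julia 1.6
```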

I apologize if you know much of this already, but since I saw in another post that you were coming from IDL, I wasn’t sure whether you were already aware of these tools. Hope this helps, and glad to have you in the community @mmv !


Hi @TheCedarPrince. Thanks a lot for this very interesting addition: I did already know about Project.toml and Manifest.toml, but was unaware of the possibility to use those resources in additional ways, and had never heard of the other two packages you mentioned.

I looked at DrWatson’s package documentation, which appears to offer a detailed and systematic way to organize project files. I was surprised, though, by the absence, in the default project structure, of subdirectories for “benchmarks”, “docs” (for the HTML documentation automatically generated with docstrings, although this may be partially compensated by “notebooks” and “papers”), and especially “tests”, which should be a key component of professional software development. I presume those can be added separately.

I’m a member of the MISR Science Team at NASA JPL [] and have indeed used IDL for many years: it’s a convenient environment for rapid development, with excellent graphics capabilities. However, it’s quite an expensive commercial software package, and I suspect modern languages like Julia may be better suited to exploiting multi-threaded software and/or multi-core platforms for parallel computing, cloud computing, etc.

I am still in the very early stages of exploring Julia, but I must say I’m impressed by the extent of resources available beyond the language definition itself to develop, test and document the projects. There is a lot to learn, but I can see the advantage of mastering those tools to ensure the quality and robustness of the codes that are ultimately shared with the scientific community.

Again, thanks for your inspiring inputs. Michel.
