Why is ArgParse slow?

I really love ArgParse.jl - it has a ton of features, the documentation is great, and it works exactly as advertised. But it’s slow: adding even a couple of arguments adds several seconds to startup time. It’s really amazing that running simple scripts in Julia takes under a second (so long as you’re not using a crazy number of dependencies), but adding ArgParse to the mix is rough.

I wanted to do a bit of testing, so I made a tiny package.
In the bin/ folder you’ll find a couple of test scripts that do the same thing:

$ time julia --project=bin bin/lite.jl foo --opt1 bar -o baz --flag1
Parsed args:
  flag1  =>  true
  arg1  =>  foo
  opt1  =>  bar
  opt2  =>  baz
julia --project=bin bin/lite.jl foo --opt1 bar -o baz --flag1  0.87s user 0.15s system 198% cpu 0.512 total
$ time julia --project=bin bin/argparse.jl foo --opt1 bar -o baz --flag1
Parsed args:
  flag1  =>  true
  arg1  =>  foo
  opt1  =>  bar
  opt2  =>  baz
julia --project=bin bin/argparse.jl foo --opt1 bar -o baz --flag1  3.55s user 0.17s system 115% cpu 3.217 total
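For reference, the ArgParse.jl script being timed is presumably along these lines - this is a hypothetical reconstruction inferred from the parsed output above, not the actual contents of bin/argparse.jl:

```julia
# Sketch of what bin/argparse.jl likely looks like, inferred from the
# output above -- argument names and help strings are assumptions.
using ArgParse

function main()
    s = ArgParseSettings()
    @add_arg_table! s begin
        "arg1"
            help = "a positional argument"
            required = true
        "--opt1"
            help = "an option taking a value"
        "--opt2", "-o"
            help = "another option, with a short form"
        "--flag1"
            help = "a boolean flag"
            action = :store_true
    end

    parsed = parse_args(s)
    println("Parsed args:")
    for (k, v) in parsed
        println("  $k  =>  $v")
    end
end

main()
```

(On older ArgParse versions the macro is spelled @add_arg_table, without the trailing bang.)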

It’s not code-loading time, because adding using ArgParse to the lite.jl script doesn’t affect timing much. Could it be something about the use of macros? I tried wading through the code from ArgParse.jl and got bogged down in the macro stuff.

ArgParse has a lot more features, but it seems unlikely that providing the ability to check types, print useful help messages and have default values could account for the difference in this simple script. That said, I’m wary of putting much more effort in before understanding what’s causing the slow down.

Note: just before posting this, I noticed @kevin.squire made ArgParse2 for largely the same reasons. I haven’t tested it out, but based on the README it looks like load time was part of the motivation.


Sounds like you’re just looking at compilation time?

Hence, more to compile.

The compilation start-up time is why Julia isn’t currently too useful for little scripts that launch Julia, do some tiny calculation, and then exit. You either want to keep Julia running (e.g. in a long interactive session where you do lots of little calculations), or use Julia for big calculations where startup compilation time doesn’t matter.

In the future, as Julia gains the ability to cache more compiled code (or compile less code — i.e. run in an interpreter mode for short calculations), this issue will go away, similar to the “time to first plot” issue.


FYI, I created a command line interface framework for Julia: GitHub - tkf/JuliaCLI.jl I haven’t registered it yet but it already works well for me.

The idea is to have a backend server with a bunch of Julia worker processes. The CLI frontend simply connects to one of the backends and runs some code. The first invocation is slow as usual, but from the second time on it’s very fast.

Here is an example of using JuliaFormatter.jl and ArgParse.jl: jlfmt/jlfmt.jl at master · tkf/jlfmt · GitHub. You can also have an instantaneous REPL startup.


I looked into this (a tiny bit) a while ago: use @nospecialize to help with compile time by KristofferC · Pull Request #76 · carlobaldassi/ArgParse.jl · GitHub.

I think that the code handed to LLVM just takes a long time for LLVM to optimize.
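For context, @nospecialize tells the compiler not to generate a separate specialization of a method for each concrete argument type, which reduces the amount of code handed to LLVM. A generic sketch of the technique (illustrative only, not code from ArgParse.jl or the PR above):

```julia
# Without @nospecialize, Julia compiles a fresh specialization of this
# method for every concrete type ever passed as `default`:
function describe_arg(name::String, default)
    return "argument $name (default = $default)"
end

# With @nospecialize, a single compiled method handles all `default`
# types, trading a little runtime dispatch for much less compilation:
function describe_arg_nospec(name::String, @nospecialize(default))
    return "argument $name (default = $default)"
end
```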


But even when many of those features aren’t used, there’s a huge slow-down. It seems to happen regardless of what goes into the @add_arg_table macro.

Yeah, I know all of this - the use-case is ultimately for longer running scripts, but especially when developing and running test inputs (or just trying to check that the argument parsing is working) or if a user just runs julia my_script.jl --help, taking 3-5 sec for each invocation is a real pain.

This is a super interesting idea, thanks! I’ll take a look.

Awesome, thanks for chiming in! Does this seem to you like something that will be intrinsic to any full-featured argument parsing library, or do you think there’s something about the design of that package in particular? I see your PR got merged a while ago, but it’s still pretty slow. I’m wondering if it’s worth continuing to add features to ArgParseLite.jl (or more likely, contribute to ArgParse2.jl), or if getting to feature parity is likely to lead to the same slow-down.

I think it is just design choices of that package in particular. It uses a lot of metaprogramming, and the traces in the profile pointed towards NamedTuples, so it might create many specializations based on the values of the arguments. Tightening up some types - using String instead of AbstractString everywhere - might help as well.
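To illustrate the concrete-vs-abstract type point (a generic sketch, not actual ArgParse2.jl code): a field typed as an abstract type forces the compiler to handle any subtype at every access, while a concrete field type lets it generate one fully-typed code path.

```julia
# Abstractly-typed field: accesses are not fully inferable, so more
# code gets generated (and compiled) around every use of `name`.
struct ArgSpecAbstract
    name::AbstractString
end

# Concretely-typed field: the compiler knows the exact layout and can
# emit a single, tight code path.
struct ArgSpecConcrete
    name::String
end
```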


Good to know. I guess I’ll keep going then :)

I just added an example using ArgParse2.jl:

$ time julia --project=bin bin/argparse2.jl foo --opt1 bar -o baz --flag1
Parsed args:
  arg1  =>  foo
  opt1  =>  bar
  opt2  =>  baz
  flag1  =>  true
julia --project=bin bin/argparse2.jl foo --opt1 bar -o baz --flag1  2.39s user 0.19s system 124% cpu 2.065 total

Yep, that was part of the motivation. As you can see from your example, it is a little faster, but it was also unclear to me whether any advantage would remain after adding features.

That said, I just did a little bit of timing optimization, which cut timing on your example above by ~25-30%. I’d still like it to be faster, but it seems promising, so I’m going to continue working on it, and I’ll probably register it soon. Contributions are very welcome.

Cheers,
Kevin


For this type of program, it can be useful to run julia with the -O0 flag to reduce the initial wait. This mainly helps simple scripts, since lowering the optimization level could make more complex programs run slower.
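For example (the script path here is just the one from the timings above; actual timings will vary):

```shell
# Lower the LLVM optimization level for faster startup:
$ julia -O0 --project=bin bin/argparse.jl foo --opt1 bar -o baz --flag1

# Going further, --compile=min mostly interprets code rather than
# compiling it, which can help even more for short-lived scripts:
$ julia -O0 --compile=min --project=bin bin/argparse.jl foo --opt1 bar -o baz --flag1
```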