Need help wrapping cuOpt in julia

NVIDIA recently open-sourced their cuOpt solver suite. The source code is mostly C++, and it has a C API.

I’m looking into wrapping that in Julia and setting up a JuMP/MOI interface so we can use cuOpt through JuMP. I have never wrapped a C library before (I imagine it will take a combination of Clang.jl, BinaryBuilder, and a `_jll` package), let alone one that needs a GPU to run.

If anyone has skills related to this and would be open to help me, I’d very much appreciate it :folded_hands::folded_hands:
I have access to a GPU machine and pretty much anything but Docker, and I’m happy to do the heavy lifting; what I really need is guidance on the overall wrapping / building process.

13 Likes

I don’t know the first thing about C or C++ but I just want to say that I wholeheartedly support this initiative! Good luck, keep us posted :folded_hands:

4 Likes

First step is to be able to install the solver in Julia.

The place to start is to look at GitHub - JuliaPackaging/Yggdrasil: Collection of builder repositories for BinaryBuilder.jl

As an example, here’s the script that builds SCS with GPU support:

The BinaryBuilder docs have some tips and tricks for building: Home · BinaryBuilder.jl
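To give a concrete idea of what a Yggdrasil recipe involves, here is a minimal sketch of what a hypothetical `build_tarballs.jl` for a `cuOpt_jll` might look like. Everything here is illustrative: the version, commit hash, library name, and CUDA dependency are placeholder assumptions, not a working build.

```julia
# Hypothetical Yggdrasil recipe sketch for cuOpt_jll.
# Names, version, commit hash, and dependencies are placeholders.
using BinaryBuilder

name = "cuOpt"
version = v"25.5.0"

# Pull the sources from NVIDIA's repository (commit hash is a placeholder).
sources = [
    GitSource("https://github.com/NVIDIA/cuopt.git",
              "0000000000000000000000000000000000000000"),
]

# Build script executed inside the BinaryBuilder cross-compilation sandbox.
script = raw"""
cd ${WORKSPACE}/srcdir/cuopt
cmake -B build \
    -DCMAKE_INSTALL_PREFIX=${prefix} \
    -DCMAKE_TOOLCHAIN_FILE=${CMAKE_TARGET_TOOLCHAIN} \
    -DCMAKE_BUILD_TYPE=Release
cmake --build build --parallel ${nproc}
cmake --install build
"""

# GPU solvers are typically built only for x86_64 Linux with glibc.
platforms = [Platform("x86_64", "linux"; libc = "glibc")]

# The C API shared library the Julia wrapper would dlopen (name is a guess).
products = [
    LibraryProduct("libcuopt", :libcuopt),
]

# Other GPU recipes on Yggdrasil pull in the CUDA runtime this way;
# the real recipe would likely need more than this.
dependencies = [
    Dependency("CUDA_Runtime_jll"),
]

build_tarballs(ARGS, name, version, sources, script, platforms, products,
               dependencies; julia_compat = "1.6")
```

The SCS-with-GPU recipe linked above is the closest existing template, so comparing against it is probably the fastest way to fill in the real details.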

2 Likes

Also @amontoison might be a good person to talk to

I was at Stanford a few months ago and discussed cuOpt with Chris Maes (its lead developer).
Based on the current implementation and our discussion, I don’t really see the advantage compared to cuPDLP, or to HiGHS with its support for it.

JuMP / MOI is also not GPU-oriented, so I am not sure what you want to achieve in the end, except maybe a proof of concept or testing the accuracy.

If you want to work on it, be aware that it is not a small task.

2 Likes

Another big thing to consider is the availability of CI testing. (I know we could sort something with JuliaGPU.) I have no plans to work on this for jump-dev unless NVIDIA come to the party somewhat with $ or hardware.

1 Like

I’m more curious about their discrete algorithms on GPU, MILP solver + routing algorithms.

To the best of my knowledge, the MILP / routing algorithms are run on the CPU internally.
Only very specific components of the code run on the GPU.

Source: private discussion with Chris

Yes, I would expect so :sweat_smile: I’m still trying to compile it from source.

TBH, figuring out the software aspect of it is as motivating to me as being able to call cuOpt from Julia.

1 Like

Progress update: I am still having trouble building cuOpt from source :frowning:

My fallback right now is to build a Julia client for the cuOpt server API.
The intended workflow would be something like

  1. [outside of julia] start your own cuOpt server
    (I’ll be doing that using their container)
  2. [inside julia] wrap the cuOpt client API into an AbstractOptimizer, which might look like
    using JuMP
    using cuOptServer
    
    model = JuMP.Model(() -> cuOptServer.Optimizer(<options>))
    # build model ...
    JuMP.optimize!(model)
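For reference, the MOI side of that plan is mostly boilerplate. Below is a minimal skeleton of what such an `Optimizer` could look like; everything in it (the module name, fields, and the placeholder server call) is an assumption for illustration, not the actual cuOpt client API.

```julia
# Sketch of a non-incremental MOI optimizer that would forward a model to a
# remote cuOpt server. Module name, options handling, and the server call
# are illustrative placeholders.
module cuOptServer

import MathOptInterface as MOI

mutable struct Optimizer <: MOI.AbstractOptimizer
    # Local copy of the problem, built via MOI.copy_to.
    model::Union{Nothing,MOI.Utilities.Model{Float64}}
    options::Dict{String,Any}
    function Optimizer(; kwargs...)
        return new(nothing, Dict{String,Any}(string(k) => v for (k, v) in kwargs))
    end
end

MOI.get(::Optimizer, ::MOI.SolverName) = "cuOpt (server)"
MOI.is_empty(opt::Optimizer) = opt.model === nothing

function MOI.empty!(opt::Optimizer)
    opt.model = nothing
    return
end

function MOI.copy_to(dest::Optimizer, src::MOI.ModelLike)
    dest.model = MOI.Utilities.Model{Float64}()
    return MOI.copy_to(dest.model, src)  # returns the index map
end

function MOI.optimize!(opt::Optimizer)
    # Serialize opt.model (e.g. to MPS via MOI.FileFormats), POST it to the
    # running cuOpt server, and parse the returned solution. The HTTP call is
    # the part that depends on cuOpt's actual server API, so it is left out.
    error("server call not implemented in this sketch")
end

end # module
```

A non-incremental `copy_to`-based optimizer like this is the usual starting point for solvers that consume the whole problem at once; JuMP’s `CachingOptimizer` layer handles the incremental model-building on top of it.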
    

I don’t know what the API is for their server, but this might be helpful:

1 Like

Hi all,

I’m from the cuOpt team. I’m glad to hear there is interest in getting cuOpt interfaced with JuMP and Julia. My colleague Rajesh Gandham is working on an interface now and could certainly use the community’s help. Stay tuned for further discussion from him on this.

@amontoison Just to clear up a few misconceptions earlier in this thread.

  1. We do believe that cuOpt has an advantage over cuPDLP (which is currently integrated into HiGHS). We spent some time in our 25.05 release optimizing PDLP. This can be seen in the latest benchmarks comparing both solvers: plato.asu.edu/ftp/lpfeas.html . We are continuing to improve cuOpt all the time.

  2. cuOpt’s MILP solver is a hybrid solver running on both the GPU and the CPU. Currently, bound strengthening and primal heuristics run on the GPU, while branch and bound runs on the CPU. Although our MILP solver is new, we hope it will be useful.

@odow Regarding CI, within the cuOpt GitHub repo we currently run CI on GPUs. We can work together to figure out how to run cuOpt’s Julia / JuMP interface on CI with GPUs.

@mtanneau I’m sorry you ran into trouble compiling cuOpt from source. Several others reported this same issue. We recommend that you manage all your dependencies with conda. See our documentation on building from source for more info.

We do hope that members of the Julia / JuMP community will benefit from GPU acceleration of LP/MILP problems through cuOpt. We look forward to collaborating with you all.

Thanks,
Chris

8 Likes

Great! I told Rajesh on Slack that the easiest approach is to open source the Julia package under the NVIDIA org in an unfinished state. Then we can answer some code review questions, and you can probably sort out CI on your hosted runners?

Thank you Chris for the information!

Great to hear that NVIDIA is working on an official Julia interface :slight_smile: I suppose that makes my effort obsolete :upside_down_face: I would be happy to beta-test the interface and help with the JuMP integration.

I did make some progress on building a Julia client for the self-hosted server API (along the lines of the Python one), and got around the build-from-source issues by running a self-hosted container.

1 Like