JLL package product type for interpreter script

I want to pre-package the tools necessary to read performance parameters of AMD GPUs. The package is rocm_smi_lib, which includes a .so library that provides an API for access and a Python script that serves as a command-line interface. The BinaryBuilder script for compiling the .so library is fairly trivial and I have it working now. What is the appropriate way to package the Python file, so that I can run it from my Julia code?

Running the Python file from Julia “manually”, without using any of the JLL machinery, also works fine, so the question is specifically about packaging conventions.
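
For concreteness, the “manual” route is just shelling out to the interpreter, roughly like this (the script location and the use of the system python3 interpreter are placeholders, not anything provided by a JLL):

# Rough sketch of the "manual" invocation; the script path and the system
# python3 interpreter are placeholders, not part of any package.
script_path = "/path/to/rocm_smi.py"
run(`python3 $script_path --help`)
# or capture the output as a string:
output = read(`python3 $script_path --help`, String)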

Long term it would be great to just write a Julia script that interfaces with the .so, but I would prefer to start with something that requires less work.
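
(For reference, the direct route would look roughly like the sketch below. The entry points rsmi_init, rsmi_num_monitor_devices and rsmi_shut_down come from the ROCm SMI C API; their exact signatures and the integer status type are assumptions to be checked against rocm_smi.h.)

# Hedged sketch: talk to librocm_smi64 directly via ccall, bypassing the Python CLI.
const librocm_smi = "librocm_smi64"  # assumes the library is on the loader path

# rsmi_init(uint64_t init_flags) -> status code (0 on success, per the C API)
status = ccall((:rsmi_init, librocm_smi), Cint, (UInt64,), 0)
status == 0 || error("rsmi_init failed with status $status")

# rsmi_num_monitor_devices(uint32_t *num_devices) -> status code
ndev = Ref{UInt32}(0)
ccall((:rsmi_num_monitor_devices, librocm_smi), Cint, (Ref{UInt32},), ndev)
println("monitor devices: ", ndev[])

ccall((:rsmi_shut_down, librocm_smi), Cint, ())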

For reference, here is the build script:

# Note that this script can accept some limited command-line arguments, run
# `julia build_tarballs.jl --help` to see a usage message.
using BinaryBuilder, Pkg

name = "rocm_smi_lib"
version = v"4.0.0"

# Collection of sources required to complete build
sources = [
    ArchiveSource("https://github.com/RadeonOpenCompute/rocm_smi_lib/archive/refs/tags/rocm-$(version).tar.gz", "93d19229b5a511021bf836ddc2a9922e744bf8ee52ee0e2829645064301320f4")
]

# Bash recipe for building across all platforms
script = raw"""
cd ${WORKSPACE}/srcdir/rocm_smi_lib-rocm-*/
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=${prefix} -DCMAKE_PREFIX_PATH=${prefix} ..
make -j${nproc}
make install
"""

# These are the platforms we will build for by default, unless further
# platforms are passed in on the command line
platforms = [
    Platform("x86_64", "linux"; libc = "glibc")
]


# The products that we will ensure are always built
products = [
    LibraryProduct("liboam", :liboam, "oam/lib"),
    LibraryProduct("librocm_smi64", :librocm_smi, "rocm_smi/lib")
]

# Dependencies that must be installed before this package can be built
dependencies = [
    Dependency(PackageSpec(name="hsa_rocr_jll", uuid="dd59ff1a-a01a-568d-8b29-0669330f116a"))
]

# Build the tarballs, and possibly a `build.jl` as well.
build_tarballs(ARGS, name, version, sources, script, platforms, products, dependencies; julia_compat="1.6", preferred_gcc_version = v"8.1.0")

@jpsamaroo , pinging you in case this is of interest, as it relates to AMDGPU.

Generally I would treat the Python script as an executable, since that’s how it’s being used. I would guess that ExecutableProduct won’t have problems locating it (since it should be executable by the end of the build).
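
In this recipe that would presumably be a one-line addition to the products list, along the lines of (the product name is a guess):

ExecutableProduct("rocm_smi.py", :rocm_smi)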

My bad for forgetting to include this in the original post, but BinaryBuilder’s documentation says not to use ExecutableProduct for interpreted scripts:

Typically we use ExecutableProduct only for binary executables (ELF executables on Linux, for example), not for scripts which require an external interpreter. The difference being that the scripts aren’t self-sufficient but need something else to run them

So does that mean I should just use the generic FileProduct? Then I would probably use something like Conda.jl or PyCall.jl to run it.

A FileProduct would always work, yes. It does nothing special: it checks that the file exists in the tarball and provides a variable referencing its path, nothing more.
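
For example, a hypothetical entry (the install path of the script inside the tarball is an assumption):

# Hypothetical: expose the Python CLI script as a plain file product
FileProduct("bin/rocm_smi.py", :rocm_smi_script)

The generated JLL module would then export a rocm_smi_script variable holding the absolute path, which you can pass to whatever Python interpreter you choose.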

For Python packages that’s probably a better approach. I’m not eager to turn BinaryBuilder into yet another Python package manager; we already have tools for that.

Thanks! Since it is a single Python script that the AMD folks provide, depends only on the standard library, and only serves as a CLI interface for their .so file, can I keep it in the build_tarballs.jl script, just so that keeping versions in sync is easier?

That is what I did here: https://github.com/JuliaPackaging/Yggdrasil/pull/4346
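
For illustration, a hedged sketch of what such an addition to the recipe above could look like; the source-tree location python_smi_tools/rocm_smi.py is an assumption, and the actual PR may differ:

# Bash recipe extended to ship the Python CLI script alongside the libraries
script = raw"""
cd ${WORKSPACE}/srcdir/rocm_smi_lib-rocm-*/
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=${prefix} -DCMAKE_PREFIX_PATH=${prefix} ..
make -j${nproc}
make install
# copy the Python CLI script into the tarball (source path is an assumption)
install -Dvm 755 ../python_smi_tools/rocm_smi.py ${bindir}/rocm_smi.py
"""

products = [
    LibraryProduct("liboam", :liboam, "oam/lib"),
    LibraryProduct("librocm_smi64", :librocm_smi, "rocm_smi/lib"),
    # hypothetical: reference the installed script so the JLL exposes its path
    FileProduct("bin/rocm_smi.py", :rocm_smi_script),
]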