[ANN] Shuffle.jl: requesting feedback

Hey Julia community,

I’ve started work on Shuffle.jl, a package implementing a number of different shuffling algorithms in Julia.

See the docs for more details.

I’d like to add a Cut shuffle that cuts the collection at a given position, like you’d do with a deck of cards. Maybe we could also add shuffling algorithms for arrays that don’t use one-based indexing. In general, I want to think about adding shuffling for other collection types.
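
To give a rough idea, here’s a minimal sketch of what a cut could look like (the name `cut` and the exact API are just placeholders, not part of the package):

```julia
# Hypothetical cut: split the deck at position `n` and swap the two
# halves, as when cutting a deck of cards.
function cut(deck::AbstractVector, n::Integer)
    firstindex(deck) <= n <= lastindex(deck) ||
        throw(ArgumentError("cut position $n is out of bounds"))
    return vcat(deck[n+1:end], deck[begin:n])
end

cut(collect(1:10), 4)  # [5, 6, 7, 8, 9, 10, 1, 2, 3, 4]
```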

Any feedback would be appreciated. Let me know what you think.

EDIT: I’ve added a code coverage badge as well as automatic testing via GitHub Actions. Thanks for the suggestions. Next step: add some benchmarks and set up automatic benchmarking.

Looks good overall. I would add automatic tests on Travis with a badge in the README; it always looks more serious with one. You can check an example config file here.

Clean code and extensive docstrings, that’s definitely worth a ⭐

It looks very nicely done, and well-documented. I would second @jonathanBieler’s suggestion for coverage and CI badges. Also, autogenerated docs would be nice later on. Incidentally, you may find

https://github.com/JuliaDocs/DocStringExtensions.jl

useful for writing docstrings.
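
For example, the `SIGNATURES` abbreviation fills in the method signature automatically (a minimal sketch; `myshuffle!` is a made-up function):

```julia
using DocStringExtensions

"""
$(SIGNATURES)

Shuffle `v` in place and return it.
"""
function myshuffle!(v::AbstractVector)
    # body omitted; only the docstring pattern matters here
    return v
end
```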

What do you mean by autogenerated? The docs are already automatic in the sense that docstrings are compiled into the docs website on every push to the main branch or release.
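
For reference, the deployment follows the standard Documenter.jl pattern, roughly like this (a sketch; the repo path is a placeholder):

```julia
# docs/make.jl
using Documenter, Shuffle

# Build the docs site, pulling in docstrings from the Shuffle module.
makedocs(
    sitename = "Shuffle.jl",
    modules = [Shuffle],
)

# Deploy the built site from CI.
deploydocs(
    repo = "github.com/USER/Shuffle.jl.git",  # placeholder; use the real repository
)
```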

Missed that, sorry.

Something I’d really like to get working is automatic benchmarking, testing, and docs deployment on every commit to main or dev. I’m not quite sure how I can automatically benchmark against the latest tagged version with PkgBenchmark, though. I’m also not sure where to put the results; I guess I’ll just display them and see them when I look at the Travis job?
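
What I have in mind is roughly the following (a sketch: it assumes a `benchmark/benchmarks.jl` suite exists, and `"v0.1.0"` stands in for whatever the latest tag is):

```julia
using PkgBenchmark

# Benchmark the current state of the package against a tagged baseline
# and summarize the differences.
results = judge("Shuffle", "v0.1.0")

# Write the comparison to a Markdown file for inspection.
export_markdown("judgement.md", results)
```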

I’m not sure if it does quite what you want, but I’d check out https://github.com/tkf/BenchmarkCI.jl, which can produce comments like https://github.com/JuliaFolds/FLoops.jl/pull/49#issuecomment-687710965
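
The basic usage in a CI step looks something like this (a sketch; see the BenchmarkCI.jl README for the full workflow setup):

```julia
using BenchmarkCI

BenchmarkCI.judge()              # benchmark the PR against the default branch
BenchmarkCI.displayjudgement()   # print the comparison in the CI log
BenchmarkCI.postjudge()          # post the result as a PR comment
```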

That looks like exactly what I wanted!

Actually, that’s just me manually quoting the benchmark result 🙂. The real auto-generated comments look like this: https://github.com/JuliaFolds/Transducers.jl/pull/308#issuecomment-647841299

A while ago, I switched to pushing the result to a Git repository rather than posting it as a comment. Something like this: https://github.com/JuliaFolds/FLoops-data/blob/benchmark-results/2020/09/06/063739/result.md

I found it rather annoying to get a comment with a large wall of text for each push; getting just a link is much less intrusive. Storing the result JSON files also helps me find regressions by post-hoc analysis. (Incidentally, the PR you linked is an example of this.)
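
Concretely, the CI script does something like this (a sketch; the data-repository URL is a placeholder, and I’m assuming the `url` keyword as shown in the README):

```julia
using BenchmarkCI

BenchmarkCI.judge()
# Push the result files to a separate data repository instead of
# posting a comment on the PR.
BenchmarkCI.pushresult(url = "git@github.com:USER/PackageName-data.git")
```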

Ah, thanks for the clarification! I was surprised the comment was from your account; I had remembered it being from another one. That explains it then 🙂.
