[ANN] DiffPrivacyInference.jl - Infer differential privacy of your Julia code

We proudly announce DiffPrivacyInference.jl, a package that lets you analyze your Julia code for differential privacy!

Its main purpose is to enable the implementation of novel differentially private mechanisms from primitives whose privacy guarantees are known from the literature, without having to prove their properties by hand. Instead, we infer the guarantees automatically from your Julia source code!
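For a flavor of the kind of primitive we mean, here is a minimal sketch of the classic Laplace mechanism in plain Julia. This is illustration only, not this package's API; the name `laplace_mechanism` and its signature are made up for the example:

```julia
using Distributions, Statistics

# The Laplace mechanism: adding Laplace(sensitivity/ε) noise to a query
# result makes it ε-differentially private. Illustrative only; not part
# of DiffPrivacyInference.jl's API.
function laplace_mechanism(query_result::Real, sensitivity::Real, ε::Real)
    query_result + rand(Laplace(0, sensitivity / ε))
end

# Example: release the mean of a dataset with entries in [0, 1].
# The mean of n such entries has sensitivity 1/n.
data = rand(100)
private_mean = laplace_mechanism(mean(data), 1 / length(data), 0.5)
```

The point of the package is that you can build mechanisms out of such primitives and have their overall privacy inferred, rather than proved manually.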

We provide an example implementation of Differentially Private Stochastic Gradient Descent that lets you train a neural network using the Flux.jl machine learning framework. We used our software to verify that the resulting trained model preserves the privacy of the dataset it was trained on. Take a look at the walkthrough :slight_smile:
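For readers unfamiliar with DP-SGD, the core idea is to clip each per-example gradient to bound its sensitivity and then add Gaussian noise calibrated to that bound. Here is a rough sketch in plain Julia/Flux; the walkthrough example differs, and the model shape and hyperparameters here are made up:

```julia
using Flux, LinearAlgebra

model = Chain(Dense(4 => 8, relu), Dense(8 => 2))
θ, re = Flux.destructure(model)         # flatten parameters into one vector
loss(p, x, y) = Flux.logitcrossentropy(re(p)(x), y)

C, σ, η = 1.0, 1.5, 0.01                # clip norm, noise scale, learning rate

function dp_sgd_step!(θ, xs, ys)
    g_sum = zero(θ)
    for (x, y) in zip(xs, ys)           # per-example gradients
        g = Flux.gradient(p -> loss(p, x, y), θ)[1]
        g_sum .+= g .* min(1, C / norm(g))      # clip to L2 norm ≤ C
    end
    # Gaussian noise calibrated to the clipping bound C
    ĝ = (g_sum .+ σ * C .* randn(length(θ))) ./ length(xs)
    θ .-= η .* ĝ
end

# One step on a toy minibatch of 32 examples:
xs = [rand(Float32, 4) for _ in 1:32]
ys = [Flux.onehot(rand(1:2), 1:2) for _ in 1:32]
dp_sgd_step!(θ, xs, ys)
```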

Writing differentially private code is not straightforward. Our program can tell you when you've failed, but you'll have to learn how not to fail yourself. Our documentation provides instructions on how to write code that we can verify.

Our backend is a type checker written in Haskell, based on the Duet type system. Its repo is linked to our issue tracker, where there are lots of ideas for improvements. Let us know if you want to contribute; we'll be glad to assist.
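To give an idea of what the checker tracks: Duet-style privacy types carry (ε, δ) budgets that compose according to rules from the literature, such as basic sequential composition. A toy Julia illustration of that one rule (not the checker's internals):

```julia
# A toy (ε, δ) privacy budget, of the kind Duet-style privacy types track.
struct Budget
    ε::Float64
    δ::Float64
end

# Basic sequential composition: an (ε₁, δ₁)-private computation followed
# by an (ε₂, δ₂)-private one is (ε₁ + ε₂, δ₁ + δ₂)-private.
compose(a::Budget, b::Budget) = Budget(a.ε + b.ε, a.δ + b.δ)

compose(Budget(0.5, 1e-6), Budget(0.5, 1e-6))   # Budget(1.0, 2.0e-6)
```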

We'd love to hear your thoughts and suggestions about this package. And if you want to use it and run into trouble, don't hesitate to get in touch!


As Discourse does not allow me to put more than two links in my post: