Flux in a production environment

We came across this thread (which dates back two years), which pretty much captures the state of Flux in production environments, and that state remains largely unchanged. Julia provides a tech stack that, once a company chooses it, it rarely moves away from, because of the performance Julia offers! Flux, on the other hand, does the job of more than an ML framework by integrating with scientific and statistical packages (e.g. neural ODEs), and in that respect is unparalleled (Python doesn't even come close when it comes to neural ODEs specifically).

But Flux is not production-ready.

This post intends to find out: how can we get there?

I’m interested in contributing to Flux through GSoC / JSoC '23. Flux now has ONNX.jl, FastAI.jl, standard models, etc., but nothing that comes close to other frameworks for production.

I’m not aware of any companies serving Julia deep learning models, but I’m sure they exist (and their number will grow). These companies have to build their own custom pipelines from scratch, which makes product development sluggish compared to other ecosystems.

So, with due respect, we would like to ask the companies using Julia for ML in production the following questions.

  1. Who is using or trying to use DL models in production, what challenges do they face, and where do they think the biggest gaps are?
  2. Where do you think Flux falls short of PyTorch and TensorFlow for production environments?

Again, the goal is to make development and deployment in Flux frictionless, so we are open to suggestions!

If the information is not publicly disclosable but you still want to help, please reach out at shivanshuman021@gmail.com.

Thank you


My first thought is the development of a package similar to TorchServe and TF Serving.
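As a rough illustration of what a TorchServe-style package could offer, here is a minimal sketch of wrapping a Flux model behind an HTTP endpoint. The package choices (HTTP.jl, JSON3.jl), route behaviour, and handler name are my assumptions for the sketch, not an existing serving API:

```julia
# Minimal model-serving sketch (assumes Flux, HTTP.jl, and JSON3.jl are installed).
using Flux, HTTP, JSON3

# A toy model standing in for a real trained network.
model = Chain(Dense(4 => 8, relu), Dense(8 => 2), softmax)

# Handler: parse a JSON array of features, run inference, return probabilities.
function predict_handler(req::HTTP.Request)
    x = Float32.(JSON3.read(req.body, Vector{Float64}))
    y = model(x)
    return HTTP.Response(200, JSON3.write(y))
end

# Serve on localhost:8080. A real TorchServe-like package would layer request
# batching, model versioning, health checks, and GPU placement on top of this.
HTTP.serve(predict_handler, "127.0.0.1", 8080)
```

The point of a dedicated package would be exactly the parts this sketch omits: batching, multi-model management, metrics, and a standard deployment story.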

How would this help in general?

  1. I would say the number of high-quality pretrained models is probably a key hurdle, and one that is hard to mitigate without resources or a model zoo maintained as well as Torch Hub or Hugging Face. This is a place where throwing money at the problem can make a big difference; unfortunately, for deep learning in Julia, that money is not there yet.

  2. Easy wrapping of your model in a web service and/or UI, like Gradio, Shiny, or Streamlit.

  3. Better serialization and inference, à la ONNX.
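For contrast with point 3: the common in-ecosystem route today is BSON.jl (as in the Flux documentation), which round-trips a model but ties inference to Julia, unlike a portable ONNX export. A minimal sketch, assuming BSON.jl is installed:

```julia
# Save and load a Flux model with BSON.jl -- Julia-only serialization,
# in contrast to a portable, cross-runtime ONNX export.
using Flux
using BSON: @save, @load

model = Chain(Dense(4 => 8, relu), Dense(8 => 2))

@save "mymodel.bson" model   # write the model (structure + weights) to disk
@load "mymodel.bson" model   # later, or in another session: rebind `model`

# To my knowledge, ONNX.jl has focused on importing ONNX graphs; a robust
# exporter is exactly the kind of production gap this thread is about.
```

This works for Julia-to-Julia handoff, but a production story arguably needs a format that non-Julia runtimes (ONNX Runtime, TensorRT, etc.) can consume.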