Preliminary feasibility question - modelling soft tissue behavior using ML

I got interested in Julia since it seems to cross many of the themes I'm touching lately: HPC, biophysics, GPGPU, ML, with the goal of creating interactive systems.

I'm not a coder; I'm a solution designer who digs into different topics just enough to understand what's possible and which approach is better to use (or avoid).

Normally I modify other people's code to test things, or build small programs line by line to test and learn.

Long ago I worked as a plain-C scientific coder; I have past experience with Matlab for testing quant finance models, and recent experience in Python using FastAI to learn about the possibilities of ML.

I didn't like Python (just a personal preference) and wanted to approach HPC and GPGPU as a personal experience. C++ has really too steep a learning curve to unlock GPU potential, Rust is not really dedicated to this domain, and C# is not really used in it. Reading about Julia, I like its focus on HPC and its elegant approach (even if I'm not sure I like the Matlab-like syntax).

I wanted to ask whether it might be too steep to start with Flux/Fastai.jl/SciML to achieve something like this,

based on this work (so a U-Net derivative), just to have an interesting goal.

Looks right up our alley.

This has a lot of the FEM meshing tools your paper calls for. I don't know if Gmsh is up to date specifically, but I believe so.

UNets are here:

Connecting the two could be an interesting possibility. I'm quite interested in this field as well, so feel free to follow up with an email.


I spent some time talking with @davide445, and below I wrote a summary of his problem.

The final aim is to develop a surgery-navigation simulator for educational purposes that can be interacted with using a haptics device. It is easier to show than to explain in words, so check this video.

For this simulator to be useful it must work in real time, rendering about 30 frames per second (30 FPS), i.e. roughly 30 ms per frame. You need to ask @davide445 why this 30 ms is so important; I don't know that. Today this can be done for simulations where the main interest is operating on bones, because bones can to a large extent be treated as rigid bodies (no surprise here). But making a simulation of soft tissue, e.g. a liver, is much more challenging computationally. Again, no surprise.

The program must work in real time and must simulate the cutting of soft tissue in a way that is realistic enough.

The simulation program roughly operates in the way listed below. @davide445, I hope I got it mostly right.

  1. Take the liver data and produce a visual model of it.
  2. Take user input.
  3. Produce the input data for the PDEs that describe the movement of the liver.
  4. Send the PDE input data to the solver.
  5. Solve the PDEs with the given input data, generating the PDE output data.
  6. Send the PDE output data from the solver to the visualisation.
  7. Go back to point 1.
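The steps above are essentially a fixed-budget real-time loop. A minimal Python sketch of that structure (illustrative only; the project targets Julia, and `solve_pde_step` is a hypothetical placeholder for the real solver):

```python
import time

FRAME_BUDGET_S = 0.030  # 30 ms per frame target


def solve_pde_step(state, user_input):
    """Placeholder for steps 3-5 (the real PDE solve); here a trivial update."""
    return {"displacement": state["displacement"] + 0.1 * user_input}


def run_frames(n_frames):
    state = {"displacement": 0.0}
    overruns = 0
    for _ in range(n_frames):
        t0 = time.perf_counter()
        user_input = 1.0                            # step 2: user input (stubbed)
        state = solve_pde_step(state, user_input)   # steps 3-5
        # step 6: hand `state` to the visualisation (omitted here)
        elapsed = time.perf_counter() - t0
        if elapsed > FRAME_BUDGET_S:
            overruns += 1                           # frame missed its 30 ms budget
    return state, overruns
```

Counting budget overruns like this makes the real-time constraint measurable: the open question is whether a real solver can keep `elapsed` under 30 ms, not whether the loop itself is feasible.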

The main question from @davide445 is: should he use Julia and the SciML ecosystem as the computation engine in point 5? In other words, we can assume that the visualisation and I/O software is ready, and we want to use Julia only for computing the time evolution of the liver. Is this worth a try?

The goal is to find a way to make such a simulation possible under the time constraint of 30 ms, while only some part of this time can be used for solving the PDE problem.

My intuition is: yes, Julia and SciML can give you as good a performance in solving differential equations as you can get. If some other solver is faster, you just open a GitHub issue saying "Hey, the SciML solver is slower than solver X", and then the collaboration works on it until they remove this gap (this is how Chris Rackauckas described their workflow in a talk from 2020). But there are a few caveats.

First caveat: from what I understand, @davide445 and his collaborators still aren't sure whether solving the PDEs directly is too big a bottleneck, and whether in the end some machine learning is needed to make the computation fast enough. I trust SciML when you need to solve differential equations, but I have never checked its ML abilities. I'm also agnostic about how good SciML is at mixing PDE solvers with ML algorithms; I never paid much attention to neural differential equations.

Second caveat: in points 4 and 5 you need to pipe Julia code to external software, and this can be a big bottleneck. From what @davide445 told me, they have their own program that works entirely on the GPU, where CUDA is used for communication, memory sharing and everything else. I'm not a GPU guy, so I may have understood him wrong, and I can't judge whether this would be a problem or not.

At this stage I don't know how much time implementing such a thing in Julia would take. It could be work for 3 days or for 3 months, and @davide445 needs the opinion of someone knowledgeable on whether this effort is worth pursuing or not. I can't give him such an answer, and he needs it by the end of April at the latest.


Another question from @davide445 was: where to find a skilled Julia developer? I suggested posting a job offer here and on the Julia Zulip, but he probably welcomes all good ideas.

… or 3 years (this doesn't sound like a master's thesis).


Indeed. I should add “or 3 years”.

Thanks @KZiemian. Just to point out a few details: surgical navigation is a specific kind of intraoperative surgery system; our goal is to create a pre-surgery training simulator.

The minimum goal for VR visualization is 30 fps, meaning there are 33 ms between every frame, so a reasonable "round" goal is 30 ms of response time for the simulation. An ideal goal for VR visualization is below 20 ms of latency, meaning at least a 50 fps visualization.
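The frame-time arithmetic behind those numbers is simply budget = 1000 / fps:

```python
def frame_budget_ms(fps):
    """Time available per frame, in milliseconds, at a given frame rate."""
    return 1000.0 / fps

# minimum VR goal: 30 fps -> ~33 ms per frame (rounded down to a 30 ms target)
# ideal VR goal:   below 20 ms latency -> at least 50 fps
```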

Just to point out: the liver is just an example of a goal; the final product may be focused on a different anatomical area. I also need to add haptic feedback as a requirement, which needs to run at a 1 kHz update rate, so we will probably use a separate, coarser simulation synchronized with the visual one.
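Haptics at 1 kHz against visuals at 30 Hz means roughly 33 cheap haptic substeps per visual frame. A hypothetical sketch of that two-rate structure (the `haptic_step` and `visual_step` callbacks are stand-ins, not part of any real framework):

```python
HAPTIC_HZ = 1000   # haptic device update rate
VISUAL_HZ = 30     # minimum VR frame rate


def substeps_per_frame(haptic_hz=HAPTIC_HZ, visual_hz=VISUAL_HZ):
    """Coarse haptic solver steps needed inside one visual frame."""
    return haptic_hz // visual_hz


def run_frame(haptic_step, visual_step, state):
    """One visual frame: many cheap coarse-model substeps for the haptic
    device, then one detailed solve for the rendered frame."""
    for _ in range(substeps_per_frame()):
        state = haptic_step(state)   # coarse model, must be very cheap
    return visual_step(state)        # detailed model, once per frame
```

The design choice this illustrates: the two simulations run at different fidelities and rates, and the synchronization point is once per visual frame.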

The main goal is to have a 6-month POC demonstrating all the basic features and, if successful, a 3-year project to deliver a complete working solution. The POC project will probably start before the end of June.


A layman's view: you can throw quite a bit of hardware at this problem (some cluster or tensor processing units). IMHO you should have an idea of how parallelizable your algorithms will be.

I believe they have some problems with hardware too. About parallelization I know nothing.

As I recently learned: you are on the safe side of things if your algorithms are embarrassingly parallel.
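For example, per-node force evaluation on a mesh is embarrassingly parallel when each node's update reads only its own local data: no shared state, no ordering constraints, so the work splits cleanly across workers. A toy Python sketch (in the real system this would be a GPU kernel, not threads):

```python
from concurrent.futures import ThreadPoolExecutor


def node_force(node):
    """Toy per-node computation; it depends only on this node's own data,
    which is what makes the whole map embarrassingly parallel."""
    displacement, stiffness = node
    return -stiffness * displacement


def forces_parallel(nodes, workers=4):
    # Each task is independent, so a plain parallel map is all we need.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(node_force, nodes))
```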


In our previous conversation @davide445 pointed to the algorithm described in GPU Implementation of extended total Lagrangian explicit (gpuXTLED) for Surgical Incision Application. I don't remember what the role of this algorithm in the project is, but it is reasonable to mention it here.
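For context: TLED-family methods advance nodal displacements with explicit central-difference time integration, which is one reason they map well to GPUs (each node updates independently per step). A minimal one-degree-of-freedom sketch of that integration scheme, not the paper's implementation:

```python
def central_difference(u0, v0, accel, dt, n_steps):
    """Explicit central-difference integration of u'' = accel(u).
    Returns the displacement after n_steps of size dt."""
    u_prev = u0
    # First step from a Taylor expansion, since u_{-1} is unknown.
    u = u0 + dt * v0 + 0.5 * dt * dt * accel(u0)
    for _ in range(n_steps - 1):
        u_next = 2.0 * u - u_prev + dt * dt * accel(u)
        u_prev, u = u, u_next
    return u
```

For a unit mass on a unit spring (u'' = -u, u0 = 1, v0 = 0) the scheme tracks cos(t) closely at small dt. The catch carries over to the full FEM setting: explicit schemes are only conditionally stable, so the step size is bounded by the stiffest element in the mesh.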

Also, even if they have their own GPU visualization software, it may be good to ask the Makie collaborators for a few hints. They have a lot of experience with Julia and GPU visualization.

Being a VR-based interactive application, there is no way to use more than a single compute node (a workstation), because a distributed system would add too much latency.
So there will be at most 4 compute accelerators (GPUs or whatever), possibly all connected with a fast, low-latency P2P interconnect (e.g. NVLink).


In general I'm seeing three possible ways forward:

  • finding an efficient numerical PDE solver specialized for our needs and able to achieve the performance we need, given all the above requirements → I consider this unlikely
  • finding a SciML hybrid version of this PDE (so numerical + ML) that can speed things up → a possibility with more unknowns, but possibly more accurate
  • using an established soft-tissue simulation framework (there are a few of them around) to generate a suitable training set, and using Julia packages "just" to design, implement and train a suitable DNN architecture to reproduce the expected behavior → probably the lowest-risk option, with great doubt about whether it will be possible to achieve the desired dynamics accuracy in realistic conditions
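The third option can be sanity-checked at toy scale: generate (state, next-state) pairs from a cheap reference simulator and fit a surrogate to the one-step map. A hypothetical numpy sketch, with a linear least-squares fit standing in for the DNN (the damped-spring "simulator" is invented for illustration, not one of the real frameworks):

```python
import numpy as np


def reference_step(state, dt=0.01, k=1.0, c=0.1):
    """Toy reference simulator: one explicit step of a damped spring.
    Stands in for the established soft-tissue framework."""
    u, v = state
    a = -k * u - c * v
    return np.array([u + dt * v, v + dt * a])


def train_surrogate(n_samples=200, seed=0):
    """Fit X_next ≈ X @ W by least squares on simulator-generated pairs."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
    Y = np.array([reference_step(x) for x in X])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W


def surrogate_step(state, W):
    return state @ W
```

Because this toy step map is itself linear, least squares recovers it essentially exactly; a real soft-tissue step map is nonlinear, which is exactly where a trained DNN (e.g. via Flux) would replace the linear fit, and where the accuracy doubt above comes in.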

We are still far from answers, but at least we have made the questions and challenges easier to understand.