Hello everyone :)…
Based on my understanding of Turing-based inference (in the case of NUTS), during warmup the chain moves toward convergence, and then we collect samples. Now there are two situations.

When the model is simple and inference runs fast, there is no issue with the current setup.

But when the model is complex and inference is slow (say, 100 warmup iterations plus 100 samples taking 2 days), a problem emerges because you are essentially working with a black box. With NUTS you must experiment with the warmup length before you finally see convergence, and with big models that is difficult.
A solution, I think, would be for Turing to report diagnostics while running. For example, after finishing warmup (before collecting samples) it could say whether convergence has been achieved or not, so that a person could stop early and rerun with a longer warmup instead of waiting out the full run.
Does such a way already exist?
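For illustration, here is a rough sketch of the staged workaround I have in mind: run a short chain, inspect diagnostics, and continue only if needed. The `save_state`/`resume_from` keywords and the `summarystats` diagnostics are my assumptions from the Turing/MCMCChains docs, and exact names may differ across versions:

```julia
using Turing

# Toy model as a stand-in for the slow model in question.
@model function demo(x)
    μ ~ Normal(0, 1)
    for i in eachindex(x)
        x[i] ~ Normal(μ, 1)
    end
end

model = demo(randn(50))

# Stage 1: a short run, saving the sampler state so it can be resumed.
chain = sample(model, NUTS(), 100; save_state=true)

# Inspect convergence diagnostics before committing to a longer run;
# MCMCChains' summarystats reports ess and rhat per parameter.
display(summarystats(chain))

# Stage 2: if rhat is far from 1, continue from the saved state
# instead of restarting warmup from scratch.
chain2 = sample(model, NUTS(), 100; resume_from=chain)
```

This doesn't give live feedback during warmup, which is what I'm really asking for, but it at least avoids the worst case of discovering non-convergence only after a multi-day run.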