I’m trying to fit a model to predict graduation versus dropout, that is, a binary logistic regression. Theoretically and based on the data, I suspect that dropouts are more likely to be either extremely slow or extremely quick, as the following image of median response times suggests:

I have a dozen such variables, all showing the same pattern. In the frequentist world, I would fall back on computing the middle of the data (e.g., the median) and adding a new variable containing the absolute difference from that middle. In Turing, my idea would be the same, except that I would let Turing estimate the middle.
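To make the idea concrete, here is a minimal sketch of what I have in mind for a single predictor, assuming a vector `x` of median response times and a binary outcome `y` (1 = dropout); the variable names, priors, and synthetic data are only illustrative:

```julia
using Turing
using StatsFuns: logistic

# Illustrative synthetic data: dropouts at both extremes of x.
x = randn(200)
y = [rand() < logistic(-1 + 2 * abs(xi)) for xi in x]

@model function ushape(x, y)
    m ~ Normal(0, 2)   # the "middle" of x, estimated from the data
    α ~ Normal(0, 2)   # intercept
    β ~ Normal(0, 2)   # effect of the absolute distance from the middle
    for i in eachindex(y)
        p = logistic(α + β * abs(x[i] - m))
        y[i] ~ Bernoulli(p)
    end
end

chain = sample(ushape(x, y), NUTS(), 1_000)
```

So instead of plugging in a precomputed median, the location parameter `m` gets a prior and is sampled jointly with the regression coefficients.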

Does this approach make sense, or does anyone here know how problems like this are normally solved?

I prefer to stay away from Bayesian mixture models. I find fitting such models too difficult and error-prone (as shown in Bayesian Latent Profile Analysis (mixture modeling) - Huijzer.xyz).