Nested cross-validation

For my PhD, I’m mainly dealing with data containing 10 to 20 features and sample sizes of about 100 to 500. According to a fair amount of literature, cross-validation (CV) is biased and should be replaced by nested cross-validation whenever you can computationally afford it (Krstajic et al., 2014; Vabalas et al., 2019). I was thinking about writing a paper where I manually compare 4 models: something like a linear model versus two Turing.jl models and maybe a random forest.

So, as a sanity check: should I put the 4 models in a nested cross-validation loop to get an automated answer to the following questions?

  1. Which model performs best?
  2. How well will the best model perform?

I expect that runtime will be okay. Compared to plain cross-validation, runtime is only multiplied by the number of models that I want to compare, and the outer loop can run in parallel.
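To make the structure concrete, here is a minimal sketch of that nested loop in Python with scikit-learn on synthetic data. Everything here is illustrative: the actual study would use the real data set and the Turing.jl models, and the model names, fold counts, and the R² metric are my assumptions, not something fixed by the question. The key point is that the inner loop only ever sees the outer training fold, so the outer score is an unbiased estimate of the selected model's performance.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for the real data: ~15 features, ~200 samples.
X, y = make_regression(n_samples=200, n_features=15, noise=10.0, random_state=0)

# Illustrative candidates; the paper would compare a linear model,
# two Turing.jl models, and a random forest instead.
models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "forest": RandomForestRegressor(n_estimators=50, random_state=0),
}

outer = KFold(n_splits=5, shuffle=True, random_state=1)
inner = KFold(n_splits=5, shuffle=True, random_state=2)

outer_scores = []
for train_idx, test_idx in outer.split(X):
    X_tr, y_tr = X[train_idx], y[train_idx]
    X_te, y_te = X[test_idx], y[test_idx]
    # Inner loop: select the best model using ONLY the outer training fold.
    inner_means = {
        name: cross_val_score(m, X_tr, y_tr, cv=inner, scoring="r2").mean()
        for name, m in models.items()
    }
    best_name = max(inner_means, key=inner_means.get)
    # Outer loop: refit the selected model and score it on held-out data.
    best = models[best_name]
    best.fit(X_tr, y_tr)
    outer_scores.append(best.score(X_te, y_te))

print(f"nested-CV R^2 estimate: {np.mean(outer_scores):.3f}")
```

Note that different outer folds may select different models; the averaged outer score then estimates the performance of the *selection procedure*, which is exactly what makes the estimate for question 2 less optimistic than plain CV.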

Hi Rikh, a few comments if I may,

Perhaps only whenever you use it to select a model? For just validating a model, CV should be fine, I believe.

Be careful with automated answers unless you’re okay with automated mistakes.

I would say this question makes proper sense when stated as “Which model performs best on this very specific data set?”. You’ll find that whichever of those four models performs best will depend on which data set you choose. This is especially true if your paper is related to QSAR data sets like Krstajic et al. (2014).


Hmm, I thought it was the other way around. But I’m also not sure.

Very good comment about the automated mistakes!