...just the final model fitting, or also the feature construction/selection phase?

— Comment by Miron Kursa (mbq):
But LOO requires 125 repetitions, which seriously limits computationally intensive methods; on the other hand, "expert-knowledge"-based approaches would have no trouble pushing overfitting through it. Maybe you could allow 10-fold CV?
Everything that contributes to model building, including internal steps such as feature construction/selection, if those steps use the training data. All of these steps must be redone for each LOO training set of n-1 examples, and the resulting model should be used to generate a prediction for the left-out example. This is done to separate the training and prediction phases of each base learner as cleanly as possible, and thus to reduce overfitting in the supervised ensembles we intend to construct from them.
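To illustrate, here is a minimal sketch of LOO cross-validation with the selection step redone inside every fold. The toy data, the gap-based selection rule, and the centroid classifier are all hypothetical stand-ins, not anyone's actual method; the point is only that `select_feature` never sees the held-out example.

```python
def select_feature(X, y):
    """Pick the single feature whose class means differ most, computed
    on the TRAINING data only -- a stand-in for any selection step."""
    best, best_gap = 0, -1.0
    for j in range(len(X[0])):
        m0 = [x[j] for x, t in zip(X, y) if t == 0]
        m1 = [x[j] for x, t in zip(X, y) if t == 1]
        gap = abs(sum(m1) / len(m1) - sum(m0) / len(m0))
        if gap > best_gap:
            best, best_gap = j, gap
    return best

def loo_accuracy(X, y):
    correct = 0
    for i in range(len(X)):               # one fold per example
        X_tr = X[:i] + X[i + 1:]          # the n-1 training examples
        y_tr = y[:i] + y[i + 1:]
        j = select_feature(X_tr, y_tr)    # selection redone INSIDE the fold
        c0 = [x[j] for x, t in zip(X_tr, y_tr) if t == 0]
        c1 = [x[j] for x, t in zip(X_tr, y_tr) if t == 1]
        mid = (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2
        pred = 1 if X[i][j] > mid else 0  # predict the left-out example
        correct += pred == y[i]
    return correct / len(X)

# Hypothetical toy data: feature 0 is informative, feature 1 is noise.
X = [[0.0, 0.4], [0.1, 0.6], [0.2, 0.5],
     [0.9, 0.5], [1.0, 0.6], [1.1, 0.4]]
y = [0, 0, 0, 1, 1, 1]
print(loo_accuracy(X, y))  # → 1.0
```

Running the selection step on the full data first and only cross-validating the final fit would leak information from the held-out example into feature selection, which is exactly the overfitting the answer warns against.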

What shall we LOO-cross-validate?