Title: Linear Unlearning for Cross-Validation

Status: Preprint, submitted to Advances in Computational Mathematics, April 1995.

Authors: Lars Kai Hansen and Jan Larsen
CONNECT, Electronics Institute B349
Technical University of Denmark
DK-2800 Lyngby, Denmark
Phones: (+45) 45253889, (+45) 45253923
Fax: (+45) 45880117
Emails: lkhansen,jlarsen@ei.dtu.dk

ABSTRACT: The leave-one-out cross-validation scheme for assessing the generalization of neural network models is computationally expensive because it requires replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximate cross-validation. Further, we discuss the possibility of exploiting the ensemble of networks offered by leave-one-out for performing ensemble predictions and for obtaining error bars on future examples. We show that 1) the optimal ensemble prediction, based on linear combinations of the individual networks, is obtained by weighting the networks equally, and 2) the generalization performance of the ensemble predictor is identical to that of the network trained on the whole training set. Numerical experiments on the sunspot time series prediction benchmark demonstrate the potential of the linear unlearning technique.
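The central idea of the abstract, removing one example's influence through a local linear (rank-one) update instead of retraining, can be illustrated on a plain linear least-squares model, where such an update is exact rather than approximate. The sketch below is illustrative only (all variable names and the synthetic data are my own, not the paper's); it computes leave-one-out residuals from the full-data fit via the leverage values and checks one of them against brute-force retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.standard_normal((n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# Full least-squares fit on all n examples.
H_inv = np.linalg.inv(X.T @ X)        # inverse Hessian of the quadratic cost
w = H_inv @ X.T @ y
resid = y - X @ w

# "Linear unlearning": leave-one-out residuals from the full fit via a
# rank-one downdate (Sherman-Morrison). Exact for a linear model; for a
# trained neural network a linearization around the weights plays this role.
h = np.einsum('ij,jk,ik->i', X, H_inv, X)   # leverage values h_ii
loo_resid = resid / (1.0 - h)

# Sanity check: retrain from scratch without example i.
i = 7
mask = np.arange(n) != i
w_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
assert np.isclose(loo_resid[i], y[i] - X[i] @ w_i)
```

The design point is the one the abstract exploits: the unlearned leave-one-out errors come from a single fit plus n cheap rank-one corrections, instead of n full training sessions.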