TITLE: Adaptive Regularization in Neural Network Modelling

AUTHORS: Jan Larsen (1), Claus Svarer (2), Lars Nonboe Andersen (1) and Lars Kai Hansen (1)

1) Department of Mathematical Modelling, Building 321
Technical University of Denmark
DK-2800 Lyngby, Denmark
emails: jl@imm.dtu.dk, lna@imm.dtu.dk, lkhansen@imm.dtu.dk
www: http://eivind.imm.dtu.dk

2) Neurobiology Research Unit
Department of Neurology, Building 9201
Copenhagen University Hospital
Blegdamsvej 9
DK-2100 Copenhagen Ø, Denmark
email: csvarer@pet.rh.dk
www: http://neuro.pet.rh.dk

ABSTRACT: In this paper we address the important problem of optimizing regularization parameters in neural network modelling. The suggested optimization scheme is an extended version of the recently presented algorithm \cite{Larsen4}. The idea is to minimize an empirical estimate -- such as the cross-validation estimate -- of the generalization error with respect to the regularization parameters. This is done with a simple iterative gradient descent scheme that requires virtually no additional programming overhead compared to standard training. Experiments with feed-forward neural network models for time series prediction and classification tasks show the viability and robustness of the algorithm. Moreover, we provide simple theoretical examples that illustrate the potential and limitations of the proposed regularization framework.

To appear in "Neural Networks: Tricks of the Trade", G.B. Orr and K.-R. Müller (eds.), Springer-Verlag.
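To make the idea in the abstract concrete, the following is a minimal sketch (not the paper's algorithm, which derives analytic gradients for neural network weight decay): a single regularization parameter lam is adapted by gradient descent on a hold-out validation error, with ridge regression standing in for the network and a finite-difference gradient standing in for the analytic one. All data, step sizes, and the split are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))
    w_true = rng.normal(size=10)
    y = X @ w_true + 0.5 * rng.normal(size=100)

    # Split into training and validation sets; the validation error plays
    # the role of the empirical estimate of the generalization error.
    X_tr, y_tr = X[:70], y[:70]
    X_va, y_va = X[70:], y[70:]

    def fit(lam):
        """Training step: ridge solution for weight-decay parameter lam."""
        d = X_tr.shape[1]
        return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

    def val_error(lam):
        """Validation mean squared error as a function of lam."""
        r = X_va @ fit(lam) - y_va
        return float(r @ r) / len(y_va)

    # Iterative gradient descent on lam; the gradient of the validation
    # error is approximated by a central finite difference here.
    lam, eta, eps = 1.0, 0.5, 1e-4
    for _ in range(50):
        g = (val_error(lam + eps) - val_error(lam - eps)) / (2 * eps)
        lam = max(lam - eta * g, 0.0)  # keep the penalty non-negative

    print(f"adapted lambda = {lam:.4f}, validation MSE = {val_error(lam):.4f}")

Each outer step retrains the model under the current lam and then nudges lam downhill on the validation error, which mirrors the alternation between standard training and regularization-parameter updates described above.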