TITLE: On Comparison of Adaptive Regularization Methods

AUTHORS: Sigurdur Sigurdsson, Lars Kai Hansen and Jan Larsen
Department of Mathematical Modelling, Building 321
Technical University of Denmark, DK-2800 Lyngby, Denmark
emails: siggi,lkhansen,jl@imm.dtu.dk
www: http://eivind.imm.dtu.dk

ABSTRACT:

Modeling with flexible models, such as neural networks, requires careful control of model complexity and of the generalization ability of the resulting model, a trade-off which finds expression in the ubiquitous bias-variance dilemma.

Regularization is a tool for optimizing the model structure, reducing variance at the expense of introducing extra bias. The overall objective of adaptive regularization is to tune the amount of regularization so as to ensure minimal generalization error.

This paper investigates recently suggested adaptive regularization schemes. Some methods focus directly on minimizing an estimate of the generalization error (either algebraic or empirical), whereas others start from different criteria, e.g., the Bayesian evidence.

We suggest various algorithm extensions and perform numerical experiments with linear models.
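
As an illustration of the general idea of adaptive regularization (and not of the specific algebraic or Bayesian schemes compared in the paper), the following minimal Python sketch tunes the weight-decay strength of a ridge-regularized linear model by minimizing an empirical estimate of the generalization error on a hold-out set. The data, the candidate grid of regularization strengths, and the helper names ridge_fit and val_error are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic linear-regression data (illustrative only).
    n_train, n_val, d = 50, 50, 10
    w_true = rng.normal(size=d)
    X_train = rng.normal(size=(n_train, d))
    y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
    X_val = rng.normal(size=(n_val, d))
    y_val = X_val @ w_true + 0.5 * rng.normal(size=n_val)

    def ridge_fit(X, y, alpha):
        # Closed-form ridge (weight-decay) estimate for a linear model.
        return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

    def val_error(w):
        # Empirical generalization estimate: mean squared error on the hold-out set.
        return np.mean((X_val @ w - y_val) ** 2)

    # Adapt the regularization strength by minimizing the empirical estimate.
    alphas = np.logspace(-4, 2, 25)
    errors = [val_error(ridge_fit(X_train, y_train, a)) for a in alphas]
    best = alphas[int(np.argmin(errors))]
    print(f"selected regularization strength: {best:.4g}")

Here the hold-out error stands in for the generalization error; the schemes studied in the paper replace this simple grid search with gradient-based updates of the regularization parameters under algebraic, empirical, or evidence-based criteria.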

Appears in proc. of NNSP2000, Sydney, Australia, Dec. 11-13, 2000.