Regularization is a tool for optimizing the model structure: it reduces variance at the expense of introducing extra bias. The overall objective of adaptive regularization is to tune the amount of regularization so as to minimize the generalization error.
This paper investigates recently suggested adaptive regularization schemes. Some methods focus directly on minimizing an estimate of the generalization error (either algebraic or empirical), whereas others start from different criteria, e.g., the Bayesian evidence.
We suggest various algorithm extensions and report numerical experiments with linear models.
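As a minimal sketch of the general idea (not the specific schemes studied in the paper), the following toy example tunes the ridge penalty of a linear model by minimizing an empirical estimate of the generalization error on held-out data; the data, grid, and noise level are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic linear problem: y = X w + noise.
n_train, n_val, d = 40, 40, 10
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
X_val = rng.normal(size=(n_val, d))
y_val = X_val @ w_true + 0.5 * rng.normal(size=n_val)

def ridge_fit(X, y, alpha):
    """Closed-form ridge solution: w = (X^T X + alpha I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def val_error(alpha):
    """Empirical estimate of generalization error: MSE on held-out data."""
    w = ridge_fit(X_train, y_train, alpha)
    return np.mean((X_val @ w - y_val) ** 2)

# Adapt the amount of regularization by searching for the penalty
# that minimizes the estimated generalization error.
alphas = np.logspace(-4, 2, 25)
errors = [val_error(a) for a in alphas]
best_alpha = alphas[int(np.argmin(errors))]
print(f"best alpha: {best_alpha:.4g}, validation MSE: {min(errors):.4g}")
```

The grid search stands in for the gradient-based or evidence-based tuning procedures the paper compares; the common structure is an outer loop adjusting the regularization on top of an inner model fit.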
Appears in proc. of NNSP2000, Sydney, Australia, Dec. 11-13, 2000.