ABSTRACT OF LECTURER GHAHRAMANI'S TALK

 

Occam's Razor and Infinite Models

Friday, September 6, 2002, 8:30-9:20

Occam's Razor, the modelling principle that prefers simpler explanations over more complex ones, arises naturally out of the Bayesian model averaging framework. I will review this framework and various approaches that have been used to approximate the integrals required for Bayesian inference, with particular attention to variational approximations. I will briefly describe the application of variational Bayesian methods to many different models in machine learning, focusing on the question of learning model structure (e.g. the number of hidden states, mixture components, etc.). In apparent contradiction with Occam's Razor, it is often natural in the Bayesian framework to use models with infinitely many parameters (e.g. infinitely many hidden states, mixture components, etc.). I will discuss why this is not in fact a contradiction and give examples of several infinite models that can be handled tractably using sampling methods. Finally, I will turn to the question of whether one should pick a simple model with few parameters or a model with infinitely many. Joint work with Matthew J. Beal and Carl Edward Rasmussen.
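
As background for the first claim, here is a minimal sketch (in standard notation, not drawn from the talk itself) of how an automatic Occam's Razor falls out of Bayesian model comparison: the evidence for a model m marginalizes over its parameters, and models are then compared or averaged via their posterior probabilities.

```latex
p(\mathcal{D} \mid m) = \int p(\mathcal{D} \mid \theta, m)\, p(\theta \mid m)\, d\theta,
\qquad
p(m \mid \mathcal{D}) \propto p(\mathcal{D} \mid m)\, p(m).
```

Because a more flexible model must spread its evidence p(D | m) over a larger space of possible datasets, a simpler model that explains the observed data adequately tends to receive higher evidence, so unnecessary complexity is penalized without any explicit complexity term.

As a hedged illustration of why infinitely many components need not conflict with this, consider a Dirichlet process mixture, the kind of construction that underlies infinite mixture models: under the prior, a finite dataset only ever occupies a small, finite number of components. The sketch below draws a random partition from the Chinese restaurant process representation; it is a generic Python illustration, not code from the work described in the talk, and the function name sample_crp_partition is hypothetical.

```python
import numpy as np

def sample_crp_partition(n, alpha, rng=None):
    """Draw a random partition of n items from a Chinese restaurant process
    with concentration parameter alpha.  Returns a list of cluster labels."""
    rng = np.random.default_rng(rng)
    labels = []
    counts = []  # counts[k] = number of items currently assigned to cluster k
    for i in range(n):
        # Item i joins existing cluster k with probability counts[k] / (i + alpha),
        # or starts a new cluster with probability alpha / (i + alpha).
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)   # a brand-new cluster is instantiated
        else:
            counts[k] += 1
        labels.append(int(k))
    return labels

labels = sample_crp_partition(n=1000, alpha=2.0, rng=0)
print("clusters actually used:", len(set(labels)))  # grows only ~ alpha * log(n)
```

Although the model nominally has infinitely many components, a draw for n = 1000 points typically instantiates only on the order of alpha * log(n) clusters, so the effective complexity used to explain a finite dataset remains modest.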