Classical approaches to system identification are based on parametric estimation paradigms from mathematical statistics. In this setting, a key point is the selection of the most adequate model structure, typically performed via complexity measures such as Akaike's information criterion (AIC). Starting from the linear scenario and then moving to the nonlinear one, this talk will describe how the model selection problem can be successfully addressed by a different approach based on regularization theory. In particular, I will discuss the use of Bayesian kernel-based methods in which the unknown system is modeled as a Gaussian process whose covariance (kernel) encodes information on
system stability and/or fading memory. Here, tuning of model complexity acquires a whole new dimension and richness: it amounts to choosing continuous regularization parameters rather than discrete model orders.
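As a rough illustration of the kernel-based view described above (the data, hyperparameter values, and kernel choice below are illustrative assumptions, not taken from the talk), the following NumPy sketch estimates a stable FIR impulse response with the so-called TC (tuned/correlated) kernel, whose exponential decay encodes system stability; the estimate is a regularized least-squares (MAP) solution rather than an AIC-selected parametric model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stable system: exponentially decaying impulse response (illustrative).
n = 50                        # FIR length
g_true = 0.8 ** np.arange(n)  # true impulse response (stable, fading memory)

# Input/output data with measurement noise.
N = 200
u = rng.standard_normal(N + n)
Phi = np.column_stack([u[n - k : n - k + N] for k in range(n)])  # regression matrix of lagged inputs
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# TC kernel: K[i, j] = c * alpha**max(i, j) — a Gaussian-process prior covariance
# whose decay along the diagonal encodes stability of the impulse response.
c, alpha, sigma2 = 1.0, 0.8, 0.01  # continuous hyperparameters (fixed here; in practice tuned, e.g. by marginal likelihood)
idx = np.arange(n)
K = c * alpha ** np.maximum(idx[:, None], idx[None, :])

# Regularized (MAP) estimate:
#   g_hat = (Phi^T Phi + sigma2 * K^{-1})^{-1} Phi^T y
#         = K Phi^T (Phi K Phi^T + sigma2 I)^{-1} y   (equivalent form, avoids inverting K)
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)

rel_err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
print(f"relative error of regularized estimate: {rel_err:.3f}")
```

Here model complexity is governed by the continuous hyperparameters (c, alpha, sigma2) rather than by a discrete FIR order, which is the shift in perspective the abstract refers to.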