lindermanlab / ssm

Bayesian learning and inference for state space models

Regularization of GLM weights for input driven observations and transitions

sarathnayar opened this issue · comments

I see that InputDrivenObservations has parameters prior_mean and prior_sigma that govern the strength of the prior on the GLM weights. Does this prior act as an L2 regularization? If not, and I wanted to add an L1 or L2 penalty or a custom regularizer on the GLM weights, should I add the regularization term only in the calculation of log_prior, or also in the _objective, _gradient, and _hess functions?
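For context, my understanding is that a zero-mean Gaussian prior on the weights is equivalent to an L2 penalty up to an additive constant. A minimal sketch of that equivalence (the helper `gaussian_log_prior` is hypothetical, not part of ssm's API):

```python
import numpy as np

def gaussian_log_prior(W, prior_mean=0.0, prior_sigma=1.0):
    """Log density of an isotropic Gaussian prior on GLM weights W.

    Up to an additive constant this equals
    -||W - prior_mean||^2 / (2 * prior_sigma^2),
    i.e. an L2 penalty with strength 1 / (2 * prior_sigma^2).
    """
    return -0.5 * np.sum((W - prior_mean) ** 2) / prior_sigma ** 2
```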

Looking at the code, it doesn't look like prior_mean actually gets used in the M step! It only appears in the calculation of log_prior(), which is used for tracking the convergence of EM. On the other hand, prior_sigma does get used: it sets the scale of the L2 regularization. If you wanted to add L1 regularization, you'd have to make the change in all of the functions you listed. However, there are better optimization methods for mixed L1/L2 regularization than the trust_ncg method used in this implementation. It could be a lot of work... it might be easier to call into an off-the-shelf optimizer like cvxpy.
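For illustration, here is a minimal sketch of what the cvxpy route could look like for a single Bernoulli GLM with an elastic-net (mixed L1/L2) penalty. The data, penalty strengths, and problem setup are hypothetical stand-ins, not ssm's actual M-step; in an input-driven HMM you would additionally weight each data term by the posterior state probabilities from the E step.

```python
import cvxpy as cp
import numpy as np

# Hypothetical toy data: X is a design matrix, y are binary observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (rng.uniform(size=200) < 0.5).astype(float)

w = cp.Variable(5)
lam_l1, lam_l2 = 0.1, 0.1  # assumed penalty strengths

# Bernoulli GLM negative log-likelihood: since cp.logistic(z) = log(1 + e^z),
# the NLL is sum(logistic(X @ w)) - y^T (X @ w).
nll = cp.sum(cp.logistic(X @ w)) - y @ (X @ w)

# Elastic-net penalty: the L1 term promotes sparsity, the L2 term shrinks weights.
objective = cp.Minimize(nll + lam_l1 * cp.norm1(w) + lam_l2 * cp.sum_squares(w))
cp.Problem(objective).solve()
print(w.value)
```

Because the objective stays convex, cvxpy handles the non-smooth L1 term directly, which is exactly where a Hessian-based method like trust_ncg runs into trouble.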