glm-tools / pyglmnet

Python implementation of elastic-net regularized generalized linear models

Home Page: http://glm-tools.github.io/pyglmnet/

Repository from Github: https://github.com/glm-tools/pyglmnet

Can I remove the intercept?

DreHar opened this issue · comments

Hi All,

I'm sorry if I am just being dull, but is it possible to remove the intercept in GLM/GLMCV? I believe it is supported in the R version of glmnet (https://cran.r-project.org/web/packages/glmnet/ChangeLog version 1.9-3).

Could I comment out the intercept/beta0 calculations in pyglmnet.py and just set beta0 to 0?
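As an aside, simply zeroing beta0 after fitting is not the same as fitting without an intercept, because the remaining weights were optimized under the assumption that the intercept was there. A quick NumPy illustration (plain least squares, not pyglmnet code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 5.0  # true slope 3, true intercept 5

# Fit with an intercept (augment X with a ones column).
Xa = np.column_stack([np.ones(len(X)), X])
b_with = np.linalg.lstsq(Xa, y, rcond=None)[0]    # [intercept, slope]

# Fit with no intercept at all: the slope absorbs some of the offset.
b_without = np.linalg.lstsq(X, y, rcond=None)[0]

def sse(pred):
    """Sum of squared errors against y."""
    return float(np.sum((pred - y) ** 2))

# Refitting without an intercept fits the no-intercept model at least as
# well as keeping the old slope and zeroing the fitted intercept.
assert sse(X[:, 0] * b_without[0]) <= sse(X[:, 0] * b_with[1])
```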

Thanks!

It's not supported at the moment, but it would be a welcome addition. You'd need a `fit_intercept` parameter, which is a boolean.
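A minimal sketch of what such an API could look like (the `fit_intercept` name mirrors scikit-learn; the class and internals here are illustrative ordinary least squares, not pyglmnet's actual implementation):

```python
import numpy as np

class GLMSketch:
    """Illustrative linear-Gaussian model with an optional intercept."""

    def __init__(self, fit_intercept=True):
        self.fit_intercept = fit_intercept

    def fit(self, X, y):
        if self.fit_intercept:
            # Prepend a column of ones so the first coefficient acts as beta0.
            X = np.column_stack([np.ones(X.shape[0]), X])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        if self.fit_intercept:
            self.beta0_, self.beta_ = coef[0], coef[1:]
        else:
            # No intercept: report beta0_ as exactly 0, as sklearn does.
            self.beta0_, self.beta_ = 0.0, coef
        return self

    def predict(self, X):
        return self.beta0_ + X @ self.beta_
```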

I have made an attempt to add `fit_intercept` as a parameter and opened a [WIP] PR - #310

Please let me know if there are any changes or adjustments. There are a few style points I wasn't sure about. I am not sure if it would be better to

  • pass fit_intercept everywhere as a parameter
  • change all methods to expect beta0 and beta and pass beta0 as 0 for the no intercept case everywhere
  • in gradL2Loss, whether there should be a single fit_intercept check at the bottom followed by the per-distribution calculation, or multiple fit_intercept checks inside the current if/else. Currently I just compute grad0 and then don't use it
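The "single check at the bottom" option from the list above could look something like the following sketch. This is a toy squared-error gradient for the Gaussian case only; the function name and signature are hypothetical, and pyglmnet's real gradient routine handles several distributions plus regularization:

```python
import numpy as np

def grad_l2loss_sketch(beta0, beta, X, y, fit_intercept=True):
    """Gradient of mean squared-error loss with one intercept check at the end."""
    n = X.shape[0]
    residual = (beta0 + X @ beta) - y
    grad_beta = X.T @ residual / n
    # One check here, instead of branching inside every per-distribution case.
    # When fit_intercept is False the intercept gradient is simply zero, so
    # beta0 never moves during optimization.
    grad_beta0 = residual.mean() if fit_intercept else 0.0
    return grad_beta0, grad_beta
```

The advantage of this layout is that each per-distribution branch stays unchanged, and the intercept logic lives in exactly one place.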

What I have tried to do is add the `fit_intercept` parameter to the GLM and GLMCV classes. Within all methods, the state of `fit_intercept` is inferred. Then, before finishing, the GLM class separates out `beta0_` and `beta_`. I had a look at sklearn.linear_models and this appeared to be how they handled it.

Sorry, I just realised I made a mistake and I do have some unit test failures. I will fix these up; any style or directional feedback is still much appreciated.

That seems fair and yeah, I would try to mimic sklearn as much as possible. Thanks a lot for the PR!

closed by #310