IntelLabs / bayesian-torch

A library for Bayesian neural network layers and uncertainty estimation in Deep Learning extending the core of PyTorch

FR: Enable forward method of Bayesian Layers to return value only for smoother integration with PyTorch

piEsposito opened this issue · comments

It would be nice if we could store the KL divergence value as an attribute of each Bayesian layer and return it from the forward method only when needed.

That would reduce friction when integrating with PyTorch, making it possible to "plug and play" bayesian-torch layers in deterministic models.

It would look something like this:

def forward(self, x, return_kl=False):
    ...
    self.kl = kl  # always stored as an attribute

    if return_kl:
        return out, kl
    return out

We can then retrieve the KL values from the Bayesian layers when computing the loss, with no disruptive changes to existing code, which might encourage users to try the library.
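To illustrate the proposed usage pattern, here is a minimal, framework-agnostic sketch. The layer class, its attribute names other than `kl`, and the KL value itself are all hypothetical stand-ins, not the real bayesian-torch API; the point is only the call-site ergonomics of `return_kl`:

```python
import random


class MockBayesianLinear:
    """Toy stand-in for a Bayesian layer that stores its KL as an attribute."""

    def __init__(self):
        self.kl = 0.0

    def forward(self, x, return_kl=False):
        # Stand-in stochastic computation (a real layer would sample weights).
        out = [v * random.gauss(1.0, 0.1) for v in x]
        kl = 0.5  # stand-in KL divergence value
        self.kl = kl  # always stored, so callers can fetch it at loss time
        if return_kl:
            return out, kl
        return out


layer = MockBayesianLinear()

# Deterministic-style call site: no tuple unpacking needed.
out = layer.forward([1.0, 2.0])

# At loss time, the KL term is read back from the layer attribute.
kl_loss = layer.kl
```

A model built from such layers can be dropped into existing deterministic training code unchanged, and the training loop only needs one extra line to sum `layer.kl` over the Bayesian layers.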

I can work on that also.

Thanks @piEsposito, this feature will be helpful for inference in particular, since the KL computation is not required there.

@ranganathkrishnan you're welcome. I've already linked a PR that introduces this feature.