hughperkins / pytorch

Python wrappers for torch and lua

Is it possible to write my own loss function (criterion) using sklearn for torch, while the new loss function still uses some original torch criterion?

brisker opened this issue · comments

commented

Is it possible to write my own loss function (criterion) using sklearn for torch, while the new loss function still uses some original torch criterion?
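
To make the question concrete, the sklearn side of such a loss might look roughly like the sketch below; the choice of `log_loss` is only an illustrative assumption, and nothing here touches torch yet. The open question is how to hook something like this into a torch criterion.

```python
# Sketch of the sklearn-side computation only; log_loss is just an
# example metric, and this does not interact with torch at all.
import numpy as np
from sklearn.metrics import log_loss

def sklearn_loss(probs, targets):
    # probs: (N, C) predicted class probabilities, targets: (N,) class indices
    return log_loss(targets, probs, labels=list(range(probs.shape[1])))

probs = np.array([[0.9, 0.1], [0.2, 0.8]])
targets = np.array([0, 1])
print(sklearn_loss(probs, targets))  # small loss for mostly-correct predictions
```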

I guess your question comes down to "can I pass a python function into lua, so that it becomes a lua function, in lua, that proxies into the python function?"

It seems like it should be possible to implement such a thing; I can't think of any particular reason why it wouldn't be fairly straightforward to get working, but I'm pretty sure it's not implemented currently. I don't have time to implement this at the moment, so you would need to add such functionality yourself, into pytorch.

What works currently:

  • passing an object from lua to python, such that the lua object is wrapped by a python object (or retrieving such an object out of lua, from python)
  • calling methods on this object from python; these methods can return additional lua objects, which are wrapped into python objects and returned to the python side (see the sketch below)
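
A minimal sketch of that flow, following the pattern in this repo's README; the lua file name, class name, constructor arguments, and the trainBatch method are invented for illustration (assumptions, not part of this issue):

```python
import numpy as np
import PyTorchHelpers

# Load a lua class from a lua file (hypothetical file/class names);
# the returned object is a python proxy around the lua object.
TorchModel = PyTorchHelpers.load_lua_class('my_model.lua', 'MyModel')
model = TorchModel('float', 28, 10)  # whatever args the lua constructor expects

images = np.random.rand(32, 1, 28, 28).astype(np.float32)
labels = (np.random.randint(0, 10, size=(32,)) + 1).astype(np.uint8)  # lua is 1-indexed

# Each call proxies into lua; lua return values (numbers, tables, tensors)
# come back wrapped as python objects.
loss = model.trainBatch(0.02, labels, images)
print(loss)
```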
commented

@hughperkins
Thanks for your reply!
Let me make my question more specific:
I want to write the code mainly in lua, calling numpy and sklearn functions from lua to operate on torch tensors, and the tensors returned by numpy and sklearn need to come back to lua again.
Is that possible right now?

commented

@hughperkins
Thanks for your reply!
Even more specifically: how do I call back into lua? It seems that there is no "require numpy" or "require sklearn" in lua or torch.
Note that I want to write the main code in lua.

commented

@hughperkins
Hi, I am using pytorch, and after I finish training a model, I find the number of training epochs seems not to be enough, so I reload the model and try to finetune (retrain) it. But why does the training loss look like the loss at the beginning of training, as if I were training it from scratch?! I am really confused by this bug. Help?