google-deepmind / deepmind-research

This repository contains implementations and illustrative code to accompany DeepMind publications.

Enformer loaded from checkpoint does not work because of "missing positional argument: is_training"

frstyang opened this issue

I ran the checkpoint-loading portion of the enformer-training.ipynb provided (I believe) by Kyle Taylor and @alimuldal, but the model cannot run a forward pass. It raises `TypeError: __call__() missing 1 required positional argument: 'is_training'`, even though `is_training=False` is explicitly passed as a keyword argument. Is there a workaround or fix for this? I would like to access the `.trunk` attribute to compute internal embeddings of sequences, but I hit the same problem when I call that method as well.
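
For context, a minimal sketch of the failing setup, assuming the checkpoint-loading flow from the notebook; the checkpoint path and the model hyperparameters here are placeholders, not confirmed values:

```python
import tensorflow as tf
from enformer import Enformer  # enformer.py from this repository

# Construct the model; these hyperparameters are assumed defaults.
model = Enformer(channels=1536, num_heads=8, num_transformer_layers=11)

# Restore pretrained weights; the path is a placeholder.
checkpoint = tf.train.Checkpoint(module=model)
checkpoint.restore('path/to/checkpoint').expect_partial()

# Dummy one-hot input; 196,608 bp is the model's expected sequence length.
sequence = tf.random.uniform((1, 196_608, 4))

# Reported to raise:
#   TypeError: __call__() missing 1 required positional argument: 'is_training'
# even though the keyword is passed explicitly.
outputs = model(sequence, is_training=False)
```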

I ran into the same problem. Did you figure it out? Thanks!

This was a hack, but I edited the enformer.py file to change all instances of `is_training: bool` to `is_training: bool = False`, and then I was able to extract embeddings in the notebook.
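
Concretely, the edit gives `is_training` a default so the argument can be omitted. A sketch of the before/after pattern (the real signatures in enformer.py carry more parameters; the class names here are illustrative only):

```python
import sonnet as snt
import tensorflow as tf

class BlockBefore(snt.Module):
    # Original style: is_training is required, so a call that loses the
    # keyword fails with the TypeError above.
    def __call__(self, inputs: tf.Tensor, is_training: bool) -> tf.Tensor:
        return inputs  # placeholder body

class BlockAfter(snt.Module):
    # The hack: default to False, so inference-time calls succeed even
    # when the keyword is not forwarded.
    def __call__(self, inputs: tf.Tensor, is_training: bool = False) -> tf.Tensor:
        return inputs  # placeholder body
```

With that change in place, the internal embeddings should be reachable as, e.g., `embeddings = model.trunk(sequence)`, using the model and dummy input from the earlier sketch.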

Thanks a lot! It saved me a lot of time.