Lightning-AI / torchmetrics

Torchmetrics - Machine learning metrics for distributed, scalable PyTorch applications.

Home Page: https://lightning.ai/docs/torchmetrics/

RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

alexge233 opened this issue

🐛 Bug

I'm using the minimal example of BERTScore with predicted and target strings.
The code is wrapped in a LightningModule and called after a step.
I decode the token IDs into strings and feed them into BERTScore.
The end result is this runtime error:

import torch
from torchmetrics.text.bert import BERTScore

def metrics(self, pred_ids: torch.Tensor, target_ids: torch.Tensor):
    tokenizer = self.get_tokenizer()
    # Decode predicted and target token IDs back into strings.
    str_preds = tokenizer.batch_decode(
        pred_ids,
        skip_special_tokens=True,
    )
    str_actual = tokenizer.batch_decode(
        target_ids,
        skip_special_tokens=True,
    )
    # self.device is the LightningModule's current device.
    bertscore = BERTScore(verbose=False, device=self.device)
    return bertscore(
        preds=str_preds,
        target=str_actual,
    )["f1"].mean()

where tokenizer is a Hugging Face AutoTokenizer.
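
For what it's worth, a minimal standalone sketch like the one below (outside Lightning, using the default BERTScore model and made-up example sentences, so it downloads the model on first run) exercises the same call on CPU and, if available, on CUDA; it may help narrow down whether the error depends on the device argument at all:

import torch
from torchmetrics.text.bert import BERTScore

preds = ["hello there", "general kenobi"]
target = ["hello there", "master kenobi"]

# Run the same call on CPU and, if available, on CUDA to see whether the
# RuntimeError only shows up for the GPU device.
devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
for device in devices:
    bertscore = BERTScore(verbose=False, device=device)
    score = bertscore(preds=preds, target=target)
    print(device, score["f1"].mean())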

Environment

  • TorchMetrics version (installed with pip): 1.3.0.post0
  • Python & PyTorch version: Python 3.10, torch 2.1.2
  • AWS base-pyth-ml-g5-48xlarge

Changing the device doesn't seem to make any difference.
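
A hedged workaround sketch, not a confirmed fix: torchmetrics generally recommends creating metrics once in the LightningModule's __init__ so that Lightning moves their state together with the module, rather than constructing a new BERTScore inside every step. Class and attribute names below are illustrative, and the import assumes Lightning 2.x:

from lightning.pytorch import LightningModule
from torchmetrics.text.bert import BERTScore

class MyLitModule(LightningModule):
    def __init__(self):
        super().__init__()
        # Registered as a submodule, so device placement is handled by Lightning.
        self.bertscore = BERTScore(verbose=False)

    def metrics(self, str_preds: list[str], str_actual: list[str]):
        # Score already-decoded strings with the module-level metric.
        return self.bertscore(preds=str_preds, target=str_actual)["f1"].mean()

Whether this resolves the index/device mismatch inside BERTScore is unclear from the report alone, but it removes one source of device ambiguity per step.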

Hi! Thanks for your contribution, great first issue!

Hi @alexge233, thanks for reporting this issue.
I am trying to reproduce the problem. Would it be possible for you to send some more information?

  • The full traceback?
  • What is self.device in the code?