davidtvs / pytorch-lr-finder

A learning rate range test implementation in PyTorch

ValueError: too many values to unpack (expected 2)

franz101 opened this issue

Code:
```python
lr_finder.range_test(dataloader, end_lr=100, num_iter=100)
```
Response:
```
ValueError: too many values to unpack (expected 2)
```

Latest PyTorch and data loader

I'm looking into this and will update this thread if further information is needed to reproduce the error.

BTW, while investigating this issue I ran into a related problem: "Segmentation Fault (core dumped) with 1.4.0". I am going to check that first.

OK, the segmentation fault problem is solved. It was caused by PyTorch 1.4 being installed with the wrong CUDA backend (10.1 -> 9.2). To install a specific version of PyTorch with pip, we should use `===` instead of `==` to pin the exact version we want.
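For reference, the difference between the two operators looks like this (the version number below is only an example, adapt it to the build you need):

```shell
# `==` uses standard version matching, while `===` is pip's
# arbitrary-equality operator: it compares the published version
# string literally instead of parsing and normalizing it.
pip install torch===1.4.0
```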

Let's get back to this issue.
Currently, I cannot reproduce it, and it does not seem to be related to the PyTorch version.
Could you please provide a minimal, reproducible example of your problem for us?
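In the meantime, here is a minimal sketch of the kind of setup that could trigger this error (the dataset class and shapes below are made up for illustration): if each sample carries more than two values, the two-name unpacking that `range_test` performs on every batch fails.

```python
import torch
from torch.utils.data import Dataset, DataLoader

# Hypothetical dataset: each sample is a 3-tuple (inputs, labels, extra),
# one value more than the two that lr_finder expects per batch.
class TripletDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.randn(4), torch.tensor(idx % 2), idx

loader = DataLoader(TripletDataset(), batch_size=2)

# This mirrors the unpacking done inside the library's iterator wrapper
# and raises the reported error for 3-element batches.
try:
    inputs, labels = next(iter(loader))
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)
```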

Besides, a possibly related fix has been made by @davidtvs in eaf78b6, so make sure you are using the latest version of torch-lr-finder too.

Ya, I would also need more information to look into this properly, but my first guess is the same as @NaleRaphael's: try the latest version of the package.

Other than that, we would need a copy-paste of the error message and stack trace output by Python or a minimal reproducible example.

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-73-2490cbba3642> in <module>()
      5 optimizer = torch.optim.Adam(model.parameters(), lr=0.1, weight_decay=1e-2)
      6 lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
----> 7 lr_finder.range_test(train_dl, val_loader=val_dl, end_lr=1, num_iter=100, step_mode="linear")
      8 lr_finder.plot(log_lr=False)
      9 lr_finder.reset()

3 frames
/usr/local/lib/python3.6/dist-packages/torch_lr_finder/lr_finder.py in range_test(self, train_loader, val_loader, start_lr, end_lr, num_iter, step_mode, smooth_f, diverge_th, accumulation_steps)
    187         for iteration in tqdm(range(num_iter)):
    188             # Train on batch and retrieve loss
--> 189             loss = self._train_batch(iter_wrapper, accumulation_steps)
    190             if val_loader:
    191                 loss = self._validate(val_loader)

/usr/local/lib/python3.6/dist-packages/torch_lr_finder/lr_finder.py in _train_batch(self, iter_wrapper, accumulation_steps)
    235         self.optimizer.zero_grad()
    236         for i in range(accumulation_steps):
--> 237             inputs, labels = iter_wrapper.get_batch()
    238             inputs, labels = self._move_to_device(inputs, labels)
    239 

/usr/local/lib/python3.6/dist-packages/torch_lr_finder/lr_finder.py in get_batch(self)
    483 
    484     def get_batch(self):
--> 485         return next(self)

/usr/local/lib/python3.6/dist-packages/torch_lr_finder/lr_finder.py in __next__(self)
    470         # Get a new set of inputs and labels
    471         try:
--> 472             inputs, labels = next(self._iterator)
    473         except StopIteration:
    474             if not self.auto_reset:

ValueError: too many values to unpack (expected 2)
```

Hi @AlxZed, thanks for posting the traceback.
There is indeed a missing asterisk operator for unpacking the remaining elements at line 472 of lr_finder.py.

```python
def __next__(self):
    # Get a new set of inputs and labels
    try:
        inputs, labels = next(self._iterator)  # <- fails when a batch has more than 2 elements
    except StopIteration:
        ...
```
I will make a patch for it later.
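For reference, a sketch of what such a patch could look like (`unpack_batch` is a hypothetical standalone helper for illustration, not the library's actual code): a starred target absorbs any extra values a dataloader yields beyond `(inputs, labels)`.

```python
def unpack_batch(batch):
    """Hypothetical helper mirroring the proposed fix: the starred
    target collects and discards any values beyond the first two."""
    inputs, labels, *extras = batch  # extras is [] for plain 2-tuples
    return inputs, labels

# Works for the usual 2-element batches...
print(unpack_batch(("x", "y")))          # ('x', 'y')
# ...and no longer raises for batches carrying extra values.
print(unpack_batch(("x", "y", "meta")))  # ('x', 'y')
```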

This issue should be fixed in v0.1.4. Thanks @AlxZed for reporting and @NaleRaphael for fixing it.