insung3511 / torch-study

Study repository for the PyTorch library


styling_gan_model.ipynb Object type return error

insung3511 opened this issue

Error Code

def runStyleTransfer(cnn, contentImg, styleImg, num_steps=300, style_weight=100000, content_weight=1):
    inputImg = contentImg.clone().detach().requires_grad_(True)
    model, styleLosses, contentLosses = getStyleModelAndLosses(cnn, styleImg, contentImg)
    optimizer = optim.LBFGS([inputImg])
    iteration = [0]
    
    while iteration[0] <= num_steps:
        def closuer():
            inputImg.data.clamp_(0, 1)
            optimizer.zero_grad()
            model(inputImg)
            
            styleScore = 0
            contentScore = 0
            for sl in styleLosses:
                styleScore += sl.loss
            
            for cl in contentLosses:
                contentScore += cl.loss
            
            loss = (style_weight * styleScore) + (content_weight * contentScore)
            loss.backward()
            iteration[0] += 1
            if iteration[0] % 50 == 0:
                print('Iteration {}: Style Loss {:4f}\tContent Loss : {:4f}'.format((
                    iteration[0], styleScore.item(), contentScore.item()
                )))
            return torch.Tensor(styleScore + contentScore)
        optimizer.step(closuer)
    return inputImg.data.clamp_(0, 1)

Error

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[80], line 1
----> 1 output = runStyleTransfer(cnn, contentImg=contentImg, styleImg=styleImg)

Cell In[78], line 30, in runStyleTransfer(cnn, contentImg, styleImg, num_steps, style_weight, content_weight)
     28         return torch.Tensor(styleScore + contentScore)
     29     print(type(closuer)) 
---> 30     optimizer.step(closuer)
     31 return inputImg.data.clamp_(0, 1)

File ~/miniconda3/lib/python3.10/site-packages/torch/optim/optimizer.py:140, in Optimizer._hook_for_profile.<locals>.profile_hook_step.<locals>.wrapper(*args, **kwargs)
    138 profile_name = "Optimizer.step#{}.step".format(obj.__class__.__name__)
    139 with torch.autograd.profiler.record_function(profile_name):
--> 140     out = func(*args, **kwargs)
    141     obj._optimizer_step_code()
    142     return out

File ~/miniconda3/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
     24 @functools.wraps(func)
     25 def decorate_context(*args, **kwargs):
     26     with self.clone():
---> 27         return func(*args, **kwargs)

File ~/miniconda3/lib/python3.10/site-packages/torch/optim/lbfgs.py:438, in LBFGS.step(self, closure)
...
---> 26         iteration[0], styleScore.item(), contentScore.item()
     27     )))
     28 return torch.Tensor(styleScore + contentScore)

AttributeError: 'int' object has no attribute 'item'

This error happens inside the optimizer.step(closuer) call; everything works fine until the 50th iteration. I'm still trying to solve this problem, so if anyone knows a solution, please comment on this issue. Thanks :)

I still can't figure out how the closure function's return value is supposed to be handled, and I also don't see where an integer variable could be coming from inside the closuer object.
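For reference, here is a minimal sketch of the closure pattern torch.optim.LBFGS expects. The quadratic objective is only a stand-in, not the notebook's style model, but it shows that the closure should return the scalar loss tensor itself rather than wrapping it in a new torch.Tensor(...):

import torch
from torch import optim

# Toy parameter and objective, standing in for inputImg and the style model
x = torch.randn(3, requires_grad=True)
optimizer = optim.LBFGS([x])

def closure():
    # LBFGS re-evaluates the objective, so the closure redoes forward + backward
    optimizer.zero_grad()
    loss = (x ** 2).sum()    # a scalar tensor with a grad_fn
    loss.backward()
    return loss              # return the loss tensor itself

for _ in range(5):
    optimizer.step(closure)  # step() may call the closure several times internally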

I checked the variable types and values:

returnValue = styleScore + contentScore
print(returnValue)
return returnValue

and this is the result:

tensor(7.9810e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(7.9804e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(5.8912e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(4.0803e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(2.8521e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(3.7512e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(4.9474e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(3.6544e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(0.0001, device='mps:0', grad_fn=<AddBackward0>)
tensor(3.5965e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.6215e-05, device='mps:0', grad_fn=<AddBackward0>)
tensor(3.3496e-06, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.5352e-06, device='mps:0', grad_fn=<AddBackward0>)
tensor(7.3660e-07, device='mps:0', grad_fn=<AddBackward0>)
tensor(4.1677e-07, device='mps:0', grad_fn=<AddBackward0>)
tensor(2.9011e-07, device='mps:0', grad_fn=<AddBackward0>)
tensor(2.0193e-07, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.6363e-07, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.4279e-07, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.2995e-07, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.0171e-07, device='mps:0', grad_fn=<AddBackward0>)
tensor(7.7450e-08, device='mps:0', grad_fn=<AddBackward0>)
tensor(4.7257e-08, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.9968e-08, device='mps:0', grad_fn=<AddBackward0>)
tensor(9.0057e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(6.4314e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(5.8167e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(5.8167e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(5.1039e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(5.1039e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(4.1998e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(4.1998e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(2.6534e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(4.5921e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.8199e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.7101e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.7101e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.6390e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.6390e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.5769e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.5769e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.4023e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.4023e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.3590e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.3590e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.2115e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.2115e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.1137e-09, device='mps:0', grad_fn=<AddBackward0>)
tensor(1.1137e-09, device='mps:0', grad_fn=<AddBackward0>)

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[17], line 1
----> 1 output = runStyleTransfer(cnn, contentImg=contentImg, styleImg=styleImg)

Cell In[15], line 32, in runStyleTransfer(cnn, contentImg, styleImg, num_steps, style_weight, content_weight)
     30         print(returnValue)
     31         return returnValue
---> 32     optimizer.step(closuer)
     33 return inputImg.data.clamp_(0, 1)

File ~/miniconda3/lib/python3.10/site-packages/torch/optim/optimizer.py:140, in Optimizer._hook_for_profile.<locals>.profile_hook_step.<locals>.wrapper(*args, **kwargs)
    138 profile_name = "Optimizer.step#{}.step".format(obj.__class__.__name__)
    139 with torch.autograd.profiler.record_function(profile_name):
--> 140     out = func(*args, **kwargs)
    141     obj._optimizer_step_code()
    142     return out

File ~/miniconda3/lib/python3.10/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
     24 @functools.wraps(func)
     25 def decorate_context(*args, **kwargs):
     26     with self.clone():
---> 27         return func(*args, **kwargs)

File ~/miniconda3/lib/python3.10/site-packages/torch/optim/lbfgs.py:438, in LBFGS.step(self, closure)
...
     27     )))
     29 returnValue = styleScore + contentScore
     30 print(returnValue)

AttributeError: 'int' object has no attribute 'item'

Solved. It wasn't a big issue in the end. During training, the content score stays a plain Python int rather than becoming a tensor, and the problem showed up when printing iteration, styleScore, and contentScore.
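For context on why contentScore can end up as an int (my reading of the mechanism, not something the notebook states explicitly): it is initialized to the Python int 0 and only turns into a tensor once a tensor loss is actually added to it, so .item() fails whenever nothing has been accumulated:

contentScore = 0           # plain Python int, as in the closure above
for cl in []:              # hypothetical case: no content-loss terms contribute
    contentScore += cl.loss
print(type(contentScore))  # <class 'int'>
# contentScore.item()      # AttributeError: 'int' object has no attribute 'item'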

The original code was:

if iteration[0] % 50 == 0:
                print('Iteration {}: Style Loss {:4f}\tContent Loss : {:4f}'.format(
                    iteration[0], styleScore.item(), contentScore.item()
                ))

and this is the changed version:

if iteration[0] % 50 == 0:
                print('Iteration {}: Style Loss {:4f}\tContent Loss : {:4f}'.format(
                    iteration[0], styleScore.item(), contentScore
                ))
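A slightly more defensive variant (my own sketch, not part of the notebook; as_float is a hypothetical helper) converts a score only when it really is a tensor, so the same print works whether the score stays an int or becomes a tensor:

import torch

def as_float(score):
    # Handles both the plain int 0 initializer and an accumulated loss tensor
    return score.item() if torch.is_tensor(score) else float(score)

# Inside the closure the print would become:
#   print('Iteration {}: Style Loss {:4f}\tContent Loss : {:4f}'.format(
#       iteration[0], as_float(styleScore), as_float(contentScore)))
print(as_float(0), as_float(torch.tensor(0.5)))   # 0.0 0.5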
