Stonesjtu / pytorch_memlab

Profiling and inspecting memory in PyTorch


Error when running on Colab CPU instance

ProGamerGov opened this issue · comments

When I attempt to import anything from pytorch_memlab on a Google Colab CPU instance, I get the following error:

Could not reset CUDA stats and cache: 'NoneType' object has no attribute 'lower'

Can you post the full stack trace and your PyTorch version?

Using PyTorch 1.8.0+cu101, running:

from pytorch_memlab import MemReporter

results in:

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-4-f9afd24a6183> in <module>()
----> 1 from pytorch_memlab import MemReporter

5 frames

/usr/local/lib/python3.7/dist-packages/pytorch_memlab/__init__.py in <module>()
      1 from .courtesy import Courtesy
      2 from .mem_reporter import MemReporter
----> 3 from .line_profiler import LineProfiler, profile, profile_every, set_target_gpu, clear_global_line_profiler
      4 try:
      5     from .line_profiler.extension import load_ipython_extension

/usr/local/lib/python3.7/dist-packages/pytorch_memlab/line_profiler/__init__.py in <module>()
      1 from .line_profiler import LineProfiler
----> 2 from .profile import profile, profile_every, set_target_gpu, clear_global_line_profiler

/usr/local/lib/python3.7/dist-packages/pytorch_memlab/line_profiler/profile.py in <module>()
      4 
      5 global_line_profiler = LineProfiler()
----> 6 global_line_profiler.enable()
      7 
      8 

/usr/local/lib/python3.7/dist-packages/pytorch_memlab/line_profiler/line_profiler.py in enable(self)
     88         try:
     89             torch.cuda.empty_cache()
---> 90             self._reset_cuda_stats()
     91         # Pytorch-1.7.0 raises AttributeError while <1.6.0 raises AssertionError
     92         except (AssertionError, AttributeError) as error:

/usr/local/lib/python3.7/dist-packages/pytorch_memlab/line_profiler/line_profiler.py in _reset_cuda_stats(self)
     80 
     81     def _reset_cuda_stats(self):
---> 82         torch.cuda.reset_peak_memory_stats()
     83         torch.cuda.reset_accumulated_memory_stats()
     84 

/usr/local/lib/python3.7/dist-packages/torch/cuda/memory.py in reset_peak_memory_stats(device)
    236     """
    237     device = _get_device_index(device, optional=True)
--> 238     return torch._C._cuda_resetPeakMemoryStats(device)
    239 
    240 

RuntimeError: invalid argument to reset_peak_memory_stats

Well, it looks like PyTorch keeps changing the error type it raises when no GPU is available.

Can you try catching an extra RuntimeError here at /usr/local/lib/python3.7/dist-packages/pytorch_memlab/line_profiler/line_profiler.py:92?
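A minimal sketch of that suggestion, with the torch call replaced by a stub so it runs on any machine (on a CPU-only instance, torch.cuda.reset_peak_memory_stats() raises the RuntimeError shown in the traceback above); the function and message names here just mirror the traceback, not the exact library code:

```python
def _reset_cuda_stats():
    # Stub standing in for torch.cuda.reset_peak_memory_stats() /
    # reset_accumulated_memory_stats(), which on PyTorch 1.8 without
    # a GPU raise RuntimeError as seen in the traceback.
    raise RuntimeError('invalid argument to reset_peak_memory_stats')

def enable():
    try:
        _reset_cuda_stats()
        return True
    # PyTorch <1.6 raises AssertionError, 1.7 raises AttributeError,
    # and 1.8 raises RuntimeError when no GPU is present, so the
    # except clause has to list all three.
    except (AssertionError, AttributeError, RuntimeError) as error:
        print('Could not reset CUDA stats and cache:', error)
        return False

enable()  # degrades gracefully instead of crashing on a CPU-only machine
```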

Probably we should use torch.cuda.is_available() instead of a try-catch block.
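A hedged sketch of that alternative: guard the CUDA bookkeeping with is_available() so no version-dependent exception handling is needed. The cuda argument is a stand-in for the torch.cuda module, used here only so the guard can be exercised without a GPU:

```python
from types import SimpleNamespace

def enable(cuda):
    # Check for a usable GPU up front instead of catching whichever
    # exception this PyTorch version happens to raise.
    if not cuda.is_available():
        return False  # CPU-only: skip CUDA stat resets entirely
    cuda.empty_cache()
    cuda.reset_peak_memory_stats()
    cuda.reset_accumulated_memory_stats()
    return True

# Stand-in for torch.cuda on a CPU-only Colab instance
cpu_only = SimpleNamespace(is_available=lambda: False)
print(enable(cpu_only))  # no CUDA calls are attempted
```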

@Stonesjtu Hi and thanks for the fix!
Would it be possible to make a PyPI release with this MR?

Thanks

Travis CI seems broken; working on that.

@RobinFrcd I've manually uploaded a new version, 0.2.4, to PyPI. Could you try upgrading?

@Stonesjtu Perfect, it works like a charm! Thank you very much for the prompt release! 👍