Stonesjtu / pytorch_memlab

Profiling and inspecting memory in pytorch


Support for gpu2,3,4

WindChimeRan opened this issue · comments

pytorch_memlab works excellently on gpu0; however, the reported memory of all tensors drops to 0 when I use gpu2, 3, or 4.

Thank you for this productive tool for the open-source community!

Can you please share the sample output for that?

import torch
from pytorch_memlab import profile_every

def test_line_report_method(device: int):
    class Net(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # Parameters live on the target GPU, not on cuda:0.
            self.linear = torch.nn.Linear(100, 100).cuda(device)
            self.drop = torch.nn.Dropout(0.1)

        @profile_every(1)
        def forward(self, inp):
            return self.drop(self.linear(inp))

    net = Net()
    inp = torch.Tensor(50, 100).cuda(device)
    net(inp)

if __name__ == "__main__":
    # Run the forward pass on gpu2; the profiler reports 0 bytes for every line.
    test_line_report_method(2)
Line # Max usage   Peak usage diff max diff peak  Line Contents
===============================================================
    12                                                   @profile_every(1)
    13                                                   def forward(self, inp):
    14     0.00B        0.00B    0.00B    0.00B              return self.drop(self.linear(inp))
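The zero readings look consistent with the profiler sampling memory statistics on the current CUDA device (cuda:0 by default) rather than on the device that actually holds the tensors. A minimal sketch of that difference using plain PyTorch, assuming a machine with at least three GPUs; the tensor allocation here is just an illustration:

import torch

# Allocate a tensor on gpu2 while the current device is still gpu0.
x = torch.empty(50, 100, device='cuda:2')

# Querying without a device argument reads the *current* device (cuda:0),
# which reports 0 bytes because nothing was allocated there.
print(torch.cuda.memory_allocated())    # 0

# Querying the device that actually holds the tensor shows the allocation.
print(torch.cuda.memory_allocated(2))   # > 0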

I would like to introduce a global switch to select the GPU of interest.
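Until such a switch exists, one possible workaround is to remap the GPU of interest to device 0 with CUDA_VISIBLE_DEVICES before CUDA is initialised, so the profiler's default device is the one actually in use. A sketch, assuming the process does not need the other GPUs:

import os

# Must be set before the first CUDA call; physical gpu2 then
# appears as cuda:0 inside this process.
os.environ['CUDA_VISIBLE_DEVICES'] = '2'

import torch
from pytorch_memlab import profile_every

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(100, 100).cuda()  # lands on remapped cuda:0
        self.drop = torch.nn.Dropout(0.1)

    @profile_every(1)
    def forward(self, inp):
        return self.drop(self.linear(inp))

net = Net()
net(torch.Tensor(50, 100).cuda())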