pytorch / glow

Compiler for Neural Network hardware accelerators


Inference time for Glow with CPU backend is 10 times higher than PyTorch (CPU) for the pretrained model resnet50

teena3 opened this issue · comments


Hi,

I was comparing the inference time of Glow with the CPU backend against PyTorch (built with USE_CUDA=0) for the pretrained model resnet50, with batch size 1 over a total of 384 images.

Based on the Glow examples, I added the lines below to enable Glow with the CPU backend.

  spec = torch_glow.CompilationSpec()
  spec.get_settings().set_glow_backend("CPU")

  compilation_group = torch_glow.CompilationGroup()
  spec.compilation_groups_append(compilation_group)

  input_spec = torch_glow.InputSpec()
  input_spec.set_same_as(inputs)

  compilation_group.input_sets_append([input_spec])

  traced_m = torch.jit.trace(resnet, (inputs,))
  lowered_model = torch_glow.to_glow(traced_m, spec)

  # To measure inference, I looped through all images in the dataset.
  t0 = time.time()
  with torch.no_grad():
      out = lowered_model(inputs)
  t1 = time.time()
  time_elapsed = t1 - t0
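As a side note on methodology: the first call to a JIT-traced or Glow-lowered model can include one-time compilation cost, so excluding warm-up runs from the measurement helps rule that out. Below is a minimal, self-contained timing sketch; the `benchmark` helper name is mine and not part of torch_glow, and the stand-in workload would be replaced by `lambda: lowered_model(inputs)` in the real script.

```python
import time

def benchmark(fn, n_warmup=10, n_iters=100):
    """Return the average wall-clock time per call of fn().

    Warm-up calls are run first and discarded, so any one-time
    compilation or caching cost on the first invocations does not
    inflate the reported average.
    """
    for _ in range(n_warmup):
        fn()
    t0 = time.perf_counter()  # high-resolution monotonic clock
    for _ in range(n_iters):
        fn()
    t1 = time.perf_counter()
    return (t1 - t0) / n_iters

# Stand-in workload for illustration; substitute the lowered model call.
avg_s = benchmark(lambda: sum(i * i for i in range(10_000)))
```

Using `time.perf_counter` instead of `time.time` also avoids resolution and clock-adjustment issues when timing short per-image inference calls.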

Inference time (in seconds) is 10 times higher with the Glow CPU backend than with PyTorch (CPU).
I was expecting reduced inference time for Glow compared to PyTorch.

[image: inference timing comparison]

Glow commit id: 3d54e1e
pytorch commit id: a7b6b1f0614f43b643b32127e52834300f1aecee
pytorch build command: USE_CUDA=0 BUILD_BINARY=OFF BUILD_TEST=0 BUILD_CAFFE2_OPS=0 BUILD_CAFFE2=ON USE_FBGEMM=ON python3.7 setup.py install