AllanYangZhou / nfn

NF-Layers for constructing neural functionals.

Home Page: https://kaien-yang.github.io/nfn-docs/

Running into InternalTorchDynamoError while executing the INR2ARRAY classifier on the MNIST dataset

kvamsid opened this issue · comments

Hello Team,
While running the INR2ARRAY classification code on the MNIST dataset, I ran into an InternalTorchDynamoError; the logs are below. Could you please help me with this?

[2023-12-18 18:21:07,432] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing getitem
[2023-12-18 18:21:07,438] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing _get_item_by_idx
[2023-12-18 18:21:07,441] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing len
[2023-12-18 18:22:01,441] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing
0%| | 0/200000 [04:56<?, ?it/s]
Error executing job with overrides: ['dset=mnist', 'model=nft', 'compile=true']
Traceback (most recent call last):
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 324, in _compile
out_code = transform_code_object(code, transform)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/bytecode_transformation.py", line 445, in transform_code_object
transformations(instructions, code_options)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 299, in transform
tracer = InstructionTranslator(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1670, in init
self.symbolic_locals = collections.OrderedDict(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1673, in
VariableBuilder(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in call
return self._wrap(value).clone(**self.options())
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 248, in _wrap
output = [
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 249, in
VariableBuilder(self.tx, GetItemSource(self.get_source(), i))(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in call
return self._wrap(value).clone(**self.options())
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 238, in _wrap
return self.wrap_tensor(value)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 639, in wrap_tensor
tensor_variable = wrap_fx_proxy(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 754, in wrap_fx_proxy
return wrap_fx_proxy_cls(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 814, in wrap_fx_proxy_cls
example_value = wrap_to_fake_tensor_and_record(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 957, in wrap_to_fake_tensor_and_record
fake_e = wrap_fake_exception(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 808, in wrap_fake_exception
return fn()
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 958, in
lambda: tx.fake_mode.from_tensor(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 1324, in from_tensor
return self.fake_tensor_converter(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 314, in call
return self.from_real_tensor(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_subclasses/fake_tensor.py", line 272, in from_real_tensor
out = self.meta_converter(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_subclasses/meta_utils.py", line 502, in call
r = self.meta_tensor(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_subclasses/meta_utils.py", line 381, in meta_tensor
s = t.untyped_storage()
NotImplementedError: Cannot access storage of BatchedTensorImpl

Set torch._dynamo.config.verbose=True for more information

You can suppress this exception and fall back to eager by setting:
torch._dynamo.config.suppress_errors = True

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/scratch/sbajjur3/nfn-main/experiments/launch_inr2array.py", line 9, in main
train_and_eval(cfg)
File "/scratch/sbajjur3/nfn-main/experiments/inr2array.py", line 320, in train_and_eval
pred_img = nfnet_fast(params)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 82, in forward
return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 209, in _fn
return fn(*args, **kwargs)
File "/scratch/sbajjur3/nfn-main/experiments/inr2array.py", line 182, in forward
out = params_to_func_params(out)
File "/scratch/sbajjur3/nfn-main/experiments/inr2array.py", line 185, in
out = self.batch_siren(out)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_functorch/vmap.py", line 434, in wrapped
return _flat_vmap(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_functorch/vmap.py", line 39, in fn
return f(*args, **kwargs)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_functorch/vmap.py", line 619, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/eval_frame.py", line 337, in catch_errors
return callback(frame, cache_size, hooks)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 404, in _convert_frame
result = inner_convert(frame, cache_size, hooks)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 104, in _fn
return fn(*args, **kwargs)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 262, in _convert_frame_assert
return _compile(
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/utils.py", line 163, in time_wrapper
r = func(*args, **kwargs)
File "/home/sbajjur3/.conda/envs/inr2array/lib/python3.8/site-packages/torch/_dynamo/convert_frame.py", line 394, in _compile
raise InternalTorchDynamoError() from e
torch._dynamo.exc.InternalTorchDynamoError
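As a hedged aside for later readers (not part of the original thread): the fallback that the error message recommends can be set programmatically before the model runs. The sketch below is unrelated to the NFN codebase; the toy `Linear` module is purely illustrative, and only the `torch._dynamo.config.suppress_errors` switch and `torch.compile` are the APIs being demonstrated.

```python
import torch
import torch._dynamo

# Fall back to eager execution whenever Dynamo raises an internal error,
# as suggested by the error message in the log above.
torch._dynamo.config.suppress_errors = True

# Toy module, for illustration only (not from the NFN code).
model = torch.nn.Linear(4, 2)
compiled = torch.compile(model)

out = compiled(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```

Note that suppressing errors trades the compiled-mode speedup for robustness: any frame that fails to compile simply runs eagerly.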

What's your pytorch version, GPU, and operating system?

Hi Allan, thanks for the reply.

Here are the requested details:
PyTorch Version: 2.0.1
OS: Rocky Linux, Version: 8.5 (Green Obsidian)
GPU information: (attached as a screenshot)

Let us know if you need any additional information.

I see. As an initial workaround, maybe just turn off compilation? (Pass compile=false instead of compile=true.)
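For later readers: the failing run's Hydra overrides (`['dset=mnist', 'model=nft', 'compile=true']`) suggest the workaround is a one-flag change. Assuming the entry point is the `launch_inr2array.py` script that appears in the traceback, the re-run would look something like:

```shell
# Same Hydra overrides as the failing run, with compilation disabled.
python experiments/launch_inr2array.py dset=mnist model=nft compile=false
```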

Hi Allan,

Thanks for the workaround. It seems to be running now. We'll contact you if we need any additional help.

Many thanks!