ar4 / deepwave

Wave propagation modules for PyTorch.

Issue when maximum offset is smaller than the full model

alaliaa opened this issue · comments

Dear developer,

First, I'd like to thank you for such a wonderful module.

I was trying to implement FWI on a model that is laterally long, such that the wave is not recorded on all the receivers when the source is at the edge, and I am getting the error below. If I increase the recording time I don't get the error, but that increases the computation time, which I am trying to reduce as much as possible.

The below error is actually from the example in the fwi notebook
(https://colab.research.google.com/drive/1PMO1rFAaibRjwjhBuyH3dLQ1sW5wfec_).

I just changed nt to:
nt = int(1.5/dt)

Traceback (most recent call last):
File "deepwave_seam_example1.py", line 141, in
batch_rcv_amps_pred = prop(batch_src_amps, batch_x_s, batch_x_r, dt)
File "python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/python3.7/site-packages/deepwave/base/propagator.py", line 147, in forward
*model.properties.values())
File "/python3.7/site-packages/deepwave/scalar/scalar.py", line 62, in forward
timestep = Timestep(dt, model.dx, max_vel)
File "/python3.7/site-packages/deepwave/scalar/scalar.py", line 464, in __init__
self.step_ratio = int(np.ceil(dt / max_dt))
ValueError: cannot convert float NaN to integer

You can see that things sort of blow up; even the loss becomes NaN. I checked the CFL condition and everything seems correct.
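For what it's worth, one quick way to confirm where NaNs first appear is to inspect the tensors directly; this is only a sketch, and the tensor here is a hypothetical stand-in for the notebook's receiver amplitudes:

```python
import torch

# Hypothetical stand-in for the recorded receiver amplitudes.
rcv_amps = torch.tensor([[0.0, 1.0],
                         [float('nan'), 2.0]])

# Locate NaNs before they propagate into the loss.
nan_mask = torch.isnan(rcv_amps)
print(nan_mask.any().item())  # True if any NaN is present
print(nan_mask.nonzero())     # indices of the NaN entries
```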

I hope you can figure out the error.

Regards
Abdullah

Dear Abdullah,

I am delighted to hear that you enjoy Deepwave, but sorry that you experienced a problem with it. Thank you for taking the time to report it.

I have unfortunately not yet been able to reproduce the issue: I replaced the line nt = int(2 / dt) in the example with nt = int(1.5 / dt), but did not receive an error. Would it be possible for you to create and share a Colaboratory notebook where the error occurs?

-Alan

Hi Alan,

You are right, 1.5 works fine. The error occurs when I make nt = int(1.3/dt); sorry, my bad, I put the wrong number in the first comment.

Currently I cannot share a Colaboratory notebook; I think my workstation just broke down. I was connecting to it remotely and suddenly cannot connect anymore, so I need to go there and check it out.

If you still cannot reproduce the error, let me know. I will try to fix my workstation and share the notebook.

Abdullah

Hi Abdullah,

Ah yes, I can reproduce the error with 1.3. This is in fact not caused by Deepwave, but by the normalization I use in the example. In the example I divide each receiver's recording by the maximum amplitude of that receiver, as a demonstration of the kind of processing that you can add after wave propagation (PyTorch will automatically calculate the effect it has on the gradient calculation). I was a bit careless, however, and didn't account for the possibility that a receiver could have zero amplitude at all times, in which case there will be a division by zero.

You may fix it by replacing the line in the example code

batch_rcv_amps_pred_norm = batch_rcv_amps_pred / batch_rcv_amps_pred_max

with

batch_rcv_amps_pred_norm = batch_rcv_amps_pred / (batch_rcv_amps_pred_max.abs() + 1e-10)

Taking the absolute value of the maximum amplitude and adding 1e-10 in the denominator will ensure that the denominator is never zero, and so should solve the problem.
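A small self-contained sketch makes the failure mode concrete; the tensors here are assumptions standing in for the notebook's variables, with the per-receiver maximum computed over the time axis:

```python
import torch

# Two receiver traces: the first records nothing (all zeros),
# as happens when the wave never reaches that receiver in time.
batch_rcv_amps_pred = torch.tensor([[0.0, 0.0, 0.0],
                                    [1.0, -2.0, 0.5]])
batch_rcv_amps_pred_max = (
    batch_rcv_amps_pred.abs().max(dim=-1, keepdim=True).values
)

# Unsafe: 0/0 for the all-zero trace produces NaN.
unsafe = batch_rcv_amps_pred / batch_rcv_amps_pred_max

# Safe: the small constant keeps the denominator nonzero.
safe = batch_rcv_amps_pred / (batch_rcv_amps_pred_max.abs() + 1e-10)

print(torch.isnan(unsafe).any().item())  # True
print(torch.isnan(safe).any().item())    # False
```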

Thank you again for reporting this issue - I will fix the example code.

Please let me know if this resolves the problem for you.

-Alan

Thank you,

Your solution works very well. I see where things went south.

Another solution could be to normalize not the data but the gradient. This is what I often do when I implement FWI. Something like:

mx = model.grad.abs().max()
model.grad = model.grad/mx
optimizer.step()

I just implemented it and it worked; I do not know which one is more efficient, though.
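For readers following along, the gradient-normalization idea above can be sketched as a self-contained loop; the quadratic loss here is a placeholder for an actual FWI misfit, and `model` is a hypothetical stand-in for the velocity model tensor:

```python
import torch

# Toy model parameters being optimized (placeholder for a velocity model).
model = torch.ones(10, requires_grad=True)
optimizer = torch.optim.SGD([model], lr=0.1)

loss = ((model - 2.0) ** 2).sum()  # placeholder loss
optimizer.zero_grad()
loss.backward()

# Scale the gradient to unit maximum amplitude before stepping.
with torch.no_grad():
    mx = model.grad.abs().max()
    if mx > 0:
        model.grad /= mx

optimizer.step()
print(model[0].item())  # each entry moves from 1.0 to 1.1
```

Modifying `.grad` in place between `backward()` and `step()` is the usual way to apply this kind of preconditioning without interfering with autograd.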

Thank you again

Excellent. Let me know if anything else goes wrong.