Function value in boundary condition fails for polar grid in nopython mode pipeline
risinggard opened this issue · comments
Hi!
I am trying to solve the heat equation on a grid with polar symmetry subject to the boundary condition ∂u/∂n = -0.5 · (u − 1). (This boundary condition is equivalent to Newton’s law of cooling with a heat-transfer coefficient of 0.5 and an ambient temperature of 1.)
The following four examples illustrate my problem:
First I try to do this on a one-dimensional Cartesian grid:
```python
from pde import CartesianGrid, MemoryStorage, PDE, ScalarField

grid = CartesianGrid([[0, 1]], [32], periodic=False)
state = ScalarField.from_expression(grid, '0')
eq = PDE({'T': 'laplace(T)'}, bc=[{'derivative': 0}, {'derivative_expression': '-.5*(T-1)'}])
storage = MemoryStorage()
result = eq.solve(state, t_range=3, tracker=['progress', storage.tracker(.1)])
```
I get an error message saying that `T` is not in the expression signature `['value', 'dx', 'x', 't']`. I assume `value` means the value of the solution at the boundary. Substituting `value` for `T`, I get a reasonable-looking solution:
```python
from pde import CartesianGrid, MemoryStorage, PDE, ScalarField
import matplotlib.pyplot as plt

grid = CartesianGrid([[0, 1]], [32], periodic=False)
state = ScalarField.from_expression(grid, '0')
eq = PDE({'T': 'laplace(T)'}, bc=[{'derivative': 0}, {'derivative_expression': '-.5*(value-1)'}])
storage = MemoryStorage()
result = eq.solve(state, t_range=3, tracker=['progress', storage.tracker(.1)])

for data in storage.data:
    plt.plot(data)
```
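For context, the intended behavior of this boundary condition can be sketched without py-pde at all. The following is my own minimal explicit finite-difference illustration (not the library's implementation): with u_x = 0 at x = 0 and the Robin condition u_x = -0.5·(u − 1) at x = 1, the only steady state is u ≡ 1, so the solution should relax toward the ambient temperature everywhere, which matches the plot above.

```python
# Minimal explicit finite-difference sketch (pure Python, no py-pde) of
# u_t = u_xx on [0, 1] with u_x = 0 at x = 0 and the Robin condition
# u_x = -0.5*(u - 1) at x = 1.

def solve_heat_robin(n=32, t_end=30.0):
    dx = 1.0 / n
    dt = 0.4 * dx * dx            # stable explicit step (needs dt <= dx^2/2)
    u = [0.0] * (n + 1)           # initial condition u = 0
    t = 0.0
    while t < t_end:
        new = u[:]
        # interior points: standard second-difference Laplacian
        for i in range(1, n):
            new[i] = u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
        # x = 0, Neumann u_x = 0: mirror ghost point u[-1] = u[1]
        new[0] = u[0] + dt * 2 * (u[1] - u[0]) / dx**2
        # x = 1, Robin u_x = -0.5*(u - 1): one-sided ghost point
        ghost = u[n - 1] - dx * (u[n] - 1)
        new[n] = u[n] + dt * (u[n - 1] - 2 * u[n] + ghost) / dx**2
        u = new
        t += dt
    return u

u = solve_heat_robin()
print(min(u), max(u))  # both approach the ambient temperature 1 for large t_end
```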
Now I repeat for a polar grid:
```python
grid = PolarSymGrid(radius=1, shape=32)
state = ScalarField.from_expression(grid, '0')
eq = PDE({'T': 'laplace(T)'}, bc=[{'derivative': 0}, {'derivative_expression': '-.5*(T-1)'}])
storage = MemoryStorage()
result = eq.solve(state, t_range=3, tracker=['progress', storage.tracker(.1)])
```
I get the same error message: `T` is not in the expression signature `['value', 'dx', 'r', 't']`. However, when I substitute `value` for `T`:
```python
grid = PolarSymGrid(radius=1, shape=32)
state = ScalarField.from_expression(grid, '0')
eq = PDE({'T': 'laplace(T)'}, bc=[{'derivative': 0}, {'derivative_expression': '-.5*(value-1)'}])
storage = MemoryStorage()
result = eq.solve(state, t_range=3, tracker=['progress', storage.tracker(.1)])
```
I get an error message saying that the interpreter failed in nopython mode pipeline. Although this issue is similar to issue #12, my `numba` version is up to date at 0.56.2, as seen by calling `pde.environment()`:
```python
{'package version': '0.21.0',
 'python version': '3.8.10 (default, Jun 22 2022, 20:18:18) \n[GCC 9.4.0]',
 'platform': 'linux',
 'config': {'numba.debug': False,
  'numba.fastmath': True,
  'numba.parallel': True,
  'numba.parallel_threshold': 65536},
 'mandatory packages': {'matplotlib': '3.3.3',
  'numba': '0.56.2',
  'numpy': '1.19.4',
  'scipy': '1.6.3',
  'sympy': '1.7.1'},
 'matplotlib environment': {'backend': 'module://ipykernel.pylab.backend_inline',
  'plotting context': 'JupyterPlottingContext'},
 'optional packages': {'h5py': '3.7.0',
  'napari': 'not available',
  'pandas': '1.2.1',
  'pyfftw': 'not available',
  'tqdm': '4.64.1'},
 'numba environment': {'version': '0.56.2',
  'parallel': True,
  'fastmath': True,
  'debug': False,
  'using_svml': False,
  'threading_layer': 'omp',
  'omp_num_threads': None,
  'mkl_num_threads': None,
  'num_threads': 4,
  'num_threads_default': 4,
  'cuda_available': False,
  'roc_available': False}}
```
The full error message is attached as a file.
Is this an error in `py-pde`? Is there a workaround for polar grids?
(Note that I have used a user-defined PDE in these examples. However, using the predefined `DiffusionPDE` gives the same results for the second example in both the Cartesian and polar cases.)
Thanks for any help to resolve this!
Thanks for reporting the bug, which I can indeed reproduce. I have a hunch where the bug might come from, but I don't have time to correct it right now. I'll thus leave the issue open as a reminder.
In the meantime, your problem can be circumvented, since your boundary condition can be expressed as a Robin boundary condition, `{"type": "mixed", "value": VALUE, "const": CONST}`, with obvious replacements.
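Spelled out for the condition in this thread (treating py-pde's mixed-BC convention ∂ₙu + value·u = const as an assumption to double-check against the documentation), the rearrangement is a short sketch:

```python
# du/dn = -0.5*(u - 1)  rearranges to  du/dn + 0.5*u = 0.5, so, assuming
# py-pde's mixed-BC convention d_n(u) + value*u = const, VALUE = 0.5 and
# CONST = 0.5:
bc = [{'derivative': 0}, {'type': 'mixed', 'value': 0.5, 'const': 0.5}]

# Quick algebraic sanity check that the two forms prescribe the same flux:
for u in [0.0, 0.5, 1.0, 2.0]:
    original = -0.5 * (u - 1)                    # du/dn from the original BC
    robin = bc[1]['const'] - bc[1]['value'] * u  # du/dn from the Robin form
    assert abs(original - robin) < 1e-12
print("forms agree")
```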
Additional comments:
- You could use `DiffusionPDE` instead of the more general `PDE` to get faster compilation.
- You already discovered that you need to use `value` instead of your variable `T` when specifying boundary conditions. This is because the variable name is not well defined for all equations (e.g., for `DiffusionPDE`), and boundary conditions thus expect the generic `value`.
Thanks for confirming that this is indeed a bug, and for the useful tip on how to circumvent it. The mixed boundary condition does exactly what I need!
About `DiffusionPDE` vs. the user-defined one, I noticed the difference in speed. I used the user-defined one in the examples to trigger the printing of the expression signature, as I did not find the use of `value` in the boundary condition documented elsewhere.
I agree that the documentation can be improved.
In terms of speed, you might also get an improvement by using the built-in explicit time stepper (e.g., by specifying a value for `dt`) and enabling adaptive time stepping (by supplying `adaptive=True`). I think your current setup falls back to the `scipy` solver, which can be slow since it is not implemented using `numba`. I thus suggest using `result = eq.solve(state, t_range=3, dt=1e-3, adaptive=True, ...)`.
Very good to know what happens behind the scenes. Adding `..., dt=1e-3, adaptive=True, ...` to trigger the use of the `numba`-implemented solver does indeed give a very significant speed-up for longer times (e.g. `t_range=30`). Thanks!
I just merged PR #287 that fixes your reported bug. If there are no more questions, feel free to close this issue!