ydluo / qdyn

A Quasi-DYNamic earthquake simulator

Number of mesh elements

Polyatykin opened this issue · comments

Hi Team,

Why is the number of mesh elements N computed as:

Find next power of two for number of mesh elements

N = int(np.power(2, np.ceil(np.log2(resolution * L / Lb)))) ?

Where does this equation come from?

Thank you!

Because the code uses an FFT (Fast Fourier Transform) that requires the length of an array to be a power of 2.

Just to unpack this expression (N = int(np.power(2, np.ceil(np.log2(resolution * L / Lb))))) a bit:

  • resolution * L / Lb is the desired number of mesh elements; this may or may not be a power of two
  • np.ceil(np.log2(...)) finds the exponent of the next power of two (the ceiling of the base-2 logarithm)
  • int(np.power(2, ...)) raises 2 to this exponent, giving a number of mesh elements that is a power of two
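To make this concrete, here is a minimal standalone sketch of the same rounding, using toy numbers (not from the notebook):

```python
import numpy as np

def next_pow2_elements(resolution, L, Lb):
    """Round the desired element count up to the next power of two."""
    n_desired = resolution * L / Lb          # desired number of elements (may be fractional)
    exponent = np.ceil(np.log2(n_desired))   # smallest integer e with 2**e >= n_desired
    return int(np.power(2, exponent))

# Toy numbers: 7 elements per process zone, fault 100 m, process zone 3 m
# desired count = 7 * 100 / 3 ≈ 233.3, so N rounds up to 256
print(next_pow2_elements(7, 100.0, 3.0))  # → 256
```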

A shorter way of writing this would be: N = scipy.fft.next_fast_len(int(np.ceil(resolution * L / Lb))) (note that next_fast_len expects an integer and returns the next efficient FFT length, which is not necessarily a power of two).

Thank you!

But why is the desired number of mesh elements estimated as resolution * L / Lb?

And could you please point me to an article (yours, perhaps) that gives the exact physical setup of the problem (1D or 2D) with the same parameters?

Thanks a lot!

Because we define "resolution" as the desired number of elements per Lb length, that is, the value of the ratio Lb/dx = Lb/(L/N). In addition to this, we need N to be a power of 2.
Lb is the size of the zone near the rupture front where frictional weakening happens, also known as "process zone size" (e.g. equation 7 of Rubin and Ampuero, 2005, https://doi.org/10.1029/2005JB003686).
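As an illustration of this formula only (the parameter values below are hypothetical, not QDYN defaults), the process zone width follows directly from the rate-and-state parameters:

```python
# Hypothetical rate-and-state parameters (illustrative only)
mu = 3e10      # shear modulus [Pa]
Dc = 1e-3      # characteristic slip distance [m]
b = 0.015      # rate-and-state b parameter
sigma = 5e7    # effective normal stress [Pa]

# Process zone width: Lb = mu * Dc / (b * sigma)
Lb = mu * Dc / (b * sigma)
print(Lb)  # → 40.0 (metres)
```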

Removing all the clutter from the example notebook, we have:

Lasp = 7                    # Asperity length / nucleation length
L = 5                       # Fault length / asperity length
ab_ratio = 0.8              # a/b of asperity
cab_ratio = 1 - ab_ratio
resolution = 7              # Mesh elements per process zone width

# Compute relevant length scales:
# Process zone width [m]
Lb = set_dict["MU"] * set_dict["SET_DICT_RSF"]["DC"] / (set_dict["SET_DICT_RSF"]["B"] * set_dict["SIGMA"])
# Nucleation length [m]
Lc = Lb / cab_ratio
# Length of asperity [m]
Lasp *= Lc
# Fault length [m]
L *= Lasp

# Find next power of two for number of mesh elements
N = int(np.power(2, np.ceil(np.log2(resolution * L / Lb))))

First, the length scales are defined relative to the nucleation length. The asperity length is defined as 7 times the nucleation length, which allows fast earthquake ruptures to develop. If the ratio of asperity length to nucleation length were close to 1, only slow-slip events would develop; below 1, the fault would creep stably. The nucleation length is computed as Lc = Lb / cab_ratio = Lb / (1 - a / b), with a and b being the rate-and-state friction parameters of the asperity, and Lb the width of the process zone. The process zone width is calculated as Lb = mu * Dc / (b * sigma) (see the reference given by Pablo above).

To ensure numerical stability, the mesh size needs to be smaller than Lb; here we require the mesh size to be 7 times smaller than Lb (the resolution parameter). So once we know the nucleation length Lc (in metres), we can calculate the asperity length Lasp in metres, and then the fault length L. The number of mesh elements is then L / (Lb / resolution) = resolution * L / Lb, which is then increased to the next integer power of 2.
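Putting these steps together with a hypothetical process zone width of 40 m (illustrative values, not taken from the notebook):

```python
import numpy as np

# Illustrative chain of length scales (Lb value is hypothetical)
Lb = 40.0                   # process zone width [m]
ab_ratio = 0.8              # a/b of asperity
cab_ratio = 1 - ab_ratio
resolution = 7              # mesh elements per process zone width

Lc = Lb / cab_ratio         # nucleation length: 40 / 0.2 = 200 m
Lasp = 7 * Lc               # asperity length: 7 * 200 = 1400 m
L = 5 * Lasp                # fault length: 5 * 1400 = 7000 m

# Desired element count: resolution * L / Lb = 7 * 7000 / 40 = 1225,
# which rounds up to the next power of two
N = int(np.power(2, np.ceil(np.log2(resolution * L / Lb))))
print(N)  # → 2048
```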

There is actually a typo in the Jupyter notebook: L is described as "Length of fault / nucleation length" instead of "Length of fault / asperity length". I'll fix this in an upcoming release.