apply MASW to ndarray data instead of file
dylanmikesell opened this issue
Hi @jpvantassel. Thanks for this package!! I am looking forward to using it. This is more of a question than an issue, but not sure where to post it. It would be nice to have an example of how to run your MASW tools on just a numpy array. I don't have segy or su files, so I am trying to figure out how to do it now. Here is where I am at, and I am wondering how best to proceed. Below I tried to make an example with random data.
import numpy as np
import swprocess

# make the source object
source = swprocess.Source(-10.0, 0.0, 0.0)

# make some noise data
nstats = 11    # number of stations
npts = 1000    # samples per trace
dt = 0.001     # sample interval in seconds (was undefined in my original snippet)
amplitude = np.random.rand(npts, nstats)
x = np.linspace(0, 100, nstats)
y = np.zeros(nstats)
z = np.zeros(nstats)

# make the sensor objects
sensors = []
for i in range(nstats):
    sensor = swprocess.Sensor1C(amplitude[:, i], dt, x[i], y[i], z[i])
    sensors.append(sensor)

data_array = swprocess.Array1D(sensors, source)
_ = data_array.plot()
_ = data_array.waterfall()
Everything seems good to here and the plots look as expected. data_array is a swprocess.array1d.Array1D.
Now, if I build settings following your jupyter notebook example, I more or less end up thinking I should be able to run something like

swprocess.Masw.run(fnames=data_array, settings=settings)

Obviously this does not work; fnames should be a list of filenames. Any tips on how to proceed and run MASW on this data matrix?
Hi @dylanmikesell,
What I would recommend after you create the Array1D object using the code you have above is to use the to_file method (i.e., data_array.to_file("myfile.su")) and write the data out to disk in the SeismicUnix (SU) format. You can then load this back into the MASW workflow using the file name, as detailed in the MASW notebook.
Of course, this is certainly not ideal; it would make more sense to be able to run any MASW workflow on any Array1D, but the way I implemented things (trying to make it simpler for the end user) ended up complicating this interface. If I had my druthers I would rewrite swprocess, but I am currently rewriting hvsrpy, so this is on hold until that is complete. I will note that if you are running lots of processing (hundreds or thousands of MASW transformations) and are concerned about I/O issues from repeatedly writing data to disk, you could try writing the data out to a memory buffer rather than to a file on disk.
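For what it's worth, the memory-buffer idea can be sketched with Python's built-in io.BytesIO. To be clear, whether swprocess's readers and writers accept a file-like object instead of a filename is not something I have confirmed; this only illustrates the generic pattern of writing bytes to memory and reading them back, which avoids touching the disk:

```python
import io

# In-memory buffer standing in for "myfile.su" on disk.
buffer = io.BytesIO()

# Stand-in for a writer such as data_array.to_file(...): any code that
# writes bytes to a file object can write to the buffer instead.
buffer.write(b"fake SU trace data")

# Rewind to the start before handing the buffer to a reader.
buffer.seek(0)
payload = buffer.read()
assert payload == b"fake SU trace data"
```

If swprocess only accepts filenames, a tempfile.NamedTemporaryFile (ideally on a RAM-backed filesystem) would be the closest workaround.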
Hope this helps,
Joe
Hi @jpvantassel,
That works just fine for me. I am just testing this on some DAS data, so no matter what I will figure out an I/O solution later. I got your solution to work for now though :).
Thanks!