ColumbiaCMB / kid_readout

Code for the ROACH KID readout.

Some heterodyne data is multiplied by -1 in part of a stream

danielflanigan opened this issue · comments

@gitj @mabitbol This is from the same data set as #14. A few of the individual data streams (StreamArrays) have s21 data that appears to be multiplied by -1 in the middle of the stream.

This causes the absolute value of s21 to be lower, because the data is being averaged with its negative: one_bad_stream_array.pdf

This plot shows some time-ordered data from the first four of the 256 channels being read out. Here, red is s21_raw.real and blue is s21_raw.imag. The horizontal axis is offset, so the -1 seems to take effect for all of the streams simultaneously: bad_stream_array_four_streams.pdf

The plots are from this notebook.
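The effect described above is easy to reproduce: if part of a stream is multiplied by -1, averaging the raw stream mixes the data with its own negative and collapses the magnitude of the mean. A minimal sketch (illustrative only, not kid_readout code):

```python
import numpy as np

# A constant complex s21 tone whose second half is flipped mid-stream,
# mimicking the sign flip seen in the bad StreamArrays.
s21 = np.full(1000, 0.8 + 0.6j)
s21[500:] *= -1  # the -1 takes effect partway through the stream

# Averaging the raw stream mixes the data with its negative, so the
# magnitude of the mean collapses, while the mean magnitude is unaffected.
print(abs(s21.mean()))   # ~0: the two halves cancel
print(abs(s21).mean())   # 1.0: the true |s21|
```

This is why the absolute value of the averaged s21 comes out lower than it should for the affected streams.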

commented

I think this will be addressed by the work Max has been doing.

I think I have implemented a fix that should address both of these issues. I was going to do a bit more testing because NaNs can be pretty disruptive, but I can push this today and we'll fix the NaN problems as they come up.


Thanks, Max. I'll start working on code to replace NaNs with noise, and also to optionally retake data that has NaNs.

If a packet is bad or missing, are you planning to return all of its samples as NaN? All of the samples taken simultaneously with any of the bad samples? From a coding point of view it's easier to mark an entire chunk of a multiple-channel stream as bad, instead of having per-channel masks, but I guess this would mean throwing away some good data.
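The two strategies in the question above can be sketched concretely. In this sketch (names and shapes are illustrative, not the kid_readout API), rows are channels and columns are samples, and a single bad packet has hit one channel at one sample index:

```python
import numpy as np

s21_raw = np.ones((4, 8), dtype=complex)
bad_channel, bad_index = 1, 3  # a bad packet hit channel 1 at sample 3

# Option A: per-channel mask -- only the affected sample is lost.
per_channel = s21_raw.copy()
per_channel[bad_channel, bad_index] = np.nan

# Option B: mark the whole simultaneous chunk bad -- simpler bookkeeping,
# but good data from the other channels at that instant is discarded.
whole_chunk = s21_raw.copy()
whole_chunk[:, bad_index] = np.nan

print(np.isnan(per_channel).sum())  # 1 sample lost
print(np.isnan(whole_chunk).sum())  # 4 samples lost
```

The trade-off is exactly as stated: option B is easier to implement, but the ratio of discarded-to-bad samples grows with the number of channels.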

OK, thanks. I only replace missing or bad packets with NaN. When reading out more than 32 or so channels, missing and bad packets are very likely, so we can't throw away all the data. I will try to track down and fix the source of bad data, but that will take a week or more. I'm somewhat surprised that reading 4 channels gave you so many bad packets... let's talk later if that persists after my fix.
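Once bad packets arrive as NaN rather than as corrupted values, downstream averages can simply skip them instead of being poisoned. A short sketch of this pattern (illustrative, not the kid_readout API):

```python
import numpy as np

# A stream in which one missing packet's samples have been flagged as NaN.
s21_raw = np.full(100, 0.8 + 0.6j)
s21_raw[10:15] = np.nan

# A plain mean is poisoned by the single bad packet; a NaN-aware mean
# excludes the flagged samples and recovers the true average.
print(np.abs(np.mean(s21_raw)))     # nan
print(np.abs(np.nanmean(s21_raw)))  # 1.0
```

This is also why NaNs "can be pretty disruptive": any consumer that uses plain np.mean or FFTs on the raw stream has to be updated to mask or replace the flagged samples first.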


Ok, I’ve pushed the NaN changes.


Ahh -- it's shown in the notebook, but I should have mentioned here that I was actually reading 256 channels at once. I can easily dial back the number of channels for now.

Oh, OK. Well, that's fine, but you'll definitely drop some packets. I think this might actually be on the CPU side, because CPU usage goes up to around 80% when taking that much data. Still working on tracking this down, however.


This should be resolved by the NaN addition.