JARVIS-MoCap / JARVIS-AcquisitionTool

AcquisitionTool to record multi-camera recordings for the JARVIS 3D Markerless Pose Estimation Toolbox

Home Page: https://jarvis-mocap.github.io/jarvis-docs/


Multicamera acquisition

Antorithms opened this issue · comments

Hi Timo,

We are still working to get the system up to speed for six cameras.
Just to recap: we have 6 BFS-U3-32S4M cameras and a computer with an i7-9800X, an RTX 3080, an M.2 SSD, and 2 PCIe Fresco 4x USB cards.
All of the below was done with this setup.
We did some progressive testing and were able to record 4 cameras at full resolution with JPEG95 compression, and we got synchronized videos with the same number of frames. However, we always get the error below in the command window upon start. Additionally, the acquisition tool sometimes crashes when we stop a recording.

Could not create/open file:
Metadata filepath: C:/ProgramData/JARVIS-Mocap-Acquisition/Data/Test4cam/Test_2/test6_bin2/metadata.csv

With 5 or 6 cameras we are not able to record at full resolution without the tool crashing immediately at start. With binning we can occasionally record with 5 or 6 cameras, but the tool crashes either at start time or when we stop the recording.

Finally, and probably most importantly, we tried compressing the data more (JPEG50). With 6 cameras at full resolution the acquisition starts (same error as above) and works (tested for up to roughly 10 minutes), but upon stopping, the AcqTool freezes and the command window shows:

Trying to stop Acquisition
stopped Worker
After 30 minutes of a frozen screen we force-closed the AcqTool and were able to open the video files, which have wildly different numbers of frames (50k for one camera, 14k for another).

Sorry for the multilayered issue! Let us know what you think about these issues and thanks a lot for all the help and the nice repo and software!

Hi Timo,
Just adding one detail: during more testing today we noticed that right before the crash at recording stop there is a spike to 100% in CUDA usage, while during recording CUDA usage sits at around 50%. All of this was observed in Task Manager.

Hi Timo,
Thanks a lot for the updates that fixed the crashes.
We stress-tested the system a bit to match our requirements and noticed a few things.

  1. Videos differ in length by one, two, or at most three frames between each other (and this is consistent with the metadata CSV files). Why is that? Have you also observed it? If so, can you suggest a way to exclude "missed" frames so we get synchronized videos of the same length for further processing?
  2. We collected exposure-active timestamps from the cameras independently and observed that there are consistently more exposure-active frames than saved ones (around 5 more, the same number for all cameras). Is this also normal behavior?
  3. Finally, we would need to get the exposure-active timestamps of the written frames and deliver them to our neural recording rigs to be able to synchronize video and other data streams. Would this be feasible in some way?

Thanks a lot for your help and again for the amazing tool !!

Attached are two example metadata files with the reported behavior.
metadata11.csv
metadata.csv

  1. I have encountered this issue before when running the software under Windows. Somehow there is a large latency spike at the beginning of the recording which causes some frame drops. This seems to also be the case in your recordings, since the metadata files you sent me show a number of dropped frames (~8) for each camera after about 30 frames of recording. After this initial hiccup everything is stable and all frames are recorded. Unfortunately I am not able to reproduce this behavior on the Windows machine I have available for testing, so it's hard for me to find the cause. Switching to the Linux version of the tool would be a quick solution until the Windows issues are resolved, but that's probably not an option for you, right?
  2. I am assuming this is a result of the frame drops described above. I will double-check this on my side to make sure the number of exposed frames matches the number of saved ones when the system works as intended without frame drops.
  3. Can you go into a little more detail? Do you need the timestamps delivered online during a recording, or afterwards, say through the metadata file?
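A quick way to quantify the drops described in point 1 is to compare the span of frame ids in a metadata file against the number of rows actually written. This is a minimal sketch, assuming the id column in metadata.csv is called `frame_id` (check the header of your file, the name may differ):

```python
import csv

def count_dropped(csv_path, id_column="frame_id"):
    """Count frames a camera dropped, assuming the camera increments its
    frame id even for frames that were never saved.  The column name is
    an assumption -- check the header of your metadata.csv."""
    with open(csv_path, newline="") as f:
        ids = sorted(int(row[id_column]) for row in csv.DictReader(f))
    expected = ids[-1] - ids[0] + 1   # ids the camera generated
    return expected - len(ids)        # ids missing from the file
```

Running this over each camera's metadata.csv should report the same ~8 dropped frames mentioned above if the initial latency spike is the only source of drops.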

Dear Timo,
Thanks a lot again for your support.

  1. For us it's probably better to stay with Windows, but we could switch to Linux in case no other solution is available.
  2. Thanks for double-checking.
  3. We checked, and using a combination of exposure-active timestamps and metadata we can identify the frame numbers of the missing frames, making this a solvable issue for analysis and synchronization with neural recordings. On the other hand, is it a problem to use the JARVIS-MoCap tools later on with these videos? If not, we are good to go!

Thanks again for your kind help.

Good to hear you figured out a solution that works for you!

I think it should be possible to identify dropped frames reliably using the frame_id column in the metadata. If a camera drops a frame, the frame_id is still incremented on the camera. This means that a camera with frame_ids ...,13,14,17,18,19,... has dropped frames 15 and 16.
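The gap detection can be sketched like this (a minimal illustration, not part of the tool itself; it takes the frame_id column already parsed into an increasing list of integers):

```python
def missing_frame_ids(frame_ids):
    """Given a strictly increasing list of frame_ids from metadata.csv,
    return the ids that the camera generated but never saved."""
    missing = []
    for prev, cur in zip(frame_ids, frame_ids[1:]):
        # any ids skipped between two consecutive saved frames were dropped
        missing.extend(range(prev + 1, cur))
    return missing

print(missing_frame_ids([13, 14, 17, 18, 19]))  # prints [15, 16]
```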

You can use those videos in the rest of the JARVIS pipeline as long as you correct them to be synchronized and of the same length. I would suggest doing this by adding extra frames (either black ones, or repeats of the preceding frame) to 'replace' the dropped ones and restore synchronization. I will try to add this functionality to the AnnotationTool in the next release, or at the very least upload a script that does it automatically.
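A sketch of that repair step, operating on in-memory frame lists for clarity (real videos would go through a reader/writer such as OpenCV; `restore_sync` and its arguments are hypothetical names, not part of JARVIS):

```python
def restore_sync(frames, missing_ids, first_id):
    """Re-insert placeholders for dropped frames so all cameras' videos
    end up the same length.  Each missing id gets a repeat of the
    previous frame (or None if the very first frame was dropped)."""
    missing = set(missing_ids)
    out, it = [], iter(frames)
    for fid in range(first_id, first_id + len(frames) + len(missing)):
        if fid in missing:
            out.append(out[-1] if out else None)  # repeat previous frame
        else:
            out.append(next(it))
    return out

# Camera saved ids 13,14,17,18,19 as frames a,b,e,f,g; ids 15 and 16 dropped.
print(restore_sync(["a", "b", "e", "f", "g"], [15, 16], 13))
# -> ['a', 'b', 'b', 'b', 'e', 'f', 'g']
```

After this, every camera's frame index i corresponds to the same frame_id, which keeps the videos aligned for the rest of the pipeline.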

Please let me know if you have any more questions or need any help getting the videos synchronized. I'll close this issue for now but feel free to reopen it at any point in time!