labstreaminglayer / liblsl-Csharp

C# bindings for liblsl

Serializing marker streams

hernandezurbina opened this issue · comments

Hi! We would like to use LSL in the lab with a virtual reality application made in Unity. The aim is to have an Android app running on a mobile device that displays VR content to a participant while EEG signals are recorded on a computer. We would like to save the triggers generated by LSL on the device so that after an experiment we can take the marker streams and combine them with the EEG data. For that purpose we were wondering whether it is possible to serialize the contents of an LSLMarkerStream object in Unity. Any hints would be appreciated. Thanks!

Hi there.

Typically the way LSL streams are combined is that streams from different devices (e.g. an EEG headset and stimulus markers from a Unity app) are recorded simultaneously using LabRecorder. The data can later be loaded into Matlab, Python, or (most recently) Julia in order to synchronize the streams in preparation for analysis on those platforms.

I am not familiar with the Unity side of things, but in liblsl-C# there isn't an LSLMarkerStream object. Rather, there are generic outlets (which push data) and inlets (which pull data from outlets), and these can be constructed to fit different types of data. If you really want to serialize the markers anyway, you would need to pull them with an inlet and then push the markers and their timestamps into a List or some custom object. I would probably use a class that includes some fields for the metadata I am interested in, an IReadOnlyList<Tuple<string, double>> to hold the markers and their timestamps, and anything else that is necessary (e.g. a table showing the significance of the various markers). This can implement the ISerializable interface and then simply be serialized in the traditional way.
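As a rough illustration of that idea, a container class along these lines could hold the pulled markers. This is a sketch only; the class and member names are hypothetical, and on modern .NET a JSON serializer is generally preferred over the older ISerializable/BinaryFormatter route:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical container for markers pulled from an LSL inlet.
public class MarkerLog
{
    public string StreamName { get; set; }

    // Free-form metadata about the recording (subject ID, notes, etc.).
    public Dictionary<string, string> MetaData { get; set; }
        = new Dictionary<string, string>();

    // Each entry pairs a marker string with its LSL timestamp (in seconds).
    public List<KeyValuePair<string, double>> Markers { get; set; }
        = new List<KeyValuePair<string, double>>();

    public void Add(string marker, double timestamp)
        => Markers.Add(new KeyValuePair<string, double>(marker, timestamp));

    // Serialize the whole log to JSON so it can be written to app storage.
    public string ToJson() => JsonSerializer.Serialize(this);
}
```

In a pulling loop you would call `Add(...)` once per sample returned by the inlet, then write `ToJson()` to a file when the session ends.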

Thanks for the info, @dmedine. We thought about using LabRecorder, but as we would like to save the triggers from the built application on the mobile phone, we thought about serialization. We are going to try what you suggest. Thanks!

@hernandezurbina

I'm in full agreement with David that if you're using a PC running LabRecorder to record EEG signals anyway then you should use that same instance of LabRecorder to also record events in your Unity game. The LabRecorder and LSL subsystem will make sure that the event timestamps coming from your Unity game in Android are synchronized with the EEG data.

If you have an EEG system that does not require a PC at all, then you might be able to record everything in Android. @s4rify might be able to advise you as to how to record LSL streams in Android.

You should be aware that the difference between the time Unity thinks an event happens and the time it actually happens can be quite large. The following was measured on a PC running Windows; it might be better or worse on Android. For visual events, the delta between the Unity timestamp and the on-screen event is pretty consistently 2 or 3 frames. For audio events, the lag can be ~100 msec, and the jitter can be 25 msec or greater. LSL is not aware of this because it uses the Unity engine timestamps, not the framebuffer or DAC buffer timestamps. So if you're interested in narrow ERPs, you should characterize the delay and jitter using a photosensor and/or microphone fed into your EEG amplifier's auxiliary input.

Thank you for your reply @cboulay We are also exploring some other approaches, such as sending the trigger data from the VR in the mobile phone to the recorder via WiFi or Bluetooth. My guess is that this will also result in latency problems. Have you or someone in the team explored this direction and could provide some insight? Can LabRecorder record from a wireless signal? Thanks again!

sending the trigger data from the VR in the mobile phone to the recorder via WiFi or Bluetooth
Can LabRecorder record from a wireless signal?

LabRecorder can record from any device on the network as long as it is capable of TCP and UDP communication, which includes WiFi. I've never tried Bluetooth as a form of network connectivity, so I don't know whether that will work.

My guess is that this will also result in latency problems.

There is transmission latency, but if all you're doing is storing data then this won't matter. The events are timestamped at the source. The receiving computer running LabRecorder calculates the clock offsets between the source and itself (including an adjustment for transmission latency), and stores those offsets in the data file. Upon import, the clock offsets are compensated, eliminating the transmission latency. In the end, the timestamps reflect when Unity thought the event happened, not when LabRecorder received the event.
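Schematically, the import-time correction is just an addition per timestamp. The sketch below simplifies things (in practice, offsets are measured repeatedly over the recording and interpolated), and the variable names are illustrative, not the actual XDF importer API:

```csharp
using System;

class ClockOffsetSketch
{
    static void Main()
    {
        // Timestamp stamped by the source (e.g. the phone), in its own LSL clock.
        double sourceTimestamp = 105.250;

        // Offset measured by the recorder: recorderClock - sourceClock,
        // estimated via round-trip time-sync packets and stored in the XDF file.
        double clockOffset = -3.125;

        // At import, timestamps are shifted into the recorder's clock domain,
        // which removes the transmission latency from the analysis.
        double corrected = sourceTimestamp + clockOffset;
        Console.WriteLine(corrected); // 102.125
    }
}
```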

If you wanted to do any online (real-time) analysis that included (e.g.) segmenting or labeling EEG data based on Unity markers, then your analysis program would have to buffer a little bit of EEG data and set some post-processing flags, but that's outside the scope of this conversation.

Thanks @cboulay This is very valuable insight that is starting to make sense as we progress with the project. Please, let us get back to you in case we have more doubts.

Hello @cboulay, sorry to bother you again. I have written a script that sends UDP messages from a Unity project (i.e. in C#) to another computer. Now I'm trying to put it together with the code I have for writing marker streams. So, I have a line of code that calls a function on an object to send a UDP packet (to another computer on the same network), like this:

sendObj.sendString("Trigger at: " + _timer.CurrentTime);

and to write a marker stream I do:

marker.Write("Trigger at: " + _timer.CurrentTime);

How can I combine both in order to send a UDP packet with the marker stream data to another computer running LabRecorder? Thanks for any insight!

Sending LSL markers using C# is demonstrated here: https://github.com/labstreaminglayer/liblsl-Csharp/tree/5b549a035d5199bc704ec6b1a80140830b0e2ff8/examples
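For reference, the core of a string-marker outlet with these bindings looks roughly like this (condensed from the linked examples; the stream name and source ID are placeholders):

```csharp
using LSL;

class SendMarkers
{
    static void Main()
    {
        // One string channel at an irregular rate: the usual shape of a marker stream.
        var info = new liblsl.StreamInfo(
            "UnityMarkers",                      // stream name (placeholder)
            "Markers",                           // stream type
            1,                                   // channel count
            liblsl.IRREGULAR_RATE,               // markers have no fixed rate
            liblsl.channel_format_t.cf_string,
            "unity-markers-001");                // unique source ID (placeholder)

        var outlet = new liblsl.StreamOutlet(info);

        // liblsl timestamps the sample at the source when it is pushed.
        outlet.push_sample(new[] { "Trigger at: 1.23" });
    }
}
```

Any inlet on the network, LabRecorder included, can then resolve this stream and record the markers alongside the EEG; no hand-rolled UDP code is needed.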

LSL streams are sent as TCP packets, not UDP. LSL uses UDP for connecting and handshaking, but not for data transmission. This is intentional because UDP allows for lost packets. It is possible that someone has monkeyed with this and made an option for UDP transmission, but I am not aware of it.

LSL uses UDP for connecting and handshaking, but not data transmission.

Just to clarify: the stream discovery and time synchronization is done via UDP. The handshake before data transmission is done via TCP.

It is possible that someone has monkeyed with this and made an option for UDP transmission, but I am not aware of it.

I have a somewhat viable PoC for TCP time synchronization, but data transmission via UDP is a bad idea for a lot of reasons so it's unlikely to ever find its way into liblsl.

@hernandezurbina ,

How can I combine both in order to send a UDP packet with the marker stream data to another computer running LabRecorder?

Maybe you're misunderstanding how LSL works or maybe I misunderstand your question. And it's probably my fault for mentioning TCP/UDP to begin with as that was a superfluous detail that likely isn't relevant.

There's no need for a separate "serialized marker" stream in addition to sending markers via LSL. You can use LSL to send the markers, and LabRecorder will collect those markers along with the EEG data and store them to disk.

I recently got LSL working on the Pico Neo 2 Eye. I wrote up some instructions and provided some sample code: https://github.com/labstreaminglayer/PicoNeo2Eye
It has an integrated eye tracker, which is probably not relevant to you, but the information on how to set up the project, the marker stream, and the head pose stream might be useful to you.

@tstenner , thanks for clarifying that about the handshake. And, just to add my own bit of clarification, UDP is not appropriate for most LSL applications because it allows for packet loss---which is a deal-breaker 99% of the time, for sure.

However, streaming EEG data via UDP in high-speed real-time applications such as closed-loop stimulation might not be such a bad idea. So far, the setups I've seen require custom hardware to handle the analysis and triggering. If LSL could go over UDP it could bypass a lot of cost and custom programming. For this reason I don't see UDP as entirely off the table for liblsl. Although, that being said, applications requiring UDP are extremely niche and it is yet to be seen if the little bit of speedup that this provides is enough to warrant a whole batch of work implementing a UDP data streaming protocol in liblsl. Actually implementing it in the lib shouldn't be much more than a few tweaks to the existing ASIO specialization, but there is a lot of downstream effort involved in adding any core functionality that needs to be exposed in the APIs.

But I digress. I think the OP is probably making life hard for himself by trying to re-invent the wheel. @hernandezurbina , I recommend that you just send markers as demonstrated in the example programs, use LabRecorder, sit back and relax.

And, just to add my own bit of clarification, UDP is not appropriate for most LSL applications because it allows for packet loss---which is a deal-breaker 99% of the time, for sure.

Not just packet loss: the order in which the packets arrive isn't guaranteed at all, so even if all packets arrive they'd have to be reordered as well.
The latency could be a bit better, but I know of at least one embedded closed-loop stimulation device that uses LSL to transport the data.

Actually implementing it in the lib shouldn't be much more than a few tweaks to the existing ASIO specialization

That should take a week or two. Avoiding all the edge cases would take a lot more.

but there is a lot of downstream effort involved in adding any core functionality that needs to be exposed in the APIs.

That's just a config option, e.g. data_transport = TCP for TCP or data_transport = lets_hope_nothing_goes_wrong for UDP.

While we're hijacking this thread...
Do you think it's possible to add a layer of abstraction on the data transport protocol and support multiple protocols? Then inlets could specify a protocol on creation - always falling back to TCP if the chosen protocol is incompatible (e.g., outlet using different version of lib).

This might make it possible to use shared memory when outlet and inlet are on the same device.

And we could profile other options like QUIC (implementation here https://github.com/facebookincubator/mvfst from Facebook, but MIT licensed).

Parts of it are already there with the old v100 protocol and the newer v110 protocol. I already have some ideas, but not much time to implement any of it. Anything that doesn't run on the ASIO event loop is going to be difficult with the current implementation, but at least on Linux, localhost "network" connections bypass large parts of the IP stack. I don't know if LSL is smart enough to recognize this, but it's one of the low-hanging-fruit optimizations that might improve things a lot without complicated changes.

@tstenner I had forgotten that UDP will mix up packets.

I 100% agree that LSL as-is can be an acceptable solution for high-speed closed-loop applications. There is a powerful contingent of German researchers and industry influencers that disagree.

I also think that @cboulay's suggestion is an interesting one. I know of a few people that have hacked LSL so that it can run on an embedded system, which means not only bypassing ASIO, but also bypassing LSL's heap memory management system. They didn't abstract the problem away, however, but rather tore it up and kludged.

However, if the goal is embedding LSL, I think it would be easier to make an embedded transmission system that looks like LSL instead of retrofitting LSL itself for deployment on embedded devices. I can imagine drowning in a sea of #defines and compilation flags.

@cboulay @dmedine
Thank you for your responses. Maybe I also didn't explain myself too well when telling you about the problem that we are trying to solve. We are building a VR app using Unity which will be deployed on an Android phone. We want to conduct an EEG experiment on subjects that will be using the VR app in the phone. We have a computer recording the EEG data using eego hardware and software. The app running in the phone needs to send triggers whenever there is a change in what is being shown to the user. Because the app won't be connected to the computer, we thought of sending the triggers via WiFi.

If I run the app from Unity (that is, without building it for an Android phone) I can send triggers and record them via LSL into LabRecorder. My question is how I can do the same when the app is built and deployed on the phone, which in turn isn't connected to a computer, while the computer recording the EEG signals is using the eego software. (We are not tied to using the eego software, and we would use LabRecorder if it suits our requirements.) We appreciate any hints. Thanks!

@hernandezurbina

Yes, streaming Unity triggers using LSL over WiFi should work fine. I have done it, and I also stream the HMD pose and eye gaze. And yes, you will make your life much easier if you save those streams along with the EEG data, all with LabRecorder. I believe the latest versions of the ANT Neuro software come with integrated support for LSL, so you should look into that.

Thanks @cboulay Do you have an example of the code that sends the triggers using LSL over WiFi? I was following a tutorial I found on the web, but it might not be what we need.

There's nothing special about WiFi vs wired. The operating system handles the network transmission.

There is a difference between running on your PC and running on the phone, but that has nothing to do with wired vs wifi. That has to do with the Android device using a different operating system and architecture (Linux on ARM) vs your PC which is probably running Windows on x86. So for LSL to work properly when running on Android, it needs the Linux-ARM version of liblsl.so.

I linked to my PicoNeo2Eye project above, which is for a standalone Android device. There are some tips in there about how to setup the project and get the correct liblsl.so into the correct location.

Ok, thanks. I will have a look in depth at the project. Cheers!

Hi @cboulay
Sorry to bother you with more questions. I'm having trouble receiving the LSL streams in LabRecorder. I have the following setup: computer A runs a Unity program which uses LSL to send streams. Computer B is on the same network and runs ANT Neuro's EEGO recorder. I have installed LabRecorder on both computers for the sake of testing. Both are running version 1.12, as I couldn't run the latest version on computer B. When I launch the program on computer A, computer B is able to see that there is an incoming stream via LSL. I verify this in both LabRecorder and EEGO. However, when I start recording using LabRecorder on both computers, computer A outputs an XDF file that contains a header with all the information I supplied (i.e. stream id, name, channel format, etc.), whereas computer B outputs an XDF file that contains only the XML declaration and none of the header information. What do you think could be the issue here? Thanks for your help!

Well you're using 2 different versions of LabRecorder and getting 2 different XDF files. LabRecorder 1.12 is quite old and might predate my becoming an active contributor to LSL. Why wouldn't a recent version of LabRecorder work on computer B? Let's fix that problem.

Both computers are running the same version of LabRecorder (1.12), because I couldn't run the latest version on computer B. Whenever I try to run it I get the following error message: "The code execution cannot proceed because VCRUNTIME140_1.dll was not found. Reinstalling the program might fix this problem". Thanks!

Sorry I misunderstood.

To solve the VCRUNTIME140_1.dll error, download and install vc_redist.x64.exe from here: https://support.microsoft.com/en-ca/help/2977003/the-latest-supported-visual-c-downloads

Thanks @cboulay
I was able to run the latest version of LabRecorder on computer B (the one that also runs the EEGO recorder). But when I hit "stop recording" the program freezes and its status changes to "not responding". I'm wondering whether there is an underlying issue with our Windows 10 installation that prevents both LabRecorder and the EEGO recorder from getting the streams from LSL correctly.

Hi guys, I'm reaching out again with more updates from our side. I've tested LabRecorder with the following setup:
~Computer A: sending LSL streams via Unity; also running LabRecorder
~Computer B: receiving LSL streams, running LabRecorder
~Computer C: same as B.
All computers are running Windows 10 Business edition. When sending the streams, I start the recorder on all 3 computers. The results with 2 different versions of LabRecorder are as follows:
~LabRecorder 1.13: records well on computer A. On computers B and C the program freezes when stopping the recording, and nothing is saved to the file.
~LabRecorder 1.12: no freezing; all computers are able to save the XDF file. However, only computer A's file shows a full info header with the data type, computer id, etc. The files on computers B and C contain only the XML declaration.

At this point I'm wondering why LabRecorder freezes on computers B and C when using the latest version, and why it doesn't write the full info header when using the previous version. I can give more details about our code and system setup if needed. Thank you!

Hello,

this sounds very much like a firewall issue. The initial handshake information is transmitted successfully, which is why all streams are visible in every instance of LabRecorder on all machines (correct?). Then the data streaming starts, and remote machines B and C receive the initial information but nothing more, whereas machine A records everything successfully, presumably because it is receiving and recording from localhost.
Some suggestions to test my hypothesis and get further info:

  • try using Wireshark (https://www.wireshark.org/) or another network packet analyzer to investigate which packets are sent out by computer A and which packets are successfully received by machines B and C.
  • my assumption is backed by the fact that the older version of LabRecorder does not crash but writes an (almost) empty file, whereas the newer version crashes. I assume the newer version has an integrated check for empty files which causes the program to freeze. You could try to run LabRecorder from a terminal (with a verbose option, if available) and try to capture the error message somewhere (or even a stack trace).
  • check whether the (near) empty files have a header AND a footer. If yes, only the data is blocked, but the meta information is transmitted successfully. This would also support my guess that Windows is blocking the transmission.
  • Try your setup on a network where you have full control over the settings! Try not to use a campus network or anything else that is not under your control. Create a local WiFi network.

Hope this helps! :)

Thank you so much @s4rify
It was indeed a network issue. I turned off the firewall on all computers and now both EEGO and LabRecorder are recording the markers. :) Now I will try sending the markers from an Android device.

Perfect! Good that you caught the one thing I forgot to suggest: turning off the firewall :D
One word of advice for Android (we can also continue this discussion somewhere else if you run into trouble): depending on your OS version (newer versions are more restrictive), the battery options must be set so that Android does not kill your app because of high load. So if you don't see anything on the network, drop me a mail; I have struggled with LSL on Android for quite some time.

Many thanks @s4rify. I have sent an email to your uol address with a couple of questions. :)