healthonrails / annolid

An annotation and instance segmentation-based multiple animal tracking and behavior analysis package.

Add sample dataset

jeremyforest opened this issue · comments

Following up on #21, I think it would be a good idea to have a sample dataset available to show how to process it. It would make it easier for people to have an example of the naming, format, and so on.
@healthonrails @shamavir do we have a sample dataset on hand that we can open source and use here? It doesn't have to be massive; a behavioral session of a single animal would be enough for demonstration purposes at first.

We have lots of datasets of various sorts. I imagine that we will want to have a few sample datasets at some point to highlight some of the quite different ways Annolid can be used. What features would you want for the first such sample dataset? Multiple animal tracking, perhaps? We can pick the dataset that works best for us and then, if it isn't ours, ask the producers if we can open source it.

@jeremyforest - I've given you access to three Cornell Box folders containing videos (one of these Chen hasn't seen yet, so I added you too, Chen). One is a set of collaborator videos from various people explicitly given to us to test out Annolid. One is some new behavior work by lab undergrads that we can also use. And one is an archive of behavior videos from the lab at large, including some from Christiane's group (these were generally intended for manual scoring, so they present a nice challenge for getting adequate automatic scoring/tracking).

Great! I am processing about 1,200 minutes of multi-frog tracking videos, and I will ask the Stanford group if we can share some of their tracking results for demo purposes.