LINCellularNeuroscience / VAME

Variational Animal Motion Embedding - A tool for time series embedding and clustering


Create_trainset error

Chengaowei opened this issue · comments

Hi,
Thanks for your work. It's excellent!
I ran the demo.py file and got an error at 'create_trainset'.

It says: 'ValueError: need at least one array to concatenate'
Could you tell me how to fix it?
[screenshots: error traceback and project folder listing]

commented

Hi,

I think this error appears because no data is specified in your config.yaml. Did you follow the VAME workflow guide?

What I see from the first image you attached is that your video folder name is wrong: you are missing the video file extension such as .mp4 or .avi. The specification videoType='.mp4' does not append this to the string. Therefore, I believe your config.yaml was created incorrectly and no video name is listed under the video_sets header. Can you confirm this?

If this is the case, you need to re-initialize the project. Just delete the current one and create a new one with the right video path string.
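For reference, a minimal sketch of what the initialization call from the demo should roughly look like (the project name and paths here are placeholders); the important part is that every entry in the videos list carries its file extension:

```python
import vame

# Placeholder paths; note the '.mp4' at the end of the video path,
# since videoType does not append the extension for you.
config = vame.init_new_project(project='my-vame-project',
                               videos=['/path/to/videos/video-1.mp4'],
                               working_directory='/path/to/working_dir/',
                               videoType='.mp4')
```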

I hope this helps!

Thanks for your quick reply, @KLuxem, and sorry for this simple mistake.
You are right! I missed the video file extension at the end.

I followed the workflow guide with your example video, and now I get another error at 'vame.pose_segmentation(config)'.

It says: 'Program is Terminated. Because you tried to allocate too many memory regions.'

commented

So far I haven't seen this specific problem. Can you set a flag in the pose_segmentation script at line 245 to see if the error occurs after the k-means calculation? Just type something like print("Flag") so that we understand better where the program breaks for you. Or you can just uncomment line 250.
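To illustrate the idea with a self-contained sketch (random data standing in for VAME's latent vectors; the actual code in pose_segmentation.py differs):

```python
from sklearn.cluster import KMeans
import numpy as np

# Random stand-in for the latent vectors that pose_segmentation clusters.
latent_vectors = np.random.rand(500, 30)

print("Flag: before k-means")   # mirrors a print() added near line 245
kmeans = KMeans(init='k-means++', n_clusters=15, random_state=42, n_init=20)
labels = kmeans.fit_predict(latent_vectors)
print("Flag: after k-means")    # if this never appears, the crash is in the k-means call
```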

[screenshots: print flags at lines 245 and 250]

commented

So your error seems related to BLAS. Are you using or have you installed anything like Dask?
We could not reproduce this error on two different machines so far.

Check out this thread:
https://stackoverflow.com/questions/45086246/too-many-memory-regions-error-with-dask

Maybe setting export OMP_NUM_THREADS=1 could help.

I checked the VAME environment and didn't find anything like Dask.

I want to try 'export OMP_NUM_THREADS=1'. Where should I do this?

And I found a page here:
https://github.com/xianyi/OpenBLAS/wiki/faq#program-is-terminated-because-you-tried-to-allocate-too-many-memory-regions
Do you think this can help?

commented

You should be able to just type 'export OMP_NUM_THREADS=1' in your console and try again.
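Alternatively, a sketch of setting it from within Python; for the setting to take effect it has to happen before numpy (and with it the BLAS library) is first imported:

```python
import os

# Limit the OpenMP/BLAS thread pool to one thread. This must run before
# numpy is imported, because OpenBLAS reads the variable at load time.
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np
import vame
```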

This issue seems to be more machine-dependent than related to the VAME package. Maybe updating the numpy version can help.

What you could still check is whether the k-means call in the function same_parameterization at line 112 is working. Maybe set another flag at lines 115 and 123 to pin down where the error is happening, and another one at line 133. If you think it's the k-means call, try out the example from sklearn and see if you run into similar problems.
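For reference, the basic KMeans example from the sklearn documentation; if even this small run triggers the memory-region error, the problem sits in the machine's BLAS setup rather than in VAME:

```python
from sklearn.cluster import KMeans
import numpy as np

X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)

print(kmeans.labels_)            # cluster assignment for each sample
print(kmeans.cluster_centers_)   # the two fitted centroids
```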

I changed to another PC, and the error disappeared!
Anyway, it's working now.

I'll go ahead with your example data.

commented

Perfect! I'll close this issue then!
Enjoy the example ;)