tryolabs / luminoth

Deep Learning toolkit for Computer Vision.

Home Page: https://tryolabs.com

How to use lumi to generate a subset of COCO

bleedingfight opened this issue

Hi guys, I want to train my model on a subset of COCO, but I can't get that subset using lumi dataset transform.
My env:

2x NVIDIA GTX 1080
Intel Xeon E5-1620
Ubuntu 16.04

I downloaded the COCO dataset to /tmp/coco.

572K    coco/annotations/deprecated-challenge2017
2.5G    coco/annotations
6.3G    coco/test2017
19G     coco/train2017
788M    coco/val2017
28G     coco
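
To be sure the annotations are where the transform will look, here is a plain listing of the json files under /tmp/coco/annotations (just a sanity check, nothing Luminoth-specific):

import glob

# List the COCO annotation files present under the data dir,
# to show which splits the transform could possibly read.
for path in sorted(glob.glob("/tmp/coco/annotations/*.json")):
    print(path)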

I tried this command to generate tfrecords for only the apple and orange classes:

lumi dataset transform --type coco \
        --data-dir /tmp/coco/ \
        --output-dir /tmp/apple/ \
        --split train --split val \
        --only-classes apple,orange \
        --debug

It failed with this error:

  File "/home/amax/anaconda3/lib/python3.5/site-packages/tensorflow_probability/__init__.py", line 68, in <module>
    _ensure_tf_install()
  File "/home/amax/anaconda3/lib/python3.5/site-packages/tensorflow_probability/__init__.py", line 65, in _ensure_tf_install
    present=tf.__version__))
ImportError: This version of TensorFlow Probability requires TensorFlow version >= 1.12.0; Detected an installation of version 1.10.0. Please upgrade TensorFlow to proceed.
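
Just to confirm what the traceback says, a minimal version check against the requirement it prints (>= 1.12.0):

# Minimal check of the installed TensorFlow against the version
# that this tensorflow_probability build asks for in the traceback.
from distutils.version import LooseVersion

import tensorflow as tf

required = "1.12.0"
print("tensorflow", tf.__version__)
if LooseVersion(tf.__version__) < LooseVersion(required):
    print("too old for this tensorflow_probability build (needs >= " + required + ")")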

My TensorFlow is compiled from source. To avoid recompiling it and causing conflicts, I used conda to create a new env:

conda create -n lumi-dev python=3.6 
source activate lumi-dev
pip install tensorflow  # not tensorflow-gpu, to avoid conflicts with my source build; PyPI mirror: tuna.tsinghua.edu.cn
pip install luminoth
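
To check that the new env really picks up the pip-installed CPU TensorFlow and luminoth instead of my source build, I run this inside lumi-dev (pkg_resources just reads the pip metadata, so no assumptions about Luminoth internals):

# Run inside the lumi-dev env to confirm which installs are picked up.
import pkg_resources
import tensorflow as tf

print("tensorflow:", tf.__version__, "from", tf.__file__)
print("luminoth:", pkg_resources.get_distribution("luminoth").version)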

The installed luminoth looks like this:

luminoth                  0.2.3                     <pip>

Then I ran the same command (I want to generate apple and orange tfrecords in /tmp/apple):

lumi dataset transform --type coco \
        --data-dir /tmp/coco/ \
        --output-dir /tmp/apple/ \
        --split train --split val \
        --only-classes apple,orange \
        --debug

The only output is:

DEBUG:tensorflow:Loading annotation json (may take a while).

The /tmp/apple output dir looks like this:

apple
├── classes.json
├── train.tfrecords
└── val.tfrecords

0 directories, 3 files

The output is 19 G in total.

Did I do something wrong in one of the steps? Why does it look like all the data was converted to tfrecords instead of only my two classes?
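
For reference, this is roughly how I checked the output (a sketch; I'm assuming classes.json is a plain JSON list of the kept class names, and I'm counting examples with the TF 1.x record iterator):

import json

import tensorflow as tf

# Which classes ended up in the output?
with open("/tmp/apple/classes.json") as f:
    print("classes:", json.load(f))

# How many examples were actually written per split?
for split in ("train", "val"):
    path = "/tmp/apple/%s.tfrecords" % split
    count = sum(1 for _ in tf.python_io.tf_record_iterator(path))
    print(split, "records:", count)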