nshaud / DeepNetsForEO

Deep networks for Earth Observation


create_lmdb error

tongyl opened this issue · comments

I use the Potsdam data. When I run create_lmdb.py I get this error:

Traceback (most recent call last):
  File "create_lmdb.py", line 143, in <module>
    create_image_lmdb(target_folder, samples, bgr=True)
  File "create_lmdb.py", line 92, in create_image_lmdb
    sample = sample[:,:,::-1]
IndexError: too many indices for array

But when I test the image on its own, it runs fine:

sample = io.imread('/home/tukrin1/Breeze/rs/Potsdam/RELEASE_FOLDER/Potsdam/potsdam_128_128_32/irrg_train/5411.png')
sample = sample[:,:,::-1]
print sample.shape
>> (128, 128, 3)

I'm confused about this error. Any idea what to do?
Thanks

Did you adjust config.py accordingly? Maybe check with a print statement (e.g. print sample.shape) just before the sample = sample[:,:,::-1] line in the create_lmdb.py script. It seems that you are feeding the create_image_lmdb() function data that is not in the (W, H, C) format.
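For reference, a minimal debugging sketch (the path below is a placeholder, and this is not the actual create_lmdb.py code): confirm that each tile is a 3D (W, H, C) array before flipping the channel order, otherwise the slicing raises "IndexError: too many indices for array".

from skimage import io

sample = io.imread('/path/to/potsdam_tile.png')  # hypothetical tile path
print(sample.shape)  # expect something like (128, 128, 3)

if sample.ndim != 3:
    raise ValueError('Expected a (W, H, C) image, got shape %s' % (sample.shape,))

sample = sample[:, :, ::-1]  # RGB -> BGR, only valid for 3-channel arrays

A single 2D (grayscale) or truncated image in the tile folder is enough to trigger this error.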

I re-extracted the data and the error disappeared, but now training crashes abruptly with:
Segmentation fault (core dumped)

This can happen for lots of reasons. Can you post Caffe's stack trace? I'm starting to think that you are not creating the LMDBs correctly, or that they are corrupted for some reason.
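One way to sanity-check the image LMDB (a sketch using the standard lmdb and pycaffe APIs, not code from this repository; the database path is a placeholder): iterate over the entries and make sure each one parses back into a 3D array. A corrupted or half-written LMDB often surfaces as a segmentation fault during training rather than a clean Python exception.

import lmdb
import caffe

env = lmdb.open('/path/to/irrg_train_lmdb', readonly=True, lock=False)  # hypothetical path
with env.begin() as txn:
    for key, value in txn.cursor():
        datum = caffe.proto.caffe_pb2.Datum()
        datum.ParseFromString(value)
        array = caffe.io.datum_to_array(datum)  # shape (channels, height, width)
        assert array.ndim == 3, 'Bad entry %s with shape %s' % (key, array.shape)
env.close()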

@nshaud
Sorry to post this in the issue thread, but I have no idea where to start. I have cloned your repo and run git submodule init and git submodule update successfully. I have also downloaded the pre-trained caffemodels that you list on the homepage of this repo. Now I want to use those pre-trained models to try to segment my images (satellite images of Lahore) and see whether they need further fine-tuning or work out of the box. Can you please guide me on how to use your models to segment my images?

@zahidmadeel

Do you know how to use the Caffe framework? If not, it might be useful to follow the Caffe tutorials to better understand what to do with the pre-trained weights.

First, you have to edit config.py. The parameters BASE_DIR, DATASET, FOLDER_SUFFIX (e.g. '_fold1'), BASE_FOLDER, folders, train_ids and test_ids should be modified according to your own dataset; see the sketch below.
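As a rough illustration only (the parameter names come from the list above, but the exact structure, dictionary layout and ids below are made up; follow the comments in the repository's own config.py):

BASE_DIR = '/path/to/your/data/'             # root folder of your dataset
DATASET = 'lahore'                           # hypothetical dataset name
FOLDER_SUFFIX = '_fold1'                     # suffix identifying the data split
BASE_FOLDER = BASE_DIR + DATASET + FOLDER_SUFFIX + '/'
folders = {'images': BASE_FOLDER + 'irrg/',  # placeholder tile folders
           'labels': BASE_FOLDER + 'gt/'}
train_ids = [1, 2, 3]                        # placeholder tile ids for training
test_ids = [4, 5]                            # placeholder tile ids for testing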

Then, you can use the inference.py script to test the SegNet model with one of the pre-trained weights on an image. Please note that our models were trained on 3-band images (RGB or IRRG) at very high resolution (<10 cm/pixel).
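If it helps, here is a generic pycaffe sketch of loading pre-trained weights and running a forward pass on one preprocessed tile (not the actual inference.py; the caffemodel file name, the 'data' blob name and the tile size are assumptions):

import numpy as np
import caffe

caffe.set_mode_gpu()  # or caffe.set_mode_cpu()

net = caffe.Net('test_segnet.prototxt',            # network definition
                'segnet_potsdam_irrg.caffemodel',  # hypothetical weights file
                caffe.TEST)

# tile: (1, 3, H, W) float32 array, already BGR and mean-subtracted
tile = np.zeros((1, 3, 128, 128), dtype=np.float32)  # placeholder input
net.blobs['data'].reshape(*tile.shape)
net.blobs['data'].data[...] = tile
output = net.forward()
prediction = list(output.values())[0].argmax(axis=1)  # per-pixel class map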

Hope that helped. Open another issue if you have more questions.

@nshaud In inference.py, deploying the model requires 'test_segnet.prototxt'. What changes to the model should I make to produce that file?
Thank you in advance.

@azikovskih test_segnet.prototxt is auto-generated by the inference.py script based on the configuration params from config.py, so you shouldn't have to edit anything manually.