oist / Usiigaci

Usiigaci: stain-free cell tracking in phase contrast microscopy enabled by supervised machine learning

"Failed to deploy inference, skipping" when running inference on my own images

brettob716 opened this issue · comments

I have time-series images of neural progenitor cells on which I have trained the Mask R-CNN model for segmentation. Inspecting the model using Matterport's 'Inspect_Model' Jupyter notebook gave pretty good results (example image included).
[screenshot: Inspect_Model segmentation results]

However, I am unable to run the Inference script successfully with my images. Running the provided example images through the Inference script works perfectly fine. I have spent a few days trying to troubleshoot with no success.

I am working in a Google Colab environment with Keras==2.2.5 and tensorflow-gpu==1.13.1.

Any advice would be greatly appreciated.

Thank you!

Huh...
I'm not sure... we didn't test this implementation in a Colab environment, because we had a lot of trouble getting the Matterport Mask R-CNN to work with newer versions of TensorFlow.

If you can run the inspect-model notebook with Matterport's Mask R-CNN, it usually means the platform is ready.
"Failed to deploy inference" comes from our part of the code, because we wrote it very specifically to deal with image files.
You should arrange the raw images in a dedicated folder, with no other file types in it.
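
A quick sanity check along those lines (a minimal sketch; the raw/ folder name and the allowed extensions are just stand-ins for your own setup):

    import os

    raw_dir = "raw"  # hypothetical folder holding one field of view's frames
    for name in sorted(os.listdir(raw_dir)):
        if not name.lower().endswith((".tif", ".tiff")):
            print("stray file, move it out before running inference:", name)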

Thanks for the reply!

It took a little bit of work, but I eventually got the Mask R-CNN model to train in Colab (50 training images, 5 validation images, and 100 epochs takes about two hours to complete).
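
For anyone reproducing such a run, the Matterport training call behind it looks roughly like the sketch below; CellConfig and the dataset objects are illustrative placeholders, not the exact names used in the Usiigaci training script.

    import mrcnn.model as modellib
    from mrcnn.config import Config

    class CellConfig(Config):
        # hypothetical minimal config; mirror the values you trained with
        NAME = "cells"
        NUM_CLASSES = 1 + 1  # background + cell
        STEPS_PER_EPOCH = 50

    model = modellib.MaskRCNN(mode="training", config=CellConfig(),
                              model_dir="./logs")
    model.load_weights("mask_rcnn_coco.h5", by_name=True,
                       exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                "mrcnn_bbox", "mrcnn_mask"])
    # dataset_train / dataset_val are mrcnn.utils.Dataset subclasses,
    # prepared as in the training notebook
    model.train(dataset_train, dataset_val,
                learning_rate=CellConfig().LEARNING_RATE,
                epochs=100, layers="heads")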

I was curious whether this might have anything to do with the NIS sorting you mention in the README file?

I will try starting with some fresh directories for raw images and let you know.

Thanks again

Nice to hear that Colab is workable too, thanks!
The training time is quite good.
To answer your question: not necessarily.
It's just that our expectation is that the image files of a particular field of view over a time lapse will be arranged in one folder (that way segmentation and tracking are easier, and you can check the results easily by importing them into ImageJ).
The NIS sorting just rearranges the data output by a multidimensional acquisition (when Nikon's NIS-Elements software exports to image files, it produces one huge pile of TIFF files).

However, I do know that having text files or spreadsheet files in the folder with the images can cause errors. It's likely a bug that could be circumvented, but we didn't get to it.
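
For reference, a minimal sketch of that kind of per-field-of-view sorting; the export/ and sorted/ paths are stand-ins, and the xy-index pattern is guessed from the filenames shown in this thread:

    import re, shutil
    from pathlib import Path

    # move a flat pile of exported frames into one folder per field of view,
    # keying on an 'xy01'-style position index embedded in the filename
    for p in Path("export").glob("*.tif"):
        m = re.search(r"(xy\d+)", p.name)
        if m:
            dest = Path("sorted") / m.group(1)
            dest.mkdir(parents=True, exist_ok=True)
            shutil.move(str(p), str(dest / p.name))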

If you want to run the inference script, your images must actually be in TIFF format, not just have 'tiff' in the filename. Before you run it, you should convert the images to TIFF.

It's not limited to TIFF files.
But in this version, the inference file type and image size have to be the same as those used for training.
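
If a conversion is needed anyway, a minimal PIL sketch (the raw/ folder and *.png pattern are placeholders for your own export format):

    from pathlib import Path
    from PIL import Image

    # rewrite each frame as a genuine TIFF, not just a renamed file
    for p in sorted(Path("raw").glob("*.png")):
        Image.open(p).save(p.with_suffix(".tif"), format="TIFF")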

Hi there. So I was unable to run the Inference.py script in my Colab environment (training still works fine), so I tried cracking into it on my macOS machine (no CUDA-capable GPU).
I am able to run the Inference.py script on the example images provided, using my generated model weights; however, when I run my own images through the script, it simply returns a blank black image for the masks.

I have always been able to run the model.detect function on my macOS machine using TensorFlow 1.13.1 configured for CPU, so I decided to create a makeshift inference script to generate the instance-aware masks (from just one model, for now), and it seems to have been successful (my script and the generated masks are pasted below).

[screenshot: makeshift inference script]

[generated mask: 20180101ef002xy01t01]
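
Such a makeshift loop can be quite small. Below is a sketch using the Matterport API (the InferenceConfig values and all file names are placeholders); detect returns the instance masks as an (H, W, N) boolean array that can be folded into a single labelled image:

    import numpy as np
    import skimage.io
    from skimage.color import gray2rgb
    import mrcnn.model as modellib
    from mrcnn.config import Config

    class InferenceConfig(Config):
        # hypothetical; mirror your training config, batch size forced to 1
        NAME = "cells"
        NUM_CLASSES = 1 + 1
        GPU_COUNT = 1
        IMAGES_PER_GPU = 1

    model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                              model_dir="./logs")
    model.load_weights("mask_rcnn_cells.h5", by_name=True)

    image = skimage.io.imread("raw/20180101ef002xy01t01.tif")
    if image.ndim == 2:
        image = gray2rgb(image)  # detect expects a 3-channel image
    r = model.detect([image], verbose=0)[0]

    # fold the (H, W, N) boolean masks into one instance-labelled image
    labels = np.zeros(r["masks"].shape[:2], dtype=np.uint16)
    for i in range(r["masks"].shape[-1]):
        labels[r["masks"][:, :, i]] = i + 1
    skimage.io.imsave("20180101ef002xy01t01_mask.tif", labels)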

My overall goal is to get my images ready for your cell tracker program. Using the example images and masks provided, the tracker works beautifully, and I was even able to use the tracker on your images with masks generated from my 'custom' inference script. Unfortunately, when I try to run my own images through the tracker, I get the error "axes don't match array" when loading the images (traceback pasted below).
[screenshot: tracker traceback]

I am still rather new to Python, so apologies if I am missing some obvious things. I think my problem may be rooted in the shape of the input images for training and inference (and possibly the pixel scale). I'm hoping to run some experiments today and get to the bottom of this. Thank you again for all the advice!

It is also worth noting that I am aware of the infamous .DS_Store hidden file that periodically shows up in macOS directories, and I have made sure to get rid of it before executing the scripts. Image acquisition of the time-series data was done using Sartorius' Incucyte S3 live-cell imager, and all images were exported as .tif files.
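
A quick way to clear those out programmatically before each run (the raw/ path is a stand-in for your image folder):

    from pathlib import Path

    # strip macOS Finder metadata files that would trip the file-type handling
    for p in Path("raw").rglob(".DS_Store"):
        p.unlink()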

Hi, we've never tried it on macOS.
Great to see you've got it sorted.

One thing I want to confirm:
we hadn't seen that tracker error before,
and your Python version seems a bit more recent than the one we used.

Can you confirm that you've replaced pyqtgraph's ImageItem.py? (Our tracker will error due to a bug in ImageItem.)
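
For reference, one way to do that replacement programmatically; this is a sketch, and the source path of the patched file is illustrative:

    import os, shutil
    import pyqtgraph

    # overwrite the installed pyqtgraph ImageItem with the patched copy
    # shipped in the Usiigaci Tracker folder
    dst = os.path.join(os.path.dirname(pyqtgraph.__file__),
                       "graphicsItems", "ImageItem.py")
    shutil.copyfile("Usiigaci/Tracker/ImageItem.py", dst)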

Yes, I have replaced it with the ImageItem.py found in the Tracker folder, which produces the same error.
As for my Python version, I am running 3.6 in Spyder 4.

Finally caught my mistake and noticed my images were in RGB rather than grayscale. Converting to grayscale got rid of the error, and I was able to load my images into the tracker and successfully run tracking.
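
For anyone hitting the same "axes don't match array" error, the conversion can be one line per file; a sketch assuming TIFF frames in a hypothetical input/ folder:

    from pathlib import Path
    from PIL import Image

    # convert RGB frames to 8-bit grayscale in place so the tracker
    # sees the axis order it expects
    for p in Path("input").glob("*.tif"):
        Image.open(p).convert("L").save(p)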

I also noticed that the shape of my training images vs. my inference images was slightly different, but this did not seem to prevent inference, as the script still produced the masks.

Training the model on Colab and modifying the Inference.py script seems to yield decent results on the macOS platform.

Thanks again for the help and I hope to be citing your work in the near future!

Hi, I really appreciate your work. I am having similar problems with the Inference.py code.
I think I found something worth looking into, but I haven't quite been able to figure it out yet.

When I use my own weights on my own images, I noticed that some weights are working (i.e., making masks) while others are not. The weights that are working are the poorly trained ones, which will make masks even when there isn't an object in the image. On the other hand, the weights that were trained properly are not working, because they correctly assess that there are no instances.

So my hypothesis is that "Failed to deploy inference, skipping" occurs when an image has no objects. The problem is that once the script reaches an image with no objects, it skips the entire dataset!

Also, it seems like something is going on during inference. My weights work perfectly well when I apply masks to video files using OpenCV, but when I use the same weights with the Inference.py code to get the masks as image files, it misses a lot more objects.

Anyone have some suggestions?

Was able to solve this by writing an if statement:

    # mean != mean is True only when mean is NaN, i.e. the detector
    # returned no instances for this frame (array, mean, instance_masks,
    # and out_path come from the surrounding Inference.py loop)
    if mean != mean:
        # save the unannotated frame instead of skipping the dataset
        img = Image.fromarray(array)
        img.save(out_path)
    else:
        cv2.imwrite(out_path, instance_masks)

I think I am having the same issue with the Inference.py file!
Can you share your solution in more detail?
I tried to use your solution but it failed!