oist / Usiigaci

Usiigaci: stain-free cell tracking in phase contrast microscopy enabled by supervised machine learning

Adding a test set alongside training and validation sets

nabeelkhalid92 opened this issue

Hello,
I have achieved good results on my own dataset using your approach. These are the results:
Input image: (attachment 000009)
Output image: (attachment 000009)

My question is: is it possible to take labeled test data, make predictions on it at the end of training, and get metrics like loss, mrcnn_bbox_loss, mrcnn_class_loss, etc., as in the case of validation or training?

Glad to hear it's working on your end too.
That's doable. I think it was somewhere in my old Jupyter notebook.
(I took it out and replaced it with scripts, since those are easier to use.)

Let me try to find it.
If you know your way around OpenCV, it can be done.
But we usually calculate segmentation metrics: F1 score, Jaccard index, accuracy, precision, etc.
(These are more intuitively understandable and relevant than the loss.)
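For reference, once you have the prediction and the ground truth as binary mask arrays, the full panel is only a few lines. A minimal sketch with NumPy (the function name and the single-mask setup are just illustrative):

```python
import numpy as np

def segmentation_metrics(pred_mask, gt_mask):
    """Pixel-wise overlap metrics between predicted and ground-truth binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.logical_and(pred, gt).sum()    # predicted cell, truly cell
    fp = np.logical_and(pred, ~gt).sum()   # predicted cell, truly background
    fn = np.logical_and(~pred, gt).sum()   # missed cell pixels
    tn = np.logical_and(~pred, ~gt).sum()  # correctly ignored background
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    jaccard = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall, "f1": f1,
            "jaccard": jaccard, "accuracy": accuracy}
```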

Thank you for your reply. If you do find it, kindly share it.

Hello,
Is that notebook in one of your previous commits? If so, I can go look for it myself.
Thank you

Sorry, I couldn't find it.
In our paper we did this in ImageJ to verify the results.
Incorporating the function using OpenCV is possible, but it might take a while.

Thank you so much for looking into this. I have given it a try, and here is where I stand right now:

- The weights that are saved after training have to be loaded into a model, so I have to build the same model again?
- In the model code there are two modes, (i) training and (ii) inference. Do I have to build another mode for testing?
- And lastly, after doing all that, my understanding is that I will use Keras's model.evaluate function to find out accuracy, etc.

Sorry for all the questions; I am actually new to all of this.
Thank you again

Hi,
What we are thinking of is the training step: the model weights are updated every epoch while training on all the training data, and then the loss is calculated by comparing the trained results against the ground truth in the validation set.

We could integrate selection of the best model weights together with the calculation of metrics.
Keras's model.evaluate, I think, gives out the loss; but for segmentation performance it's more intuitively understandable to calculate the full panel (F1 score, accuracy, precision, recall).

This will require generating segmentation results from the trained model weights, then using OpenCV to calculate the overlap between each result and the ground truth given in the validation set.
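Roughly, the evaluation loop could look like this. This is only a sketch assuming the Matterport Mask R-CNN API that Usiigaci builds on; the config values, the weights path, and the `test_images`/`test_gt_masks` lists are placeholders you would fill in, and `segmentation_metrics` is the helper sketched above:

```python
import skimage.io
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    # Must match the configuration used for training; values here are illustrative.
    NAME = "cells"
    NUM_CLASSES = 1 + 1   # background + cell
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1    # detect one image at a time

# Rebuild the same architecture in inference mode and load the trained weights;
# there is no separate "test" mode -- inference mode is what you evaluate with.
model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="./logs")
model.load_weights("mask_rcnn_final.h5", by_name=True)  # path to your saved weights

for image_path, gt_mask in zip(test_images, test_gt_masks):  # your held-out test set
    image = skimage.io.imread(image_path)
    r = model.detect([image], verbose=0)[0]
    # Collapse the per-instance masks (H x W x N) into one binary foreground mask
    pred_mask = r["masks"].any(axis=-1)
    print(image_path, segmentation_metrics(pred_mask, gt_mask))
```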

Thank you for the explanation. I will give it a try now.

Hi,
I was able to calculate the segmentation metrics like the Jaccard index, etc., thanks to you.
I have another question. I am getting very bad tracking results on my images taken at 3 hour intervals, while the results at 5 minute intervals are good. I have tried tweaking the parameters in cell_tracking.py but am still not getting good tracking results; cell IDs change from frame to frame.
Can you please tell me which parameters I have to change to get good tracking results, or is this related to training? I took your weights and trained on 35 images for training and 5 for validation.
Thank you in advance

First, congrats.
Sorry I don't have a platform with all of these functions ready for you.

For tracking, I would say the first rule of thumb is to choose a good interval. You don't want to collect so many frames that they take too long to process, but you also don't want to make the interval so long that the tracker cannot tell the cells apart (basically, a Nyquist-like criterion should be satisfied). Unfortunately this is often quite empirical, because different cells migrate at different speeds.
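To put rough numbers on it: if your cells are, say, ~20 µm across and migrate at ~0.5 µm/min, they cover about half a cell diameter in 20 minutes, so a 5 minute interval is comfortably oversampled, while over 3 hours a cell can travel several diameters and swap places with its neighbors, which is exactly when IDs start switching.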

It is also possible, to a certain degree, that the tracking parameters need to be optimized.
To adjust the tracking parameters, please see cell_tracking.py, lines 46~53.
These are the weighted parameters used for tracking.
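As a generic illustration of how weighted parameters enter frame-to-frame linking (this is not the actual Usiigaci implementation, just the common pattern: build a weighted cost between detections in consecutive frames and solve the assignment; the weights and features here are hypothetical):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative weights -- they set the relative importance of displacement vs. size change.
W_DIST, W_AREA = 1.0, 0.5

def link_frames(cells_prev, cells_next):
    """Match cells across two frames by minimizing a weighted cost.

    Each cell is a dict with 'centroid' (y, x) and 'area' in pixels;
    returns matched (prev_index, next_index) pairs.
    """
    cost = np.zeros((len(cells_prev), len(cells_next)))
    for i, a in enumerate(cells_prev):
        for j, b in enumerate(cells_next):
            dist = np.hypot(*np.subtract(a["centroid"], b["centroid"]))
            area_change = abs(a["area"] - b["area"]) / max(a["area"], 1)
            cost[i, j] = W_DIST * dist + W_AREA * area_change
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```

Note that with a longer frame interval the displacement term grows for every cell, so weights tuned on 5 minute data will generally need retuning for 3 hour data.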

Best,

No new updates; considering this closed.