C-Achard / EmbedSeg-napari

Extending EmbedSeg for use in napari

EmbedSeg-napari

Here, we attempt to extend EmbedSeg for use with napari. This project is in an early phase of development, and usability should improve over the coming weeks. Please feel free to open an issue, suggest a feature request, or propose a change to the UI layout.


Getting started

Create a new Python environment with napari installed (referred to here as napari-env). Next, run the following commands in a terminal window:

git clone https://github.com/juglab/EmbedSeg-napari
cd EmbedSeg-napari
conda activate napari-env
python3 -m pip install -e .
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
napari

Obtaining instance mask predictions on new images using pretrained models

Place all the images you wish to segment under the directory test/images/. We provide three 2D and four 3D pretrained models here. Then follow these steps:

  1. Select the Predict subpanel under the EmbedSeg-napari plugin.
  2. Browse for the test directory containing the images you wish to evaluate. (Evaluation images should be present as test/images/*.tif.)
  3. Next, browse for the pretrained model weights. (Pretrained model weights have the extension *.pth.)
  4. Then browse for the data_properties.json file, which carries some dataset-specific properties.
  5. Check the test-time augmentation, Save Images and Save Results checkboxes.
  6. Next, browse for a directory where you would like to export the predicted instance segmentation tiff images.
  7. If everything has gone well so far, the paths to all the specified files are printed in the terminal window.
  8. Now we are ready to click the Predict push button.
  9. All test images are processed one by one, and the original image and network prediction are loaded into the napari viewer window.
  10. If ground-truth instance masks are available, the accuracy of the predictions can also be calculated in terms of mean average precision (mAP).
  11. Toggle the visibility of the image layers back and forth to inspect the quality of the instance mask prediction.
  12. The images and predictions can also be dragged and dropped from the save directory into another viewer such as Fiji.
(Demo video: embedseg_predict.mp4)
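As background for step 10, mAP-style evaluation rests on matching predicted instances to ground-truth instances by intersection-over-union (IoU). The sketch below is a simplified illustration, not EmbedSeg's actual evaluation code; the `instance_iou` helper and the convention that 0 marks background are assumptions here:

```python
import numpy as np

def instance_iou(pred, gt, pred_id, gt_id):
    """IoU between one predicted instance and one ground-truth instance.

    `pred` and `gt` are label images where each instance has a unique
    integer id and 0 is background (a common convention; EmbedSeg's
    own evaluation code may differ in its details).
    """
    p = pred == pred_id
    g = gt == gt_id
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return inter / union if union else 0.0

# Toy 4x4 label images: one predicted instance vs. one ground-truth instance
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(instance_iou(pred, gt, 1, 1))  # intersection 3 / union 4 = 0.75
```

A prediction is typically counted as a true positive when its IoU with a ground-truth instance exceeds a threshold (e.g. 0.5), and mAP averages the resulting precision over a range of thresholds.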

Training and Visualization

  1. Select the Train subpanel under the EmbedSeg-napari plugin.
  2. Browse for the crops generated in the preprocessing stage. Single-click on the directory one level above train and val.
  3. Browse for the data_properties.json file, which carries some dataset-specific properties.
  4. Browse for the directory where intermediate model weights and log files should be saved.
  5. Set the other parameters, such as the train and val sizes, the train and val batch sizes, etc.
  6. Click the Begin training button.
  7. Note that the visualization is updated internally every five training and validation steps.
  8. Stop at any time and resume from the last checkpoint by browsing to the last saved model weights (checkpoint.pth).
(Demo video: embedseg_train.mp4)
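The data_properties.json file referenced in step 3 is a plain JSON file and can be inspected with the standard json module. The keys below are purely illustrative placeholders, not the actual schema EmbedSeg uses:

```python
import json
import os
import tempfile

# Hypothetical example of what a dataset-properties file might contain;
# the real data_properties.json produced by EmbedSeg's preprocessing
# may use different keys and values.
props = {
    "pixel_size_y": 1.0,
    "pixel_size_x": 1.0,
    "min_object_size": 36,
    "one_hot": False,
}

path = os.path.join(tempfile.mkdtemp(), "data_properties.json")
with open(path, "w") as f:
    json.dump(props, f, indent=2)

# Reading the file back, as the Train subpanel would when you browse to it
with open(path) as f:
    loaded = json.load(f)
print(loaded["min_object_size"])  # 36
```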

TODOs

  • Add visualization for virtual batch sizes > 1
  • Add code for displaying embeddings
  • Fix the callback on the Stop training button
  • Show visualizations for 3D as volumetric images and not as z-slices
  • Use a thread worker while generating crops in the preprocessing panel
  • Remove the EmbedSeg core code and include it as a pip package
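The thread-worker item refers to running crop generation off the UI thread so the panel stays responsive; napari ships a thread_worker decorator for exactly this. A generic, napari-free sketch of the same idea using only the standard library (generate_crop is a hypothetical stand-in for the real crop routine):

```python
from concurrent.futures import ThreadPoolExecutor

def generate_crop(i):
    # Stand-in for the real, potentially slow crop-generation routine.
    return f"crop_{i}"

# Dispatch the work to a background pool so the caller (in napari,
# the GUI event loop) is not blocked while crops are produced.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(generate_crop, range(4)))
print(results)  # ['crop_0', 'crop_1', 'crop_2', 'crop_3']
```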

Issues

If you encounter any problems, please file an issue along with a detailed description.

About

License: BSD 3-Clause "New" or "Revised" License

