Here, we attempt to extend EmbedSeg for use with napari. This is in an early phase of development, and usability should improve over the coming weeks. Please feel free to open an issue, suggest a feature request, or propose a change in the UI layout.
Create a new python environment with a napari installation (referred to here as `napari-env`). Next, run the following commands in a terminal window:
```
git clone https://github.com/juglab/EmbedSeg-napari
cd EmbedSeg-napari
conda activate napari-env
python3 -m pip install -e .
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
napari
```
Place all the images you wish to segment under the directory `test/images/`. We provide three 2D and four 3D pretrained models here. One could follow this sequence of steps:
- Select the Predict subpanel under the EmbedSeg-napari plugin.
- Browse for the test directory containing the images you wish to evaluate on (evaluation images should be present as `test/images/*.tif`).
- Next, browse for the pretrained model weights (pretrained model weights have the extension `*.pth`).
- Then, browse for the `data_properties.json` file, which carries some dataset-specific properties.
- Check the test-time augmentation, Save Images, and Save Results checkboxes.
- Next, browse for a directory where you would like to export the predicted instance-segmentation tiff images.
- If everything went well so far, the paths to all the specified files are printed in the terminal window.
- Now we are ready to click the Predict push button.
- All test images are processed one by one, and the original image and the network prediction are loaded into the napari viewer window.
- If ground-truth instance masks are available, one can also calculate the accuracy of the predictions in terms of mAP.
- Toggle the visibility of the image layer back and forth to judge the quality of the instance-mask prediction.
- One could also drag and drop the images and predictions from the save directory into another viewer such as Fiji.
embedseg_predict.mp4
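The mAP evaluation mentioned above matches predicted instances to ground-truth instances by their intersection-over-union. The following is a minimal numpy sketch of that idea, computing accuracy at a single IoU threshold; it is an illustration of the principle, not EmbedSeg's exact implementation:

```python
import numpy as np

def instance_iou(gt, pred):
    """IoU matrix between ground-truth and predicted instance labels
    (label 0 is treated as background)."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    iou = np.zeros((len(gt_ids), len(pred_ids)))
    for a, i in enumerate(gt_ids):
        for b, j in enumerate(pred_ids):
            inter = np.logical_and(gt == i, pred == j).sum()
            union = np.logical_or(gt == i, pred == j).sum()
            iou[a, b] = inter / union
    return iou

def average_precision(gt, pred, threshold=0.5):
    """Accuracy TP / (TP + FP + FN) at one IoU threshold; for
    thresholds >= 0.5 each ground-truth instance can match at most
    one predicted instance."""
    iou = instance_iou(gt, pred)
    tp = int((iou.max(axis=1) >= threshold).sum()) if iou.size else 0
    fn = iou.shape[0] - tp
    fp = iou.shape[1] - tp
    return tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
```

Averaging this score over several thresholds (e.g. 0.50 to 0.95) gives the mAP-style number reported for the predictions.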
- Select the Train subpanel under the EmbedSeg-napari plugin.
- Browse for the crops generated in the preprocessing stage (single-click on the directory one level above `train` and `val`).
- Browse for the `data_properties.json` file, which carries some dataset-specific properties.
- Browse for a directory where intermediate model weights and log files should be saved.
- Set the other parameters, such as the train and val size, the train and val batch size, etc.
- Click on the Begin training button.
- Note that, internally, the visualization is updated every 5 training and validation steps.
- Stop at any time and resume from the last checkpoint by browsing to the last saved model weights (`checkpoint.pth`).
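The parameters set in the panel correspond roughly to a configuration like the sketch below; the key names and values here are illustrative assumptions, not the exact ones used by EmbedSeg-napari:

```python
# Illustrative training configuration (key names and values are
# assumptions for the sake of the sketch, not EmbedSeg-napari's
# actual parameter names).
train_config = {
    "train_size": 600,       # crops sampled per training epoch
    "val_size": 100,         # crops sampled per validation epoch
    "train_batch_size": 16,
    "val_batch_size": 16,
    "n_epochs": 200,
    "save_dir": "./experiment/",  # intermediate weights and log files
    "resume_path": None,          # e.g. a previously saved checkpoint
}
```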
embedseg_train.mp4
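Resuming from a saved `checkpoint.pth` follows the standard PyTorch pattern. A toy sketch, using a stand-in model; the exact contents of an EmbedSeg checkpoint are an assumption here:

```python
import torch
import torch.nn as nn

# A toy model standing in for the EmbedSeg network (assumption: the
# real checkpoint stores at least the model and optimizer states).
model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Saving an intermediate checkpoint during training:
torch.save({
    "epoch": 10,
    "model_state_dict": model.state_dict(),
    "optim_state_dict": optimizer.state_dict(),
}, "checkpoint.pth")

# Resuming later: restore the weights and optimizer state, then
# continue training from the next epoch.
state = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(state["model_state_dict"])
optimizer.load_state_dict(state["optim_state_dict"])
start_epoch = state["epoch"] + 1
```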
- Add visualization for virtual batch > 1
- Add code for displaying embedding
- Fix the callback on the Stop training button
- Show visualizations for 3D as volumetric images and not as z-slices
- Use `thread_worker` while generating crops in the preprocessing panel
- Remove the EmbedSeg core code and include it as a pip package
If you encounter any problems, please file an issue along with a detailed description.