Automated labelme

Image Annotation Tool with Automated labelme in Python (automated rectangle or polygonal annotation)



Description

Labelme is a graphical image annotation tool inspired by http://labelme.csail.mit.edu. It is written in Python and uses Qt for its graphical interface. You can find the original tool at LabelMe in Python. In addition to this tool, a plugin has been developed that provides automatic annotation for PCM time series, which have very few annotations and are very difficult to label due to artefacts. The method available at ConvNeXt MaskR-CNN uses a model pre-trained on PCM data to obtain semi-automatically annotated images. As a result, the plugin developed for PCM images lets you speed up annotation considerably.

Also, if you want to train a model for your own task, see ConvNeXt MaskR-CNN. After you prepare your model, be sure to adjust the detector.py file and the default_config.yaml file to match your work. Comments explaining these settings are available inside the files.
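As a rough illustration of what that adjustment looks like, the sketch below loads and edits a config file. The path and the keys (checkpoint, score_threshold, classes) are placeholders rather than the actual fields of default_config.yaml, so follow the comments inside detector.py and default_config.yaml for the real names.

# Illustrative sketch only: key names and the config path are assumptions,
# not the actual fields of default_config.yaml.
import yaml

CONFIG_PATH = "default_config.yaml"   # placeholder path

with open(CONFIG_PATH) as f:
    cfg = yaml.safe_load(f)

cfg["checkpoint"] = "model/my_convnext_maskrcnn.pth"   # your trained weights
cfg["score_threshold"] = 0.5                           # default confidence cut-off
cfg["classes"] = ["cell"]                              # labels your model predicts

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(cfg, f)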

Various primitives (polygon, rectangle, circle, line, and point).

Features

  • Image annotation for polygon, rectangle, circle, line and point.
  • Image flag annotation for classification and cleaning. (#166)
  • Video annotation.
  • GUI customization (predefined labels / flags, auto-saving, label validation, etc). (#144)
  • Exporting VOC-format dataset for semantic/instance segmentation.
  • Exporting COCO-format dataset for instance segmentation.
  • Pre-trained model for semi-automatic image annotation: Drive link for PCM Cell Detection (bbox and segmentation auto annotation).
  • Ease of use in automatic detection.
  • Useful for fast data labelling in data-scarce environments such as cell detection, segmentation and tracking.

Requirements

  • Linux / macOS / Windows
  • Python 3.7 or later
  • PyQt5 / PySide2
  • PyTorch

Installation

Linux, Windows and macOS

git clone https://github.com/mberkay0/automated-labelme.git
cd automated-labelme

python -m venv env
# then activate the virtual env:
# .\env\Scripts\activate       # Windows
# source ./env/bin/activate    # Linux, macOS

pip install .
# then launch from the command line
labelme

Usage

Before you begin, copy the pre-trained model into the model folder. If you want to use the model used in the project, download it from the Drive link above.

  • You can select the detection type, either 'bbox' or 'mask', from the "Select detection type" section under the "Detection" menu. The default is 'mask'.

  • Then load the data using the Open / Open Dir buttons.


  • Next, let's look at other settings for automatic annotation.

  • If you have selected the 'mask' detection type, you can change the number of samples. This value controls how densely the detected points are annotated and ranges from 0 to 120. Values close to 0 keep more points, while values close to 120 keep fewer. If 0 is given, all detected points are created. The default is 25; enter values close to 0 for a more detailed annotation (see the sketch below).

  • Enter a value between 0 and 1 to threshold the probability map generated by the model and keep only reliable results. A value close to 1 keeps only the detections the model predicted with high confidence.

  • Press the Detect button for automatic annotation and wait a moment. All automatic detections will be annotated with the selected type.

  • Finally, you can review the automatic detection results on the image.

Automated annotation result types (polygon, rectangle).
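The number-of-samples and confidence-threshold settings roughly correspond to the post-processing sketched below. This is an illustration only, not the project's actual code: the real logic lives in detector.py, and the function name, mask/score layout, and use of OpenCV here are assumptions.

# Illustrative sketch only: the real logic lives in detector.py and may differ.
import cv2           # OpenCV, used here for contour extraction (assumption)
import numpy as np

def filter_and_sample(masks, scores, score_threshold=0.5, n_samples=25):
    """Keep confident detections and thin out each mask's contour points.

    masks  -- list of binary (H, W) numpy arrays, one per detection (assumption)
    scores -- list of confidence values in [0, 1], one per detection
    """
    polygons = []
    for mask, score in zip(masks, scores):
        if score < score_threshold:          # the 0-to-1 confidence threshold
            continue
        contours, _ = cv2.findContours(
            mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE
        )
        if not contours:
            continue
        contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
        # "Number of samples": keep every n-th contour point; 0 keeps them all.
        step = n_samples if n_samples > 0 else 1
        polygons.append(contour[::step])
    return polygons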

Run labelme --help for details. The annotations are saved as a JSON file, similar to the COCO dataset format.
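For example, a saved annotation file can be inspected with a few lines of Python. The keys below follow the standard labelme JSON layout; "example.json" is just a placeholder for a file produced by this tool.

# Read one labelme-style annotation file and list its shapes.
import json

with open("example.json") as f:
    annotation = json.load(f)

print(annotation["imagePath"], annotation["imageHeight"], annotation["imageWidth"])
for shape in annotation["shapes"]:
    # Each shape stores its label, its type (polygon, rectangle, ...) and its points.
    print(shape["label"], shape["shape_type"], len(shape["points"]), "points")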

How to build standalone executable

The following shows how to build the standalone executable on macOS, Linux and Windows.

# clone the repo
git clone https://github.com/mberkay0/automated-labelme.git
cd automated-labelme

python -m venv env
# then activate the virtual env:
# .\env\Scripts\activate       # Windows
# source ./env/bin/activate    # Linux, macOS

# build the standalone executable
pip install .
pip install pyinstaller
pyinstaller labelme.spec

Command Line Arguments

  • --output specifies the location that annotations will be written to. If the location ends with .json, a single annotation will be written to this file. Only one image can be annotated if a location is specified with .json. If the location does not end with .json, the program will assume it is a directory. Annotations will be stored in this directory with a name that corresponds to the image that the annotation was made on.
  • The first time you run labelme, it will create a config file in ~/.labelmerc. You can edit this file and the changes will be applied the next time that you launch labelme. If you would prefer to use a config file from another location, you can specify this file with the --config flag.
  • Without the --nosortlabels flag, the program will list labels in alphabetical order. When the program is run with this flag, it will display labels in the order that they are provided.
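For example (file and directory names are placeholders):

# annotate a single image and write one annotation file
labelme image.png --output image.json

# annotate a directory of images, one JSON file per image, keeping label order
labelme data_dir/ --output annotations/ --nosortlabels

# use a config file from a custom location
labelme --config /path/to/labelmerc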

For more details, check the repository this project was forked from.

Acknowledgement

This repo is a fork of wkentaro/pylabelme.

License

MIT License

