Preparing high-quality datasets for Machine Learning tasks requires much time and effort. Leveraging the power of MobileSAM [3,2,1,4] (segment-anything), this project provides simple segmentation by mouse click. Accepted input types are web-cam-by-index, *.mp4, *.jpg, *.png, *.meta.
As a post-processing step, the segmented images can be overlaid on different random backgrounds, with varying positions and sizes. Use `training_sampe_gen.py`; as output, a `.meta` file will be generated that holds the bounding box positions and the label of the object.
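As an illustration of the overlay idea, the sketch below pastes a segmented object (an RGBA crop) onto a background at a random position and returns the resulting bounding box. This is not the project's code: the function name, the NumPy-based compositing, and the box format `(x0, y0, x1, y1)` are all assumptions (random scaling is omitted for brevity).

```python
import numpy as np

def overlay(background, obj_rgba, rng):
    """Paste obj_rgba onto background at a random position; return image + bbox."""
    bh, bw = background.shape[:2]
    oh, ow = obj_rgba.shape[:2]
    # random top-left corner so the object fits fully inside the background
    y = rng.integers(0, bh - oh + 1)
    x = rng.integers(0, bw - ow + 1)
    out = background.copy()
    alpha = obj_rgba[..., 3:4] / 255.0  # per-pixel opacity from the alpha channel
    region = out[y:y + oh, x:x + ow]
    out[y:y + oh, x:x + ow] = (alpha * obj_rgba[..., :3] + (1 - alpha) * region).astype(np.uint8)
    return out, (x, y, x + ow, y + oh)  # bounding box in pixel coordinates

rng = np.random.default_rng(0)
bg = rng.integers(0, 255, (120, 160, 3), dtype=np.uint8)   # random background
obj = np.zeros((30, 40, 4), dtype=np.uint8)
obj[..., 1] = 255  # fully opaque green rectangle as a stand-in for a segmented object
obj[..., 3] = 255
img, bbox = overlay(bg, obj, rng)
```

The returned bounding box is exactly the information a `.meta` annotation needs to record alongside the label.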
It is recommended to use a conda environment for this installation.

- create and activate the environment: `conda create --name fsannotator python=3.10`, then `conda activate fsannotator`
- if you have CUDA installed and want to use the GPU, check `nvidia-smi` for the CUDA version (e.g. 11.2), then check the pytorch versions [5] for the corresponding installation of the components and use the matching `pip` variant
- clone this project: `git clone https://github.com/fvilmos/frame_annotator`
- install segment-anything: `pip install git+https://github.com/facebookresearch/segment-anything.git`
- install MobileSAM: `pip install git+https://github.com/ChaoningZhang/MobileSAM.git`
- download the weights from https://github.com/ChaoningZhang/MobileSAM/blob/master/weights/mobile_sam.pt and copy them into the `./weights` folder
- configure your data collection strategy in the `./utils/fa_cfg.json` file
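Once everything is installed, the click-to-mask flow can be sketched with the segment-anything-style API that MobileSAM exposes (`sam_model_registry`, `SamPredictor`). This is an illustrative sketch, not the project's own code; the imports are kept inside the function so it reads without the packages installed, and it needs the downloaded `mobile_sam.pt` weights to actually run.

```python
def click_segment(image_path, point_xy, checkpoint="./weights/mobile_sam.pt"):
    """Segment the object under a clicked (x, y) point with MobileSAM."""
    # local imports so the sketch is readable without MobileSAM installed
    import cv2
    import numpy as np
    from mobile_sam import sam_model_registry, SamPredictor

    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    sam = sam_model_registry["vit_t"](checkpoint=checkpoint)  # "vit_t" = MobileSAM backbone
    sam.eval()
    predictor = SamPredictor(sam)
    predictor.set_image(image)

    # one foreground click (label 1) at the given pixel position
    masks, scores, _ = predictor.predict(
        point_coords=np.array([point_xy]),
        point_labels=np.array([1]),
        multimask_output=False,
    )
    return masks[0]  # boolean mask of the clicked object
```

In the tool itself, the clicked position would come from the mouse callback on the displayed frame.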
All data capture-related information is in the `./utils/fa_cfg.json` file; take a look into it to understand the parameters. The training set generator `training_sampe_gen.py` has its own configuration file, `./utils/ts_cfg.json`.
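For scripting around the tool, the JSON configurations can be loaded with the standard library. The keys below are placeholders for illustration only; the real parameter names live in `./utils/fa_cfg.json`.

```python
import json
from pathlib import Path

# Placeholder keys for illustration; consult ./utils/fa_cfg.json for the real parameters.
defaults = {"input": 0, "label": "object"}

cfg_path = Path("./utils/fa_cfg.json")
cfg = dict(defaults)
if cfg_path.exists():  # inside the repo this file is always present
    cfg.update(json.loads(cfg_path.read_text()))
print(cfg["input"])
```

Merging the file over a dict of defaults keeps a script working even when a parameter is missing from the config.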
1. segment-anything
2. MobileSAM
3. FASTER SEGMENT ANYTHING: TOWARDS LIGHTWEIGHT SAM FOR MOBILE APPLICATIONS
4. Segment Anything
5. torch CUDA enabled versions
6. CARLA simulator
Enjoy.