Some ideas about the 'Enable model services' section
Yukun-Cui opened this issue
Enable model services
Make sure you have installed the NVIDIA Driver and the NVIDIA Container Toolkit. You do not need to install the CUDA Toolkit, as it is already contained in the model image.
You need to set `"default-runtime"` to `"nvidia"` in `/etc/docker/daemon.json` and restart Docker to enable the NVIDIA Container Toolkit:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```
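As a quick sanity check before restarting Docker, the edited `daemon.json` can be parsed to confirm the two settings above are in place. This is a hypothetical helper sketch, not part of the platform:

```python
import json


def nvidia_is_default_runtime(daemon_json_path="/etc/docker/daemon.json"):
    """Return True if daemon.json registers the "nvidia" runtime and
    makes it the default, matching the snippet quoted above."""
    try:
        with open(daemon_json_path) as f:
            config = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        # Missing or malformed config: Docker will not use the nvidia runtime
        return False
    return (
        config.get("default-runtime") == "nvidia"
        and "nvidia" in config.get("runtimes", {})
    )


if __name__ == "__main__":
    print(nvidia_is_default_runtime())
```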
In the `Enable model services` section, you need to set `default-runtime` to `nvidia`. If you use Docker Desktop + WSL 2.0, you have to set it in the `Docker Engine` settings of Docker Desktop, just like the picture below.
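For reference, the JSON pasted into Docker Desktop's `Docker Engine` settings editor would carry the same two keys as the `daemon.json` snippet quoted earlier (the runtime path shown is the standard NVIDIA Container Toolkit value):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```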
May I ask if you have successfully run the platform through WSL 2.0? Is it possible that some components do not support WSL 2.0, causing the platform to fail?
Yes, I have successfully run the platform through Docker Desktop + WSL 2.0.
Great! I will add it to the README!
I am trying to annotate images for semantic segmentation. I would like to make use of pretrained models to assist this segmentation.
However, the COCO object-detection model is the only one available.
Info:
Processor: 11th Gen Intel(R) Core(TM) i9-11950H @ 2.60GHz 2.61 GHz
Installed RAM: 64.0 GB
OS: Windows 10 Enterprise
GPU: NVIDIA RTX A3000 Laptop
CUDA Version: 12.3
Driver Version: 546.12
WSL2:
Description: Ubuntu 22.04.4 LTS
Release: 22.04
Codename: jammy
You need to find a segmentation model.
Pre-trained models for semantic segmentation are generally less accurate, so it's better to label the images manually.
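For the pre-annotation workflow asked about above, here is a minimal sketch of how a pretrained model's output could seed editable masks. It assumes, hypothetically, a model that returns a per-class score map of shape (C, H, W); the argmax mask is only a starting point to be corrected by hand, per the advice above:

```python
import numpy as np


def scores_to_mask(scores: np.ndarray) -> np.ndarray:
    """Collapse per-pixel class scores (C, H, W) from a semantic
    segmentation model into a label mask (H, W) via argmax over
    the class axis."""
    return np.argmax(scores, axis=0).astype(np.uint8)


# Tiny 2-class, 2x2 example
scores = np.array([
    [[0.9, 0.2],
     [0.1, 0.8]],  # scores for class 0
    [[0.1, 0.8],
     [0.9, 0.2]],  # scores for class 1
])
print(scores_to_mask(scores))  # -> [[0 1]
                               #     [1 0]]
```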