tomas-gajarsky / facetorch

Python library for analysing faces using PyTorch

Permission error when downloading the models

AmmarRashed opened this issue · comments

Hello,
This might sound stupid, but I am trying to learn the tool.
When I try to initialize the analyzer using the config from the demo notebook, I get a "permission error" for the face detector. For some reason, it cannot find the model.

InstantiationException: Error in call to target 'facetorch.analyzer.detector.core.FaceDetector': PermissionError(13, 'Permission denied') full_key: analyzer.detector

Thank you,

Hello,

I ran into the same problem: from a Jupyter notebook I had no permission to download the models. They can still be downloaded from a browser, though. Use the Google Drive IDs provided in the config file and store the files in the folder structure the config expects. After downloading all the models in the browser, it works properly. Here is the structure (under /opt):

facetorch
├── data
│   └── 3dmm
│       └── meta.pt
└── models
    └── torchscript
        ├── detector
        │   └── 1
        │       └── model.pt
        └── predictor
            ├── align
            │   └── 1
            │       └── model.pt
            ├── au
            │   └── 1
            │       └── model.pt
            ├── deepfake
            │   └── 1
            │       └── model.pt
            ├── embed
            │   └── 1
            │       └── model.pt
            ├── fer
            │   └── 2
            │       └── model.pt
            ├── va
            │   └── 1
            │       └── model.pt
            └── verify
                └── 2
                    └── model.pt
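If you go the manual route, the tree above can be created in one go. A small sketch follows; it builds the layout under ./facetorch so it does not need root — use /opt/facetorch (with sudo) if you want to match the default config:

```shell
# Recreate the folder layout facetorch expects for the downloaded files.
ROOT=./facetorch   # change to /opt/facetorch to match the default config

mkdir -p "$ROOT/data/3dmm"
mkdir -p "$ROOT/models/torchscript/detector/1"
for d in align/1 au/1 deepfake/1 embed/1 fer/2 va/1 verify/2; do
  mkdir -p "$ROOT/models/torchscript/predictor/$d"
done
```

After the folders exist, drop each downloaded model.pt (and the 3dmm meta.pt) into its matching directory.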

Hello,

Thank you for reaching out and no worries, there are no stupid questions when it comes to learning about a new tool. Your issue with the "permission error" is indeed a valid concern and something we can work through together.

The error you're encountering typically happens due to Google Drive's limitations on download traffic for publicly shared files. When a file exceeds Google's download quota, it temporarily restricts further downloads, resulting in the PermissionError you're seeing.

To resolve this, you have a couple of options:

  1. Manual Download:

    • As @davidorp rightly pointed out, manually downloading the models can circumvent this limitation. You can download the required model files directly from the Google Drive links provided in the documentation or the demo notebook. Once downloaded, place them in the expected folder structure within your project. This approach ensures that the analyzer can access the models without hitting Google Drive's download quota.
  2. Implementing a Custom Downloader:

    • Another solution is to implement a custom downloader that fetches the model files from an alternative hosting platform that doesn't impose such download limits. This would involve modifying the codebase to integrate the downloader and ensuring that it correctly fetches and caches the models. If you choose this route, consider hosting platforms like AWS S3, Dropbox, or a dedicated server, which offer more control over file access and traffic.
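The manual-download option can also be scripted. The sketch below is not part of facetorch itself: it uses the third-party gdown package (which handles Google Drive's confirm pages), the file IDs and relative paths are copied from the gpu.config.yml posted later in this thread, and the /opt/facetorch root is an assumption matching the default layout.

```python
from pathlib import Path

# Google Drive file IDs and relative destinations, copied from the
# downloader entries in gpu.config.yml.
MODELS = {
    "1eMuOdGkiNCOUTiEbKKoPCHGCuDgiKeNC": "models/torchscript/detector/1/model.pt",
    "19h3kqar1wlELAmM5hDyj9tlrUh8yjrCl": "models/torchscript/predictor/embed/1/model.pt",
    "1WI-mP_0mGW31OHfriPUsuFS_usYh_W8p": "models/torchscript/predictor/verify/2/model.pt",
    "1xoB5VYOd0XLjb-rQqqHWCkQvma4NytEd": "models/torchscript/predictor/fer/2/model.pt",
    "1uoVX9suSA5JVWTms3hEtJKzwO-CUR_jV": "models/torchscript/predictor/au/1/model.pt",
    "1Xl4ilNCU_DgKNhITrXb3UyQUUdm3VTKS": "models/torchscript/predictor/va/1/model.pt",
    "1GjDTwQpvrkCjXOdiBy1oMkzm7nt-bXFg": "models/torchscript/predictor/deepfake/1/model.pt",
    "16gNFQdEH2nWvW3zTbdIAniKIbPAp6qBA": "models/torchscript/predictor/align/1/model.pt",
    "11tdAcFuSXqCCf58g52WT1Rpa8KuQwe2o": "data/3dmm/meta.pt",
}


def local_path(root: str, rel: str) -> Path:
    """Resolve a relative model path against the facetorch root folder."""
    return Path(root) / rel


def download_all(root: str = "/opt/facetorch") -> None:
    """Fetch every missing model file into the expected folder layout."""
    import gdown  # pip install gdown

    for file_id, rel in MODELS.items():
        dest = local_path(root, rel)
        dest.parent.mkdir(parents=True, exist_ok=True)
        if not dest.exists():
            gdown.download(id=file_id, output=str(dest), quiet=False)
```

Note that download_all() needs write access to /opt, so either run it with the appropriate permissions or point root somewhere writable and adjust the paths in the config to match.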

Please let me know if you need further assistance with either of these solutions or if you encounter any other issues.

Best regards,
Tomas

Thank you so much. That solves the problem. There is some issue with the AU model; it just never finishes (no errors, though). That's out of the scope of this issue ticket, however. Thanks!

On Ubuntu 22, I simply changed the folder path in gpu.config.yml from "/opt/facetorch/data/3dmm/meta.pt" to "opt/facetorch/data/3dmm/meta.pt", and did the same for all paths in the .yml. This makes the paths relative to the working directory, so no write access to /opt is needed.
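That edit can be applied to every path at once with sed. A sketch, demonstrated here on a generated sample file rather than the real config (point FILE at your gpu.config.yml to use it for real):

```shell
# Strip the leading slash from every /opt/facetorch path in a YAML file.
FILE=sample.config.yml
printf 'path_local: /opt/facetorch/models/torchscript/detector/1/model.pt\n' > "$FILE"

# -i.bak edits in place and keeps a .bak backup (works with GNU and BSD sed).
sed -i.bak 's|/opt/facetorch|opt/facetorch|g' "$FILE"
cat "$FILE"
```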

This is my gpu.config.yml:

analyzer:
  device: cuda
  optimize_transforms: true
  reader:
    _target_: facetorch.analyzer.reader.ImageReader
    device:
      _target_: torch.device
      type: ${analyzer.device}
    optimize_transform: ${analyzer.optimize_transforms}
    transform:
      _target_: torchvision.transforms.Compose
      transforms:
        - _target_: facetorch.transforms.SquarePad
        - _target_: torchvision.transforms.Resize
          size:
            - 1080
          antialias: True
  detector:
    _target_: facetorch.analyzer.detector.FaceDetector
    downloader:
      _target_: facetorch.downloader.DownloaderGDrive
      file_id: 1eMuOdGkiNCOUTiEbKKoPCHGCuDgiKeNC
      path_local: opt/facetorch/models/torchscript/detector/1/model.pt
    device:
      _target_: torch.device
      type: ${analyzer.device}
    reverse_colors: true
    preprocessor:
      _target_: facetorch.analyzer.detector.pre.DetectorPreProcessor
      transform:
        _target_: torchvision.transforms.Compose
        transforms:
          - _target_: torchvision.transforms.Normalize
            mean:
              - 104.0
              - 117.0
              - 123.0
            std:
              - 1.0
              - 1.0
              - 1.0
      device:
        _target_: torch.device
        type: ${analyzer.device}
      optimize_transform: ${analyzer.optimize_transforms}
      reverse_colors: ${analyzer.detector.reverse_colors}
    postprocessor:
      _target_: facetorch.analyzer.detector.post.PostRetFace
      transform: None
      device:
        _target_: torch.device
        type: ${analyzer.device}
      optimize_transform: ${analyzer.optimize_transforms}
      confidence_threshold: 0.02
      top_k: 5000
      nms_threshold: 0.4
      keep_top_k: 750
      score_threshold: 0.6
      prior_box:
        _target_: facetorch.analyzer.detector.post.PriorBox
        min_sizes:
          - - 16
            - 32
          - - 64
            - 128
          - - 256
            - 512
        steps:
          - 8
          - 16
          - 32
        clip: false
        variance:
          - 0.1
          - 0.2
      reverse_colors: ${analyzer.detector.reverse_colors}
      expand_box_ratio: 0.0
  unifier:
    _target_: facetorch.analyzer.unifier.FaceUnifier
    transform:
      _target_: torchvision.transforms.Compose
      transforms:
        - _target_: torchvision.transforms.Normalize
          mean:
            - -123.0
            - -117.0
            - -104.0
          std:
            - 255.0
            - 255.0
            - 255.0
        - _target_: torchvision.transforms.Resize
          size:
            - 380
            - 380
          antialias: True
    device:
      _target_: torch.device
      type: ${analyzer.device}
    optimize_transform: ${analyzer.optimize_transforms}
  predictor:
    embed:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 19h3kqar1wlELAmM5hDyj9tlrUh8yjrCl
        path_local: opt/facetorch/models/torchscript/predictor/embed/1/model.pt
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
            - _target_: torchvision.transforms.Resize
              size:
                - 244
                - 244
              antialias: True
            - _target_: torchvision.transforms.Normalize
              mean:
                - 0.485
                - 0.456
                - 0.406
              std:
                - 0.228
                - 0.224
                - 0.225
        device:
          _target_: torch.device
          type: ${analyzer.predictor.embed.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: false
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostEmbedder
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.predictor.embed.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        labels:
          - abstract
    verify:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 1WI-mP_0mGW31OHfriPUsuFS_usYh_W8p
        path_local: opt/facetorch/models/torchscript/predictor/verify/2/model.pt
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
            - _target_: torchvision.transforms.Resize
              size:
                - 112
                - 112
              antialias: True
            - _target_: torchvision.transforms.Normalize
              mean:
                - 0.5
                - 0.5
                - 0.5
              std:
                - 0.5
                - 0.5
                - 0.5
        device:
          _target_: torch.device
          type: ${analyzer.predictor.verify.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: true
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostEmbedder
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.predictor.verify.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        labels:
          - abstract
    fer:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 1xoB5VYOd0XLjb-rQqqHWCkQvma4NytEd
        path_local: opt/facetorch/models/torchscript/predictor/fer/2/model.pt
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
            - _target_: torchvision.transforms.Resize
              size:
                - 260
                - 260
              antialias: True
            - _target_: torchvision.transforms.Normalize
              mean:
                - 0.485
                - 0.456
                - 0.406
              std:
                - 0.229
                - 0.224
                - 0.225
        device:
          _target_: torch.device
          type: ${analyzer.predictor.fer.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: false
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostArgMax
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.predictor.fer.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        dim: 1
        labels:
          - Anger
          - Contempt
          - Disgust
          - Fear
          - Happiness
          - Neutral
          - Sadness
          - Surprise
    au:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 1uoVX9suSA5JVWTms3hEtJKzwO-CUR_jV
        path_local: opt/facetorch/models/torchscript/predictor/au/1/model.pt # str
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
            - _target_: torchvision.transforms.Resize
              size:
                - 224
                - 224
              antialias: True
            - _target_: torchvision.transforms.Normalize
              mean:
                - 0.485
                - 0.456
                - 0.406
              std:
                - 0.229
                - 0.224
                - 0.225
        device:
          _target_: torch.device
          type: ${analyzer.predictor.au.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: false
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostMultiLabel
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.predictor.au.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        dim: 1
        threshold: 0.5
        labels:
          - inner_brow_raiser
          - outer_brow_raiser
          - brow_lowerer
          - upper_lid_raiser
          - cheek_raiser
          - lid_tightener
          - nose_wrinkler
          - upper_lip_raiser
          - nasolabial_deepener
          - lip_corner_puller
          - sharp_lip_puller
          - dimpler
          - lip_corner_depressor
          - lower_lip_depressor
          - chin_raiser
          - lip_pucker
          - tongue_show
          - lip_stretcher
          - lip_funneler
          - lip_tightener
          - lip_pressor
          - lips_part
          - jaw_drop
          - mouth_stretch
          - lip_bite
          - nostril_dilator
          - nostril_compressor
          - left_inner_brow_raiser
          - right_inner_brow_raiser
          - left_outer_brow_raiser
          - right_outer_brow_raiser
          - left_brow_lowerer
          - right_brow_lowerer
          - left_cheek_raiser
          - right_cheek_raiser
          - left_upper_lip_raiser
          - right_upper_lip_raiser
          - left_nasolabial_deepener
          - right_nasolabial_deepener
          - left_dimpler
          - right_dimpler
    va:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 1Xl4ilNCU_DgKNhITrXb3UyQUUdm3VTKS
        path_local: opt/facetorch/models/torchscript/predictor/va/1/model.pt
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
            - _target_: torchvision.transforms.Resize
              size:
                - 224
                - 224
              antialias: True
            - _target_: torchvision.transforms.Normalize
              mean:
                - 0.485
                - 0.456
                - 0.406
              std:
                - 0.229
                - 0.224
                - 0.225
        device:
          _target_: torch.device
          type: ${analyzer.predictor.va.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: false
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostLabelConfidencePairs
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.predictor.va.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        labels:
          - valence
          - arousal
        offsets:
          - 0
          - 0
    deepfake:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 1GjDTwQpvrkCjXOdiBy1oMkzm7nt-bXFg
        path_local: opt/facetorch/models/torchscript/predictor/deepfake/1/model.pt
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
            - _target_: torchvision.transforms.Resize
              size:
                - 380
                - 380
              antialias: True
            - _target_: torchvision.transforms.Normalize
              mean:
                - 0.485
                - 0.456
                - 0.406
              std:
                - 0.229
                - 0.224
                - 0.225
        device:
          _target_: torch.device
          type: ${analyzer.device}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: false
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostSigmoidBinary
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.device}
        optimize_transform: ${analyzer.optimize_transforms}
        labels:
          - Real
          - Fake
        threshold: 0.7
    align:
      _target_: facetorch.analyzer.predictor.FacePredictor
      downloader:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 16gNFQdEH2nWvW3zTbdIAniKIbPAp6qBA
        path_local: opt/facetorch/models/torchscript/predictor/align/1/model.pt
      device:
        _target_: torch.device
        type: ${analyzer.device}
      preprocessor:
        _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
        transform:
          _target_: torchvision.transforms.Compose
          transforms:
            - _target_: torchvision.transforms.Resize
              size:
                - 120
                - 120
              antialias: True
        device:
          _target_: torch.device
          type: ${analyzer.predictor.align.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        reverse_colors: false
      postprocessor:
        _target_: facetorch.analyzer.predictor.post.PostEmbedder
        transform: None
        device:
          _target_: torch.device
          type: ${analyzer.predictor.align.device.type}
        optimize_transform: ${analyzer.optimize_transforms}
        labels:
          - abstract
  utilizer:
    align:
      _target_: facetorch.analyzer.utilizer.align.Lmk3DMeshPose
      transform: None
      device:
        _target_: torch.device
        type: ${analyzer.device}
      optimize_transform: false
      downloader_meta:
        _target_: facetorch.downloader.DownloaderGDrive
        file_id: 11tdAcFuSXqCCf58g52WT1Rpa8KuQwe2o
        path_local: opt/facetorch/data/3dmm/meta.pt
      image_size: 120
    draw_boxes:
      _target_: facetorch.analyzer.utilizer.draw.BoxDrawer
      transform: None
      device:
        _target_: torch.device
        type: ${analyzer.device}
      optimize_transform: false
      color: green
      line_width: 3
    draw_landmarks:
      _target_: facetorch.analyzer.utilizer.draw.LandmarkDrawerTorch
      transform: None
      device:
        _target_: torch.device
        type: ${analyzer.device}
      optimize_transform: false
      width: 2
      color: green
logger:
  _target_: facetorch.logger.LoggerJsonFile
  name: facetorch
  level: 20
  path_file: opt/facetorch/logs/facetorch/main.log
  json_format: '%(asctime)s %(levelname)s %(message)s'
main:
  sleep: 3
  debug: true
  batch_size: 8
  fix_img_size: true
  return_img_data: true
  include_tensors: true