jolibrain / deepdetect

Deep Learning API and Server in C++14 with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and T-SNE

Home page: https://www.deepdetect.com/


Inconsistent predictions using refinedet model

YaYaB opened this issue · comments

commented

Configuration

  • Version of DeepDetect:
    • Locally compiled on:
      • Ubuntu 18.04 LTS
      • Other:
    • Docker CPU
    • Docker GPU
    • Amazon AMI
  • Commit (shown by the server when starting):
    GIT REF: heads/v0.13.0:1d85f0063f44f574cd1ca3fcfc9339d8c055f220

Your question / the problem you're facing:

I've been analysing the predictions made by detection models, more particularly the refinedet model, and found that predictions with a batch_size of at least 2 are not consistent.
To illustrate this I took a model available among deepdetect's models: faces-512.
The following script prepares everything needed to set up and launch the test:

# choose a base path
BASE_PATH=TODO
mkdir "$BASE_PATH/faces_512"
cd "$BASE_PATH/faces_512"

# Download and extract the model
wget https://deepdetect.com/models/init/desktop/images/detection/faces_512.tar.gz && tar -xvf faces_512.tar.gz

# Download corresponding DD's image
docker pull jolibrain/deepdetect_gpu

# Launch container based on this image
docker run --name dd --gpus 1 -u $(id -u ${USER}):$(id -g ${USER}) -v $BASE_PATH/faces_512:/opt/models -p 8083:8080 -it jolibrain/deepdetect_gpu:latest

Error message (if any) / steps to reproduce the problem:

  • Service creation
curl -X PUT "http://localhost:8083/services/face_512" -d '{
"mllib":"caffe",
"description":"face detection service",
"type":"supervised",
"parameters":{
    "input":{
    "connector":"image",
    "width":512,
    "height":512
    },
    "mllib": {
       "nclasses": 3,
       "best":-1,
       "gpuid":0,
       "net": {"test_batch_size":2}
    }
},
"model":{
    "repository":"/opt/models"
}
}'
  • Prediction
    If you launch this prediction request several times, you'll observe that the results obtained are not the same:
curl -X POST "http://localhost:8083/predict" -d '{
      "service":"face_512",
      "parameters":{
        "input":{
          "width":512,
          "height":512
        },
      "mllib":{"net":{"test_batch_size":2}},
        "output":{
          "bbox": true
        }
      },
      "data": ["https://picsum.photos/id/0/600/600", "https://picsum.photos/id/1/600/600", "https://picsum.photos/id/2/600/600", "https://picsum.photos/id/3/600/600", "https://picsum.photos/id/4/600/600", "https://picsum.photos/id/5/600/600", "https://picsum.photos/id/6/600/600", "https://picsum.photos/id/7/600/600", "https://picsum.photos/id/8/600/600", "https://picsum.photos/id/9/600/600", "https://picsum.photos/id/10/600/600", "https://picsum.photos/id/11/600/600", "https://picsum.photos/id/12/600/600", "https://picsum.photos/id/13/600/600", "https://picsum.photos/id/14/600/600", "https://picsum.photos/id/15/600/600", "https://picsum.photos/id/16/600/600", "https://picsum.photos/id/17/600/600", "https://picsum.photos/id/18/600/600", "https://picsum.photos/id/19/600/600", "https://picsum.photos/id/20/600/600", "https://picsum.photos/id/21/600/600", "https://picsum.photos/id/22/600/600", "https://picsum.photos/id/23/600/600", "https://picsum.photos/id/24/600/600", "https://picsum.photos/id/25/600/600", "https://picsum.photos/id/26/600/600", "https://picsum.photos/id/27/600/600", "https://picsum.photos/id/28/600/600", "https://picsum.photos/id/29/600/600", "https://picsum.photos/id/30/600/600", "https://picsum.photos/id/31/600/600", "https://picsum.photos/id/32/600/600", "https://picsum.photos/id/33/600/600", "https://picsum.photos/id/34/600/600", "https://picsum.photos/id/35/600/600", "https://picsum.photos/id/36/600/600", "https://picsum.photos/id/37/600/600", "https://picsum.photos/id/38/600/600", "https://picsum.photos/id/39/600/600", "https://picsum.photos/id/40/600/600", "https://picsum.photos/id/41/600/600", "https://picsum.photos/id/42/600/600", "https://picsum.photos/id/43/600/600", "https://picsum.photos/id/44/600/600", "https://picsum.photos/id/45/600/600", "https://picsum.photos/id/46/600/600", "https://picsum.photos/id/47/600/600", "https://picsum.photos/id/48/600/600", "https://picsum.photos/id/49/600/600", "https://picsum.photos/id/50/600/600", 
"https://picsum.photos/id/51/600/600", "https://picsum.photos/id/52/600/600", "https://picsum.photos/id/53/600/600", "https://picsum.photos/id/54/600/600", "https://picsum.photos/id/55/600/600", "https://picsum.photos/id/56/600/600", "https://picsum.photos/id/57/600/600", "https://picsum.photos/id/58/600/600", "https://picsum.photos/id/59/600/600", "https://picsum.photos/id/60/600/600", "https://picsum.photos/id/61/600/600", "https://picsum.photos/id/62/600/600", "https://picsum.photos/id/63/600/600", "https://picsum.photos/id/64/600/600", "https://picsum.photos/id/65/600/600", "https://picsum.photos/id/66/600/600", "https://picsum.photos/id/67/600/600", "https://picsum.photos/id/68/600/600", "https://picsum.photos/id/69/600/600", "https://picsum.photos/id/70/600/600", "https://picsum.photos/id/71/600/600", "https://picsum.photos/id/72/600/600", "https://picsum.photos/id/73/600/600", "https://picsum.photos/id/74/600/600", "https://picsum.photos/id/75/600/600", "https://picsum.photos/id/76/600/600", "https://picsum.photos/id/77/600/600", "https://picsum.photos/id/78/600/600", "https://picsum.photos/id/79/600/600", "https://picsum.photos/id/80/600/600", "https://picsum.photos/id/81/600/600", "https://picsum.photos/id/82/600/600", "https://picsum.photos/id/83/600/600", "https://picsum.photos/id/84/600/600", "https://picsum.photos/id/85/600/600", "https://picsum.photos/id/86/600/600", "https://picsum.photos/id/87/600/600", "https://picsum.photos/id/88/600/600", "https://picsum.photos/id/89/600/600", "https://picsum.photos/id/90/600/600", "https://picsum.photos/id/91/600/600", "https://picsum.photos/id/92/600/600", "https://picsum.photos/id/93/600/600", "https://picsum.photos/id/94/600/600", "https://picsum.photos/id/95/600/600", "https://picsum.photos/id/96/600/600", "https://picsum.photos/id/97/600/600", "https://picsum.photos/id/98/600/600", "https://picsum.photos/id/99/600/600"]
    }' > predicts_1.json
    

curl -X POST "http://localhost:8083/predict" -d '{
      "service":"face_512",
      "parameters":{
        "input":{
          "width":512,
          "height":512
        },
      "mllib":{"net":{"test_batch_size":2}},
        "output":{
          "bbox": true
        }
      },
      "data": ["https://picsum.photos/id/0/600/600", "https://picsum.photos/id/1/600/600", "https://picsum.photos/id/2/600/600", "https://picsum.photos/id/3/600/600", "https://picsum.photos/id/4/600/600", "https://picsum.photos/id/5/600/600", "https://picsum.photos/id/6/600/600", "https://picsum.photos/id/7/600/600", "https://picsum.photos/id/8/600/600", "https://picsum.photos/id/9/600/600", "https://picsum.photos/id/10/600/600", "https://picsum.photos/id/11/600/600", "https://picsum.photos/id/12/600/600", "https://picsum.photos/id/13/600/600", "https://picsum.photos/id/14/600/600", "https://picsum.photos/id/15/600/600", "https://picsum.photos/id/16/600/600", "https://picsum.photos/id/17/600/600", "https://picsum.photos/id/18/600/600", "https://picsum.photos/id/19/600/600", "https://picsum.photos/id/20/600/600", "https://picsum.photos/id/21/600/600", "https://picsum.photos/id/22/600/600", "https://picsum.photos/id/23/600/600", "https://picsum.photos/id/24/600/600", "https://picsum.photos/id/25/600/600", "https://picsum.photos/id/26/600/600", "https://picsum.photos/id/27/600/600", "https://picsum.photos/id/28/600/600", "https://picsum.photos/id/29/600/600", "https://picsum.photos/id/30/600/600", "https://picsum.photos/id/31/600/600", "https://picsum.photos/id/32/600/600", "https://picsum.photos/id/33/600/600", "https://picsum.photos/id/34/600/600", "https://picsum.photos/id/35/600/600", "https://picsum.photos/id/36/600/600", "https://picsum.photos/id/37/600/600", "https://picsum.photos/id/38/600/600", "https://picsum.photos/id/39/600/600", "https://picsum.photos/id/40/600/600", "https://picsum.photos/id/41/600/600", "https://picsum.photos/id/42/600/600", "https://picsum.photos/id/43/600/600", "https://picsum.photos/id/44/600/600", "https://picsum.photos/id/45/600/600", "https://picsum.photos/id/46/600/600", "https://picsum.photos/id/47/600/600", "https://picsum.photos/id/48/600/600", "https://picsum.photos/id/49/600/600", "https://picsum.photos/id/50/600/600", 
"https://picsum.photos/id/51/600/600", "https://picsum.photos/id/52/600/600", "https://picsum.photos/id/53/600/600", "https://picsum.photos/id/54/600/600", "https://picsum.photos/id/55/600/600", "https://picsum.photos/id/56/600/600", "https://picsum.photos/id/57/600/600", "https://picsum.photos/id/58/600/600", "https://picsum.photos/id/59/600/600", "https://picsum.photos/id/60/600/600", "https://picsum.photos/id/61/600/600", "https://picsum.photos/id/62/600/600", "https://picsum.photos/id/63/600/600", "https://picsum.photos/id/64/600/600", "https://picsum.photos/id/65/600/600", "https://picsum.photos/id/66/600/600", "https://picsum.photos/id/67/600/600", "https://picsum.photos/id/68/600/600", "https://picsum.photos/id/69/600/600", "https://picsum.photos/id/70/600/600", "https://picsum.photos/id/71/600/600", "https://picsum.photos/id/72/600/600", "https://picsum.photos/id/73/600/600", "https://picsum.photos/id/74/600/600", "https://picsum.photos/id/75/600/600", "https://picsum.photos/id/76/600/600", "https://picsum.photos/id/77/600/600", "https://picsum.photos/id/78/600/600", "https://picsum.photos/id/79/600/600", "https://picsum.photos/id/80/600/600", "https://picsum.photos/id/81/600/600", "https://picsum.photos/id/82/600/600", "https://picsum.photos/id/83/600/600", "https://picsum.photos/id/84/600/600", "https://picsum.photos/id/85/600/600", "https://picsum.photos/id/86/600/600", "https://picsum.photos/id/87/600/600", "https://picsum.photos/id/88/600/600", "https://picsum.photos/id/89/600/600", "https://picsum.photos/id/90/600/600", "https://picsum.photos/id/91/600/600", "https://picsum.photos/id/92/600/600", "https://picsum.photos/id/93/600/600", "https://picsum.photos/id/94/600/600", "https://picsum.photos/id/95/600/600", "https://picsum.photos/id/96/600/600", "https://picsum.photos/id/97/600/600", "https://picsum.photos/id/98/600/600", "https://picsum.photos/id/99/600/600"]
    }' > predicts_2.json

Then you can use the following script to compare the obtained results. Make sure the script is run in the same folder where the previous JSON files are stored.

 python cmp_results.py
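The comparison script itself is attached below; its core logic can be sketched roughly as follows (a minimal reconstruction, not the actual script, assuming DeepDetect's standard `/predict` response layout of `body.predictions[].classes[].prob`; all function names are illustrative):

```python
import statistics

def best_prob_per_uri(result):
    """Map each document URI to its highest detection probability.

    Assumes DeepDetect's /predict layout: body.predictions[].classes[].prob.
    """
    out = {}
    for doc in result["body"]["predictions"]:
        probs = [c["prob"] for c in doc.get("classes", [])]
        out[doc["uri"]] = max(probs) if probs else 0.0
    return out

def compare(v1, v2):
    """Per-URI absolute difference of the top probability between two runs."""
    p1, p2 = best_prob_per_uri(v1), best_prob_per_uri(v2)
    shared = sorted(p1.keys() & p2.keys())
    diffs = [abs(p1[u] - p2[u]) for u in shared]
    return {
        "n": len(diffs),
        "1_diff_mean": statistics.mean(diffs),
        "1_diff_std": statistics.pstdev(diffs),
        "1_diff_max": max(diffs),
        "1_diff_min": min(diffs),
    }

# Usage (with the file names produced by the curl commands above):
#   import json
#   stats = compare(json.load(open("predicts_1.json")),
#                   json.load(open("predicts_2.json")))
```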

On one random test I got the following result:

Number of documents: v1(98), v2(98)
Number of predictions: 98
1_diff_mean             0.00199539913814895
1_diff_std              0.010848498196088466 
1_diff_max              0.07114413380622864 
1_diff_min              0.0 

Now if I do the same exercise with a batch size of 1, I get no differences.

curl -X POST "http://localhost:8083/predict" -d '{
      "service":"face_512",
      "parameters":{
        "input":{
          "width":512,
          "height":512
        },
      "mllib":{"net":{"test_batch_size":1}},
        "output":{
          "bbox": true
        }
      },
      "data": ["https://picsum.photos/id/0/600/600", "https://picsum.photos/id/1/600/600", "https://picsum.photos/id/2/600/600", "https://picsum.photos/id/3/600/600", "https://picsum.photos/id/4/600/600", "https://picsum.photos/id/5/600/600", "https://picsum.photos/id/6/600/600", "https://picsum.photos/id/7/600/600", "https://picsum.photos/id/8/600/600", "https://picsum.photos/id/9/600/600", "https://picsum.photos/id/10/600/600", "https://picsum.photos/id/11/600/600", "https://picsum.photos/id/12/600/600", "https://picsum.photos/id/13/600/600", "https://picsum.photos/id/14/600/600", "https://picsum.photos/id/15/600/600", "https://picsum.photos/id/16/600/600", "https://picsum.photos/id/17/600/600", "https://picsum.photos/id/18/600/600", "https://picsum.photos/id/19/600/600", "https://picsum.photos/id/20/600/600", "https://picsum.photos/id/21/600/600", "https://picsum.photos/id/22/600/600", "https://picsum.photos/id/23/600/600", "https://picsum.photos/id/24/600/600", "https://picsum.photos/id/25/600/600", "https://picsum.photos/id/26/600/600", "https://picsum.photos/id/27/600/600", "https://picsum.photos/id/28/600/600", "https://picsum.photos/id/29/600/600", "https://picsum.photos/id/30/600/600", "https://picsum.photos/id/31/600/600", "https://picsum.photos/id/32/600/600", "https://picsum.photos/id/33/600/600", "https://picsum.photos/id/34/600/600", "https://picsum.photos/id/35/600/600", "https://picsum.photos/id/36/600/600", "https://picsum.photos/id/37/600/600", "https://picsum.photos/id/38/600/600", "https://picsum.photos/id/39/600/600", "https://picsum.photos/id/40/600/600", "https://picsum.photos/id/41/600/600", "https://picsum.photos/id/42/600/600", "https://picsum.photos/id/43/600/600", "https://picsum.photos/id/44/600/600", "https://picsum.photos/id/45/600/600", "https://picsum.photos/id/46/600/600", "https://picsum.photos/id/47/600/600", "https://picsum.photos/id/48/600/600", "https://picsum.photos/id/49/600/600", "https://picsum.photos/id/50/600/600", 
"https://picsum.photos/id/51/600/600", "https://picsum.photos/id/52/600/600", "https://picsum.photos/id/53/600/600", "https://picsum.photos/id/54/600/600", "https://picsum.photos/id/55/600/600", "https://picsum.photos/id/56/600/600", "https://picsum.photos/id/57/600/600", "https://picsum.photos/id/58/600/600", "https://picsum.photos/id/59/600/600", "https://picsum.photos/id/60/600/600", "https://picsum.photos/id/61/600/600", "https://picsum.photos/id/62/600/600", "https://picsum.photos/id/63/600/600", "https://picsum.photos/id/64/600/600", "https://picsum.photos/id/65/600/600", "https://picsum.photos/id/66/600/600", "https://picsum.photos/id/67/600/600", "https://picsum.photos/id/68/600/600", "https://picsum.photos/id/69/600/600", "https://picsum.photos/id/70/600/600", "https://picsum.photos/id/71/600/600", "https://picsum.photos/id/72/600/600", "https://picsum.photos/id/73/600/600", "https://picsum.photos/id/74/600/600", "https://picsum.photos/id/75/600/600", "https://picsum.photos/id/76/600/600", "https://picsum.photos/id/77/600/600", "https://picsum.photos/id/78/600/600", "https://picsum.photos/id/79/600/600", "https://picsum.photos/id/80/600/600", "https://picsum.photos/id/81/600/600", "https://picsum.photos/id/82/600/600", "https://picsum.photos/id/83/600/600", "https://picsum.photos/id/84/600/600", "https://picsum.photos/id/85/600/600", "https://picsum.photos/id/86/600/600", "https://picsum.photos/id/87/600/600", "https://picsum.photos/id/88/600/600", "https://picsum.photos/id/89/600/600", "https://picsum.photos/id/90/600/600", "https://picsum.photos/id/91/600/600", "https://picsum.photos/id/92/600/600", "https://picsum.photos/id/93/600/600", "https://picsum.photos/id/94/600/600", "https://picsum.photos/id/95/600/600", "https://picsum.photos/id/96/600/600", "https://picsum.photos/id/97/600/600", "https://picsum.photos/id/98/600/600", "https://picsum.photos/id/99/600/600"]
    }' > predicts_1.json
    

curl -X POST "http://localhost:8083/predict" -d '{
      "service":"face_512",
      "parameters":{
        "input":{
          "width":512,
          "height":512
        },
      "mllib":{"net":{"test_batch_size":1}},
        "output":{
          "bbox": true
        }
      },
      "data": ["https://picsum.photos/id/0/600/600", "https://picsum.photos/id/1/600/600", "https://picsum.photos/id/2/600/600", "https://picsum.photos/id/3/600/600", "https://picsum.photos/id/4/600/600", "https://picsum.photos/id/5/600/600", "https://picsum.photos/id/6/600/600", "https://picsum.photos/id/7/600/600", "https://picsum.photos/id/8/600/600", "https://picsum.photos/id/9/600/600", "https://picsum.photos/id/10/600/600", "https://picsum.photos/id/11/600/600", "https://picsum.photos/id/12/600/600", "https://picsum.photos/id/13/600/600", "https://picsum.photos/id/14/600/600", "https://picsum.photos/id/15/600/600", "https://picsum.photos/id/16/600/600", "https://picsum.photos/id/17/600/600", "https://picsum.photos/id/18/600/600", "https://picsum.photos/id/19/600/600", "https://picsum.photos/id/20/600/600", "https://picsum.photos/id/21/600/600", "https://picsum.photos/id/22/600/600", "https://picsum.photos/id/23/600/600", "https://picsum.photos/id/24/600/600", "https://picsum.photos/id/25/600/600", "https://picsum.photos/id/26/600/600", "https://picsum.photos/id/27/600/600", "https://picsum.photos/id/28/600/600", "https://picsum.photos/id/29/600/600", "https://picsum.photos/id/30/600/600", "https://picsum.photos/id/31/600/600", "https://picsum.photos/id/32/600/600", "https://picsum.photos/id/33/600/600", "https://picsum.photos/id/34/600/600", "https://picsum.photos/id/35/600/600", "https://picsum.photos/id/36/600/600", "https://picsum.photos/id/37/600/600", "https://picsum.photos/id/38/600/600", "https://picsum.photos/id/39/600/600", "https://picsum.photos/id/40/600/600", "https://picsum.photos/id/41/600/600", "https://picsum.photos/id/42/600/600", "https://picsum.photos/id/43/600/600", "https://picsum.photos/id/44/600/600", "https://picsum.photos/id/45/600/600", "https://picsum.photos/id/46/600/600", "https://picsum.photos/id/47/600/600", "https://picsum.photos/id/48/600/600", "https://picsum.photos/id/49/600/600", "https://picsum.photos/id/50/600/600", 
"https://picsum.photos/id/51/600/600", "https://picsum.photos/id/52/600/600", "https://picsum.photos/id/53/600/600", "https://picsum.photos/id/54/600/600", "https://picsum.photos/id/55/600/600", "https://picsum.photos/id/56/600/600", "https://picsum.photos/id/57/600/600", "https://picsum.photos/id/58/600/600", "https://picsum.photos/id/59/600/600", "https://picsum.photos/id/60/600/600", "https://picsum.photos/id/61/600/600", "https://picsum.photos/id/62/600/600", "https://picsum.photos/id/63/600/600", "https://picsum.photos/id/64/600/600", "https://picsum.photos/id/65/600/600", "https://picsum.photos/id/66/600/600", "https://picsum.photos/id/67/600/600", "https://picsum.photos/id/68/600/600", "https://picsum.photos/id/69/600/600", "https://picsum.photos/id/70/600/600", "https://picsum.photos/id/71/600/600", "https://picsum.photos/id/72/600/600", "https://picsum.photos/id/73/600/600", "https://picsum.photos/id/74/600/600", "https://picsum.photos/id/75/600/600", "https://picsum.photos/id/76/600/600", "https://picsum.photos/id/77/600/600", "https://picsum.photos/id/78/600/600", "https://picsum.photos/id/79/600/600", "https://picsum.photos/id/80/600/600", "https://picsum.photos/id/81/600/600", "https://picsum.photos/id/82/600/600", "https://picsum.photos/id/83/600/600", "https://picsum.photos/id/84/600/600", "https://picsum.photos/id/85/600/600", "https://picsum.photos/id/86/600/600", "https://picsum.photos/id/87/600/600", "https://picsum.photos/id/88/600/600", "https://picsum.photos/id/89/600/600", "https://picsum.photos/id/90/600/600", "https://picsum.photos/id/91/600/600", "https://picsum.photos/id/92/600/600", "https://picsum.photos/id/93/600/600", "https://picsum.photos/id/94/600/600", "https://picsum.photos/id/95/600/600", "https://picsum.photos/id/96/600/600", "https://picsum.photos/id/97/600/600", "https://picsum.photos/id/98/600/600", "https://picsum.photos/id/99/600/600"]
    }' > predicts_2.json

The result obtained is:

Number of documents: v1(98), v2(98)
Number of predictions: 98
1_diff_mean             0.0
1_diff_std              0.0
1_diff_max              0.0
1_diff_min              0.0

I suspect an issue in how batches are processed. At first I thought some predictions were merely switched between examples. That does happen, but I also observe completely new predictions that did not appear before...
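To tell switched or slightly shifted boxes apart from genuinely new ones, one option is to match boxes between the two runs by IoU (a generic sketch, not part of cmp_results.py; the 0.5 threshold is an arbitrary choice):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def new_boxes(run1, run2, thresh=0.5):
    """Boxes from run2 with no IoU >= thresh counterpart in run1:
    genuinely new detections rather than shifted duplicates."""
    return [b for b in run2 if all(iou(a, b) < thresh for a in run1)]
```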

Note that I've tested this on classification models and I do not have any issue on those.
cmp_results.zip

Hi, have you tried with / without CUDNN? CUDNN non-deterministically chooses internal algorithms depending on batch size; the non-deterministic component comes from rolling the dice on some decisions, AFAIK.

The fact that there's no issue with classification models may rule this out, but it is still worth testing.

@fantes if not CUDNN, it may be in one of the bbox output layers ? (e.g. DetectionOutput ?).

commented

I tried with a CPU image but it cannot even load the model because of a Caffe error (can refinedet not be used CPU-only?). I am building a version without cudnn to see if that works, but I am not sure I will even be able to create the service before trying predictions.

[2021-02-24 15:01:41.628] [caffe] [error] Layer conv1_1 has unknown engine.
[2021-02-24 15:01:41.638] [face_512] [error] Error creating network
[2021-02-24 15:01:41.640] [face_512] [error] service creation call failed: Dynamic exception type: CaffeErrorException
std::exception::what: /opt/deepdetect/build/caffe_dd/src/caffe_dd/include/caffe/llogging.h:194 / Fatal Caffe error

Maybe you built without CUDNN but with USE_CUDNN=ON? In the case of refinedet with CUDNN set to ON at compile time, we force the cudnn engine, but if it is not built into Caffe it will fail. Could you double-check that?

commented

The Caffe GPU build without CUDNN just finished and I get the same error when trying to create the service. I've checked that I specified -DUSE_CUDNN=OFF, which was also printed when building the image.

[2021-02-24 17:09:07.103] [caffe] [error] Layer conv1_1 has unknown engine.
[2021-02-24 17:09:07.104] [face_512] [error] Error creating network
[2021-02-24 17:09:07.104] [face_512] [error] service creation call failed: Dynamic exception type: CaffeErrorException
std::exception::what: /opt/deepdetect/build/caffe_dd/src/caffe_dd/include/caffe/llogging.h:194 / Fatal Caffe error

Hi @YaYaB
did you clean up your model repository between your tests without cudnn? Maybe your first tests added the cudnn engine parameter to deploy.prototxt (you can search for "engine" in deploy.prototxt to check), and subsequent runs cannot support it because cudnn was not compiled into dede/caffe. When cudnn is not compiled into dede this keyword should not be added, but it is not removed if already present; we did not think about this case. (I'd like to solve this engine problem before looking at the main issue.)
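For reference, checking for and stripping the forced engine parameter could look like this (a self-contained demo on a scratch file; point the commands at the deploy.prototxt in your actual model repository, whose exact layer format may differ from the sample below):

```shell
# Demo on a scratch copy; the layer content here is only a made-up sample.
cd "$(mktemp -d)"
cat > deploy.prototxt <<'EOF'
layer {
  name: "conv1_1"
  engine: CUDNN
}
EOF

# 1. Check whether an engine parameter is forced anywhere
grep -n "engine" deploy.prototxt

# 2. Strip those lines, keeping a backup in deploy.prototxt.bak
sed -i.bak '/engine/d' deploy.prototxt
grep -c "engine" deploy.prototxt.bak
```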

commented

@fantes Ah indeed, the cudnn engine is specified in deploy.prototxt; my bad, I should have checked. I've removed it and it now runs correctly. However, a more verbose error indicating that the deploy file is set up for cudnn could be helpful for debugging :)

Then, without cudnn I still obtain differences.

Number of documents: v1(98), v2(98)
Number of predictions: 98
1_diff_mean	 	0.0004954249122921301
1_diff_std	 	0.004904456410707221 
1_diff_max	 	0.048551641404628754 
1_diff_min	 	0.0 

ok, thx, I am starting investigations

Hi @YaYaB, here is a fix that makes your example work: #1210
...
The shortest patches are sometimes the longest to produce.

commented

Nice, it happens to everyone then ^^
I'll try it first thing tomorrow, thanks @fantes

Congrats to both of you @YaYaB and @fantes for catching this one! Good detailed report + careful inspection are rewarded!

commented

Sorry for the late answer. It seems to work correctly on my side, cheers @fantes @beniz