apache / tvm

Open deep learning compiler stack for CPU, GPU and specialized accelerators

Home Page: https://tvm.apache.org/



[CI Problem] microTVM demo is failing to run successfully

lhutton1 opened this issue · comments

CI runs are failing because a model from https://github.com/ARM-software/ML-zoo could not be downloaded:

$ curl --retry 64 -sSL https://github.com/ARM-software/ML-zoo/raw/b9e26e662c00e0c0b23587888e75ac1205a99b6e/models/image_classification/mobilenet_v2_1.0_224/tflite_int8/mobilenet_v2_1.0_224_INT8.tflite -o ./mobilenet_v2_1.0_224_INT8.tflite
$ python3 -m tvm.driver.tvmc compile --target=ethos-u,cmsis-nn,c --target-ethos-u-accelerator_config=ethos-u55-256 --target-cmsis-nn-mcpu=cortex-m55 --target-c-mcpu=cortex-m55 --runtime=crt --executor=aot --executor-aot-interface-api=c --executor-aot-unpacked-api=1 --pass-config tir.usmp.enable=1 --pass-config tir.usmp.algorithm=hill_climb --pass-config tir.disable_storage_rewrite=1 --pass-config tir.disable_vectorize=1 ./mobilenet_v2_1.0_224_INT8.tflite --output-format=mlf
Error: input file not tflite

The failure appears to originate in the ML-zoo repository itself, whose Git LFS bandwidth quota has been exhausted:

$ git clone git@github.com:ARM-software/ML-zoo.git
Cloning into 'ML-zoo'...
remote: Enumerating objects: 1670, done.
remote: Counting objects: 100% (416/416), done.
remote: Compressing objects: 100% (242/242), done.
remote: Total 1670 (delta 115), reused 365 (delta 114), pack-reused 1254
Receiving objects: 100% (1670/1670), 62.90 MiB | 8.17 MiB/s, done.
Resolving deltas: 100% (568/568), done.
Updating files: 100% (562/562), done.
Downloading models/anomaly_detection/micronet_large/tflite_int8/ad_large_int8.tflite (442 KB)
Error downloading object: models/anomaly_detection/micronet_large/tflite_int8/ad_large_int8.tflite (f1cd8da): Smudge error: Error downloading models/anomaly_detection/micronet_large/tflite_int8/ad_large_int8.tflite (f1cd8dabc12adb91b89d936bdac3eb3ebff6770bf9b9c449cb800d08fcb160a8): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
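The "input file not tflite" error makes sense in this light: when an LFS repository is over quota, fetching the raw file can return the small text LFS pointer instead of the actual model binary, so curl succeeds but tvmc receives a non-TFLite file. A minimal sketch of a pre-flight check (not part of TVM or the CI scripts, just an illustration) could distinguish the two cases, assuming the standard facts that a TFLite FlatBuffer carries the file identifier b"TFL3" at bytes 4..8 and that a Git LFS pointer file begins with "version https://git-lfs":

```python
def classify_download(data: bytes) -> str:
    """Heuristically classify downloaded bytes as a real TFLite model,
    a Git LFS pointer served in place of the model, or something else."""
    # Git LFS pointer files are plain text starting with this version line.
    if data.startswith(b"version https://git-lfs"):
        return "lfs-pointer"
    # TFLite models are FlatBuffers whose file identifier "TFL3"
    # sits at byte offsets 4..8, after the 4-byte root-table offset.
    if len(data) >= 8 and data[4:8] == b"TFL3":
        return "tflite"
    return "unknown"


if __name__ == "__main__":
    # Example: inspect a file downloaded by the CI step before compiling it.
    import sys

    path = sys.argv[1] if len(sys.argv) > 1 else "mobilenet_v2_1.0_224_INT8.tflite"
    with open(path, "rb") as f:
        print(classify_download(f.read(64)))
```

Running such a check right after the curl step would have turned the opaque tvmc error into an explicit "lfs-pointer" diagnosis.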

Closing as the issue with the repo has been resolved.