microsoft / Olive

Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs.

Home Page: https://microsoft.github.io/Olive/

Mistral Run Optimization (GPU): program ends without generating a model

tjinjin95 opened this issue

Describe the bug
Following the readme at https://github.com/microsoft/Olive/tree/main/examples/mistral, the run fails at the step "Run Optimization (GPU)": the program ends without generating a model. The model output folder contains only run_history_gpu-cuda.txt (model_id 84241cde).

To Reproduce
`pip list` for my virtual environment (the reproduction command itself is shown after the list):
Package Version Editable project location


accelerate 0.33.0
aiohappyeyeballs 2.4.0
aiohttp 3.10.5
aiosignal 1.3.1
alembic 1.13.2
annotated-types 0.7.0
attrs 24.2.0
certifi 2024.7.4
charset-normalizer 3.3.2
colorama 0.4.6
coloredlogs 15.0.1
colorlog 6.8.2
contourpy 1.2.1
cycler 0.12.1
datasets 2.21.0
Deprecated 1.2.14
dill 0.3.8
evaluate 0.4.2
filelock 3.15.4
flatbuffers 24.3.25
fonttools 4.53.1
frozenlist 1.4.1
fsspec 2024.6.1
greenlet 3.0.3
huggingface-hub 0.24.6
humanfriendly 10.0
idna 3.8
inquirerpy 0.3.4
Jinja2 3.1.4
joblib 1.4.2
kiwisolver 1.4.5
lightning-utilities 0.11.6
Mako 1.3.5
MarkupSafe 2.1.5
matplotlib 3.9.2
mpmath 1.3.0
multidict 6.0.5
multiprocess 0.70.16
networkx 3.3
neural_compressor 3.0
numpy 1.26.4
olive-ai 0.7.0 D:\windowsAI\Olive
onnx 1.16.2
onnxconverter-common 1.14.0
onnxruntime-directml 1.19.0
onnxruntime_extensions 0.12.0
onnxruntime-gpu 1.19.0
opencv-python-headless 4.10.0.84
optimum 1.21.4
optuna 3.6.1
packaging 24.1
pandas 2.2.2
pfzy 0.3.4
pillow 10.4.0
pip 24.2
prettytable 3.11.0
prompt_toolkit 3.0.47
protobuf 3.20.2
psutil 6.0.0
py-cpuinfo 9.0.0
pyarrow 17.0.0
pycocotools 2.0.8
pydantic 2.8.2
pydantic_core 2.20.1
pyparsing 3.1.4
pyreadline3 3.4.1
python-dateutil 2.9.0.post0
pytz 2024.1
PyYAML 6.0.2
regex 2024.7.24
requests 2.32.3
safetensors 0.4.4
schema 0.7.7
scikit-learn 1.5.1
scipy 1.14.1
sentencepiece 0.2.0
setuptools 73.0.1
six 1.16.0
skl2onnx 1.17.0
SQLAlchemy 2.0.32
sympy 1.13.2
tabulate 0.9.0
tf2onnx 1.16.1
threadpoolctl 3.5.0
tokenizers 0.19.1
torch 2.4.0
torchaudio 2.4.0
torchmetrics 1.4.1
torchvision 0.19.0
tqdm 4.66.5
transformers 4.43.4
typing_extensions 4.12.2
tzdata 2024.1
urllib3 2.2.2
wcwidth 0.2.13
wrapt 1.16.0
xxhash 3.5.0
yarl 1.9.4
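
For reference, the optimization was launched from the example directory with the command below (the same command is visible at the top of the log); the environment setup itself follows the example's readme:

```
cd D:\windowsAI\Olive\examples\mistral
python mistral.py --optimize --config mistral_fp16_optimize.json
```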

Expected behavior
An optimized model is generated.

Olive config
`--config mistral_fp16_optimize.json`
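
The workflow shape can be pieced together from the log below: three passes named convert, optimize, and perf_tuning, with the convert pass of type OptimumConversion, running on the gpu-cuda accelerator spec against mistralai/Mistral-7B-v0.1. A minimal sketch of that shape follows; it is reconstructed from the log rather than copied from the actual mistral_fp16_optimize.json, and the "..." placeholders stand for pass types and options that cannot be read from the log:

```json
{
  "input_model": { "type": "...", "model_path": "mistralai/Mistral-7B-v0.1" },
  "passes": {
    "convert":     { "type": "OptimumConversion" },
    "optimize":    { "type": "..." },
    "perf_tuning": { "type": "..." }
  }
}
```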

Olive logs

```
(mistral_env) D:\windowsAI\Olive\examples\mistral>python mistral.py --optimize --config mistral_fp16_optimize.json
optimized_model_dir is:D:\windowsAI\Olive\examples\mistral\models\convert-optimize-perf_tuning\mistral_fp16_gpu-cuda_model
Optimizing mistralai/Mistral-7B-v0.1

[2024-08-31 14:37:58,425] [INFO] [run.py:138:run_engine] Running workflow default_workflow
[2024-08-31 14:37:58,476] [INFO] [cache.py:51:init] Using cache directory: D:\windowsAI\Olive\examples\mistral\cache\default_workflow
[2024-08-31 14:37:58,582] [INFO] [engine.py:1013:save_olive_config] Saved Olive config to D:\windowsAI\Olive\examples\mistral\cache\default_workflow\olive_config.json
[2024-08-31 14:37:58,721] [INFO] [accelerator_creator.py:224:create_accelerators] Running workflow on accelerator specs: gpu-cuda
[2024-08-31 14:37:58,829] [INFO] [engine.py:275:run] Running Olive on accelerator: gpu-cuda
[2024-08-31 14:37:58,838] [INFO] [engine.py:1110:_create_system] Creating target system ...
[2024-08-31 14:37:58,848] [INFO] [engine.py:1113:_create_system] Target system created in 0.000996 seconds
[2024-08-31 14:37:58,852] [INFO] [engine.py:1122:_create_system] Creating host system ...
[2024-08-31 14:37:58,858] [INFO] [engine.py:1125:_create_system] Host system created in 0.000000 seconds
passes is [('convert', {}), ('optimize', {}), ('perf_tuning', {})]
[2024-08-31 14:37:59,262] [INFO] [engine.py:877:_run_pass] Running pass convert:OptimumConversion
Framework not specified. Using pt to export the model.
```

Other information

  • OS: Windows
  • Olive version: main (olive-ai 0.7.0, editable install)
  • ONNXRuntime package and version: onnxruntime-gpu 1.19.0
  • Transformers package version: transformers 4.43.4
  • GPU memory: 4 GB

Additional context
None