mudler / LocalAI

:robot: The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It lets you generate text, audio, video and images, with voice cloning capabilities as well.

Home Page: https://localai.io


Intel GPU support

mudler opened this issue

Is your feature request related to a problem? Please describe.

While Intel GPU acceleration is partially supported via llama.cpp, https://github.com/intel/intel-extension-for-transformers appears to be specifically optimized for running inference flows on Intel hardware.

For example, to run Stable Diffusion on Intel Arc GPUs: https://www.intel.com/content/www/us/en/developer/articles/technical/stable-diffusion-with-intel-arc-gpus.html

Describe the solution you'd like

Support for https://github.com/intel/intel-extension-for-transformers
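As a rough illustration of how such a backend could surface to users, a LocalAI model definition might select it the same way existing backends (llama.cpp, transformers, diffusers) are chosen today. The backend name, device field and quantization option below are hypothetical, sketched by analogy with current model configs, not an existing LocalAI feature:

```yaml
# Hypothetical model.yaml — the backend name and the device/quantization
# options below are assumptions for this proposal, not implemented today.
name: neural-chat
backend: intel-extension-for-transformers   # proposed new backend
parameters:
  model: Intel/neural-chat-7b-v3-1          # example Hugging Face model id
device: xpu                                 # Intel Arc / Data Center GPU
quantization: int4                          # weight-only quantization hint
```

A config-driven approach like this would keep the feature opt-in and leave the OpenAI-compatible API surface unchanged, consistent with how other accelerators are wired in.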

Describe alternatives you've considered

Additional context