Mozer / talk-llama-fast

Port of OpenAI's Whisper model in C/C++ with xtts and wav2lip

Build for Mac

freQuensy23-coder opened this issue · comments

Modern Arm-based MacBooks are very powerful and can run LLM inference at acceptable speed without a GPU. Can you create a build for macOS, without CUDA, or is that not possible?

commented

The readme says, "First, you need to compile everything." whisper.cpp itself compiles perfectly, so the author may have added something that doesn't compile on Mac. I was in the middle of figuring this out when I saw the issue.

commented

I believe this issue is the same as #1

What errors do you get?

  1. You need to find and link the libcurl library on Mac or Linux to compile it.
  2. The SDL library should also be linked on Linux/Mac; I think there was a guide about SDL in the original repo.
  3. In talk-llama.cpp you need to replace the GetTempPath call with a Linux/Mac equivalent for finding the temp directory. GetTempPath is Windows-only.

Anyone trying to get this working on Mac?

Any updates for Mac?

Same question here. Very interested in following this development.