PotatoSpudowski / fastLLaMa

fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backend.

Home Page: https://potatospudowski.github.io/fastLLaMa/

Unable to build bridge.cpp and link the 'libllama'

rohitgr7 opened this issue

fastLLaMa git:(main) ./build.sh
sysctl: unknown oid 'hw.optional.arm64'
I llama.cpp build info: 
I UNAME_S:  Darwin
I UNAME_P:  i386
I UNAME_M:  x86_64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:   -framework Accelerate
I CC:       Apple clang version 14.0.0 (clang-1400.0.29.202)
I CXX:      Apple clang version 14.0.0 (clang-1400.0.29.202)

ar rcs libllama.a ggml.o utils.o
./build.sh:18: command not found: cmake
Unable to build bridge.cpp and link the 'libllama'

I'm getting the error above when running ./build.sh. Any suggestions?

Please install cmake before running the script.
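For reference, a minimal sketch of the kind of guard build.sh could run before invoking cmake (assuming a POSIX shell; the 'brew install cmake' hint assumes Homebrew on macOS, matching the Darwin environment in the log above):

```shell
#!/bin/sh
# Check that cmake is on PATH before attempting the bridge.cpp build.
if command -v cmake >/dev/null 2>&1; then
    echo "cmake found at $(command -v cmake)"
else
    # Fail early with an actionable message instead of the bare
    # "command not found: cmake" seen in the log.
    echo "Error: cmake not found. Install it first (e.g. 'brew install cmake' on macOS)." >&2
fi
```

This is only an illustrative pre-flight check, not the actual contents of build.sh.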

Updated the readme with instructions. Thanks :)