abetlen / llama-cpp-python

Python bindings for llama.cpp

Home Page: https://llama-cpp-python.readthedocs.io


termux build issues

SubhranshuSharma opened this issue · comments

pip install llama-cpp-python doesn't work on termux and gives this error. Even with ninja installed it gives the same error, so it might be a hardcoded absolute path problem.

I have tried a portable venv setup on my linux machine, but running it on termux gave a dependency-not-found error, so maybe some paths in the source code were still absolute (even after the correction attempts in the blog post).

I tried using pyinstaller, but it doesn't support this library yet: same missing-dependency issue.

Another option is to use docker on termux, but that requires root privileges and a custom kernel.

I have tried to look into the source code of this repo but don't know where to start. Any hint on where to begin?

The original llama.cpp library works fine on termux, but it doesn't have a built-in server and doesn't work well unless you're using bash.

Should I make a pull request editing the readme to link to the docker workaround for rooted phones?

commented

I resolved the ninja installation with termux-chroot. I also added the following to the project CMakeLists.txt:
set(CMAKE_C_COMPILER "/data/data/com.termux/files/usr/bin/clang")
set(CMAKE_CXX_COMPILER "/data/data/com.termux/files/usr/bin/clang++")
And passed the OpenCL args:
CMAKE_ARGS="-DLLAMA_CLBLAST=on -DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
Here's the output error:
../CMake-src/Utilities/cmlibarchive/libarchive/archive.h:101:10: fatal error: 'android_lf.h' file not found
#include "android_lf.h"

commented

Looks to be related to scikit-build/cmake-python-distributions#223

The longer term solution seems to be migrating the project to scikit-build-core (something I'm in the process of doing). However, someone in the thread mentioned they were able to get their example to work by building cmake and ninja from source on android.

commented

Also tried the following:
cmake .. -DLLAMA_CLBLAST=on -DLLAMA_OPENBLAS=on
make
make install

Here's the output:
CMake Error: Target DependInfo.cmake file not found
[100%] Built target run
Install the project...
-- Install configuration: "RelWithDebInfo"
-- Up-to-date: /usr/local/llama_cpp/libllama.so

Was the install successful?

Python:

import llama_cpp
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'llama_cpp'

While trying to make it work on termux, I ended up writing a small llama.cpp wrapper myself; instructions to use it are here. At the time, llama.cpp cache files could be read but not generated/edited in termux. That problem is sorted now, so set cache_is_supported_trust_me_bro=True in discord/termux/settings.py to use it.

These highlighted lines are all you need to make a wrapper of your own.
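For illustration, a minimal subprocess-based wrapper around the llama.cpp main binary could look something like this (the paths and prompt are placeholders, not the exact code from the linked repo):

import subprocess

# Placeholder paths: adjust to wherever llama.cpp was cloned and where the model file lives.
LLAMA_MAIN = "/data/data/com.termux/files/home/llama.cpp/main"
MODEL_PATH = "/data/data/com.termux/files/home/models/model.gguf"

def ask(prompt, n_tokens=128):
    # Run the llama.cpp CLI once and return its stdout.
    result = subprocess.run(
        [LLAMA_MAIN, "-m", MODEL_PATH, "-p", prompt, "-n", str(n_tokens)],
        capture_output=True, text=True, check=True)
    return result.stdout

print(ask("Q: What is termux? A:"))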

commented

Is it compatible with babyagi and other stuff that needs the llama python wrapper?

Nope, my wrapper is tightly integrated with my use case and isn't a separate installable package. I was thinking of making it one, but it would be too much of a headache.

commented

@SubhranshuSharma so for your wrapper you just built libllama.so separately, correct? Did you just build llama.cpp with the Makefile? Maybe one solution is to avoid building llama.cpp on install by setting an environment variable / path to a pre-built library.

@abetlen Yeah, I run git clone https://github.com/ggerganov/llama.cpp ~/llama.cpp;cd ~/llama.cpp;make

I suggest: try building llama.cpp, but don't crash if you can't. Just check whether the main binary exists at a predefined/user-supplied path, and if not, use if os.system('git clone https://github.com/ggerganov/llama.cpp ~/llama.cpp;cd ~/llama.cpp;make')!=0:print('windows is for pussies, install git and cmake');exit() to clone and make it. Way more simple and easy to maintain.
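Expanded into a readable sketch (the default location is a placeholder for whatever path the project would actually pick):

import os
import subprocess

# Hypothetical default location; a predefined or user-supplied path could override it.
LLAMA_DIR = os.path.expanduser("~/llama.cpp")
LLAMA_MAIN = os.path.join(LLAMA_DIR, "main")

if not os.path.exists(LLAMA_MAIN):
    # Fall back to cloning and building llama.cpp instead of failing the install.
    subprocess.run(["git", "clone", "https://github.com/ggerganov/llama.cpp", LLAMA_DIR], check=True)
    subprocess.run(["make"], cwd=LLAMA_DIR, check=True)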

commented

The longer term solution seems to be migrating the project to scikit-build-core

Any progress on it?

Any progress on it?

seems related

commented

@SubhranshuSharma @Freed-Wu I'm implementing it in #499, but I still have some issues with macOS.

Unrelated question: is there any way of storing cache files on disk in the API, for a quick restart?

I'm implementing it in #499, but I still have some issues with macOS.

I would still suggest treating this repo and llama.cpp as different things and not letting a failure in one stop the other (for as long as that's possible): wrap the compilation in a try/except, and if the compile fails, require the user to set an environment variable pointing to llama.cpp. I would also give that environment variable first priority, so that if it is set, it is used before anything else. That way the project will be more robust, letting people find workarounds to issues that originate in llama.cpp.
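A sketch of that resolution order (LLAMA_CPP_LIB matches the variable that was later added; the default path and the build fallback are assumptions):

import os
import subprocess

def find_libllama():
    # 1. An explicitly set environment variable always wins.
    env_path = os.environ.get("LLAMA_CPP_LIB")
    if env_path and os.path.exists(env_path):
        return env_path
    # 2. Otherwise fall back to a previously built copy (hypothetical default location).
    default_path = os.path.expanduser("~/llama.cpp/libllama.so")
    if os.path.exists(default_path):
        return default_path
    # 3. Last resort: try to build, but give a clear message instead of crashing.
    try:
        subprocess.run(["make", "libllama.so"], cwd=os.path.expanduser("~/llama.cpp"), check=True)
        return default_path
    except (OSError, subprocess.CalledProcessError) as exc:
        raise RuntimeError("could not build llama.cpp; set LLAMA_CPP_LIB to a pre-built libllama.so") from exc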

@SubhranshuSharma sorry for this very late reply but I finally merged in #499.

You can now run CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python to avoid building from source, then just set LLAMA_CPP_LIB to the path of the shared library to use a pre-built library.
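A quick way to check that the pre-built library is actually picked up (the path is a placeholder; exporting LLAMA_CPP_LIB in the shell before starting python works the same way):

import os

# Placeholder path: point this at your own build of libllama.so before importing llama_cpp.
os.environ["LLAMA_CPP_LIB"] = "/data/data/com.termux/files/home/llama.cpp/build/libllama.so"

import llama_cpp
print("llama_cpp loaded OK")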

commented

So can we close this issue now?

@Freed-Wu, @abetlen, @SubhranshuSharma
I also tried packaging a python binary with llama-cpp-python, but when I try to run the binary standalone it fails with the error below:

INFO:root:Loading Model: TheBloke/synthia-7b-v1.3.Q5_K_M.gguf, on: cpu
INFO:root:This action can take a few minutes!
INFO:root:synthia-7b-v1.3.Q5_K_M.gguf
INFO:root:Using Llamacpp for GGUF/GGML quantized models
ERROR:root:Exception occurred: Shared library with base name 'llama' not found

But when I simply run python3 main.py it works fine.

For pyinstaller, do I need to generate the shared library separately and then use LLAMA_CPP_LIB?

But I am using termux, plain terminal.
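One way to handle the PyInstaller case, sketched under the assumption that you build libllama.so separately, bundle it with --add-binary, and point LLAMA_CPP_LIB at it at runtime (the paths and bundle layout are assumptions, not verified on termux):

# main.py: pick up the bundled libllama.so before importing llama_cpp.
import os
import sys

if getattr(sys, "frozen", False):
    # PyInstaller extracts bundled files into sys._MEIPASS at runtime.
    bundle_dir = sys._MEIPASS
else:
    bundle_dir = os.path.dirname(os.path.abspath(__file__))

os.environ["LLAMA_CPP_LIB"] = os.path.join(bundle_dir, "libllama.so")

import llama_cpp  # now resolves the shared library from the bundle

Built with something like pyinstaller main.py --add-binary "/path/to/libllama.so:." (assuming the Linux-style ':' separator).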

You can now run CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python to avoid building from source, then just set LLAMA_CPP_LIB to the path of the shared library to use a pre-built library.

This is the error I still get in termux when running CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python.

Am I missing something?

@SubhranshuSharma if you want to build the cmake module, see termux/termux-packages#10065.

Or build without it:

  1. CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python --no-build-isolation -v

  2. check the error message and install the missing modules

Repeat steps 1-2.

Is the python library working for anyone?

@Freed-Wu is this related to adding the original llama.cpp to the termux package manager? If yes, llama.cpp was working out of the box on termux anyway; that's why I could make my own use-case-specific python wrapper around it. To quote myself:

@abetlen Yeah, I run git clone https://github.com/ggerganov/llama.cpp ~/llama.cpp;cd ~/llama.cpp;make

@romanovj this solution of yours did install cmake without errors; pip list now returns cmake and a simple import cmake works. But pip install llama-cpp-python still gives an error saying No module named 'cmake', so maybe not all the references to installed libraries were updated.

And your second solution worked: pip list returns llama_cpp_python, but import llama_cpp returns shared library with base name 'llama' not found.

@SubhranshuSharma
still gives [error](https://pastebin.com/FkXtBYcm) saying No module named 'cmake'

reinstall cmake
pkg rei cmake

Also, you can copy the compiled cmake wheel to a ~/wheels folder and install modules with
pip install ... --find-links ~/wheels


You need to build shared libs for llama.cpp like this:
cmake -S . -B build64 -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=$PREFIX -DLLAMA_CLBLAST=ON -DLLAMA_OPENBLAS=ON
cmake --build build64

Then export the path to libllama.so:

export LLAMA_CPP_LIB=/data/data/com.termux/files/home/llama.cpp/build64/libllama.so

python
import llama_cpp

ok

Hey guys, I was finally able to put some time into this; the following worked for me:

pkg install python-pip python-cryptography cmake ninja autoconf automake libandroid-execinfo patchelf
MATHLIB=m CFLAGS=--target=aarch64-unknown-linux-android33 LDFLAGS=-lpython3.11 pip install numpy --force-reinstall --no-cache-dir
pip install llama-cpp-python --verbose

@abetlen it works inconsistently; on a clean termux install with python installed I usually also have to install libexpat, then openssl, then run pkg update && pkg upgrade.

This works more consistently for me (keep selecting the default answers to prompts):

pkg install libexpat openssl python-pip python-cryptography cmake ninja autoconf automake libandroid-execinfo patchelf
pkg update && pkg upgrade 
MATHLIB=m CFLAGS=--target=aarch64-unknown-linux-android33 LDFLAGS=-lpython3.11 pip install numpy --force-reinstall --no-cache-dir
pip install llama-cpp-python --verbose

Then run python -c 'import llama_cpp' to check if it installed.
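If the import succeeds, a quick smoke test of the high-level API looks like this (the model path is a placeholder; any GGUF model on the device will do):

from llama_cpp import Llama

# Placeholder path: point this at a GGUF model you have downloaded on the device.
llm = Llama(model_path="/data/data/com.termux/files/home/models/model.gguf", n_ctx=512)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=48, stop=["Q:"])
print(out["choices"][0]["text"])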