nomic-ai / gpt4all

GPT4All: Chat with Local LLMs on Any Device

Home Page: https://gpt4all.io

Need `#include <algorithm>` to build `gpt4all-backend/llamamodel.cpp`

gmontamat opened this issue

Bug Report

When building gpt4all-chat on Arch Linux from the AUR, compilation fails with errors reporting that `transform`, `find`, and `find_if` are not members of `std`:

/home/user/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:892:22: error: ‘transform’ is not a member of ‘std’
  892 |                 std::transform(embd, embd_end, embd, [mean](double f){ return f - mean; });
      |                      ^~~~~~~~~
/home/user/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:901:22: error: ‘transform’ is not a member of ‘std’
  901 |                 std::transform(embd, embd_end, embd, product(1.0 / std::sqrt(variance + 1e-5)));
      |                      ^~~~~~~~~
/home/user/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:906:18: error: ‘transform’ is not a member of ‘std’
  906 |             std::transform(embd, embd_end, out, out, [scale](double e, double o){ return o + scale * e; });
      |                  ^~~~~~~~~
/home/user/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp: In member function ‘void LLamaModel::embedInternal(const std::vector<std::__cxx11::basic_string<char> >&, float*, std::string, int, size_t*, bool, bool, bool (*)(unsigned int*, unsigned int, const char*), const EmbModelSpec*)’:
/home/user/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:934:14: error: ‘transform’ is not a member of ‘std’
  934 |         std::transform(embd, embd_end, embd, product(1.0 / total));
      |              ^~~~~~~~~
/home/user/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:938:14: error: ‘transform’ is not a member of ‘std’
  938 |         std::transform(embd, embd_end, embeddings, product(scale));
      |              ^~~~~~~~~
/home/user/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp: In function ‘bool is_arch_supported(const char*)’:
/home/user/.cache/paru/clone/gpt4all-chat/src/gpt4all-2.7.5/gpt4all-backend/llamamodel.cpp:980:21: error: no matching function for call to ‘find(std::vector<const char*>::const_iterator, std::vector<const char*>::const_iterator, std::string)’
  980 |     return std::find(KNOWN_ARCHES.begin(), KNOWN_ARCHES.end(), std::string(arch)) < KNOWN_ARCHES.end();
      |            ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Steps to Reproduce

(Arch Linux steps only; the failure may be related to the compiler version that ships with base-devel)

  1. Clone the gpt4all-chat packaging repository from the AUR: `git clone https://aur.archlinux.org/gpt4all-chat.git`
  2. Change into the cloned directory: `cd gpt4all-chat`
  3. Run `makepkg -si` to build and install the package

Expected Behavior

The package should build and install without modification. To resolve the issue, I added a single line (`#include <algorithm>`) at the top of the llamamodel.cpp file; with this patch, compilation succeeds.
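For reference, the workaround is just a one-line addition near the top of gpt4all-backend/llamamodel.cpp. The surrounding includes shown here are illustrative only; the file's actual include list may differ:

```cpp
// gpt4all-backend/llamamodel.cpp -- illustrative excerpt of the top of the file
#include <algorithm>  // added: declares std::transform and std::find
#include <cmath>      // std::sqrt, used in the embedding normalization code
#include <string>
#include <vector>
```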

I reported this potential Arch Linux-specific bug to the AUR package maintainer and found that similar errors have been discussed on Stack Overflow (see 1 and 2). Those threads suggest that explicitly including `<algorithm>` is the usual fix for this class of error, since other standard headers are not guaranteed to pull it in transitively.
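As a self-contained sanity check (my own snippet, not code from the repository), the following program exercises the same `std::transform` and `std::find` patterns that fail in llamamodel.cpp. It compiles on any conforming compiler once `<algorithm>` is included; without that include, success depends on whether another standard header happens to pull it in transitively:

```cpp
// Hypothetical example, not repository code: std::transform and std::find
// are declared in <algorithm> and must not be relied on via transitive includes.
#include <algorithm>
#include <string>
#include <vector>

int main() {
    std::vector<double> embd{1.0, 2.0, 3.0};
    const double mean = 2.0;
    // Same pattern as the failing lines in llamamodel.cpp: center the values in place.
    std::transform(embd.begin(), embd.end(), embd.begin(),
                   [mean](double f) { return f - mean; });

    // Same pattern as is_arch_supported(): look up a std::string in a list of C strings.
    std::vector<const char*> knownArches{"llama", "gptj"};
    bool supported = std::find(knownArches.begin(), knownArches.end(),
                               std::string("llama")) != knownArches.end();
    return supported ? 0 : 1;
}
```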

Your Environment

  • GPT4All version (if applicable): 2.7.5
  • Operating System: Arch Linux
  • Chat model used (if applicable): N/A