lua-cgemma

Lua bindings for gemma.cpp.

Requirements

Before starting, you should have installed:

Installation

1st step: Clone the source code from GitHub: git clone https://github.com/ufownl/lua-cgemma.git

2nd step: Build and install:

To build and install using the default settings, just enter the repository's directory and run the following commands:

mkdir build
cd build
cmake .. && make
sudo make install

3rd step: See the gemma.cpp documentation to learn how to obtain the model weights and tokenizer.

Usage

Synopsis

-- Create a Gemma instance
local gemma, err = require("cgemma").new({
  tokenizer = "/path/to/tokenizer.spm",
  model = "2b-it",
  weights = "/path/to/2b-it-sfp.sbs"
})
if not gemma then
  print("Opoos! ", err)
  return
end

-- Create a chat session
local session, seed = gemma:session()
if not session then
  print("Opoos! ", seed)
  return
end

print("Random seed of session: ", seed)
while true do
  print("New conversation started")

  -- Multi-turn chat
  while session:ready() do
    io.write("> ")
    local text = io.read()
    if not text then
      print("End of file")
      return
    end
    local reply, err = session(text)
    if not reply then
      print("Opoos! ", err)
      return
    end
    print("reply: ", reply)
  end

  print("Exceed the maximum number of tokens")
  session:reset()
end

APIs for Lua

cgemma.info

syntax: cgemma.info()

Show information about the cgemma module.
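
For example, a minimal sketch of calling it from a Lua script (assuming the module is installed on the package path):

-- Print information about the cgemma module
local cgemma = require("cgemma")
cgemma.info()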

cgemma.new

syntax: <cgemma.instance>inst, <string>err = cgemma.new(<table>options)

Create a Gemma instance.

A successful call returns a Gemma instance. Otherwise, it returns nil and a string describing the error.

Available options:

{
  tokenizer = "/path/to/tokenizer.spm",  -- Path of tokenizer model file. (required)
  model = "2b-it",  -- Model type:
                    -- 2b-it (2B parameters, instruction-tuned),
                    -- 2b-pt (2B parameters, pretrained),
                    -- 7b-it (7B parameters, instruction-tuned),
                    -- 7b-pt (7B parameters, pretrained),
                    -- gr2b-it (griffin 2B parameters, instruction-tuned),
                    -- gr2b-pt (griffin 2B parameters, pretrained).
                    -- (required)
  weights = "/path/to/2b-it-sfp.sbs",  -- Path of model weights file. (required)
  scheduler = sched_inst,  -- Scheduler instance; if not provided, a default
                           -- scheduler will be attached.
}

cgemma.scheduler

syntax: <cgemma.scheduler>sched, <string>err = cgemma.scheduler([<number>num_threads])

Create a scheduler instance.

A successful call returns a scheduler instance. Otherwise, it returns nil and a string describing the error.

The only parameter, num_threads, specifies the number of threads in the internal thread pool. If it is not provided or num_threads <= 0, a default scheduler is created whose thread count matches the number of concurrent threads supported by the hardware.
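
A minimal sketch of creating a scheduler and attaching it to a Gemma instance (the thread count and file paths below are placeholder values):

local cgemma = require("cgemma")

-- Create a scheduler with a 4-thread pool (omit the argument to use the default)
local sched, err = cgemma.scheduler(4)
if not sched then
  print("Oops! ", err)
  return
end

-- Pass the scheduler to a new Gemma instance via the `scheduler` option
local gemma, err = cgemma.new({
  tokenizer = "/path/to/tokenizer.spm",
  model = "2b-it",
  weights = "/path/to/2b-it-sfp.sbs",
  scheduler = sched
})
if not gemma then
  print("Oops! ", err)
  return
end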

cgemma.compress_MODEL_weights

model  syntax
2b     <boolean>ok, <string>err = cgemma.compress_2b_weights(<string>weights, <string>compressed_weights[, <cgemma.scheduler>sched])
7b     <boolean>ok, <string>err = cgemma.compress_7b_weights(<string>weights, <string>compressed_weights[, <cgemma.scheduler>sched])
gr2b   <boolean>ok, <string>err = cgemma.compress_gr2b_weights(<string>weights, <string>compressed_weights[, <cgemma.scheduler>sched])

Generate compressed weights from uncompressed weights.

A successful call returns true. Otherwise, it returns false and a string describing the error.

Parameters:

name                description                                                                  required
weights             Path of the uncompressed weights file.                                       Yes
compressed_weights  Output path of the compressed weights file.                                  Yes
sched               Scheduler instance; if not provided, a default scheduler will be attached.  No
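
As an example, compressing 2B weights could look like this (both paths are placeholders; the optional scheduler argument is omitted, so a default one is attached):

local cgemma = require("cgemma")

-- Generate a compressed weights file from an uncompressed one
local ok, err = cgemma.compress_2b_weights("/path/to/2b-it.uncompressed.sbs",
                                           "/path/to/2b-it-sfp.sbs")
if not ok then
  print("Oops! ", err)
end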

cgemma.instance.session

syntax: <cgemma.session>sess, <number or string>seed = inst:session([<table>options])

Create a chat session.

A successful call returns the session and its random seed. Otherwise, it returns nil and a string describing the error.

Available options and default values:

{
  max_tokens = 3072,  -- Maximum number of tokens in prompt + generation.
  max_generated_tokens = 2048,  -- Maximum number of tokens to generate.
  temperature = 1.0,  -- Temperature for top-K.
  seed = 42,  -- Random seed. (if not provided, a random seed is used)
}
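
For illustration, a session with custom limits and a fixed seed might be created like this (the option values are arbitrary examples, and `gemma` is an instance returned by cgemma.new):

-- Create a session with custom generation limits and a reproducible seed
local session, seed = gemma:session({
  max_tokens = 2048,
  max_generated_tokens = 1024,
  temperature = 0.7,
  seed = 12345
})
if not session then
  print("Oops! ", seed)  -- on failure, the second return value is the error message
  return
end
print("Random seed of session: ", seed)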

cgemma.session.ready

syntax: <boolean>ok = sess:ready()

Check if the session is ready to chat.

cgemma.session.reset

syntax: sess:reset()

Reset the session to start a new conversation.

cgemma.session.dumps

syntax: <string>data, <string>err = sess:dumps()

Dump the current state of the session to a Lua string.

A successful call returns a Lua string containing the binary state data of the session. Otherwise, it returns nil and a string describing the error.

cgemma.session.loads

syntax: <boolean>ok, <string>err = sess:loads(<string>data)

Load the state data from the given Lua string to restore a previous session.

A successful call returns true. Otherwise, it returns false and a string describing the error.
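
A sketch of snapshotting and restoring conversation state in memory (assuming `session` was created as shown above):

-- Snapshot the current state of the session
local data, err = session:dumps()
if not data then
  print("Oops! ", err)
  return
end

-- ... later, restore the snapshot to continue the conversation
local ok, err = session:loads(data)
if not ok then
  print("Oops! ", err)
end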

cgemma.session.dump

syntax: <boolean>ok, <string>err = sess:dump(<string>path)

Dump the current state of the session to a specific file.

A successful call returns true. Otherwise, it returns false and a string describing the error.

cgemma.session.load

syntax: <boolean>ok, <string>err = sess:load(<string>path)

Load the state data from the given file to restore a previous session.

A successful call returns true. Otherwise, it returns false and a string describing the error.
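
The file-based variants follow the same pattern (the path below is a placeholder):

-- Persist the session state to a file ...
local ok, err = session:dump("/path/to/session.state")
if not ok then
  print("Oops! ", err)
  return
end

-- ... and restore it later
local ok, err = session:load("/path/to/session.state")
if not ok then
  print("Oops! ", err)
end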

metatable(cgemma.session).__call

syntax: <string or boolean>reply, <string>err = sess(<string>text[, <function>stream])

Generate reply.

A successful call returns the reply text (when no stream function is given) or true (when a stream function is given). Otherwise, it returns nil and a string describing the error.

The stream function is defined as follows:

function stream(token, pos, prompt_size)
  if pos < prompt_size then
    -- Gemma is processing the prompt
    io.write(pos == 0 and "reading and thinking ." or ".")
  elseif token then
    -- Stream the token text output by Gemma here
    if pos == prompt_size then
      io.write("\nreply: ")
    end
    io.write(token)
  else
    -- Gemma's output reaches the end
    print()
  end
  io.flush()
  -- Returning true indicates success; returning false indicates failure and terminates generation
  return true
end
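
Putting it together, a streaming call could look like this (assuming `session` and the `stream` function defined above; the prompt is an arbitrary example):

-- Generate a reply while streaming output through the callback;
-- with a stream function, a successful call returns true instead of the reply text
local ok, err = session("Tell me something about Lua.", stream)
if not ok then
  print("Oops! ", err)
end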

License

BSD-3-Clause license. See LICENSE for details.
