AriticGPT-Dalai is a platform for training machine learning models, including language models like GPT. You can train your own GPT model on your own data in AriticGPT-Dalai.
To do this, you would first need to prepare your data and create a dataset that follows the format required by the GPT model. This typically involves tokenizing your text data and dividing it into training and validation sets. Once you have prepared your data, you can use the training scripts provided in Dalai to train a GPT model on your data.
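The train/validation split described above can be sketched in a few lines of Node.js. This is a minimal illustration only; the 90/10 ratio and the in-memory example data are assumptions for the sketch, not part of Dalai's API:

```javascript
// Divide an array of text examples into train/validation splits.
// The 90/10 ratio is an illustrative default; pick what suits your dataset.
function splitDataset(lines, trainRatio = 0.9) {
  const cut = Math.floor(lines.length * trainRatio);
  return { train: lines.slice(0, cut), validation: lines.slice(cut) };
}

// Example: ten one-line examples -> 9 train, 1 validation.
const examples = Array.from({ length: 10 }, (_, i) => `example ${i}`);
const { train, validation } = splitDataset(examples);
console.log(train.length, validation.length); // 9 1
```

In practice you would read the lines from your prepared text file and write the two splits to separate files before tokenizing.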
Keep in mind that training a language model from scratch can be a computationally intensive task, so you will need access to significant computing resources. Additionally, training a high-quality language model requires careful selection of hyperparameters and training settings, so it may take some experimentation to achieve good results.
Alternatively, if you have a smaller amount of data or are looking for a faster way to fine-tune an existing GPT model, you can take a pre-trained GPT model and fine-tune it on your data. This approach is often faster and requires fewer computational resources than training a model from scratch.
Sample commands follow (for Aritic-specific details, contact the AriticGPT-Dalai team).
You can use a CSV or Excel file for training; first convert it into a training file:
```
dalai dataset create \
  --input /path/to/input.csv \
  --output /path/to/output/dir \
  --tokenizer-type bert \
  --tokenizer-name bert-base-uncased \
  --max-seq-length 128 \
  --train-split 0.8 \
  --dev-split 0.1 \
  --test-split 0.1 \
  --label-column label \
  --text-column text
```
- `--input`: path to the input CSV file
- `--output`: directory where the output files will be saved
- `--tokenizer-type` and `--tokenizer-name`: the type and name of the tokenizer to use for tokenizing the text data. In this example, we're using the BERT tokenizer with the "bert-base-uncased" pre-trained model.
- `--max-seq-length`: the maximum length of input sequences to the tokenizer (and subsequently, the maximum length of input sequences to the model)
- `--train-split`, `--dev-split`, `--test-split`: the proportions of the data to allocate to the training, development (validation), and test sets, respectively. These should add up to 1.0.
- `--label-column` and `--text-column`: the names of the columns in the CSV file that contain the label (target) and text data, respectively.

This command tokenizes the text data in the CSV file using the specified tokenizer and splits the data into training, validation, and test sets. The resulting dataset is saved to the specified output directory in the format required for fine-tuning a pre-trained language model like GPT using the `dalai train` command shown below.
Again, keep in mind that this is just an example command, and you will likely need to modify some of the options to suit your specific dataset and training goals. Additionally, you will need to have the necessary data and environment set up before running this command; you can contact the AriticGPT-Dalai team for help with this.
```
dalai train \
  --model gpt \
  --train-data /path/to/train/data.txt \
  --validation-data /path/to/validation/data.txt \
  --output-dir /path/to/output/dir \
  --num-layers 12 \
  --hidden-size 768 \
  --num-attention-heads 12 \
  --batch-size 8 \
  --learning-rate 1e-4 \
  --epochs 3 \
  --gradient-accumulation-steps 2 \
  --max-seq-length 128
```
- `--model gpt`: specifies that we want to train a GPT model
- `--train-data` and `--validation-data`: paths to the training and validation data files, respectively
- `--output-dir`: directory where the trained model and other output files will be saved
- `--num-layers`, `--hidden-size`, `--num-attention-heads`: hyperparameters that control the size and complexity of the model architecture
- `--batch-size`: number of examples to process in each training step
- `--learning-rate`: controls the rate at which the model weights are updated during training
- `--epochs`: number of times to iterate over the entire training dataset
- `--gradient-accumulation-steps`: number of training steps over which to accumulate gradients before applying them to update the model weights
- `--max-seq-length`: the maximum length of input sequences to the model
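One relationship among the flags above worth making explicit: gradient accumulation multiplies the effective batch size. The arithmetic below is standard; the dataset size of 16,000 examples is a hypothetical chosen for illustration:

```javascript
// Effective batch size = per-step batch size * gradient accumulation steps.
function effectiveBatchSize(batchSize, gradAccumSteps) {
  return batchSize * gradAccumSteps;
}

// Weight updates per epoch for a given dataset size.
function updatesPerEpoch(numExamples, batchSize, gradAccumSteps) {
  return Math.ceil(numExamples / effectiveBatchSize(batchSize, gradAccumSteps));
}

// With the example flags: --batch-size 8, --gradient-accumulation-steps 2
console.log(effectiveBatchSize(8, 2));     // 16
console.log(updatesPerEpoch(16000, 8, 2)); // 1000
```

This is why accumulation is a common way to simulate a larger batch on limited memory: each optimizer update sees gradients from 16 examples while only 8 are held in memory at a time.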
Run LLaMA and Alpaca on your computer.
- Powered by llama.cpp, llama-dl CDN, and alpaca.cpp
- Hackable web app included
- Ships with JavaScript API
- Ships with Socket.io API
Dalai runs on all of the following operating systems:
- Linux
- Mac
- Windows
Runs on most modern computers. Unless your computer is very very old, it should work.
According to a llama.cpp discussion thread, here are the memory requirements:
- 7B => ~4 GB
- 13B => ~8 GB
- 30B => ~16 GB
- 65B => ~32 GB
Currently 7B and 13B models are available via alpaca.cpp.
Alpaca comes fully quantized (compressed); the only space you need is 4.21GB for the 7B model and 8.14GB for the 13B model.
You need a lot of space for storing the models. The model name must be one of: 7B, 13B, 30B, and 65B.
You do NOT have to install all models, you can install one by one. Let's take a look at how much space each model takes up:
NOTE
The following numbers assume that you DO NOT touch the original model files and keep BOTH the original model files AND the quantized versions.
You can optimize this if you delete the original models (which are much larger) after installation and keep only the quantized versions.
- 7B: Full 31.17GB, Quantized 4.21GB
- 13B: Full 60.21GB, Quantized 4.07GB * 2 = 8.14GB
- 30B: Full 150.48GB, Quantized 5.09GB * 4 = 20.36GB
- 65B: Full 432.64GB, Quantized 5.11GB * 8 = 40.88GB
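To estimate how much disk you need before installing, you can total the figures listed above. A small sketch (the sizes are copied from this document; the helper function itself is not part of Dalai):

```javascript
// Disk usage (GB) per model, from the list above:
// full original weights vs. quantized versions.
const sizes = {
  '7B':  { full: 31.17,  quantized: 4.21 },
  '13B': { full: 60.21,  quantized: 8.14 },
  '30B': { full: 150.48, quantized: 20.36 },
  '65B': { full: 432.64, quantized: 40.88 },
};

// Space needed for a set of models, keeping or deleting the originals.
function diskNeeded(models, keepOriginals = true) {
  return models.reduce((sum, m) =>
    sum + sizes[m].quantized + (keepOriginals ? sizes[m].full : 0), 0);
}

console.log(diskNeeded(['7B'], false).toFixed(2)); // "4.21"
console.log(diskNeeded(['7B', '13B']).toFixed(2)); // "103.73"
```

As the note above says, deleting the originals after installation makes a large difference: all four models quantized fit in under 74GB, versus roughly 675GB for the full weights alone.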
If your Mac doesn't have node.js installed yet, make sure to install node.js >= 10.
Currently supported engines are `llama` and `alpaca`.
Currently alpaca only has the 7B model:

```
npx dalai alpaca install 7B
```

To download llama models, you can run:

```
npx dalai llama install 7B
```

or to download multiple models:

```
npx dalai llama install 7B 13B
```
Now go to step 3.
Normally you don't need this step, but if the commands above don't do anything and immediately end, it means something went wrong because some of the required modules are not installed on your system.
In that case, try the following steps:
In case homebrew is not installed on your computer, install it by running:

```
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

Or you can find the same instructions on the homebrew homepage: https://brew.sh/

Once homebrew is installed, install these dependencies:

```
brew install cmake
brew install pkg-config
```
Just to make sure we cover every vector, let's update NPM as well:
```
npm install -g npm@latest
```
Now go back to step 2.1 and try running the `npx dalai` commands again.
After everything has been installed, run the following command to launch the web UI server:
```
npx dalai serve
```
and open http://localhost:3000 in your browser. Have fun!
On windows, you need to install Visual Studio before installing Dalai.
Visit the Visual Studio downloads page and download Microsoft Visual Studio.
IMPORTANT!!!
When installing Visual Studio, make sure to check the following 3 options:
- Python development
- Node.js development
- Desktop development with C++
IMPORTANT
On Windows, make sure to run all commands in cmd.
DO NOT run in powershell. Powershell has unnecessarily strict permissions and makes the script fail silently.
Currently supported engines are `llama` and `alpaca`.
Currently alpaca only has the 7B model. Open your cmd application and enter:

```
npx dalai alpaca install 7B
```

To download llama models, open your cmd application and enter:

```
npx dalai llama install 7B
```

or to download multiple models:

```
npx dalai llama install 7B 13B
```
In case the above steps fail, try installing Node.js and Python separately.
Install Python:
Install Node.js >= 18:
After both have been installed, open powershell and type `python` and then `node` to check that each application exists. Once you've checked that they both exist, try again.
After everything has been installed, run the following command to launch the web UI server (make sure to run in cmd and not powershell!):

```
npx dalai serve
```
and open http://localhost:3000 in your browser. Have fun!
You need to make sure you have the correct versions of Python and Node.js installed:
- Python: version 3.10 or lower (not 3.11), since pytorch and other libraries do not yet support the latest version
- Node.js: version 18 or higher
Currently supported engines are `llama` and `alpaca`.
Currently alpaca only has the 7B model:

```
npx dalai alpaca install 7B
```

To download llama models, you can run:

```
npx dalai llama install 7B
```

or to download multiple models:

```
npx dalai llama install 7B 13B
```
In case the model install silently fails or hangs forever, try the following command, and try running the npx command again:
On ubuntu/debian/etc.:

```
sudo apt-get install build-essential python3-venv -y
```

On fedora/etc.:

```
dnf install make automake gcc gcc-c++ kernel-devel python3-virtualenv -y
```
After everything has been installed, run the following command to launch the web UI server:
```
npx dalai serve
```
and open http://localhost:3000 in your browser. Have fun!
Dalai is also an NPM package:
- programmatically install
- locally make requests to the model
- run a dalai server (powered by socket.io)
- programmatically make requests to a remote dalai server (via socket.io)
Dalai is an NPM package. You can install it using:
```
npm install dalai
```
```
const dalai = new Dalai(home)
```

- `home`: (optional) manually specify the llama.cpp folder

By default, Dalai automatically stores the entire `llama.cpp` repository under `~/llama.cpp`.

However, often you may already have a `llama.cpp` repository somewhere else on your machine and want to just use that folder. In this case you can pass in the `home` attribute.

Create a workspace at `~/llama.cpp`:

```javascript
const dalai = new Dalai()
```

Manually set the `llama.cpp` path:

```javascript
const dalai = new Dalai("/Documents/llama.cpp")
```
```
dalai.request(req, callback)
```

- `req`: a request object, made up of the following attributes:
  - `prompt`: (required) the prompt string
  - `model`: (required) the model type + model name to query. Takes the following form: `<model_type>.<model_name>`
    - Example: `alpaca.7B`, `llama.13B`, ...
  - `url`: only needed if connecting to a remote dalai server
    - if unspecified, it uses the node.js API to directly run dalai locally
    - if specified (for example `ws://localhost:3000`) it looks for a socket.io endpoint at the URL and connects to it.
  - `threads`: the number of threads to use (the default is 8 if unspecified)
  - `n_predict`: the number of tokens to return (the default is 128 if unspecified)
  - `seed`: the seed. The default is -1 (none)
  - `top_k`
  - `top_p`
  - `repeat_last_n`
  - `repeat_penalty`
  - `temp`: temperature
  - `batch_size`: batch size
  - `skip_end`: by default, every session ends with `\n\n<end>`, which can be used as a marker to know when the full response has returned. However, sometimes you may not want this suffix. Set `skip_end: true` and the response will no longer end with `\n\n<end>`
- `callback`: the streaming callback function that gets called every time the client gets any token response back from the model
Using node.js, you just need to initialize a Dalai object with `new Dalai()` and then use it:

```javascript
const Dalai = require('dalai')
new Dalai().request({
  model: "7B",
  prompt: "The following is a conversation between a boy and a girl:",
}, (token) => {
  process.stdout.write(token)
})
```
To make use of this in a browser or any other language, you can use the socket.io API.
First you need to run a Dalai socket server:
```javascript
// server.js
const Dalai = require('dalai')
new Dalai().serve(3000) // port 3000
```
Then once the server is running, simply make requests to it by passing the `ws://localhost:3000` socket url when initializing the Dalai object:

```javascript
const Dalai = require("dalai")
new Dalai().request({
  url: "ws://localhost:3000",
  model: "7B",
  prompt: "The following is a conversation between a boy and a girl:",
}, (token) => {
  console.log("token", token)
})
```
```
dalai.serve(port)
```

Starts a socket.io server at `port`:

```javascript
const Dalai = require("dalai")
new Dalai().serve(3000)
```
```
dalai.http(http)
```

Connect with an existing `http` instance (the `http` npm package).

- `http`: the http object

This is useful when you're trying to plug dalai into an existing node.js web app:

```javascript
const app = require('express')();
const http = require('http').Server(app);
dalai.http(http)
http.listen(3000, () => {
  console.log("server started")
})
```
```
await dalai.install(model_type, model_name1, model_name2, ...)
```

- `model_type`: the name of the engine. Currently supports:
  - "alpaca"
  - "llama"
- `model_name1`, `model_name2`, ...: the model names to install ("7B", "13B", "30B", "65B", etc.)
Install Llama "7B" and "13B" models:

```javascript
const Dalai = require("dalai");
const dalai = new Dalai()
await dalai.install("llama", "7B", "13B")
```

Install alpaca 7B model:

```javascript
const Dalai = require("dalai");
const dalai = new Dalai()
await dalai.install("alpaca", "7B")
```
```
const models = await dalai.installed()
```

Returns the array of installed models:

```javascript
const Dalai = require("dalai");
const dalai = new Dalai()
const models = await dalai.installed()
console.log(models) // prints ["7B", "13B"]
```
By default Dalai uses your home directory to store the entire repository (`~/dalai`). However sometimes you may want to store the archive elsewhere.

In this case you can call all CLI methods using the `--home` flag:

```
npx dalai llama install 7B --home ~/test_dir
npx dalai serve --home ~/test_dir
```
To make sure you update to the latest, first find the latest version at https://www.npmjs.com/package/dalai
Let's say the latest version is `0.3.0`. To update the dalai version, run:

```
npx dalai@0.3.0 setup
```
Have questions or feedback? Follow the project through the following outlets: