LambdaPi is a serverless runtime environment designed for code generated by Large Language Models (LLMs), complete with a user interface for writing, testing, and deploying the code.
As LLMs continue to revolutionize software development, LambdaPi aims to provide a robust open-source ecosystem for building, testing, and deploying APIs (and eventually other types of software). By harnessing the power of GPT models, LambdaPi automatically containerizes and deploys code in a serverless fashion, with containers created and destroyed with each request.
LambdaPi features a shared Redis database and file storage for communication, storage, and input/output needs. Functions can be called using JSON or query parameters and return a variety of formats, including JSON, plain text, and files (png, csv).
LambdaPi supports a wide range of runtimes, including Python, Node, Bash, Ruby, Shell, C++, and others of your choice (just add a Docker runtime configuration). Functions are designed as straightforward programs, with parameters passed as command-line arguments. Output is returned to the user via stdout, ensuring a streamlined process:
command-line parameters as input -> program execution -> printed result
This simplicity caters to both humans and LLMs, making it easier for anyone to create and deploy code. With support for multiple runtimes and an emphasis on simplicity, LambdaPi enables developers and LLMs alike to create powerful, efficient code with minimal barriers to entry.
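A minimal function in this style (a hypothetical example, not one shipped with LambdaPi) is just a script that reads its command-line arguments and prints the result:

```python
import sys

def add(a: int, b: int) -> int:
    # The function body: ordinary code, nothing LambdaPi-specific.
    return a + b

if __name__ == "__main__" and len(sys.argv) >= 3:
    # Invoked by the runtime as: python3 add.py 1337 42
    # Whatever is printed to stdout becomes the response body.
    print(add(int(sys.argv[1]), int(sys.argv[2])))
```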
View the system prompt configuration in config.yaml.example
Demo video: output_video.mp4
- Runtimes:
- Python
- Node.js
- C++
- Kali Linux
- Ruby
- File storage
- Redis storage
- JSON, plain text, or file output support
- Easy scripting and command line execution
The main app runs inside Docker, and all functions run as Docker-in-Docker (DinD) containers.
- Docker
- Python 3.x
- Redis
Clone the repository:
git clone https://github.com/jb41/lambdapi
Navigate to the project directory:
cd lambdapi
Copy the example configuration:
cp config.yaml.example config.yaml
Add your GPT API key to config.yaml (obtain it from https://platform.openai.com/account/api-keys)
Building the Kali Linux image can be time-consuming due to the installation of the kali-linux-headless package. To speed up the setup process, you can skip it by removing the runtime definition: rm runtimes/kali.yaml
Execute the setup script, which will set up the database, configure directories, generate Dockerfiles, and build the corresponding Docker images:
docker-compose -f docker-compose.setup.yml up
Install dependencies:
apt-get install redis docker-ce docker-ce-cli containerd.io
Install required libraries:
pip3 install -r requirements.txt
Run
python3 setup.py
Run
docker-compose up --build
Visit http://localhost:8000/index.html to view the dashboard.
Run
python3 main.py
Visit http://localhost:8000/index.html to view the dashboard.
All files saved in the current directory (/app) are returned in the response. These files, along with the container, are deleted after the request is completed. If you want to preserve a file for future use or access existing files, store or read them from the /data directory.
The Redis URL is stored in the REDIS_HOST environment variable. Redis is shared among all containers and operates on the default port.
You can pass parameters as query parameters or a JSON body. However, they are forwarded to the script as command-line arguments. For instance, a JSON body like this:
{
"a": 1337,
"b": 42,
"foo": "bar"
}
will be passed to a Python script (here called script.py for illustration) as:
python3 script.py "1337" "42" "bar"
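The forwarding can be pictured as flattening the JSON values, in order, into an argument list (a sketch of the behaviour described above; note that the key names are dropped):

```python
import json

def to_argv(body: str) -> list[str]:
    # Values are forwarded in insertion order as plain strings;
    # the parameter names themselves are discarded.
    params = json.loads(body)
    return [str(v) for v in params.values()]

body = '{"a": 1337, "b": 42, "foo": "bar"}'
print(to_argv(body))  # ['1337', '42', 'bar']
```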
In the UI (/functions-details.html), you can provide parameters in the input text area, separating each parameter with a newline.
Please note that there are limitations in preserving parameter names and handling more complex data structures, such as strings with spaces, hashes, and others. Contributions to address these issues are welcome.
LambdaPi is compatible with both GPT-3.5 and GPT-4 models, but performs significantly better with GPT-4, often requiring little or no manual input. GPT-3.5 can be used with some manual adjustments or with simpler scripts.
To optimize performance in a production environment, it's important to run multiple workers. When running LambdaPi using Docker, the application will automatically utilize a number of workers equal to the number of CPU cores. You can configure this setting for your specific Docker instance to ensure efficient operation.
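For reference, the per-container default described above (one worker per CPU core) boils down to the standard-library core count; how that number is wired into the server config is up to your deployment:

```python
import multiprocessing

# Default worker count when running under Docker: one per CPU core.
workers = multiprocessing.cpu_count()
print(workers)
```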
We warmly welcome your contributions and appreciate your support in enhancing the project. There are several areas where you can help improve LambdaPi:
- Frontend expansion: Generate HTML files with frontend JavaScript and set up a /public directory for hosting files (similar to the /data implementation).
- Chaining: Develop unit tests with the LLM, execute the code, and verify the output before presenting it to users.
- Dockerfile optimization: Reconsider the use of Python for generating Dockerfiles, and explore potential improvements in managing Docker containers (including handling, setup, and execution).
- Additional runtimes: Incorporate more programming languages and technologies, such as Go or Rust, to further broaden the range of supported runtimes.
- Alternative Linux distributions: Evaluate other Linux distributions that may be beneficial for the project (currently using Kali Linux because of its preinstalled tooling).
- UX/UI enhancements: Improve the user interface by rethinking the layout of elements like the code editor, prompt, LLM text response, parameters, and results.
- User management: Consider implementing user registration and authentication features.
- HTTP method restrictions: Limit access to specific HTTP methods, instead of allowing all methods (GET, POST, PUT, PATCH, DELETE, OPTIONS, HEAD).
- HTTP Basic Auth: Implement endpoint protection using HTTP Basic Authentication.
- Cronjobs: Enable the execution of scripts using cron jobs.
- Sqlite3 integration: Add support for SQLite3 databases, providing an alternative to the existing file system for those who require a relational database.
- Chaining on errors: Run /completions with the code and the error message in order to fix it.
- Fix formatting for non-JSON responses: in /functions-details.html, non-JSON responses containing \r\n could be formatted better.