serpa-cloud / gpt-time-nixtla



FinOps GPT

A FinOps application, powered by TimeGPT, GPT-4, and Vantage. 100% open source.
⛅️💸🤖

Demo

Demo video: Time.GPT.Vantage.-.19.August.2023.mp4

Interactive Demo

Live Demo: TimeGPT + Vantage by Serpa Cloud

About

This project fetches historical cloud costs from Vantage's API. Using TimeGPT, a general pre-trained model for time series, it analyzes past behaviors, identifying trends and patterns to accurately predict future usage and detect anomalies. Finally, it uses GPT-4 to provide a possible explanation for the cause of the atypical spend.
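The anomaly-detection step boils down to comparing each day's actual spend against the model's forecast interval. A minimal sketch of that idea in JavaScript (the field names `lo`, `hi`, and `value` are illustrative assumptions, not the project's or TimeGPT's actual response shape):

```javascript
// Flag dates whose actual spend falls outside the forecast interval.
// actuals:  [{ date, value }]   observed daily costs
// forecast: [{ date, lo, hi }]  predicted lower/upper bounds per date
function flagAnomalies(actuals, forecast) {
  const bounds = new Map(forecast.map((f) => [f.date, f]));
  return actuals
    .filter((a) => {
      const f = bounds.get(a.date);
      return f && (a.value < f.lo || a.value > f.hi);
    })
    .map((a) => a.date);
}
```

The dates this returns are what would then be handed to GPT-4 for a possible explanation of the atypical spend.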

The project includes a server component to manage the services and a web app based on React.
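Much of the server's glue work is data reshaping: collapsing raw cost line items into the daily time series a forecasting model consumes. A hedged sketch of that kind of transform (the input field names are assumptions, not Vantage's actual API schema):

```javascript
// Sum cost line items per day and return a sorted { timestamp, value } series.
// costItems: [{ date, amount }] raw line items, possibly many per day
function toDailySeries(costItems) {
  const totals = {};
  for (const item of costItems) {
    totals[item.date] = (totals[item.date] || 0) + item.amount;
  }
  return Object.keys(totals)
    .sort()
    .map((date) => ({ timestamp: date, value: totals[date] }));
}
```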

Before You Start

Before running this application, you'll need:

  • A Nixtla token for TimeGPT (exported as NIXTLA_TOKEN)
  • An OpenAI API key for GPT-4 (exported as OPENAI_API_KEY)
  • A Vantage account with API access to your historical cloud costs

Getting Started

After cloning the repository, install the dependencies by running:

yarn

Running in Development Mode (localhost)

To run the project, you need to execute both components in parallel:

Server

export NIXTLA_TOKEN="MY_NIXTLA_TOKEN" && export OPENAI_API_KEY="MY_OPENAI_API_KEY" && yarn serve

This runs the backend server at http://localhost:7001.

Client

yarn start

This runs the web app in development mode. Open http://localhost:3000 in your browser. The page will reload when you make changes.

Running in Live Mode

Compile the Web App

yarn build

Server

export NIXTLA_TOKEN="MY_NIXTLA_TOKEN" && export OPENAI_API_KEY="MY_OPENAI_API_KEY" && yarn serve

This runs the backend server at http://localhost:7001 and exposes static files at the root.

Build Docker from Source

Create the Docker Image

docker build --pull --rm -f "Dockerfile" -t gpt-time-nixtla-vantage:latest "."

Run the Docker Image

docker run --env OPENAI_API_KEY="MY_OPENAI_API_KEY" --env NIXTLA_TOKEN="MY_NIXTLA_TOKEN" -p 127.0.0.1:80:80/tcp gpt-time-nixtla-vantage:latest

This runs the backend server at http://localhost and exposes static files at the root.

Running using Docker from Dockerhub

Run the container

docker run --env OPENAI_API_KEY="MY_OPENAI_API_KEY" --env NIXTLA_TOKEN="MY_NIXTLA_TOKEN" -p 127.0.0.1:80:80/tcp serpacloud/gpt-time-nixtla-vantage:latest

This runs the backend server at http://localhost and exposes static files at the root.
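If you prefer Docker Compose, the run command above maps to a file like the following. This is a sketch, not a file shipped in the repository:

```yaml
# Hypothetical docker-compose.yml equivalent of the docker run command above.
services:
  finops-gpt:
    image: serpacloud/gpt-time-nixtla-vantage:latest
    environment:
      OPENAI_API_KEY: "MY_OPENAI_API_KEY"
      NIXTLA_TOKEN: "MY_NIXTLA_TOKEN"
    ports:
      - "127.0.0.1:80:80"
```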

Serpa Cloud Deploy

You can make modifications to this application and easily deploy it using Serpa Cloud. Just fork it on GitHub or upload the application to a new repository.


When configuring the deployment, remember to add the environment variables for your API keys: OPENAI_API_KEY and NIXTLA_TOKEN.

Contributing

This is a demo of the capabilities of using Nixtla, Vantage, and OpenAI to forecast and analyze cloud costs. It is also intended to showcase how easy deployments are with Serpa. However, if the community shows interest, we are happy to include more features.

Top of mind, some next steps could be:

  • Add support for Pump
  • Add support for Cloudthread
  • Update the front-end to show progress

If you're a developer who'd like to help with any of these, please open an issue to discuss the best way to tackle the challenge.

License

MIT License

Made with love in 🇲🇽 by 🏳️‍⚧️.

