Streamline your code deployment so you can focus on your product.
There are 20,881 repositories under the deployment topic.
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
A deployment automation tool built on Ruby, Rake, and SSH.
Package desktop applications as AppImages that run on common Linux-based operating systems, such as RHEL, CentOS, openSUSE, SLED, Ubuntu, Fedora, Debian and derivatives. Join #AppImage on irc.libera.chat
Node.js Application Configuration
Deploying a React App (created using create-react-app) to GitHub Pages
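The create-react-app flow described above is commonly driven by the `gh-pages` npm package installed as a dev dependency. A minimal sketch of the relevant package.json fields, assuming that package; the homepage URL and app name are illustrative placeholders, not taken from this listing:

```json
{
  "homepage": "https://username.github.io/my-app",
  "scripts": {
    "predeploy": "npm run build",
    "deploy": "gh-pages -d build"
  }
}
```

With this in place, `npm run deploy` first runs the build (via `predeploy`) and then pushes the contents of the `build/` directory to the `gh-pages` branch.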
StackStorm (aka "IFTTT for Ops") is event-driven automation for auto-remediation, incident response, troubleshooting, deployments, and more for DevOps and SREs. Includes a rules engine, workflows, 160 integration packs with 6000+ actions (see https://exchange.stackstorm.org) and ChatOps. Installer at https://docs.stackstorm.com/install/index.html
Build & ship backends without writing any infrastructure files.
Build powerful pipelines in any programming language.
A web-based UI for deploying and managing applications in Kubernetes clusters
PaddlePaddle End-to-End Development Toolkit (『飞桨』 full-workflow deep learning development tool)
An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
A guideline for building practical production-level deep learning systems to be deployed in real-world applications.
k8s tutorials
🚀 Automatically deploy your project to GitHub Pages using GitHub Actions. This action can be configured to push your production-ready code into any branch you'd like.
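A minimal workflow sketch of this kind of Pages deployment, assuming the widely used JamesIves/github-pages-deploy-action; the trigger branch, build commands, output folder, and target branch below are illustrative assumptions, not taken from this listing:

```yaml
# Hypothetical workflow: build the project and publish it to a Pages branch.
name: Deploy to GitHub Pages
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - uses: JamesIves/github-pages-deploy-action@v4
        with:
          folder: build      # directory containing the production-ready files
          branch: gh-pages   # the action can push to any branch you choose
```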
GPP is Android's unofficial release automation Gradle Plugin. It handles everything from building, uploading, and promoting your App Bundle or APK to publishing app listings and other metadata.
Tool integration platform for Kubernetes
🪄 Turns your machine learning code into microservices with web API, interactive GUI, and more.
Simplify deployments in Elixir with OTP releases!
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
OpenMMLab Model Deployment Framework