PipelineAI Quick Start (CPU + GPU)
Train and Deploy your ML and AI Models in CPU and GPU Environments.
Having Issues? Contact Us Anytime... We're Always Awake.
- Slack: https://joinslack.pipeline.ai
- Email: help@pipeline.ai
- Web: https://support.pipeline.ai
- YouTube: https://youtube.pipeline.ai
- Slideshare: https://slideshare.pipeline.ai
- Workshop: https://workshop.pipeline.ai
- Troubleshooting Guide
PipelineAI Community Events
- PipelineAI Deep Learning Workshops (TensorFlow + Spark + GPUs)
- Advanced Spark and TensorFlow Meetup (Global)
PipelineAI Features
Consistent, Immutable, Reproducible Model Runtimes
Each model is built into its own Docker image containing the appropriate Python, C++, and Java/Scala runtime libraries for training or prediction.
Use the same Docker image from your local laptop all the way to production to avoid dependency surprises.
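The per-model image idea above can be sketched as a minimal Dockerfile. This is an illustrative sketch only, not PipelineAI's actual build recipe; the base image, file paths, and `serve.py` entrypoint are hypothetical:

```dockerfile
# Hypothetical sketch: one immutable image per model, with pinned runtime deps.
FROM python:3.6-slim

# Pin the exact runtime libraries the model was trained against.
COPY requirements.txt /model/requirements.txt
RUN pip install --no-cache-dir -r /model/requirements.txt

# Bake the trained model artifact into the image itself, so laptop
# and production run byte-identical code and weights.
COPY model/ /model/

# Serve predictions; the same image could also run training jobs.
CMD ["python", "/model/serve.py"]
```

Because the image is immutable, promoting a model from laptop to production is just re-tagging and pushing the same image, never rebuilding it.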
Sample Machine Learning and AI Models
Click HERE to view model samples for the following:
- Scikit-Learn
- TensorFlow
- Keras
- Spark ML (formerly called Spark MLlib)
- XGBoost
- PyTorch
- Caffe/Caffe2
- Theano
- MXNet
- PMML/PFA
- Custom Java/Python/C++ Ensembles
Supported Model Runtimes (CPU and GPU)
- Python (Scikit-Learn, TensorFlow, etc.)
- Java
- Scala
- Spark ML
- C++
- Caffe2
- Theano
- TensorFlow Serving
- Nvidia TensorRT (TensorFlow, Caffe2)
- MXNet
- CNTK
- ONNX
Supported Streaming Engines
- Kafka
- Kinesis
- Flink
- Spark Streaming
- Heron
- Storm
Advanced PipelineAI Product Features
- Click HERE to compare PipelineAI Products.