mzahran001 / ray

A fast and simple framework for building and running distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.

Home Page: https://ray.io

Ray provides a simple and universal API for building distributed applications.

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning
  • RLlib: Scalable Reinforcement Learning
  • RaySGD: Distributed Training Wrappers
  • Ray Serve: Scalable and Programmable Serving

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

NOTE: As of Ray 0.8.1, Python 2 is no longer supported.

Quick Start

Execute Python functions in parallel.
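
For example, a minimal sketch using the @ray.remote decorator to turn ordinary Python functions into parallel tasks (ray.init, f.remote, and ray.get are the core task API; the specific function here is illustrative):

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

# Launch four tasks in parallel; f.remote() returns futures immediately.
futures = [f.remote(i) for i in range(4)]

# Block until all results are ready.
print(ray.get(futures))  # [0, 1, 4, 9]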

To use Ray's actor model:
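
A minimal sketch of an actor, i.e. a stateful worker process whose method calls return futures (the Counter class is illustrative):

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

# Create four actor processes and call methods on them concurrently.
counters = [Counter.remote() for _ in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))  # [1, 1, 1, 1]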

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.

Tune Quick Start

Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:
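
Typically this is Ray with the Tune extra (the exact requirement may differ by release):

pip install "ray[tune]"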

This example runs a parallel grid search to optimize an example objective function.
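
A minimal sketch of such a grid search with tune.run, assuming a Ray version where tune.report is available (older releases used tune.track.log); the objective and hyperparameter names are illustrative:

from ray import tune

def objective(step, alpha, beta):
    # A toy score as a function of one training step and two hyperparameters.
    return (0.1 + alpha * step / 100) ** (-1) + beta * 0.1

def training_function(config):
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Report the intermediate score back to Tune after each step.
        tune.report(mean_loss=objective(step, alpha, beta))

analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.grid_search([1, 2, 3]),
    })

print("Best config:", analysis.get_best_config(metric="mean_loss", mode="min"))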

If TensorBoard is installed, automatically visualize all trial results:
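
Tune writes results under ~/ray_results by default, so a typical invocation is:

tensorboard --logdir ~/ray_results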

RLlib Quick Start

RLlib is an open-source library for reinforcement learning built on top of Ray that offers both high scalability and a unified API for a variety of applications.
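
To run the example below you will also need a deep learning framework and the RLlib extra, typically pip install tensorflow (or torch) and pip install "ray[rllib]". The following is a minimal sketch that trains PPO on a small custom environment, assuming the classic gym API; the SimpleCorridor environment and its configuration values are illustrative, not part of RLlib itself:

import gym
from gym.spaces import Discrete, Box
from ray import tune

class SimpleCorridor(gym.Env):
    """Walk right along a corridor until reaching the end."""

    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = Discrete(2)
        self.observation_space = Box(0.0, self.end_pos, shape=(1,))

    def reset(self):
        self.cur_pos = 0
        return [self.cur_pos]

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        # Reward of 1 only when the agent reaches the end of the corridor.
        return [self.cur_pos], 1 if done else 0, done, {}

tune.run(
    "PPO",
    config={
        "env": SimpleCorridor,
        "num_workers": 4,
        "env_config": {"corridor_length": 5},
    })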

Ray Serve Quick Start

Ray Serve is a scalable model-serving library built on Ray. It is:

  • Framework Agnostic: Use the same toolkit to serve everything from deep learning models built with frameworks like PyTorch or TensorFlow & Keras to scikit-learn models or arbitrary business logic.
  • Python First: Configure your model serving with pure Python code - no more YAMLs or JSON configs.
  • Performance Oriented: Turn on batching, pipelining, and GPU acceleration to increase the throughput of your model.
  • Composition Native: Allows you to create "model pipelines" by composing multiple models together to drive a single prediction.
  • Horizontally Scalable: Serve can linearly scale as you add more machines. Enable your ML-powered service to handle growing traffic.

To run this example, you will need to install the following:
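
Typically this is scikit-learn plus Ray with the Serve extra (the exact requirement may differ by release):

pip install scikit-learn
pip install "ray[serve]"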

This example serves a scikit-learn gradient boosting classifier.
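
A minimal sketch, assuming a Ray release with the serve.deployment API (earlier versions used serve.create_backend / serve.create_endpoint instead); the route, class name, and sample input are illustrative:

import requests
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from ray import serve

serve.start()

# Train a simple scikit-learn model on the iris dataset.
iris = load_iris()
model = GradientBoostingClassifier().fit(iris["data"], iris["target"])

@serve.deployment(route_prefix="/iris")
class BoostingModel:
    def __init__(self, model):
        self.model = model
        self.label_list = iris["target_names"].tolist()

    async def __call__(self, request):
        payload = await request.json()
        prediction = self.model.predict([payload["vector"]])[0]
        return {"result": self.label_list[prediction]}

# Deploy the model behind an HTTP endpoint and query it.
BoostingModel.deploy(model)
print(requests.get(
    "http://localhost:8000/iris",
    json={"vector": [1.2, 1.0, 1.1, 0.9]}).json())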

More Information

Older documents:

Getting Involved

About


License: Apache License 2.0


Languages

Python 53.3%, C++ 32.5%, Java 9.3%, Starlark 1.9%, TypeScript 1.2%, Shell 0.8%, C 0.4%, HTML 0.3%, Go 0.2%, CSS 0.1%, Dockerfile 0.1%, Makefile 0.0%, Jupyter Notebook 0.0%, JavaScript 0.0%