Amanda-Barbara / xla

A community-driven and modular open source compiler for ML.


OpenXLA is a community-driven, modular open-source compiler (actively migrating from tensorflow/xla).

The OpenXLA compiler is a community-driven, modular ML compiler. It will enable efficient optimization and deployment of ML models from most major frameworks to any hardware backend, notably CPUs, GPUs, and ML ASICs.

Warning: This repo is currently being migrated from TensorFlow. Until the migration is complete, it will not accept PRs.

It is being created from the code that currently lives inside the tensorflow repository, under OpenXLA SIG governance.

If you want to use XLA with your ML project, refer to the documentation for your ML framework.

Everything else in this repo is intended for XLA developers and integrators (to debug or add support for ML frontends and hardware backends).

Get started

Here's how you can start developing in the XLA compiler:

Note: If you're not contributing code to the XLA compiler, you shouldn't clone and build this repo. To simply compile a model with XLA, see the links above to use one of the supported ML frameworks.

To build XLA, you will need to install Bazel. Bazelisk is an easy way to do this: it automatically downloads the correct Bazel version for XLA. If Bazelisk is unavailable on your platform, you can install Bazel manually instead.
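Two common ways to install Bazelisk are sketched below; both assume the corresponding toolchain (Go or Node.js) is already present, so check the bazelbuild/bazelisk README for platform-specific binaries if neither applies:

```shell
# Option 1: install Bazelisk via the Go toolchain (assumes Go is installed).
go install github.com/bazelbuild/bazelisk@latest

# Option 2: install Bazelisk via npm (assumes Node.js is installed).
npm install -g @bazel/bazelisk
```

Bazelisk then acts as a drop-in `bazel` binary, picking the Bazel version pinned by the repository.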

Clone this repository:

git clone https://github.com/openxla/xla && cd xla

We recommend using a suitable Docker container to build and test XLA, such as TensorFlow's build container:

docker run --name xla -w /xla -it -d --rm -v $PWD:/xla tensorflow/build:latest-python3.9 bash

Run an end-to-end test using an example StableHLO module:

docker exec xla bazel test xla/examples/axpy:stablehlo_compile_test --nocheck_visibility --test_output=all

This will take quite a while the first time because it must build the entire stack, including MLIR, StableHLO, XLA, and more.

When it's done, you should see output like this:

==================== Test output for //xla/examples/axpy:stablehlo_compile_test:
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from StableHloAxpyTest
[ RUN      ] StableHloAxpyTest.LoadAndRunCpuExecutable
Loaded StableHLO program from xla/examples/axpy/stablehlo_axpy.mlir:
func.func @main(
  %alpha: tensor<f32>, %x: tensor<4xf32>, %y: tensor<4xf32>
) -> tensor<4xf32> {
  %0 = stablehlo.broadcast_in_dim %alpha, dims = []
    : (tensor<f32>) -> tensor<4xf32>
  %1 = stablehlo.multiply %0, %x : tensor<4xf32>
  %2 = stablehlo.add %1, %y : tensor<4xf32>
  func.return %2: tensor<4xf32>
}

Computation inputs:
        alpha:f32[] 3.14
        x:f32[4] {1, 2, 3, 4}
        y:f32[4] {10.5, 20.5, 30.5, 40.5}
Computation output: f32[4] {13.64, 26.78, 39.920002, 53.06}
[       OK ] StableHloAxpyTest.LoadAndRunCpuExecutable (264 ms)
[----------] 1 test from StableHloAxpyTest (264 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (264 ms total)
[  PASSED  ] 1 test.
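The example module implements the classic axpy computation, `out = alpha * x + y`, applied element-wise. A minimal pure-Python sketch (not part of the repo; just a reference for the log above) reproduces the expected values:

```python
# Reference implementation of the axpy computation from the example
# StableHLO module: broadcast alpha, multiply by x, add y.
def axpy(alpha, x, y):
    return [alpha * xi + yi for xi, yi in zip(x, y)]

result = axpy(3.14, [1, 2, 3, 4], [10.5, 20.5, 30.5, 40.5])
print([round(v, 2) for v in result])  # [13.64, 26.78, 39.92, 53.06]
```

The small difference in the last digits of the test log (e.g. `39.920002`) comes from XLA computing in 32-bit floats, while Python uses 64-bit doubles.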

Contacts

  • For questions, contact Thea Lamkin - thealamkin at google

Resources

Code of Conduct

While under TensorFlow governance, all community spaces for SIG OpenXLA are subject to the TensorFlow Code of Conduct.


License: Apache License 2.0


Languages

C++ 83.7% · MLIR 8.1% · Starlark 6.6% · Python 0.7% · C 0.4% · Smarty 0.3% · CMake 0.2% · Shell, SourcePawn, LLVM, Cython, Batchfile &lt;0.1% each