
ProGraML: Program Graphs for Machine Learning

ProGraML is a representation for programs as input to a machine learning model.

Key features are:

  • Expressiveness: We represent programs as graphs, capturing all of the control, data, and call relations. Each node in the graph represents an instruction, variable, or constant, and edges are positional such that non-commutative operations can be differentiated.
  • Portability: ProGraML is derived from compiler IRs, making it independent of the source language (e.g. we have trained models to reason across five different source languages at a time). It is easy to target new IRs (we currently support LLVM and XLA).
  • Extensibility: Features and labels can easily be added at the whole-program level, per-instruction level, or for individual relations.

Getting Started

To get stuck in and play around with our graph representation, visit:

Program Explorer

Or if papers are more your ☕, have a read of ours:

Preprint

Constructing the ProGraML Representation

Here's a little example of producing the ProGraML representation for a simple recursive Fibonacci implementation in C.

Step 1: Compiler IR

We start by lowering the program to a compiler IR. In this case, we'll use LLVM-IR. This can be done using: clang -emit-llvm -S -O3 fib.c.
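
For concreteness, here's a sketch of this step end-to-end. The fib.c below is an illustrative recursive Fibonacci of our own (the exact source behind the figures is an assumption), written with a switch so the branching shows up in the next step:

$ cat > fib.c <<'EOF'
/* Illustrative recursive Fibonacci; not necessarily the exact
 * source used for the figures in this README. */
int fib(int n) {
  switch (n) {
    case 0: return 0;
    case 1: return 1;
    default: return fib(n - 1) + fib(n - 2);
  }
}
EOF
$ clang -emit-llvm -S -O3 fib.c   # writes the LLVM-IR to fib.ll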

Step 2: Control-flow

We begin building a graph by constructing a full-flow graph of the program. In a full-flow graph, every instruction is a node and the edges are control-flow. Note that edges are positional so that we can differentiate the branching control flow in that switch instruction.

Step 3: Data-flow

Then we add a graph node for every variable and constant. In the drawing above, the diamonds are constants and the variables are ovals. We add data-flow edges to describe the relations between constants and the instructions that use them, and between variables and the instructions that define or use them. Like control edges, data edges have positions; for data edges, the position encodes the order of a data element in the instruction's operand list.

Step 4: Call graph

Finally, we add call edges (green) from callsites to the function entry instruction, and return edges from function exits to the callsite. Since this is a graph of a recursive function, the callsites refer back to the entry of the function (the switch). The external node is used to represent a call from an external site.

The process described above can be run locally using our clang2graph and graph2dot tools:

$ clang2graph -O3 fib.c | graph2dot
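
Since graph2dot, as its name suggests, emits Graphviz dot, the pipeline composes with the standard Graphviz tools; for example, to render the graph as an image (assuming Graphviz is installed):

$ clang2graph -O3 fib.c | graph2dot | dot -Tpng -o fib.png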

Datasets

Please see this doc for download links for our publicly available datasets of LLVM-IRs, ProGraML graphs, and data flow analysis labels.

Running the code

Requirements

  • macOS ≥ 10.15 or GNU/Linux (we recommend Ubuntu Linux ≥ 18.04).
  • bazel ≥ 2.0
  • Python ≥ 3.6
  • (Optional) NVIDIA GPU with CUDA drivers for TensorFlow and PyTorch
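
A quick way to sanity-check your toolchain against the list above (these are the standard command names; adjust for your platform):

$ bazel --version    # wants ≥ 2.0
$ python3 --version  # wants ≥ 3.6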

Test that you have everything prepared by building and running the full test suite:

$ bazel test //programl/...

Command-line tools

In the manner of Unix Zen, creating and manipulating ProGraML graphs is done using command-line tools which act as filters, reading in graphs from stdin and emitting graphs to stdout. The structure for graphs is described through a series of protocol buffers.

Build and install the command line tools to ~/.local/opt/programl (or a directory of your choice) using:

$ bazel run -c opt //programl:install ~/.local/opt/programl

Then to use them, append the following to your ~/.bashrc:

export PATH=~/.local/opt/programl/bin:$PATH
export LD_LIBRARY_PATH=~/.local/opt/programl/lib:$LD_LIBRARY_PATH
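
After reloading your shell configuration, the tools used earlier should resolve from any directory:

$ source ~/.bashrc
$ which clang2graph graph2dot   # should print paths under ~/.local/opt/programl/bin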

Dataflow experiments

Download and unpack our dataflow dataset, then train and evaluate a graph neural network model using:

bazel run //programl/task/dataflow:train_ggnn -- \
    --analysis reachability \
    --path=$HOME/programl

where --analysis is the name of the analysis to evaluate and --path is the root of the unpacked dataset. There are many options for controlling the behavior of the experiment; see --helpfull for the full list. Some useful ones include (there's an example invocation after this list):

  • --batch_size controls the number of nodes in each batch of graphs.
  • --layer_timesteps defines the layers of the GGNN model, and the number of timesteps used for each.
  • --learning_rate sets the initial learning rate of the optimizer.
  • --lr_decay_rate sets the rate at which the learning rate decays.
  • --lr_decay_steps sets the number of gradient steps after which the learning rate is decayed.
  • --train_graph_counts lists the number of graphs to train on between runs of the validation set.
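
For example, an invocation combining several of the flags above (the analysis name and hyperparameter values here are illustrative choices for this sketch, not recommendations):

bazel run //programl/task/dataflow:train_ggnn -- \
    --analysis liveness \
    --path=$HOME/programl \
    --batch_size=10000 \
    --learning_rate=0.001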

Using this project as a dependency

If you are using bazel you can add ProGraML as an external dependency. Add to your WORKSPACE file:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name="programl",
    strip_prefix="programl-<stable-commit>",
    urls=["https://github.com/ChrisCummins/ProGraML/archive/<stable-commit>.tar.gz"],
)

# === Begin ProGraML dependencies ===
<WORKSPACE dependencies>
# === End ProGraML dependencies ===

Where <WORKSPACE dependencies> is the delimited block of code in @programl//:WORKSPACE (this is an unfortunately clumsy workaround for the lack of recursive workspace support).

Then in your BUILD file:

cc_library(
    name = "mylib",
    srcs = ["mylib.cc"],
    deps = [
        "@programl//programl/ir/llvm",
    ],
)

py_binary(
    name = "myscript",
    srcs = ["myscript.py"],
    deps = [
        "@programl//programl/ir/llvm/py:llvm",
    ],
)
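
With those targets declared, a regular build pulls in ProGraML and its transitive dependencies (mylib and myscript are the hypothetical targets from the snippets above, assumed to live at the workspace root):

$ bazel build //:mylib //:myscript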

Acknowledgements

Made with ❤️️ by Chris Cummins and Zach Fisches, with help from folks at the University of Edinburgh and ETH Zurich: Tal Ben-Nun, Torsten Hoefler, Hugh Leather, and Michael O'Boyle.

Funding sources: HiPEAC Travel Grant.
