xiebaiyuan / paddle-tvm

Open deep learning compiler stack for cpu, gpu and specialized accelerators

Home Page: https://tvm.apache.org/


Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes


Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.

License

© Contributors. Licensed under the Apache-2.0 license.

How to compile a PaddlePaddle model with TVM

import paddle
paddle.enable_static()
from tvm import relay
import tvm
import numpy as np

# Load the saved PaddlePaddle inference model
place = paddle.CPUPlace()
exe = paddle.static.Executor(place)
[prog, feeds, outs] = paddle.static.load_inference_model('model/inference', exe)

# Convert the Paddle program to TVM Relay IR (module and parameters)
mod, params = relay.frontend.from_paddle(prog)

# Create a graph executor for the converted module on CPU with the LLVM target
with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), 'llvm')

# Run inference
input_data = np.random.rand(1, 3, 224, 224).astype('float32')
tvm_outputs = intrp.evaluate()(tvm.nd.array(input_data), **params).asnumpy()
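
Beyond the interpreter-style executor above, the same Relay module can also be compiled into a deployable artifact with relay.build and run through the graph executor. The snippet below is a minimal sketch that reuses the mod, params, and feeds variables from the example above; the target, opt_level, input shape, and output file name are illustrative and depend on your model and deployment environment.

from tvm.contrib import graph_executor

# Compile the Relay module into a deployable module for a CPU (LLVM) target
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Export the compiled artifact for later deployment (file name is illustrative)
lib.export_library("paddle_model.so")

# Run inference through the graph executor
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
input_data = np.random.rand(1, 3, 224, 224).astype('float32')  # input shape depends on the model
module.set_input(feeds[0], tvm.nd.array(input_data))  # feeds[0]: input name returned by load_inference_model
module.run()
tvm_output = module.get_output(0).asnumpy()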

