snadampal's repositories
arm64-python-wheel-tester
Tests that Python wheels import correctly on Graviton2
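The core of such a test can be sketched in a few lines of plain Python (the repository's actual harness is more involved; the function name and the stdlib stand-in module below are illustrative, not the project's API):

```python
import importlib
import platform

def check_wheel_import(module_name):
    """Return (ok, detail): whether module_name imports on this machine.

    On Graviton instances, platform.machine() reports 'aarch64', so a
    failure here typically points at a wheel missing an arm64 build.
    """
    arch = platform.machine()
    try:
        importlib.import_module(module_name)
        return True, f"{module_name} imported on {arch}"
    except ImportError as exc:
        return False, f"{module_name} failed on {arch}: {exc}"

# 'json' is a stdlib stand-in for a package installed from a wheel.
ok, detail = check_wheel_import("json")
```

A real tester would loop this over a list of popular wheels inside a fresh virtual environment.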
aws-graviton-getting-started
This document is meant to help new users start using the Arm-based AWS Graviton and Graviton2 processors, which power the 6th generation of Amazon EC2 instances (C6g[d], M6g[d], R6g[d], T4g, X2gd, C6gn)
benchmark
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
builder
Continuous builder and binary build scripts for pytorch
cpuinfo
CPU INFOrmation library (x86/x86-64/ARM/ARM64, Linux/Windows/Android/macOS/iOS)
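cpuinfo itself is a C library; as a rough illustration of the kind of facts it exposes, Python's standard library can report the architecture, OS, and logical core count (the names below are stdlib calls, not cpuinfo's API):

```python
import os
import platform

def basic_cpu_info():
    """Collect coarse CPU facts available from the Python stdlib.

    The cpuinfo C library goes much deeper: cache topology,
    microarchitecture, and ISA feature flags (e.g. NEON, AVX).
    """
    return {
        "machine": platform.machine(),   # e.g. 'x86_64' or 'aarch64'
        "system": platform.system(),     # e.g. 'Linux', 'Darwin', 'Windows'
        "logical_cpus": os.cpu_count(),  # number of logical cores
    }

info = basic_cpu_info()
```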
deep-learning-containers
AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet.
djl
An Engine-Agnostic Deep Learning Framework in Java
intel-extension-for-pytorch
A Python package for extending the official PyTorch that makes it easier to obtain performance on Intel platforms
java
Java bindings for TensorFlow
javacpp-presets
The missing Java distribution of native C++ libraries
MNN
MNN is a blazing-fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
mxnet
Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more
oneDNN
oneAPI Deep Neural Network Library (oneDNN)
OpenBLAS
OpenBLAS is an optimized BLAS library based on the GotoBLAS2 1.13 BSD version.
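The workhorse routine a BLAS library optimizes is GEMM (matrix multiply, C = A·B). A naive pure-Python reference version shows the operation that OpenBLAS implements with hand-tuned assembly kernels (this sketch is for illustration only and is orders of magnitude slower):

```python
def matmul(a, b):
    """Naive reference GEMM: c[i][j] = sum_k a[i][k] * b[k][j].

    OpenBLAS ships heavily optimized, architecture-specific kernels
    (sgemm/dgemm) for exactly this operation.
    """
    n, k, m = len(a), len(b), len(b[0])
    assert len(a[0]) == k, "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

c = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# c == [[19, 22], [43, 50]]
```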
llama.cpp
Port of Facebook's LLaMA model in C/C++
onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
openblas-feedstock
A conda-smithy repository for openblas.
pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
serving
A flexible, high-performance serving system for machine learning models
tensorflow
An Open Source Machine Learning Framework for Everyone
Tool-Solutions
Tutorials & examples for Arm software development tools.
tutorials
PyTorch tutorials.
xla
Enabling PyTorch on Google TPU
XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web