Maheshwari Natarajan (mahinlma)


0 followers · 0 following

Company: Zoho Corp

Location: India


Maheshwari Natarajan's repositories

Accelerating-CNN-with-FPGA

This project accelerates CNN computation with an FPGA, achieving more than a 50x speed-up compared with a CPU.

Language: C++ · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

Alveo-PYNQ

Introductory examples for using PYNQ with Alveo

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0

audio

Data manipulation and transformation for audio signal processing, powered by PyTorch

Language: Python · License: BSD-2-Clause · Stargazers: 0 · Issues: 0
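For orientation, a minimal sketch of what this library (torchaudio) does: load a waveform and compute a mel spectrogram. The file name "clip.wav" is a placeholder, not something from this repository.

import torchaudio
import torchaudio.transforms as T

# Load an audio file into a (channels, samples) float tensor plus its sample rate.
waveform, sample_rate = torchaudio.load("clip.wav")

# Transforms are torch.nn.Modules and can be composed or placed on GPU.
mel = T.MelSpectrogram(sample_rate=sample_rate, n_mels=64)(waveform)
print(waveform.shape, mel.shape)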

benchmark

TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 0

blinky

Example LED blinking project for your FPGA dev board of choice

Language: Verilog · License: MIT · Stargazers: 0 · Issues: 0

brevitas

Quantization-aware training in PyTorch

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0
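A minimal quantization-aware training sketch using Brevitas layer drop-ins; the keyword arguments (e.g. weight_bit_width) may differ between Brevitas releases, so treat this as an illustration rather than the repository's own example.

import torch
import torch.nn as nn
import brevitas.nn as qnn

class QuantNetStub(nn.Module):
    def __init__(self):
        super().__init__()
        # Quantized counterparts of standard PyTorch layers.
        self.conv = qnn.QuantConv2d(1, 8, kernel_size=3, weight_bit_width=4)
        self.relu = qnn.QuantReLU(bit_width=4)
        self.fc = qnn.QuantLinear(8 * 26 * 26, 10, bias=True, weight_bit_width=4)

    def forward(self, x):
        x = self.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = QuantNetStub()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Training looks exactly like ordinary PyTorch training; the fake
# quantization happens inside the Brevitas layers.
out = model(torch.randn(4, 1, 28, 28))
loss = nn.functional.cross_entropy(out, torch.randint(0, 10, (4,)))
loss.backward()
opt.step()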

metrics

Machine learning metrics for distributed, scalable PyTorch applications.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
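For context, a minimal TorchMetrics sketch; the task/num_classes arguments follow recent torchmetrics releases and may differ in older versions.

import torch
import torchmetrics

acc = torchmetrics.Accuracy(task="multiclass", num_classes=3)
preds = torch.tensor([0, 2, 1, 2])
target = torch.tensor([0, 1, 1, 2])

# update() can be called once per batch; compute() aggregates across batches
# (and across processes in distributed runs).
acc.update(preds, target)
print(acc.compute())  # tensor(0.7500)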

examples-1

TensorFlow examples

License: Apache-2.0 · Stargazers: 0 · Issues: 0

FFmpeg

Mirror of git://source.ffmpeg.org/ffmpeg.git

License: NOASSERTION · Stargazers: 0 · Issues: 0

FPGA-Devcloud

Get started using Intel® FPGA tools on the DevCloud with tutorials, workshops, advanced courses, and sample projects built specifically for students, researchers, and developers. See the official Intel® FPGA DevCloud website for more.

Language: Shell · Stargazers: 0 · Issues: 0

mlir

"Multi-Level Intermediate Representation" Compiler Infrastructure

License: Apache-2.0 · Stargazers: 0 · Issues: 0

netron

Visualizer for neural network, deep learning, and machine learning models

License: MIT · Stargazers: 0 · Issues: 0
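Netron also ships a small Python package for launching the viewer locally; a minimal sketch, with "model.onnx" as a placeholder path:

import netron

# Starts a local web server and opens the model visualization in a browser.
netron.start("model.onnx")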

OLive

OLive (ONNX Runtime Go Live) is a Python package that automates the process of accelerating models with ONNX Runtime (ORT). It has two parts: model conversion to ONNX with correctness checking, and automatic performance tuning with ORT. Users can run both through a single pipeline or run them independently as needed.

License: MIT · Stargazers: 0 · Issues: 0

oneAPI-samples

Samples for Intel oneAPI toolkits

License: MIT · Stargazers: 0 · Issues: 0

onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Language: C++ · License: MIT · Stargazers: 0 · Issues: 0
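A minimal inference sketch with the onnxruntime Python API; the model path and input shape are placeholders for an actual exported model.

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

# run(None, ...) returns all model outputs as a list of numpy arrays.
outputs = sess.run(None, {input_name: dummy})
print(outputs[0].shape)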

open_model_zoo

Pre-trained Deep Learning models and samples (high quality and extremely fast)

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

OpenVINO-Custom-Layers

Tutorial for Using Custom Layers with OpenVINO (Intel Deep Learning Toolkit)

Stargazers: 0 · Issues: 0

optimum

🏎️ Accelerate training and inference of 🤗 Transformers with easy-to-use hardware optimization tools

License: Apache-2.0 · Stargazers: 0 · Issues: 0
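A minimal Optimum sketch that runs a Transformers checkpoint through ONNX Runtime; the checkpoint name is only an example, and the export=True argument follows recent optimum.onnxruntime releases (older versions used a different flag).

from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)

# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSequenceClassification.from_pretrained(name, export=True)

inputs = tokenizer("FPGA inference is fun", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(-1))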

PYNQ-experiment

This repository contains a "Hello World" introduction application to the Xilinx PYNQ framework.

Language: Jupyter Notebook · License: BSD-3-Clause · Stargazers: 0 · Issues: 0
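For orientation, a minimal PYNQ sketch as it would run on a PYNQ-enabled board; "base.bit" is a placeholder for whatever overlay the board image provides.

from pynq import Overlay

# Loads the bitstream onto the programmable logic and parses its metadata.
overlay = Overlay("base.bit")

# List the IP blocks the overlay exposes to Python.
print(list(overlay.ip_dict.keys()))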

pytorch_quantization

PyTorch model quantization, layer fusion, and optimization

Language: Jupyter Notebook · Stargazers: 0 · Issues: 0
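A minimal sketch of eager-mode post-training quantization with layer fusion in plain PyTorch (torch.ao.quantization, named torch.quantization in older releases); this illustrates the general workflow, not necessarily the notebook's exact code.

import torch
import torch.nn as nn
from torch.ao.quantization import (QuantStub, DeQuantStub, fuse_modules,
                                   get_default_qconfig, prepare, convert)

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks where tensors become quantized
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # marks where tensors become float again

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.bn(self.conv(x)))
        return self.dequant(x)

model = SmallNet().eval()

# Fuse Conv+BN+ReLU into a single module: faster and quantization-friendly.
model = fuse_modules(model, [["conv", "bn", "relu"]])
model.qconfig = get_default_qconfig("fbgemm")

prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))  # calibration pass with representative data
convert(model, inplace=True)
print(model)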

SDAccel_Examples

SDAccel Examples

License: NOASSERTION · Stargazers: 0 · Issues: 0

spooNN

FPGA-based neural network inference project with an end-to-end approach (from training to implementation to deployment)

License: AGPL-3.0 · Stargazers: 0 · Issues: 0

Vitis_Accel_Examples

Vitis_Accel_Examples

License: NOASSERTION · Stargazers: 0 · Issues: 0

Vitis_Libraries

Vitis Libraries

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 0